Camera Obscura: 5th century B.C.
Long before there was the camera, there was the camera obscura. Latin for “dark chamber,” these devices were darkened rooms or enclosed boxes with a tiny opening on one side.
The earliest record of the camera obscura principle dates to the 5th century B.C., when the Chinese philosopher Mozi noted that light passing through a small hole casts an inverted image. Aristotle later observed the same effect during a partial eclipse of the sun, when light entering a darkened room through a small opening produced an image on the opposite wall. A full account of how the phenomenon worked didn’t arrive until the 11th century, when the Arab scholar Alhazen described a working model.
In the 13th century, the camera obscura was used by astronomers to view the sun. In the 16th century, camera obscuras became an invaluable aid to artists, who used them to create drawings with perfect perspective and accurate detail. Portable camera obscuras were made for this purpose.
In Victorian times, much larger public camera obscuras became popular seaside attractions, where spying on courting couples became a popular pastime.
Photochemistry: 18th and 19th centuries
While the camera obscura allowed for the viewing of images in real time, several centuries passed before inventors stumbled upon a method for permanently preserving them using chemicals. A major breakthrough came in 1725, when the German professor Johann Heinrich Schulze found that silver salts darkened when exposed to light. Fascinated, Schulze cut letter stencils out of paper and placed them over a light-sensitive mixture of chalk and silver.
“Before long,” he recounted, “I found that the sun’s rays…wrote the words and sentences so accurately or distinctly on the chalk sediment, that many people…were led to attribute the result to all kinds of artifices.”
Others later built on Schulze’s research, and in 1827, a French inventor named Joseph Nicéphore Niépce used a camera obscura and a pewter plate coated with a light-sensitive material called Bitumen of Judea to capture and “fix” an image. His eight-hour-long exposure of the courtyard of his home is now considered the world’s first photograph.
Photography’s next giant leap came courtesy of Louis Daguerre, a French artist and inventor who partnered with Niépce in the late 1820s. In 1837, Daguerre discovered that exposing iodized silver plates to light left behind a faint image that could be developed using mercury fumes.
The new technique not only produced a sharper and more refined picture, but it also cut the exposure time down from several hours to around 10 or 20 minutes. Daguerre christened his new process the “Daguerreotype,” and in 1839, he agreed to make it public in exchange for a pension from the French government. After some tweaking to shorten the exposure process to less than a minute, his invention swept across the world and gave rise to a booming portrait industry, particularly in the United States.
Around the same time that “Daguerreotypomania” was taking hold, the British inventor William Henry Fox Talbot unveiled his own photographic process, the “Calotype” (also known as the talbotype), which used paper coated with silver iodide. The term calotype comes from the Greek kalos, “beautiful,” and tupos, “impression.”
This method traded the Daguerreotype’s metal plates for sheets of high-quality photosensitive paper. When exposed to light, the paper produced a latent image that could be developed and preserved by rinsing it with hyposulphite.
The results were slightly fuzzier than Daguerreotypes, but they offered one key advantage: ease of reproduction. Unlike Daguerreotypes, which only made one-off images, the Calotype allowed photographers to produce endless copies of a picture from a single negative. This process would later become one of the basic principles of photography.
The Wet-Collodion Process: 1851
Daguerreotypes and Calotypes were both rendered obsolete in 1851, after a sculptor named Frederick Scott Archer pioneered a new photographic method that combined crisp image quality with negatives that could be easily copied. Archer’s secret was a chemical called collodion, a medical dressing that also proved highly effective as a means for coating light-sensitive solutions onto glass plates. While these “wet plates” reduced exposure times to only a few seconds, using them was often quite the chore.
The plates had to be exposed and processed before the collodion mixture dried and hardened, so photographers were forced to travel with portable darkroom tents or wagons if they wanted to take pictures in the field. Despite this drawback, the wet-collodion process’s unparalleled quality and cheap cost made it an instant success. One of its most famous practitioners was Mathew Brady, who used wet plates to produce thousands of stunning battlefield photos during the Civil War.
Dry Plates: 1871-1878
For most of the 1800s, the panoply of noxious solutions and mixtures involved in using a camera made photography difficult for anyone without a working knowledge of chemistry.
That finally changed in the 1870s, when Richard L. Maddox and others perfected a new type of photographic plate that preserved silver salts in gelatin. Since they retained their light-sensitivity for long periods of time, these “dry” plates could be prepackaged and mass-produced, freeing photographers from the annoying task of prepping and developing their own wet plates on the fly. Dry plates also offered much quicker exposures, allowing cameras to more clearly capture moving objects. In the 1880s, photographer Eadweard Muybridge used dry plate cameras to conduct a series of famous studies of humans and animals in motion. His experiments have since been cited as a crucial step in the development of cinema.
Flexible Roll Film: 1884-1889
Photography didn’t truly become accessible to amateurs until the mid-1880s, when inventor George Eastman began producing film on rolls. Film was more lightweight and resilient than clunky glass plates, and the use of a roll allowed photographers to take multiple pictures in quick succession.
In 1888, Eastman used flexible film as the primary selling point of his first Kodak camera, a small, 100-exposure model that customers could use and then send back to the manufacturer to have their photos developed. Eastman’s camera was remarkably easy to use—he marketed it to Victorian shutterbugs under the slogan “You press the button, we do the rest”—but its coated paper film produced fairly low quality photos.
Eastman founded the Kodak Company in 1888, the same year his camera debuted. Film improved by leaps and bounds with the introduction of celluloid a year later, and it remained the standard medium of photography for nearly a century, until the advent of digital cameras.
Autochrome: 1907
The yearning for color photography was practically as old as the medium itself, but a viable method didn’t arrive until 1907. That was the year the French brothers Louis and Auguste Lumière—perhaps better known as early pioneers of cinema—began marketing an additive color process they dubbed “Autochrome.” The Lumières found the key to their invention in a most unlikely place: the potato.
Autochrome is an additive color “mosaic screen plate” process. The medium consists of a glass plate coated on one side with a random mosaic of microscopic grains of potato starch dyed red-orange, green, and blue-violet (an unusual but functional variant of the standard red, green, and blue additive colors) which act as color filters.
Autochrome would reign as the world’s most popular color film technique until 1935, when a more sophisticated color process arrived in the form of the Eastman Kodak Company’s legendary Kodachrome film.