How Digital Cameras Work

Digital cameras have a sensor that is composed of a grid of tiny light-sensing elements called pixels. There are millions of pixels in a DSLR sensor.

The sensor collects photons of light and turns them into electrons. These electrons make up an electrical charge that is measured and converted into a number. From this point on, our images are made up completely of numbers, or digits. This is why they are called digital cameras.
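To make this chain concrete, here is a toy sketch of one pixel's readout in Python. The quantum efficiency, gain, and bit depth are made-up example values, not the specs of any real camera:

    # Toy model of one pixel's readout: photons -> electrons -> a digital number.
    # The values below are illustrative assumptions, not real camera specs.

    photons_collected = 12000      # photons striking the pixel during the exposure
    quantum_efficiency = 0.40      # fraction of photons converted to electrons
    gain = 0.25                    # digital numbers (ADU) per electron
    bit_depth = 14

    electrons = photons_collected * quantum_efficiency
    adu = int(electrons * gain)            # the ADC quantizes the charge to an integer
    adu = min(adu, 2**bit_depth - 1)       # values above full scale clip ("blown out")

    print(f"{photons_collected} photons -> {electrons:.0f} electrons -> level {adu}")
    # 12000 photons -> 4800 electrons -> level 1200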

Computers are very good at handling numbers, which is why it is so easy to change those numbers, and with them the way the image looks, in a computer.

Dynamic Range

Dynamic range is the range of brightness from dark to light. Human vision can accommodate a very large range of brightness. Cameras can record detail over a much smaller dynamic range. Scenes in the real world frequently have more dynamic range than a camera can record, and certain astronomical objects, such as the Orion Nebula, can also have a dynamic range greater than a digital camera can record in a single exposure.
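One common way to quantify a sensor's dynamic range is the ratio of the largest signal a pixel can hold (its full-well capacity) to the noise floor (read noise), expressed in stops, where each stop is a doubling of brightness. A minimal sketch, using hypothetical example numbers:

    import math

    # Dynamic range as the ratio of full-well capacity to read noise,
    # expressed in stops (doublings). These are hypothetical example values,
    # not the specs of any particular camera.
    full_well_electrons = 50000
    read_noise_electrons = 5

    stops = math.log2(full_well_electrons / read_noise_electrons)
    print(f"Dynamic range: about {stops:.1f} stops")   # about 13.3 stops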

Bit Depth

We perceive the real world as a series of continuous tones. For example, we see the twilight sky in the west changing smoothly and continuously into the dark sky of night rising in the east.

Digital cameras, however, represent these continuous tones of the real world as a series of discrete steps. The number of photons recorded represents the brightness of a particular tone, and this is turned into a digital number that labels that brightness.

The bit depth of a camera is the number of steps of tone into which the camera's dynamic range is divided.

Mathematically, bit depth is specified as a power of 2. Eight bits is 2^8, or 2 raised to the 8th power (2x2x2x2x2x2x2x2), and equals 256 steps, or levels, of tone.

12 bits equals 2^12, or 2 raised to the 12th power, which is 4,096 levels. 16 bits is 2^16, or 2 raised to the 16th power, which is 65,536 levels.
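A quick way to verify these numbers is to compute them directly:

    # Tonal levels for a given bit depth: 2 raised to the number of bits.
    for bits in (8, 12, 14, 16):
        print(f"{bits}-bit: {2**bits:,} levels")
    # 8-bit: 256 levels
    # 12-bit: 4,096 levels
    # 14-bit: 16,384 levels
    # 16-bit: 65,536 levels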

The human visual system can be fooled into thinking that discrete steps of tone are continuous. It takes somewhere between 128 and 256 steps to do this. DSLR cameras usually work in 12 or 14 bits of bit depth. This means they divide the dynamic range they can record into 4096 or 16,384 steps. This may seem like a lot, but it gives the camera more data to work with when it does its internal adjustments to the image before converting the data down to 8 bits when a JPEG file is written.
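As a rough illustration of that final conversion, here is one simple way to map 14-bit levels down to 8 bits with linear scaling. Real cameras apply tone curves, white balance, and other adjustments first; this sketch only shows how many levels get thrown away:

    # A minimal sketch of reducing 14-bit data to 8 bits by linear scaling.
    # Real cameras do more processing before this step; this only shows
    # the loss of tonal levels.

    def to_8bit(level_14bit: int) -> int:
        """Map a 14-bit level (0-16383) to an 8-bit level (0-255)."""
        return level_14bit * 255 // 16383

    print(to_8bit(0), to_8bit(8192), to_8bit(16383))   # 0 127 255
    # 64 adjacent 14-bit levels collapse into each single 8-bit level.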

Raw files preserve the camera's full bit depth, which is why it is better to shoot Raw for astronomical images. Most deep-sky subjects are very faint, and having more steps of tone in these faint areas produces better pictures when the data is stretched to increase the contrast. Too few steps of tone, stretched apart, can leave visible gaps in the tones of the image. This results in posterization, where a continuous tone breaks down into visible bands of discrete steps.
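The sketch below, using assumed example ranges, shows why. Stretching the 16 darkest levels of an 8-bit image across the full output range leaves jumps of 17 between adjacent tones, while the roughly 1,024 levels a 14-bit image has in that same brightness range fill in every output tone smoothly:

    # Posterization sketch: stretch the faintest tones to full contrast.
    # The 16 darkest levels of an 8-bit image map to only 16 output tones,
    # separated by gaps of 17; the ~1,024 levels a 14-bit image has in the
    # same brightness range fill in all 256 output tones smoothly.

    stretched_8bit = [v * 255 // 15 for v in range(16)]
    print(stretched_8bit)             # [0, 17, 34, ..., 238, 255] -- visible gaps

    stretched_14bit = {v * 255 // 1023 for v in range(1024)}
    print(len(stretched_14bit))       # 256 -- every output tone is present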

Color from a Black and White Sensor

The sensor in a DSLR camera can really only record black and white images. A clever trick is used to create color. Red, green and blue color filters are placed over individual pixels in a pattern called a Bayer array. Each pixel has only one colored filter over it, but full color can be re-constructed by examining the color of a pixel's neighbors.

The black-and-white Bayer image contains all of the information needed to create the color image that is constructed from it. (The image is enlarged considerably here so the Bayer array is visible.)

This trick can be accomplished because of the way the eye and brain sense color. Each digital color image is made up of three individual parts, or channels, one each of red, green and blue. When these are put together, they make up a full color image.

Bayer Pattern
The Bayer array is named after its inventor, Bryce Bayer, a Kodak engineer. It consists of a grid, or mosaic, in which alternating rows are made up of red- and green-filtered pixels, or blue- and green-filtered pixels. This produces a final mosaic with twice as many green pixels as either red or blue. Bayer found that human perception is more sensitive to luminance (brightness) information in the green portion of the spectrum. Using more green pixels in the array improved luminance resolution, and this meant more perceived sharpness in the final image.
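Here is a small sketch of the RGGB layout to make the pattern concrete (an illustrative model, not any particular camera's implementation):

    # Sketch of the Bayer (RGGB) mosaic: each pixel records only one color.
    # The camera later interpolates the two missing colors at every pixel
    # from its neighbors (demosaicing) to build the full-color image.

    def bayer_color(row: int, col: int) -> str:
        if row % 2 == 0:
            return "R" if col % 2 == 0 else "G"
        return "G" if col % 2 == 0 else "B"

    for row in range(4):
        print(" ".join(bayer_color(row, col) for col in range(8)))
    # R G R G R G R G
    # G B G B G B G B
    # R G R G R G R G
    # G B G B G B G B

    counts = {"R": 0, "G": 0, "B": 0}
    for row in range(4):
        for col in range(4):
            counts[bayer_color(row, col)] += 1
    print(counts)   # {'R': 4, 'G': 8, 'B': 4} -- twice as many green pixels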

As you can see in any normal daytime color image taken with a DSLR camera, the Bayer array does a remarkably good job of creating accurate color from a black and white sensor.

Color from Emission Nebulas

Some nebulas are red or pink because they glow with the emission of hydrogen-alpha light, which lies in the deep-red part of the spectrum. Digital cameras are sensitive to this part of the spectrum, but human eyes are not very sensitive to it.

M8, the Lagoon Nebula, is a complex of hydrogen-alpha emission nebulosity.
To make cameras record color more like the way humans see it, camera engineers place a special filter in front of the sensor in most DSLR cameras that blocks most of this deep-red hydrogen-alpha light.

This makes taking a picture of these types of nebulas more difficult with a standard DSLR camera. It's not impossible to take pictures of the brightest emission nebulas like the Orion Nebula or the Lagoon Nebula, but it is much harder to take pictures of the faint ones.

This special filter in most DSLRs is called a long-wavelength filter because the hydrogen-alpha wavelengths are on the long-wavelength end of the spectrum of visible light.

Canon came out with special cameras, the 20Da and 60Da, that were made with a modified long-wavelength filter that passed more of these red wavelengths, making it easier to take pictures of these red nebulas. The "a" in the cameras' names stood for "astrophotography".

Once you get seriously into more advanced astrophotography and you want to shoot faint red emission nebulas, you may want to get your camera modified by replacing the stock long-wavelength filter with one that passes most of the red hydrogen-alpha wavelength.

You can try to do it yourself if you are very handy with small, delicate electronic parts, or you can have it done by one of several individuals who offer this service, such as Hap Griffin or Gary Honis. Gary also offers instructions on how to do it yourself, but it is not for the faint of heart.

Note that modifying your camera voids the manufacturer's warranty!

How Digital Cameras Work - The Bottom Line

The pixels in a DSLR sensor collect and store photons of light that are eventually counted and turned into the numbers that make up the digital image file.



