A word before I start: The number one feature everybody asks about a digital camera is “How many megapixels does it have?”. This refers to the number of pixels on the camera sensor, and defines the resolution of the image that comes out of the camera. In theory, more megapixels means a sharper image that can be blown up larger for prints. But consider that even a 1080p HD TV screen only has about two million pixels – it can only show a 2MP image. All those other pixels are wasted, unless you zoom in. And in practice, your lens probably isn’t good enough to use all those pixels properly anyway – zoom in to try to see individual pixels and you’ll find that the image breaks down and blurs long before you reach that point. There are benefits to using a high-resolution camera, but none of them apply under the extreme conditions of astrophotography – not even the super-high resolutions now being offered by some smartphones. You can totally ignore megapixels when buying a camera for astrophotography.
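The arithmetic behind that 2MP figure is easy to check. The 18MP sensor dimensions below are just illustrative numbers for a typical entry-level DSLR, not a claim about any particular model:

```python
# A 1080p HD TV screen is 1920 x 1080 pixels -- about two million in total.
tv_pixels = 1920 * 1080
print(tv_pixels)                  # 2073600, i.e. roughly 2 MP

# An illustrative 18 MP sensor (5184 x 3456, typical of an entry-level DSLR):
camera_pixels = 5184 * 3456
print(camera_pixels / tv_pixels)  # the image holds several screens' worth of pixels
```

So unless you zoom in, most of those 18 million pixels never reach your eyes.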
I feel safe saying that just about every human being on Earth has access to a digital camera. In many countries there are more mobile phones than there are people, and it is getting very hard to find a phone that doesn’t also have a camera built in. Now these cameras are not very good – in order to squeeze them into the tiny space available, the manufacturers have to use the tiniest sensors (usually only a few millimeters on a side) together with tiny fixed-focus lenses, which severely limits their performance in low-light conditions. Try taking a picture in a dark room to see what I mean, or try pointing up at a starry sky and snapping a few shots. The results will not be good. But if you have a telescope handy, and it is pointed at something bright, like the Moon or one of the five classical planets, you can simply hold the camera lens up to the eyepiece of the telescope and take a picture that way. It takes a bit of fiddling to get the phone in the right place to be able to see through the telescope, but once you get it right you’ll find that it does a pretty good job of capturing what you saw with your eye. It won’t win any prizes, but it will look pretty good on your Facebook wall! This type of astrophotography, by the way, is called “Eyepiece Projection”, since you use the telescope eyepiece to project the image directly into the camera lens.
Most middle-class families can do better, though, with their compact “point-and-shoot” cameras. These come in a wide range of prices, with features to match, and typical examples include Canon’s PowerShot, Fujifilm’s FinePix and Nikon’s CoolPix ranges. The bigger the lens at the front, and the higher the “Optical Zoom” goes, the better the camera will perform. Eyepiece projection works well with these cameras, although you might find that the bigger lens makes it harder to get the camera lined up correctly. Luckily, most telescope manufacturers sell little clamps that strap on to the side of the telescope and hold the camera in place. Once you’ve found the exact spot, tighten the clamps and take your pictures.
Compact cameras have the advantage over cameraphones that you can manually control settings like exposure length, aperture, and sensitivity. Setting the longest exposure (usually thirty seconds), highest ISO and lowest f/number will reveal a wealth of faint detail that you couldn’t see with your eyes, but the resulting images will probably be a lot noisier than you’re happy with. (Noise, incidentally, is that graininess that shows up in the darker parts of the image. Since astronomical images are usually mostly black background, noise can get really obvious and very irritating. In severe cases, it starts to look a bit like the static on a detuned old-fashioned TV.)
Back in the days of film, SLR cameras were defined by the arrangement of mirrors and prisms that allowed the photographer to look through the same lens that would be used to expose the film, as opposed to the cheaper cameras that had a separate viewfinder. Digital SLRs (DSLRs) use the same arrangement: a mirror sits between the lens and the sensor, redirecting the view up to the viewfinder, and flips upwards out of the way when you press the shutter release, making that distinctive clicking noise. But they have three other features that make them great for astrophotography: the lenses are removable, allowing you to customise the optics to whatever your needs are for a specific photograph; you have access to full manual control of ALL settings; and the sensor is much bigger. As with compact cameras, they come in a huge range of prices, with features to match. If you plan to use the camera only for astrophotography, then most of those features become irrelevant, and you can save a lot of money by choosing a cheaper model.
Astrophotography with a DSLR camera is a huge step forward from working with a compact camera. Even a cheap entry-level DSLR like my own Canon EOS 1100D is capable of some incredible images. I’ve been doing this for more than two years now, and I still haven’t gotten good enough that I can blame the camera for anything. Many of the beautiful astronomical images you see on APOD, or on magazine covers, were taken by amateur photographers with DSLR cameras. Largely this is because of the flexibility that comes from having full manual control of all settings, and from the ability to change lenses.
You can use the stock lens, open up the aperture to the lowest f/stop, set the shortest focal length and highest ISO to take an image of the sky, and you’ll get far better results than from the same experiment performed on a compact camera. This is because the much larger sensor translates to physically larger pixels, which can collect more photons. This in turn translates to a stronger signal which better overrides the electrical noise – what engineers call “improving the Signal-to-Noise Ratio (SNR)”. A high SNR means that you get more faint detail and less noise, and that’s key when you consider that astrophotography can almost be defined as the art of imaging extremely faint objects against a black background. DSLRs also tend to have a wider dynamic range (measured in bits, because the data comes off the sensor in the form of a digital signal), which gives you more room to play with when processing the image to tease out the finer details.
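Here’s a rough sketch of why bigger pixels help, assuming simple Poisson photon statistics plus a fixed read noise from the electronics. The photon counts and the read-noise figure are made-up illustrative values, not measurements from any particular camera:

```python
import math

def snr(photons, read_noise=5.0):
    """Approximate signal-to-noise ratio for one pixel.

    Shot noise follows Poisson statistics (the square root of the photon
    count); read noise is a fixed per-pixel contribution from the electronics.
    """
    return photons / math.sqrt(photons + read_noise ** 2)

# Suppose a small compact-camera pixel collects 200 photons in an exposure.
# A DSLR pixel with ~4x the area collects ~4x the photons from the same sky.
print(f"compact pixel SNR: {snr(200):.1f}")   # ~13.3
print(f"DSLR pixel SNR:    {snr(800):.1f}")   # ~27.9
```

Quadrupling the light collected roughly doubles the SNR – which is exactly the extra margin you need when the signal is a faint nebula and the background is black sky.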
Dedicated astrophotography cameras are generally called “CCD cameras”. There are two technologies used for digital camera sensors: CCD and CMOS. Both have been around for decades, and both are widely used in different camera applications. Traditionally, CCD had certain advantages that made it more desirable for astronomical and scientific work, while CMOS has tended to be the default choice in consumer equipment (digital cameras, webcams, etc). CCDs are seen as being more sensitive, with higher dynamic range, while CMOS is more rugged, uses less power, and doesn’t suffer from “blooming” when overexposed. But both technologies have advanced rapidly, so that outside of the most rigorous scientific environments there’s little benefit to choosing one over the other. This means that if you buy a “CCD camera” from your local telescope dealer, there’s a reasonable chance (especially if it’s a cheaper model) that it actually has a CMOS sensor. It’s a bit of a misnomer, in other words.
At first, CCD cameras seem a step backwards. They are tiny, have no viewfinder, and look a little like webcams that have lost their lens. Out of the box they are useless, and must be paired with an optical system and a computer before they will do anything. They are also usually monochrome, and expensive. So why would you buy one? Because, like a racing car built for the track, every component that is not essential to the core function of imaging the stars has been removed, and what’s left has been tuned for maximum performance. They are designed to do one single thing and do it very well.
CCDs are usually monochrome for one simple reason: extra sensitivity. The individual pixels on a sensor are monochrome. Normal cameras achieve colour by placing a grid of tiny filters over the pixels in a fixed pattern (known as a Bayer filter): each square of four pixels has one red-filtered pixel, one blue-filtered pixel, and two greens. This bias towards green is to mimic the human eye’s natural response to light: we’re most sensitive to green light, which is why green lasers appear so intensely bright compared to red lasers of the same power rating. The result of this filtering is that each pixel loses a large portion of the light that it would otherwise receive, since any light that isn’t the same colour as the filter is blocked. Monochrome cameras have no such filtering, and can therefore capture much brighter images with shorter exposure times. If you want colour (as is usually the case), you use a set of external filters: place a red filter over the camera, capture your image, repeat with green and blue filters, then combine the shots digitally to create a colour image.
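The RGGB pattern described above is easy to sketch. This little demo just tiles the pattern over a tiny pretend sensor and counts how the pixel sites are divided between the three colours:

```python
import numpy as np

# One 2x2 tile of the RGGB Bayer mosaic: one red, two greens, one blue.
tile = np.array([["R", "G"],
                 ["G", "B"]])

# Repeat the tile to cover a (tiny) 4x4 sensor patch.
pattern = np.tile(tile, (2, 2))
print(pattern)

# Each colour is sampled at only a fraction of the pixel sites --
# half of them for green, a quarter each for red and blue.
for colour in "RGB":
    print(colour, (pattern == colour).sum() / pattern.size)
```

So a colour sensor is, in effect, three sparse monochrome sensors sharing one chip – which is why a true monochrome sensor of the same size captures so much more light.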
That sounds like a tremendous pain, and it is. However, since you’re no longer merging four pixels into one, you get much higher resolution data. And rather than just mixing three colour images together, you can add a fourth image with no filters, to capture the fine details and brightness variations. Combining these four channels (Red, Green, Blue and Luminance) leads to spectacularly detailed and crisp images that a DSLR camera can only approximate… if you have the expertise and the software!
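Here is a minimal sketch of that LRGB combine in code, assuming the four frames are already aligned and normalised to the 0–1 range. Real processing software does considerably more (alignment, weighting, colour calibration), so treat this as the idea rather than a recipe:

```python
import numpy as np

def combine_lrgb(L, R, G, B):
    """Combine aligned luminance and colour frames (2-D arrays in [0, 1]).

    The R, G, B frames supply the colour ratios; the unfiltered L frame,
    which carries the fine detail, supplies the brightness.
    """
    rgb = np.stack([R, G, B], axis=-1)
    # Brightness implied by the colour frames alone, per pixel
    lum = rgb.mean(axis=-1, keepdims=True)
    # Rescale the colour so its brightness matches the luminance frame
    safe = np.where(lum > 0, lum, 1.0)
    scale = np.where(lum > 0, L[..., None] / safe, 0.0)
    return np.clip(rgb * scale, 0.0, 1.0)

# Demo on tiny synthetic frames: flat grey colour, brighter luminance.
L = np.full((2, 2), 0.8)
C = np.full((2, 2), 0.5)
print(combine_lrgb(L, C, C, C)[0, 0])   # brightness lifted to match L
```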
But the real power of this technique begins to show when you use narrowband filters. If you look at, say, an emission nebula, the bulk of the light comes either from the stars within the nebula or from the excited hydrogen gas of which it is made. Stars shine because they are hot, so their light covers the entire spectrum. The gas, on the other hand, emits light only in very specific colours, for reasons explained by quantum physics. This means that if you use the right narrowband filter, you can filter out everything EXCEPT the light of one particular substance, and so reveal details of the structure of the nebula that would otherwise have been hidden by all the other junk in the way.
Since there are several elements of interest in these clouds, we can take images through filters corresponding to different gases. One popular selection is Sulphur, Hydrogen and Oxygen (or to be precise, specific emission lines from these gases: SII, Ha and OIII). The resulting images will still appear grey, because the camera itself is monochrome, but if we then colour them red, green and blue respectively, we can mix them together to create a full colour image. That particular set is known as the Hubble Palette, since it is a standard combination used for so many of those beautiful images from the Hubble Space Telescope. Now purists will note that the resulting image does NOT represent the “true” appearance of the nebula, since it is highly filtered and uses “false colour”. However, if you’re going to treat the performance of the human eye as the only accurate gauge of how a camera should perform, then you have to abandon astrophotography altogether, since the entire field is based on revealing objects too small or faint to be seen any other way. In fact, there is a movement to replace the term “False Colour Image” with “Enhanced Colour”, since the old term suggests that the image is fake, when in fact it is merely a different representation: like a medical X-ray, we’re revealing what is actually there, and there is nothing false about it.
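The palette itself is nothing more than a channel mapping. Here is a bare-bones sketch, assuming three aligned monochrome frames; the simple linear stretch is my own placeholder – real processing uses carefully shaped stretching curves, calibration and star-colour correction:

```python
import numpy as np

def stretch(frame):
    """Linear stretch so a monochrome frame spans the full 0..1 range."""
    lo, hi = frame.min(), frame.max()
    return (frame - lo) / (hi - lo) if hi > lo else np.zeros_like(frame)

def hubble_palette(sii, ha, oiii):
    """Hubble Palette channel mapping: SII -> red, Ha -> green, OIII -> blue."""
    return np.stack([stretch(sii), stretch(ha), stretch(oiii)], axis=-1)
```

Feed it three grey narrowband frames and you get back one three-channel colour image – every red pixel in the result is really sulphur light, every green pixel hydrogen, every blue pixel oxygen.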