Any camera and optical system has its share of imperfections.  From dust and fingerprints on the lens to imperfections in the optical design to electrical noise in the sensor, they are always there and leave their mark in every single picture.  Many of these flaws are well understood by photographers, and any commercial review of camera bodies and lenses will look carefully at these issues and compare them to how other products perform.  For the most part, though, modern cameras are so good that casual photographers never even notice most of these problems.  The only one that really stands out is noise – that grainy speckling that shows up in low-light images.  Expensive cameras with fast lenses show traces of it on long exposures under moonless night skies, while camera phones show it on most shots taken indoors.

But under typical astrophotography conditions, there is very little light coming into the camera, which allows the flaws to stand out.  Fortunately, it is possible to digitally remove their effects, provided we know exactly what they are.  To measure these effects, we capture a series of calibration frames.

Bias/Offset

The most obvious flaw, especially in cheaper cameras, is noise.  All electrical devices carry a certain level of noise caused by the random motion of individual electrons within their conductive elements.  This signal cannot be characterised, because it is truly random.  There is, however, another source of spurious signal, and it comes from the characteristics of the electronics within the camera sensor.  If we amplify this signal, we find that it forms a fixed pattern, the result of tiny quality variations in the manufacturing process: each pixel reads out at a very slightly different baseline level to its neighbours, and this variation does not change over time.  To record it, we put the camera’s lens cap on, set the shutter speed to the fastest possible setting, and capture a few hundred frames.  Why so many?  Because combining them all boosts the underlying characteristic pattern.  Each pixel of each image is added to the same pixel from every other image, and an average value is found; this average is stored, and the process is repeated for all the remaining pixels.  Because the electrical noise is perfectly random, it tends to cancel out if there are enough images, and what’s left is a map of the fixed characteristics of the camera sensor.  This map is called a Bias, or an Offset image, depending on who you speak to, and it will be used in the pre-processing of every other frame captured.
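For the programmatically inclined, here is a minimal sketch of that pixel-by-pixel averaging using NumPy and Astropy, assuming the bias frames have been saved as FITS files (the file names and frame count are hypothetical):

```python
# A rough sketch of building a master bias frame.
import numpy as np
from astropy.io import fits

# Load a few hundred fastest-shutter-speed exposures taken with the
# lens cap on (file names are hypothetical).
bias_frames = [fits.getdata(f"bias_{i:03d}.fits").astype(np.float64)
               for i in range(200)]

# Per-pixel mean across the whole stack: the random electrical noise
# averages away, leaving the fixed pattern of the sensor electronics.
master_bias = np.mean(np.stack(bias_frames), axis=0)

fits.writeto("master_bias.fits", master_bias, overwrite=True)
```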

[Image: Example of a Flat Field calibration frame, showing vignetting and dust particles. Image credit: Allen Versfeld]

Flat

The second major source of systematic error is optical in nature.  Everything that the light touches on its way to the sensor has an effect on the signal, and this too can be mapped.  The lens (or telescope) has a characteristic set of distortions, and dust on the sensor (or further up the optical train) leaves tiny out-of-focus shadows.  These effects are too faint to be obvious in regular terrestrial shots taken in daylight, but stand out clearly in astronomical images.  To map them, we capture an image of a featureless, uniformly lit surface.  Some people use a sheet of white paper under sunlight, some use the twilight sky near the zenith, and others use specially constructed light-boxes suspended in front of the lens.  The goal is to get a picture that is as perfectly, uniformly smooth as possible.

When this image is carefully analysed, with suitably stretched contrast levels, the flaws in the optical system become painfully obvious.  Dust particles appear as donut-shaped shadows, while the inherent failings of the lens design show as light and dark regions.  To reduce the effect of other noise sources, we again capture a number of frames and combine them, and the resulting master image is called a Flat Field.  Unlike the Bias frame, however, it must be created anew every time the camera is used, because the flaws it maps change from session to session.  If I set up my rig tonight, there might be a few more specks of dust on the sensor, the camera might be mounted at a different orientation on the telescope, or the lens might be set to a different focal length, leaving the elements in different positions relative to each other.  The flaws can even shift from hour to hour as the temperature falls and the glass contracts.  If we change the configuration of the optical train in any way, by changing a lens, swapping a filter, or even adjusting the aperture, we have to create a fresh set of flat field images for that particular configuration.  Fortunately, we don’t need a large number of flat field frames to create a good master.

Dark

Our final calibration image goes back to the sensor.  Because camera sensors are made in bulk, there are always quality variations.  Even the most expensive cameras will have a few faulty pixels, which show up as either “dead” (always black) or “hot” (oversensitive, appearing in a dark image as points of light).  As with the other flaws described on this page, they aren’t usually obvious in terrestrial photography, but show up quite clearly against the dark sky background.  Hot pixels are fairly static, but their intensity varies with conditions, becoming more and more obvious as the ISO setting and sensor temperature rise.  They also tend to change over time – hot pixels heal, healthy pixels go faulty.  They are a normal fact of life in digital photography, but it can be very irritating to have a scattering of spurious “stars” showing up where there should be only darkness.

To map out the hot pixels, we capture a series of dark frames for each object that we image.  Once an imaging run is complete, we leave all the camera settings exactly as they are, then replace the lens cap and capture another batch of frames.  More frames are better, but personally I usually stop at 11.  These frames are combined in the same way as before to remove random electrical noise, and the resulting image is our Master Dark.
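As a sketch in the same style as before (jumping ahead slightly to the ordering explained in the next section, where the master bias is removed from each dark before combining; file names are again hypothetical):

```python
# A rough sketch of building a master dark frame.
import numpy as np
from astropy.io import fits

master_bias = fits.getdata("master_bias.fits")

# Subtract the bias pattern from each dark frame before averaging,
# so that the master dark maps only the hot pixels.
dark_frames = [fits.getdata(f"dark_{i:02d}.fits").astype(np.float64) - master_bias
               for i in range(11)]   # I usually stop at 11

master_dark = np.mean(np.stack(dark_frames), axis=0)
fits.writeto("master_dark.fits", master_dark, overwrite=True)
```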

Putting It All Together

The process of calibrating your images is a little more complex than I’ve described so far.  The flaws mapped affect ALL images, including the calibration frames themselves, so we have to proceed in a certain order.  Fortunately, the Bias image is built from such short exposures that there is no time for hot pixels to appear, and since it is captured in total darkness the optical flaws are irrelevant.  So we create our bias image first, and then our master dark: we subtract the master bias from each dark frame, then combine them as an arithmetic mean (the value of each pixel is added to its corresponding partners from all the images, and divided by the number of images).  What remains is a map of the hot pixels.  Now that we have our master dark and bias frames, we can subtract both of them from each individual flat field frame, ensuring that we are mapping nothing but the optical flaws.
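Continuing the sketch, the calibrated flat frames are then combined into a master flat.  One step worth adding (my addition, not described above) is to normalise the master flat by its own mean, so that dividing a photograph by it corrects the optical flaws without changing the overall brightness:

```python
# A rough sketch of building a master flat frame, assuming the master
# bias and dark built earlier (file names and count are hypothetical).
import numpy as np
from astropy.io import fits

master_bias = fits.getdata("master_bias.fits")
master_dark = fits.getdata("master_dark.fits")

# Remove the sensor's fixed pattern and hot pixels from each flat,
# leaving only the optical flaws (vignetting, dust shadows).
flat_frames = [fits.getdata(f"flat_{i:02d}.fits").astype(np.float64)
               - master_bias - master_dark
               for i in range(15)]

master_flat = np.mean(np.stack(flat_frames), axis=0)

# Normalise to a mean of 1.0 so the flat describes relative
# sensitivity across the frame.
master_flat /= np.mean(master_flat)

fits.writeto("master_flat.fits", master_flat, overwrite=True)
```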

And now that we have our three calibration frames (Bias, Dark and Flat), we are ready to pre-process our actual photographs.
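To make that last step concrete, here is a sketch of how the three masters might be applied to a single light frame (real pre-processing software also handles details such as matching exposure times and temperatures; the file names are hypothetical):

```python
# A rough sketch of calibrating one light frame with the three masters.
import numpy as np
from astropy.io import fits

master_bias = fits.getdata("master_bias.fits")
master_dark = fits.getdata("master_dark.fits")
master_flat = fits.getdata("master_flat.fits")  # normalised to mean 1.0

light = fits.getdata("light_001.fits").astype(np.float64)

# Subtract the fixed sensor pattern and the hot pixels, then divide
# out the optical flaws mapped by the flat field.
calibrated = (light - master_bias - master_dark) / master_flat

fits.writeto("light_001_cal.fits", calibrated, overwrite=True)
```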

