M42 from my backyard
A few weeks ago, we had a spell of unseasonably clear weather, and I planned an evening to image the Crab Nebula in Taurus. I wasn’t particularly disciplined about it, though, and ended up pointing the telescope all over the sky, grabbing about 30 frames of M42, the great nebula in Orion, along the way. I processed it a few weeks later, and was pleasantly surprised at how popular it’s become on my various social media accounts. It’s even been picked as the cover image for a local astronomical society magazine, and they wanted to know how I went about creating this pretty picture. Hence, this article.
So first up, I set the telescope on its pier in the garden as the Sun was setting, and mounted my camera (Canon EOS 1100d – a neat little budget model DSLR that seems to be quite popular with other amateur astrophotographers). I focused as well as I could on a distant water tower (probably about 10km away), threaded the data cable through the mount to avoid snags, and waited. Then, at that magic moment of twilight when the sky is darkening but the stars haven’t yet appeared, I captured twenty-one images of the sky, near the zenith, to be used later to create a master flat field image. Then it was lens-cap on, power off, dust-cover on, and inside to bath the kids.
Several hours later, after astronomical twilight had ended, I went back out, hooked up the laptop, and got to work. Because the sky was so special, I took the time to fine-tune my polar alignment, using the drift method. I had intended to spend the rest of the evening capturing M1 (the Crab Nebula – a supernova remnant which I’d managed to never see before), but M42 was just too tempting a target, so I cut the M1 run short, and swung the telescope North to find Orion. A few test shots later, and I’d settled on the exposure settings I was going to use: 10 seconds at ISO1600. I captured thirty exposures, then popped on the lens cap and snapped another ten images for use as dark frames. I grabbed a few other targets of opportunity, then packed up for the evening.
A few weeks later, I got around to processing. First order of business: get a Bias frame. This was easy – I’d created one months ago, and since camera electronics don’t change much over time, I could just re-use it. Then it was time to create the master Dark: load the ten dark frames, subtract the master bias from each of them, and then median-combine them. Or just use the “Create a master dark” feature of my favourite processing software! And then I created the master Flat Field – take all those images of a uniform light source (the sky, in this case), subtract the master bias (as with the Dark frames), normalise them to all have the same average brightness, and finally median-combine them. Save as Master Flat. And then, to finish off the calibration sequence, load up that master Dark again, and generate a map of the hot pixels.
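For the curious, the master-frame arithmetic described above can be sketched in a few lines of Python with NumPy. This is a rough illustration of the general technique, not what Iris actually runs internally, and the function names and frame arrays are my own invention:

```python
import numpy as np

def make_master_dark(darks, master_bias):
    # Subtract the bias from each dark frame, then take the
    # pixel-by-pixel median across the stack of frames.
    return np.median([d - master_bias for d in darks], axis=0)

def make_master_flat(flats, master_bias):
    # Bias-subtract each flat, normalise them all to a common
    # average brightness, then median-combine.
    debiased = [f - master_bias for f in flats]
    target = np.mean([f.mean() for f in debiased])
    normalised = [f * (target / f.mean()) for f in debiased]
    return np.median(normalised, axis=0)
```

The median (rather than a plain mean) is what rejects one-off outliers in any single frame.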
Of course, I could have done these steps by hand if I wanted (and if you’re using Photoshop instead of proper astronomical imaging tools, you pretty much have to), but I didn’t want to, because we have the technology now to do it automatically. The end result of all that calibration work was that I now had a Master Bias (also called a Master Offset), which maps the slight imperfections in voltage supplied to each pixel on the camera sensor; the Master Dark, which maps out the differences in sensitivity to noise for each pixel in the camera; and the Master Flat, which maps out differences in sensitivity to signal on each pixel, along with any other factors that can dim the incoming light – dust on the sensor, vignetting from the optical design, etc. Oh, and I also had a list of the locations of all my camera’s hot pixels. Between them, these four files map out every failing of my camera setup, from electronics to sensors to optics, so that they can be corrected out of the actual pictures I take. Most of these flaws are affected by outside circumstances like temperature, so they have to be measured each time the system is set up.
So finally I could begin working on the actual pictures. I began by loading in the thirty raw images I’d captured, and running the automated pre-processing function. This step applies each of those calibration files to each raw image, to clean them up before I begin processing properly. The sequence that this process follows looks like this: first, the Bias and Dark images are subtracted from each image, and then each image is divided by the Master Flat. Incidentally, when I talk about doing arithmetic like adding or dividing images, I mean that the images are compared pixel-by-pixel, and the operation (addition, subtraction, etc.) is performed between corresponding pixels. So with those raw images cleaned up and saved, we then “Develop” the images by colouring in the individual pixels (Red, Green and Blue) – a process known as demosaicing – to change from an ugly black and white grid to an actual picture, and then move to the next step: registration.
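That pixel-by-pixel calibration sequence is simple enough to write down directly. Again, this is a sketch of the standard technique in NumPy, not Iris’s code, and the `calibrate` name is mine:

```python
import numpy as np

def calibrate(raw, master_bias, master_dark, master_flat):
    # Subtract the bias and dark signal, then divide by the flat.
    # The flat is normalised to a mean of 1 first, so it only
    # corrects *relative* sensitivity differences between pixels
    # rather than changing the overall brightness.
    flat = master_flat / master_flat.mean()
    return (raw - master_bias - master_dark) / flat
```

Every operation here is element-wise over the whole image array, which is exactly the pixel-by-pixel arithmetic described above.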
Registration is a fancy word for “Alignment”. Even the best mounts never track 100% true, so there will always be some slight shift between images of the same part of the sky. This process corrects for that. It can be done manually, by comparing two images and measuring the pixel locations of individual stars and then offsetting the second image until they align perfectly, or you can let the computer do it for you. I recommend the second way – it’s faster, it’s easier, and the computer does a better job. Iris offers a number of different algorithms to take care of this, varying in speed and complexity. Which to use depends on the image – sometimes a given algorithm will simply fail to lock on to the same star in each image, and the registration will fail. Somebody more experienced than me might well be able to tell at a glance which method to use, but I just pick the fastest. If it fails, I work my way down the list till I get a good result. Not very efficient, not very clever, but it does eventually get the job done. Incidentally, this is easily the slowest part of the job – even a fast modern computer can sit chugging away for hours on a large set of images using one of the more complex algorithms which adjust not only for lateral motion, but field rotation and distortion as well. But when it’s done, you have a new set of files, a little larger than before, padded with different sized borders to allow the actual data to be shifted around without losing anything.
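To make the idea concrete, here is a deliberately toy version of translation-only registration: find one bright star in each frame and shift the second frame so the two positions coincide. Iris’s real algorithms match many stars and can also correct field rotation and distortion, and they pad the output with borders rather than wrapping pixels around as `np.roll` does here – so treat this purely as an illustration:

```python
import numpy as np

def register_translation(reference, image):
    # Locate the brightest pixel (assumed to be the same lone
    # bright star) in each frame.
    ry, rx = np.unravel_index(np.argmax(reference), reference.shape)
    iy, ix = np.unravel_index(np.argmax(image), image.shape)
    # Shift the second frame so its star lands on the reference
    # star's position. Note: np.roll wraps at the edges; real
    # software pads with borders instead.
    return np.roll(image, shift=(ry - iy, rx - ix), axis=(0, 1))
```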
At this stage, we can start with the real magic: Stacking. There are a number of stacking algorithms, starting with simple arithmetic addition and progressing to complex statistical procedures, but the most common share the goals of brightening the image and reducing noise. The brightening happens as a direct result of the addition, but the noise reduction comes from the fact that signal is constant while noise is random. As we add the images, the signal is present in every frame, while the noise varies, so the signal gets brighter faster than the noise does. Result: the Signal to Noise ratio improves, faint details become visible, and your faint murky raw frames become a gloriously bright picture. Other algorithms aim for different results. Median stacking sets each pixel in the final image to the median value of the same pixel across all the raw images, while Min/Max rejection is the same as simple addition but with the additional step of rejecting any pixel in a raw image that varies too much from the other raws – useful for eliminating cosmic ray hits on the sensor, passing satellites, etc.
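The three stacking methods mentioned above can be sketched in NumPy like so (a simplified illustration: the Min/Max version here just drops the single lowest and highest value at each pixel before summing, which is one common variant, not necessarily Iris’s exact rule):

```python
import numpy as np

def stack_sum(frames):
    # Simple arithmetic addition: signal grows faster than noise.
    return np.sum(frames, axis=0)

def stack_median(frames):
    # Each output pixel is the median of that pixel across frames.
    return np.median(frames, axis=0)

def stack_minmax_reject(frames):
    # Sort each pixel's values across frames, discard the lowest
    # and highest (cosmic rays, satellite trails), sum the rest.
    stack = np.sort(np.asarray(frames), axis=0)
    return np.sum(stack[1:-1], axis=0)
```

Notice that a single cosmic-ray hit survives simple addition but vanishes under median or Min/Max stacking.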
For this image, I wanted to use Simple Addition, because I wanted to bring out those faint outlying regions of the nebula. However, I knew there were going to be a lot of bad raw frames, because my mount doesn’t track very well. So I inspected each of them by eye, deleting any with bad motion blur or star trailing from tracking errors, keeping those with perfectly round stars, and using my judgement on the marginal ones. In the end, I selected eleven of the original thirty frames, and stacked them.
The result was nice, but it still needed post-processing. First, I set the white balance. My software has two commands to make this simple: first, the “Black” command – select a region of sky with no stars, and the software adjusts the colour levels as needed to make this portion of the image appear totally black. Then, “White”, which requires you to select an area that you know is white (the planet Venus, perhaps, or the Moon, or a G2V star), so that the software can calculate the correct colour balance. Sometimes, I just select a large area containing as many stars as I can, because I’m more concerned with a natural appearance than scientific accuracy. But for this image, I used previously calculated colour levels and set the white balance manually.
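Conceptually, the Black/White pair boils down to a per-channel offset and scale. A hedged sketch of the idea (my own function, not the Iris commands themselves – Iris’s exact maths may differ):

```python
import numpy as np

def balance(image, black_region, white_region):
    # Per channel: subtract the mean of a starless 'black' patch,
    # then scale so the mean of a known-white patch comes out
    # equal across R, G and B.
    out = image.astype(float).copy()
    for c in range(3):
        out[..., c] -= image[black_region][..., c].mean()
    white = [out[white_region][..., c].mean() for c in range(3)]
    target = np.mean(white)
    for c in range(3):
        out[..., c] *= target / white[c]
    return out
```

The regions are passed as index tuples, e.g. `(slice(0, 50), slice(0, 50))` for a 50×50 corner patch.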
I now had a reasonably true representation of the nebula, but the edges were too faint while the heart was so bright that the little group of four stars known as the Trapezium could not be seen. So I changed the view mode from Linear to Logarithmic, and adjusted the levels till I was happy with the amount of detail visible across all regions. Finally, I boosted the colour saturation slightly (not too much – I prefer to avoid that Hollywood CGI Effects look!), and was left with the image at the top of the page.
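The logarithmic view is a display stretch: it compresses the blazing core while lifting the faint outer nebulosity, so the Trapezium and the wings can be seen in the same frame. A minimal sketch of such a stretch (the `scale` knob is my own arbitrary contrast parameter, not an Iris setting):

```python
import numpy as np

def log_stretch(image, scale=1000.0):
    # Normalise to 0..1, then apply log1p so faint values are
    # boosted far more than bright ones; renormalise so the
    # brightest pixel stays at 1.
    norm = image / image.max()
    return np.log1p(scale * norm) / np.log1p(scale)
```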
If you’d like to know more, or want to compare notes, feel free to leave a comment below, or mail me at [email protected]