Constantly Learning
I have just finished processing my third image of the famous southern radio galaxy Centaurus A. Each time I return to this object I get better results, even though I’m using exactly the same equipment. It just goes to prove that it’s not the equipment that makes the shot, but the photographer. Just to labour the point, here are all three of my images (so far) of Centaurus A, along with notes on how I captured each shot and what I learned each time to make the next one better.
First, though, a rundown of what all three images have in common: they were all shot with the same camera, through the same telescope, from the same location, at roughly the same time of year. That’s one Canon EOS 1100D entry-level DSLR, one Celestron Powerstar C8 mounted on a wooden pier in my rural backyard in the South African highveld, in autumn. They were also all shot by the same guy, who has been trying to master the art of deep-sky astrophotography for several years.
First Attempt

1600 ISO
Stacked in Iris, used RL algorithm to correct for tracking errors. Post-processing in DigiKam (noise reduction, colour saturation)
The first image was captured in April 2014. I remember spending a very long time trying to find the target. The Powerstar C8 is fork mounted, on an equatorial wedge, with no Goto feature. Centaurus A is very far south, less than 30º from the south celestial pole, and that particular configuration is quite clumsy to operate on objects near the poles. So I spent upwards of an hour moving between chart and awkwardly placed finderscope, trying to match up barely-visible stars, before returning to the laptop to grab a test frame (usually 30 seconds at 6400 ISO) to check whether I was on target.
By the time I had eventually placed the galaxy near the centre of the camera sensor, and fired off a few more test frames to choose my camera settings, it was already late. So I quickly snapped sixteen light frames, threw the lens cap back on for a set of dark frames, then moved the entire rig inside to get flats off an LED monitor. Sixteen seems like a very small number of frames to me now, but it was about what I usually went for at that stage. Unfortunately, my mount is plagued with quite bad periodic error, at times drifting off in right ascension faster than the manual tracking controls can keep up with! As a result, my images from that period tended to come out with stretched-out stars showing spikes in odd directions, and this image was no exception. To try and repair this, I ran the stacked image through the Richardson-Lucy algorithm (famously used to sharpen the blurry images the Hubble Space Telescope produced before astronauts were able to install a set of corrective optics). These days I never use RL, but at the time it seemed like magic and I applied it indiscriminately.
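For the curious, the Richardson-Lucy update itself is only a few lines. Here’s a toy sketch in Python with NumPy and SciPy; the Gaussian PSF is a hypothetical stand-in for the mount’s real smearing, and a serious run would estimate the PSF from a star in the actual frame:

```python
import numpy as np
from scipy.signal import fftconvolve

def gaussian_psf(size=9, sigma=1.5):
    """A made-up Gaussian point-spread function, standing in for
    the real blur caused by tracking error."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    psf = np.exp(-(xx ** 2 + yy ** 2) / (2 * sigma ** 2))
    return psf / psf.sum()

def richardson_lucy(blurred, psf, iterations=30):
    """Iteratively re-estimate the unblurred image, given a known PSF:
    estimate *= (blurred / (estimate * psf)) convolved with flipped PSF."""
    estimate = np.full_like(blurred, 0.5)
    psf_mirror = psf[::-1, ::-1]
    for _ in range(iterations):
        conv = fftconvolve(estimate, psf, mode="same")
        relative_blur = blurred / np.maximum(conv, 1e-12)
        estimate = estimate * fftconvolve(relative_blur, psf_mirror, mode="same")
    return estimate

# Demo: a single "star", blurred by the PSF, then partially recovered.
star = np.zeros((64, 64))
star[32, 32] = 1.0
psf = gaussian_psf()
blurred = fftconvolve(star, psf, mode="same")
recovered = richardson_lucy(blurred, psf)
```

After a few dozen iterations the point source is noticeably re-concentrated, which is exactly the “magic” effect on smeared stars, and also why over-applying it amplifies noise.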
Second Attempt

Stacked with darks, flats and offsets in Iris.
Cropped, stretched, colour balanced in Iris
The second image was captured a full year later, in April 2015. There was one change to the equipment: this time, the telescope was fitted with a field corrector that reduces some of the optical flaws inherent in the C8’s optical design, and reduces the focal length to 1260mm (or f/6.3 in photographic terms). This results in a wider field of view, shorter exposures, and stars that are in focus across the entire image and not just at the centre.
But more importantly, I collected a great deal more data. Instead of stopping at sixteen frames, I kept going till I had 121. Why that odd number? Because when stacking, the signal-to-noise ratio improves by a factor of the square root of the number of exposures (in other words, stacking 4 frames doubles the SNR, 9 frames triples it, and 121 frames improves it elevenfold). It doesn’t matter whether the improvement is a round number, of course, but it seemed important at the time.
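You can see that square-root relationship in a quick simulation; the signal and noise numbers below are made up purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(42)
true_signal = 100.0   # arbitrary "true" pixel value
noise_sigma = 20.0    # arbitrary per-frame noise level

def stacked_snr(n_frames, n_pixels=100_000):
    """SNR of the average of n_frames simulated noisy exposures."""
    frames = true_signal + rng.normal(0.0, noise_sigma, size=(n_frames, n_pixels))
    stack = frames.mean(axis=0)
    return stack.mean() / stack.std()

# One frame has SNR of about 100/20 = 5; averaging N frames
# multiplies that by roughly sqrt(N):
#   stacked_snr(1) is about 5, stacked_snr(4) about 10, stacked_snr(9) about 15
```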
All those extra frames did more than just improve the SNR, though. They also made it practical to discard bad frames – those where tracking errors or gusts of wind had caused the telescope to move mid-exposure, turning stars into streaks. With those frames gone (about half of them…), the stacked image was suddenly a great deal cleaner, with finer details visible and rounder stars.
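I did that cull by eye, but the same idea can be automated. Here’s a rough sketch of one possible approach (my own illustration, not the method I actually used): score each frame by the variance of its Laplacian, which drops when stars smear into streaks, and keep only the best-scoring fraction.

```python
import numpy as np

def sharpness(frame):
    """Variance of a discrete Laplacian: frames where stars have smeared
    into streaks score lower than frames with tight, round stars."""
    lap = (np.roll(frame, 1, axis=0) + np.roll(frame, -1, axis=0)
           + np.roll(frame, 1, axis=1) + np.roll(frame, -1, axis=1)
           - 4.0 * frame)
    return lap.var()

def cull(frames, keep_fraction=0.5):
    """Keep only the sharpest fraction of a list of frames."""
    scores = np.array([sharpness(f) for f in frames])
    cutoff = np.quantile(scores, 1.0 - keep_fraction)
    return [f for f, s in zip(frames, scores) if s >= cutoff]
```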
One final change: to get a brighter picture, I set the camera to its most sensitive ISO rating of 6400. This dramatically increased the noise, of course, but I reasoned that the extra stacking would deal with it. The results are significantly better, although a fine speckling of noise has survived the stacking process, mostly in the red channel for some reason.
Third Attempt

The third, and most recent, image was captured in May 2015. After the success of the second attempt, I took the “more data” approach a step further by capturing more than 460 frames over two evenings. But this was not the only change: this was my first serious imaging run with my new tethering software. I’ve always shot with a tethered camera, meaning that it is connected to a laptop PC with a USB cable, and I use software on the laptop to operate the camera. This lets me avoid touching the camera and setting it shaking at the beginning of each shot, but it also lets me program imaging runs so that I can head indoors for a warming mug of tea while the machinery gets on with things. I’ve always used Canon’s EOS Utility that ships with the camera, but I recently discovered Astro Photography Tool (APT) and now use it almost exclusively. It does everything the Canon tool does, but with a few extra touches that are useful when imaging the skies: it has a night-vision mode, its live view does quick’n’dirty stacking to increase sensitivity and make fainter stars visible, and it lets you save pre-set imaging plans. For a full list of all the extra features, check out the product website, but in summary it’s made my imaging sessions a lot more repeatable and comfortable.
My particular circumstances give me a narrow window of opportunity to actually capture images: I generally spend a half hour or so as twilight begins getting all the hardware in place, get to actual imaging sometime around 9pm, then pack everything up again before midnight. This meant I needed two nights to gather all the data I wanted, and happily the weather allowed me to shoot on two evenings only a few days apart. Conditions weren’t identical, though: my astro weather forecast from 7timer claimed that the second evening had better atmospheric transparency, but worse seeing. The data confirmed this: the second set of images was noticeably brighter and blurrier. The first evening produced 162 frames, and the second produced 301.
The two sets each had their own dark and flat frames, and were pre-processed separately. I visually inspected each individual image and manually discarded those which I subjectively felt showed insufficiently round stars (in other words, any images where the mount drifted off target for any reason). This left me with 186 usable frames, which I then stacked with IRIS’s composit2 method, which the manual describes as a “robust average image using a continuous adaptive weighting scheme that is derived from the data themselves”, referencing the “Artificial Skepticism Stacking Algorithm” if you’re curious to know how it works. It’s all very clever, and is a faster alternative to the classic sigma-clipping algorithm. Unlike the previous two images, I based the white balance on a portion of the galaxy itself. I found some big-observatory images of this galaxy, identified a section that appeared closest to pure white, and directed IRIS to set the white balance automatically based on that region. Finally, I set the levels to display in logarithmic mode (as opposed to linear) to make the variations in brightness more visible.
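I can’t show IRIS’s internals, but the classic sigma-clipping average it improves on is easy to sketch in Python (my own illustration, not IRIS’s actual code): compute a per-pixel mean and standard deviation across the stack, mask anything too far out (satellite trails, cosmic-ray hits, passing aircraft), and re-average.

```python
import numpy as np

def sigma_clip_stack(frames, kappa=3.0, iterations=3):
    """Average a stack of frames, shape (n_frames, height, width),
    rejecting per-pixel outliers that sit more than kappa standard
    deviations from the per-pixel mean."""
    data = np.ma.masked_invalid(np.asarray(frames, dtype=float))
    for _ in range(iterations):
        mean = data.mean(axis=0)   # per-pixel mean across the stack
        std = data.std(axis=0)     # per-pixel spread across the stack
        data = np.ma.masked_where(np.abs(data - mean) > kappa * std, data)
    return data.mean(axis=0).filled(np.nan)
```

Because the clipping is done per pixel, a trail that ruins one spot in one frame doesn’t force you to throw the whole frame away, which matters when every minute of exposure time is hard-won.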
Conclusion
Now that final image is by no means perfect. It is still noisy, finer details are not very visible, and the outer regions of the galaxy are not visible at all. But that’s okay – this is a learning process. I can blame a lot of problems on my cheap camera and shaky mount, yet as I gain experience I keep producing images that I would have thought impossible months earlier. And I’m not even that good a photographer, so it just proves what I said at the beginning: It’s the photographer who makes the shot, NOT the camera. With time, my skills will continue to improve and I’ll keep coming back to Centaurus A so that I have a clear record of how my images are stacking up.