A few nights ago, the clouds parted and revealed the first clear sky in a very long time. So I hauled out the telescope, camera, laptop and furniture to spend the evening capturing a target I’d had my eye on for some time: the radio galaxy NGC 5128, more famously known as Centaurus A. My telescope is quite old – it was made in the early 1990s – so although it was an expensive, state-of-the-art instrument in its day, it is quite primitive by modern computerised standards. The electronics in its mount can drive the tracking motor at different speeds, but it has no GoTo abilities and no digital setting circles.
If I want to look at something, I have to find the target by hand, using star charts and the finder scope. So once everything was set up, I opened my star atlas and plotted a course from an object I know well (Omega Centauri, the grandest globular cluster in the sky) to my target a few degrees away. My atlas of choice is Deep Sky Hunter, a free-to-download book of star charts in PDF format, designed to be printed out and bound. Having found my starting point and the target for the evening, I then studied the region to find recognisable patterns of stars that I could use to plot a route. Once I had this memorised, I went outside to the scope, swung it around to point at Omega Cen, knelt down to look through the finder scope, and moved it one star at a time along the path I’d chosen. This technique is called “star hopping”, and it was the standard way of navigating the skies for amateur astronomers up until the invention of the computerised telescope mount.
It works well, but it took me a long time. I am out of practice, and I took several wrong turns. At the end of the journey, I began snapping a series of 10-second exposures at the camera’s fastest (and noisiest) sensitivity to test my view and confirm that I was on target. I repeated this a few times, seeing nothing and tweaking the aim a little, until suddenly there it was in the bottom right corner of my image: a fuzzy brown sphere, split in the middle by a broad lane of dust. It was magical! In all my years of observing the skies, I have somehow managed to never see a galaxy through my own telescope, and suddenly there it was, clear as day. I skipped a little dance, hugged myself a bit, and then got back to work. I centred the target, and set the camera software to capture a long sequence of 50 images, each at 10 seconds and ISO 1600. While that ran, I lay on the ground with binoculars, exploring the southern Milky Way to see what I could find.
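(A quick aside on why fifty frames rather than one: averaging many frames of the same target beats any single frame, because the random noise shrinks by roughly the square root of the number of frames while the signal stays put. A toy sketch of the effect – simulated pixel values with made-up numbers, nothing from my actual session:)

```python
import random
import statistics

random.seed(42)

TRUE_SIGNAL = 100.0   # the "real" brightness of one pixel of the galaxy
NOISE_SIGMA = 20.0    # random noise added by each individual exposure
N_FRAMES = 50         # frames in the stack

# Simulate 50 noisy measurements of the same pixel.
frames = [random.gauss(TRUE_SIGNAL, NOISE_SIGMA) for _ in range(N_FRAMES)]

# Any single frame wanders by ~20 units; the average of 50 frames
# typically lands within ~20/sqrt(50) ≈ 3 units of the true value.
stacked = statistics.mean(frames)
print(frames[0], stacked)
```

This is the whole reason stacking works, and also why losing 34 of my 50 frames hurt: the remaining 16 only buy a factor of four in noise, not seven.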
Eventually the sequence was done, and I got on with the boring job of capturing the reference data: another set of 50 “dark” frames, identical to the actual image data but with the lens cap on, followed by 50 “bias” frames (the same again, but with the shutter speed set to its absolute fastest value), and then I moved the telescope inside for the “flat field” frames. That all takes long enough that I went straight to bed afterwards and put off the processing for another evening. But when I finally started building all of that data into an image, I discovered that the telescope hadn’t been tracking properly. Not only were all the images stretched out, with stars appearing as lines instead of points, but the target itself drifted downwards from frame to frame, until it began vanishing off the edge altogether. In the end I had to discard more than half of my data, and was left with only 16 useful images to stack. What a disaster.
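(For anyone unfamiliar with those reference frames: darks record the sensor’s thermal signal, bias frames record the readout offset, and flats record vignetting and dust shadows in the optical path. The arithmetic the stacking software applies to each frame is roughly light-minus-dark, divided by a normalised flat. A toy sketch with made-up 2×2 “images” – real software like Iris first averages many darks, biases and flats into master frames, and subtracts the bias from the flat as well, which I’ve folded away here for brevity:)

```python
# Minimal sketch of calibration-frame arithmetic, with tiny nested
# lists of made-up pixel values standing in for real camera frames.

def subtract(a, b):
    return [[pa - pb for pa, pb in zip(ra, rb)] for ra, rb in zip(a, b)]

def divide(a, b):
    return [[pa / pb for pa, pb in zip(ra, rb)] for ra, rb in zip(a, b)]

light = [[110.0, 120.0], [130.0, 90.0]]   # raw frame of the target
dark  = [[10.0,  10.0],  [10.0,  10.0]]   # master dark (thermal + offset)
flat  = [[1.0,   0.8],   [0.9,   1.0]]    # master flat, normalised to 1.0

# Calibration: remove the dark signal, then divide out the flat field.
calibrated = divide(subtract(light, dark), flat)
print(calibrated)   # each pixel: (light - dark) / flat
```

The division is what undoes vignetting: a corner that only receives 80% of the light (flat value 0.8) gets boosted back up to full brightness.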
Still, that was enough to clearly reveal the galaxy. And my favourite stacking software, Iris, has a few neat tricks up its sleeve that I could use to try and repair the tracking damage. Many photo editing tools have a feature called deconvolution, which can be used to clean up smeared stars and turn them back into the neat, sharp points of light they’re supposed to be. The problem is that I’ve never really understood what it does, nor how to use it. Iris solves this for me quite neatly with its implementation of the Richardson-Lucy deconvolution algorithm. Simply draw a box to select a small region around an isolated, non-saturated star, and fire off the process. Since Iris knows what a star is supposed to look like, it can compare that with what is actually in the image, work out how to turn the smeared star back into a point, and then run that same correction across the entire image to undo the damage. How well does it work? It’s not as good as if the tracking had worked in the first place. Not even close. But it is still an enormous improvement – look carefully at the image and consider that each of those blobby stars was once a short star trail! And as bad as the stars themselves look, the galaxy looks pretty good, with visible structure.
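(For the curious, the Richardson-Lucy iteration itself is surprisingly small: blur the current guess with the point-spread function, compare that with the observed image, and nudge the guess toward whatever sharp image best explains the data. A toy one-dimensional sketch – this is the textbook iteration, not Iris’s actual implementation, with a 3-pixel boxcar PSF standing in for my star trails:)

```python
def correlate(signal, kernel):
    # 1D sliding-window correlation with zero padding (odd-length kernel).
    # For the symmetric PSF used below this is identical to convolution.
    half = len(kernel) // 2
    out = []
    for i in range(len(signal)):
        acc = 0.0
        for k, kv in enumerate(kernel):
            j = i + k - half
            if 0 <= j < len(signal):
                acc += signal[j] * kv
        out.append(acc)
    return out

def richardson_lucy(observed, psf, iterations):
    # Textbook Richardson-Lucy: multiplicative updates that keep the
    # estimate non-negative and (away from the edges) flux-preserving.
    estimate = [1.0] * len(observed)           # flat initial guess
    for _ in range(iterations):
        blurred = correlate(estimate, psf)     # what this guess would look like
        ratio = [o / max(b, 1e-12) for o, b in zip(observed, blurred)]
        correction = correlate(ratio, psf)     # project the mismatch back
        estimate = [e * c for e, c in zip(estimate, correction)]
    return estimate

# A point-like "star" smeared into a 3-pixel trail by a boxcar PSF.
psf = [1/3, 1/3, 1/3]
truth = [0.0, 0.0, 0.0, 9.0, 0.0, 0.0, 0.0]
observed = correlate(truth, psf)               # ≈ [0, 0, 3, 3, 3, 0, 0]
restored = richardson_lucy(observed, psf, iterations=200)
print([round(x, 2) for x in restored])
```

With enough iterations the smeared flux is pulled back into a single sharp peak – which is exactly what the algorithm does to a trailed star, only in two dimensions and with a PSF measured from a real star in the frame.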
So all in all, I’m told that this is an image to be proud of. But too much went wrong along the way, and it’s all fixable. And I will fix it – just watch this space to see how it comes out next time!