The problem they're trying to solve is one of angular resolution: there's a limit to how small an object we can see at any particular distance. A healthy human eye can resolve down to slightly worse than 1 minute of arc, which is like looking at the eye of a needle at arm's length, or a loaf of bread 1 km away. To improve on that limit, we can use optical tools, like binoculars or telescopes. The bigger and more expensive they are, the better the resolution.
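As a quick sanity check of those analogies, here is the small-angle approximation at work. The object sizes below -- a roughly 0.3 mm needle eye, a roughly 30 cm loaf -- are my own rough guesses:

```python
import math

ARCMIN_PER_RAD = 60 * 180 / math.pi  # about 3437.7 arc minutes per radian

def angular_size_arcmin(size_m, distance_m):
    """Angle subtended by an object, in arc minutes (small-angle approximation)."""
    return (size_m / distance_m) * ARCMIN_PER_RAD

# Eye of a needle (~0.3 mm wide) held at arm's length (~0.7 m)
print(angular_size_arcmin(0.0003, 0.7))   # roughly 1.5 arcmin
# Loaf of bread (~30 cm) at 1 km
print(angular_size_arcmin(0.3, 1000.0))   # roughly 1 arcmin
```

Both come out around the 1 arc minute mark, as the analogy suggests.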
The Earth's atmosphere complicates the issue, however. It is a seething, turbulent mass of cells of different temperature and density, and these cells have different indices of refraction. Light passing through the entire thickness of the atmosphere gets distorted and scattered, with a result not unlike the patterns of light at the bottom of a swimming pool when people have been swimming in it. A large telescope, theoretically capable of extremely fine resolution, gathers all these distortions in the same field of view, blurring the image and cancelling out the resolution benefits.
There are two ways to solve this problem: either try to minimise the disturbance by building observatories as high as possible in very still, quiet regions (up tall mountains, or even in space), or use the clever new technology of adaptive optics, which uses laser beams to monitor the state of the air from second to second and rapidly adjusts the telescope's optics to compensate. But both approaches are very expensive, and pushing past their limits costs even more.
What A. Richichi, W.P. Chen, O. Fors and P.F. Wang suggest in their paper
is a highly sensitive way to study objects which happen to be in the right position to be temporarily eclipsed by the Moon (an event known as a lunar occultation). The theory gets complex pretty quickly, but those who took high school physics will remember two important properties of the wave nature of light: diffraction and interference. When light passes through a slit, it bends and spreads out slightly -- the narrower the slit, the greater the diffraction. You get the same effect when light passes an edge. Light coming from different parts of the same source diffracts at slightly different angles, which leads to interference effects (which I won't explain in detail here -- just think of waves of water, how they can amplify or cancel each other out at different points; light does the same thing). The resulting pattern of light and dark lines is called a diffraction pattern. Now if the source is a complex object, like a binary star, then the diffraction pattern will be different from that of a single star: you'll get two very slightly different diffraction patterns overlapping each other. This effect is sensitive enough to separate binary stars so close together that no telescope on Earth could have revealed their double nature!
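To make that concrete, here is a toy sketch (my own illustration, not the authors' analysis code) of the light curve a telescope records as the Moon's dark limb covers a source. A point source behind a straight edge produces the classic Fresnel fringe pattern; a binary star is just two such patterns, shifted and weighted by the components' brightness. The 60/40 flux split and the fringe shift below are illustrative assumptions:

```python
import math

def fresnel_cs(w, steps=2000):
    """Fresnel integrals C(w) = int_0^w cos(pi t^2/2) dt and
    S(w) = int_0^w sin(pi t^2/2) dt, by midpoint-rule integration."""
    c = s = 0.0
    dt = w / steps
    for i in range(steps):
        t = (i + 0.5) * dt
        c += math.cos(math.pi * t * t / 2) * dt
        s += math.sin(math.pi * t * t / 2) * dt
    return c, s

def edge_intensity(w):
    """Monochromatic point-source intensity behind a straight edge,
    in units of the unobstructed intensity. w < 0 is deep in shadow;
    for large w the intensity oscillates and settles toward 1."""
    c, s = fresnel_cs(w)
    return 0.5 * ((c + 0.5) ** 2 + (s + 0.5) ** 2)

def binary_intensity(w, flux_ratio=0.6, shift=0.8):
    """Two overlapping edge patterns: a brighter component carrying
    `flux_ratio` of the light plus a fainter one offset by `shift`
    in Fresnel units (both parameter values are illustrative)."""
    return (flux_ratio * edge_intensity(w)
            + (1 - flux_ratio) * edge_intensity(w - shift))
```

Fitting a model like `binary_intensity` to a rapidly sampled occultation light curve is what lets the separation and flux ratio of the pair be recovered far below the telescope's ordinary seeing limit.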
Richichi and his team set up an experiment to test this technique and report an impressive resolution of 7.5 milliarcseconds -- that's equivalent to spotting an egg at a distance of 1,000 km. As incredible as this is, modern telescopes like the ESO's Very Large Telescope
can do seven times better. But consider what the VLT costs: four gigantic telescopes, each over 8 meters in diameter, operating in tandem as a baseline interferometer. Such an installation runs to billions in construction and operating costs. The new technique achieves comparable results at a fraction of the cost and complexity.
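As a sanity check on that egg analogy, the same small-angle arithmetic as before (taking an egg to be a few centimetres across):

```python
import math

ARCSEC_PER_RAD = 180 * 3600 / math.pi   # about 206265 arcseconds per radian

# What size subtends 7.5 milliarcseconds at 1,000 km?
theta_rad = 7.5e-3 / ARCSEC_PER_RAD
size_m = theta_rad * 1_000_000           # small-angle approximation
print(f"{size_m * 100:.1f} cm")          # prints "3.6 cm" -- about egg-sized
```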
The only obvious weakness: your target has to be positioned just right, so that the Moon will occult it. And because the Moon's limb sweeps across the source in only one direction, you get a one-dimensional brightness profile rather than a full two-dimensional picture. Still, it's an extremely clever idea, which will doubtless inspire new and interesting ways to use the same technique.