3rd Generation Calibration – SKA eNews – July 2015
An introduction to the family of new calibration techniques known as third-generation calibration, or 3GC.
A radio interferometer does not directly observe an image of the sky like an optical telescope. Instead, it measures “visibilities”, or Fourier components, which require complicated maths to render into images. Calibration therefore becomes critical in relating the numbers coming off the telescope to underlying physical quantities. Historically, our understanding of interferometric calibration has proceeded in three distinct phases.
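The relationship between the sky and the measured visibilities can be sketched in a few lines of NumPy. This is a toy illustration only: the 64-pixel sky, the source positions and the random 30% Fourier-plane sampling are invented for the example and stand in for a real array's (u, v) coverage.

```python
import numpy as np

# Toy sky: a 64x64 image containing two point sources.
sky = np.zeros((64, 64))
sky[20, 30] = 1.0
sky[40, 12] = 0.5

# An interferometer samples the Fourier transform of the sky:
# each baseline measures one "visibility" at a (u, v) coordinate.
visibilities = np.fft.fft2(sky)

# A real array samples only some (u, v) points; here we keep a
# random subset and zero the rest to mimic incomplete coverage.
rng = np.random.default_rng(0)
mask = rng.random(sky.shape) < 0.3
sampled = visibilities * mask

# Inverting the incomplete measurements gives the "dirty image":
# the true sky convolved with the array's point spread function.
dirty = np.fft.ifft2(sampled).real
```

The gaps in the sampling are what make the later deconvolution step necessary.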
The best description of the first phase can be borrowed from Jan Noordam of the Netherlands Institute for Radio Astronomy (ASTRON). He calls it ‘first generation calibration’, or 1GC: simply comparing the signal on each baseline to the signal from a known source (the calibrator). This was only good enough for a dynamic range (DR) of around 100 to 1, but it enabled many of the pioneering discoveries of radio astronomy.
‘Second generation calibration’, or 2GC, was ushered in during the 1980s by the invention of selfcal, which made it possible to build a model of the observed sky while simultaneously correcting for the direction-independent effects (DIEs) introduced by the antennas. This was a revolution because it allowed DRs of up to 100,000 to 1, good enough to exploit the full capabilities of the telescopes of the time. As instruments became increasingly sensitive, however, more subtle instrumental effects became a bottleneck to reaching higher DRs.
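The core idea of selfcal is that, given a sky model, one can solve for a complex gain per antenna directly from the visibilities. The sketch below is an illustrative alternating least-squares solver in the spirit of algorithms such as StEFCal, not any production implementation; the seven-antenna array, the phase-only gains and the damping factor are all invented for the example.

```python
import numpy as np

rng = np.random.default_rng(1)
n_ant = 7

# True per-antenna complex gains (the direction-independent errors).
g_true = np.exp(1j * rng.uniform(-0.5, 0.5, n_ant))

# Model visibilities for a 1 Jy point source at the phase centre:
# every baseline (p, q) should ideally measure exactly 1 + 0j.
model = np.ones((n_ant, n_ant), dtype=complex)

# The observed visibilities are corrupted by the antenna gains:
# V_pq = g_p * conj(g_q) * model_pq.
observed = np.outer(g_true, g_true.conj()) * model

# Alternating least-squares solver: update each antenna's gain
# assuming all the others are momentarily correct, with damping.
g = np.ones(n_ant, dtype=complex)
for _ in range(100):
    g_new = np.empty_like(g)
    for p in range(n_ant):
        num, den = 0.0 + 0.0j, 0.0
        for q in range(n_ant):
            if q == p:
                continue
            z = np.conj(g[q]) * model[p, q]   # predicted V_pq / g_p
            num += observed[p, q] * np.conj(z)
            den += np.abs(z) ** 2
        g_new[p] = num / den
    g = 0.5 * (g + g_new)                     # damped update

# g now reproduces the corruptions, up to an overall phase.
```

The solved gains can then be divided out of the data, the sky model improved from the corrected image, and the loop repeated.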
‘Third generation calibration’ or 3GC is the family of new calibration techniques for dealing with direction-dependent effects (DDEs). These include variations in the primary beam or sensitivity pattern of each antenna (figure 2).
Dealing with such effects is critical in order to take full advantage of the capabilities of new telescopes such as the MeerKAT, LOFAR, ASKAP, the upgraded JVLA and the SKA.
DDEs are varied and subtle, so 3GC is more like a big toolbox than a single technique, and nowadays many tools in this toolbox have a “Made in Africa” label. While the MeerKAT is being built, these tools are polished with data from the JVLA – a dish array with half the number of antennas, but more collecting area and massive bandwidth, giving roughly similar calibration challenges to those of MeerKAT.
Figure 1 shows a new map of 3C147 using JVLA data from Rick Perley (NRAO), obtained at the Centre for Radio Astronomy Techniques & Technologies (RATT) at Rhodes University. This boasts a world-record DR of 5 million to 1 and is a showcase for the 3GC tools being developed in South Africa.
The biggest challenge to high DR with dish arrays is the rotation of the primary beam pattern with respect to the sky. This rotation is intrinsic to the alt-azimuth mounts employed by the JVLA and MeerKAT, which track the rotating sky overhead, and it causes time-variable DDEs. If not corrected for, these manifest as artefacts in the image (figure 3a).
One of the early 3GC approaches to dealing with this was the differential gains (DG) technique developed at ASTRON. It solved for a time-variable gain towards individual bright sources. These solutions would then track the rotating beam variations (figure 3d) and remove the artefacts (figure 3c).
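In essence, a differential gain is an extra gain term solved toward one bright source per time interval, soaking up whatever the beam does to that source. A minimal sketch of the idea follows, with an invented sinusoidal modulation standing in for the real rotating beam, and one scalar solved per timeslot by least squares over the baselines; real DG solvers work with full Jones matrices.

```python
import numpy as np

rng = np.random.default_rng(2)
n_time, n_bl = 24, 10

# Model visibilities for one bright off-axis source (unit flux),
# with arbitrary phases across baselines and time.
model = np.exp(1j * rng.uniform(0, 2 * np.pi, (n_time, n_bl)))

# The rotating primary beam slowly modulates the source's apparent
# flux over the observation: a time-variable DDE.
beam = 1.0 + 0.2 * np.sin(np.linspace(0, np.pi, n_time))

# Observed visibilities, plus a little thermal noise.
observed = beam[:, None] * model
observed += 0.01 * (rng.standard_normal(observed.shape) +
                    1j * rng.standard_normal(observed.shape))

# Differential gain: one scalar per time interval toward this
# source, solved by least squares over that interval's baselines.
dg = (np.sum(observed * model.conj(), axis=1) /
      np.sum(np.abs(model) ** 2, axis=1))

# |dg| now tracks the beam modulation, which can be divided out.
```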
DGs are the bluntest tool in the 3GC toolbox. A newer approach, implemented by Modhurita Mitra, Sphesihle Makhathini, Griffin Foster and others at RATT and SKA South Africa, uses a model of the rotating primary beam to remove most of the artefacts (figure 3b) before resorting to DGs; the remaining “garbage” can then be taken care of by solving for DGs on longer timescales (figure 3e), yielding a perfectly clean image at a DR of 5 million to 1.
Work by Trienko Grobler and Ridhima Nunhokee shows that an image with DG solutions on longer timescales is more “pure” in the sense that more accurate astrophysical parameters can be recovered.
Computing expense has been a concern with 3GC techniques. A comforting new benchmark is that the processing time for the 5-million-to-1 image is now better than real time: 10 hours of compute time for an image from 14 hours of telescope time, on a single high-end compute node.
New work by Cyril Tasse (formerly of RATT and SKA South Africa, now a collaborator at the Observatory of Paris Meudon), with RATT, promises to cut the calibration time by another order of magnitude. This bodes well for the MeerKAT, suggesting that such detailed 3GC processing for its data will be affordable even for individual scientists and small groups.
However, with computational costs taking a back seat, the “labour” costs of 3GC are becoming increasingly important. After all, the 5-million-to-1 image required a lot of human input and fine-tuning to produce.
Sphesihle Makhathini is working on turning these lessons into automated processing pipelines that can produce such results with minimal or no human intervention. In particular, at the early stages of calibration, before the major errors have been corrected for, it is vitally important to discriminate between artefacts (figure 3a) and real sources, lest the artefacts become “locked in” to the sky model. Sphesihle’s work with Lerato Sebokolodi (RATT) provides a reliable way to pinpoint and eliminate the artefacts.
A new effort led by Arun Aniyan (SKA South Africa/DOME/RATT) aims to apply machine learning (ML) techniques to such problems.
Complicated Extended Sources
Complicated extended sources, of which Cygnus A is perhaps the most famous, present a very different challenge.
Raw radio images are corrupted by a complicated point spread function, caused by the gaps in the array’s coverage of the Fourier plane. Correcting for this is known as ‘deconvolution’.
For point-like sources such as those in the 3C147 field, deconvolution is easily handled by the venerable CLEAN algorithm (dating from 1974), but this breaks down on complex sources such as Cygnus A.
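For reference, the heart of CLEAN (in its original Högbom form) fits in a few lines: find the brightest pixel in the residual image, subtract a gain-scaled copy of the point spread function centred there, record the subtracted flux in a model image, and repeat. This is an illustrative sketch, not the implementation used in practice; the gain, iteration cap and threshold defaults are arbitrary.

```python
import numpy as np

def hogbom_clean(dirty, psf, gain=0.1, n_iter=200, threshold=1e-3):
    """Minimal Hogbom CLEAN: repeatedly find the residual peak and
    subtract a gain-scaled copy of the PSF centred on it."""
    half = psf.shape[0] // 2
    model = np.zeros_like(dirty)
    # Pad the residual so PSF subtraction near the edges stays in bounds.
    pad = np.pad(dirty, half)
    for _ in range(n_iter):
        inner = pad[half:-half, half:-half]   # view of the residual
        y, x = np.unravel_index(np.argmax(np.abs(inner)), inner.shape)
        peak = inner[y, x]
        if abs(peak) < threshold:
            break
        model[y, x] += gain * peak
        pad[y:y + psf.shape[0], x:x + psf.shape[1]] -= gain * peak * psf
    return model, pad[half:-half, half:-half]
```

CLEAN works well precisely because it implicitly assumes the sky is a collection of point sources, which is also why it struggles on smooth, extended emission like Cygnus A’s lobes.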
One very promising development is a new family of deconvolution algorithms based on compressive sensing (CS) theory. Quite a few CS algorithms have been published and discussed, but the first one “to market”, in the sense of being released as public and fully functional software able to process real data, is a joint African/French effort.
The original algorithm, called MORESANE, was developed by Arwa Dabbech from Tunisia, working at Observatoire de la Côte d’Azur (OCA) in France, and implemented into a working tool called PyMORESANE, including GPU acceleration, by Jonathan Kenyon at RATT.
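The sparsity idea underlying CS deconvolution can be illustrated with a generic iterative soft-thresholding (ISTA) solver: alternate a gradient step that fits the data with a shrinkage step that drives most pixels to zero. This is a stand-in sketch only; MORESANE itself works in a wavelet dictionary with a more sophisticated analysis formulation, and the step size and regularisation value below are invented for the example.

```python
import numpy as np

def ista_deconvolve(dirty, psf_fft, lam=0.01, step=0.5, n_iter=300):
    """Sparse deconvolution by iterative soft-thresholding (ISTA).
    Assumes circular convolution, with the PSF given by its 2-D FFT;
    requires step <= 1 / max|psf_fft|^2 for convergence."""
    x = np.zeros_like(dirty)
    for _ in range(n_iter):
        # Gradient of 0.5 * ||conv(psf, x) - dirty||^2, via FFTs.
        resid = np.fft.ifft2(np.fft.fft2(x) * psf_fft).real - dirty
        grad = np.fft.ifft2(np.fft.fft2(resid) * np.conj(psf_fft)).real
        x = x - step * grad
        # Soft-threshold: this is where the sparsity prior enters.
        x = np.sign(x) * np.maximum(np.abs(x) - lam * step, 0.0)
    return x
```

Replacing pixel-wise sparsity with sparsity in a multiscale dictionary is what lets algorithms of this family handle extended emission that defeats CLEAN.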
This implementation was then readily incorporated into a software package called WSCLEAN by Andre Offringa (ASTRON), replacing the standard CLEAN loop and resulting in one of the fastest and most capable imagers to date.
Figure 4 shows a comparison of models for Cygnus A recovered by CLEAN and PyMORESANE from a small fraction of new JVLA observations of this source. With more JVLA data of this source coming, more exciting images should be in the offing.
Such collaborations show that 3GC is more than a toolbox – it is a community, and one in which South Africa is quickly becoming the hub.
With 3GC becoming mainstream, there are more exciting developments to look forward to. In particular, Bayesian techniques will enable robust statistical inferences about the radio sky, as opposed to the historical “this image looks plausible so it must be true” approach. And a new collaboration called Bayesian Inference for Radio Observations (BIRO) between UCT, RATT, AIMS and UCL (UK) is investigating the use of Bayesian techniques. More about these in the future – watch this space.