Details: Gridding, Weight Functions, the W-term
Fundamentals of Radio Interferometry (Sections 5.3, 5.4, 5.5)
Griffin Foster, SKA SA / Rhodes University
NASSP 2016
Array Visibility Sampling

Visibilities are regularly sampled in time, but not in uvw-space (irregular sampling). The Fourier transform of irregularly sampled data requires the generic discrete Fourier transform (DFT). For an N x N image from M (M ~ N²) visibility samples this requires N² x M computational operations.
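As a rough illustration of that cost, the sketch below (assuming NumPy, with randomly placed uv samples and unit visibilities standing in for a real observation) images by direct DFT: every one of the M samples contributes to every one of the N² pixels.

```python
import numpy as np

# Illustrative stand-ins for a real observation: M visibilities at irregular
# (u, v) coordinates (in wavelengths), all set to 1 Jy (a source at the phase centre).
M, N = 1000, 64                      # number of samples, image side length
rng = np.random.default_rng(42)
u = rng.uniform(-500.0, 500.0, M)
v = rng.uniform(-500.0, 500.0, M)
vis = np.ones(M, dtype=complex)

# Direction cosines (l, m) for an N x N image roughly a degree across.
l = np.linspace(-0.01, 0.01, N)
m = np.linspace(-0.01, 0.01, N)
ll, mm = np.meshgrid(l, m)

# Direct (slow) DFT: for each of the N^2 pixels, sum over all M samples -> O(N^2 * M).
dirty = np.zeros((N, N))
for i in range(N):
    for j in range(N):
        phase = 2.0 * np.pi * (u * ll[i, j] + v * mm[i, j])
        dirty[i, j] = np.real(np.sum(vis * np.exp(1j * phase))) / M

print(dirty.max())   # peaks near 1 close to the phase centre
```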
Array Visibility Sampling

The Fourier transform of regularly sampled data can be done with a fast Fourier transform (FFT). To transform an irregularly sampled signal to a regularly sampled one, a gridding operation must be performed. For an N x N image from N x N gridded visibility samples this requires ~2N² log N computational operations.
VLA Antenna Layout

VLA UV Coverage
Mapping Irregular Sampling to a Regular Grid

The sampled visibilities are convolved with a gridding kernel and resampled onto a regular grid:

V_G(u,v) = III(u,v) · [ C(u,v) ∗ ( S(u,v) · V(u,v) ) ]

where V is the continuous visibility function, S is the array sampling function, C is the gridding convolution kernel, and III, the shah (comb) function, is the gridding operator that resamples onto the regular grid.
De-gridding

To go back from the regular grid to the original sample positions, the gridded visibilities are convolved with a de-gridding kernel and evaluated at the array sampling function:

V_degrid(u,v) = S(u,v) · [ C(u,v) ∗ V_G(u,v) ]

where V_G are the gridded visibilities, C is the de-gridding convolution kernel, and S is the array sampling function.
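A minimal sketch of both operations, assuming NumPy; the function names (grid_visibilities, degrid_visibility) and the Gaussian kernel are illustrative choices, not the kernels used in the lecture figures (a real imager would typically use a prolate spheroidal anti-aliasing kernel):

```python
import numpy as np

def grid_visibilities(u, v, vis, n_grid, du, half_width=3):
    """Convolutional gridding sketch: spread each visibility onto a regular
    n_grid x n_grid uv grid with a (here Gaussian) gridding kernel."""
    grid = np.zeros((n_grid, n_grid), dtype=complex)
    for uk, vk, visk in zip(u, v, vis):
        gu = uk / du + n_grid / 2            # fractional grid coordinates
        gv = vk / du + n_grid / 2
        iu0, iv0 = int(round(gu)), int(round(gv))
        for iu in range(iu0 - half_width, iu0 + half_width + 1):
            for iv in range(iv0 - half_width, iv0 + half_width + 1):
                if 0 <= iu < n_grid and 0 <= iv < n_grid:
                    c = np.exp(-0.5 * ((iu - gu) ** 2 + (iv - gv) ** 2))
                    grid[iv, iu] += c * visk
    return grid

def degrid_visibility(grid, uk, vk, du, half_width=3):
    """De-gridding sketch: interpolate a model visibility at an arbitrary
    (u, v) point from the regular grid using the same kernel."""
    n_grid = grid.shape[0]
    gu = uk / du + n_grid / 2
    gv = vk / du + n_grid / 2
    iu0, iv0 = int(round(gu)), int(round(gv))
    val, norm = 0.0 + 0.0j, 0.0
    for iu in range(iu0 - half_width, iu0 + half_width + 1):
        for iv in range(iv0 - half_width, iv0 + half_width + 1):
            if 0 <= iu < n_grid and 0 <= iv < n_grid:
                c = np.exp(-0.5 * ((iu - gu) ** 2 + (iv - gv) ** 2))
                val += c * grid[iv, iu]
                norm += c
    return val / norm if norm > 0 else 0.0j
```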
Image Size and Resolution

Given the resolution Δθ and the desired field of view, the number of pixels in the image (the image size) is

N_l = FoV_l / Δθ_l,  N_m = FoV_m / Δθ_m.

For a given image resolution and image size, the uv-domain resolution (grid cell size) is

Δu = 1 / (N_l Δθ_l),  Δv = 1 / (N_m Δθ_m),

and the number of pixels is unchanged: N_u = N_l, N_v = N_m.
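A small sketch of these relations in code (assuming NumPy; the function name and example numbers are illustrative):

```python
import numpy as np

def image_and_grid_params(fov_deg, cell_arcsec):
    """From a field of view and an image cell size (resolution), return the
    number of pixels per image side and the uv-grid cell size in wavelengths."""
    fov_rad = np.deg2rad(fov_deg)
    dtheta_rad = np.deg2rad(cell_arcsec / 3600.0)
    n_pix = int(np.ceil(fov_rad / dtheta_rad))   # N_l = FoV / dtheta
    delta_u = 1.0 / (n_pix * dtheta_rad)         # du = 1 / (N_l dtheta) = 1 / FoV
    return n_pix, delta_u

# e.g. a 1 degree field imaged with 5 arcsecond cells:
n_pix, delta_u = image_and_grid_params(1.0, 5.0)
print(n_pix, delta_u)   # 720 pixels per side, du ~ 57 wavelengths
```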
Anti-Aliasing Filter Effects

Window Functions
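The figures for these slides are not reproduced here; as an illustrative stand-in (not the specific kernels shown in the lecture), the sketch below compares the image-domain response of a boxcar and a Gaussian 1-D gridding kernel, i.e. how strongly each window suppresses aliased emission from outside the imaged field:

```python
import numpy as np

n = 1024
x = np.arange(n) - n // 2

boxcar = (np.abs(x) <= 3).astype(float)     # nearest-cells style kernel
gauss = np.exp(-0.5 * (x / 1.5) ** 2)       # Gaussian kernel of similar width

for name, kern in [("boxcar", boxcar), ("gaussian", gauss)]:
    # image-domain taper = Fourier transform of the uv-domain kernel
    resp = np.abs(np.fft.fftshift(np.fft.fft(np.fft.ifftshift(kern))))
    resp /= resp.max()
    # response in the outer quarter of the band ~ level at which aliases fold in
    print(f"{name:8s} far-field response: {resp[: n // 4].max():.3e}")
```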
The Dirty Image

For simplicity, let us assume the gridding operation is perfect; the regularly sampled visibilities V_S are then the product of the observed visibility function V_obs and the array sampling function S:

V_S(u,v) = S(u,v) · V_obs(u,v).

Using the approximate van Cittert-Zernike relation, the dirty image I_D is the (inverse) Fourier transform of the sampled visibilities:

I_D(l,m) = F⁻¹{ V_S } = F⁻¹{ S · V_obs }.

By the convolution theorem,

I_D = F⁻¹{ S } ∗ F⁻¹{ V_obs } = PSF ∗ I_true.

The Fourier transform of the sampling function is the point spread function (PSF) and the Fourier transform of the visibility function is the true sky image.
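A minimal numerical sketch of this relation, assuming NumPy and an idealized binary sampling function already on a regular uv grid (the source positions and sampling fraction are arbitrary):

```python
import numpy as np

N = 256
rng = np.random.default_rng(1)

# Toy 'true sky': two point sources.
sky = np.zeros((N, N))
sky[100, 120] = 1.0
sky[160, 90] = 0.5

# Toy sampling function: 10% of uv cells sampled (1 = sampled, 0 = not).
S = (rng.random((N, N)) < 0.1).astype(float)

V_obs = np.fft.fft2(sky)                   # 'observed' visibilities on the grid
V_S = S * V_obs                            # V_S = S . V_obs

psf = np.real(np.fft.ifft2(S))             # PSF = inverse FT of the sampling function
dirty = np.real(np.fft.ifft2(V_S))         # I_D = inverse FT of the sampled visibilities

# The sources reappear attenuated by the sampling fraction and surrounded by
# PSF sidelobes: I_D = PSF convolved with the true sky.
print(dirty[100, 120], dirty[160, 90], psf[0, 0])
```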
KAT-7 Sampling Function

What happens when a position in the visibility space is sampled multiple times during an observation?
Weighting Function: Generalization of the Sampling Function

W(u,v) = R(u,v) · T(u,v) · D(u,v) · S(u,v)

R (reliability): reliability of a given sample
T (taper): matched filter function
D (density): normalization of redundant sampling
S (array sampling): uv-coverage sampling
1-D Weight Functions
Density Functions

Natural weighting: maximizes sensitivity at the cost of resolution.
Uniform weighting: maximizes resolution at the cost of sensitivity.
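A sketch of the density term D for these two schemes (assuming NumPy; the gridding geometry and numbers are illustrative): natural weighting leaves every sample with equal density weight, while uniform weighting divides by the number of samples falling in the same uv cell so that densely sampled short baselines do not dominate:

```python
import numpy as np

def density_weights(u, v, du, n_grid, mode="natural"):
    """Density weighting sketch: return the per-sample density weight D."""
    iu = np.clip(np.round(u / du).astype(int) + n_grid // 2, 0, n_grid - 1)
    iv = np.clip(np.round(v / du).astype(int) + n_grid // 2, 0, n_grid - 1)
    counts = np.zeros((n_grid, n_grid))
    np.add.at(counts, (iv, iu), 1.0)       # samples per uv cell
    if mode == "natural":
        return np.ones(len(u))             # every sample counts equally
    if mode == "uniform":
        return 1.0 / counts[iv, iu]        # down-weight densely sampled cells
    raise ValueError(mode)

# Example: uv samples clustered towards the origin (many short baselines).
rng = np.random.default_rng(0)
u = rng.normal(0.0, 50.0, 500)
v = rng.normal(0.0, 50.0, 500)
w_nat = density_weights(u, v, du=10.0, n_grid=64, mode="natural")
w_uni = density_weights(u, v, du=10.0, n_grid=64, mode="uniform")
print(w_nat.sum(), w_uni.sum())   # uniform weights sum to the number of occupied cells
```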
Weight Example: True Sky

Weight Example: Natural and Uniform Weighting
Robust (Briggs') Weighting

Taper Functions
Matched Filtering

Given a known signal in noise, a matched filter is used to maximize the signal-to-noise ratio of the detection. The matched filter is the time-reverse of the input signal (in 1-D); applying it amounts to a cross-correlation of the noisy data with the signal template.

(Figure: noisy signal, matched filter, and the resulting SNR-maximized signal.)
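A minimal 1-D sketch, assuming NumPy; the Gaussian pulse template, noise level, and pulse location are arbitrary illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 512

# Known pulse shape (the template) and a noisy time series containing it.
template = np.exp(-0.5 * ((np.arange(64) - 32) / 5.0) ** 2)
signal = np.zeros(n)
signal[200:264] += template                 # pulse centred near sample 232
noisy = signal + rng.normal(0.0, 0.5, n)

# Matched filtering = cross-correlation with the template
# (equivalently, convolution with its time reverse).
filtered = np.correlate(noisy, template, mode="same")

print(np.argmax(filtered))   # peaks near the injected pulse centre (~232)
```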
Matched Filters

(Figures: the true signal; the true signal in a noisy system; cross-correlation with the matched filter recovering the filtered signal; cross-correlation with an un-matched filter, which does not recover the signal; and a comparison of several filters applied to the same noisy signal.)
The more complete van Cittert-Zernike Theorem

Our 2-D Fourier relation between the visibility function and the sky is an approximate form of the van Cittert-Zernike theorem:

V(u,v) = ∫∫ I(l,m) e^(-2πi(ul + vm)) dl dm.

A more complete version of van Cittert-Zernike is

V(u,v,w) = ∫∫ [ I(l,m) / √(1 - l² - m²) ] e^(-2πi(ul + vm + w(√(1 - l² - m²) - 1))) dl dm.

Unfortunately, this is not a 2-D Fourier relation.
Approximations to reduce to a 2-D Fourier transform

Small angle / narrow field of view approximation: from positional astronomy, the sky intensity is distributed on the celestial (unit) sphere, such that for any position (l, m, n):

l² + m² + n² = 1,  n = √(1 - l² - m²).

If we are only interested in a small area on the sky then the extent of (l, m) is small:

l² + m² ≪ 1  ⇒  n ≈ 1  and  w(√(1 - l² - m²) - 1) ≈ 0.

Then the van Cittert-Zernike theorem can be approximated as

V(u,v) ≈ ∫∫ I(l,m) e^(-2πi(ul + vm)) dl dm.
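A quick numeric check of how good this approximation is (assuming NumPy; the field-of-view values are illustrative):

```python
import numpy as np

# n = sqrt(1 - l^2 - m^2) at the edge of the field: for a ~1 degree field the
# deviation from 1 (and hence the neglected w phase, per wavelength of w) is tiny.
for fov_deg in [0.5, 1.0, 5.0]:
    l = m = np.sin(np.deg2rad(fov_deg / 2.0))    # direction cosines at the field edge
    n = np.sqrt(1.0 - l**2 - m**2)
    print(f"FoV {fov_deg:>4} deg:  n - 1 = {n - 1:+.2e}")
```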
Approximations to reduce to a 2-D Fourier transform

Delay term / w-term approximation: if our visibility sampling lies approximately on a plane, or again we are only interested in a narrow field of view, then the so-called w-term acts as a constant delay term that can be corrected for, effectively setting w = 0.
W-term Approximation
Simple Array Layouts

An array is coplanar if there exists (at least approximately) a 2-D plane in the 3-D visibility space on which all visibility measurements lie.
Coplanar W-term

For a coplanar array there exists a w-plane in which the (l, m) coordinates can be linearly transformed to (l', m').
Projection Effects
Non-coplanar Sampling

Unfortunately, few modern arrays are coplanar, so approximations become the name of the game. In the small angle approximation we can Taylor expand the w-term:

√(1 - l² - m²) - 1 ≈ -(l² + m²)/2,

so the w-dependent phase factor becomes approximately e^(πiw(l² + m²)): a change in phase corresponds to a change in apparent source position.
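A quick numeric illustration of when this phase matters (assuming NumPy; the field size and w values are illustrative): once the uncorrected w-term phase approaches a radian, sources away from the phase centre are noticeably shifted and smeared.

```python
import numpy as np

# Uncorrected w-term phase 2*pi*w*(sqrt(1 - l^2 - m^2) - 1) at the field edge.
l = m = np.sin(np.deg2rad(1.0))              # edge of a ~2 degree field
n_minus_1 = np.sqrt(1.0 - l**2 - m**2) - 1.0
for w in [10.0, 100.0, 1000.0]:              # w in wavelengths
    phase = 2.0 * np.pi * w * n_minus_1
    print(f"w = {w:>6.0f} wavelengths: w-term phase = {phase:+.3f} rad")
```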
W-Term Effects: Uncorrected

W-Term Effects: Corrected
Correcting Non-coplanar Wide-field Effects

Full 3-D Fourier transform: the 2-D Fourier transform is used for computational efficiency, but a 3-D transform is a 'more correct' transform.
Snapshot imaging: when imaging over short time periods a non-coplanar array can be well approximated as coplanar.
Facet imaging: form many small, narrow-field images and stitch them together to form a single wide-field image.
W-projection: apply convolutional filters to the visibilities to project them down to a single w-plane.
W-stacking: form multiple images at different w-planes and stack the resulting images in the image domain (a sketch follows below).

See Section 5.5.
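A minimal sketch of the w-stacking idea, assuming NumPy; the function name w_stack_dirty_image, the binning scheme, and the per-plane direct DFT (used for clarity rather than speed) are illustrative choices:

```python
import numpy as np

def w_stack_dirty_image(u, v, w, vis, l, m, n_wplanes=8):
    """W-stacking sketch: bin visibilities by w, image each bin ignoring the
    w-term, apply the w-dependent image-plane phase correction, and sum."""
    ll, mm = np.meshgrid(l, m)
    nn = np.sqrt(1.0 - ll**2 - mm**2)
    edges = np.linspace(w.min(), w.max() + 1e-9, n_wplanes + 1)
    image = np.zeros(ll.shape)
    for k in range(n_wplanes):
        sel = (w >= edges[k]) & (w < edges[k + 1])
        if not np.any(sel):
            continue
        w_mid = 0.5 * (edges[k] + edges[k + 1])
        # Image this w-plane as if w were zero (here a direct DFT per plane)...
        plane = np.zeros(ll.shape, dtype=complex)
        for uk, vk, visk in zip(u[sel], v[sel], vis[sel]):
            plane += visk * np.exp(2j * np.pi * (uk * ll + vk * mm))
        # ...then apply the image-domain correction for the plane's mean w.
        plane *= np.exp(2j * np.pi * w_mid * (nn - 1.0))
        image += np.real(plane)
    return image / len(u)
```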