Interacademiaal Lecture 6

1. Spatial tapering
2. Non-coplanar arrays (the w term)
3. Bandwidth & time smearing
4. Wide-band imaging
Exam (IAC06)

There will be a centrally organised written exam. It will take place on June 7, from 13:30 to 16:30, in three locations:
- Leiden, supervisor Van Langevelde
- Groningen, supervisor Oosterloo
- Utrecht, supervisor Pols

The exam will be based on several chapters from the Synthesis Imaging book and on the material from the courses, guest lectures and practica. The book is allowed at the exam, as is a scientific calculator; the course material and notes are not allowed, nor is internet access. From the book we have selected chapters 1-12, 17, 20, 22, 28 and 29. We have dropped the idea of asking you to prepare by reading scientific papers.
Spatial tapering
Previous practicum: spectral-line data of NGC 1023
Previous practicum: integrated H I image (moment 0), unmasked vs masked
Weighting schemes

Because the image is made after the observation, we can still choose how to make it: assign a different weight to each visibility. In this way we can "tune" the image to what we want to see: optimise noise and/or resolution.
Weighting and WSRT

Because the WSRT has many redundant baselines, the choice of weighting is very important. [Figure: beams for natural weighting, robust = 0 and uniform weighting]
Weighting and WSRT (cont.)

- Natural weighting: noise = 0.5
- Robust = 0: noise = 0.6
- Uniform weighting: noise = 0.7

The difference in noise is 40% (a factor 2 in observing time!).
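The noise trade-off between the schemes can be sketched with a toy example (the uv cell occupation counts below are invented; for unit-variance samples the relative image noise is sqrt(Σw²)/Σw):

```python
import numpy as np

# Toy uv sampling: samples per occupied uv cell. Redundant baselines pile up
# many samples in the same cell, as for the WSRT's regular spacings.
counts = np.array([8, 8, 8, 4, 2, 1, 1, 1])

def image_noise(weights):
    """Relative image noise for per-sample weights (unit-variance data)."""
    return np.sqrt(np.sum(weights**2)) / np.sum(weights)

# Natural weighting: every sample gets weight 1 -> lowest noise.
w_nat = np.concatenate([np.ones(c) for c in counts])

# Uniform weighting: each sample gets weight 1/(samples in its cell), so every
# occupied cell contributes equally -> higher resolution, but more noise.
w_uni = np.concatenate([np.full(c, 1.0 / c) for c in counts])

print(image_noise(w_nat) < image_noise(w_uni))  # natural gives the lower noise
```

The more uneven the cell occupation, the larger the noise penalty of uniform weighting, which is why the choice matters so much for the WSRT.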
Alternative: spatial tapering

To detect an extended source, smooth the image to a lower spatial resolution. Remember: the units in the image are Jy/beam. If one doubles the area of the beam:
- the signal per beam of an extended source doubles
- the noise increases by only 10-20%
- so the signal-to-noise ratio increases

The optimum beam for detecting a source has the same extent as the source.
Related to Wiener filtering. [Figure: source signal and noise vs u; beyond the baselines where the source drops below the noise there is no reason to add these data!]
Interacademiaal ” resolution30” resolution
Tapering

Smooth the image by tapering down the long baselines: multiply the uv plane with a Gaussian. This "throws away" baselines, so the noise increases! An improvement: multi-scale methods (e.g. wavelets).
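A minimal sketch of such a Gaussian uv taper (the random uv coverage and the taper FWHM of 1000 wavelengths are invented for illustration):

```python
import numpy as np

# Spatial tapering: multiply each visibility weight by a Gaussian in baseline
# length, down-weighting the long baselines. The taper FWHM in the uv plane
# sets the extra smoothing of the image.
rng = np.random.default_rng(1)
u = rng.uniform(-2000, 2000, 500)   # u, v in wavelengths (toy coverage)
v = rng.uniform(-2000, 2000, 500)

fwhm_uv = 1000.0                    # taper FWHM in wavelengths (assumed)
sigma_uv = fwhm_uv / (2 * np.sqrt(2 * np.log(2)))
taper = np.exp(-(u**2 + v**2) / (2 * sigma_uv**2))

# The effective number of samples drops, so the point-source noise goes up
# while the resolution goes down.
n_eff = taper.sum()**2 / (taper**2).sum()
noise_penalty = np.sqrt(len(u) / n_eff)
print(f"effective samples: {n_eff:.0f} of {len(u)}, noise up x{noise_penalty:.2f}")
```

This quantifies the slide's warning: the taper discards information on long baselines, which is why multi-scale approaches are preferable when both scales matter.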
Imaging with non-coplanar arrays
The problem

You will remember the complete imaging equation:

V(u,v,w) = ∫∫ (I(l,m)/n) e^{-2πi[ul + vm + w(n-1)]} dl dm,   with n = √(1 - l² - m²)

Under certain conditions its inversion is simple:
1. coplanar arrays (E-W arrays, or VLA snapshots)
2. a small field of view (i.e. n-1 can be ignored)
Example
A simple geometric effect: the apparent shape of the array varies across the field of view. This gives an extra phase term e^{-2πi w(n-1)}, which should be small over the field of view.
We must represent the celestial sphere via a projection onto a plane. The distance AA' between sphere and plane is 1 - n ≈ (l² + m²)/2, and the "extra" phase is AA' multiplied by 2πw.
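A quick estimate of how large this extra phase gets (the 20 km baseline, 20 cm wavelength and half-degree field below are illustrative assumptions, with the whole baseline taken in w as a worst case):

```python
import numpy as np

# Size of the w-term phase for an off-axis source: with n = sqrt(1-l^2-m^2),
# the extra phase is 2*pi*w*(n-1), of magnitude ~pi*w*theta^2 for a small
# offset theta from the phase centre.
def w_phase_deg(baseline_m, wavelength_m, theta_rad):
    w = baseline_m / wavelength_m          # worst case: whole baseline in w
    return np.degrees(np.pi * w * theta_rad**2)

theta = np.radians(0.25)                   # 0.5 deg field -> 0.25 deg radius
print(f"{w_phase_deg(20e3, 0.20, theta):.0f} deg")  # far from negligible
```

Hundreds of degrees of phase across the field: ignoring the w term clearly destroys wide-field images at long wavelengths.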
A simple picture, planar array: a coplanar array is stretched or squeezed when seen from different locations in the field of view.
A simple picture, non-coplanar array: a non-coplanar array is "distorted" in shape when seen from different locations in the field of view.
Changes in the beam: the different apparent array geometry in each direction leads to a position-variant PSF.
What about snapshots? For a snapshot the VLA is coplanar, but as seen from the sky this plane rotates through the day. The apparent source position in a 2-D image thus rotates, following a conic section.
Snapshots: warping. Instantaneous planar data can be regridded to the correct celestial coordinates. This involves coordinate distortions and complications for deconvolution.
The 3-D image volume

After some juggling one finds that:

V(u,v,w) = ∫∫∫ F(l,m,n) e^{-2πi[ul + vm + w(n-1)]} dl dm dn

where F is related to the desired intensity I(l,m) by:

F(l,m,n) = (I(l,m)/n) δ(n - √(1 - l² - m²))

The brightness is defined only on the celestial sphere, while the visibility is measured in (u,v,w) space; the extra w phase for the field centre has been removed.
The image volume is not a physical space; it is a mathematical construct.
3-D deconvolution of the image volume

Taking into account the sampling of the observations, all 2-D deconvolution theory can be extended straightforwardly to 3-D: solve the 3-D convolution equation using any deconvolution algorithm, but constrain the solution to lie on the celestial sphere.
Schematically: [Figure: true image, dirty image, after deconvolution, after projection]
How to deal with it?

Compute the entire 3-D image volume: the most straightforward approach, but hugely wasteful in computing resources!
- The minimum number of 'vertical planes' needed is BΘ²/λ.
- The number of volume pixels to be calculated is therefore 4B³Θ⁴/λ³.
- But the number of pixels actually needed is only 4B²Θ²/λ².
- So the fraction of the effort which is wasted is 1 - λ/(BΘ²).
This is about 90% at 20 cm in the VLA A-configuration, for a full primary-beam image.
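The "about 90%" figure can be checked directly (the numbers below are illustrative assumptions: B = 36 km maximum baseline, D = 25 m dishes, λ = 20 cm, field of view Θ ~ λ/D):

```python
# Wasted-effort fraction for full-volume 3-D imaging: 1 - lambda/(B*Theta^2),
# evaluated for VLA A-configuration-like numbers (assumed, not from the slide).
lam = 0.20            # wavelength in m
B = 36e3              # maximum baseline in m
D = 25.0              # dish diameter in m
Theta = lam / D       # primary-beam field of view in radians (rough)

wasted = 1 - lam / (B * Theta**2)
print(f"wasted fraction: {wasted:.0%}")   # about 90%, as quoted
```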
The polyhedron approach

The polyhedron approach approximates the unit sphere with small flat planes (facets), each of which stays close to the sphere's surface. For each facet subimage, the entire dataset must be phase-shifted and the (u,v,w) recomputed for the new plane.
Polyhedron approach (cont.)

How many facets are needed? If we want to minimize distortions, the plane mustn't depart from the unit sphere by more than the synthesized beam, λ/B. A simple analysis (see the book) shows the number of facets will be:

N_f ~ 2λB/D²

or twice the number of planes needed for 3-D imaging. But each facet image is much smaller, so the total number of cells computed is much smaller. The extra effort in phase computation and (u,v,w) rotation is more than made up for by the reduction in the number of cells computed. This approach was the standard until not so long ago.
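Plugging in the same illustrative VLA-like numbers used above (λ = 20 cm, B = 36 km, D = 25 m; assumptions, not values from the slide):

```python
# Facet count N_f ~ 2*lambda*B/D^2 for VLA A-configuration-like numbers.
lam, B, D = 0.20, 36e3, 25.0
n_facets = 2 * lam * B / D**2
print(round(n_facets))   # a few tens of facets
```

A few tens of small facet images replace the enormous 3-D volume, which is the whole point of the approach.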
Polyhedron approach (cont.)

The procedure is then:
1. Determine the number of facets and the size of each.
2. Generate each facet image, rotating the (u,v,w) and shifting the phase centre for each.
3. Jointly deconvolve the set; the Clark/Cotton/Schwab major/minor-cycle system is well suited to this.
4. Project the finished images onto a 2-D surface.

An added benefit of this approach: as each facet is independently generated, one can do a separate antenna-based calibration for each facet. This is useful if the calibration is a function of direction as well as time, as is needed for metre-wavelength imaging.
W-projection

Is it possible to project the data onto a single (u,v) plane, accounting for all the necessary phase shifts? The answer is YES! Tim Cornwell has developed a new algorithm, termed 'w-projection', to do this. Available only in AIPS++, this approach permits a single 2-D image/deconvolution and eliminates the annoying edge effects which accompany facet re-projection.
There is a convolutional relationship between planes in (u,v,w) space. Assuming w constant:

V(u,v,w) = V(u,v,w=0) ∗ G̃(u,v;w),   where G̃ is the Fourier transform of G(l,m;w) = e^{-2πi w(n-1)}

(Frater & Docherty 1980)
W-projection

Each visibility at location (u,v,w) is mapped to the w=0 plane, with a phase shift proportional to the distance. Each visibility is mapped to ALL the points lying within a cone whose full angle equals the field of view of the desired map, ~2λ/D for a full-field image; at height w₀ the cone's footprint in the uv plane is ~2λw₀/D across.
The w-projection algorithm

- Calculate the gridding kernel for a range of values of w: Fourier transform the phase screens multiplied by a spheroidal function (needed to control aliasing).
- Image to Fourier: taper the image by the spheroidal function; Fourier transform; estimate sampled visibilities by convolving the gridded values with the w-dependent kernel.
- Fourier to image: convolve the sampled visibilities onto a grid using the w-dependent kernel; inverse Fourier transform; correct for the spheroidal function.
- Deconvolution: deconvolve in minor cycles using the PSF for the image centre; reconcile with the visibility data in major cycles.
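The heart of the algorithm, the w-dependent gridding kernel, can be sketched as the FFT of the phase screen (grid size, field of view and w values below are invented; the spheroidal anti-aliasing taper is omitted for brevity):

```python
import numpy as np

def w_kernel(w, n=256, fov=0.02):
    """FFT of the w phase screen exp(-2*pi*i*w*(n-1)) over the field of view.

    Returns the normalised kernel amplitude; w and fov in wavelengths/radians.
    """
    l = np.linspace(-fov, fov, n)
    ll, mm = np.meshgrid(l, l)
    nc = np.sqrt(np.maximum(0.0, 1 - ll**2 - mm**2))
    screen = np.exp(-2j * np.pi * w * (nc - 1))
    k = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(screen)))
    return np.abs(k) / np.abs(k).max()

def support(k, thresh=0.01):
    """Number of uv columns where the kernel exceeds the threshold."""
    return int(np.sum(k.max(axis=0) > thresh))

# At w = 0 the kernel is a delta function; for large w each visibility must be
# spread over many uv cells, which is why w-projection kernels get expensive.
print(support(w_kernel(0.0)), "<", support(w_kernel(2e4)))
```

The growth of the kernel support with w reflects the cone picture above: larger w means a wider footprint on the w=0 plane.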
Example: [Figure: direct Fourier transform vs (u,v,w)-space facets vs w-projection]
Bandwidth & time smearing
Effects of finite bandwidth

The imaging equation is monochromatic, but a real instrument has a finite bandwidth. What happens if we ignore this?
The unit of length is the wavelength, so uv coordinates scale with frequency. Finite bandwidth therefore causes radial smearing in the uv plane.
Moreover, the phase is affected because the delay is wrong for off-axis directions.
The effect on an off-axis point source (with sampling function S and bandpass G) is an averaging of V along the radius in the uv plane, plus a phase turn across the band.
The delay corresponding to the offset position l - l₀ enters as a position-dependent distortion function.
Radial smearing
A simpler version: since the uv coordinates scale with frequency, if the frequencies range over a band Δν around ν, a point at radius l from the phase centre is smeared radially, so the source width becomes Δl ≈ l Δν/ν.
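A quick number for this rule of thumb (the 1.4 GHz band centre, 20 MHz bandwidth and 15 arcmin offset below are illustrative assumptions):

```python
# Radial bandwidth smearing: a source at radius l from the phase centre,
# averaged over a band dnu around nu, smears over dl ~ l * dnu/nu.
def radial_smear(l, dnu, nu):
    return l * dnu / nu

l_arcsec = 15 * 60.0                          # source 15 arcmin off-centre
print(f"{radial_smear(l_arcsec, 20e6, 1.4e9):.1f} arcsec")
# Splitting the band into 64 channels reduces the per-channel smearing 64-fold,
# which is why one observes in spectral mode.
```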
Time averaging
The uv tracks move through the uv plane, so time averaging gives azimuthal smearing. The phase of an off-axis point source changes with time, and averaging over it reduces the amplitude.
E.g. WSRT, 60 s integration at 21 cm:
- at 1 x FWHM of the primary beam: 3% loss
- at 2 x FWHM: 11% loss

Alternative view: in 1 minute the longest baseline (3 km) sweeps 13 m through the uv plane. This is almost OK for Nyquist sampling the uv plane (the dish diameter is 25 m). In 1 minute the fan beam sweeps 4.4 arcsec at the half-power point, just acceptable for a 12 arcsec beam; at twice the half-power radius this is 8.8 arcsec, which is too much. For imaging large fields the standard 60 s is too long.
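These back-of-envelope numbers follow from the sidereal rotation rate (the 18 arcmin half-power radius used below is an assumed, approximate value for the 21 cm primary beam):

```python
import numpy as np

# In t seconds the Earth rotates omega*t radians, so a baseline of length B
# sweeps B*omega*t metres through the uv plane, and a source at angular
# radius r from the phase centre is smeared azimuthally by r*omega*t.
omega = 2 * np.pi / 86164.1          # sidereal rotation rate, rad/s
t = 60.0                             # integration time, s
B = 3000.0                           # longest WSRT baseline, m

sweep_m = B * omega * t
print(f"baseline sweep: {sweep_m:.0f} m")        # ~13 m, vs 25 m dishes

r = np.radians(18.0 / 60.0)          # ~18 arcmin half-power radius (assumed)
smear_arcsec = np.degrees(r * omega * t) * 3600
print(f"azimuthal smearing: {smear_arcsec:.1f} arcsec")  # of order the ~4.4" above
```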
Multi-Frequency Synthesis
The wavelength is the unit of length for an interferometer, so the size of the array (in wavelengths) changes over the observing band: chromatic aberration.
With a single broad band this degrades the image towards the edges of the field: observe in spectral mode! Spectral mode allows every data sample to be put at its correct uv coordinate, not at the average coordinate of the entire band. A broad band gives radial smearing in the uv coverage; used properly, it can instead improve the uv coverage and reduce the sidelobe level.
uv coverage improvement: [Figure: single channel vs a 20-MHz band with 64 channels]
Wide band: we have to consider spectral effects. For the PSF every uv point gets the value 1, but the source will have a non-zero spectral index, so the PSF will not deconvolve the source correctly.
Spectral dirty beam (SDB)

The dirty beam is the response to a perfect point source. Linearise the spectral response:

I(ν) ≈ I₀ + I₁ (ν - ν₀)/ν₀

Now we have TWO dirty beams: B₀, the normal dirty beam, and B₁, the spectral dirty beam. This can be generalised to higher order, using Legendre polynomials.
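A toy 1-D construction of the two beams (the 200 MHz band, 16 channels and regular 144 m baseline set below are invented, loosely WSRT-like): B₀ sums the per-channel PSFs with weight 1, B₁ with weight (ν - ν₀)/ν₀.

```python
import numpy as np

# Toy 1-D spectral dirty beams for a linearised spectrum
# I(nu) ~ I0 + I1*(nu-nu0)/nu0, so the dirty image is B0*I0 + B1*I1.
nu0 = 1.4e9
nus = np.linspace(1.3e9, 1.5e9, 16)        # 16 channels across 200 MHz
b_m = np.linspace(144, 2736, 19)           # regular 144 m spacings (invented)

x = np.linspace(-0.01, 0.01, 501)          # sky offset in radians
B0 = np.zeros_like(x)
B1 = np.zeros_like(x)
for nu in nus:
    u = b_m * nu / 3e8                     # uv points scale with frequency
    psf = np.cos(2 * np.pi * np.outer(u, x)).sum(axis=0)
    B0 += psf
    B1 += psf * (nu - nu0) / nu0

norm = B0.max()                            # normalise both to the B0 peak
B0, B1 = B0 / norm, B1 / norm

# B0 peaks at the centre; B1 vanishes there and is much weaker overall,
# matching the behaviour described on the next slides.
print(np.argmax(B0) == len(x) // 2, abs(B1[len(x) // 2]) < 1e-9)
```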
B₀ is the normal dirty beam, i.e. how unit flux appears in the image; B₁ gives how "unit" spectral variation is reflected in the image.
[Figure: example of the spectral dirty beams B₀ and B₁.] The amplitude of B₁ is much smaller; B₀ has its peak at the centre, while B₁ has a minimum there.
Deconvolution

One option is to clean every channel separately, but clean is not linear, so this is not optimal. Alternatively, include the spectral index in the model; in maximum entropy the spectral index is given an expected value.
Deconvolution (cont.)

Normal clean: find the components j and amplitudes a₀ that minimise the residual. "Double clean" with B₀ and B₁: find j, a₀ and a₁ that minimise the residual (implemented as mfclean in Miriad).
Example: NGC 2841, 2 x 130 MHz frequency switching (23 cm and 17 cm). WSRT continuum setup: 2 x 8 bands of 20 MHz each, each band 64 channels, full polarisation.
[Figure: single channel vs a single band of 64 channels (20 MHz) vs 16 bands (2 x 130 MHz)]
MFS can give strong sidelobe suppression (used in LOFAR). [Figure: 1 channel vs 2 x 130 MHz]
[Figure: MFS only vs MFS + SDB]
Additional issues: the primary beam is frequency dependent, which introduces an apparent spectral index. [Figure: primary beams at 18 cm and 22 cm]
Additional issues: Faraday rotation, important at low frequencies!
Acknowledgements

For this lecture I made extensive use of material prepared by other people, in particular Tim Cornwell and Rick Perley, and of the Synthesis Imaging book.