[tel-00726959, v1] Caractériser le milieu interstellaire ... - HAL - INRIA

J. Pety and N. Rodríguez-Fernández: Revisiting the theory of interferometric wide-field synthesis

quantity we want to estimate, i.e., I, to many noisy^5 measurements, V(u_p, u_s), via a product by B (assumed to be perfectly defined), we can invoke a simple least-squares argument (see e.g. Bevington & Robinson 2003) to demonstrate that the optimum weighting function is

\[
W(u_p,\, u - u_p) = \frac{w(u_p,\, u - u_p)\, B(u_p - u)}{\int_{u_p} w(u_p,\, u - u_p)\, B^2(u_p - u)\, \mathrm{d}u_p},
\tag{60}
\]

with w(u_p, u_s) the weight computed from the inverse of the noise variance of V(u_p, u_s). Using Eq. (20), it is then easy to demonstrate that D(u) = 1 and hence I_dirty(α) = I(α). The dirty image is a direct estimate of the sky brightness; i.e., deconvolution is superfluous.

Complete sampling. The signal is Nyquist-sampled, but it has finite support in both the uv and sky planes, implying a finite synthesized resolution and a finite field of view. In contrast to the previous case, this one may have practical applications, e.g., observations done with ALMA in its compact configuration. Indeed, the large number of independent baselines coupled to the design of the ALMA compact configuration ensures a complete, almost Gaussian, sampling for each snapshot. In this case, the best choice may be to pick the weighting function so that all the dirty beams are identical to the same Gaussian function; the deconvolution would then also be superfluous.

Incomplete sampling. This is the more general case studied in Sect. 3.2. The signal not only has finite support but is also undersampled (at least in the uv plane). Deconvolution is mandatory.
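As an illustration, the normalization built into Eq. (60) can be checked numerically. This is a minimal 1-D sketch with a made-up Gaussian transfer function B and arbitrary inverse-variance weights w; none of these numerical values come from the paper:

```python
import numpy as np

# 1-D sketch of the least-squares optimal weighting of Eq. (60).
u_p = np.linspace(-4.0, 4.0, 2001)          # sampled pointing-frequency axis
du = u_p[1] - u_p[0]
u = 0.7                                     # spatial frequency to estimate

B = np.exp(-0.5 * (u_p - u) ** 2)           # toy B(u_p - u), Gaussian transfer fn
w = 1.0 / (0.1 + 0.05 * np.abs(u_p)) ** 2   # illustrative inverse-variance weights

# Eq. (60): W = w * B / integral(w * B^2 du_p)
W = w * B / np.sum(w * B ** 2 * du)

# With this choice D(u) = integral(W * B du_p) = 1, so I_dirty = I.
D = np.sum(W * B * du)
print(D)   # → 1.0 (to numerical precision)
```

The normalization holds for any positive w and B, since the numerator and denominator cancel by construction once D(u) is formed.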
The choice of the weighting function will thus depend on the imaging goals. If the user needs the best signal-to-noise ratio, some kind of natural weighting is needed. It is tempting to use Eq. (60) as a natural weighting scheme. However, the main condition for the derivation of this weighting function, i.e., the Ekers & Rots Eq. (15), is no longer valid, as the noisy measured quantity (SV) is now linked to the quantity we want to estimate (I) by a local average (see Eq. (31)). This is why it was more appropriate to aim for a Gaussian dirty beam shape in the complete sampling case.

If the signal-to-noise ratio is high enough, the user has two choices: either maximize the angular resolving power with some kind of robust weighting, or obtain the most homogeneous dirty beam shape over the whole field of view. This requirement cannot always be fully met. The Ekers & Rots scheme enables us to recover unmeasured spatial frequencies only in regions near measured ones, because B has finite support.

5.3. Deconvolution

Writing the image-plane measurement equation in a convolution-like way is very interesting because all the deconvolution methods developed over the past 30 years are optimized to treat deconvolution problems (see e.g. Högbom 1974; Clark 1980; Schwab 1984; Narayan & Nityananda 1986). For instance, it should be possible to deconvolve Eq. (23) with just slight modifications to the standard CLEAN algorithms. Indeed, Eq.
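The robust weighting mentioned above is not specified further in the text; the sketch below only conveys the general idea in the spirit of Briggs-style weighting, where densely sampled spatial frequencies are down-weighted relative to natural weighting. The density values, the scaling of `S2`, and the `robustness` knob are illustrative assumptions, not the paper's prescription:

```python
import numpy as np

# Schematic natural vs. robust weighting trade-off (all values made up).
rng = np.random.default_rng(0)
w_natural = 1.0 / rng.uniform(0.5, 2.0, 1000) ** 2   # inverse noise variance
density = rng.uniform(1.0, 50.0, 1000)               # local uv sample density

robustness = 0.0                                     # Briggs-style knob
S2 = (5.0 * 10 ** (-robustness)) ** 2 / (np.sum(density ** 2) / np.sum(w_natural))
w_robust = w_natural / (1.0 + density * S2)

# Natural weighting maximizes sensitivity; the robust variant down-weights
# densely sampled (typically short) spacings, sharpening the dirty beam at
# the cost of some signal-to-noise ratio.
```

Lowering `robustness` moves the result toward uniform weighting (higher resolution); raising it recovers natural weighting (better sensitivity).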
(23) can be interpreted as the convolution of the sky brightness by a set of dirty beams, so that the only change, once a CLEAN component is found, would be the need to find the right dirty beam in this set in order to remove the CLEAN component from the residual image.

^5 The noise is assumed to have a Gaussian probability distribution function.

Following Clark (1980) and Schwab (1984), most algorithms today deconvolve in alternating minor and major cycles. During a minor cycle, a solution of the deconvolution is sought with a simplified (hence approximate) dirty beam. During a major cycle, the current solution is subtracted either from the original dirty image using the exact dirty beam, or from the measured visibilities, implying a new gridding step. In both cases, the major cycles result in greater accuracy. The iteration of minor and major cycles enables one to find an accurate solution with better computing efficiency. In our case, the approximate dirty beams used in the minor cycle could be 1) dirty beams of a much smaller size than the image; or 2) a reduced set of dirty beams (i.e., guessing that the typical variation size scale of the dirty beams with the sky coordinate is much larger than the primary beamwidth); or 3) both simultaneously. The model would be subtracted from the original visibilities before re-imaging at each major cycle. The trade-off is between the memory space needed to store a full set of accurate dirty beams and the time needed to image the data at each major step. Some quantitative analysis is needed to know how far the dirty beams can be approximated in the minor cycle.

It is worth noting that the accuracy of the deconvolved image will be affected by edge effects. Indeed, the dirty brightness at the edges of the observed field of view is attenuated by the primary beam shape.
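The modified CLEAN minor cycle suggested above, in which each component is removed with the dirty beam matching its sky position, could be sketched as follows. The function name, the array layout, and the `beam_of_pixel` lookup are hypothetical, and edge handling is simplified to a cyclic shift:

```python
import numpy as np

def clean_with_beam_set(dirty, beams, beam_of_pixel,
                        gain=0.1, niter=200, threshold=1e-3):
    """Hogbom-style minor cycle with a position-dependent dirty beam.

    dirty: 2-D dirty image; beams: dict mapping a beam index to a centered
    dirty-beam array of the same shape; beam_of_pixel: 2-D array giving the
    beam index to use at each sky pixel (the "reduced set" of the text).
    """
    residual = dirty.copy()
    components = []
    for _ in range(niter):
        iy, ix = np.unravel_index(np.argmax(np.abs(residual)), residual.shape)
        peak = residual[iy, ix]
        if abs(peak) < threshold:
            break
        flux = gain * peak
        components.append((iy, ix, flux))
        # The only change w.r.t. standard CLEAN: pick the dirty beam
        # appropriate for this sky position from the set...
        beam = beams[beam_of_pixel[iy, ix]]
        # ...then subtract the shifted, scaled beam (cyclic shift for brevity).
        shifted = np.roll(np.roll(beam, iy - beam.shape[0] // 2, axis=0),
                          ix - beam.shape[1] // 2, axis=1)
        residual -= flux * shifted
    return components, residual
```

In a full implementation the major cycle would re-image the residual visibilities after subtracting the component model, exactly as in the Clark/Schwab schemes cited above.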
When deconvolving these edges, the deconvolved brightness will be less precise, because the primary beam has a low amplitude there. This only affects the edges because, inside the field of view, every sky position should be observed a fraction of the time with a primary beam amplitude between 0.5 and 1. This edge effect is nevertheless expected to be much less troublesome than the inhomogeneous noise level resulting from standard mosaicking imaging (see Sect. 7.1).

6. Short spacings

6.1. The missing flux problem

Radio interferometers are bandpass instruments; i.e., they filter out not only the spacings longer than the largest baseline length but also the spacings shorter than the shortest baseline length, which is typically comparable to the diameter of the interferometer antennas. In particular, radio interferometers do not measure the visibility at the center of the uv plane (the so-called "zero spacing"), which is the total flux of the source in the measured field of view.

The lack of short baselines or short spacings has strong effects as soon as the size of the source exceeds about 1/3 to 1/2 of the interferometer primary beam. Indeed, when the size of the source is small compared to the primary beam of the interferometer, the deconvolution algorithms use, in one way or another, the information of the flux at the lowest measured spatial frequencies to extrapolate the total flux of the source. The extreme case is a point source at the phase center, for which the amplitude of all the visibilities is constant and equal to the total flux of the source: the extrapolation is then exact. However, the larger the size of the source, the worse the extrapolation, which then underestimates the total source flux.
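A toy calculation makes the edge effect concrete: correcting a map by a primary beam of amplitude A divides the noise by A, so where A is small the noise is strongly amplified. The amplitudes used here are illustrative, not taken from the paper:

```python
import numpy as np

# Noise amplification when correcting for primary-beam attenuation.
rng = np.random.default_rng(1)
A_center, A_edge = 1.0, 0.2           # illustrative primary-beam amplitudes
noise = rng.normal(0.0, 1.0, 100_000) # unit image-plane noise realization

sigma_center = np.std(noise / A_center)
sigma_edge = np.std(noise / A_edge)
print(sigma_edge / sigma_center)      # → 5.0, i.e. 1 / A_edge
```

Inside the field of view, where the effective beam amplitude stays between 0.5 and 1, this amplification factor stays below 2, which is why only the edges are significantly affected.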
This is the well-known problem of missing flux that observers sometimes note when comparing a source flux measured by a millimeter interferometer with the flux observed with a single-dish antenna.
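The extrapolation argument can be made quantitative for a circular Gaussian source of total flux F and width sigma, whose visibility amplitude is V(u) = F exp(-2 pi^2 sigma^2 u^2). The shortest spacing `u_min` below (roughly a 15 m antenna diameter observed at 1 mm wavelength) is an assumed, illustrative number:

```python
import numpy as np

# Flux visible at the shortest measured spacing for sources of growing size.
F = 1.0          # total source flux (arbitrary units)
u_min = 15e3     # assumed shortest spacing in wavelengths (~15 m at 1 mm)

for sigma_arcsec in (0.1, 5.0, 20.0):
    sigma = np.radians(sigma_arcsec / 3600.0)            # width in radians
    V_min = F * np.exp(-2.0 * np.pi ** 2 * sigma ** 2 * u_min ** 2)
    print(sigma_arcsec, V_min)
```

A compact source (0.1 arcsec) still shows essentially all its flux at `u_min`, so extrapolation to zero spacing is nearly exact; a 20 arcsec source shows practically none, and the interferometer alone cannot recover its total flux, whence the need for the short-spacing information discussed next.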
