Principles of Modern Radar - Volume 2 1891121537

5.3 SR Algorithms

One can argue that this so-called oracle solution is the best that we can hope to do given knowledge of $\Gamma$ and no other information; see, for example, [1]. Given additional knowledge about $e$ or $x_{\text{true}}$, we might be able to improve this estimate, but the basic idea remains that knowledge of the support set greatly simplifies our problem.

Greedy algorithms attempt to capitalize on this notion by identifying the support set of $x_{\text{true}}$ in an iterative manner. These techniques are far older than CS, tracing back to at least an iterative deconvolution algorithm known as CLEAN [89], which is equivalent to the more recent Matching Pursuits (MP) algorithm [90]. Here, we will describe the Orthogonal Matching Pursuits (OMP) algorithm [91] as a typical example of greedy methods. As already suggested, OMP computes a series of estimates of the support set, denoted by $\Gamma_k$.

OMP is based on the idea of viewing $A$ as a dictionary whose columns represent potential basis elements that can be used to represent the measured vector $y$. At each iteration, the residual error term $y - A\hat{x}_k$ is backprojected, that is, multiplied by $A^H$, to obtain the signal $q^k = A^H(y - A\hat{x}_k)$. The largest peak in $q^k$ is identified, and the corresponding atom is added to the index set, that is, $\Gamma_{k+1} = \Gamma_k \cup \arg\max_i |q_i^k|$. The new estimate of the signal is then computed by least squares on the selected atoms as
$$\hat{x}_{k+1} = \left(A_{\Gamma_{k+1}}^H A_{\Gamma_{k+1}}\right)^{-1} A_{\Gamma_{k+1}}^H y.$$

Basically, at each iteration OMP adds to the representation the atom that can explain the largest fraction of the energy still unaccounted for in the reconstruction of $\hat{x}$. Figure 5-7 provides an example that may clarify this process. The example involves a single radar pulse with 500 MHz of bandwidth used to reconstruct a range profile containing three point targets. The initial set $\Gamma_0$ is empty.
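As a concrete illustration, the OMP iteration just described can be sketched in a few lines of NumPy. The function name, interface, and stopping rule below are illustrative choices, not taken from [91]:

```python
import numpy as np

def omp(A, y, n_iters):
    """Sketch of Orthogonal Matching Pursuits: greedily grow the support
    set Gamma_k and re-fit all amplitudes by least squares each iteration."""
    n = A.shape[1]
    x_hat = np.zeros(n)
    support = []                              # Gamma_k: selected atom indices
    for _ in range(n_iters):
        q = A.conj().T @ (y - A @ x_hat)      # backprojected residual q^k
        i = int(np.argmax(np.abs(q)))         # atom with the largest peak
        if i not in support:
            support.append(i)                 # Gamma_{k+1} = Gamma_k U {i}
        # Re-estimate ALL amplitudes on the current support -- this is what
        # distinguishes OMP from plain MP: least-squares fit of y on A_Gamma.
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        x_hat = np.zeros(n)
        x_hat[support] = coef
    return x_hat
```

With an orthonormal dictionary the recovery is exact; for the overcomplete dictionaries of interest here, success depends on the coherence of $A$ and the sparsity of $x_{\text{true}}$, as the guarantees discussed below make precise.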
The top left pane shows the plot of $A^H y$. The peak of this signal is selected as the first estimate of the signal $x_{\text{true}}$, and the corresponding amplitude is computed. The resulting estimate is shown on the top right of the figure. Notice that the estimate of the amplitude is slightly off from the true value, shown with a cross. At the next iteration, the dominant central peak is eliminated from the backprojected error, and the algorithm correctly identifies the leftmost target. Notice in the middle right pane that the amplitude estimate for the middle target is now correct. OMP reestimates all of the amplitudes at each iteration, increasing the computational cost but improving the results compared with MP. The final iteration identifies the third target correctly. Notice that the third target was smaller than some of the spurious peaks in the top left range profile, but OMP still exactly reconstructs it after only three iterations.

For very sparse signals with desirable $A$ matrices, OMP performs well and requires less computation than many other methods. Unfortunately, as discussed in detail in [91,92], the performance guarantees for this algorithm are not as general as the RIP-based guarantees for algorithms like IHT or BPDN. Indeed, the guarantees lack uniformity and only hold in general in a probabilistic setting. Two issues with the OMP algorithm are perhaps responsible for these limitations. The first is that the algorithm only selects a single atom at each iteration. The second, and perhaps more crucial, limitation is that the algorithm cannot correct mistakes in the support selection.
Once an atom is in the set $\Gamma$, it stays there. Two virtually identical algorithms, compressive sampling matching pursuits (CoSaMP) [39] and subspace pursuit (SP) [93], were developed to overcome these limitations. They select atoms in a large group at each iteration and include a mechanism for removing atoms from the dictionary when they are found to be redundant.³³ We can then

³³ These algorithms are closely related to hard-thresholding approaches. Indeed, many of the SR algorithms being developed blur the lines between the classes of algorithms we have used throughout our discussion.
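A minimal sketch of this group-selection-and-pruning mechanism follows, closer in spirit to SP's re-estimation step than to the exact CoSaMP recursion; the sparsity level `s` is assumed known, and the function name and interface are illustrative, not taken from [39] or [93]:

```python
import numpy as np

def cosamp_like(A, y, s, n_iters=5):
    """CoSaMP/SP-flavored sketch: select atoms in groups of 2s, merge with
    the current support, then prune back to the s strongest. Unlike OMP,
    an atom chosen early can be discarded later at the pruning step."""
    n = A.shape[1]
    x_hat = np.zeros(n)
    for _ in range(n_iters):
        q = A.conj().T @ (y - A @ x_hat)              # backprojected residual
        omega = np.argsort(np.abs(q))[-2 * s:]        # 2s strongest new atoms
        T = np.union1d(omega, np.nonzero(x_hat)[0])   # merge with support
        b, *_ = np.linalg.lstsq(A[:, T], y, rcond=None)
        keep = T[np.argsort(np.abs(b))[-s:]]          # prune: keep s largest
        coef, *_ = np.linalg.lstsq(A[:, keep], y, rcond=None)
        x_hat = np.zeros(n)
        x_hat[keep] = coef                            # support can shrink/move
    return x_hat
```

The pruning line is the key difference from OMP: support membership is re-decided every iteration, which is the error-correction mechanism the text describes.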
