Principles of Modern Radar - Volume 2 1891121537

164 CHAPTER 5 Radar Applications of Sparse Reconstruction

constraint on R2s(A), even though the signal of interest is assumed to be s-sparse, because the proof, like the previous proof for uniqueness of the sparse solution, relies on preserving the energy of differences between s-sparse signals.

We should emphasize that this condition is sufficient but not necessary. Indeed, good reconstructions using (5.12) are often observed with a measurement matrix that does not even come close to satisfying the RIP. For example, the authors in [37] demonstrate recovery of sinusoids in noise using SR with performance approaching the Cramér-Rao lower bound for estimating sinusoids in noise, despite having the RIC for A approach unity.^17 Nonetheless, RIP offers powerful performance guarantees and a tool for proving results about various algorithms, as we shall see in Section 5.3. Indeed, in some cases the effort to prove RIP-based guarantees for an algorithm has led to improvements in the algorithm itself, such as the CoSaMP algorithm [39] discussed in Section 5.3.4. Other conditions exist in the literature, for example [40–42], and indeed developing less restrictive conditions is an active area of research.

5.2.5.3 Matrices that Satisfy RIP

At this point, it may seem that we have somehow cheated. We started with an NP-hard problem, namely finding the solution with the smallest ℓ0 norm. The result given in (5.15) states that we can instead solve the convex relaxation of this problem and, in the noise-free case, obtain the exact same solution. The missing detail is that we have assumed that A has a given RIC.
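The convex relaxation just described can be made concrete with a small numerical sketch. This example is not from the text: it assumes NumPy and SciPy are available, and `basis_pursuit` is a hypothetical helper name. It poses the noise-free ℓ1 problem min ‖x‖1 subject to Ax = y as a linear program by splitting x into nonnegative parts u and v with x = u − v:

```python
import numpy as np
from scipy.optimize import linprog

def basis_pursuit(A, y):
    """Solve min ||x||_1 s.t. Ax = y as a linear program.

    Split x = u - v with u, v >= 0, so that ||x||_1 = sum(u + v).
    (Hypothetical helper, for illustration only.)
    """
    M, N = A.shape
    c = np.ones(2 * N)                 # objective: sum(u) + sum(v)
    A_eq = np.hstack([A, -A])          # enforces A(u - v) = y
    res = linprog(c, A_eq=A_eq, b_eq=y, bounds=(0, None))
    u, v = res.x[:N], res.x[N:]
    return u - v

rng = np.random.default_rng(0)
M, N, s = 40, 128, 4                   # measurements, dimension, sparsity
A = rng.standard_normal((M, N)) / np.sqrt(M)   # random Gaussian measurement matrix
x_true = np.zeros(N)
support = rng.choice(N, size=s, replace=False)
x_true[support] = rng.standard_normal(s)
y = A @ x_true                          # noise-free measurements

x_hat = basis_pursuit(A, y)
print(np.max(np.abs(x_hat - x_true)))   # small: recovery up to solver tolerance
```

With these (assumed) parameters, M comfortably exceeds the Cs log(N/s) measurement count discussed below, so the LP typically returns the exact sparse solution even though the system Ax = y is badly underdetermined.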
Unfortunately, computing the RIC for a given matrix is an NP-hard task. Indeed, it requires computing an SVD of every possible subset of n columns of A. For a matrix of any meaningful size, this is effectively impossible.

So, it seems that we may have traded an NP-hard reconstruction task for an NP-hard measurement design task. Put another way, for our straightforward convex reconstruction problem to have desirable properties, we must somehow design a measurement matrix A with a property that we cannot even verify. Fortunately, an elegant solution to this problem exists. Rather than designing A, we will use randomization to generate a matrix that will satisfy our RIC requirements with very high probability.^18

Numerous authors have explored random matrix constructions that yield acceptable RICs with high probability. Matrices with entries chosen from a uniform random distribution, a Gaussian distribution,^19 or a Bernoulli distribution, as well as other examples, satisfy the required RIP provided that M is greater than Cs log(N/s) for some distribution-dependent constant C [21]. A very important case for radar and medical imaging applications is that a random selection of the rows of a discrete Fourier transform matrix also satisfies a similar condition with M ≥ Cs log^4(N) [45]. Furthermore, A can be constructed

17 See also [38] for an investigation of modifying the RIP property to address performance guarantees in situations where the standard RIP is violated.
18 Several attempts have been made to develop schemes for constructing A matrices deterministically with the desired RIP properties, for example [47]. However, these results generally require more measurements (i.e., larger M) to guarantee the same RIC. See [43] for an example construction based on Reed-Muller codes that does not satisfy RIP for all vectors but preserves the energy of a randomly drawn sparse vector with high probability. Expander graphs have also been explored as options for constructing appropriate forward operators with accompanying fast reconstruction algorithms; see for example [44].
19 Indeed, this result holds for the wider class of sub-Gaussian distributions.
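The combinatorial cost of computing the RIC exactly, noted above, can be seen in a small sketch. This example is not from the text: it assumes NumPy, and `ric_lower_bound` is a hypothetical helper. The exact order-s RIC needs the extreme singular values of every M × s column submatrix, of which there are C(N, s); sampling random supports instead gives only a cheap Monte Carlo lower bound:

```python
import numpy as np

def ric_lower_bound(A, s, trials=2000, seed=1):
    """Monte Carlo lower bound on the order-s restricted isometry constant.

    The exact RIC requires an SVD of every M x s column submatrix of A,
    C(N, s) of them in total; here we only sample random supports.
    (Hypothetical helper, for illustration only.)
    """
    rng = np.random.default_rng(seed)
    M, N = A.shape
    delta = 0.0
    for _ in range(trials):
        S = rng.choice(N, size=s, replace=False)
        sv = np.linalg.svd(A[:, S], compute_uv=False)
        # RIP: (1 - delta)||x||^2 <= ||A_S x||^2 <= (1 + delta)||x||^2,
        # so track the worst squared singular value deviation from 1.
        delta = max(delta, abs(sv[0] ** 2 - 1.0), abs(sv[-1] ** 2 - 1.0))
    return delta

rng = np.random.default_rng(0)
M, N, s = 40, 128, 4
A = rng.standard_normal((M, N)) / np.sqrt(M)  # columns have unit norm in expectation
delta_hat = ric_lower_bound(A, s)
print(f"sampled lower bound on the order-{s} RIC: {delta_hat:.3f}")
```

Each sampled support costs one small SVD, so the estimate is cheap, but it can only ever underestimate the true RIC; certifying the constant for all supports is exactly the intractable part.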
