Principles of Modern Radar - Volume 2
CHAPTER 5 Radar Applications of Sparse Reconstruction

approaches discussed in Section 5.3.1.2. An analysis and discussion of one of these iteratively reweighted least squares (IRLS) approaches in terms of the RIP is provided in [85]. One class of IRLS algorithms solves a sequence of $\ell_2$-regularized problems with a weighting matrix derived from the previous iterate. An example is the focal underdetermined system solver (FOCUSS) algorithm [86], which predates the work on CS. Here, we provide an example of an algorithm that solves a sequence of reweighted $\ell_1$ problems, as proposed in [87]. In particular, the noisy-data version of the algorithm is given by

$$
\hat{x}^{k+1} = \operatorname*{argmin}_{x} \left\| W^{k} x \right\|_{1} \quad \text{subject to} \quad \left\| A x - y \right\|_{2} \le \sigma,
\qquad
W^{k}_{i,i} = \frac{1}{\left| \hat{x}^{k}_{i} \right| + \varepsilon}
$$

In other words, $W^k$ is a diagonal matrix whose elements are the inverses of the amplitudes of the elements of the previous iterate, with the small constant $\varepsilon$ included to ensure finite weights. The next estimate of $x$ is obtained by solving a new $\mathrm{BP}_\sigma$ problem with this weighting matrix applied to $x$. As a coefficient becomes small, the weight applied to it becomes very large, driving the coefficient toward zero. In this way, the reweighting scheme promotes sparse solutions. Indeed, this approach can reconstruct sparse solutions from fewer measurements than a straightforward $\mathrm{BP}_\sigma$ approach. Unfortunately, the price for this performance improvement is the need to solve a sequence of $\ell_1$ optimization problems. These subproblems can be solved with any of the techniques we have already discussed; a small numerical sketch of the reweighting loop is given at the end of this section.

This algorithm can be derived from an MM framework in which the cost function to be minimized is the sum of the logarithms of the absolute values of the coefficients of $x$ [87]. This function's unit ball is closer to that of the $\ell_0$ norm than to that of the $\ell_1$ norm, which intuitively explains the improved performance. The approach can be extended to nonconvex $\ell_p$ minimization with $p < 1$; see [88].

5.3.4 Greedy Methods

We now turn our attention to a somewhat different class of algorithms for sparse reconstruction. Greedy algorithms are principally motivated by computational efficiency, although they often exhibit somewhat inferior performance compared with the approaches already considered. They do not arise from optimizing an $\ell_p$-penalized cost function but instead rely on iterative attempts to identify the support of the unknown vector $x^{\text{true}}$. Notice that if we know a priori which elements of $x^{\text{true}}$ are nonzero, indexed by the set $\Gamma$, then we can solve the much easier problem

$$
y = A_{\Gamma} x^{\text{true}}_{\Gamma} + e \tag{5.35}
$$

where subscripting by $\Gamma$ indicates that we have thrown away the entries (of $x^{\text{true}}$) or columns (of $A$) not included in $\Gamma$. Since $x^{\text{true}}$ is sparse, the matrix $A_{\Gamma}$ has many more rows than columns. Put simply, this problem is overdetermined and can be solved easily using least squares. In particular, we can estimate $x^{\text{true}}$ as

$$
\hat{x}_{\Gamma} = \left( A^{H}_{\Gamma} A_{\Gamma} \right)^{-1} A^{H}_{\Gamma} y \tag{5.36}
$$

with all entries of the estimate outside the set $\Gamma$ set to zero. A sketch of this computation is given below.
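To make (5.35) and (5.36) concrete, here is a minimal NumPy sketch of the least-squares step on a known support. The function name `ls_on_support`, the toy dimensions, and the support set are illustrative assumptions, not code from the text.

```python
import numpy as np

def ls_on_support(A, y, support):
    """Least-squares estimate of x when the support set Gamma is known.

    Keeps only the columns of A indexed by `support` and solves the
    overdetermined system y = A_Gamma x_Gamma + e, i.e. equation (5.36):
    x_Gamma = (A_Gamma^H A_Gamma)^{-1} A_Gamma^H y. All entries of the
    estimate outside the support are set to zero.
    """
    x_hat = np.zeros(A.shape[1], dtype=complex)   # radar data are typically complex
    A_gamma = A[:, support]                       # discard columns outside Gamma
    sol, *_ = np.linalg.lstsq(A_gamma, y, rcond=None)
    x_hat[support] = sol
    return x_hat

# Toy usage (assumed sizes): 30 measurements of a length-100 vector,
# with known support {4, 17, 60}
rng = np.random.default_rng(1)
A = rng.standard_normal((30, 100)) + 1j * rng.standard_normal((30, 100))
x_true = np.zeros(100, dtype=complex)
x_true[[4, 17, 60]] = [1.0, -2.0, 0.5j]
y = A @ x_true + 0.01 * rng.standard_normal(30)
x_hat = ls_on_support(A, y, [4, 17, 60])
print(np.round(x_hat[[4, 17, 60]], 3))            # close to the true coefficients
```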
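Returning to the iteratively reweighted $\ell_1$ scheme described before Section 5.3.4, the following sketch implements the outer reweighting loop. To stay self-contained, it replaces each constrained $\mathrm{BP}_\sigma$ subproblem with its Lagrangian (penalized) form and solves it by plain ISTA; the penalty `lam`, the iteration counts, `eps`, and the function names are all illustrative assumptions rather than the formulation of [87].

```python
import numpy as np

def ista_weighted_l1(A, y, w, lam=0.05, n_iter=500):
    """Solve min_x 0.5*||Ax - y||_2^2 + lam*||diag(w) x||_1 by ISTA.

    This penalized problem stands in for the constrained BP_sigma
    subproblem; lam plays a role analogous to the noise level sigma.
    Real-valued data are assumed for simplicity.
    """
    L = np.linalg.norm(A, 2) ** 2                 # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        z = x - A.T @ (A @ x - y) / L             # gradient step on the quadratic term
        t = lam * w / L                           # per-coefficient soft threshold
        x = np.sign(z) * np.maximum(np.abs(z) - t, 0.0)
    return x

def reweighted_l1(A, y, eps=1e-3, n_outer=5, lam=0.05):
    """Outer loop of the reweighted l1 algorithm: after each solve, the
    weights are reset to W_{ii} = 1 / (|x_i| + eps), so small coefficients
    receive large weights and are driven toward zero."""
    w = np.ones(A.shape[1])                       # first pass is an ordinary l1 solve
    for _ in range(n_outer):
        x = ista_weighted_l1(A, y, w, lam=lam)
        w = 1.0 / (np.abs(x) + eps)
    return x

# Toy usage: a 5-sparse length-128 vector from 60 noisy random measurements
rng = np.random.default_rng(0)
A = rng.standard_normal((60, 128)) / np.sqrt(60)
x_true = np.zeros(128)
x_true[rng.choice(128, 5, replace=False)] = rng.standard_normal(5)
y = A @ x_true + 0.01 * rng.standard_normal(60)
x_hat = reweighted_l1(A, y)
print(np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```

In practice, each weighted subproblem would be handled by one of the $\mathrm{BP}_\sigma$ solvers discussed earlier in the chapter; only the inner solver changes, while the reweighting logic stays the same.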
