Principles of Modern Radar - Volume 2 (ISBN 1891121537)
CHAPTER 5 Radar Applications of Sparse Reconstruction

Note in (5.25) that the TV norm of the magnitude of the image, rather than of the complex-valued reflectivity, is penalized. This choice is made to allow rapid phase variations.²⁴ Notice also that the $\ell_p$ norm is used with $0 < p \leq 2$. As we have mentioned, selecting $p < 1$ can improve performance but yields a nonconvex problem. Cetin and Karl replace the $\ell_p$ terms in the cost function with differentiable approximations, derive an approximation for the Hessian of the cost function, and implement a quasi-Newton method.

Kragh [77] developed a closely related algorithm (for the case with $\lambda_2 = 0$) along with additional convergence guarantees, leveraging ideas from majorization-minimization (MM).²⁵ For the case with no TV penalty, both algorithms²⁶ end up with an iteration of the form

$$\hat{x}_{k+1} = \left[ A^H A + h(\hat{x}_k) \right]^{-1} A^H y \qquad (5.26)$$

where $h(\cdot)$ is a function based on the norm choice $p$. The matrix inverse can be implemented with preconditioned conjugate gradients to obtain a fast algorithm. Notice that $A^H A$ often represents a convolution that can be calculated using fast Fourier transforms (FFTs). A more detailed discussion of these algorithms, with references to various extensions to radar problems of interest, including nonisotropic scattering, can be found in [17].

These algorithms do not begin to cover the plethora of existing solvers. Nonetheless, these examples have proven useful in radar applications. The next section will consider thresholding algorithms for SR. As we shall see, these algorithms trade generality for faster computation while still providing solutions to $\mathrm{QP}_\lambda$.

5.3.2 Thresholding Algorithms

The algorithms presented at the end of Section 5.3.1.2 both require inversion of a potentially massive matrix at each iteration. While fast methods for computing this inverse exist, we are now going to consider a class of algorithms that avoids this necessity. These algorithms will be variations on the iteration

$$\hat{x}_{k+1} = \eta\left\{ \hat{x}_k - \mu A^H \left( A \hat{x}_k - y \right) \right\} \qquad (5.27)$$

where $\eta\{\cdot\}$ is a thresholding function. The motivation for considering an algorithm of this form becomes apparent if we examine the cost function for $\mathrm{QP}_\lambda$. First, for ease of discussion, let us define two functions that represent the two terms in this cost function. Adopting the notation used in [79],

$$f(x) = \|Ax - y\|_2^2$$
$$g(x) = \lambda \|x\|_1$$

²⁴ Imagine, for example, the phase response of a flat plate to see intuitively why penalizing variation in the phase would be problematic.
²⁵ MM relies on replacing the cost function of interest with a surrogate function that is strictly greater (hence majorization) but easier to minimize (hence minimization). The idea is to majorize the function near the current estimate, minimize the resulting approximation, and repeat. The perhaps more familiar expectation-maximization algorithm is actually a special case of MM, as detailed in an excellent tutorial [78].
²⁶ A particular step size must be selected to obtain this form for the algorithm in [75]. It is also worth emphasizing that this version can handle the more general problem with $\lambda_2 \neq 0$; the resulting algorithm simply uses a more complicated expression for the function $h$.
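To make the fixed-point iteration (5.26) concrete, the following is a minimal NumPy sketch, not the implementation from [75] or [77]. It assumes the common smooth approximation in which $h(\hat{x}_k)$ is the diagonal matrix $\lambda p \, \mathrm{diag}\!\left((|x_i|^2 + \epsilon)^{p/2 - 1}\right)$, and it uses a dense solve where, as the text notes, preconditioned conjugate gradients (and FFTs when $A^H A$ is a convolution) would be used in practice. All function names and parameter values are illustrative.

```python
import numpy as np

def lp_quasi_newton(A, y, lam=0.1, p=1.0, eps=1e-6, n_iter=50):
    """Fixed-point iteration of the form (5.26) for smoothed l_p-penalized
    least squares. A dense solve stands in for the preconditioned
    conjugate gradient solver used in practice."""
    AhA = A.conj().T @ A
    Ahy = A.conj().T @ y
    x = Ahy.copy()  # initialize with the matched-filter image A^H y
    for _ in range(n_iter):
        # h(x_k): diagonal approximation to the Hessian of the smoothed
        # l_p penalty, lam * p * (|x_i|^2 + eps)^(p/2 - 1); scale
        # constants are absorbed into lam
        h = lam * p * (np.abs(x) ** 2 + eps) ** (p / 2 - 1)
        x = np.linalg.solve(AhA + np.diag(h), Ahy)
    return x

# toy problem: random complex dictionary, three-scatterer sparse scene
rng = np.random.default_rng(0)
A = (rng.standard_normal((64, 128)) + 1j * rng.standard_normal((64, 128))) / 8
x_true = np.zeros(128, dtype=complex)
x_true[[5, 40, 90]] = [2.0, 1.5j, -1.0]
y = A @ x_true + 0.01 * (rng.standard_normal(64) + 1j * rng.standard_normal(64))
x_hat = lp_quasi_newton(A, y, lam=0.05, p=1.0)
```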
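Similarly, here is a minimal sketch of the thresholding iteration (5.27). For the $\ell_1$ penalty $g(x) = \lambda \|x\|_1$, the natural choice of $\eta\{\cdot\}$ is complex soft thresholding, which yields the classic iterative shrinkage-thresholding algorithm (ISTA); the step-size rule and scaling constants below follow one standard convention and are not necessarily those of [79].

```python
import numpy as np

def soft_threshold(x, t):
    """Complex soft threshold: shrink each magnitude by t, keep the phase."""
    mag = np.abs(x)
    return np.where(mag > t, (1.0 - t / np.maximum(mag, 1e-30)) * x, 0.0)

def ista(A, y, lam=0.05, n_iter=200):
    """Thresholding iteration (5.27) with eta = soft thresholding (ISTA)."""
    mu = 1.0 / np.linalg.norm(A, 2) ** 2  # step size mu <= 1 / ||A||^2
    x = np.zeros(A.shape[1], dtype=complex)
    for _ in range(n_iter):
        # gradient step on f(x), then proximal (shrinkage) step on g(x)
        x = soft_threshold(x - mu * (A.conj().T @ (A @ x - y)), mu * lam)
    return x
```

Note that each iteration costs only the two matrix-vector products with $A$ and $A^H$, which is precisely the computational advantage over the matrix inversion required by (5.26).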
