
Principles of Modern Radar - Volume 2 1891121537

160 CHAPTER 5 Radar Applications of Sparse Reconstruction

Regularization using the $\ell_1$ norm has a long history; see, for example, [25]. We shall discuss several formulations of the problem described in (5.12), and algorithms for solving it, in Section 5.3.1. When the problem is solved with an $\ell_2$ penalty in place of the $\ell_1$ norm, the result is termed Tikhonov regularization [26],[^12] which is known in the statistics community as ridge regression [28]. This formulation has the advantage of offering a simple, closed-form solution that can be implemented robustly with an SVD [20]. Unfortunately, as in the noise-free case, this approach does not promote sparsity in the resulting solutions. We mention Tikhonov regularization because it has a well-known Bayesian interpretation using Gaussian priors. It turns out that the $\ell_1$-penalized reconstruction can also be derived using a Bayesian approach.

To cast the estimation of $x_{\text{true}}$ in a Bayesian framework, we must adopt priors on the signal and disturbance. First, we will adopt a Laplacian prior[^13] on the unknown signal $x_{\text{true}}$ and assume that the noise $e$ is circular Gaussian with known covariance $\Sigma$, that is, $e \sim \mathcal{CN}(0, \Sigma)$ and

$$p(x_{\text{true}}) \propto \exp\left\{ -\frac{\lambda}{2} \left\| x_{\text{true}} \right\|_1 \right\}$$

where the normalization constant on $p(x_{\text{true}})$ is omitted for simplicity. Given no other information, we could set $\Sigma = I$, but we will keep the generality. We can then find the MAP estimate easily as

$$
\begin{aligned}
\hat{x}_\lambda &= \operatorname*{argmax}_x \; p(x \mid y) \\
&= \operatorname*{argmax}_x \; \frac{p(y \mid x)\, p(x)}{p(y)} \\
&= \operatorname*{argmax}_x \; p(y \mid x)\, p(x) \\
&= \operatorname*{argmax}_x \; \exp\left\{ -\frac{1}{2} \left\| Ax - y \right\|_{\Sigma}^2 \right\} \exp\left\{ -\frac{\lambda}{2} \left\| x \right\|_1 \right\} \\
&= \operatorname*{argmin}_x \; \left\| Ax - y \right\|_{\Sigma}^2 + \lambda \left\| x \right\|_1
\end{aligned}
$$

where $\|v\|_{\Sigma}^2 = v^H \Sigma^{-1} v$. The resulting optimization problem is precisely what we would expect given the colored Gaussian noise prior. Since $\Sigma$ is a covariance matrix, and hence Hermitian and positive definite, the problem is convex and solvable with a variety of techniques.
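To make the final minimization concrete, the following is a minimal numpy sketch of one standard technique for this convex problem, the iterative shrinkage-thresholding algorithm (ISTA), applied to the white-noise case $\Sigma = I$. The problem sizes, sparsity pattern, regularization weight `lam`, and random seed are all illustrative choices, not values from the text.

```python
import numpy as np

def soft_threshold(v, tau):
    """Complex soft-thresholding: shrink magnitudes by tau, preserve phase."""
    mag = np.abs(v)
    return np.where(mag > tau, (1.0 - tau / np.maximum(mag, 1e-30)) * v, 0)

def ista(A, y, lam, n_iter=500):
    """Minimize ||Ax - y||_2^2 + lam * ||x||_1 by proximal gradient steps."""
    # Step size 1/L, where L = 2 * sigma_max(A)^2 is the Lipschitz
    # constant of the gradient of the quadratic term.
    L = 2.0 * np.linalg.norm(A, 2) ** 2
    x = np.zeros(A.shape[1], dtype=complex)
    for _ in range(n_iter):
        grad = 2.0 * A.conj().T @ (A @ x - y)      # gradient of ||Ax - y||_2^2
        x = soft_threshold(x - grad / L, lam / L)  # prox of (lam/L) * ||.||_1
    return x

# Toy problem: a 3-sparse complex signal observed through a random A.
rng = np.random.default_rng(0)
A = rng.standard_normal((40, 100)) + 1j * rng.standard_normal((40, 100))
x_true = np.zeros(100, dtype=complex)
x_true[[5, 37, 80]] = [2.0, -1.5j, 1.0 + 1.0j]
noise = 0.01 * (rng.standard_normal(40) + 1j * rng.standard_normal(40))
y = A @ x_true + noise

x_hat = ista(A, y, lam=1.0)
top = sorted(np.argsort(np.abs(x_hat))[-3:].tolist())
print(top)  # indices of the three largest-magnitude coefficients
```

The soft-thresholding step is what the Laplacian prior buys us: each iteration pulls small coefficients exactly to zero, which is how the $\ell_1$ penalty promotes sparsity where the closed-form Tikhonov solution cannot.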
In fact, we can factor the inverse of the covariance using the Cholesky decomposition as $\Sigma^{-1} = R^H R$ to obtain

$$
\begin{aligned}
\hat{x}_\lambda &= \operatorname*{argmin}_x \; \left\| Ax - y \right\|_{\Sigma}^2 + \lambda \left\| x \right\|_1 \\
&= \operatorname*{argmin}_x \; \left\| RAx - Ry \right\|_2^2 + \lambda \left\| x \right\|_1 \\
&= \operatorname*{argmin}_x \; \left\| \bar{A}x - \bar{y} \right\|_2^2 + \lambda \left\| x \right\|_1
\end{aligned}
\tag{5.13}
$$

where $\bar{A} = RA$ and $\bar{y} = Ry$.

[^12]: An account of the early history of Tikhonov regularization, dating to 1955, is given in [27].
[^13]: Recent analysis has shown that, while the Laplacian prior leads to several standard reconstruction algorithms, random draws from this distribution are not compressible. Other priors leading to the same $\ell_1$ penalty term but yielding compressible realizations have been investigated. See [29] for details.
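The whitening step in (5.13) is easy to verify numerically. The sketch below (with an arbitrary toy covariance, not one from the text) builds $R$ from numpy's Cholesky factorization, which returns the lower-triangular $L$ in $\Sigma = L L^H$, so that $R = L^{-1}$ satisfies $\Sigma^{-1} = R^H R$, and checks that the weighted norm equals the ordinary 2-norm of the whitened residual.

```python
import numpy as np

rng = np.random.default_rng(1)
m, n = 6, 4
A = rng.standard_normal((m, n))
y = rng.standard_normal(m)

# A toy positive-definite noise covariance (illustrative values only).
S = rng.standard_normal((m, m))
Sigma = S @ S.T + m * np.eye(m)

# numpy factors Sigma = L L^H (L lower triangular), so
# Sigma^{-1} = (L^{-1})^H (L^{-1}), i.e. R = L^{-1}.
L = np.linalg.cholesky(Sigma)
R = np.linalg.inv(L)

A_bar, y_bar = R @ A, R @ y  # whitened quantities: A-bar = RA, y-bar = Ry

# Check: ||Ax - y||_Sigma^2 = ||A-bar x - y-bar||_2^2 for any x.
x = rng.standard_normal(n)
r = A @ x - y
weighted = r @ np.linalg.solve(Sigma, r)          # r^H Sigma^{-1} r
white = np.linalg.norm(A_bar @ x - y_bar) ** 2
print(np.isclose(weighted, white))  # True
```

This is why the colored-noise problem needs no special solver: after one triangular transform it is exactly the white-noise $\ell_1$-penalized least-squares problem.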
