Principles of Modern Radar - Volume 2 (ISBN 1891121537)

CHAPTER 5   Radar Applications of Sparse Reconstruction

5.3.1 Penalized Least Squares

The convex relaxation of the $\ell_0$ reconstruction problem given in (5.12) can be viewed as a penalized least squares problem. We have already seen that this problem arises in a Bayesian framework by assuming a Gaussian noise prior and a Laplacian signal prior. This approach has a long history, for example [25], and the use of the $\ell_1$ norm for a radar problem was specifically proposed at least as early as [62].

The good news is that (5.12) is a linear program for real data and a second-order cone program for complex data [1]. As a result, accurate and fairly efficient methods such as interior point algorithms exist to solve it [24]. Unfortunately, these solvers are not well suited to the extremely large A matrices that arise in many problems of interest, and they do not capitalize on the specific structure of (5.12). As a result, a host of specialized algorithms for solving these penalized least squares problems has been developed.

This section explores several of these algorithms. It should be emphasized that any solver guaranteed to find a solution of (5.12), or of one of the equivalent formulations discussed herein, inherits the RIP-based performance guarantee given in (5.15), provided that A satisfies the required RIP condition.

5.3.1.1 Equivalent Optimization Problems and the Pareto Frontier

First, we discuss several equivalent formulations of (5.12), adopting the nomenclature and terminology of [63]. In this framework, the optimization problem solved in (5.12) is referred to as basis pursuit de-noising (BPDN), or $\mathrm{BP}_\sigma$. The same problem solved in the noise-free setting, with $\sigma = 0$, is called simply basis pursuit (BP), and its solution is denoted $\hat{x}_{\mathrm{BP}}$. The theory of Lagrange multipliers indicates that we can instead solve an unconstrained problem that yields the same solution, provided that the Lagrange multiplier is selected correctly; we refer to this unconstrained problem as the $\ell_1$-penalized quadratic program and denote it $\mathrm{QP}_\lambda$. Similarly, we can solve a constrained optimization problem with the constraint placed on the $\ell_1$ norm of the unknown vector, instead of on the $\ell_2$ norm of the reconstruction error, to obtain yet a third equivalent problem. We use the name LASSO [64], popular in the statistics community, interchangeably with the notation $\mathrm{LS}_\tau$ for this problem. The three equivalent optimization problems can be written as

$(\mathrm{BP}_\sigma)\qquad \hat{x}_\sigma = \operatorname*{argmin}_x \|x\|_1 \quad \text{subject to} \quad \|Ax - y\|_2 \le \sigma$   (5.18)

$(\mathrm{QP}_\lambda)\qquad \hat{x}_\lambda = \operatorname*{argmin}_x \; \lambda \|x\|_1 + \|Ax - y\|_2^2$   (5.19)

$(\mathrm{LS}_\tau)\qquad \hat{x}_\tau = \operatorname*{argmin}_x \|Ax - y\|_2 \quad \text{subject to} \quad \|x\|_1 \le \tau$   (5.20)

We note that a fourth formulation, known as the Dantzig selector, also appears in the literature [65] and can be expressed as

$(\mathrm{DS}_\zeta)\qquad \hat{x}_\zeta = \operatorname*{argmin}_x \|x\|_1 \quad \text{subject to} \quad \|A^H (Ax - y)\|_\infty \le \zeta$   (5.21)

but this problem does not yield the same set of solutions as the other three. For a treatment of the relationship between $\mathrm{DS}_\zeta$ and the other problems, see [41].
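To make the formulations concrete, all three equivalent problems can be posed directly with an off-the-shelf conic solver. The following is a minimal sketch using the open-source CVXPY package, which is not referenced in this chapter; the matrix A, the data y, and the parameter values are placeholders chosen only for illustration.

    import cvxpy as cp
    import numpy as np

    rng = np.random.default_rng(0)
    m, n = 64, 256
    A = rng.standard_normal((m, n))    # placeholder measurement matrix
    y = rng.standard_normal(m)         # placeholder measurement vector
    sigma, lam, tau = 0.1, 0.05, 1.0   # illustrative parameter choices

    x = cp.Variable(n)                 # cp.Variable(n, complex=True) for complex data

    # (BP_sigma): minimize ||x||_1 subject to ||Ax - y||_2 <= sigma
    bp = cp.Problem(cp.Minimize(cp.norm(x, 1)),
                    [cp.norm(A @ x - y, 2) <= sigma])

    # (QP_lambda): minimize lam * ||x||_1 + ||Ax - y||_2^2
    qp = cp.Problem(cp.Minimize(lam * cp.norm(x, 1) + cp.sum_squares(A @ x - y)))

    # (LS_tau): minimize ||Ax - y||_2 subject to ||x||_1 <= tau
    ls = cp.Problem(cp.Minimize(cp.norm(A @ x - y, 2)),
                    [cp.norm(x, 1) <= tau])

Calling bp.solve(), qp.solve(), or ls.solve() dispatches each problem to an interior point or similar conic solver; this is exactly the class of general-purpose method that becomes impractical when A is very large.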
The first three problems are all different routes to the same set of solutions. To be explicit, the solution to any one of these problems is characterized by a triplet of values $(\sigma, \lambda, \tau)$ that renders $\hat{x}_\sigma = \hat{x}_\lambda = \hat{x}_\tau$. Unfortunately, it is very difficult to map between these parameter values a priori; in general the correspondence is known only after one of the problems has been solved.
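Once one problem has been solved, however, the correspondence is easy to check numerically. Continuing the CVXPY sketch above, we can solve $\mathrm{BP}_\sigma$, read off the implied budget $\tau = \|\hat{x}_\sigma\|_1$, and confirm that $\mathrm{LS}_\tau$ returns the same vector. This is only an illustration: it assumes the solutions are unique, agreement holds only up to solver tolerance, and the Pareto-frontier analysis of [63] is what makes the relationship precise.

    # Continuing the sketch above: solve BP_sigma, then use its solution
    # to pick the matching constraint level for LS_tau.
    bp.solve()
    x_sigma = x.value.copy()
    tau_star = np.linalg.norm(x_sigma, 1)   # implied l1 budget

    ls_star = cp.Problem(cp.Minimize(cp.norm(A @ x - y, 2)),
                         [cp.norm(x, 1) <= tau_star])
    ls_star.solve()
    print(np.linalg.norm(x_sigma - x.value))   # ~0 up to solver tolerance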

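As a concrete instance of the specialized large-scale methods mentioned at the start of this section, the sketch below implements the iterative soft-thresholding algorithm (ISTA), a standard proximal-gradient method for $\mathrm{QP}_\lambda$. ISTA is not drawn from this chapter; it is shown here, under the same placeholder data assumptions as above, to illustrate why first-order methods scale.

    import numpy as np

    def soft_threshold(v, t):
        # Entrywise shrinkage: reduce each magnitude by t, preserving phase
        # or sign; this is the proximal operator of t * ||.||_1 and handles
        # real and complex data alike.
        mag = np.abs(v)
        return v * (np.maximum(mag - t, 0.0) / np.maximum(mag, 1e-16))

    def ista(A, y, lam, n_iter=500):
        # Minimize lam * ||x||_1 + ||Ax - y||_2^2 by proximal-gradient steps.
        # Step size 1/L, where L = 2 * ||A||_2^2 is the Lipschitz constant of
        # the gradient 2 * A^H (Ax - y).
        L = 2.0 * np.linalg.norm(A, 2) ** 2
        step = 1.0 / L
        x = np.zeros(A.shape[1], dtype=np.result_type(A, y))
        for _ in range(n_iter):
            grad = 2.0 * A.conj().T @ (A @ x - y)   # gradient of ||Ax - y||_2^2
            x = soft_threshold(x - step * grad, step * lam)
        return x

    x_hat = ista(A, y, lam)   # reuses the placeholder A, y, lam from above

Each iteration costs only two matrix-vector products with A and A^H, so the same code applies when A is available only as a fast operator (for example, an FFT-based forward model); this is precisely the large-problem regime in which interior point solvers falter.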