Principles of Modern Radar - Volume 2 (ISBN 1891121537)

5.2 CS Theory

where $\bar{A} = RA$ and $\bar{y} = Ry$. This problem is equivalent to (5.12) when $\lambda$ is chosen correctly, as detailed in Section 5.3.1.1. Readers familiar with adaptive processing will recognize the application of $R$ as a pre-whitening step. Indeed, this processing is the $\ell_1$ version of the typical pre-whitening followed by matched filtering operation used in, for example, STAP [15,30].

Returning to the geometric interpretation of the problem, examination of Figure 5-1 provides an intuitive geometric reason that the $\ell_1$ norm is effective for obtaining sparse solutions. In particular, sparse solutions contain numerous zero values and thus lie on the coordinate axes in several of their dimensions. Since the $\ell_1$ unit ball is "spiky" (i.e., more pointed along the coordinate axes than the rounded $\ell_2$ ball), a potential solution $x$ with zero entries will tend to have a smaller $\ell_1$ norm than a non-sparse solution. We could of course consider $p < 1$ to obtain ever more "spiky" unit balls, as is considered in [31]. Using $p < 1$ allows sparse signals to be reconstructed from fewer measurements than $p = 1$, but at the expense of solving a non-convex optimization problem that could feature local minima.

This geometric intuition can be formalized using so-called tube and cone constraints, as described in, for example, [1]. Using the authors' terminology, the tube constraint follows from the inequality constraint in the optimization problem (5.12):

$$
\left\| A(x_{\text{true}} - \hat{x}_\sigma) \right\|_2
\le \left\| A x_{\text{true}} - y \right\|_2 + \left\| A \hat{x}_\sigma - y \right\|_2
\le 2\sigma
$$

The first inequality is an application of the triangle inequality satisfied by any norm, and the second follows from the assumed bound on $e$ and the form of (5.12).
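The "spiky ball" intuition can be checked numerically. The sketch below, a minimal illustration rather than anything from the text, builds a small noiseless problem ($\sigma = 0$): the minimum-$\ell_2$ solution from the pseudoinverse is generically dense, while the minimum-$\ell_1$ solution, computed via the standard linear-programming reformulation of basis pursuit, concentrates on a few coordinates. The problem sizes, seed, and variable names are illustrative assumptions.

```python
# Illustrative sketch (not from the book): min-l1 vs. min-l2 solutions
# of an underdetermined system y = A x_true with sparse x_true.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
m, n, k = 10, 20, 3              # measurements, dimension, sparsity (assumed sizes)

x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
A = rng.standard_normal((m, n))  # random Gaussian measurement matrix
y = A @ x_true                   # noiseless measurements (sigma = 0)

# Minimum-l2 solution: the pseudoinverse picks the point where the
# rounded l2 ball first touches the solution set -- generically dense.
x_l2 = np.linalg.pinv(A) @ y

# Minimum-l1 solution via the standard LP split x = xp - xm, xp, xm >= 0:
# minimize sum(xp) + sum(xm) subject to A (xp - xm) = y.
res = linprog(np.ones(2 * n), A_eq=np.hstack([A, -A]), b_eq=y)
x_l1 = res.x[:n] - res.x[n:]

print("nonzeros (|x_i| > 1e-6): l2 ->",
      int(np.sum(np.abs(x_l2) > 1e-6)),
      " l1 ->", int(np.sum(np.abs(x_l1) > 1e-6)))
```

Note that because $x_{\text{true}}$ is itself feasible for the equality constraint, optimality of the LP guarantees $\|x_{\ell_1}\|_1 \le \|x_{\text{true}}\|_1$ regardless of whether exact recovery occurs.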
Simply put, any vector $x$ that satisfies $\|Ax - y\|_2 \le \sigma$ must lie in a cylinder centered around $A x_{\text{true}}$. When we solve the optimization problem represented in equation (5.12), we choose the solution inside this cylinder with the smallest $\ell_1$ norm.

Since $\hat{x}_\sigma$ is a solution to the convex problem described in (5.12) and thus a global minimum, we obtain the cone constraint$^{14}$ $\|\hat{x}_\sigma\|_1 \le \|x_{\text{true}}\|_1$. Thus, the solution to (5.12) must lie inside the smallest $\ell_1$ ball that contains $x_{\text{true}}$. Since this $\ell_1$ ball is "spiky," our hope is that its intersection with the cylinder defined by the tube constraint is small, yielding an accurate estimate of the sparse signal $x_{\text{true}}$. These ideas are illustrated in two dimensions in Figure 5-5. The authors of [1] go on to prove just such a result, a performance guarantee for CS. However, sparsity of the true signal is not enough by itself to provide this guarantee. We will need to make additional assumptions on the matrix $A$.

5.2.5 Performance Guarantees

We should emphasize at this point that SR by itself is not CS. Instead, CS involves combining SR algorithms with constraints on the measurement matrix $A$, typically satisfied through randomized measurement strategies, to provide provable performance guarantees. In this section, we will explore several conditions on $A$ that yield performance guarantees for CS.

$^{14}$The authors in [1] actually use the name cone constraint for a condition on the reconstruction error that is derived from this inequality.
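Returning briefly to the pre-whitening remark at the start of this passage, the step $\bar{A} = RA$, $\bar{y} = Ry$ can be sketched numerically. The text does not specify how $R$ is obtained; the sketch below assumes the noise covariance $\Sigma$ is known and takes $R = \Sigma^{-1/2}$ via an eigendecomposition, so that $R \Sigma R^{\mathsf{T}} = I$ (i.e., the whitened noise $Re$ has identity covariance). All sizes and names here are illustrative assumptions.

```python
# Sketch of pre-whitening under an assumed known noise covariance Sigma.
import numpy as np

rng = np.random.default_rng(1)
m, n = 4, 8
A = rng.standard_normal((m, n))
x = rng.standard_normal(n)

# An arbitrary symmetric positive-definite noise covariance (illustrative).
B = rng.standard_normal((m, m))
Sigma = B @ B.T + m * np.eye(m)

# R = Sigma^(-1/2) from the eigendecomposition Sigma = V diag(w) V^T.
w, V = np.linalg.eigh(Sigma)
R = (V / np.sqrt(w)) @ V.T        # symmetric inverse square root

# Whitening check: R Sigma R^T should be the identity.
print(np.allclose(R @ Sigma @ R.T, np.eye(m)))   # True

e = np.linalg.cholesky(Sigma) @ rng.standard_normal(m)  # correlated noise
y = A @ x + e
A_bar, y_bar = R @ A, R @ y       # the pre-whitened quantities from the text
```

Any SR solver can then be run on $(\bar{A}, \bar{y})$ in place of $(A, y)$, which is the $\ell_1$ analogue of pre-whitening followed by matched filtering.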
