Principles of Modern Radar - Volume 2 (1891121537)

5.3 SR Algorithms

only a subset of vectors that are well separated is used for communication. This separation allows the symbols to be accurately distinguished by the receiver. CS techniques that leverage structured sparsity operate on a similar principle.

Before delving into more elaborate approaches, we mention that the simplest prior knowledge that we might exploit would be differing prior probabilities on which coefficients of x_true are nonzero. This information could easily be available in various radar scenarios and other problems. For example, in a surveillance application looking for vehicles on a road network, it might make sense to drastically reduce the probability of target presence at large distances from roadways. This approach can be implemented by replacing λ with a diagonal weighting matrix when solving QP_λ. This modification can be incorporated into any CS algorithm; see problem 9. Notice that this scheme is closely related to the iterative reweighting reconstruction algorithms. Indeed, the authors in [86] suggest incorporating prior information into the reconstruction weights. As another example, in [110] the authors determine optimal weighting schemes for a signal with two groups of coefficients that share different prior probabilities.

However, we can also incorporate information about the structure of the sparsity pattern rather than simply assigning weights to specific elements. An excellent intuitive example of this idea is found in [111]. The authors seek to detect changes between images. Naturally, these change-detection images will be sparse. However, they will also be clumpy, in that the nonzero coefficients will tend to occur in tightly clustered groups. The authors use a Markov random field as a prior on x_true to promote clustering of nonzero coefficients.
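The per-coefficient weighting of λ described above can be sketched with a small iterative soft-thresholding (ISTA) solver. The solver choice, dimensions, and the particular weight values below are illustrative assumptions, not the book's implementation; the point is only that the scalar λ of QP_λ becomes a vector, so elements believed likely to be nonzero (e.g., near a roadway) receive a smaller penalty.

```python
import numpy as np

def weighted_ista(A, y, lam, n_iter=500):
    """Solve the weighted variant of QP_lambda:
        minimize 0.5 * ||A x - y||^2 + sum_i lam[i] * |x_i|
    via iterative soft thresholding. A scalar lam is the usual penalty;
    a vector lam encodes per-coefficient prior information."""
    lam = np.broadcast_to(np.asarray(lam, dtype=float), (A.shape[1],))
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        z = x - A.T @ (A @ x - y) / L      # gradient step on the data-fit term
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # per-element shrink
    return x

# Toy example (hypothetical numbers): coefficients 0-4 are a priori likely
# to be active, so they get a much smaller weight than the remaining ones.
rng = np.random.default_rng(0)
A = rng.standard_normal((40, 100)) / np.sqrt(40)
x_true = np.zeros(100)
x_true[[1, 3]] = [2.0, -1.5]
y = A @ x_true
lam = np.full(100, 0.5)
lam[:5] = 0.05
x_hat = weighted_ista(A, y, lam)
```

Because the weighting enters only through the threshold, the same modification drops into essentially any shrinkage-based CS solver, which is the sense in which the text says it can be incorporated into any CS algorithm.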
Motivated by the Markov model, they propose an intuitive greedy algorithm known as LaMP. The algorithm is reminiscent of CoSaMP. The primary difference is that, rather than simply pruning to enforce the desired sparsity after computing the error term, LaMP uses the known structure of the sparsity pattern to determine the most likely support set for the nonzero coefficients. LaMP performs significantly better than CoSaMP on signals that satisfy this structural assumption.

Baraniuk et al. [112] extend the RIP framework to deal with structured sparse signals and provide RIP-like performance guarantees for modified versions of CoSaMP and IHT. Like the LaMP algorithm, the key to the improved performance is the exploitation of additional signal structure. For some classes of structured sparsity, the algorithm is able to reconstruct sparse signals from order s measurements, as opposed to the order s log(N/s) required by algorithms that focus on simple sparsity. In particular, their approach is developed for signals characterized by wavelet trees and block sparsity. See [113] for additional discussion of these ideas with connections to graphical models. Block sparsity is also exploited in [114]. One can also consider the joint sparsity of multiple signals, as in [115]. In [116], the authors extend the RIP condition to block-sparse signals and develop recovery algorithms for them.

On a final note, [117] combines belief-propagation-based CS algorithms with ideas from turbo equalization [118] to develop an innovative framework for incorporating structured sparsity information. The algorithm alternates between two decoding blocks. The first block exploits the structure of the measurements, while the second block leverages the known structure of the sparsity patterns.
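To make the structured-pruning idea concrete, the sketch below implements a block-sparse variant of IHT: the hard-thresholding step keeps the few contiguous blocks with the largest energy instead of the largest individual coefficients. This is in the spirit of the model-based IHT of [112], but the dimensions, block layout, iteration count, and step size are illustrative assumptions rather than the published algorithm.

```python
import numpy as np

def block_prune(x, block_size, n_blocks):
    """Structured pruning: keep the n_blocks contiguous blocks with the
    largest l2 energy and zero everything else. This replaces the
    'keep the s largest entries' prune used for simple sparsity."""
    blocks = x.reshape(-1, block_size)
    keep = np.argsort(np.linalg.norm(blocks, axis=1))[-n_blocks:]
    out = np.zeros_like(blocks)
    out[keep] = blocks[keep]
    return out.ravel()

def block_iht(A, y, block_size, n_blocks, n_iter=200):
    """Iterative hard thresholding with the block-sparse projection above:
    a gradient step on ||y - A x||^2 followed by projection onto the
    block-sparse signal model."""
    x = np.zeros(A.shape[1])
    step = 1.0 / np.linalg.norm(A, 2) ** 2
    for _ in range(n_iter):
        x = block_prune(x + step * (A.T @ (y - A @ x)), block_size, n_blocks)
    return x

# Toy recovery: 2 active blocks of 4 coefficients each in N = 64,
# from 48 random measurements (hypothetical sizes).
rng = np.random.default_rng(1)
A = rng.standard_normal((48, 64)) / np.sqrt(48)
x_true = np.zeros(64)
x_true[8:12] = [1.0, -2.0, 1.5, 0.5]
x_true[36:40] = [2.0, 1.0, -1.0, 1.5]
y = A @ x_true
x_hat = block_iht(A, y, block_size=4, n_blocks=2)
```

The only change from plain IHT is the projection step, which illustrates how knowledge of the sparsity pattern's structure shrinks the effective search space and, per [112], the number of measurements required.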
By exchanging soft decisions, that is, probabilities that individual elements of x_true are nonzero, these two blocks cooperatively identify the support of the true signal. The paper demonstrates excellent results for simulations using data derived from a simple Markov chain model. This approach appears promising for incorporating structural and prior information into SR and related inference tasks.
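To illustrate what the second, pattern-exploiting block might do with such soft decisions, the toy sketch below (an illustrative stand-in, not the algorithm of [117]) runs a forward-backward pass over a two-state Markov chain prior that favors runs of nonzero coefficients: per-element probabilities come in, smoothed posteriors go out, so isolated detections are damped and clustered ones reinforced. The transition probability p_stay and the example inputs are assumptions.

```python
import numpy as np

def smooth_support(p_obs, p_stay=0.8):
    """Forward-backward smoothing of per-element soft decisions
    p_obs[i] = P(x_i nonzero) under a 2-state Markov chain prior
    (states: 0 = zero, 1 = nonzero) whose self-transition probability
    p_stay favors clustered supports. Returns updated posteriors."""
    p_obs = np.asarray(p_obs, dtype=float)
    n = len(p_obs)
    T = np.array([[p_stay, 1 - p_stay],
                  [1 - p_stay, p_stay]])          # transition matrix
    lik = np.stack([1.0 - p_obs, p_obs], axis=1)  # per-element likelihoods
    alpha = np.zeros((n, 2))
    beta = np.ones((n, 2))
    alpha[0] = 0.5 * lik[0]                       # uniform initial state
    alpha[0] /= alpha[0].sum()
    for t in range(1, n):                         # forward pass
        alpha[t] = lik[t] * (alpha[t - 1] @ T)
        alpha[t] /= alpha[t].sum()
    for t in range(n - 2, -1, -1):                # backward pass
        beta[t] = T @ (lik[t + 1] * beta[t + 1])
        beta[t] /= beta[t].sum()
    post = alpha * beta
    return post[:, 1] / post.sum(axis=1)

# An isolated high probability is damped; a clustered run is reinforced.
spike = np.full(10, 0.1)
spike[5] = 0.9
run = np.full(10, 0.1)
run[4:7] = 0.9
post_spike = smooth_support(spike)
post_run = smooth_support(run)
```

In a full turbo-style scheme, these smoothed probabilities would be fed back to the measurement block as updated priors and the two blocks iterated to convergence; only the pattern half is sketched here.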
