Boyd Convex Optimization book - SFU Wiki


6 Approximation and fitting

Thus we seek a function f ∈ F that fits the data well,

    f(u_i) ≈ y_i,   i = 1, …, m,

with a sparse coefficient vector x, i.e., card(x) small. In this case we refer to

    f = x_1 f_1 + · · · + x_n f_n = ∑_{i∈B} x_i f_i,

where B = {i | x_i ≠ 0} is the set of indices of the chosen basis elements, as a sparse description of the data. Mathematically, basis pursuit is the same as the regressor selection problem (see §6.4), but the interpretation (and scale) of the optimization problem are different.

Sparse descriptions and basis pursuit have many uses. They can be used for de-noising or smoothing, or data compression for efficient transmission or storage of a signal. In data compression, the sender and receiver both know the dictionary, or basis elements. To send a signal to the receiver, the sender first finds a sparse representation of the signal, and then sends to the receiver only the nonzero coefficients (to some precision). Using these coefficients, the receiver can reconstruct (an approximation of) the original signal.
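As a minimal sketch of this transmission scheme (the function names and the rounding precision are illustrative assumptions, not from the text), suppose the basis functions evaluated at the sample points are stacked as columns of a matrix A, so the signal is A @ x:

```python
import numpy as np

def encode(x, decimals=4):
    """Sender side: transmit only the indices and (rounded) values of the
    nonzero coefficients of the sparse representation x. The rounding
    precision here is an arbitrary illustrative choice."""
    idx = np.flatnonzero(x)
    return idx, np.round(x[idx], decimals)

def decode(idx, vals, A):
    """Receiver side: rebuild the coefficient vector from the received
    index/value pairs and synthesize the signal with the shared
    dictionary matrix A."""
    x = np.zeros(A.shape[1])
    x[idx] = vals
    return A @ x
```

Only card(x) index/value pairs cross the channel, rather than the m signal samples themselves.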

One common approach to basis pursuit is the same as the method for regressor selection described in §6.4, based on ℓ_1-norm regularization as a heuristic for finding sparse descriptions. We first solve the convex problem

    minimize   ∑_{i=1}^m (f(u_i) − y_i)^2 + γ‖x‖_1,        (6.18)

where γ > 0 is a parameter used to trade off the quality of the fit to the data against the sparsity of the coefficient vector. The solution of this problem can be used directly, or followed by a refinement step, in which the best fit is found using the sparsity pattern of the solution of (6.18). In other words, we first solve (6.18) to obtain x̂. We then set B = {i | x̂_i ≠ 0}, i.e., the set of indices corresponding to nonzero coefficients. Then we solve the least-squares problem

    minimize   ∑_{i=1}^m (f(u_i) − y_i)^2

with variables x_i, i ∈ B, and x_i = 0 for i ∉ B.
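The two-step heuristic above can be sketched numerically. In this illustration the basis functions evaluated at the data points are stacked as columns of a matrix A, so f(u_i) = (Ax)_i; the iterative soft-thresholding (ISTA) solver for (6.18) is a simple substitute method of my own choosing, not the book's, and all names are assumptions:

```python
import numpy as np

def soft_threshold(z, t):
    """Proximal operator of t*||.||_1: shrink each entry toward zero by t."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def basis_pursuit_heuristic(A, y, gamma, n_iter=5000, tol=1e-6):
    """Two-step heuristic: (1) solve ||Ax - y||^2 + gamma*||x||_1, i.e.
    problem (6.18), by iterative soft-thresholding; (2) refit by least
    squares restricted to the support B = {i | x_i != 0}."""
    L = 2.0 * np.linalg.norm(A, 2) ** 2   # Lipschitz constant of grad ||Ax-y||^2
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = 2.0 * A.T @ (A @ x - y)
        x = soft_threshold(x - grad / L, gamma / L)
    B = np.abs(x) > tol                   # numerical version of {i | x_i != 0}
    x_hat = np.zeros_like(x)
    if B.any():                           # refinement: least squares on B only
        x_hat[B] = np.linalg.lstsq(A[:, B], y, rcond=None)[0]
    return x_hat, B
```

The refit removes the shrinkage bias that the ℓ_1 penalty introduces on the surviving coefficients, while keeping the sparsity pattern found in the first step.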

In basis pursuit and sparse description applications it is not uncommon to have a very large dictionary, with n on the order of 10^4 or much more. To be effective, algorithms for solving (6.18) must exploit problem structure, which derives from the structure of the dictionary signals.

Time-frequency analysis via basis pursuit

In this section we illustrate basis pursuit and sparse representation with a simple example. We consider functions (or signals) on R, with the range of interest [0, 1]. We think of the independent variable as time, so we use t (instead of u) to denote it.

We first describe the basis functions in the dictionary. Each basis function is a Gaussian sinusoidal pulse, or Gabor function, with form

    e^{−(t−τ)^2/σ^2} cos(ωt + φ),
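A Gabor atom of this form is straightforward to evaluate numerically. The following sketch samples one pulse and a small dictionary on [0, 1]; the width σ, the grids of centers τ and frequencies ω, and the sampling resolution are illustrative choices of mine, not the ones used in the book's example:

```python
import numpy as np

def gabor(t, tau, sigma, omega, phi):
    """Gaussian sinusoidal pulse e^{-(t-tau)^2/sigma^2} cos(omega*t + phi)."""
    return np.exp(-((t - tau) ** 2) / sigma ** 2) * np.cos(omega * t + phi)

# Sample a small dictionary on [0, 1]: every combination of a grid of
# pulse centers tau and a few frequencies omega, one atom per column.
t = np.linspace(0.0, 1.0, 501)
dictionary = np.column_stack(
    [gabor(t, tau, 0.05, omega, 0.0)
     for tau in np.linspace(0.0, 1.0, 11)
     for omega in (0.0, 10.0, 20.0)]
)
```

Stacked this way, the dictionary columns play the role of the basis signals f_1, …, f_n in the fitting problems above.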
