

can be very sensitive to errors in the data z even though the solutions are, strictly speaking, continuous functions of the data.

The need to obtain stable solutions to ill-posed problems has led to several methods in which the original problem is modified so that the solution of the modified problem is close to the solution of the original problem but less sensitive to errors in the data. These methods are often called regularization methods. In regularization methods, information about the desirable solution is used to modify the original problem. In this sense the regularization approach is related to the Bayesian approach to estimation.

Several regularization methods can be interpreted as stabilization of the inversion of the matrix H^T H in the least squares solution. Such methods include, for example, Tikhonov regularization and principal component regression. Another approach to regularization is the use of truncated iterative methods, such as the conjugate gradient method, for solving linear systems of equations [78]. The truncated singular value decomposition [75, 33] is also commonly used. All regularization methods can be interpreted in the same formalism with the so-called filter factors, see e.g. [182]. General references on regularization methods are e.g. [209, 72] or [56], and especially [201] in connection with Bayes estimation.
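To make the filter factor formalism concrete, the following is a minimal NumPy sketch (our own illustration with an invented test problem; the names and data are not from the thesis). It assumes the standard form W = I, L_2 = I, θ* = 0, in which both the truncated SVD solution and the Tikhonov solution can be written as θ̂ = Σ_i f_i (u_i^T z / σ_i) v_i, differing only in the filter factors f_i:

    # Illustrative sketch of the filter factor formalism (not from the thesis).
    import numpy as np

    def filtered_solution(H, z, f):
        # Regularized solution from the SVD of H with filter factors f:
        # theta_hat = sum_i f[i] * (u_i^T z / s_i) * v_i
        U, s, Vt = np.linalg.svd(H, full_matrices=False)
        return Vt.T @ (f * (U.T @ z) / s)

    # Invented ill-conditioned test problem with noisy data.
    rng = np.random.default_rng(0)
    H = rng.standard_normal((50, 20)) @ np.diag(0.5 ** np.arange(20))
    z = H @ np.ones(20) + 0.01 * rng.standard_normal(50)

    s = np.linalg.svd(H, compute_uv=False)
    k, alpha = 10, 1e-2
    f_tsvd = (np.arange(s.size) < k).astype(float)  # truncated SVD: f_i in {0, 1}
    f_tikh = s**2 / (s**2 + alpha**2)               # Tikhonov: smooth damping

    theta_tsvd = filtered_solution(H, z, f_tsvd)
    theta_tikh = filtered_solution(H, z, f_tikh)

Truncated SVD discards the components with small singular values abruptly, while Tikhonov regularization damps them smoothly; both stabilize the inversion of H^T H.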

3.2 Tikhonov regularization<br />

We discuss here the most popular regularization method, the Tikhonov regularization method [208, 207], for the solution of the least squares problem. We define the generalized Tikhonov regularized solution θ̂_α by the equation

\hat{\theta}_\alpha = \arg\min_{\theta} \left\{ \|L_1(H\theta - z)\|^2 + \alpha^2 \|L_2(\theta - \theta^*)\|^2 \right\}    (3.4)

where L_1^T L_1 = W is a positive definite matrix. The regularization parameter α controls the weight given to minimization of the side constraint

\Omega(\theta) = \|L_2(\theta - \theta^*)\|^2    (3.5)

relative to minimization of the weighted least squares index

l_{LS} = \|L_1(H\theta - z)\|^2    (3.6)

The term θ* is the initial (prior) guess for the solution. In more common definitions W = I, see e.g. [77].
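Setting the gradient of (3.4) to zero gives the normal equations (H^T W H + α² L_2^T L_2) θ̂_α = H^T W z + α² L_2^T L_2 θ*, with W = L_1^T L_1. As a minimal NumPy sketch (our own illustration, not from the thesis), θ̂_α can be computed by rewriting (3.4) as a single stacked least squares problem, which avoids forming H^T W H explicitly:

    # Illustrative sketch (not from the thesis): generalized Tikhonov
    # solution of (3.4) as an ordinary least squares problem,
    #   minimize || [L1 H; alpha L2] theta - [L1 z; alpha L2 theta*] ||^2
    import numpy as np

    def tikhonov_solution(H, z, L1, L2, theta_star, alpha):
        A = np.vstack([L1 @ H, alpha * L2])
        b = np.concatenate([L1 @ z, alpha * (L2 @ theta_star)])
        return np.linalg.lstsq(A, b, rcond=None)[0]

Expanding ‖Aθ − b‖² recovers exactly the two terms of (3.4), so the stacked problem and the normal equations have the same minimizer.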

The matrix L_2 is typically either the identity matrix I or a discrete approximation D_d of the d'th derivative operator. In this case L_2 is a banded matrix with full row rank. For example, the first and second difference matrices are

D_1 = \begin{pmatrix}
1 & -1 & 0 & \cdots & 0 \\
0 & \ddots & \ddots & \ddots & \vdots \\
\vdots & & \ddots & \ddots & 0 \\
0 & \cdots & 0 & 1 & -1
\end{pmatrix}    (3.7)
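As a small illustration (our own NumPy sketch, not from the thesis), D_d can be built by applying the first difference d times; the rows of D_2 then have the familiar form (1, −2, 1):

    # Illustrative sketch (not from the thesis): banded difference matrices.
    import numpy as np

    def diff_matrix(n, d):
        # d'th difference operator D_d as an (n - d) x n banded matrix.
        D = np.eye(n)
        for _ in range(d):
            D = D[:-1, :] - D[1:, :]   # rows theta_i - theta_{i+1}, as in (3.7)
        return D

    D1 = diff_matrix(5, 1)   # rows: (1, -1, 0, 0, 0), ..., matching (3.7)
    D2 = diff_matrix(5, 2)   # rows: (1, -2, 1, 0, 0), ...

Used as L_2 in (3.4), such a D_d penalizes the d'th differences of θ and thus favours smooth solutions.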
