Convergence behaviour of the Newton method:

[Fig.: two panels, "Damped Newton method" and "Not-damped Newton method"; value of ∥F(x^(k))∥₂ and norm of grad Φ(x^(k)) plotted against the iteration step k on a logarithmic scale.]
Idea: damping of the Gauss-Newton correction in (6.5.7) using a penalty term:

instead of ∥F(x^(k)) + DF(x^(k))s∥₂   minimize   ∥F(x^(k)) + DF(x^(k))s∥₂² + λ∥s∥₂² .

λ > 0 ≙ penalty parameter (how to choose it? → heuristic)

λ = γ∥F(x^(k))∥₂ ,   with
γ := 10 ,   if ∥F(x^(k))∥₂ ≥ 10 ,
γ := 1 ,    if 1 < ∥F(x^(k))∥₂ < 10 ,
γ := 0.01 , if ∥F(x^(k))∥₂ ≤ 1 .
Modified (regularized) equation for the corrector s:

( DF(x^(k))^T DF(x^(k)) + λI ) s = −DF(x^(k))^T F(x^(k)) .   (6.5.8)
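For illustration, a minimal Python/NumPy sketch of one step of this damped Gauss-Newton iteration, combining the heuristic choice of λ above with the regularized equation (6.5.8); the callables F and DF for the residual and its Jacobian are assumptions, to be supplied by the user:

```python
import numpy as np

def damped_gauss_newton_step(x, F, DF):
    """One damped Gauss-Newton step: corrector s from the regularized equation (6.5.8)."""
    Fx = F(x)                    # residual F(x^(k)), shape (m,)
    J = DF(x)                    # Jacobian DF(x^(k)), shape (m, n)
    nrm = np.linalg.norm(Fx)     # ||F(x^(k))||_2
    # heuristic penalty parameter: lambda = gamma * ||F(x^(k))||_2
    gamma = 10.0 if nrm >= 10 else (1.0 if nrm > 1 else 0.01)
    lam = gamma * nrm
    # regularized normal equations (6.5.8): (DF^T DF + lambda*I) s = -DF^T F
    s = np.linalg.solve(J.T @ J + lam * np.eye(x.size), -J.T @ Fx)
    return x + s
```

Solving via the normal equations keeps the sketch close to (6.5.8); for ill-conditioned Jacobians one would rather solve the equivalent linear least-squares problem with the augmented matrix [DF; √λ·I].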
initial value (1.8, 1.8, 0.1)^T (red curve) ➤ Newton method caught in local minimum,
initial value (1.5, 1.5, 0.1)^T (cyan curve) ➤ fast (locally quadratic) convergence.
Gauss-Newton method:
initial value (1.8, 1.8, 0.1)^T (red curve),
initial value (1.5, 1.5, 0.1)^T (cyan curve),
convergence in both cases.

Notice: linear convergence.

[Fig.: "Gauss-Newton method"; value of ∥F(x^(k))∥₂ and norm of the corrector plotted against the iteration step k on a logarithmic scale.]
✸
6.5.3 Trust region method (Levenberg-Marquardt method)
As in the case of Newton's method for non-linear systems of equations (see Sect. 3.4.4), overshooting of the Gauss-Newton corrections often occurs.
Remedy as in the case of Newton's method: damping, as in the sketch below.
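Purely as an illustration (the acceptance test used here, a decrease of ∥F∥₂, is our assumption; the notes do not fix a specific criterion), damping by successive halving of the correction could look like this in Python:

```python
import numpy as np

def damped_correction(x, s, F, lam_min=2**-10):
    """Apply the correction s with damping factor lam, halving until ||F||_2 decreases."""
    lam = 1.0
    while lam >= lam_min:
        x_new = x + lam * s
        if np.linalg.norm(F(x_new)) < np.linalg.norm(F(x)):
            return x_new          # accept the damped step
        lam /= 2                  # halve the damping factor and try again
    raise RuntimeError("no admissible damping factor found")
```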
7 Filtering Algorithms

Perspective of signal processing:

vector x ∈ R^n ↔ finite discrete (= sampled) signal.

X = X(t) ≙ time-continuous signal, 0 ≤ t ≤ T ,
"sampling": x_j = X(j∆t) , j = 0, ..., n−1 , n ∈ N , n∆t ≤ T ,
∆t > 0 ≙ time between samples.

Sampled values arranged in a vector x = (x_0, ..., x_{n−1})^T ∈ R^n.

Note: vector indices 0, ..., n−1 ! ("C-style indexing").

[Fig. 90: sampling of the time-continuous signal X = X(t): values x_0, x_1, x_2, ..., x_{n−2}, x_{n−1} taken at times t_0, t_1, t_2, ..., t_{n−2}, t_{n−1}.]
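A minimal Python sketch of this sampling convention; the concrete signal X below is just a placeholder assumption:

```python
import numpy as np

T, n = 1.0, 16                  # final time and number of samples, n*dt <= T
dt = T / n                      # dt = time between samples
t = dt * np.arange(n)           # sampling times t_j = j*dt, j = 0, ..., n-1

def X(t):
    """Placeholder time-continuous signal (assumption)."""
    return np.sin(2 * np.pi * t)

x = X(t)                        # sampled signal x = (x_0, ..., x_{n-1})^T in R^n
```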