Segmentation of Stochastic Images using ... - Jacobs University

4.4 Generalized Spectral Decomposition

functions are not fixed a priori. With the flexible basis functions, we find a solution with significantly fewer modes, i.e. K ≪ N, but nearly the same approximation quality.

Nouy [113] showed how to compute the modes of an optimal approximation in the energy norm \|v\|_A^2 = E(v^T A v) of the problem, i.e. such that

\[ \Big\| u - \sum_{j=1}^{K} \lambda_j U_j \Big\|_A^2 = \min_{\gamma, V} \Big\| u - \sum_{j=1}^{K} \gamma_j V_j \Big\|_A^2 . \]  (4.19)
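In practice the expectation in the energy norm is rarely available in closed form, but it can be estimated from realizations. The following sketch assumes, purely for illustration, a deterministic SPD matrix A and sampled coefficient vectors; the function names are not from the text:

```python
import numpy as np

def energy_norm_sq(A, v_samples):
    # Monte-Carlo estimate of ||v||_A^2 = E(v^T A v);
    # v_samples has one realization of v per row.
    return np.mean(np.einsum('si,ij,sj->s', v_samples, A, v_samples))

def separated_eval(lam_samples, U):
    # Realizations of a separated representation sum_j lambda_j U_j:
    # lam_samples is (m, K) with samples of the coefficients lambda_j,
    # U is (n, K) with the deterministic modes as columns.
    return lam_samples @ U.T
```

Comparing `energy_norm_sq(A, u_samples - separated_eval(lam_samples, U))` for growing K then quantifies the approximation quality of the truncated representation.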

The next sections provide details about the GSD method, proofs of the optimality of the approximation, and implementation details. Further details about the GSD method can be found in [113].

4.4.1 Best Approximation<br />

For deterministic linear systems of equations, it is possible to formulate an associated minimization problem, whose solution is the same as the solution of the weak formulation. For the discrete version of SPDEs, this minimization problem allows developing efficient methods for the solution of the weak formulation.

The discrete version of the problem (4.15) is equivalent to the minimization problem

\[ J(u) = \min_{v \in \mathbb{R}^n \otimes S^p} J(v), \quad \text{where} \quad J(v) = E\Big( \tfrac{1}{2} v^T A v - v^T b \Big) . \]  (4.20)
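The equivalence between the linear system and the minimization of the functional can be checked numerically in the deterministic special case, where the expectation disappears. A small sketch (the matrix and right-hand side are made-up illustration data):

```python
import numpy as np

def J(v, A, b):
    # Discrete functional J(v) = 0.5 v^T A v - v^T b
    # (deterministic special case of (4.20), no expectation).
    return 0.5 * v @ A @ v - v @ b

A = np.array([[4.0, 1.0], [1.0, 3.0]])   # made-up SPD matrix
b = np.array([1.0, 2.0])                 # made-up right-hand side
u = np.linalg.solve(A, b)                # solution of A u = b

# The solution of the linear system minimizes J: any perturbation
# of u increases the functional value.
rng = np.random.default_rng(0)
assert all(J(u, A, b) <= J(u + 0.1 * rng.standard_normal(2), A, b)
           for _ in range(100))
```

For SPD A the functional is strictly convex, so the minimizer is unique and coincides with the solution of the linear system.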

This equivalence is well-known for deterministic problems, but it holds for the expectation in stochastic equations, too. Using this relation, the best approximation of order M satisfies

\[ J\Big( \sum_{i=1}^{M} \lambda_i U_i \Big) = \min_{\substack{V_1,\ldots,V_M \in \mathbb{R}^n \\ \gamma_1,\ldots,\gamma_M \in S^p}} J\Big( \sum_{i=1}^{M} \gamma_i V_i \Big) . \]  (4.21)

It is well-known that in the deterministic setting the best approximation can be defined recursively: let (λ_1, ..., λ_{M−1}), (U_1, ..., U_{M−1}) be the best approximation of order M − 1. Then the best approximation of order M satisfies

\[ J\Big( \sum_{i=1}^{M} \lambda_i U_i \Big) = \min_{\substack{V \in \mathbb{R}^n \\ \gamma \in S^p}} J\Big( \gamma V + \sum_{j=1}^{M-1} \lambda_j U_j \Big) . \]  (4.22)
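Computationally, the minimization over the single new pair (γ, V) in (4.22) can be carried out by alternating between the two variables, since the functional is quadratic in each of them separately. The sketch below makes strong simplifying assumptions that are not part of the text: A is a deterministic SPD matrix, the basis of S^p is orthonormal, γ is identified with its coefficient vector g ∈ R^p, and the right-hand side is stored as a matrix B whose columns are its chaos coefficients; all names are illustrative. Under these assumptions J(γV) reduces to 0.5 ||g||^2 V^T A V − V^T B g:

```python
import numpy as np

def gsd_rank_one(A, B, iters=100):
    # Alternating minimization of J(gamma V) = 0.5 ||g||^2 V^T A V - V^T B g
    # over the deterministic mode V and the stochastic coefficients g
    # (sketch under the simplifying assumptions stated in the text above).
    n, p = B.shape
    g = np.ones(p)                                # generic starting gamma
    V = np.ones(n)
    for _ in range(iters):
        V = np.linalg.solve(A, B @ g) / (g @ g)   # optimal V for fixed g
        g = (B.T @ V) / (V @ A @ V)               # optimal g for fixed V
    return g, V
```

The scalings of g and V are not individually determined, only their product; for a rank-one right-hand side B = A v0 g0^T the iteration recovers the pair exactly (up to this scaling) in a single sweep.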

This recursive definition does not hold in general in the stochastic case (see the following calculations), but numerical tests show that we achieve good approximations for stochastic operators. With the recursive definition, we develop efficient numerical schemes for the solution of the minimization problem. Using the recursive definition, the functional decomposes into two parts:

\[ J\Big( \lambda_M U_M + \sum_{i=1}^{M-1} \lambda_i U_i \Big) = E\Big( \tfrac{1}{2} (\lambda_M U_M)^T A u - (\lambda_M U_M)^T b \Big) + \underbrace{E\Big( \tfrac{1}{2} \Big( \sum_{i=1}^{M-1} \lambda_i U_i \Big)^T A u - \Big( \sum_{i=1}^{M-1} \lambda_i U_i \Big)^T b \Big)}_{\text{already minimized}} . \]  (4.23)

The second summand of the equation is already minimized due to the recursive definition of the minimization. Introducing the residual values

\[ \tilde{u} = u - \sum_{i=1}^{M-1} \lambda_i U_i \quad \text{and} \quad \tilde{b} = b - A \sum_{i=1}^{M-1} \lambda_i U_i \]  (4.24)
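The residual quantities ũ and b̃ suggest a greedy scheme: compute one mode, subtract its contribution from the right-hand side, and repeat on the deflated problem. A sketch under simplifying assumptions that are illustrative only and not the thesis' discretization (A deterministic and SPD, orthonormal chaos basis, right-hand side stored as a coefficient matrix B, each mode found by alternating minimization):

```python
import numpy as np

def gsd_greedy(A, B, K, inner=100):
    # Greedy computation of K modes: each outer step finds one pair (g, U)
    # by alternating minimization, then deflates the right-hand side as in
    # (4.24): b~ = b - A * sum_i lambda_i U_i.
    n, p = B.shape
    modes = []
    R = B.copy()                                  # deflated right-hand side b~
    for _ in range(K):
        g, U = np.ones(p), np.ones(n)
        for _ in range(inner):                    # alternating minimization
            U = np.linalg.solve(A, R @ g) / (g @ g)
            g = (R.T @ U) / (U @ A @ U)
        modes.append((g, U))
        R = R - A @ np.outer(U, g)                # deflation step (4.24)
    return modes
```

In this simplified setting the greedy step extracts the dominant mode of the deflated problem in the A-weighted norm, so for a right-hand side of exact rank K the scheme recovers the solution after K deflation steps.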

