Segmentation of Stochastic Images using ... - Jacobs University


Chapter 6

Segmentation of Stochastic Images Using Elliptic SPDEs

edge weight is a random variable, is given by the same expression as the classical edge weight, but the quantities extracted from the image are random variables. Thus, the random variable describing the edge weight of the edge between neighboring pixels $i$ and $j$ is, cf. (2.5),

$$w_{ij}(\xi) = \exp\bigl(-\beta\,(g_i(\xi) - g_j(\xi))^2\bigr)\,. \quad (6.1)$$

Replacing the random variables by their polynomial chaos expansions, we have to compute

$$w_{ij}(\xi) = \exp\left(-\beta \left( \sum_{\alpha=1}^{N} g_\alpha^i \Psi_\alpha(\xi) - \sum_{\alpha=1}^{N} g_\alpha^j \Psi_\alpha(\xi) \right)^{2} \right)\,. \quad (6.2)$$

Section 3.3 describes how to perform calculations for random variables represented in the polynomial chaos. Note that we do not calculate the exponential of the polynomial chaos expansion explicitly. Instead, we compute a Galerkin projection of the exponential in the polynomial chaos via (3.39).
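As an illustration of projecting the exponential onto the chaos basis, the following sketch uses a non-intrusive (quadrature-based) projection for a one-dimensional Hermite chaos. This is an assumption for illustration only: the thesis computes the projection via its own formula (3.39), which is not reproduced here, and the function name `exp_pc_projection` is hypothetical.

```python
import math
import numpy as np
from numpy.polynomial.hermite_e import hermegauss, hermeval

def exp_pc_projection(d_coeffs, beta, n_modes, n_quad=40):
    """Project w(xi) = exp(-beta * d(xi)^2) onto a 1-D Hermite chaos.

    d_coeffs are the PC coefficients of the pixel difference
    d(xi) = g_i(xi) - g_j(xi) in probabilists' Hermite polynomials He_a.
    Returns the first n_modes PC coefficients of the edge weight.
    """
    # Gauss-Hermite(e) rule for the weight exp(-x^2/2); rescale the
    # weights so they integrate against the standard normal density.
    nodes, weights = hermegauss(n_quad)
    weights = weights / np.sqrt(2.0 * np.pi)

    d_vals = hermeval(nodes, d_coeffs)    # d(xi) at the quadrature nodes
    w_vals = np.exp(-beta * d_vals ** 2)  # exponential evaluated pointwise

    coeffs = np.empty(n_modes)
    for a in range(n_modes):
        basis = np.zeros(a + 1)
        basis[a] = 1.0
        psi = hermeval(nodes, basis)      # He_a at the nodes
        # <w, He_a> / <He_a, He_a>, with <He_a, He_a> = a! under N(0,1)
        coeffs[a] = np.sum(weights * w_vals * psi) / math.factorial(a)
    return coeffs
```

For a deterministic difference (only the mean coefficient nonzero), the projection reduces to the classical scalar edge weight, which gives a quick sanity check.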

From the definition of the stochastic edge weights, it is easy to generalize the node degrees to stochastic node degrees represented in the polynomial chaos:

$$d_i(\xi) = \sum_{\{j \in V : e_{ij} \in E\}} w_{ij}(\xi) = \sum_{\{j \in V : e_{ij} \in E\}} \sum_{\alpha=1}^{N} w_\alpha^{i,j}\, \Psi_\alpha(\xi)\,. \quad (6.3)$$
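Because summing edge weights is linear, the degree coefficients are obtained by adding the edge-weight coefficients mode by mode. A minimal sketch (the dictionary-of-edges data structure is an assumption for illustration, not the thesis' implementation):

```python
import numpy as np

def stochastic_node_degrees(edge_coeffs, n_nodes):
    """Assemble the PC coefficients of the stochastic node degrees d_i(xi).

    edge_coeffs maps an undirected edge (i, j) to the length-N array of
    PC coefficients of its weight w_ij(xi).  Since the degree is a plain
    sum of edge weights, its PC coefficients are the mode-wise sums of
    the incident edge coefficient arrays.
    """
    n_modes = len(next(iter(edge_coeffs.values())))
    deg = np.zeros((n_nodes, n_modes))
    for (i, j), w in edge_coeffs.items():
        deg[i] += w   # edge contributes to both endpoints
        deg[j] += w
    return deg
```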

The normalization step, which ensures that the maximal difference between $g_i$ and $g_j$ is one, is not straightforward, because the quantities $g_i$ are random variables. For random variables, we instead normalize such that the maximal expected squared difference is one. This is achieved by dividing the squared difference of neighboring pixels by the maximal expected squared difference of neighboring pixels:

$$(g_i(\xi) - g_j(\xi))^2 = \frac{(u_i(\xi) - u_j(\xi))^2}{\max_{k,l \in V,\, e_{k,l} \in E} E\bigl((u_k(\xi) - u_l(\xi))^2\bigr)}\,. \quad (6.4)$$
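The denominator of (6.4) can be evaluated directly from the PC coefficients: by orthogonality of the chaos basis, $E(X^2) = \sum_\alpha x_\alpha^2 \langle \Psi_\alpha^2 \rangle$ for $X = \sum_\alpha x_\alpha \Psi_\alpha$. A sketch, assuming an orthonormal basis by default (the function name and array layout are illustrative assumptions):

```python
import numpy as np

def normalization_constant(u_coeffs, edges, psi_sq=None):
    """Maximal expected squared difference over the edge set, cf. (6.4).

    u_coeffs: (n_nodes, N) array of PC coefficients of the pixel values.
    edges: iterable of (k, l) index pairs.
    psi_sq: optional length-N array of <Psi_a^2>; defaults to an
    orthonormal basis (all ones).

    By orthogonality, E((u_k - u_l)^2) = sum_a (u^k_a - u^l_a)^2 <Psi_a^2>.
    """
    if psi_sq is None:
        psi_sq = np.ones(u_coeffs.shape[1])
    return max(np.sum((u_coeffs[k] - u_coeffs[l]) ** 2 * psi_sq)
               for k, l in edges)
```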

From the stochastic edge weights and the stochastic node degrees it is easy to build the stochastic analog of the Laplacian matrix given by, cf. (2.9),

$$L_{ij}(\xi) = \begin{cases} d_i(\xi) & \text{if } i = j \\ -w_{ij}(\xi) & \text{if } v_i \text{ and } v_j \text{ are adjacent nodes} \\ 0 & \text{otherwise} \end{cases} \;=\; \sum_{\alpha=1}^{N} L^\alpha\, \Psi_\alpha(\xi)\,. \quad (6.5)$$

The stochastic combinatorial Laplacian matrix has a representation in the polynomial chaos. The coefficient $L^\alpha$ in this polynomial chaos expansion is a matrix containing at position $L^\alpha_{ij}$ the $\alpha$th coefficient of the polynomial chaos expansion of $d_i(\xi)$ if $i = j$, of $-w_{ij}(\xi)$ if $v_i$ and $v_j$ are adjacent, and zero otherwise.
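Assembling the coefficient matrices $L^\alpha$ mode by mode mirrors the classical Laplacian assembly; each mode contributes one off-diagonal entry per edge and the corresponding diagonal (degree) increments. A minimal dense sketch (data layout is an illustrative assumption):

```python
import numpy as np

def stochastic_laplacian(edge_coeffs, n_nodes, n_modes):
    """Build the PC coefficient matrices L^a of the stochastic graph
    Laplacian, cf. (6.5): one n_nodes x n_nodes matrix per chaos mode.

    edge_coeffs maps an undirected edge (i, j) to the length-n_modes
    array of PC coefficients of its weight w_ij(xi).
    """
    L = np.zeros((n_modes, n_nodes, n_nodes))
    for (i, j), w in edge_coeffs.items():
        for a in range(n_modes):
            L[a, i, j] = L[a, j, i] = -w[a]  # off-diagonal: -w_ij
            L[a, i, i] += w[a]               # diagonal: degree terms
            L[a, j, j] += w[a]
    return L
```

Each $L^\alpha$ has zero row sums, exactly as the classical combinatorial Laplacian does.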

To define the linear system of equations to solve the stochastic random walker problem, we start with the stochastic analog of the weighted Dirichlet integral. It is given by taking the expected value of the classical weighted Dirichlet integral $R_w$ and inserting the stochastic quantities there:

$$E(R_w[u(\xi)]) = E\left( \frac{1}{2} \int_D w\, |\nabla u(\xi)|^2 \, dx \right)\,. \quad (6.6)$$

As for the classical energy (cf. Section 2.2), a minimizer is a harmonic function satisfying

$$\begin{aligned} -\nabla \cdot (w(\xi)\, \nabla u(\xi)) &= 0 && \text{in } D \times \Omega \\ u &= 1 && \text{on } V_O \\ u &= 0 && \text{on } V_B\,. \end{aligned} \quad (6.7)$$
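On the graph, the harmonic condition with the Dirichlet data of (6.7) reduces to a linear solve for the unlabeled nodes after eliminating the seeds. The following sketch solves only the deterministic analogue using a single (e.g. mean-mode) Laplacian; the full stochastic Galerkin system, which couples all chaos modes, is not shown, and the function name is hypothetical.

```python
import numpy as np

def harmonic_extension(L0, seeds_one, seeds_zero):
    """Graph analogue of the boundary value problem (6.7): find u with
    (L0 u)_i = 0 at unlabeled nodes, u = 1 on V_O, u = 0 on V_B.

    L0 is a dense graph Laplacian; as a simplification this uses one
    deterministic Laplacian, not the coupled stochastic system.
    """
    n = L0.shape[0]
    labeled = sorted(set(seeds_one) | set(seeds_zero))
    free = [i for i in range(n) if i not in set(labeled)]
    u = np.zeros(n)
    u[list(seeds_one)] = 1.0
    # Eliminate the labeled block: L_UU u_U = -L_UL u_L
    L_UU = L0[np.ix_(free, free)]
    L_UL = L0[np.ix_(free, labeled)]
    u[free] = np.linalg.solve(L_UU, -L_UL @ u[labeled])
    return u
```

On a three-node path with unit weights and seeds at the two endpoints, the middle node receives the value 1/2, as expected for a harmonic function interpolating the boundary data.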

