

Once more we can limit the fitting to the one-dimensional case, as all occurring distributions are rotationally symmetric. Hence we use $C_i = \sum_j S_{ij}$ as the observation data and fit with $G_i = N \int_{i-\frac{1}{2}}^{i+\frac{1}{2}} p_{1D}(x)\,dx$, where $p_{1D}$ denotes the one-dimensional Gaussian distribution centered in $x_0$ with standard deviation $\sigma$, i.e. $p_{1D}(x) = \frac{1}{\sqrt{2\pi}\,\sigma} \exp\!\left(-\frac{(x-x_0)^2}{2\sigma^2}\right)$.
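The following is a minimal numerical sketch of this pixel model; all concrete values ($N$, $x_0$, $\sigma$, the number of pixels) and the helper names are illustrative assumptions, not taken from the text.

```python
import numpy as np
from scipy.integrate import quad

# Illustrative parameters (assumptions for this sketch only).
N, x0, sigma = 1000.0, 4.3, 1.2   # photon count, true center, PSF width
pixels = np.arange(10)            # pixel indices i = 0, ..., 9

def p_1d(x, center=x0, width=sigma):
    """One-dimensional Gaussian centered in x0 with standard deviation sigma."""
    return np.exp(-(x - center) ** 2 / (2 * width ** 2)) / (np.sqrt(2 * np.pi) * width)

# Model values G_i = N * integral of p_1D over the pixel [i - 1/2, i + 1/2].
G = np.array([N * quad(p_1d, i - 0.5, i + 0.5)[0] for i in pixels])

# In practice C_i are the column sums S_ij of the measured spot; here we
# simulate them as Poisson counts around the model to have data to fit.
rng = np.random.default_rng(0)
C = rng.poisson(G).astype(float)
```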

We repeat the least squares approach
\[
0 = \frac{d}{dx_0}\left(\sum_i (C_i - G_i)^2\right) \quad\Longleftrightarrow\quad 0 = \sum_i 2\,(C_i - G_i)\,\frac{d}{dx_0} G_i,
\]
but now we need to calculate $\frac{d}{dx_0} G_i = N \frac{d}{dx_0} \int_{i-\frac{1}{2}}^{i+\frac{1}{2}} p_{1D}(x)\,dx$.

Luckily we can interchange integration and differentiation here, as $[i-\frac{1}{2},\, i+\frac{1}{2}]$ is finite, $p_{1D}(x)$ is continuous and $\frac{d}{dx_0}\, p_{1D}(x)$ exists and is continuous, too. Therefore we obtain
\[
\frac{d}{dx_0} G_i = N \int_{i-\frac{1}{2}}^{i+\frac{1}{2}} \frac{x - x_0}{\sigma^2}\, p_{1D}(x)\,dx
\]
and thus $0 = \sum_i (C_i - G_i) \int_{i-\frac{1}{2}}^{i+\frac{1}{2}} (x - x_0)\, p_{1D}(x)\,dx$.

Knowing that $e(x) = \frac{1}{2}\operatorname{erf}\!\left(\frac{x - x_0}{\sigma\sqrt{2}}\right)$ satisfies $e'(x) = p_{1D}(x)$, plugging in yields $G_i = N\left(e(i+\frac{1}{2}) - e(i-\frac{1}{2})\right)$. For simplicity we denote $e_{i\pm} = e(i \pm \frac{1}{2})$. Furthermore we can integrate
\[
\int_{i-\frac{1}{2}}^{i+\frac{1}{2}} (x - x_0)\, p_{1D}(x)\,dx = \left[-\sigma^2\, p_{1D}(x)\right]_{i-\frac{1}{2}}^{i+\frac{1}{2}}.
\]
For our least squares problem it follows that
\[
0 = \sum_i \left(C_i - N e_{i+} + N e_{i-}\right)\sigma^2\left[p_{1D}(i+\tfrac{1}{2}) - p_{1D}(i-\tfrac{1}{2})\right]
\]
\[
\Longleftrightarrow\quad 0 = \underbrace{\sum_i \left(C_i - N e_{i+} + N e_{i-}\right)\left(\exp\!\left(-\frac{(i+\frac{1}{2}-x_0)^2}{2\sigma^2}\right) - \exp\!\left(-\frac{(i-\frac{1}{2}-x_0)^2}{2\sigma^2}\right)\right)}_{=:\,f(x_0)}.
\]
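As a small illustration, $f(x_0)$ can be evaluated directly with the error function; a sketch in Python follows (the function name and argument order are assumptions of this sketch):

```python
import math

def f(x0, C, N, sigma):
    """f(x0) = sum_i (C_i - N*e_{i+} + N*e_{i-}) * (exp-term_plus - exp-term_minus),
    with e(x) = 0.5 * erf((x - x0) / (sigma * sqrt(2)))."""
    total = 0.0
    for i, c in enumerate(C):
        e_plus = 0.5 * math.erf((i + 0.5 - x0) / (sigma * math.sqrt(2)))
        e_minus = 0.5 * math.erf((i - 0.5 - x0) / (sigma * math.sqrt(2)))
        exp_plus = math.exp(-(i + 0.5 - x0) ** 2 / (2 * sigma ** 2))
        exp_minus = math.exp(-(i - 0.5 - x0) ** 2 / (2 * sigma ** 2))
        total += (c - N * e_plus + N * e_minus) * (exp_plus - exp_minus)
    return total
```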

Now we just need to solve this nonlinear equation. If we do not want to approximate, an application of Newton's method to $f(x_0)$, started in the pixel center of the local maximum, should suffice. In order to apply the iteration $x_{n+1} = x_n - \frac{f(x_n)}{f'(x_n)}$ we need to know $f'(x_0)$, too. As $e_{i\pm} = e(i \pm \frac{1}{2})$ depends on $x_0$, the product rule yields
\[
f'(x_0) = \sum_i \left(N p_{1D}(i+\tfrac{1}{2}) - N p_{1D}(i-\tfrac{1}{2})\right)\left(\exp\!\left(-\frac{(i+\frac{1}{2}-x_0)^2}{2\sigma^2}\right) - \exp\!\left(-\frac{(i-\frac{1}{2}-x_0)^2}{2\sigma^2}\right)\right)
\]
\[
\qquad + \sum_i \left(C_i - N e_{i+} + N e_{i-}\right)\left(\frac{i+\frac{1}{2}-x_0}{\sigma^2}\exp\!\left(-\frac{(i+\frac{1}{2}-x_0)^2}{2\sigma^2}\right) - \frac{i-\frac{1}{2}-x_0}{\sigma^2}\exp\!\left(-\frac{(i-\frac{1}{2}-x_0)^2}{2\sigma^2}\right)\right).
\]
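Putting the pieces together, a self-contained Newton iteration for the center estimate could look like the sketch below (the function name, the starting-value convention, the tolerance and the iteration cap are assumptions of this sketch, not prescribed by the text):

```python
import math

def gaussian_center_newton(C, N, sigma, x_start, tol=1e-8, max_iter=50):
    """Newton's method for f(x0) = 0, started in the pixel center of the
    local maximum (x_start), returning the estimated center x0."""
    sqrt_2pi_sigma = math.sqrt(2 * math.pi) * sigma
    x = x_start
    for _ in range(max_iter):
        f_val, df_val = 0.0, 0.0
        for i, c in enumerate(C):
            dp = i + 0.5 - x                      # distance to right pixel edge
            dm = i - 0.5 - x                      # distance to left pixel edge
            exp_p = math.exp(-dp ** 2 / (2 * sigma ** 2))
            exp_m = math.exp(-dm ** 2 / (2 * sigma ** 2))
            e_p = 0.5 * math.erf(dp / (sigma * math.sqrt(2)))
            e_m = 0.5 * math.erf(dm / (sigma * math.sqrt(2)))
            w = c - N * e_p + N * e_m             # weight C_i - N e_{i+} + N e_{i-}
            f_val += w * (exp_p - exp_m)
            # product rule: (d/dx0 of the weight) times the exp-difference ...
            df_val += (N * exp_p - N * exp_m) / sqrt_2pi_sigma * (exp_p - exp_m)
            # ... plus the weight times (d/dx0 of the exp-difference)
            df_val += w * (dp / sigma ** 2 * exp_p - dm / sigma ** 2 * exp_m)
        if df_val == 0.0:
            break
        step = f_val / df_val
        x -= step
        if abs(step) < tol:
            break
    return x
```

With simulated counts as in the first sketch, calling `gaussian_center_newton(C, N, sigma, x_start=float(np.argmax(C)))` would return an estimate of $x_0$.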

By denoting
\[
p_{i\pm}(x_n) := \exp\!\left(-\frac{(i \pm \frac{1}{2} - x_n)^2}{2\sigma^2}\right) = \sqrt{2\pi}\,\sigma\, p_{1D}(i \pm \tfrac{1}{2})
\]
and generalizing
\[
e_{i\pm}(x_n) := \frac{1}{2}\operatorname{erf}\!\left(\frac{i \pm \frac{1}{2} - x_n}{\sigma\sqrt{2}}\right)
\]
