Mathematics in Independent Component Analysis

Chapter 2. Neural Computation 16:1827-1850, 2004

Putting this in equation 3.4 yields

\[
\begin{aligned}
0 &= \bigl( f\,\partial_i\partial_j f - (\partial_i f)(\partial_j f) \bigr)(x)\\
  &= \sum_k b_{ki} b_{kj}\,\bigl( (g_1 \otimes \cdots \otimes g_n)\,(g_1 \otimes \cdots \otimes g_k'' \otimes \cdots \otimes g_n) - (g_1 \otimes \cdots \otimes g_k' \otimes \cdots \otimes g_n)^2 \bigr)(Bx)\\
  &= \sum_k b_{ki} b_{kj}\; g_1^2 \otimes \cdots \otimes g_{k-1}^2 \otimes \bigl( g_k g_k'' - g_k'^2 \bigr) \otimes g_{k+1}^2 \otimes \cdots \otimes g_n^2\,(Bx)
\end{aligned}
\]

for $x \in \mathbb{R}^n$. $B$ is invertible, so the whole function is zero:

\[
\sum_k b_{ki} b_{kj}\; g_1^2 \otimes \cdots \otimes g_{k-1}^2 \otimes \bigl( g_k g_k'' - g_k'^2 \bigr) \otimes g_{k+1}^2 \otimes \cdots \otimes g_n^2 \equiv 0. \tag{3.5}
\]
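
The middle step above is just the product rule applied to $f = (g_1 \otimes \cdots \otimes g_n) \circ B$. As a sanity check (a sketch of mine, not part of the paper), the identity can be verified symbolically for $n = 2$; the concrete test functions $g_1, g_2$ below are arbitrary choices:

```python
import sympy as sp

# Symbolic check, for n = 2, of the identity used above:
#   f*(d_i d_j f) - (d_i f)(d_j f)
#     = sum_k b_ki*b_kj * (g_k*g_k'' - g_k'^2)((Bx)_k) * prod_{m != k} g_m((Bx)_m)^2
# with f(x) = g_1((Bx)_1) * g_2((Bx)_2).
x1, x2, b11, b12, b21, b22, w = sp.symbols('x1 x2 b11 b12 b21 b22 w')

g = [sp.cos, lambda t: sp.exp(t**2)]      # arbitrary smooth test functions
y = [b11*x1 + b12*x2, b21*x1 + b22*x2]    # the two components of Bx
f = g[0](y[0]) * g[1](y[1])

# Left-hand side, with (i, j) = (1, 2).
lhs = f * sp.diff(f, x1, x2) - sp.diff(f, x1) * sp.diff(f, x2)

# Right-hand side: the factor g_k*g_k'' - g_k'^2 in slot k,
# the squared g_m in every other slot.
def wronskian(gk):
    return gk(w) * sp.diff(gk(w), w, 2) - sp.diff(gk(w), w)**2

B = [[b11, b12], [b21, b22]]
rhs = sum(B[k][0] * B[k][1] * wronskian(g[k]).subs(w, y[k]) * g[1 - k](y[1 - k])**2
          for k in range(2))

print(sp.simplify(lhs - rhs))             # prints 0
```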

Choose $x \in \mathbb{R}^n$ with $g_k(x_k) \neq 0$ for $k = 1, \ldots, n$. Evaluating equation 3.5 at $(x_1, \ldots, x_{l-1}, y, x_{l+1}, \ldots, x_n)$ for variable $y \in \mathbb{R}$ and dividing the resulting one-dimensional equation by the constant $g_1^2(x_1) \cdots g_{l-1}^2(x_{l-1})\,g_{l+1}^2(x_{l+1}) \cdots g_n^2(x_n)$ shows

\[
b_{li} b_{lj} \bigl( g_l g_l'' - g_l'^2 \bigr)(y) = -\left( \sum_{k \neq l} b_{ki} b_{kj}\, \frac{g_k g_k'' - g_k'^2}{g_k^2}(x_k) \right) g_l^2(y) \tag{3.6}
\]

for $y \in \mathbb{R}$. So for indices $l$ and $i \neq j$ with $b_{li} b_{lj} \neq 0$, it follows from equation 3.6 that there exists $a \in \mathbb{C}$ such that $g_l$ satisfies the differential equation $a g_l^2 - g_l g_l'' + g_l'^2 \equiv 0$, that is, equation 3.3 (take $a$ to be the parenthesized constant in equation 3.6 divided by $-b_{li} b_{lj}$).
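
For orientation, a standard one-line calculation (not spelled out on this page) identifies the solutions of equation 3.3. Writing a nonvanishing solution as $g_l = e^h$,

\[
g_l g_l'' - g_l'^2 = h''\,g_l^2, \qquad \text{so} \quad a g_l^2 - g_l g_l'' + g_l'^2 \equiv 0 \iff h'' \equiv a,
\]

so $h$ is a quadratic polynomial and $g_l(y) = \exp\bigl(\tfrac{a}{2} y^2 + \beta y + \gamma\bigr)$ for some constants $\beta, \gamma$; these are exactly the gaussian-type functions that the separability argument singles out.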

Proof of Theorem 2. i. $S$ is assumed to have at most one gaussian or deterministic component, and its covariance is assumed to exist. Set $X := AS$.

We first show, using whitening, that $A$ can be assumed to be orthogonal. For this we can assume $S$ and $X$ to have no deterministic component at all (an arbitrary choice of the matrix coefficients of the deterministic components does not change the covariance). Hence, by assumption, $\operatorname{Cov}(X)$ is diagonal and positive definite, so let $D_1$ be diagonal invertible with $\operatorname{Cov}(X) = D_1^2$. Similarly, let $D_2$ be diagonal invertible with $\operatorname{Cov}(S) = D_2^2$. Set $Y := D_1^{-1} X$ and $T := D_2^{-1} S$, that is, normalize $X$ and $S$ to covariance $I$. Then

\[
Y = D_1^{-1} X = D_1^{-1} A S = D_1^{-1} A D_2\, T
\]
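
To make the whitening step concrete, here is a small numerical sketch (my illustration, not from the text). It uses generic symmetric matrix square roots in place of the diagonal $D_1$, $D_2$ that the proof obtains from its assumptions; the point is the same: after normalizing both $X$ and $S$ to covariance $I$, the effective mixing matrix is orthogonal.

```python
import numpy as np

# Numerical illustration of the whitening argument: after normalizing X and S
# to covariance I, the effective mixing matrix W (with Y = W T) is orthogonal.
rng = np.random.default_rng(0)
n, N = 3, 100_000

S = rng.laplace(size=(n, N))       # independent non-gaussian sources
A = rng.normal(size=(n, n))        # invertible mixing matrix (almost surely)
X = A @ S                          # observed mixtures

def sym_power(C, p):
    """C**p for a symmetric positive definite matrix C, via eigendecomposition."""
    vals, vecs = np.linalg.eigh(C)
    return vecs @ np.diag(vals**p) @ vecs.T

# Y = Cov(X)^(-1/2) X and T = Cov(S)^(-1/2) S both have sample covariance I,
# so Y = W T with W = Cov(X)^(-1/2) A Cov(S)^(1/2).
W = sym_power(np.cov(X), -0.5) @ A @ sym_power(np.cov(S), 0.5)

print(np.allclose(W @ W.T, np.eye(n)))    # True: W is orthogonal
```

Because the sample covariance transforms exactly as $\operatorname{Cov}(AS) = A \operatorname{Cov}(S) A^{\top}$, the orthogonality check passes up to floating-point error.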
