
Mathematics in Independent Component Analysis


Chapter 16. Proc. ICA 2006, pages 917-925

The basis choice from lemma 3 together with assumption (ii) can be used to prove the following fact:

Lemma 4. The non-Gaussian transformation is invertible, i.e. $A_{NN} \in \mathrm{Gl}(n)$.

The next lemma can be seen as a modification of lemma 1, and indeed it can be shown similarly.

Lemma 5. If $\hat{S}_N$ fulfills $\bigl(\hat{S}_N H_{\hat{S}_N} - \nabla\hat{S}_N (\nabla\hat{S}_N)^\top\bigr) e_1 + \hat{S}_N^2 c \equiv 0$ for some constant vector $c \in \mathbb{R}^n$, then the source component $(S_N)_1$ is Gaussian and independent of $(S_N)(2:n)$.
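To see why a condition of this shape singles out Gaussians, consider the one-dimensional analogue: for the standard normal density $s$, the expression $s\,s'' - (s')^2$ equals $-s^2$, so the condition of lemma 5 holds with $c = 1$, consistent with the lemma's characterization. A quick symbolic check (my own illustration, not part of the original proof):

```python
import sympy as sp

x = sp.symbols('x', real=True)
# standard normal density
s = sp.exp(-x**2 / 2) / sp.sqrt(2 * sp.pi)

# one-dimensional analogue of the bracketed operator: s*s'' - (s')^2
expr = s * sp.diff(s, x, 2) - sp.diff(s, x)**2

# for the Gaussian this equals -s^2, i.e. expr + c*s^2 == 0 with c = 1
print(sp.simplify(expr + s**2))  # -> 0
```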

Here more generally $e_i \in \mathbb{R}^n$ denotes the $i$-th unit vector. Putting these lemmas together, we can finally prove theorem 1: According to lemma 4, $A_{NN}$ is invertible, so multiplying equation (5) from lemma 2 by $B_N^{-1} A_{NN}^{-1}$ from the left yields

$$\bigl(\hat{S}_N H_{\hat{S}_N} - \nabla\hat{S}_N (\nabla\hat{S}_N)^\top\bigr) B_N^\top A_{GN}^\top + C \hat{S}_N^2 \equiv 0 \qquad (6)$$

for any $B_N \in \mathrm{Gl}(n)$ and some fixed, real matrix $C \in \mathrm{Mat}(n \times (d-n))$.

We claim that $A_{GN} = 0$. If not, then there exists $v \in \mathbb{R}^{d-n}$ with $\|A_{GN}^\top v\| = 1$. Choose $B_N$ from (4) such that $B_N^{-1} S_N$ is decorrelated. This is invariant under left-multiplication by an orthogonal matrix, so we may moreover assume that $B_N^\top A_{GN}^\top v = e_1$. Multiplying equation (6) in turn by $v$ from the right therefore shows that the vector function $\hat{S}_N$ satisfies

$$\bigl(\hat{S}_N H_{\hat{S}_N} - \nabla\hat{S}_N (\nabla\hat{S}_N)^\top\bigr) e_1 + c \hat{S}_N^2 \equiv 0, \qquad (7)$$

where $c := Cv \in \mathbb{R}^n$. This means that $\hat{S}_N$ fulfills the condition of lemma 5, which implies that $(S_N)_1$ is Gaussian and independent of the rest. But this contradicts (ii) for $S$, hence $A_{GN} = 0$. Plugging this result into equation (5), evaluation at $s_N = 0$ shows that $A_{NG} A_{GG}^\top = 0$. Since $A_{GN} = 0$ and $A \in \mathrm{Gl}(d)$, necessarily $A_{GG} \in \mathrm{Gl}(d-n)$, so $A_{NG} = 0$, as was to be proved.

2 Simulations<br />

In this section, we provide experimental validation of the uniqueness result of corollary 1. In order to stay unbiased and not test a single algorithm, we have to search the parameter space uniformly for possibly equivalent model representations. The model assumptions (1) will not be perfectly fulfilled, so in the following we introduce a measure of model deviation based on fourth-order cumulants.

Let the non-Gaussian dimension $n$ and the total dimension $d$ be fixed. Given a random vector $X = (X_N, X_G)$, we can without loss of generality assume that $\mathrm{Cov}(X) = I$. Any possible model deviation consists of (i) a deviation from the independence of $X_N$ and $X_G$ and (ii) a deviation from the Gaussianity of $X_G$. In the case of non-vanishing kurtoses, the former can be approximated for example by

$$\delta_I(X) := \frac{1}{n(d-n)d^2} \sum_{i=1}^{n} \sum_{j=n+1}^{d} \sum_{k=1}^{d} \sum_{l=1}^{d} \mathrm{cum}^2(X_i, X_j, X_k, X_l),$$
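For zero-mean, whitened data, each fourth-order cross-cumulant above reduces to $\mathrm{cum}(X_i, X_j, X_k, X_l) = E[X_iX_jX_kX_l] - E[X_iX_j]E[X_kX_l] - E[X_iX_k]E[X_jX_l] - E[X_iX_l]E[X_jX_k]$, so $\delta_I$ can be estimated directly from samples. The following naive sample-based sketch is my own illustration, not the authors' code:

```python
import numpy as np

def delta_I(X, n):
    """Naive estimate of the independence deviation delta_I for
    zero-mean, whitened data X of shape (d, T); n = non-Gaussian dim."""
    d, T = X.shape
    M2 = X @ X.T / T                     # second moments E[Xi Xj]
    total = 0.0
    for i in range(n):
        for j in range(n, d):
            for k in range(d):
                for l in range(d):
                    m4 = np.mean(X[i] * X[j] * X[k] * X[l])
                    cum = (m4 - M2[i, j] * M2[k, l]
                              - M2[i, k] * M2[j, l]
                              - M2[i, l] * M2[j, k])
                    total += cum ** 2
    return total / (n * (d - n) * d ** 2)

rng = np.random.default_rng(1)
# independent sources: one uniform (non-Gaussian), two Gaussian
X = np.vstack([rng.uniform(-np.sqrt(3), np.sqrt(3), 50_000),
               rng.standard_normal((2, 50_000))])
print(delta_I(X, n=1))  # small for independent components
```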
