Mathematics in Independent Component Analysis

Chapter 14. Proc. ICASSP 2006 195

[Figure 2: three panels. (a) POSH for n=2, p=1; (b) POSH for n=3, p=1; (c) POSH for n=2, p=0.5.]

Fig. 2. Starting from x0 (◦), we alternately project onto cS_1 and S_2. POSH performance is illustrated for p=1 in dimensions 2 (a) and 3 (b), where a projection via PCA is displayed; no information is lost, hence the sequence of points lies in a plane, as shown in the proof of Theorem 3.2. Figure (c) shows application of POCS for n=2 and p=0.5.

Table 2. Performance of the POSH algorithm 3 for varying parameters. See text for details.

n   p     c     ‖y_POSH − y_scan‖_2
2   0.8   1.2   0.005 ± 0.0008
2   4     0.9   0.02  ± 0.005
3   0.8   1.2   0.02  ± 0.009
3   4     0.9   0.04  ± 0.03

[2]). The distance between his and our solution was calculated to give a mean value of 5·10^−13 ± 5·10^−12, i.e. we virtually always obtain the same solution.

In Figures 2(a) and (b), we show the application for p=1; we visualize the performance in 3 dimensions by projecting the data via PCA, which discards virtually no information (confirmed by experiment), indicating the validity of Theorem 3.2 also in higher dimensions. In Figure 2(c), a projection for p=0.5 is shown.
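The alternating-projection step behind Figure 2 can be sketched for p=1 and nonnegative data as follows. This is a minimal sketch, not the paper's algorithm 3: the function names, parameter choices, and the use of a standard simplex projection for the l1-sphere restricted to the positive orthant are our assumptions.

```python
import numpy as np

def project_sphere(x):
    # Euclidean projection onto the unit sphere S_2: rescale to norm 1.
    return x / np.linalg.norm(x)

def project_l1(x, c):
    # Euclidean projection onto {y >= 0, sum(y) = c}, which equals the
    # l1-sphere cS_1 intersected with the positive orthant
    # (standard simplex-projection algorithm).
    u = np.sort(x)[::-1]
    cssv = np.cumsum(u)
    k = np.arange(1, len(x) + 1)
    rho = np.nonzero(u * k > (cssv - c))[0][-1]
    theta = (cssv[rho] - c) / (rho + 1.0)
    return np.maximum(x - theta, 0.0)

def alternating_projections(x0, c=1.2, iters=300):
    # Alternately project onto S_2 and cS_1, as illustrated in Fig. 2
    # for the p = 1 case; x0 plays the role of the starting point (o).
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        x = project_sphere(x)
        x = project_l1(x, c)
    return x

y = alternating_projections([0.9, 0.3], c=1.2)
```

At convergence the iterate lies (approximately) on both spheres, i.e. sum(y) = c and ‖y‖_2 ≈ 1, mirroring the behaviour shown in panels (a) and (b).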

Now, we perform batch simulations for varying p. For this, we uniformly sample the starting vector x ∈ [0, 1]^n in 100 runs and compare the result of the POSH algorithm with the true projection. POSH is performed starting with the p-norm projection using algorithm 1 and 100 iterations. As the true projection πM(x) cannot be determined in closed form, we scan [0, 1]^(n−1) using the stepsize ε = 0.01 to give the first (n−1) coordinates of our estimate y of πM(x); its n-th coordinate is then constructed to guarantee y ∈ S^(n−1)_p (for p<1 and p>1, respectively). Using a Taylor approximation of (y+ε)^p, it can easily be shown that two adjacent grid points have maximal difference |‖(y1+ε, ..., yn+ε)‖_p^p − ‖y‖_p^p| ≤ pnε + O(ε^2) if y ∈ [0, 1]^n and p ≥ 1. Hence, by taking only vectors y as approximation of πM(x) with |‖y‖_2^2 − 1|
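The scan-based reference projection described above can be sketched for n=2 as follows. This is a hedged sketch: the function name is hypothetical, and since the paper's exact acceptance threshold is not recoverable from the text, we use a tolerance of order n·ε on |‖y‖_2^2 − 1|, as suggested by the Taylor bound.

```python
import numpy as np

def scan_projection(x, p=0.8, c=1.2, eps=0.01):
    # Scan the first coordinate y1 over [0, 1] with stepsize eps, and
    # construct y2 so that ||y||_p = c holds exactly, i.e. y lies on
    # the p-norm sphere cS_p (here n = 2, first row of Table 2).
    y1 = np.arange(0.0, 1.0 + eps, eps)
    y2 = (c**p - y1**p) ** (1.0 / p)
    grid = np.column_stack([y1, y2])
    # Keep only grid points that are also approximately on the unit
    # Euclidean sphere; the adjacent-grid-point bound suggests a
    # tolerance of order n * eps for ||y||_2^2 - 1.
    dev = np.abs(np.sum(grid**2, axis=1) - 1.0)
    candidates = grid[dev <= 2 * len(x) * eps]
    # Among the surviving candidates, return the one closest to x:
    # this approximates the true projection pi_M(x).
    x = np.asarray(x, dtype=float)
    d = np.linalg.norm(candidates - x, axis=1)
    return candidates[np.argmin(d)]

y = scan_projection([0.7, 0.7], p=0.8, c=1.2)
```

The returned point satisfies the p-norm constraint by construction and the Euclidean constraint up to the scan tolerance; in the paper, the POSH output is then compared against this reference, giving the errors reported in Table 2.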
