

Fig. 1. Example of the one- and two-dimensional autocovariance coefficients (curves 1d-autocov and 2d-autocov, plotted against τ respectively |(τ1, τ2)|, rescaled to N) of the grayscale 128 × 128 Lena image after normalization to variance 1.

where the expectation is taken over (z1, . . . , zM). Rs(τ1, . . . , τM) can be estimated given equidistant samples by replacing random variables by sample values and expectations by sums, as usual.
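As a concrete illustration, the following is a minimal sketch of such a sample-based estimator; the helper name md_autocov and the NumPy layout, with the n observed signals stacked along the first axis, are our own choices and not part of the original text:

```python
import numpy as np

def md_autocov(x, tau):
    """Sample estimate of the multidimensional autocovariance matrix
    R_x(tau_1, ..., tau_M): the average of x(z + tau) x(z)^T over all
    indices z for which both terms exist, after centering each signal.
    x: array of shape (n, d_1, ..., d_M); tau: lag vector of length M."""
    n = x.shape[0]
    xc = x - x.mean(axis=tuple(range(1, x.ndim)), keepdims=True)
    # restrict to the overlap region where z and z + tau are both valid indices
    sa = tuple(slice(t, None) if t >= 0 else slice(None, t) for t in tau)
    sb = tuple(slice(None, -t) if t > 0 else slice(-t, None) for t in tau)
    a = xc[(slice(None),) + sa].reshape(n, -1)   # samples of x(z + tau)
    b = xc[(slice(None),) + sb].reshape(n, -1)   # samples of x(z)
    return a @ b.T / a.shape[1]                  # expectation -> sample mean
```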

The advantage of using multidimensional autocovariances lies in the fact that the multidimensional structure of the data set can now be used more explicitly. For example, if row concatenation is used to construct s(t) from the images, horizontal lines in the image will only give trivial contributions to the autocovariance (see the examples in figure 2 and section 4). Figure 1 shows the one- and two-dimensional autocovariance of the Lena image for varying τ respectively (τ1, τ2) after normalization of the image to variance 1. Clearly, the two-dimensional autocovariance does not decay as quickly with increasing radius as the one-dimensional autocovariance; only at multiples of the image height is the one-dimensional autocovariance significantly high, i.e. only there does it capture image structure.
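For orientation, a small hypothetical experiment in the spirit of figure 1; a synthetic structured image stands in for the Lena image so that the snippet is self-contained:

```python
# stand-in for the 128 x 128 grayscale Lena image; any structured image works
yy, xx = np.mgrid[0:128, 0:128]
img = np.sin(xx / 7.0) + np.cos(yy / 11.0) + 0.1 * np.random.randn(128, 128)
img = (img - img.mean()) / img.std()          # normalize to variance 1

row_concat = img.reshape(1, -1)               # 1d signal via row concatenation
image_2d = img[np.newaxis, :, :]              # signal kept in its 2d structure

# one-dimensional autocovariance; it is only large near multiples of 128
acov_1d = [md_autocov(row_concat, (t,))[0, 0] for t in range(1, 300)]
# two-dimensional autocovariance along diagonal lag vectors (tau1, tau2) = (t, t)
acov_2d = [md_autocov(image_2d, (t, t))[0, 0] for t in range(1, 60)]
```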

Our contribution consists of using multidimensional autocovariances for joint diagonalization. We replace the BSS assumption of diagonal one-dimensional autocovariances by diagonal multidimensional autocovariances of the sources. Note that the multidimensional autocovariance also satisfies equation (2). Again we assume whitened x(z1, . . . , zM). Given an autocovariance matrix $\bar{R}_x\bigl(\tau_1^{(1)}, \ldots, \tau_M^{(1)}\bigr)$ with n different eigenvalues, multidimensional AMUSE (mdAMUSE) detects the orthogonal unmixing mapping W by diagonalization of this matrix.
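As a hedged sketch of this step (reusing the md_autocov helper above; the function name md_amuse and the eigenvalue-based whitening are our own choices, not prescribed by the text):

```python
def md_amuse(x, tau):
    """mdAMUSE sketch: whiten the mixtures, then diagonalize one symmetrized
    multidimensional autocovariance; its eigenvectors determine the orthogonal
    unmixing (unique up to permutation and sign if all eigenvalues differ)."""
    n = x.shape[0]
    flat = (x - x.mean(axis=tuple(range(1, x.ndim)), keepdims=True)).reshape(n, -1)
    d, E = np.linalg.eigh(flat @ flat.T / flat.shape[1])
    V = E @ np.diag(1.0 / np.sqrt(d)) @ E.T       # whitening matrix
    xw = (V @ flat).reshape(x.shape)              # whitened mixtures
    R = md_autocov(xw, tau)
    R = (R + R.T) / 2                             # symmetrized autocovariance
    _, W = np.linalg.eigh(R)                      # columns: joint eigenvectors
    return W.T @ V                                # estimated unmixing matrix
```

Applying the returned matrix to the centered, flattened mixtures recovers the sources up to permutation and scaling.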

In section 2, we discussed the advantages of using SOBI over AMUSE. This of course also holds in this generalized case. Hence, the multidimensional SOBI algorithm (mdSOBI) consists of the joint diagonalization of a set of symmetrized multidimensional autocovariances
\[
\left\{ \bar{R}_x\bigl(\tau_1^{(1)}, \ldots, \tau_M^{(1)}\bigr), \ldots, \bar{R}_x\bigl(\tau_1^{(K)}, \ldots, \tau_M^{(K)}\bigr) \right\}.
\]
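Only as a hedged illustration of these two ingredients: the paper does not prescribe a particular joint diagonalizer, so the Jacobi-rotation routine below follows the common Cardoso–Souloumiac approach, and all function names are ours.

```python
def joint_diagonalize(mats, eps=1e-8, max_sweeps=100):
    """Approximate joint diagonalization of symmetric matrices by Givens
    (Jacobi) rotations; returns orthogonal U with U^T A_k U nearly diagonal."""
    A = np.stack(mats).astype(float)              # shape (K, n, n)
    n = A.shape[1]
    U = np.eye(n)
    for _ in range(max_sweeps):
        rotated = False
        for p in range(n - 1):
            for q in range(p + 1, n):
                g1 = A[:, p, p] - A[:, q, q]
                g2 = A[:, p, q] + A[:, q, p]
                ton, toff = g1 @ g1 - g2 @ g2, 2.0 * (g1 @ g2)
                theta = 0.5 * np.arctan2(toff, ton + np.hypot(ton, toff))
                c, s = np.cos(theta), np.sin(theta)
                if abs(s) > eps:
                    rotated = True
                    colp, colq = A[:, :, p].copy(), A[:, :, q].copy()
                    A[:, :, p], A[:, :, q] = c * colp + s * colq, c * colq - s * colp
                    rowp, rowq = A[:, p, :].copy(), A[:, q, :].copy()
                    A[:, p, :], A[:, q, :] = c * rowp + s * rowq, c * rowq - s * rowp
                    up, uq = U[:, p].copy(), U[:, q].copy()
                    U[:, p], U[:, q] = c * up + s * uq, c * uq - s * up
        if not rotated:
            break
    return U

def md_sobi(x, lags):
    """mdSOBI sketch: whiten, estimate the symmetrized multidimensional
    autocovariances at the given lag vectors (e.g. [(1, 0), (0, 1), (1, 1), ...]
    for images), jointly diagonalize them, and return the unmixing matrix."""
    n = x.shape[0]
    flat = (x - x.mean(axis=tuple(range(1, x.ndim)), keepdims=True)).reshape(n, -1)
    d, E = np.linalg.eigh(flat @ flat.T / flat.shape[1])
    V = E @ np.diag(1.0 / np.sqrt(d)) @ E.T       # whitening matrix
    xw = (V @ flat).reshape(x.shape)
    Rs = [md_autocov(xw, tau) for tau in lags]
    Rs = [(R + R.T) / 2 for R in Rs]              # symmetrize each matrix
    U = joint_diagonalize(Rs)
    return U.T @ V                                # estimated unmixing matrix
```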
