Mathematics in Independent Component Analysis

Chapter 15. Neurocomputing, 69:1485-1501, 2006

group together similar column vectors of the trajectory matrix $X$.

A clustering algorithm like k-means [15] is appropriate for problems where the time structure of the signal is irrelevant. If, however, time or spatial correlations matter, clustering should be based on finding an appropriate partitioning of $\{M - 1, \dots, L - 1\}$ into $K$ successive segments, since this preserves the inherent correlation structure of the signals. In any case the number of columns in each sub-trajectory matrix $X^{(j)}$ amounts to $L_j$, such that the following completeness relation holds:

$$\sum_{j=1}^{K} L_j = L - M + 1 \qquad (3)$$

The mean vector $m_j$ in each cluster can be considered a prototype vector and is given by

$$m_j = \frac{1}{L_j} X c_j = \frac{1}{L_j} X^{(j)} [1, \dots, 1]^T, \qquad j = 1, \dots, K \qquad (4)$$

where $c_j$ is a vector with $L_j$ entries equal to one which characterizes the clustering.

Note that after the clustering the set $\{k = 0, \dots, L - M\}$ of indices of the columns of $X$ is split into $K$ disjoint subsets $K_j$. Each trajectory sub-matrix $X^{(j)}$ is formed from those columns of $X$ whose indices belong to the subset $K_j$.
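As a minimal sketch of the procedure above (our own NumPy illustration, not the paper's code; the test signal, window length $M$, and number of clusters $K$ are arbitrary assumptions), the time-preserving partitioning into successive segments and the prototype vectors of Eq. (4) can be computed as follows:

```python
# Illustrative sketch: embed a signal into a trajectory matrix, split its
# columns into K successive segments, and compute the prototype vector m_j
# of each cluster as the mean column of X^(j), cf. Eqs. (3)-(4).
import numpy as np

def trajectory_matrix(x, M):
    """Columns are the length-M delayed windows of x; shape (M, L - M + 1)."""
    L = len(x)
    return np.column_stack([x[k:k + M] for k in range(L - M + 1)])

def successive_segments(n_columns, K):
    """Partition the column indices {0, ..., n_columns - 1} into K successive segments."""
    bounds = np.linspace(0, n_columns, K + 1).astype(int)
    return [np.arange(bounds[j], bounds[j + 1]) for j in range(K)]

x = np.sin(0.1 * np.arange(200)) + 0.1 * np.random.default_rng(0).normal(size=200)
M, K = 20, 4
X = trajectory_matrix(x, M)
segments = successive_segments(X.shape[1], K)

# Completeness relation (3): the segment sizes L_j sum to L - M + 1.
assert sum(len(Kj) for Kj in segments) == len(x) - M + 1

# Eq. (4): m_j = (1/L_j) X^(j) [1, ..., 1]^T, i.e. the mean column of X^(j).
prototypes = [X[:, Kj].mean(axis=1) for Kj in segments]
```

For the time-agnostic case mentioned above, the successive segments would simply be replaced by k-means cluster assignments over the columns of $X$.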

2.3 Principal Component Analysis and Independent Component Analysis

PCA [23] is one of the most common multivariate data analysis tools. It linearly transforms given data into uncorrelated data (feature space). In PCA [4] a data vector is thus represented in an orthogonal basis system such that the projected data have maximal variance. PCA can be performed by an eigenvalue decomposition of the centered covariance matrix of the data set, whose eigenvectors yield the orthogonal transformation.
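A minimal NumPy sketch of this procedure (the function and variable names are ours, under the common convention of rows as samples and columns as features):

```python
# PCA by eigenvalue decomposition of the centered covariance matrix, as
# described above: the eigenvectors give the orthogonal transformation, and
# the projections onto them are uncorrelated, sorted by decreasing variance.
import numpy as np

def pca(X):
    """X: array of shape (n_samples, n_features)."""
    Xc = X - X.mean(axis=0)               # center the data
    C = np.cov(Xc, rowvar=False)          # covariance matrix of the data
    eigvals, eigvecs = np.linalg.eigh(C)  # eigendecomposition (C is symmetric)
    order = np.argsort(eigvals)[::-1]     # sort by decreasing variance
    return Xc @ eigvecs[:, order], eigvals[order]
```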

In ICA, given a random vector, the goal is to find its statistically independent components (ICs). In contrast to correlation-based transformations like PCA, ICA renders the output signals as statistically independent as possible by evaluating higher-order statistics. The idea of ICA was first expressed by Jutten and Hérault [11], while the term ICA was later coined by Comon [3]. With LICA we will use the popular FastICA algorithm by Hyvärinen and Oja [14], which performs ICA by maximizing the non-Gaussianity of the signal components.
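As a usage sketch, scikit-learn's FastICA implementation (not the authors' own code; the toy sources and mixing matrix below are arbitrary assumptions) estimates the independent components of a mixture like this:

```python
# Unmix a toy two-source mixture with FastICA, which estimates the
# components by maximizing their non-Gaussianity.
import numpy as np
from sklearn.decomposition import FastICA

t = np.linspace(0, 8, 2000)
S = np.column_stack([np.sin(2 * t), np.sign(np.sin(3 * t))])  # source signals
A = np.array([[1.0, 0.5], [0.5, 1.0]])                        # mixing matrix
X = S @ A.T                                                   # observed mixtures

ica = FastICA(n_components=2, random_state=0)
S_est = ica.fit_transform(X)   # estimated ICs (up to permutation and scaling)
```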

