Mathematics in Independent Component Analysis

116 Chapter 6. LNCS 3195:726-733, 2004

SOBI based on multi-dimensional autocovariances

for $\tau \neq 0$. By assumption $\bar{R}_s(\tau)$ is diagonal, so equation 3 is an eigenvalue decomposition of the symmetric matrix $\bar{R}_x(\tau)$. If we furthermore assume that $\bar{R}_x(\tau)$, or equivalently $\bar{R}_s(\tau)$, has $n$ different eigenvalues, then the above decomposition is unique, i.e. $A$ is uniquely determined by $\bar{R}_x(\tau)$ except for orthogonal transformations in each eigenspace and permutation; since the eigenspaces are one-dimensional, this means $A$ is uniquely determined by equation 3 except for permutation. In addition to this separability result, $A$ can be recovered algorithmically by simply calculating the eigenvalue decomposition of $\bar{R}_x(\tau)$ (AMUSE, [3]).
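This recovery step can be sketched in a few lines of NumPy (a minimal illustration with our own naming, not the referenced implementation; it assumes the data are already centered and whitened, so that the mixing matrix is orthogonal, and that the chosen lag yields distinct eigenvalues):

```python
import numpy as np

def amuse(x, tau=1):
    """Sketch of the AMUSE idea: recover the (orthogonal) mixing matrix
    from one symmetrized, time-lagged autocovariance matrix.
    Assumes x has shape (n, T) and is centered and whitened."""
    n, T = x.shape
    # lagged covariance E[x(t + tau) x(t)^T], estimated over the overlap
    R = x[:, tau:] @ x[:, :T - tau].T / (T - tau)
    # symmetrize so the eigendecomposition is well-defined
    R_bar = 0.5 * (R + R.T)
    # eigenvectors recover A up to sign and permutation,
    # provided the eigenvalues are pairwise different
    eigvals, A = np.linalg.eigh(R_bar)
    return A

# the sources are then recovered (up to permutation/sign) as A.T @ x
```

If the lag-$\tau$ autocovariances of two sources happen to coincide, the corresponding eigenspace is degenerate and this single-matrix approach fails, which is exactly the motivation for SOBI below.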

In practice, if the eigenvalue decomposition is problematic, a different choice of $\tau$ often resolves the problem. Nonetheless, there are sources in which some components have equal autocovariances. Moreover, since the autocovariance matrices are only estimated from a finite number of samples, and due to possible colored noise, the autocovariance at $\tau$ could be badly estimated. A more general BSS algorithm called SOBI (second-order blind identification), based on time decorrelation, was therefore proposed by Belouchrani et al. [4]. Instead of diagonalizing only a single autocovariance matrix, it takes a whole set of autocovariance matrices of $x(t)$ with varying time lags $\tau$ and jointly diagonalizes the whole set. It has been shown that increasing the size of this set improves SOBI performance in noisy settings [1].
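The estimation step that SOBI starts from can be sketched as follows (an illustrative helper with our own naming; it assumes centered data and symmetrizes each matrix so that orthogonal joint diagonalization is well-posed):

```python
import numpy as np

def lagged_autocovariances(x, taus):
    """Estimate the set {R_bar_x(tau) : tau in taus} of symmetrized
    autocovariance matrices that SOBI jointly diagonalizes.
    x has shape (n, T) and is assumed centered."""
    n, T = x.shape
    out = []
    for tau in taus:
        # biased estimate over the overlapping part of the signal
        R = x[:, tau:] @ x[:, :T - tau].T / (T - tau)
        out.append(0.5 * (R + R.T))
    return out
```

In the noiseless model every matrix in this set equals $A \bar{R}_s(\tau) A^\top$ with $\bar{R}_s(\tau)$ diagonal, so the same orthogonal $A$ diagonalizes all of them simultaneously.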

Algorithms for performing joint diagonalization of a set of symmetric commuting matrices include gradient descent on the sum of the squared off-diagonal terms, iterative construction of $A$ by Givens rotations in two coordinates [7] (used in the simulations in section 4), an iterative two-step recovery of $A$ [8] and, more recently, a linear least-squares algorithm for diagonalization [9]; the latter two algorithms can also search for non-orthogonal matrices $A$. Joint diagonalization has been used in BSS with cumulant matrices [10] or time autocovariances [4,5].
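The Givens-rotation approach can be sketched as a Jacobi-type sweep over coordinate pairs, where each $2 \times 2$ subproblem has a closed-form optimal angle obtained from the principal eigenvector of a small accumulation matrix. This is our own minimal reimplementation of the general idea, not the code referenced in [7]:

```python
import numpy as np

def joint_diagonalize(Ms, sweeps=30, tol=1e-12):
    """Approximate orthogonal joint diagonalization of symmetric matrices
    by Jacobi-style Givens rotations. For each pair (p, q), the rotation
    angle maximizing the summed squared diagonal entries has a closed form.
    Returns V such that V.T @ M @ V is approximately diagonal for all M."""
    Ms = [M.astype(float).copy() for M in Ms]
    n = Ms[0].shape[0]
    V = np.eye(n)
    for _ in range(sweeps):
        changed = False
        for p in range(n - 1):
            for q in range(p + 1, n):
                # stack the 2-vectors (M_pp - M_qq, M_pq + M_qp) over the set
                h = np.array([[M[p, p] - M[q, q], M[p, q] + M[q, p]]
                              for M in Ms])
                G = h.T @ h
                _, U = np.linalg.eigh(G)
                # principal eigenvector, normalized with x >= 0 so that
                # cos(2*theta) = x/r, sin(2*theta) = y/r gives |theta| <= pi/4
                x, y = U[:, -1] if U[0, -1] >= 0 else -U[:, -1]
                r = np.hypot(x, y)
                if r < tol:
                    continue
                c = np.sqrt((x + r) / (2.0 * r))
                s = y / np.sqrt(2.0 * r * (x + r))
                if abs(s) < 1e-10:
                    continue  # this pair is already jointly diagonal enough
                changed = True
                J = np.eye(n)
                J[p, p] = J[q, q] = c
                J[p, q], J[q, p] = -s, s
                for M in Ms:
                    M[:] = J.T @ M @ J
                V = V @ J
        if not changed:
            break
    return V
```

For a single matrix this reduces to the classical Jacobi eigenvalue rotation ($\tan 2\theta = 2 m_{pq} / (m_{pp} - m_{qq})$); for a set it maximizes the total squared diagonal per rotation, so each sweep decreases the off-diagonal residual.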

3 Multidimensional SOBI<br />

The goal of this work is to improve SOBI performance for random processes with a higher-dimensional parametrization, i.e. for data sets where the random processes $s$ and $x$ do not depend on a single variable $t$ but on multiple variables $(z_1, \ldots, z_M)$. A typical example is a source data set in which each component $s_i$ represents an image of size $h \times w$. Then $M = 2$ and samples of $s$ are given at $z_1 = 1, \ldots, h$, $z_2 = 1, \ldots, w$. Classically, $s(z_1, z_2)$ is transformed to $s(t)$ by fixing a mapping from the two-dimensional parameter set to the one-dimensional time parametrization of $s(t)$, for example by concatenating columns or rows in the case of a finite number of samples. If the time structure of $s(t)$ is not used, as in all classical ICA algorithms in which i.i.d. samples are assumed, this choice does not influence the result. However, in time-structure-based algorithms such as AMUSE and SOBI, results can vary greatly depending on the choice of this mapping; see figure 2.
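The dependence on the mapping is easy to demonstrate numerically: the same image exposes very different lag-1 statistics depending on whether its rows or its columns are concatenated (a toy illustration with our own helper names):

```python
import numpy as np

# A toy image that is smooth along its rows (a random walk along axis 1)
# but rough across rows: the two classical flattenings then expose very
# different time structure to AMUSE/SOBI.
rng = np.random.default_rng(0)
img = np.cumsum(rng.standard_normal((64, 64)), axis=1)

def lag1_autocorr(v):
    """Normalized lag-1 autocovariance of a 1-D signal."""
    v = v - v.mean()
    return float(np.mean(v[1:] * v[:-1]) / np.var(v))

r_row = lag1_autocorr(img.reshape(-1))    # concatenate rows: smooth neighbors
r_col = lag1_autocorr(img.T.reshape(-1))  # concatenate columns: rough neighbors
# r_row is close to 1, r_col close to 0: the flattening choice changes
# exactly the statistics that time-decorrelation methods rely on
```

Since AMUSE and SOBI separate sources by precisely such lagged covariances, two flattenings of the same data can lead to very different separation quality.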

Without loss of generality, we again assume centered random vectors. We then define the multidimensional covariance to be

$$
R_s(\tau_1, \ldots, \tau_M) := \mathrm{E}\left[\, s(z_1 + \tau_1, \ldots, z_M + \tau_M)\, s(z_1, \ldots, z_M)^\top \right].
$$
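A direct estimator of this quantity from gridded samples might look as follows (a sketch under our own naming, restricted to nonnegative lags; the biased estimate averages over the overlapping region of the grid):

```python
import numpy as np

def multidim_autocov(S, taus):
    """Estimate R_s(tau_1, ..., tau_M) from samples.
    S has shape (n, N_1, ..., N_M): n components observed on an
    M-dimensional grid (e.g. n images of size h x w for M = 2).
    Assumes S is centered and all lags are nonnegative."""
    n = S.shape[0]
    grid = S.shape[1:]
    # overlapping index ranges for s(z + tau) and s(z)
    shifted = S[(slice(None),) + tuple(slice(t, None) for t in taus)]
    base = S[(slice(None),) + tuple(slice(None, g - t)
                                    for g, t in zip(grid, taus))]
    A = shifted.reshape(n, -1)
    B = base.reshape(n, -1)
    return A @ B.T / A.shape[1]
```

For $M = 1$ this reduces to the usual lagged autocovariance matrix; the multidimensional version lets lags along every grid axis contribute, instead of only lags along one arbitrary flattening.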
