Mathematics in Independent Component Analysis

Chapter 12. LNCS 3195:718-725, 2004    177

2    Fabian J. Theis and Shun-ichi Amari

If an n-dimensional vector is (n − 1)-sparse, that is, it includes at least one zero component, it is simply said to be sparse. The goal of Sparse Component Analysis of level k (k-SCA) is to decompose a given m-dimensional random vector x into

x = As    (1)

with a real m × n matrix A and an n-dimensional k-sparse random vector s. Here s is called the source vector, x the mixtures and A the mixing matrix. We speak of complete, overcomplete or undercomplete k-SCA if m = n, m < n or m > n, respectively. In the following we will assume m ≤ n without loss of generality, because the undercomplete case can easily be reduced to the complete case by projection of x.
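As an illustration of the model in equation (1), the following sketch generates a small overcomplete (m − 1)-sparse mixture. The dimensions m = 3, n = 4 and the Gaussian draws are hypothetical choices for demonstration, not values from the text.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: m = 3 mixtures, n = 4 sources (overcomplete case).
m, n, T = 3, 4, 1000
k = m - 1  # each source sample has at most k = 2 nonzero entries

# Random real m x n mixing matrix A.
A = rng.standard_normal((m, n))

# Draw (m-1)-sparse sources: zero out n - k entries of every sample,
# so each column of s contains at least one zero component.
s = rng.standard_normal((n, T))
for t in range(T):
    zero_idx = rng.choice(n, size=n - k, replace=False)
    s[zero_idx, t] = 0.0

# Observed mixtures, equation (1): x = As.
x = A @ s
```

Each column of s is (m − 1)-sparse by construction, so the pair (A, s) is a valid instance of the k-SCA decomposition problem for k = m − 1.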

Theorem 1 (Matrix identifiability). Consider the k-SCA problem from equation (1) for k := m − 1, and assume that every m × m submatrix of A is invertible. Furthermore, let s be sufficiently richly represented in the sense that for any index set I ⊂ {1, . . . , n} of n − m + 1 elements there exist at least m samples of s such that each of them has zero elements in the places with indices in I and each m − 1 of them are linearly independent. Then A is uniquely determined by x except for left-multiplication with permutation and scaling matrices.

Theorem 2 (Source identifiability). Let H be the set of all x ∈ R^m such that the linear system As = x has an (m − 1)-sparse solution s. If A fulfills the condition from Theorem 1, then there exists a subset H0 ⊂ H of measure zero with respect to H such that for every x ∈ H \ H0 this system has no other solution with this property.

The above two theorems show that in the case of overcomplete BSS using (m − 1)-SCA, both the mixing matrix and the sources can be uniquely recovered from x except for the omnipresent permutation and scaling indeterminacy. We refer to [8] for proofs of these theorems and for algorithms based upon them. We also note that the present source-recovery algorithm is quite different from the usual sparse source recovery using l1-norm minimization [7] and linear programming: for sources with sparsity as above, the latter will not be able to detect the sources.
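The uniqueness guaranteed by Theorem 2 can be illustrated directly: an (m − 1)-sparse sample x lies in the span of m − 1 columns of A, and generically only one such column subset contains it. The sketch below recovers s by enumerating (m − 1)-element column subsets; this brute-force search is a didactic stand-in, not the algorithm of [8], and the matrix, indices and tolerance are hypothetical.

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(1)

# Hypothetical instance: m = 3 mixtures, n = 4 sources, k = m - 1 = 2.
m, n = 3, 4
A = rng.standard_normal((m, n))

# One (m-1)-sparse source sample, nonzero only on indices {0, 2}.
s_true = np.zeros(n)
s_true[[0, 2]] = [1.5, -2.0]
x = A @ s_true

def recover_sparse(A, x, k, tol=1e-8):
    """Find the k-column subset of A whose span contains x and solve for s."""
    m, n = A.shape
    for idx in combinations(range(n), k):
        B = A[:, idx]                             # m x k submatrix
        c, *_ = np.linalg.lstsq(B, x, rcond=None)  # least-squares coefficients
        if np.linalg.norm(B @ c - x) < tol:        # x lies in span of these columns
            s = np.zeros(n)
            s[list(idx)] = c
            return s
    return None

s_hat = recover_sparse(A, x, m - 1)
```

For a generic A, no other 2-column subset can reproduce x (Theorem 2's measure-zero exceptional set), so the recovered s_hat matches s_true; an l1-minimization approach has no such subset-membership structure to exploit here.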

2 Postnonlinear overcomplete SCA

2.1 Model

Consider n-dimensional k-sparse sources s with k < m. The postnonlinear mixing model [9] is defined to be

x = f(As)    (2)

with a diagonal invertible function f with f(0) = 0 and a real m × n matrix A. Here a function f is said to be diagonal if each component fi only depends on xi. In abuse of notation we will in this case interpret the components fi of f as
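Equation (2) can be sketched numerically as follows. The choice fi = tanh is a hypothetical example of a diagonal invertible nonlinearity with f(0) = 0; the dimensions are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical dimensions for the postnonlinear model, equation (2).
m, n, T = 3, 4, 500
A = rng.standard_normal((m, n))

# (m-1)-sparse sources, as in the k-SCA setting with k = m - 1 < m.
s = rng.standard_normal((n, T))
for t in range(T):
    s[rng.choice(n, size=n - (m - 1), replace=False), t] = 0.0

y = A @ s        # linear mixture As
x = np.tanh(y)   # diagonal nonlinearity: x_i = f_i(y_i), with tanh(0) = 0
```

Because f acts componentwise, each observed coordinate xi depends only on the i-th coordinate of As, which is exactly the diagonality requirement of the model.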
