

4 Separability of bounded postnonlinear BSS

In this section we prove separability of postnonlinear BSS; in the proof we will see how the two conditions from definition 6 turn out to be necessary.

Theorem 7 (Separability of bounded postnonlinear BSS) Let A, W ∈ Gl(n) with one of them mixing and not absolutely degenerate, let h : ℝⁿ → ℝⁿ be a diagonal injective analytic function with h′ᵢ ≠ 0, and let S be a fully bounded independent random vector. If W(h(AS)) is independent, then there exist a scaling L ∈ Gl(n) and v ∈ ℝⁿ with LA ∼ W⁻¹ and h(x) = Lx + v.

So let f ◦ A be the mixing model and W ◦ g the separating model. Putting the two together, we obtain the above mixing-separating model with h := g ◦ f. The theorem shows that if the mixing-separating model preserves independence, then it is essentially trivial, i.e. h is affine linear and the matrices are equivalent (up to scaling). As usual, the model is assumed to be invertible, hence identifiability and uniqueness of the model follow from the separability.
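To make the mixing-separating picture concrete, here is a minimal numerical sketch (the matrix A, the componentwise nonlinearity tanh, and the uniform sources are illustrative choices, not taken from the paper). When the separating model inverts the mixing model exactly, h = g ◦ f is the identity, which is in particular affine linear with L = I and v = 0, and W = A⁻¹ gives LA ∼ W⁻¹.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative bounded independent sources, uniform on [-1, 1].
S = rng.uniform(-1.0, 1.0, size=(2, 10_000))

# Illustrative mixing matrix and diagonal analytic nonlinearity.
A = np.array([[1.0, 0.5],
              [0.3, 1.0]])
f = np.tanh              # applied componentwise, injective with f' != 0
g = np.arctanh           # its inverse, so h = g ∘ f is the identity

X = f(A @ S)             # postnonlinear mixtures
W = np.linalg.inv(A)     # separating matrix chosen as A^{-1}
Y = W @ g(X)             # W g(f(A S)) = S

print(np.allclose(Y, S))  # True: recovery up to a (here trivial) scaling
```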

Definition 8 A subset P ⊂ ℝⁿ is called a parallelepiped if it is the linear image of a box, that is,

P = A([a₁, b₁] × … × [aₙ, bₙ])

for aᵢ < bᵢ, i = 1, …, n, and A ∈ Gl(n). A parallelepiped P is said to be tilted if A is mixing and no 2 × 2-minor of A is absolutely degenerate. Let i ≠ j ∈ {1, …, n} and c ∈ {a₁, b₁} × … × {aₙ, bₙ}; then

A({c₁} × … × [aᵢ, bᵢ] × … × [aⱼ, bⱼ] × … × {cₙ})

is called a 2-face of P, and A(c) is called a corner of P. For n = 2, parallelepipeds are called parallelograms.
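As a concrete handle on definition 8, the following sketch (with illustrative values for A and the intervals; whether A is mixing with no absolutely degenerate minor depends on definition 6, which is not restated here) enumerates the 2ⁿ corners A(c) of a parallelepiped and samples one of its 2-faces.

```python
import itertools
import numpy as np

# Illustrative parallelepiped P = A([a1, b1] x ... x [an, bn]) in R^3.
A = np.array([[1.0, 0.4, 0.2],
              [0.3, 1.0, 0.5],
              [0.1, 0.2, 1.0]])
a = np.array([-1.0, 0.0, 2.0])
b = np.array([ 1.0, 1.0, 3.0])

# Corners: images A(c) of all 2^n endpoint combinations c.
corners = [A @ np.array(c) for c in itertools.product(*zip(a, b))]
print(len(corners))  # 8 corners for n = 3

# One 2-face: vary coordinates i = 0 and j = 1, fix the third at a3.
ts = np.linspace(0.0, 1.0, 5)
face = [A @ np.array([a[0] + s * (b[0] - a[0]),
                      a[1] + t * (b[1] - a[1]),
                      a[2]])
        for s in ts for t in ts]
```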

Lemma 9 Let f₁, …, fₙ be n one-dimensional analytic injective functions with f′ᵢ ≠ 0, and let f := f₁ × ⋯ × fₙ be the induced injective mapping on ℝⁿ. Let P, Q ⊂ ℝⁿ be two parallelepipeds, one of them tilted. If f(P) = Q (or, equivalently for the boundaries, f(∂P) = ∂Q), then f is affine linear and diagonal.

Here ∂P denotes the boundary of the parallelepiped P, i.e. the set of points of P not lying in its interior (which coincides with the union of its faces).
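The force of lemma 9 can be seen numerically (illustrative choices throughout, with A assumed mixing and non-degenerate): a non-affine diagonal map such as componentwise tanh bends the edges of a tilted parallelogram, so its image cannot be a parallelogram, whose edges are straight line segments.

```python
import numpy as np

# Illustrative tilted parallelogram P = A([0,1] x [0,1]) in R^2.
A = np.array([[1.0, 0.5],
              [0.5, 1.0]])
f = np.tanh  # diagonal, analytic, injective, f' != 0, but not affine

# Three points along one edge of P, then their images under f.
ts = np.linspace(0.0, 1.0, 3)
edge = np.array([A @ np.array([t, 0.0]) for t in ts])
image = f(edge)

# Collinearity test: signed area of the triangle p0, p1, p2.
p0, p1, p2 = image
area = (p1[0] - p0[0]) * (p2[1] - p0[1]) - (p1[1] - p0[1]) * (p2[0] - p0[0])
print(abs(area) > 1e-6)  # True: the image edge is bent, not a line segment
```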

In the proof we will see that the requirement that P or Q be tilted can be weakened slightly: it suffices that enough 2-minors are not absolutely degenerate. In any case, the set of mixing matrices having no absolutely degenerate 2 × 2-minor is very large, in the sense that its complement has measure zero in Gl(n).

