
Mathematics in Independent Component Analysis


Chapter 4. Neurocomputing 64:223-234, 2005

f(ax) = −f(x) implies that f ≡ 0. In the case b = 1, f is again constant, since f(ax) = f(x) and a^0 = 1 = b.

By differentiating the homogeneity equation m times we finally get b f^(m)(x) = a^m f^(m)(ax), where f^(m) denotes the m-th derivative of f. Evaluating this at 0 yields b f^(m)(0) = a^m f^(m)(0). Since f is assumed to be analytic, f is determined uniquely by its derivatives at 0. Now either there exists an n ≥ 0 with b = a^n, hence f^(m)(0) = 0 for all m ≠ n and therefore f(x) = cx^n, or else f ≡ 0. ✷

Definition 3 [10] We call a random vector X with density pX bounded if its density pX is bounded. Denote by supp pX := cl{x | pX(x) ≠ 0} the support of pX, i.e. the closure of the set of non-zero points of pX.

We further call an independent random vector X fully bounded if supp pXi is an interval for all i; we then get supp pX = [a1, b1] × … × [an, bn].

Since a connected component of supp pX induces a restricted, fully bounded random vector, without loss of generality we will in the following assume fully bounded densities. In the case of linear instantaneous BSS the following separability result is well known and can be derived from a more general version of this theorem for non-bounded densities [3]. In the context of fully bounded random vectors, however, it already follows from the fact that in this case independence is equivalent to having support within a cube with sides parallel to the coordinate planes, and only matrices equivalent to the identity leave this property invariant:

Theorem 4 (Separability of bounded linear BSS) Let M ∈ Gl(n) be an invertible matrix and S a fully bounded independent random vector. If MS is again independent, then M is equivalent to the identity.
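The cube-support fact behind this theorem can be illustrated numerically. A sketch (my own illustration, not the paper's proof): M maps the cube [0,1]^n to a parallelepiped of volume |det M|, while the smallest axis-parallel box containing that image has volume ∏_i ∑_j |M_ij|; the two volumes agree exactly when the image is again an axis-parallel box, i.e. when M is a scaled permutation matrix.

```python
import numpy as np

# Sketch: does M map the axis-parallel cube [0,1]^n onto an
# axis-parallel box? Compare the image volume |det M| with the
# volume of its axis-parallel bounding box.
def is_box_preserving(M, tol=1e-9):
    vol_image = abs(np.linalg.det(M))
    vol_bounding_box = np.prod(np.abs(M).sum(axis=1))
    return abs(vol_image - vol_bounding_box) < tol

assert is_box_preserving(np.diag([2.0, -3.0]))                  # scaling
assert is_box_preserving(np.array([[0.0, 1.5], [-2.0, 0.0]]))   # scaled permutation
theta = 0.3
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
assert not is_box_preserving(R)                                 # rotation tilts the box
```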

This theorem indeed proves separability of the linear ICA model, because if X = AS and W is a demixing matrix such that WX is independent, then M := WA ∼ I, so W⁻¹ ∼ A as desired. As the model is invertible and the indeterminacies are trivial, identifiability and uniqueness follow directly.
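The demixing argument above can be sketched in code. Here "equivalent to the identity" is taken as M = DP with D an invertible diagonal matrix and P a permutation matrix; the mixing matrix and scalings below are arbitrary illustrative choices:

```python
import numpy as np

# Sketch of the identifiability argument (illustrative values):
# if W A = M with M equivalent to the identity, then W^{-1} = A M^{-1},
# i.e. W^{-1} equals A up to scaling and permutation.
rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))   # hypothetical mixing matrix
D = np.diag([2.0, -0.5, 3.0])     # scaling indeterminacy
P = np.eye(3)[[2, 0, 1]]          # permutation indeterminacy
M = D @ P                         # M ~ I
W = M @ np.linalg.inv(A)          # a demixing matrix with W A = M
assert np.allclose(W @ A, M)
assert np.allclose(np.linalg.inv(W), A @ np.linalg.inv(M))  # W^{-1} ~ A
```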

3 Separability of postnonlinear BSS

In this section we introduce the postnonlinear BSS model and further discuss its identifiability.

Definition 5 [9] A function f : R^n → R^n is called diagonal or componentwise if each component fi(x) of f(x) depends only on the variable xi.
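A minimal example of a diagonal map in the sense of Definition 5 (the particular component nonlinearities are arbitrary illustrative choices):

```python
import numpy as np

# A diagonal (componentwise) map f: R^3 -> R^3: each component f_i
# depends only on x_i. The nonlinearities here are chosen for illustration.
def f(x):
    return np.array([np.tanh(x[0]), x[1]**3, np.sinh(x[2])])

x = np.array([0.5, 2.0, -1.0])
y = f(x)
# Changing x_1 alters only the second output component:
x2 = x.copy(); x2[1] = 10.0
y2 = f(x2)
assert y[0] == y2[0] and y[2] == y2[2] and y[1] != y2[1]
```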
