Mathematics in Independent Component Analysis

Chapter 20. Signal Processing 86(3):603-623, 2006

sparseness is measured by combining the Euclidean norm ‖·‖₂ and the 1-norm ‖x‖₁ := Σᵢ |xᵢ| as follows:

$$\operatorname{sparseness}(x) := \frac{\sqrt{n} - \|x\|_1 / \|x\|_2}{\sqrt{n} - 1} \qquad (3)$$

if x ∈ ℝⁿ \ {0}. So sparseness(x) = 1 (maximal) if x contains n − 1 zeros, and it reaches zero if the absolute values of all coefficients of x coincide.
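As a quick numerical illustration (a sketch in Python/NumPy, not the authors' code), Eq. (3) can be evaluated directly:

```python
import numpy as np

def sparseness(x):
    """Hoyer's sparseness measure, Eq. (3); defined for nonzero x."""
    n = x.size
    l1 = np.abs(x).sum()             # 1-norm
    l2 = np.sqrt(np.sum(x ** 2))     # Euclidean norm
    return (np.sqrt(n) - l1 / l2) / (np.sqrt(n) - 1)

# n - 1 zeros: maximal sparseness 1
print(sparseness(np.array([0.0, 0.0, 5.0])))   # 1.0
# all coefficients equal in absolute value: sparseness (numerically) 0
print(sparseness(np.array([1.0, -1.0, 1.0])))
```

The two extreme cases printed above match the behavior stated in the text.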

The devised algorithm is based on the iterative application of a gradient descent step and a projection step, thus restricting the search to the subspace of sparse solutions. We perform the factorization using the publicly available Matlab library nmfpack 1, which is used in [14].
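nmfpack itself is a Matlab library; as a rough illustrative sketch of the underlying NMF iteration in Python/NumPy, the classical multiplicative updates (without Hoyer's additional sparseness projection step, which nmfpack interleaves with the updates) look like:

```python
import numpy as np

def nmf(X, r, n_iter=500, seed=0):
    """Plain multiplicative-update NMF: X ~= A @ S with A, S >= 0.

    Illustrative stand-in only: Hoyer's algorithm alternates such
    update steps with a projection enforcing a target sparseness.
    """
    rng = np.random.default_rng(seed)
    m, t = X.shape
    A = rng.random((m, r)) + 0.1     # positive initialization
    S = rng.random((r, t)) + 0.1
    eps = 1e-12                      # guard against division by zero
    for _ in range(n_iter):
        S *= (A.T @ X) / (A.T @ A @ S + eps)
        A *= (X @ S.T) / (A @ S @ S.T + eps)
    return A, S

# Toy example: a nonnegative rank-2 matrix is recovered to small error
rng = np.random.default_rng(1)
X = rng.random((6, 2)) @ rng.random((2, 40))
A, S = nmf(X, r=2)
print(np.linalg.norm(X - A @ S) / np.linalg.norm(X))  # small relative error
```

The multiplicative form of the updates preserves nonnegativity of A and S at every iteration, which is what makes it a natural building block for the projected scheme.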

So NMF decomposes X into nonnegative A and nonnegative S. The assumption that A has nonnegative coefficients is very well fulfilled by s-EMG recordings; however, as seen before, the sources also have negative entries. In order to be able to apply the algorithms, we therefore preprocess the data using the function

$$\kappa(x) = \begin{cases} 0, & x < 0 \\ x, & x \geq 0 \end{cases} \qquad (4)$$

to cut off negative values; this yields the new random vector (sample matrix) X₊ := (κ(X₁), …, κ(Xₙ))⊤. For comparison, we also construct a new sample set by simply leaving out samples that have at least one negative value. Here we model this by the random vector X∗.
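In NumPy terms (a hypothetical sketch with toy data standing in for the s-EMG sample matrix), the two preprocessing variants amount to:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(4, 1000))       # toy stand-in for the recordings

# X_plus: apply kappa from Eq. (4) componentwise, clipping negatives to 0
X_plus = np.maximum(X, 0.0)

# X_star: instead drop every sample (column) containing a negative value
X_star = X[:, (X >= 0).all(axis=0)]

print(X_plus.shape)       # same size as X, but no negative entries
print(X_star.shape[1])    # far fewer samples survive the filtering
```

Note the trade-off the text implies: X₊ keeps all samples but distorts their values, while X∗ keeps values intact but discards most samples (here roughly 1/2⁴ of the columns survive for symmetric noise).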

2.2.3 Sparse component analysis<br />

Sparse component analysis (SCA) [3,4] requires only strong sparseness of the sources; this is then sufficient to decompose the observations. In order to define the SCA model, a vector x ∈ ℝⁿ is said to be k-sparse if x has at most k non-zero entries. This k-sparseness implies k₀-sparseness for k₀ ≥ k. If an n-dimensional vector is k-sparse for k = n − 1, it is simply said to be sparse. The goal of sparse component analysis of level k (k-SCA) is to decompose X into X = AS as above such that each sample (i.e. column) of S is k-sparse. In the following we will assume k = n − 1.
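The sparseness condition on the source samples is easy to state in code; a small hypothetical check (Python/NumPy, with a made-up source matrix) of the k-SCA condition for k = n − 1:

```python
import numpy as np

def is_k_sparse(x, k):
    """A vector is k-sparse iff it has at most k non-zero entries."""
    return np.count_nonzero(x) <= k

# k-SCA with k = n - 1: every column (sample) of S needs at least one zero
S = np.array([[1.0, 0.0, 2.0],
              [0.0, 3.0, 0.0],
              [4.0, 5.0, 0.0]])
n = S.shape[0]
print(all(is_k_sparse(S[:, j], n - 1) for j in range(S.shape[1])))  # True
```

Since k-sparseness implies k₀-sparseness for k₀ ≥ k, a column passing this test for some k also passes it for every larger level.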

Note that, in contrast to the ICA model, the above model is not translation invariant. However, it is easy to see that if instead of A we allow an affine linear transformation, the translation constant can be determined from X alone as long as the sources are non-deterministic. In other words, instead of assuming k-sparseness of the sources we could also assume that at any time

1 http://www.cs.helsinki.fi/u/phoyer/

