Mathematics in Independent Component Analysis


Chapter 20. Signal Processing 86(3):603-623, 2006

A given data set X (an (m × N)-matrix) can be decomposed in different ways. One of the simplest approaches is a linear decomposition, X = AS, with A an (m × n)-matrix (the mixing matrix) and S an (n × N)-matrix storing the sources. Both A and S are unknown, hence this problem is often described as blind source separation (BSS). To obtain a well-defined problem, A and S have to satisfy additional properties, such as the following (a numerical sketch of all three decompositions is given after the list):

• the source components Si (rows of S) are assumed to be realizations of stochastically independent random variables — this method is called independent component analysis (ICA) [1,2],

• the sources S are required to contain as many zeros as possible — we then speak of sparse component analysis (SCA) [3,4],

• A and S are assumed to be nonnegative, which is denoted by nonnegative matrix factorization (NMF) [5].
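
As a rough illustration of these three models, the following Python sketch generates a toy mixture X = AS and applies standard off-the-shelf estimators to it. It is a minimal sketch assuming NumPy and scikit-learn are available; FastICA, DictionaryLearning (used here merely as a sparsity-based stand-in for SCA) and NMF are generic choices for illustration, not the specific algorithms compared in this chapter.

# Minimal numerical sketch of the three decompositions, assuming NumPy and
# scikit-learn; the estimators below are generic stand-ins chosen for
# illustration (DictionaryLearning approximates the sparsity requirement of
# SCA), not the algorithms compared later in this chapter.
import numpy as np
from sklearn.decomposition import FastICA, NMF, DictionaryLearning

rng = np.random.default_rng(0)
n, m, N = 3, 4, 1000                    # n sources, m sensors, N samples

S = rng.laplace(size=(n, N))            # independent, sparse-ish sources
A = rng.standard_normal((m, n))         # unknown (m x n) mixing matrix
X = A @ S                               # observed (m x N) data matrix

# ICA: recover statistically independent rows of S (up to permutation/scale).
ica = FastICA(n_components=n, random_state=0)
S_ica = ica.fit_transform(X.T).T        # scikit-learn expects samples in rows
A_ica = ica.mixing_                     # estimate of the mixing matrix A

# SCA, illustrated via sparse coding: source rows with many (near-)zeros.
sca = DictionaryLearning(n_components=n, alpha=1.0, random_state=0)
S_sca = sca.fit_transform(X.T).T        # sparse codes play the role of S

# NMF: both factors nonnegative, so the data are shifted to be nonnegative.
nmf = NMF(n_components=n, init="nndsvda", max_iter=500, random_state=0)
W = nmf.fit_transform(X.T - X.min())    # W.T plays the role of S
H = nmf.components_                     # H.T plays the role of A

All three factorizations recover A and S only up to permutation and scaling of the components, which is why dedicated performance indices are needed for the comparison below.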

The above-mentioned models as well as their interplay have recently been in the focus of many researchers, for instance concerning the question of how ICA and sparseness are related to each other and how they can be integrated into BSS algorithms [6-10], how to deal with nonnegativity in the ICA case [11,12], or how to extend NMF in order to include sparseness [13,14]. Much work has already been devoted to these subjects, and their applications to various fields are currently emerging. Indeed, linear representations such as the above have several potential applications, including the decomposition of objects into 'natural' components [5], redundancy and dimensionality reduction [2], biomedical data analysis, micro-array data mining or enhancement, feature extraction of images in nuclear medicine, etc. [1,2].

In this study, we analyze and compare the above models, not from a theoretical point of view but on a concrete, real-world example, namely the analysis of surface electromyogram (s-EMG) data sets. An electromyogram (EMG) denotes the electric signal generated by a contracting muscle [15]; its study is relevant to the diagnosis of motoneuron diseases [16] as well as to neurophysiological research [17]. In general, EMG measurements make use of invasive, painful needle electrodes. An alternative is s-EMG, which is measured using non-invasive, painless surface electrodes. In this case, however, the signals are considerably more difficult to interpret due to noise and the overlap of several source signals, as shown in Fig. 1(a). We have already applied ICA to the s-EMG decomposition problem [18]; however, performance on real-world noisy s-EMG is still problematic, and it is as yet unknown whether the assumption of independent sources holds well in the s-EMG setting.

In the present work, we apply sparse BSS methods based on various model assumptions to s-EMG signals. We first outline each of those methods and the corresponding performance indices used for their comparison. We then present the decompositions obtained with each method, and finally discuss these results in Section 4.

