Mathematics in Independent Component Analysis

Chapter 1. Statistical machine learning of biomedical data

1.7 Outlook

We considered the (mostly linear) factorization problem

x(t) = As(t) + n(t). (1.18)

Often, the noise n(t) was not explicitly modeled but included in s(t). We assumed that x(t) is known, as well as some additional information about the system itself. Depending on the assumptions, different problems and algorithmic solutions can be derived to solve such inverse problems:

• statistically independent s(t): we proved that in this case ICA can solve (1.18) uniquely, see section 1.2; this holds even in some nonlinear generalizations.

• approximate spatiotemporal independence in A and s(t): we provided a very robust, simple algorithm for spatiotemporal separation based on joint diagonalization, see section 1.3.2.

• statistical independence between groups of sources s(t): again, uniqueness except for transformations within the blocks can be proven, and the constructive proof results in a simple update algorithm as in the linear case, see section 1.3.3.

• sparseness of the sources s(t): in the context of SCA, we relaxed the assumption of single-source sparseness to multi-dimensional sparseness constraints, for which we were still able to prove uniqueness and to derive an algorithm, see section 1.4.1.

• non-negativity of A and s(t): a sparse extension of the plain NMF model was analyzed in terms of existence and uniqueness, and a generalization to lp-sparse sources was proposed in order to better approximate combinatorial sparseness in the l0 sense, see section 1.4.2.

• Gaussian or noisy components in s(t): we proposed various denoising and dimension-reduction schemes, and proved uniqueness of a non-Gaussian signal subspace in section 1.5.
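The linear model (1.18) and its inversion under the independence assumption can be illustrated with a small, self-contained sketch. The toy sources, mixing matrix, noise level, and the generic symmetric FastICA-style fixed-point iteration below are illustrative choices, not the specific algorithms of this chapter:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy instance of x(t) = A s(t) + n(t): two independent non-Gaussian
# sources (a square wave and uniform noise -- illustrative choices).
t = np.linspace(0, 1, 2000)
S = np.vstack([np.sign(np.sin(2 * np.pi * 7 * t)),    # square wave
               rng.uniform(-1.0, 1.0, size=2000)])    # uniform noise
A = np.array([[1.0, 0.6],
              [0.4, 1.0]])
X = A @ S + 0.01 * rng.standard_normal((2, 2000))     # small noise n(t)

# Whitening: decorrelate and rescale the observed mixtures.
Xc = X - X.mean(axis=1, keepdims=True)
d, E = np.linalg.eigh(Xc @ Xc.T / Xc.shape[1])
Z = (E @ np.diag(d ** -0.5) @ E.T) @ Xc

# Symmetric FastICA-style fixed-point iteration with a tanh contrast.
W = np.linalg.qr(rng.standard_normal((2, 2)))[0]
for _ in range(200):
    Y = W @ Z
    g, gp = np.tanh(Y), 1.0 - np.tanh(Y) ** 2
    W_new = g @ Z.T / Z.shape[1] - np.diag(gp.mean(axis=1)) @ W
    U, _, Vt = np.linalg.svd(W_new)   # symmetric decorrelation
    W = U @ Vt

Y = W @ Z  # recovered sources, up to permutation and sign
```

Since the model is identifiable only up to permutation and scaling, recovery is checked by matching each true source to its most strongly correlated estimate.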

Finally, in section 1.6, we applied some of the above methods to biomedical data sets recorded by functional MRI, surface EMG and optical microscopy.

Other work<br />

In this summary, data analysis was discussed from the viewpoint of data factorization models such as (1.18). Before concluding, some other works of the author in related areas should be mentioned:

Originally discussed primarily in mathematics, optimization on Lie groups has become an important topic in the field of machine learning and neural networks, since many cost functions are defined on parameter spaces that more naturally obey a non-Euclidean geometry. Consider for example Amari's natural gradient (Amari, 1998): simply by taking into account the geometry
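As a minimal illustration of this idea (a generic sketch under standard assumptions, not the author's specific derivation): for ICA-type contrasts over invertible matrices W, the natural gradient leads to the equivariant update W <- W + eta * (I - E[g(y) y^T]) W, which avoids the explicit matrix inversion of the ordinary Euclidean gradient. The toy Laplacian sources, mixing matrix, step size, and the tanh score below are assumed choices:

```python
import numpy as np

rng = np.random.default_rng(1)

# Two independent super-Gaussian (Laplacian) toy sources, linearly mixed.
S = rng.laplace(size=(2, 5000))
A = np.array([[1.0, 0.5],
              [0.3, 1.0]])
X = A @ S

# Natural-gradient ICA update: W <- W + eta * (I - E[g(y) y^T]) W,
# with g = tanh as a smooth score suited to super-Gaussian sources.
W, eta = np.eye(2), 0.05
for _ in range(400):
    Y = W @ X
    W = W + eta * (np.eye(2) - np.tanh(Y) @ Y.T / Y.shape[1]) @ W

Y = W @ X  # recovered sources, up to permutation and scale
```

Because the update multiplies the raw gradient by W^T W on the right, a change of mixing matrix A only permutes the trajectory; this equivariance is one of the practical payoffs of respecting the group geometry.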
