Mathematics in Independent Component Analysis

1.7. Outlook

[Figure 1.28: Future analysis framework; the data signal is decomposed into noise and independent structured models.]

By taking into account the Riemannian structure of the search space Gl(n) of all invertible (n × n)-matrices, Amari was able to considerably improve search performance and accuracy, thus providing an equivariant ICA algorithm. In Theis (2005b), we gave an overview of various gradient calculations on Gl(n) and presented generalizations to over- and undercomplete cases, realized by a semidirect product. These ideas were used in Squartini and Theis (2006), where we defined several alternative Riemannian metrics on the parameter space of non-square matrices, corresponding to various translations defined therein. Such metrics allowed us to derive novel, efficient learning rules for two ICA-based algorithms for overdetermined blind source separation.
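As an illustration of the square case only, the following is a minimal sketch of Amari's natural-gradient update on Gl(n); the score function phi, the step size, and the toy data are assumptions chosen for demonstration, and the over- and undercomplete generalizations mentioned above require the modified metrics rather than this plain rule.

    import numpy as np

    def natural_gradient_ica(X, eta=0.01, n_iter=2000, seed=0):
        """Square ICA via Amari's natural gradient on Gl(n).

        X is an (n, T) array of observed mixtures. The update
            W <- W + eta * (I - phi(Y) Y^T / T) W
        is the ordinary likelihood gradient right-multiplied by W^T W,
        i.e. the natural gradient respecting the group structure of Gl(n);
        it is equivariant, so convergence does not depend on the mixing
        matrix.
        """
        n, T = X.shape
        rng = np.random.default_rng(seed)
        W = np.eye(n) + 0.01 * rng.standard_normal((n, n))
        phi = np.tanh  # score function, assuming super-Gaussian sources
        for _ in range(n_iter):
            Y = W @ X                               # current source estimates
            G = np.eye(n) - (phi(Y) @ Y.T) / T
            W += eta * G @ W                        # equivariant update
        return W

    # Toy demo: two Laplacian sources mixed by a random invertible matrix.
    rng = np.random.default_rng(1)
    S = rng.laplace(size=(2, 5000))
    A = rng.standard_normal((2, 2))
    W = natural_gradient_ica(A @ S)
    print(W @ A)  # close to a scaled permutation matrix after convergence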

In Meyer-Baese et al. (2006), we studied optimization and statistical learning on a neural network that self-organized to solve a BSS problem. The resulting online learning solution used the nonstationarity of the sources to achieve the separation. For this, we divided the problem into two learning problems, one of which is solved by an anti-Hebbian and the other by a Hebbian learning process. The stability of related networks is discussed in Meyer-Bäse et al. (2006).
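The precise network and its two coupled learning rules are given in Meyer-Baese et al. (2006); as a rough, generic illustration of the anti-Hebbian half of such a scheme, the sketch below decorrelates the outputs of a linear feedback network through lateral weights (the network structure, learning rate, and toy signals are all assumptions, not the published model).

    import numpy as np

    # Generic anti-Hebbian lateral decorrelation, in the spirit of feedback
    # BSS networks: outputs satisfy y = x - C y, i.e. y = (I + C)^{-1} x,
    # and the lateral weights grow with the output correlations
    # (anti-Hebbian: delta c_ij proportional to y_i * y_j for i != j).
    rng = np.random.default_rng(0)
    T, eta = 20000, 0.002
    S = np.vstack([np.sign(np.sin(0.05 * np.arange(T))),  # toy source
                   rng.laplace(size=T)])
    A = np.array([[1.0, 0.6], [0.4, 1.0]])                # assumed mixing
    X = A @ S

    C = np.zeros((2, 2))
    for t in range(T):
        y = np.linalg.solve(np.eye(2) + C, X[:, t])       # network output
        dC = eta * np.outer(y, y)
        np.fill_diagonal(dC, 0.0)                         # lateral weights only
        C += dC

    Y = np.linalg.solve(np.eye(2) + C, X)
    print(np.corrcoef(Y))  # off-diagonal entries should be near zero

Note that such lateral decorrelation alone only enforces second-order independence; it is the nonstationarity of the sources, exploited by the second (Hebbian) learning problem, that makes decorrelation sufficient for separation.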

A major application of unsupervised learning in neuroscience lies in the analysis of functional MRI data sets. A problem we faced during the course of our analyses was the efficient storage and retrieval of many high-dimensional spatiotemporal data sets. Although some dimension reduction or region-of-interest selection may be performed beforehand to reduce the sample size (Keck et al., 2006), we wanted to compress the data as well as possible without losing information. For this, we proposed a novel lossless compression method named FTTcoder (Theis and Tanaka, 2005) for the compression of images and 3d sequences collected during a typical fMRI experiment. The large data sets involved in this popular medical application necessitated novel compression algorithms that take into account the structure of the recorded data as well as the experimental conditions, which include the 4d recordings, the stimulus protocol used, and marked regions of interest (ROIs). For this, we used simple temporal transformations and entropy coding with context modeling to encode the 4d scans after preprocessing with the ROI masking. The compression algorithm as well as the fMRI toolbox and the algorithms for spatiotemporal and subspace BSS are all available online at http://fabian.theis.name.
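The actual transform and context model of FTTcoder are described in Theis and Tanaka (2005); the sketch below only illustrates the general shape of such a lossless pipeline, with temporal differencing of the 4d volume followed by generic entropy coding, where zlib stands in for the context-based entropy coder and the array layout is an assumption.

    import numpy as np
    import zlib

    def compress_fmri(volume4d, roi_mask=None):
        """Illustrative lossless pipeline (not the FTTcoder algorithm):
        optional ROI masking, temporal differencing, then entropy coding.

        volume4d : int16 array of shape (T, X, Y, Z), one scan per step.
        roi_mask : optional boolean (X, Y, Z) array; voxels outside the
                   ROI are zeroed, so coding is lossless within the ROI.
        """
        data = volume4d.astype(np.int16).copy()
        if roi_mask is not None:
            data[:, ~roi_mask] = 0
        # Keep the first scan, store later scans as differences to their
        # predecessor; values concentrate near zero, lowering the entropy.
        diff = np.concatenate([data[:1], np.diff(data, axis=0)], axis=0)
        return zlib.compress(diff.tobytes(), 9)

    def decompress_fmri(blob, shape):
        diff = np.frombuffer(zlib.decompress(blob),
                             dtype=np.int16).reshape(shape)
        return np.cumsum(diff, axis=0, dtype=np.int16)  # invert differencing

    # Round-trip check on toy data that is smooth in time, as fMRI scans are.
    rng = np.random.default_rng(0)
    vol = np.cumsum(rng.integers(-2, 3, size=(20, 8, 8, 8), dtype=np.int16),
                    axis=0, dtype=np.int16)
    blob = compress_fmri(vol)
    assert np.array_equal(decompress_fmri(blob, vol.shape), vol)
    print(len(blob), "bytes vs", vol.nbytes, "bytes raw")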
