Mathematics in Independent Component Analysis

Chapter 16. Proc. ICA 2006, pages 917-925

Uniqueness of Non-Gaussian Subspace Analysis

Fabian J. Theis¹ and Motoaki Kawanabe²

¹ Institute of Biophysics, University of Regensburg, 93040 Regensburg, Germany
² Fraunhofer FIRST.IDA, Kekuléstraße 7, 12439 Berlin, Germany
fabian@theis.name and nabe@first.fhg.de

Abstract. Dimension reduction provides an important tool for preprocessing large-scale data sets. A possible model for dimension reduction is realized by projecting onto the non-Gaussian part of a given multivariate recording. We prove that the subspaces of such a projection are unique, given that the Gaussian subspace is of maximal dimension. This result therefore guarantees that projection algorithms uniquely recover the underlying lower-dimensional data signals.

An important open problem in signal processing is the task of efficient dimension reduction, i.e. the search for meaningful signals within a higher-dimensional data set. Classical techniques such as principal component analysis define 'meaningful' using second-order statistics (maximal variance), which may often be inadequate for signal detection, for example in the presence of strong noise. This contrasts with higher-order models such as projection pursuit [1,2] and non-Gaussian subspace analysis (NGSA) [3,4]. While the former extracts a single non-Gaussian independent component from the data set, the latter tries to detect a whole non-Gaussian subspace within the data, without any assumption of independence within that subspace.
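To make the contrast concrete, the following is a minimal sketch in Python with NumPy; the two-dimensional toy data and the kurtosis-based direction search are illustrative assumptions, not taken from the paper. The high-variance axis carries pure Gaussian noise and the low-variance axis carries a non-Gaussian (uniform) signal, so the top PCA direction points at the noise, while maximizing the absolute excess kurtosis over candidate directions recovers the signal axis.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 2-D data: the high-variance axis is Gaussian noise, the
# low-variance axis carries a non-Gaussian (uniform) signal.
n_samples = 10_000
noise = 3.0 * rng.standard_normal(n_samples)    # variance 9, Gaussian
signal = rng.uniform(-1.0, 1.0, n_samples)      # variance 1/3, non-Gaussian
X = np.column_stack([noise, signal])            # shape (n_samples, 2)

# PCA picks the eigenvector of the covariance with the largest
# eigenvalue, i.e. the noise axis.
eigvals, eigvecs = np.linalg.eigh(np.cov(X, rowvar=False))
pca_dir = eigvecs[:, np.argmax(eigvals)]

def excess_kurtosis(y):
    """Fourth standardized moment minus 3; zero for Gaussian data."""
    y = (y - y.mean()) / y.std()
    return np.mean(y**4) - 3.0

# Projection pursuit (toy version): scan directions on the unit circle
# and keep the one whose 1-D projection is most non-Gaussian.
angles = np.linspace(0.0, np.pi, 360, endpoint=False)
dirs = np.column_stack([np.cos(angles), np.sin(angles)])
scores = [abs(excess_kurtosis(X @ w)) for w in dirs]
pp_dir = dirs[int(np.argmax(scores))]

print("PCA direction:               ", np.round(pca_dir, 3))  # ~ (±1, 0)
print("projection-pursuit direction:", np.round(pp_dir, 3))   # ~ (0, ±1)
```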

The goal of linear dimension reduction can be defined as the search for a projection W ∈ Mat(n×d) of a d-dimensional random vector X with n < d, such that the projected vector WX still contains the meaningful part of the data.
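As a rough illustration of this search (again an assumption-laden sketch, not the NGSA algorithm analyzed in the paper), one can build W row by row: draw random unit vectors, keep the one whose one-dimensional projection of X is most non-Gaussian by excess kurtosis, orthogonalize subsequent candidates against the rows found so far, and repeat until W ∈ Mat(n×d) is filled. The function name nongaussian_projection and all parameters below are hypothetical.

```python
import numpy as np

def excess_kurtosis(y):
    y = (y - y.mean()) / y.std()
    return np.mean(y**4) - 3.0               # zero for Gaussian data

def nongaussian_projection(X, n, n_candidates=5000, seed=0):
    """Toy search for W in Mat(n x d): greedily pick, among random unit
    vectors orthogonal to the rows found so far, the one whose projection
    of X has the largest absolute excess kurtosis. A crude stand-in for
    NGSA, for illustration only."""
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    W = np.zeros((0, d))
    for _ in range(n):
        cand = rng.standard_normal((n_candidates, d))
        if len(W):                            # deflate: remove components
            cand -= cand @ W.T @ W            # along directions already found
        cand /= np.linalg.norm(cand, axis=1, keepdims=True)
        scores = [abs(excess_kurtosis(X @ w)) for w in cand]
        W = np.vstack([W, cand[int(np.argmax(scores))]])
    return W                                  # Y = X @ W.T is the n-dim signal

# Usage: a 5-D recording whose non-Gaussian part lives in the first two axes.
rng = np.random.default_rng(1)
X = np.column_stack([rng.uniform(-1, 1, 20000),        # non-Gaussian
                     rng.laplace(size=20000),          # non-Gaussian
                     rng.standard_normal((20000, 3))]) # 3-D Gaussian part
W = nongaussian_projection(X, n=2)
print(np.round(W, 2))  # rows concentrate (approximately) on the first two axes
```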
