Mathematics in Independent Component Analysis

Chapter 10. IEEE TNN 16(4):992–996, 2005

IEEE TRANSACTIONS ON NEURAL NETWORKS, VOL. 16, NO. 4, JULY 2005

IV. CONCLUSION

In this letter, an enhancement of the NBLM algorithm is proposed. It is shown that, by locally adapting one learning coefficient of the NBLM for each neighborhood, significant improvements in the performance of the method are achieved. The suggested modification requires only minor changes to the original algorithm, and reinforces the local character of the NBLM.

With the proposed local adaptation, the modified NBLM achieves better performance than the LM method even for very small neighborhood sizes. This allows very large NNs to be trained efficiently in a fraction of the time LM would require, while still reaching lower error rates. Moreover, it makes it possible to retain the efficiency of that method in those situations where the application of the original algorithm was impractical.
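The update both methods build on is the standard Levenberg–Marquardt step; the sketch below shows it together with a neighborhood-wise variant in which each group of weights gets its own damping coefficient, mirroring the local adaptation described above. The function names, the neighborhood partition, and the toy data are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def lm_step(J, e, mu):
    """One Levenberg-Marquardt update: dw = -(J^T J + mu*I)^{-1} J^T e."""
    n = J.shape[1]
    return -np.linalg.solve(J.T @ J + mu * np.eye(n), J.T @ e)

def nblm_like_step(J, e, neighborhoods, mus):
    """Neighborhood-wise sketch: each weight neighborhood is updated by
    solving a reduced LM system with its own, locally adapted mu."""
    dw = np.zeros(J.shape[1])
    for idx, mu in zip(neighborhoods, mus):
        Jn = J[:, idx]                 # Jacobian columns of this neighborhood
        dw[idx] = lm_step(Jn, e, mu)   # solve the small reduced system
    return dw

# toy problem: residual e and Jacobian J over 4 weights, 2 neighborhoods
rng = np.random.default_rng(0)
J = rng.standard_normal((10, 4))
e = rng.standard_normal(10)
dw = nblm_like_step(J, e, [np.array([0, 1]), np.array([2, 3])], [0.1, 1.0])
```

Because each reduced system involves only a neighborhood's columns of the Jacobian, the per-step cost stays small even when the full network is very large, which is the efficiency argument made above.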

ACKNOWLEDGMENT

The authors would like to thank the reviewers for their useful hints and constructive comments throughout the reviewing process.



Sparse Component Analysis and Blind Source Separation of Underdetermined Mixtures

Pando Georgiev, Fabian Theis, and Andrzej Cichocki

Abstract—In this letter, we solve the problem of identifying matrices S and A knowing only their multiplication X = AS, under some conditions, expressed either in terms of A and sparsity of S (identifiability conditions), or in terms of X (sparse component analysis (SCA) conditions). We present algorithms for such identification and illustrate them by examples.

Index Terms—Blind source separation (BSS), sparse component analysis (SCA), underdetermined mixtures.

I. INTRODUCTION

One of the fundamental questions in data analysis, signal processing, data mining, neuroscience, etc. is how to represent a large data set X (given in the form of an (m × N)-matrix) in different ways. A simple approach is a linear matrix factorization

X = AS,   A ∈ ℝ^(m×n),   S ∈ ℝ^(n×N)   (1)

where the unknown matrices A (dictionary) and S (source signals) have some specific properties, for instance: 1) the rows of S are (discrete) random variables which are statistically independent as much as possible—this is the independent component analysis (ICA) problem; 2) S contains as many zeros as possible—this is the sparse representation or sparse component analysis (SCA) problem; 3) the elements of X, A, and S are nonnegative—this is nonnegative matrix factorization (NMF) [8].
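As a minimal illustration of factorization (1), the following sketch builds a sparse source matrix S, mixes it through a random dictionary A, and checks the dimensions. All matrices here are synthetic toy data, not taken from any cited experiment.

```python
import numpy as np

rng = np.random.default_rng(1)
m, n, N = 3, 5, 200            # sensors, sources, samples

# sparse sources: most entries of S are exactly zero (SCA-style property 2)
S = rng.standard_normal((n, N)) * (rng.random((n, N)) < 0.2)

A = rng.standard_normal((m, n))  # unknown dictionary (mixing matrix)
X = A @ S                        # observed data set, an (m x N)-matrix

print(X.shape)                   # (3, 200)
```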

There is a large number of papers devoted to ICA problems [2], [5], but mostly for the case m ≥ n. We refer to [1], [6], [7], and [9]–[11] for some recent papers on SCA and underdetermined ICA (m < n).

A related problem is the so-called blind source separation (BSS) problem, in which we know a priori that a representation such as in (1) exists and the task is to recover the sources (and the mixing matrix) as accurately as possible. A fundamental property of the complete BSS problem is that such a recovery (under the assumptions in 1) and non-Gaussianity of the sources) is possible only up to permutation and scaling of the sources, which makes the BSS problem so attractive.
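The permutation and scaling indeterminacy can be checked directly: rescaling and permuting the sources, with the inverse change absorbed into the mixing matrix, leaves the observed data X unchanged. The matrices below are synthetic and serve only to verify the identity X = AS = (A(DP)^-1)(DPS).

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((4, 4))
S = rng.standard_normal((4, 100))

P = np.eye(4)[[2, 0, 3, 1]]          # permutation matrix
D = np.diag([2.0, -1.0, 0.5, 3.0])   # nonzero scalings

# X = A S = (A (DP)^{-1}) (D P S): the permuted, rescaled sources D P S
# mixed by the compensated matrix reproduce exactly the same observations.
X1 = A @ S
X2 = (A @ np.linalg.inv(D @ P)) @ (D @ P @ S)
assert np.allclose(X1, X2)
```

This is why, without extra assumptions, no algorithm can distinguish (A, S) from the permuted and rescaled pair: the data are literally identical.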

In this letter, we consider the SCA and BSS problems in the underdetermined case (m < n, i.e., more sources than sensors, which is the more challenging problem), where the additional information compensating for the limited number of sensors is the sparseness of the sources. It should be noted that this problem is quite general and fundamental, since the sources need not be sparse in the time domain: it is sufficient to find a linear transformation (e.g., wavelet packets) in which the sources are sufficiently sparse.
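The transform-domain remark can be illustrated with a Fourier basis rather than wavelet packets (chosen here only to keep the sketch NumPy-only): a sum of a few sinusoids is dense in the time domain but sparse in the frequency domain. The signal and frequencies are arbitrary assumptions for the example.

```python
import numpy as np

t = np.arange(1024) / 1024.0
# dense in time: almost no sample of s is zero
s = np.sin(2 * np.pi * 50 * t) + 0.5 * np.sin(2 * np.pi * 120 * t)

# sparse in frequency: only the two bins at 50 and 120 carry energy
c = np.fft.rfft(s)
significant = int(np.sum(np.abs(c) > 1e-6 * np.abs(c).max()))
print(significant)  # 2
```

Applying such a transform to each mixture channel preserves the mixing model (the transform is linear), so the SCA machinery can be run on the transformed data.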

In the sequel, we present new algorithms for solving the BSS problem: a matrix identification algorithm and a source recovery algorithm, under the conditions that the source matrix S has at most m − 1 nonzero elements in each column and that the identifiability conditions

Manuscript received November 20, 2003; revised July 25, 2004.

P. Georgiev is with the ECECS Department, University of Cincinnati, Cincinnati, OH 45221 USA (e-mail: pgeorgie@ececs.uc.edu).

F. Theis is with the Institute of Biophysics, University of Regensburg, D-93040 Regensburg, Germany (e-mail: fabian@theis.name).

A. Cichocki is with the Laboratory for Advanced Brain Signal Processing, Brain Science Institute, The Institute of Physical and Chemical Research (RIKEN), Saitama 351-0198, Japan (e-mail: cia@bsp.brain.riken.jp).

Digital Object Identifier 10.1109/TNN.2005.849840

