160 Chapter 10. IEEE TNN 16(4):992-996, 2005

... the excellent performance of our other two algorithms in the underdetermined BSS problem, for the separation of artificially created signals with a sufficient level of sparseness.

REFERENCES

[1] F. Abrard, Y. Deville, and P. White, "From blind source separation to blind source cancellation in the underdetermined case: A new approach based on time-frequency analysis," in Proc. 3rd Int. Conf. Independent Component Analysis and Signal Separation (ICA'2001), San Diego, CA, Dec. 9-13, 2001, pp. 734-739.
[2] A. Cichocki and S. Amari, Adaptive Blind Signal and Image Processing. New York: Wiley, 2002.
[3] S. Chen, D. Donoho, and M. Saunders, "Atomic decomposition by basis pursuit," SIAM J. Sci. Comput., vol. 20, no. 1, pp. 33-61, 1998.
[4] D. Donoho and M. Elad, "Optimally sparse representation in general (nonorthogonal) dictionaries via ℓ1 minimization," Proc. Nat. Acad. Sci., vol. 100, no. 5, pp. 2197-2202, 2003.
[5] A. Hyvärinen, J. Karhunen, and E. Oja, Independent Component Analysis. New York: Wiley, 2001.
[6] A. Jourjine, S. Rickard, and O. Yilmaz, "Blind separation of disjoint orthogonal signals: Demixing N sources from 2 mixtures," in Proc. 2000 IEEE Conf. Acoustics, Speech, and Signal Processing (ICASSP'00), vol. 5, Istanbul, Turkey, Jun. 2000, pp. 2985-2988.
[7] T.-W. Lee, M. S. Lewicki, M. Girolami, and T. J. Sejnowski, "Blind source separation of more sources than mixtures using overcomplete representations," IEEE Signal Process. Lett., vol. 6, no. 4, pp. 87-90, 1999.
[8] D. D. Lee and H. S. Seung, "Learning the parts of objects by nonnegative matrix factorization," Nature, vol. 401, pp. 788-791, 1999.
[9] F. J. Theis, E. W. Lang, and C. G. Puntonet, "A geometric algorithm for overcomplete linear ICA," Neurocomput., vol. 56, pp. 381-398, 2004.
[10] K. Waheed and F. Salem, "Algebraic overcomplete independent component analysis," in Proc. Int. Conf. Independent Component Analysis (ICA'03), Nara, Japan, pp. 1077-1082.
[11] M. Zibulevsky and B. A. Pearlmutter, "Blind source separation by sparse decomposition in a signal dictionary," Neural Comput., vol. 13, no. 4, pp. 863-882, 2001.

Equivalence Between RAM-Based Neural Networks and Probabilistic Automata

Marcilio C. P. de Souto, Teresa B. Ludermir, and Wilson R. de Oliveira

Manuscript received July 6, 2003; revised October 26, 2004. M. C. P. de Souto is with the Department of Informatics and Applied Mathematics, Federal University of Rio Grande do Norte, Campus Universitario, Natal-RN 59072-970, Brazil (e-mail: marcilio@dimap.ufrn.br). T. B. Ludermir is with the Center of Informatics, Federal University of Pernambuco, Recife 50732-970, Brazil. W. R. de Oliveira is with the Department of Physics and Mathematics, Rural Federal University of Pernambuco, Recife 52171-900, Brazil. Digital Object Identifier 10.1109/TNN.2005.849838

Abstract—In this letter, the computational power of a class of random access memory (RAM)-based neural networks, called general single-layer sequential weightless neural networks (GSSWNNs), is analyzed. The theoretical results presented, besides helping to explain the temporal behavior of these networks, could also provide useful insights for the development of new learning algorithms.

Index Terms—Automata theory, computability, random access memory (RAM) node, probabilistic automata, RAM-based neural networks, weightless neural networks (WNNs).

I. INTRODUCTION

The neuron model used in the great majority of work involving neural networks is related to variations of the McCulloch–Pitts neuron, which will be called the weighted neuron. A typical weighted neuron can be described by a linear weighted sum of the inputs, followed by some nonlinear transfer function [1], [2]. In this letter, however, the neural computing models studied are based on artificial neurons which often have binary inputs and outputs, and no adjustable weights between nodes. Neuron functions are stored in lookup tables, which can be implemented using commercially available random access memories (RAMs). These systems and the nodes they are composed of will be described, respectively, as weightless neural networks (WNNs) and weightless nodes [1], [2]. They differ from other models, such as weighted neural networks, whose training is accomplished by means of adjustments of weights. In the literature, the terms "RAM-based" and "N-tuple based" have been used to refer to WNNs.

In this letter, the computability (computational power) of a class of WNNs, called general single-layer sequential weightless neural networks (GSSWNNs), is investigated. This class is an important representative of the research on temporal pattern processing in WNNs [3]–[8]. As one of the contributions, an algorithm (constructive proof) to map any probabilistic automaton (PA) into a GSSWNN is presented. In fact, the proposed method not only allows the construction of any PA, but also increases the class of functions that can be computed by such networks. For instance, at a theoretical level, these networks are not restricted to finite-state languages (regular languages) and can now deal with some context-free languages. Practical motivations for investigating probabilistic automata and GSSWNNs are found in their possible application to, among other things, syntactic pattern recognition, multimodal search, and learning control [9].

II. DEFINITIONS

A. Probabilistic Automata

Probabilistic automata are a generalization of ordinary deterministic finite-state automata (DFA) in which an input symbol can take the automaton into any of its states with a certain probability [9].

Definition 2.1: A PA is a 5-tuple A_P = (Σ, Q, M, q_I, F), where
• Σ = {σ_1, σ_2, ..., σ_|Σ|} is a finite set of ordered symbols called the input alphabet;
• Q = {q_1, q_2, ..., q_n} is a finite set of states;
• M is a mapping of Σ into the set of n × n stochastic state transition matrices (where n is the number of states in Q). The interpretation of M(a_k), a_k ∈ Σ, can be stated as follows: M(a_k) = [m_ij(a_k)], where m_ij(a_k) ≥ 0 is the probability of entering state q_j from state q_i under input a_k, and the sum of m_ij(a_k) over j = 1, ..., n equals 1 for every i = 1, ..., n. The domain of M can be extended from Σ to Σ* by defining the following: 1) M(ε) = I_n, where ε is the empty string and I_n is the n × n identity matrix; 2) M(a_k1 a_k2 ... a_km) = M(a_k1) M(a_k2) ... M(a_km), where m ≥ 2 and a_ki ∈ Σ, i = 1, ..., m;
• q_I is the initial state in which the machine is found before the first symbol of the input string is processed;
• F is the set of final states (F ⊆ Q).

The language accepted by a PA A_P is T(A_P) = {(ω, p(ω)) | ω ∈ Σ*, p(ω) = π_0 M(ω) π_F > 0}, where: 1) π_0 is an n-dimensional row vector whose ith component is equal to 1 if q_i = q_I, and 0 otherwise; and 2) π_F is an n-dimensional column vector whose jth component is equal to 1 if q_j ∈ F, and 0 otherwise. The language accepted by A_P with cut-point (threshold) λ, such that 0 ≤ λ < 1, is L(A_P, λ) = {ω | ω ∈ Σ* and π_0 M(ω) π_F > λ}. Probabilistic automata recognize exactly the class of weighted regular languages (WRLs) [9]. Such a class of languages properly includes ...
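To make Definition 2.1 concrete, the short sketch below computes the acceptance probability p(ω) = π_0 M(ω) π_F of a string and checks it against a cut-point λ. It is an illustration only, not code from either paper: the two-state automaton, its transition matrices, the cut-point value, and the name acceptance_probability are all made up for the example.

```python
# Illustrative sketch (hypothetical example, not from the paper): acceptance
# probability of a string under a probabilistic automaton, p(w) = pi_0 M(w) pi_F.
import numpy as np

# Made-up 2-state PA over the alphabet {"a", "b"}; each M[sym] is row-stochastic,
# i.e., M[sym][i][j] is the probability of moving from state i to state j on sym.
M = {
    "a": np.array([[0.9, 0.1],
                   [0.4, 0.6]]),
    "b": np.array([[0.2, 0.8],
                   [0.0, 1.0]]),
}
pi_0 = np.array([1.0, 0.0])  # start deterministically in state 0 (q_I)
pi_F = np.array([0.0, 1.0])  # state 1 is the only final state

def acceptance_probability(word: str) -> float:
    """p(w) = pi_0 M(a_k1) ... M(a_km) pi_F; the empty string uses M(eps) = I."""
    m = np.eye(2)
    for sym in word:
        m = m @ M[sym]
    return float(pi_0 @ m @ pi_F)

lam = 0.5  # cut-point, 0 <= lambda < 1
for w in ["", "a", "ab", "abb"]:
    p = acceptance_probability(w)
    print(f"p({w!r}) = {p:.3f}, in L(A_P, {lam}): {p > lam}")
```

With these example matrices, p("ab") = π_0 M(a) M(b) π_F; longer strings simply multiply in further transition matrices before the final projection onto the accepting states.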
Chapter 11
EURASIP JASP, 2007

Paper: F. J. Theis, P. Georgiev, and A. Cichocki. Robust sparse component analysis based on a generalized Hough transform. EURASIP Journal on Applied Signal Processing, 2007.
Reference: (Theis et al., 2007a)
Summary in section 1.4.1

161