Unsupervised Recursive Sequence Processing - Institute of ...

$\mathbb{R}^N$ is provided, $N$ denoting the number of neurons, which explicitly represents the contextual map activation of all neurons in the previous time step. Thus, the temporal context is represented in this model in an $N$-dimensional vector space. One can think of the context as an explicit storage of the activity profile of the whole map in the previous time step. More precisely, the distance is recursively computed by

$$d_{\mathrm{RecSOM}}((s_1, \dots, s_t), n_j) = \eta_1 \, \|s_1 - w_j\|^2 + \eta_2 \, \|C_{\mathrm{RecSOM}}(s_2, \dots, s_t) - c_j\|^2$$

where $\eta_1, \eta_2 > 0$. The vector

$$C_{\mathrm{RecSOM}}(s) = \bigl(\exp(-d_{\mathrm{RecSOM}}(s, n_1)), \dots, \exp(-d_{\mathrm{RecSOM}}(s, n_N))\bigr)$$

constitutes the context. Note that this vector is almost the vector of distances of all neurons computed in the previous time step. These distances are transformed by the exponential function to avoid an explosion of the values. As before, the above distance can be decomposed into two parts: the winner computation, as in the standard SOM, and, as in the case of RSOM and TKM, a term which assesses the context match. For RecSOM, the context match compares the current context arising while the sequence is processed, i.e. the vector of distances from the previous time step, with the expected context $c_j$ stored at neuron $j$. That is to say, RecSOM explicitly stores a context vector at each neuron and compares it to the actual context during the recursive computation. Since the entire map activation is taken into account, sequences of any given fixed length can be stored, provided enough neurons are available. Thus, the representation space for context is no longer restricted by the weight space, and its capacity scales with the number of neurons.
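To make the recursive computation concrete, the following sketch unrolls it in NumPy by processing a sequence from its oldest to its most recent entry. It is a minimal illustration, not the authors' implementation: the parameter values, the array shapes, the zero initial context, and names such as `recsom_step` are assumptions made for the example.

```python
import numpy as np

def recsom_step(s, context_prev, W, C, eta1=0.5, eta2=0.5):
    """One RecSOM step (sketch): distances of all neurons to the current
    entry s and to the previous map activation, followed by the
    exponential transform that yields the new context.

    W : (N, d) weight vectors w_j
    C : (N, N) context vectors c_j
    context_prev : (N,) map activation of the previous time step
    """
    # d_j = eta1 * ||s - w_j||^2 + eta2 * ||context_prev - c_j||^2
    d = (eta1 * np.sum((W - s) ** 2, axis=1)
         + eta2 * np.sum((C - context_prev) ** 2, axis=1))
    winner = int(np.argmin(d))   # best-matching neuron, as in the standard SOM
    context_new = np.exp(-d)     # bounded in (0, 1], avoids exploding values
    return d, winner, context_new

# Toy usage: fold the step over a random sequence, starting from an
# all-zero context (the initial context is not fixed by the formulas above).
rng = np.random.default_rng(0)
N, dim = 16, 2
W = rng.normal(size=(N, dim))
C = rng.normal(size=(N, N)) * 0.01
context = np.zeros(N)
for s in rng.normal(size=(10, dim)):
    d, winner, context = recsom_step(s, context, W, C)
```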

For RecSOM, training is done in Hebbian style for both weights and contexts. Denote by $n_{j_0}$ the winner for sequence entry $s_i$; then the weight change is

$$\Delta w_j = \epsilon \cdot h_\sigma(\mathrm{nhd}(n_{j_0}, n_j)) \cdot (s_i - w_j)$$

and the context adaptation is

$$\Delta c_j = \epsilon' \cdot h_\sigma(\mathrm{nhd}(n_{j_0}, n_j)) \cdot (C_{\mathrm{RecSOM}}(s_{i+1}, \dots, s_t) - c_j)$$

The latter update rule ensures that the context vectors of the winner neuron and its neighborhood become more similar to the current context vector $C_{\mathrm{RecSOM}}$, which is computed while the sequence is processed. The learning rates are $\epsilon, \epsilon' \in (0, 1)$. As demonstrated in [41], this richer representation of context allows a better quantization of time series data. In [41], various quantitative measures for evaluating trained recursive maps are proposed, such as the temporal quantization error and the specialization of neurons. With respect to these measures, RecSOM turns out to be clearly superior to TKM and RSOM in the experiments provided in [41].
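The updates above can be sketched in the same style, reusing `recsom_step` from the earlier example. A Gaussian neighborhood $h_\sigma$ over a two-dimensional lattice of neuron coordinates is assumed here, since the text does not fix a particular neighborhood function; `grid`, `sigma`, and the learning-rate values are likewise illustrative.

```python
def recsom_train_step(s, context_prev, W, C, grid,
                      sigma=1.0, eps=0.1, eps_prime=0.1):
    """One Hebbian training step for RecSOM (sketch).

    grid : (N, 2) lattice coordinates of the neurons, used to evaluate
    nhd(n_j0, n_j). W and C are updated in place; the new context is returned.
    """
    d, winner, context_new = recsom_step(s, context_prev, W, C)
    # Gaussian neighborhood h_sigma around the winner (an assumed choice)
    sq = np.sum((grid - grid[winner]) ** 2, axis=1)
    h = np.exp(-sq / (2.0 * sigma ** 2))
    # Weights move toward the current entry s_i ...
    W += eps * h[:, None] * (s - W)
    # ... and contexts move toward the activation of the previous time
    # step, i.e. C_RecSOM(s_{i+1}, ..., s_t)
    C += eps_prime * h[:, None] * (context_prev - C)
    return context_new
```

As in standard SOM training, one would typically fold this step over each training sequence with a freshly initialized context while gradually shrinking $\sigma$ and the learning rates.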

