
and
$$\Delta c_j = \epsilon' \cdot h_\sigma(\mathrm{nhd}(n_{j_0}, n_j)) \cdot \bigl(C_{\mathrm{SOMSD}}(s_{i+1}, \ldots, s_t) - c_j\bigr)$$
with learning rates $\epsilon, \epsilon' \in (0, 1)$, where $n_{j_0}$ denotes the winner for sequence entry $i$.
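For concreteness, here is a minimal sketch of one SOMSD adaptation step in Python; the function name `somsd_step`, the Gaussian neighborhood $h_\sigma$, and the accompanying weight update (not shown in the excerpt above) are illustrative assumptions rather than the paper's exact formulation.

```python
import numpy as np

def somsd_step(weights, contexts, grid, s_i, winner_context, j0,
               eps=0.1, eps_prime=0.1, sigma=1.0):
    """One SOMSD adaptation step (sketch).

    weights        : (N, d) weight vectors w_j
    contexts       : (N, 2) context vectors c_j (lattice locations)
    grid           : (N, 2) neuron positions on the lattice
    s_i            : current sequence entry, shape (d,)
    winner_context : location code C_SOMSD of the remaining sequence
    j0             : index of the winner n_{j0} for entry i
    """
    # Gaussian neighborhood h_sigma evaluated on lattice distances nhd(n_j0, n_j)
    lattice_dist = np.linalg.norm(grid - grid[j0], axis=1)
    h = np.exp(-lattice_dist**2 / (2.0 * sigma**2))
    # Move weights toward the current entry (assumed standard SOM-style update)
    weights += eps * h[:, None] * (s_i - weights)
    # Delta c_j = eps' * h_sigma(nhd(n_j0, n_j)) * (C_SOMSD(...) - c_j)
    contexts += eps_prime * h[:, None] * (winner_context - contexts)
    return weights, contexts
```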

As demonstrated in [11], a generalization of this approach to tree structures can reliably model structured objects and their respective topological ordering.

We would like to point out that, although these approaches seem different, they constitute instances of the same recursive computation scheme. As proved in [14], the underlying recursive update dynamics comply with
$$d\bigl((s_1, \ldots, s_t), n_j\bigr) = \eta_1 \, \|s_1 - w_j\|^2 + \eta_2 \, \|C(s_2, \ldots, s_t) - c_j\|^2$$
in all cases; the model-specific similarity measures for weights and contexts are denoted by the generic $\|\cdot\|$ expression. The approaches differ with respect to the concrete choice of the context $C$: TKM and RSOM refer only to the neuron itself and are therefore restricted to local fractal codes within the weight space; RecSOM uses the whole map activation, which is powerful but also expensive and subject to random neuron activations; SOMSD relies on compressed information, namely the location of the winner. Note that standard supervised recurrent networks can also be put into this generic dynamic framework by choosing the context as the output of the sigmoidal transfer function [14]. In addition, alternative compression schemes, such as representing the context by the winner content, are possible [37].
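The shared dynamics are easy to state in code. The sketch below uses plain squared Euclidean norms for both terms and takes the model-specific context $C$ as a callable; all names and the mixing defaults $\eta_1 = \eta_2 = 0.5$ are illustrative assumptions.

```python
import numpy as np

def recursive_distance(seq, w_j, c_j, context_of, eta1=0.5, eta2=0.5):
    """Generic recursive distance d((s_1, ..., s_t), n_j) (sketch).

    seq        : sequence entries (s_1, ..., s_t) as arrays
    w_j, c_j   : weight and context vector of neuron n_j
    context_of : model-specific context function C applied to the
                 remaining sequence (s_2, ..., s_t)
    """
    s_1, suffix = seq[0], seq[1:]
    return (eta1 * np.sum((s_1 - w_j) ** 2)
            + eta2 * np.sum((context_of(suffix) - c_j) ** 2))
```

For SOMSD, for instance, `context_of` would return the lattice location of the winner computed for the remaining sequence, so that the distance unfolds recursively along the sequence.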

To summarize this section, essentially four different models have been proposed for processing temporal information. The models are characterized by the way in which context is taken into account within the map; a schematic comparison of the context choices follows the list. The models are:

Standard SOM: no context representation; standard distance computation; standard competitive learning.

TKM and RSOM: no explicit context representation; the distance computation recursively refers to the distance of the previous time step; competitive learning for the weight, whereby (for RSOM) the averaged signal is used.

RecSOM: explicit context representation as the N-dimensional activity profile of the previous time step; the distance computation is a mixture of the match of the current entry and the match between the context stored at the neuron and the (recursively computed) current context given by the processed time series; competitive learning adapts the weight and context vectors.

SOMSD: explicit context representation as a low-dimensional vector, the location of the previously winning neuron in the map; the distance is computed recursively in the same way as for RecSOM, whereby a distance measure for locations in the map has to be provided; so far, the model is only available for standard rectangular Euclidean lattices; competitive learning adapts the weight and context vectors, whereby the context vectors are embedded in the Euclidean space.
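To make the distinction concrete, the following dispatch sketches what the context $C$ consists of in each model; the dictionary fields are purely illustrative assumptions.

```python
def context(model, prev):
    """Context C consulted by the recursive distance (sketch).

    `prev` bundles the results of the previous time step; which field
    is used is exactly what distinguishes the models.
    """
    if model == "SOM":
        return None                          # no context at all
    if model in ("TKM", "RSOM"):
        # implicit: the neuron's own previous distance is folded into
        # the recursion, yielding local fractal codes in weight space
        return prev["own_distance"]
    if model == "RecSOM":
        # explicit: full N-dimensional activity profile of the map
        return prev["activations"]
    if model == "SOMSD":
        # explicit but compressed: lattice location of the last winner
        return prev["winner_location"]
    raise ValueError(f"unknown model: {model}")
```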

