Modeling Hydra Behavior Using Methods Founded in Behavior-Based Robotics (SAIS)
Appendix B. Recurrent neural networks

case with FFNN). Instead, the network simply consists of a set of neurons with possible connections between all neurons, as well as self-couplings. An example of an RNN is shown in Fig. B.2. For neurons connected in a network, the weighted incoming connections (denoted $x_1, \ldots, x_k$ in Fig. B.1) consist of both external inputs, $I$, and output signals from neurons in the network, $y$. The dynamics of neuron $i$, in a network consisting of $n$ neurons and $m$ input signals, are governed by Eq. B.1:

$$\tau_i \frac{dy_i}{dt} + y_i = \sigma\!\left( b_i + \sum_{j=1}^{n} w_{ij} y_j + \sum_{j=1}^{m} w^{I}_{ij} I_j \right), \quad i = 1, \ldots, n, \tag{B.1}$$

where $\tau_i$ is a time constant, $b_i$ the bias term, $y_i$ the output of neuron $i$, $w_{ij}$ the synaptic weight connecting the output of neuron $j$ to neuron $i$, $w^{I}_{ij}$ the weight connecting input signal $j$ to neuron $i$, and $I_j$ the external input from input signal $j$ to neuron $i$. Several alternative activation functions, $\sigma(\cdot)$, are common. In this project, it was taken as

$$\sigma(z) = \frac{1}{1 + e^{-cz}}, \tag{B.2}$$

which restricts the output of any neuron to the range $[0, 1]$. For numerical (computer) calculations, the model is discretized, using Euler's method, according to

$$\frac{dy_i}{dt} \approx \frac{y_i(t + \Delta t) - y_i(t)}{\Delta t}. \tag{B.3}$$

Using Eq. B.3 in Eq. B.1 gives

$$\tau_i \frac{y_i(t + \Delta t) - y_i(t)}{\Delta t} + y_i(t) = \sigma\!\left( b_i + \sum_{j=1}^{n} w_{ij} y_j(t) + \sum_{j=1}^{m} w^{I}_{ij} I_j(t) \right), \tag{B.4}$$

so the discrete model can be expressed as

$$y_i(t + \Delta t) = y_i(t) + \frac{\Delta t}{\tau_i} \left[ -y_i(t) + \sigma\!\left( b_i + \sum_{j=1}^{n} w_{ij} y_j(t) + \sum_{j=1}^{m} w^{I}_{ij} I_j(t) \right) \right]. \tag{B.5}$$

B.2 Learning algorithms

The parameters of an ANN are assigned in a process known as training, in which a learning algorithm is applied in order to optimize the network parameters, i.e. the synaptic weights, bias terms, and, in the case of RNNs, time constants. Some learning algorithms also allow the size and structure of the network to be optimized. For FFNNs, backpropagation is a commonly used learning algorithm, see e.g. [23, 67]. RNNs are, in general, more complex to train than FFNNs, but one way of accomplishing parametric and structural optimization of RNNs is to use EAs, which was also done in this project.
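As a concrete illustration, the Euler-discretized update of Eq. B.5 can be sketched in NumPy. The function name, argument order, and array shapes below are illustrative choices, not taken from the thesis:

```python
import numpy as np

def rnn_step(y, I, W, W_in, b, tau, dt, c=1.0):
    """One Euler step of the RNN dynamics (Eq. B.5).

    y     : (n,)   neuron outputs y_i(t)
    I     : (m,)   external input signals I_j(t)
    W     : (n, n) synaptic weights w_ij (output of j into neuron i)
    W_in  : (n, m) input weights w^I_ij
    b     : (n,)   bias terms b_i
    tau   : (n,)   time constants tau_i
    dt    : Euler step size
    c     : slope parameter of the logistic activation (Eq. B.2)
    """
    # Net input to each neuron: b_i + sum_j w_ij y_j + sum_j w^I_ij I_j
    z = b + W @ y + W_in @ I
    # Logistic activation, restricting outputs to (0, 1) (Eq. B.2)
    sigma = 1.0 / (1.0 + np.exp(-c * z))
    # Euler update (Eq. B.5)
    return y + (dt / tau) * (-y + sigma)
```

Note that for $\Delta t \le \tau_i$ the update is a convex combination of $y_i(t)$ and $\sigma(\cdot)$, so outputs initialized in $[0, 1]$ remain in $[0, 1]$.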

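The specific EA used in the project is described in the main text. Purely as a rough illustration of the idea, a minimal truncation-selection EA optimizing a real-valued parameter vector (e.g. the flattened RNN weights, biases, and time constants) might look as follows; every name and parameter value here is an assumption, and the fitness function would in practice score how well the resulting RNN reproduces the target behavior:

```python
import numpy as np

def evolve(fitness, n_params, pop_size=20, generations=100,
           mutation_sigma=0.1, seed=0):
    """Minimal truncation-selection EA over real-valued parameters.

    fitness : callable mapping a parameter vector (n_params,) to a
              scalar score; higher is better.
    Returns the best individual found.
    """
    rng = np.random.default_rng(seed)
    pop = rng.standard_normal((pop_size, n_params))
    for _ in range(generations):
        scores = np.array([fitness(ind) for ind in pop])
        # Truncation selection: keep the better half as parents
        parents = pop[np.argsort(scores)[-(pop_size // 2):]]
        # Offspring: Gaussian mutation of randomly chosen parents
        idx = rng.integers(0, len(parents), size=pop_size - len(parents))
        offspring = parents[idx] + mutation_sigma * rng.standard_normal(
            (len(idx), n_params))
        pop = np.vstack([parents, offspring])
    scores = np.array([fitness(ind) for ind in pop])
    return pop[np.argmax(scores)]
```

This sketch covers only parametric optimization with a fixed network size; structural optimization, as mentioned above, additionally requires a representation in which connections and neurons can be added or removed.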