
4.2 THE ARTIFICIAL NEURAL NETWORKS

connections from context units to hidden units. Figure 4.6(c) shows the architecture of the modified Elman network with reduced weight connections. Note that the connections from context units to hidden units are made as one-to-one connections instead of multiple connections from a single context unit to multiple hidden units as in Figure 4.6(b). For the single hidden layer case, the outputs of the hidden layer, context units and output layer of a modified Elman network are given as follows:

$$v_h(t) = g_h\left( \sum_{j=1}^{m} W^1_{hj} X_j(t) + B^1_h + \sum_{k=1}^{H} W^3_k x_k(t) \right) \quad \text{for } h = 1, 2, 3, \cdots, H \tag{4.8}$$

$$x_k(t) = v_k(t-1) + \alpha x_k(t-1) \quad \text{for } k = 1, 2, 3, \cdots, H \tag{4.9}$$

$$\hat{y}_i(t) = g_i\left( \sum_{h=1}^{H} W^2_{ih} v_h(t) + B^2_i \right) \quad \text{for } i = 1, 2, 3, \cdots, n \tag{4.10}$$

As in the MLP and HMLP network architectures, the variables $m$, $n$ and $H$ denote the number of inputs, outputs and hidden nodes of the Elman network respectively, $B^1_h$ and $B^2_i$ are the bias elements of the hidden and output layers, and $X_j(t)$ denotes the input data fed into the Elman network. The functions $g_h$ and $g_i$ are the activation functions used in the hidden and output layers, as in the MLP architecture. The weight connections from the input layer to the hidden layer are given by the matrix $W^1_{hj}$, and the matrix $W^2_{ih}$ connects the hidden layer to the output layer. The weight connections from the context units to the hidden neurons are represented by the vector $W^3_k$. The self-connection $\alpha$ of the context units is set to the same value for all context units and is typically selected in the range $0 \le \alpha \le 1$; a higher value of $\alpha$ gives the network a greater capability to trace the gradient further into the past [Pham and Liu, 1993].
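
To make the forward computation concrete, the following is a minimal NumPy sketch of one time step of the modified Elman network described by Equations (4.8)–(4.10). The function name elman_forward, the choice of tanh for $g_h$ and a linear $g_i$, and the sample dimensions are illustrative assumptions, not taken from the thesis; since the context-to-hidden connections are one to one, the $W^3$ term is applied elementwise.

```python
import numpy as np

def elman_forward(X, x_prev, v_prev, W1, W2, W3, B1, B2, alpha):
    """One time step of the modified Elman network (illustrative sketch)."""
    # Eq. (4.9): context units copy the previous hidden outputs and decay
    # their own previous state through the self-connection alpha.
    x = v_prev + alpha * x_prev
    # Eq. (4.8): hidden outputs; the W3 term is elementwise because the
    # context-to-hidden connections are one to one (Figure 4.6(c)).
    # g_h is assumed here to be tanh.
    v = np.tanh(W1 @ X + B1 + W3 * x)
    # Eq. (4.10): network outputs, with g_i assumed linear.
    y_hat = W2 @ v + B2
    return y_hat, v, x

# Illustrative dimensions: m inputs, H hidden/context units, n outputs.
m, H, n = 2, 5, 1
rng = np.random.default_rng(0)
W1 = rng.normal(size=(H, m))   # W1_hj: input-to-hidden weights
W2 = rng.normal(size=(n, H))   # W2_ih: hidden-to-output weights
W3 = rng.normal(size=H)        # W3_k : one-to-one context-to-hidden weights
B1, B2 = np.zeros(H), np.zeros(n)
x, v = np.zeros(H), np.zeros(H)

# Drive the network over a short random input sequence.
for t in range(10):
    y_hat, v, x = elman_forward(rng.normal(size=m), x, v,
                                W1, W2, W3, B1, B2, alpha=0.5)
```

In this sketch a larger alpha makes the context states decay more slowly, which is the mechanism behind the network's ability to retain information from further in the past.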
