
4.2 THE ARTIFICIAL NEURAL NETWORKS


Figure 4.4 A fully connected, feed-forward multi-layered perceptron (MLP) with m inputs, h hidden neurons, and n outputs.

more hidden layers in regression problems [Tu, 1996]. Moreover, Funahashi [1989] and Cybenko [1989] have proven that any non-linear functional mapping can be approximated to arbitrary accuracy by a single hidden layer with a sigmoid activation function. The MLP architecture with one hidden layer is shown in Figure 4.4. For the single-hidden-layer case, the outputs of the hidden and output layers of an MLP network are given as follows:

$$v_h(t) = g_h\left(\sum_{j=1}^{m} W1_{hj}\, X_j(t) + b_h\right); \quad \text{for } h = 1, 2, 3, \dots, H \tag{4.1}$$

$$\hat{y}_i(t) = g_i\left(\sum_{h=1}^{H} W2_{ih}\, v_h(t) + B_i\right); \quad \text{for } i = 1, 2, 3, \dots, n \tag{4.2}$$

where $W1_{hj}$ is the weight matrix between the input layer and the hidden layer, and $W2_{ih}$ is the weight matrix between the hidden layer and the output layer. The functions $g_h(\cdot)$ and $g_i(\cdot)$ are the non-linear activation functions for the neurons in the hidden and output layers respectively. The symbol $H$ denotes the number of neurons in the hidden layer, while $b_h$ and $B_i$ are the bias elements for the hidden layer and the output layer. The number of inputs and outputs of the neural network are represented by $m$ and $n$ respectively. In Equations (4.1) and (4.2), the weight connections and biases in the network structure are included in
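The forward pass defined by Equations (4.1) and (4.2) can be sketched in a few lines of NumPy. This is an illustrative sketch, not the author's implementation: the variable names mirror the text ($W1$, $W2$, $b$, $B$), and the choice of the sigmoid for both activation functions is an assumption consistent with the Cybenko [1989] and Funahashi [1989] results cited above.

```python
import numpy as np

def sigmoid(z):
    # Common sigmoid activation; assumed here for both g_h and g_i.
    return 1.0 / (1.0 + np.exp(-z))

def mlp_forward(X, W1, b, W2, B):
    """Single-hidden-layer MLP forward pass.

    X: (m,) input vector; W1: (H, m) input-to-hidden weights; b: (H,) hidden
    biases; W2: (n, H) hidden-to-output weights; B: (n,) output biases.
    """
    v = sigmoid(W1 @ X + b)       # Eq. (4.1): hidden-layer outputs v_h(t)
    y_hat = sigmoid(W2 @ v + B)   # Eq. (4.2): network outputs y_hat_i(t)
    return y_hat

# Example with m = 3 inputs, H = 4 hidden neurons, n = 2 outputs
rng = np.random.default_rng(0)
X = rng.standard_normal(3)
y = mlp_forward(X,
                rng.standard_normal((4, 3)), rng.standard_normal(4),
                rng.standard_normal((2, 4)), rng.standard_normal(2))
print(y.shape)  # (2,)
```

Because the sigmoid maps any real input into (0, 1), each output $\hat{y}_i$ here lies in that interval; in practice the output-layer activation is often chosen linear for regression targets outside this range.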
