
[Figure 5.10 plot: RMSE (%) on the vertical axis (6 to 28%) versus lag space on the horizontal axis (1−1 (4 regs.), 2−1 (6 regs.), 2−2 (8 regs.), 3−1 (8 regs.), 3−2 (10 regs.), 3−3 (12 regs.)), with one curve for each hidden layer size h = 1 to h = 8.]

Figure 5.10 The percentage Root Mean Square Error (RMSE) of the HMLP network trained with different network structures and numbers of hidden neurons. The neural network training was carried out using the off-line Levenberg-Marquardt (LM) algorithm.
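For reference, the off-line Levenberg-Marquardt update named in the caption takes, in its standard textbook form (given here for orientation only; the thesis' own notation may differ),

$$\Delta \mathbf{w} = -\left(\mathbf{J}^{\top}\mathbf{J} + \mu\,\mathbf{I}\right)^{-1}\mathbf{J}^{\top}\mathbf{e},$$

where $\mathbf{J}$ is the Jacobian of the output errors $\mathbf{e}$ with respect to the network weights $\mathbf{w}$, and $\mu$ is a damping factor adapted during training: a large $\mu$ gives a gradient-descent-like step, while a small $\mu$ approaches the Gauss-Newton step.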

Table 5.4
The HMLP neural network model parameters.

HMLP Network Specifications for Attitude Dynamics
Number of past outputs                 2
Number of past inputs                  1
Number of neurons in hidden layer      3
Activation function at hidden layer    Tanh
Activation function at output layer    Linear
Number of regressors                   6
Total number of weights                43
Weight decay                           0.0001
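As an illustrative sketch only (the helper name and the two-output, two-input channel assignment are assumptions, not taken from the thesis), the six regressors in Table 5.4 could be assembled from two past outputs and one past input per channel as follows:

import numpy as np

# Hypothetical helper, not from the thesis: build the 6-element regressor
# vector of Table 5.4, assuming two output channels (e.g. roll and pitch)
# and two input channels, with 2 past outputs and 1 past input per channel.
def regressor_vector(y, u, t):
    """y: (T, 2) array of measured outputs; u: (T, 2) array of inputs."""
    return np.concatenate([
        y[t - 1], y[t - 2],   # 2 past outputs per channel -> 4 regressors
        u[t - 1],             # 1 past input per channel   -> 2 regressors
    ])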

From the RMSE plots of the optimum HMLP structure, the network is found to be capable of producing an RMSE as good as that of the standard MLP network with a much smaller number of hidden neurons and a simpler network structure. This suggests that the additional linear connections from the input layer to the output layer in the HMLP network help reduce the complexity of the MLP network by incorporating linear weight connections instead of adding more neurons in the hidden layer. Furthermore, the linear weight connections across layers are easier to train than signals that pass through non-linear neurons [Wilamowski, 2009]. Thus, the reduced network complexity of the HMLP network can lead to a faster learning rate, as demonstrated in Section 5.7.
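To make the structural point concrete, the following is a minimal NumPy sketch of an HMLP forward pass, assuming a single hidden layer with the Table 5.4 dimensions (6 regressors, 3 tanh neurons) and an assumed two-dimensional output; all names and initial values are illustrative, not the author's implementation:

import numpy as np

rng = np.random.default_rng(0)
n_in, n_hid, n_out = 6, 3, 2      # regressors and hidden size per Table 5.4;
                                  # the output width is an assumption

W1 = rng.normal(size=(n_hid, n_in)) * 0.1   # input -> hidden weights
b1 = np.zeros(n_hid)                        # hidden biases
W2 = rng.normal(size=(n_out, n_hid)) * 0.1  # hidden -> output weights
b2 = np.zeros(n_out)                        # output biases
Wl = rng.normal(size=(n_out, n_in)) * 0.1   # extra linear input -> output
                                            # connections that distinguish
                                            # the HMLP from a plain MLP

def hmlp(x):
    h = np.tanh(W1 @ x + b1)       # non-linear hidden layer
    return W2 @ h + b2 + Wl @ x    # MLP output plus linear skip term

y_hat = hmlp(np.ones(n_in))        # one forward pass on a dummy regressor

The Wl term lets the largely linear part of the dynamics be absorbed by cheap linear weights, which is why fewer tanh neurons suffice.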
