
2.3 NEURAL NETWORK BASED SYSTEM IDENTIFICATION

considered.

An NN-based modelling approach that uses tapped delay lines to represent a dynamical system usually requires a priori knowledge about the model order of the system [Suresh et al., 2003]. Assuming too low a model order generally degrades the prediction performance of the neural network model, so a more in-depth analysis of the choice of network structure needs to be performed in order to improve the prediction quality of the NN model. Model validation therefore plays an important part in the system identification steps. The validation methods introduced in this work can be used to ensure that the NN model fits the observations well, and to aid the neural network modeller in selecting an optimised or near-optimised network structure for prediction with acceptable accuracy. Furthermore, the massive number of weights in the standard NNARX model can result in increased NN training time and limit the NNARX model in real-time applications. Several alternative architectures will be tested in this study in order to improve the training time of the NNARX model.
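To make the role of the tapped delay lines and the a priori model orders concrete, the following sketch builds the NNARX regression vectors from measured input/output data. The function name `nnarx_regressors` and the orders `na` (past outputs) and `nb` (past inputs) are illustrative choices, not notation from this thesis.

```python
import numpy as np

def nnarx_regressors(y, u, na, nb):
    """Build NNARX regression vectors from tapped delay lines.

    Each row pairs the regressor [y(t-1)..y(t-na), u(t-1)..u(t-nb)]
    with the target y(t). The model orders na and nb must be chosen
    a priori; choosing them too low discards dynamics and degrades
    prediction performance.
    """
    n0 = max(na, nb)
    X, T = [], []
    for t in range(n0, len(y)):
        past_y = y[t - na:t][::-1]   # y(t-1) ... y(t-na)
        past_u = u[t - nb:t][::-1]   # u(t-1) ... u(t-nb)
        X.append(np.concatenate([past_y, past_u]))
        T.append(y[t])
    return np.array(X), np.array(T)
```

The resulting matrix `X` is what a feedforward NN maps to the one-step-ahead prediction of `y`; increasing `na` or `nb` widens the input layer and the weight count accordingly.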

2.3.2 NN Training Algorithms

There are numerous training algorithms available in the literature for NN training. Gradient-based methods are commonly used techniques for solving the minimisation problem in NN training. They are used to minimise the error cost between the measurement data and the predicted outputs of the NN model. Figure 2.6 shows the general principle of the NN training or learning process. The minimisation of the error cost function is carried out iteratively over a given data set to obtain a set of optimum NN weights; a set of properly trained NN weights then gives the best possible fit to the measurement data. Several gradient-based methods commonly used for NN training are the back-propagation technique (steepest descent method), the conjugate gradient method, Newton's method, the Gauss-Newton (GN) method, the Broyden-Fletcher-Goldfarb-Shanno (BFGS) method, the Neuron by Neuron (NBN) algorithm and the Levenberg-Marquardt (LM) optimisation algorithm [Norgaard, 2000, Wilamowski et al., 2011, Wilamowski, 2011a, Haykin, 2009]. These NN training methods are considered local minimisation techniques.
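The iterative minimisation principle described above can be sketched with the simplest of the listed methods, back-propagation (steepest descent) on a one-hidden-layer network. The toy data, network size, learning rate and epoch count below are illustrative assumptions, not values from this work.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data set: fit y = sin(x) (stand-in for measurement data).
x = np.linspace(-np.pi, np.pi, 64).reshape(-1, 1)
t = np.sin(x)

H = 8                                     # hidden units (assumed size)
W1 = rng.normal(0, 0.5, (1, H)); b1 = np.zeros(H)
W2 = rng.normal(0, 0.5, (H, 1)); b2 = np.zeros(1)
lr = 0.05                                 # steepest-descent step size

def forward(x):
    h = np.tanh(x @ W1 + b1)              # hidden-layer activations
    return h, h @ W2 + b2                 # network prediction

_, y0 = forward(x)
mse0 = float(((y0 - t) ** 2).mean())      # error cost before training

for epoch in range(2000):
    h, yhat = forward(x)
    e = yhat - t                          # prediction error
    # Back-propagate the gradient of the mean-squared error cost
    # layer by layer, then step against the gradient.
    gW2 = h.T @ e / len(x); gb2 = e.mean(0)
    dh = (e @ W2.T) * (1 - h ** 2)        # tanh derivative
    gW1 = x.T @ dh / len(x); gb1 = dh.mean(0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

mse = float(((forward(x)[1] - t) ** 2).mean())
```

Each epoch is one iteration of the minimisation loop in Figure 2.6: evaluate the error cost on the data set, compute its gradient with respect to the weights, and update the weights. The second-order methods listed above (GN, BFGS, LM) replace the plain gradient step with curvature-informed updates but follow the same loop, and, like steepest descent, converge only to a local minimum.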
