
CHAPTER 4 NEURAL NETWORK BASED SYSTEM IDENTIFICATION

The computational model of a single neuron that can perform intelligent functions or tasks is shown in Figure 4.2(a). As an example, the process to produce a pulse-width output signal inside a single neuron is shown in detail in Figure 4.2(b). The processing unit, or neuron, accepts external inputs through the weighted connections W_hj, and all of these signals are summed before being fed to the activation function. External inputs such as measured data or the outputs of other neurons can be supplied to the neuron for processing. If the activation function in the neuron is a linear function, this NN model is also known as the adaptive linear neuron model (ADALINE), first developed by Widrow and Hoff [1960]. The ADALINE network is trained by minimising the squared error between the target value and the output of the model. This learning process is known as gradient descent in neural learning, or least-squares error minimisation in statistical methods [Samarasinghe, 2007].
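As a minimal sketch of this idea (not the implementation used in this thesis; the function name, learning rate, and data are illustrative assumptions), the following Python snippet trains a single linear neuron with the Widrow-Hoff LMS rule, i.e. gradient descent on the squared error between target and model output:

```python
import numpy as np

def adaline_train(X, y, lr=0.01, epochs=50):
    """Train a single linear neuron with the Widrow-Hoff (LMS) rule."""
    rng = np.random.default_rng(0)
    w = rng.normal(scale=0.1, size=X.shape[1])  # weighted connections W_hj
    b = 0.0                                     # bias term
    for _ in range(epochs):
        for x, t in zip(X, y):
            out = np.dot(w, x) + b              # weighted sum, linear activation
            err = t - out                       # target minus model output
            w += lr * err * x                   # gradient-descent step on squared error
            b += lr * err
    return w, b

# Fit the neuron to a linear target y = 2*x1 - x2 + 0.5
X = np.random.default_rng(1).uniform(-1, 1, size=(200, 2))
y = 2 * X[:, 0] - X[:, 1] + 0.5
w, b = adaline_train(X, y)
print(w, b)  # weights approach [2, -1] and bias approaches 0.5
```

With a linear target such as this, the weights converge towards the true coefficients, which is exactly the least-squares solution the text refers to.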

The activation function f_h in the neurons is used to introduce non-linearity into the network. The activation function is typically selected as a non-linear, continuous function that remains bounded within some upper and lower bounds [Norgaard, 2000, Samal, 2009]. Since the output of a non-linear function varies non-linearly with the input, the NN model can perform non-linear mapping between inputs and outputs. Hornik et al. [1989] and Tu [1996] further proved that a non-linear activation function such as the sigmoid, introduced in the neurons, makes the NN capable of approximating any non-linear function of interest to any desired degree of accuracy, given sufficiently many neurons. Hornik et al. [1989] further imply that any failure of a multilayer network to map a function must arise from an inadequate choice of weight parameters or an insufficient number of hidden nodes.
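To make the architecture behind this result concrete, here is a minimal sketch, under assumed layer sizes and with random (untrained) weights for illustration only, of the one-hidden-layer sigmoid network to which the universal-approximation result applies:

```python
import numpy as np

def sigmoid(v):
    return 1.0 / (1.0 + np.exp(-v))

def mlp_forward(x, W1, b1, W2, b2):
    """One-hidden-layer network: linear read-out over H sigmoid hidden units."""
    h = sigmoid(W1 @ x + b1)   # non-linear hidden activations f_h
    return W2 @ h + b2         # linear output layer

# With enough hidden units H and suitable weights, this map can approximate
# any continuous function on a bounded input domain (Hornik et al., 1989).
H, n_in, n_out = 16, 1, 1
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(H, n_in)), rng.normal(size=H)
W2, b2 = rng.normal(size=(n_out, H)), rng.normal(size=n_out)
print(mlp_forward(np.array([0.3]), W1, b1, W2, b2))
```

The theorem guarantees that suitable values of W1, b1, W2, b2 exist; finding them is the job of the training (weight-adjustment) procedure.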

The versatility of a NN as a universal approximator is not limited to the sigmoid function. Stinchcombe and White [1989] proved that an NN with a general class of non-linear functions can also achieve universal approximation. Furthermore, a non-linear function mapping can also be approximated by an NN with bell-shaped activation functions in the hidden neuron units [Baldi, 1990]. Different activation functions are used in neural network training, and some examples are given in Figure 4.3. The step and sign functions are activation functions typically used for binary outputs.
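As an illustration (a sketch only; the exact forms plotted in Figure 4.3 may differ in scaling or offset), the activation functions mentioned here can be written as:

```python
import numpy as np

# Common activation functions discussed in the text
step     = lambda v: np.where(v >= 0, 1.0, 0.0)   # binary output in {0, 1}
sign     = lambda v: np.where(v >= 0, 1.0, -1.0)  # binary output in {-1, 1}
sigmoid  = lambda v: 1.0 / (1.0 + np.exp(-v))     # smooth, bounded in (0, 1)
gaussian = lambda v: np.exp(-v ** 2)              # bell-shaped, bounded in (0, 1]

v = np.linspace(-3, 3, 7)
for name, f in [("step", step), ("sign", sign),
                ("sigmoid", sigmoid), ("gaussian", gaussian)]:
    print(name, np.round(f(v), 3))
```

Note that the step and sign functions are discontinuous, so they are unsuitable for gradient-based training; the sigmoid and Gaussian are the smooth, bounded choices the text associates with universal approximation.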
