POLITECHNIKA WARSZAWSKA

3. Basics of Artificial Neural Network (ANN)

square difference between the desired output and the actual output of the feedforward ANN. In order to minimize this error function, the backpropagation algorithm uses a gradient search technique.
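As a rough illustration of the gradient search idea (a sketch, not the text's implementation): for a single weight w and a squared-error function Q(w) = ½(z − wx)², the weight is repeatedly moved against the gradient dQ/dw. The names w, x, z and the learning rate eta are assumptions chosen for the example.

```python
# Illustrative gradient-descent step for Q(w) = 1/2 * (z - w*x)**2.
# All names (w, x, z, eta) are assumed for this sketch.

def gradient_step(w, x, z, eta=0.1):
    """Move the weight w one step against the gradient of Q."""
    y = w * x                # actual neuron output
    grad = -(z - y) * x      # dQ/dw for Q = 1/2 * (z - y)^2 with y = w*x
    return w - eta * grad    # update in the direction of decreasing Q

w = 0.0
for _ in range(50):
    w = gradient_step(w, x=2.0, z=4.0)   # target output 4 for input 2, so w → 2
```

Iterating the step drives w toward the minimizer of Q (here w = 2), which is the behaviour the gradient search technique relies on.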

3.4.2.1. The Widrow-Hoff (standard delta) learning rule

Learning rule for one linear neuron

Let us consider the simplest case of an ANN: a network consisting of a single linear neuron with N inputs. We will study the supervised learning process of this network, so it is convenient to introduce the so-called teaching sequence, defined as follows:

$$T = \{\{X^{(1)}, z^{(1)}\},\ \{X^{(2)}, z^{(2)}\},\ \ldots,\ \{X^{(P)}, z^{(P)}\}\} \qquad (3.11)$$

where each element $\{X^{(j)}, z^{(j)}\}$ consists of the input vector $X^{(j)}$ presented in the j-th step of the learning process and the corresponding desired output signal $z^{(j)}$.
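A teaching sequence of this form can be sketched as a plain list of P input–output pairs; the concrete numbers below are illustrative and not taken from the text.

```python
# A minimal sketch of the teaching sequence T from Eq. (3.11):
# a list of P pairs (X, z), where X is an input vector and z is the
# desired output. The values are made up for illustration.

T = [
    ([0.0, 1.0], 0.5),   # {X^(1), z^(1)}
    ([1.0, 0.0], 1.0),   # {X^(2), z^(2)}
    ([1.0, 1.0], 1.5),   # {X^(P), z^(P)}, here P = 3
]
P = len(T)
```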

In order to show the learning algorithm, we define the error function as:

$$Q = \frac{1}{2}\sum_{j=1}^{P}\left(z^{(j)} - y^{(j)}\right)^{2} \qquad (3.12)$$
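The error function (3.12) can be computed directly for a single linear neuron, whose output is the weighted sum of its inputs. The following sketch assumes a weight vector w and a teaching sequence T of (X, z) pairs; all concrete values are illustrative.

```python
# Sketch of the error function Q from Eq. (3.12) for one linear neuron.
# Names (w, T) and all numeric values are assumptions for this example.

def neuron_output(w, X):
    """Linear neuron: y = sum_i w_i * x_i."""
    return sum(wi * xi for wi, xi in zip(w, X))

def error_Q(w, T):
    """Q = 1/2 * sum over j of (z^(j) - y^(j))**2, per Eq. (3.12)."""
    return 0.5 * sum((z - neuron_output(w, X)) ** 2 for X, z in T)

w = [0.5, 1.0]
T = [([1.0, 0.0], 0.5), ([0.0, 1.0], 1.0)]   # this w fits both pairs exactly
print(error_Q(w, T))   # → 0.0
```

Since the chosen weights reproduce every desired output exactly, Q vanishes; any mismatch between z and y contributes a positive term to the sum.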

We can rewrite this equation in the following form:

$$Q = \sum_{j=1}^{P} Q^{(j)} \qquad (3.13)$$

where:
