
POLITECHNIKA WARSZAWSKA


3. Basics of Artificial Neural Networks (ANN)

where the elements w_j, called synapse weights, can be modified during the learning process.

The output of the neuron unit is defined as follows:

y = F(e) (3.3)

Note that w_0 is an adjustable bias and F is the activation function (also called the transfer function). Thus, the output y is obtained by summing the weighted inputs and passing the result through a nonlinear (or linear) activation function F.
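The computation above can be sketched in a few lines of Python. The names `neuron_output`, `x`, `w`, and `b` are illustrative, not from the text; the default activation is the hyperbolic tangent, one of the functions discussed below.

```python
import numpy as np

def neuron_output(x, w, b, activation=np.tanh):
    """Single neuron unit: weighted sum of inputs plus bias, passed through F.

    x is the input vector, w holds the synapse weights w_j,
    b is the adjustable bias w_0, and activation is F.
    """
    # e = w_0 + sum_j w_j * x_j  -- the weighted sum of the inputs
    e = np.dot(w, x) + b
    # y = F(e) -- pass the sum through the activation (transfer) function
    return activation(e)
```

With a linear (identity) activation the neuron reduces to a plain weighted sum, which is the special case treated in Section 3.2.1.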

The activation function F maps the weighted sum e from a (possibly) infinite domain to a prespecified range. Although the number of possible F functions is infinite, five types are regularly applied in the majority of ANNs: linear, step, bipolar, sigmoid, and hyperbolic tangent. With the exception of the linear function, all of these introduce a nonlinearity into the network by bounding the output within a fixed range. In the next subsection some examples of commonly used activation functions are briefly presented.

3.2.1. Activation functions

The linear F function (Fig. 3.2) produces a linearly modulated output from the input e, as described by the equation

F(e) = ξe (3.4)

Fig. 3.2. Linear activation function (F(e) plotted against e over the range [-1, 1])
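The five activation functions named above can be sketched as follows. These are the standard textbook forms, assumed here since the text only defines the linear case explicitly; the function names are illustrative.

```python
import numpy as np

def linear(e, xi=1.0):
    # F(e) = xi * e  -- Eq. (3.4); unbounded, the only non-saturating case
    return xi * e

def step(e):
    # Unit step: 0 for e < 0, 1 otherwise; output bounded to {0, 1}
    return np.where(e < 0, 0.0, 1.0)

def bipolar(e):
    # Bipolar (sign) function: output bounded to {-1, 1}
    return np.where(e < 0, -1.0, 1.0)

def sigmoid(e):
    # Logistic sigmoid: smooth, output bounded to (0, 1)
    return 1.0 / (1.0 + np.exp(-e))

def tanh(e):
    # Hyperbolic tangent: smooth, output bounded to (-1, 1)
    return np.tanh(e)
```

Apart from the linear function, each of these bounds the neuron output within a fixed range, which is the nonlinearity the network relies on.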

