

functions of the processing units are differentiable, usually an S-type (sigmoid) function $f(x)$, that is

$$f(x) = \frac{1}{1 + e^{-x}} \qquad (1)$$
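For illustration only (this sketch is not part of the original paper, and the function names are our own), Eq. (1) and its derivative $f'(x) = f(x)\,(1 - f(x))$, which is needed later when computing $f'(net_k)$ in Eq. (8), can be written as:

```python
import numpy as np

def sigmoid(x):
    # Eq. (1): f(x) = 1 / (1 + e^(-x))
    return 1.0 / (1.0 + np.exp(-x))

def sigmoid_prime(x):
    # Derivative of the sigmoid: f'(x) = f(x) * (1 - f(x)),
    # used when computing delta_k in Eq. (8).
    s = sigmoid(x)
    return s * (1.0 - s)
```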

The learning process of a BP neural network includes forward propagation and error back-propagation. Given a set of input patterns, the BP network learns for every input pattern according to the following method. The input pattern is transferred from the input layer to the hidden-layer units, which process it; the new output pattern is then transferred to the output layer. This is forward propagation. If the output pattern is not the expected one, the error signals return along the original route, and the connection weights of the neurons in every layer are corrected so as to minimize the error signals. This is error back-propagation. Forward propagation and back-propagation are repeated until the expected output pattern is obtained.

The learning process of a BP network begins from a set of random weights and thresholds; any selected sample can then be input. The output is computed by the forward pass, and at first its error is usually large, so new weights and thresholds must be computed again by back-propagation. This process is repeated for all of the samples, again and again, until the appointed accuracy is reached. During network operation, the system error and the single-pattern error can be tracked. If the network learns successfully, the system error decreases as the number of iterations increases, and the network finally converges to a set of steady weights and thresholds.
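A minimal sketch of this training loop for a three-layer network follows. It is not the paper's code: the sample patterns, layer sizes, gain factor, and stopping tolerance are all illustrative assumptions, chosen only to make the sketch runnable.

```python
import numpy as np

# Illustrative 3-layer BP training loop (a sketch, not the paper's code).
rng = np.random.default_rng(0)
X = rng.random((4, 3))        # assumed input patterns: 4 samples, 3 inputs
T = rng.random((4, 3))        # assumed expected output patterns
W1 = rng.normal(size=(3, 4))  # input-to-hidden weights, random start
W2 = rng.normal(size=(4, 3))  # hidden-to-output weights, random start
b1, b2 = np.zeros(4), np.zeros(3)   # thresholds (theta)
eta = 0.5                     # gain factor

for it in range(10000):
    # Forward propagation, Eq. (2): O_j = f(net_j - theta_j)
    H = 1.0 / (1.0 + np.exp(-(X @ W1 - b1)))   # hidden-layer outputs
    Y = 1.0 / (1.0 + np.exp(-(H @ W2 - b2)))   # output-layer outputs
    e = 0.5 * np.sum((T - Y) ** 2)             # system error, Eq. (3)
    if e < 1e-3:                               # appointed accuracy reached
        break
    # Error back-propagation, Eq. (8): delta_k = -(t_k - y_k) * f'(net_k)
    delta_out = -(T - Y) * Y * (1.0 - Y)
    delta_hid = (delta_out @ W2.T) * H * (1.0 - H)
    W2 -= eta * H.T @ delta_out                # Eqs. (4)-(5)
    W1 -= eta * X.T @ delta_hid
    b2 += eta * delta_out.sum(axis=0)          # since O = f(net - theta),
    b1 += eta * delta_hid.sum(axis=0)          # de/d theta = -delta
```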

Figure 1. The BP network model structure with three layers (input nodes $x_1$, $x_2$, $x_3$; output nodes $y_1$, $y_2$, $y_3$).

B. The Mathematical Principle of the Back-Propagation Network Model

The propagation formulas for BP network learning are used to adjust the weights and thresholds. In fact, the network learning process is one in which the weights and thresholds of the network connections are revised repeatedly, according to the propagation formulas, in the direction of least error. The following symbol conventions are used:

$O_i$: output of node $i$;

$net_j$: input of node $j$;

$w_{ij}$: connection weight from node $i$ to node $j$;

$\theta_j$: threshold of node $j$;

$y_k$: actual output of node $k$ in the output layer;

$t_k$: expected output of node $k$ in the output layer.

Obviously, for hidden node $j$:

$$net_j = \sum_i w_{ij} O_i, \qquad O_j = f(net_j - \theta_j) \qquad (2)$$
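A minimal sketch of Eq. (2) for a single hidden node, reusing the `sigmoid` from the earlier sketch (the names and data layout are illustrative assumptions):

```python
def hidden_output(j, O_prev, w, theta):
    # Eq. (2): net_j = sum_i w_ij * O_i, then O_j = f(net_j - theta_j)
    net_j = sum(w[i][j] * O_i for i, O_i in enumerate(O_prev))
    return sigmoid(net_j - theta[j])
```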

In the learning process of the BP algorithm, the error over all output nodes can be computed according to the following formula:

$$e = \frac{1}{2} \sum_k (t_k - y_k)^2 \qquad (3)$$
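Eq. (3) is a plain half-sum of squared differences; a sketch (names illustrative):

```python
def pattern_error(t, y):
    # Eq. (3): e = 1/2 * sum over k of (t_k - y_k)^2
    return 0.5 * sum((tk - yk) ** 2 for tk, yk in zip(t, y))

pattern_error([1.0, 0.0], [0.9, 0.2])   # (0.01 + 0.04) / 2 = 0.025
```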

The connection weights are corrected according to the following formula:

$$w_{ij}(t+1) = w_{ij}(t) + \Delta w_{ij} \qquad (4)$$

In the formula, $w_{ij}(t)$ and $w_{ij}(t+1)$ are the connection weights from node $i$ to node $j$ at times $t$ and $t+1$, respectively, and $\Delta w_{ij}$ is the variation of the connection weight.

In order to improve the connection weights in the direction of the gradient of the error $e$, $\Delta w_{ij}$ can be computed as:

$$\Delta w_{ij} = -\eta \frac{\partial e}{\partial w_{ij}} \qquad (5)$$

In the formula, $\eta$ is the gain factor (learning rate).
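Taken together, Eqs. (4) and (5) are one gradient-descent step per weight. A sketch, assuming the partial derivative $\partial e / \partial w_{ij}$ has already been computed (as derived next):

```python
def update_weight(w_ij, de_dw_ij, eta):
    # Eq. (5): Delta w_ij = -eta * de/dw_ij
    # Eq. (4): w_ij(t+1) = w_ij(t) + Delta w_ij
    return w_ij + (-eta * de_dw_ij)
```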

For a weight $w_{jk}$ into output node $k$, $\dfrac{\partial e}{\partial w_{jk}}$ can be computed by the chain rule:

$$\frac{\partial e}{\partial w_{jk}} = \frac{\partial e}{\partial net_k} \frac{\partial net_k}{\partial w_{jk}}, \qquad \frac{\partial net_k}{\partial w_{jk}} = \frac{\partial}{\partial w_{jk}} \sum_j w_{jk} O_j = O_j \qquad (6)$$

Let $\delta_k = \dfrac{\partial e}{\partial net_k}$. Then

$$\Delta w_{jk} = -\eta \frac{\partial e}{\partial w_{jk}} = -\eta \, \delta_k O_j$$
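In code form this delta rule is one line; a sketch under the same illustrative naming as above:

```python
def delta_rule_update(w_jk, delta_k, O_j, eta):
    # Delta w_jk = -eta * delta_k * O_j  (Eq. (5) combined with Eq. (6))
    return w_jk + (-eta * delta_k * O_j)
```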

When computing $\delta_k$, it is essential to distinguish between output-layer nodes and hidden-layer nodes. If node $k$ lies in the output layer, then:

$$\delta_k = \frac{\partial e}{\partial net_k} = \frac{\partial e}{\partial y_k} \frac{\partial y_k}{\partial net_k} \qquad (7)$$

Because

$$\frac{\partial e}{\partial y_k} = -(t_k - y_k), \qquad \frac{\partial y_k}{\partial net_k} = f'(net_k)$$

it follows that

$$\delta_k = -(t_k - y_k) f'(net_k), \qquad \Delta w_{jk} = \eta (t_k - y_k) f'(net_k) \, O_j \qquad (8)$$
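Combining Eqs. (1), (7), and (8): since $y_k = f(net_k)$ with the sigmoid, $f'(net_k) = y_k(1 - y_k)$, so the whole output-layer update needs only the forward-pass values. A sketch (names and data layout are illustrative assumptions):

```python
def output_layer_update(w, O_hidden, y, t, eta):
    # Eq. (8) with the sigmoid of Eq. (1), where f'(net_k) = y_k * (1 - y_k):
    #   delta_k    = -(t_k - y_k) * y_k * (1 - y_k)
    #   Delta w_jk =  eta * (t_k - y_k) * y_k * (1 - y_k) * O_j
    for k in range(len(y)):
        delta_k = -(t[k] - y[k]) * y[k] * (1.0 - y[k])
        for j in range(len(O_hidden)):
            w[j][k] += -eta * delta_k * O_hidden[j]   # Eq. (4)
    return w
```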

