
JOURNAL OF COMPUTERS, VOL. 8, NO. 6, JUNE 2013

B. Traffic Flow Volterra Neural Network Rapid Learning Algorithm

In establishing the traffic flow chaotic time series Volterra neural network, the number of input neurons, the number of hidden layers, and the number of neurons in the hidden layer must be determined. The traffic flow data used below are taken from "Chongqing Road Traffic Management Data Sheet I" and "Chongqing Road Traffic Management Data Sheet II" (2006). The study uses the traffic volume time series of a two-lane road, sampled every 5 minutes over 28 hours and 5 minutes and covering mini-vehicles, passenger cars, light trucks, midsize vehicles, large cars, trailers, micro vans, and non-standard vehicle types; the sequence length is $n = 337$. First, the traffic flow time series is preprocessed, and the minimum embedding dimension $m = 4$ and delay time $\tau = 3$ are obtained by calculation. Then the traffic flow Volterra neural network can be constructed. It is designed with three layers: an input layer, a single hidden layer, and an output layer. The number of hidden-layer neurons is taken as 9 by the Kolmogorov theorem, the number of input-layer neurons equals the minimum embedding dimension ($m = 4$), and the number of output-layer neurons is 1, giving the 4-9-1 structure of the traffic flow Volterra neural network shown in Figure 2. The hidden-layer activation function can be the sigmoid function or another commonly used function; here a polynomial activation function is used:

$g_s(x) = a_{0,s} + a_{1,s}x + a_{2,s}x^2 + \cdots + a_{i,s}x^i + \cdots$,

where the $a_{i,s} \in \mathbb{R}$ are polynomial coefficients. The optimal network parameters $w_{s,j}$ and $r_s$ ($s = 1, 2, \ldots, N$; $j = 1, 2, \ldots, m$) can be obtained by training the network to reduce the error $E$, and $h_j(l_1, l_2, \ldots, l_j)$ ($j = 1, 2, \ldots, m$) can then be calculated by combining the polynomial coefficients.
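As an illustration of how such a polynomial activation can be evaluated in practice, the following sketch truncates the series at a finite degree; the coefficient values are placeholders, not taken from this paper:

```python
def poly_activation(u, coeffs):
    """Evaluate a truncated polynomial activation
    g(u) = a_0 + a_1*u + ... + a_d*u^d, with coeffs = [a_0, ..., a_d]."""
    # Horner's scheme: fewer multiplications and better numerical behavior
    result = 0.0
    for a in reversed(coeffs):
        result = result * u + a
    return result

# g(u) = 1 + 2u + 3u^2 at u = 2 gives 1 + 4 + 12 = 17
print(poly_activation(2.0, [1.0, 2.0, 3.0]))
```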

The fast learning algorithm of the traffic flow chaotic time series Volterra neural network proceeds as follows:

Algorithm: VNNTF model fast learning algorithm

Step 1) The number of hidden neurons is set to 9 by the Kolmogorov theorem, so the 4-9-1 structure of the traffic flow VNNTF neural network is obtained. The traffic flow time series input signal is $(x(t), x(t+\tau), \ldots, x(t+(m-1)\tau))^{T}$ ($t = 1, 2, \ldots$); the output signal is $y(t)$; the weight coefficient matrix of the hidden layer is $w = (w_{s,i})_{N \times m}$ ($s = 1, 2, \ldots, 9$; $i = 1, 2, \ldots, 4$), and the parameters are $r_s$ ($s = 1, 2, \ldots, 9$).

Step 2) The traffic flow chaotic time series Volterra neural network parameters $w = (w_{s,i})_{N \times m}$ and $r_s$ ($s = 1, 2, \ldots, 9$; $i = 1, 2, \ldots, 4$) are initialized: each component of $w$ is assigned a random value between 0 and 1, and the $r_s$ are initialized to 9 random values between 0 and 1.

Step 3) Phase space reconstruction theory is used to preprocess the traffic flow chaotic time series, and the reconstructed network input signal is normalized. Based on the Takens theorem, the minimum embedding dimension is $m = 4$ and the delay time is $\tau = 3$. The number of reconstructed phase space vectors is $N - 1 - (m-1)\tau = 327$, of which the first 250 vectors are used as network input signals of the form $(x(t), x(t+\tau), \ldots, x(t+(m-1)\tau))^{T}$, where $t = 1, 2, \ldots, 250$, $m = 4$, and $\tau = 3$.

Then the 250 phase space vectors are normalized by the simple transformation $[x(t) - \mathrm{mean}(x(t))]/[\max(x(t)) - \min(x(t))]$, $t = 1, 2, \ldots, 250$, so that the values fall in the range $-1/2$ to $1/2$.
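The delay embedding and normalization of Step 3 can be sketched as follows; this assumes NumPy, and the random series merely stands in for the 337-point Chongqing traffic data:

```python
import numpy as np

def reconstruct(series, m=4, tau=3, count=250):
    """Delay-embed a scalar series into vectors
    (x(t), x(t+tau), ..., x(t+(m-1)*tau)) and keep the first `count`."""
    total = len(series) - 1 - (m - 1) * tau      # 337 -> 327 vectors
    vectors = np.array([series[t : t + (m - 1) * tau + 1 : tau]
                        for t in range(total)])
    return vectors[:count]

def normalize(x):
    """(x - mean(x)) / (max(x) - min(x)): values land near [-1/2, 1/2]."""
    return (x - x.mean()) / (x.max() - x.min())

series = np.random.default_rng(1).random(337)    # stand-in traffic series
vecs = normalize(reconstruct(series))
print(vecs.shape)                                # (250, 4)
```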

Step 4) Using the initialized network and the preprocessed traffic flow time series, the first VNNTF neural network training pass begins with the function

$\hat{y}(t) = \sum_{s=1}^{N} \sum_{i=0}^{+\infty} r_s a_{i,s} \Big( \sum_{j=1}^{m} w_{s,j}\, x(t + (j-1)\tau) \Big)^{i}$,

where the assumed activation function is the polynomial activation function $g_s$, and the $a_{i,s} \in \mathbb{R}$ are polynomial coefficients.
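A minimal forward pass for the 4-9-1 network in Step 4 might look like the following; the infinite series is truncated at degree 2, and all weights and coefficients are random placeholders rather than trained values:

```python
import numpy as np

def vnntf_forward(x_vec, w, r, coeffs):
    """Output sum_s r_s * g_s(u_s), where u_s = sum_j w[s, j] * x_vec[j]
    and g_s is a truncated polynomial activation with rows of `coeffs`."""
    u = w @ x_vec                                   # hidden pre-activations, shape (9,)
    degrees = np.arange(coeffs.shape[1])            # 0, 1, ..., d
    g = (coeffs * u[:, None] ** degrees).sum(axis=1)
    return float(r @ g)

rng = np.random.default_rng(0)
w = rng.random((9, 4))        # hidden weights w_{s,i}: N = 9 neurons, m = 4 inputs
r = rng.random(9)             # output weights r_s
coeffs = rng.random((9, 3))   # coefficients a_{i,s}, series truncated at degree 2
print(vnntf_forward(rng.random(4), w, r, coeffs))
```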

Step 5) Calculate the error function

$E(\theta) = \frac{1}{2} \sum_{t=1}^{250} \big( y(t) - \hat{y}(t) \big)^2$.

Set the maximum error $E_{\max} = 0.035$. If $E < E_{\max}$, store the VNNTF neural network parameters $w = (w_{s,i})_{N \times m}$ and $r_s$ ($s = 1, 2, \ldots, 9$; $i = 1, 2, \ldots, 4$); $h_j(l_1, l_2, \ldots, l_j)$ ($j = 1, 2, \ldots, m$) can then be calculated by combining the polynomial coefficients. Otherwise, go to Step 6).
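The error check of Step 5 can be sketched as below; the target and predicted values are made up for illustration:

```python
import numpy as np

E_MAX = 0.035   # maximum error threshold from Step 5

def sse_error(y_true, y_pred):
    """E(theta) = 1/2 * sum_t (y(t) - y_hat(t))^2."""
    diff = np.asarray(y_true) - np.asarray(y_pred)
    return 0.5 * float(np.sum(diff ** 2))

E = sse_error([0.1, 0.2, 0.3], [0.1, 0.2, 0.25])
print(E < E_MAX)   # below the threshold, so training would stop here
```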

Step 6) Calculate the local gradients of the traffic flow chaotic time series Volterra neural network. For the output layer, the local gradient follows the formula $\delta_j(t) = (y_j(t) - \hat{y}_j(t))\, g_s'(V_j(t))$ ($j$ in the output layer); for the hidden layer, the local gradients are calculated by

$\delta_j(t) = -\dfrac{\partial E(t)}{\partial \hat{y}_j(t)}\, g_s'(V_j(t))$.   (18)

Step 7) Introduce a momentum term to adjust the learning weights of the traffic flow chaotic time series Volterra neural network. Nonlinear feedback is introduced into the weighting formula to adopt chaos mechanisms, since the nonlinear feedback takes the vector form of the weighting variables. For ease of understanding, the vector $w$ and its weighting formula are given as follows. Let $\Delta w_{ji}^{l}(t+1) = w_{ji}^{l}(t+1) - w_{ji}^{l}(t)$ denote the current change of the weighting variables; then

$\Delta w_{ji}^{l}(t+1) = w_{ji}^{l}(t+1) - w_{ji}^{l}(t) = -\eta\, \delta_j^{l+1}(t)\, x_i^{l}(t)$.

To speed up the learning process, a momentum term $\alpha\, \Delta w_{ji}^{l}(t)$ is added, giving

$\Delta w_{ji}^{l}(t+1) = -\eta\, \delta_j^{l+1}(t)\, x_i^{l}(t) + \alpha\, \Delta w_{ji}^{l}(t)$.
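The momentum update of Step 7 can be written compactly as below; the values of eta and alpha are illustrative, not ones reported in the paper:

```python
import numpy as np

def momentum_update(w, delta, x, prev_dw, eta=0.1, alpha=0.9):
    """dw(t+1) = -eta * delta_j(t) * x_i(t) + alpha * dw(t), applied
    to the whole weight matrix via an outer product over j and i."""
    dw = -eta * np.outer(delta, x) + alpha * prev_dw
    return w + dw, dw

w = np.zeros((9, 4))                 # 9 hidden neurons, 4 inputs
prev_dw = np.zeros_like(w)           # previous step's change (zero at start)
delta = np.full(9, 0.5)              # placeholder local gradients delta_j
x = np.ones(4)                       # placeholder layer inputs x_i
w, prev_dw = momentum_update(w, delta, x, prev_dw)
print(w[0, 0])
```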

© 2013 ACADEMY PUBLISHER
