JOURNAL OF COMPUTERS, VOL. 8, NO. 6, JUNE 2013 1451
$$
\begin{cases}
\varpi_i(k+1) = \varpi_i(k) + 2\mu_{\varpi}\, e(k)\, f(d_i(k)) \\[4pt]
\sigma_i(k+1) = \sigma_i(k) + 2\mu_{\sigma}\, e(k)\, f(d_i(k))\, \varpi_i(k)\, \dfrac{d_i^2(k)}{\sigma_i^3(k)} \\[4pt]
r_i(k+1) = r_i(k) + 2\mu_{r}\, e(k)\, f(d_i(k))\, \varpi_i(k)\, \dfrac{x(k) - r_i(k)}{\sigma_i^2(k)}
\end{cases} \qquad (5)
$$

where $i = 0, 1, \ldots, L-1$.
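As a minimal sketch of one adaptation step of Eq. (5): the specific forms $d_i(k) = \lVert x(k) - r_i(k)\rVert$ and the Gaussian basis $f(d_i) = \exp(-d_i^2/\sigma_i^2)$ are assumptions here (they are not fixed by this excerpt), as are the learning-rate values.

```python
import numpy as np

def rbf_update(w, sigma, r, x, e, mu_w=0.01, mu_s=0.01, mu_r=0.01):
    """One adaptation step following Eq. (5).

    Assumed forms (illustrative, not fixed by the excerpt):
      d_i(k) = ||x(k) - r_i(k)||          distance to centre i
      f(d_i) = exp(-d_i^2 / sigma_i^2)    Gaussian basis output
    w, sigma: (L,) weights and widths; r: (L, n) centres;
    x: (n,) current input; e: scalar output error e(k).
    """
    d = np.linalg.norm(x - r, axis=1)                      # d_i(k)
    f = np.exp(-d**2 / sigma**2)                           # f(d_i(k))
    w_new = w + 2 * mu_w * e * f                           # weight update
    sigma_new = sigma + 2 * mu_s * e * f * w * d**2 / sigma**3   # width update
    r_new = r + 2 * mu_r * e * (f * w / sigma**2)[:, None] * (x - r)  # centre update
    return w_new, sigma_new, r_new
```

Note that all three updates share the common factor $2\mu\, e(k) f(d_i(k))$, so a zero output error leaves every parameter unchanged.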
By selecting a suitable nonlinear feedback term, the RBF neural network can enhance the stabilization and associative memory of its chaotic dynamics and the generalization ability of the predictive model, even for imperfect and varying inputs. The network dynamics then become chaotic in the weight space, and the update formula for $\varpi_i(k)$ becomes
$$
\varpi_i(k+1) = \varpi_i(k) + 2\mu_{\varpi}\, e(k)\, f(d_i(k)) + g\bigl(\varpi_i(k) - \varpi_i(k-1)\bigr) \qquad (6)
$$

where $g(x) = \tanh(ax)\exp(-bx^2)$ and $x = \varpi_i(k) - \varpi_i(k-1)$.
The feedback function $g(x)$ is chosen because, for different parameter values, it can approximate various difference feedback functions, such as the staircase function and the $\delta$ function. If the feedback function is viewed as a motion-promoting force, the parameters $a$ and $b$ correspond to the amplitude and width of that force, respectively.
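The shape of the feedback term and its dependence on $a$ and $b$ can be checked numerically; the parameter values below are arbitrary illustrations.

```python
import numpy as np

def g(x, a, b):
    """Nonlinear feedback term of Eq. (6): g(x) = tanh(a*x) * exp(-b*x**2).
    a scales the amplitude (slope near 0); b controls the width."""
    return np.tanh(a * x) * np.exp(-b * x**2)

xs = np.linspace(-3, 3, 61)
wide = g(xs, a=5.0, b=0.1)    # large a, small b: close to a staircase/sign shape
narrow = g(xs, a=5.0, b=5.0)  # large b: a narrow delta-like pulse around 0
```

Since $\tanh$ is odd and the Gaussian factor is even, $g$ is an odd function that vanishes at $x = 0$ and for large $|x|$.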
Reference [18] discusses these influences in detail for suitably chosen learning and prediction processes. The simulation results there indicate that, by selecting a suitable nonlinear feedback term, the network can enhance the stabilization and associative memory of its chaotic dynamics and the generalization ability of the predictive model during learning and prediction, even for imperfect and varying inputs.
III. DETERMINATION METHOD OF THE OPTIMAL DELAY TIME AND MINIMUM EMBEDDING DIMENSION
A. Determination Method of the Optimal Delay Time τ
During phase-space reconstruction, the Takens embedding theorem places no restriction on the delay time $\tau$. In theory, if the observed data record is infinitely long, the choice of $\tau$ has little effect on the embedding. In practice, however, $\tau$ has a great impact: if $\tau$ is too small, the chaotic attractor cannot fully unfold and the redundancy error is large; if $\tau$ is too large, the irrelevance error is large. For complex nonlinear systems, the mutual information method is therefore used to determine the optimal delay time $\tau$; it takes the first minimum of the mutual information function as the optimal delay. Its expression is as follows:
$$
M(x_t, x_{t-\tau}) = \sum_{i,j} P_{i,j}(\tau)\, \ln \frac{P_{i,j}(\tau)}{P_i\, P_j} \qquad (7)
$$
where $P_i$ is the probability that the point $x_t$ falls in the $i$-th interval, and $P_{i,j}(\tau)$ is the joint probability that $x_t$ falls in the $i$-th interval at time $t$ and in the $j$-th interval at time $t + \tau$.
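A histogram-based sketch of Eq. (7) follows; the bin count and the first-local-minimum search are implementation choices not specified in the text.

```python
import numpy as np

def mutual_information(x, tau, bins=16):
    """Histogram estimate of M(x_t, x_{t-tau}) from Eq. (7):
    sum over bins of P_ij(tau) * ln( P_ij(tau) / (P_i * P_j) )."""
    a, b = x[:-tau], x[tau:]
    p_ij, _, _ = np.histogram2d(a, b, bins=bins)
    p_ij /= p_ij.sum()
    p_i = p_ij.sum(axis=1)          # marginal probabilities P_i
    p_j = p_ij.sum(axis=0)          # marginal probabilities P_j
    nz = p_ij > 0                   # skip empty bins (0 * ln 0 = 0)
    return float(np.sum(p_ij[nz] * np.log(p_ij[nz] / np.outer(p_i, p_j)[nz])))

def optimal_delay(x, max_tau=50, bins=16):
    """First local minimum of the mutual information curve; falls back
    to the global minimum if no local minimum is found."""
    mi = [mutual_information(x, t, bins) for t in range(1, max_tau + 1)]
    for t in range(1, len(mi) - 1):
        if mi[t] < mi[t - 1] and mi[t] < mi[t + 1]:
            return t + 1            # delays start at tau = 1
    return int(np.argmin(mi)) + 1
```

For a sinusoidal signal, for example, the mutual information at a delay near a quarter period is well below its value at $\tau = 1$, which is why the first minimum is a sensible unfolding delay.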
B. Determination Method of the Minimum Embedding Dimension m
In this paper, the commonly used false nearest neighbor method is applied to calculate the minimum embedding dimension $m$. Let the attractor dimension be $d$; then $m$ is the minimum embedding dimension for which the attractor is fully unfolded. When $m < d$, the attractor cannot be completely unfolded in the phase space: the embedding projects some points onto one another, and such a projected point can appear to be the nearest neighbor of another point in the phase space. Since the two points are not true nearest neighbors in the original system, they are called false nearest neighbors. For any point $y(t)$ in the phase space, the criterion for a false nearest neighbor is as follows:
$$
\left[ \frac{D_{m+1}^2(t) - D_m^2(t)}{D_m^2(t)} \right]^{1/2} = \frac{\left| x(t + m\tau) - x(t' + m\tau) \right|}{D_m(t)} > \rho_m \qquad (8)
$$
where $D_m(t)$ is the Euclidean distance between the point $y(t)$ and its nearest neighbor $y^N(t)$ in the phase space when the embedding dimension is $m$. According to this criterion, the number of false nearest neighbors $N$ is computed as $m$ increases from small to large, and the change $\Delta N$ is calculated as the embedding dimension goes from $m$ to $m+1$. Plotting the curve of $\Delta N / N$ against $m$, the value $m^*$ at which $\Delta N / N$ drops to 0 is the minimum embedding dimension sought.
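The criterion of Eq. (8) can be sketched as follows; the threshold $\rho_m = 15$ and the brute-force neighbor search are illustrative choices, not taken from the text.

```python
import numpy as np

def false_neighbor_fraction(x, m, tau, rho=15.0):
    """Fraction of false nearest neighbours at embedding dimension m.
    A neighbour is false when Eq. (8) holds, i.e. the extra (m+1)-th
    coordinate inflates the distance by more than the factor rho."""
    n = len(x) - m * tau
    Y = np.array([x[i:i + m * tau:tau] for i in range(n)])  # delay vectors
    false = 0
    for i in range(n):
        d = np.linalg.norm(Y - Y[i], axis=1)
        d[i] = np.inf
        j = int(np.argmin(d))        # nearest neighbour y^N(t) in dimension m
        if d[j] == 0:
            continue
        # |x(t + m*tau) - x(t' + m*tau)| / D_m(t) > rho  ->  false neighbour
        if abs(x[i + m * tau] - x[j + m * tau]) / d[j] > rho:
            false += 1
    return false / n
```

Increasing $m$ until the fraction drops to (near) zero gives the minimum embedding dimension $m^*$; a plain sinusoid, for instance, already unfolds at $m = 2$.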
IV. ADAPTIVE RBF NEURAL NETWORK RAPID LEARNING ALGORITHM
In establishing an RBF network for a chaotic time series, the number of input neurons, the number of hidden layers, and the number of neurons in the hidden layer must all be considered. The chaotic time series used below is a sampled Lorenz chaotic time series, for which the RBF neural network is constructed as follows. The network has three layers: an input layer, a single hidden layer, and an output layer. By the Kolmogorov theorem, the number of hidden-layer neurons is taken as 9; the number of input-layer neurons equals the minimum embedding dimension; and the number of output-layer neurons is 1. A 4-9-1 RBF network for the sampled Lorenz chaotic time series is thus obtained, as shown in Figure 1.
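A minimal sketch of the 4-9-1 structure described above: the Gaussian basis and the random placeholder initialisation are assumptions for illustration, and training would follow the update rules of Section II.

```python
import numpy as np

class RBF491:
    """4-9-1 RBF network: 4 inputs (minimum embedding dimension),
    9 Gaussian hidden units (Kolmogorov: 2*4 + 1), 1 linear output.
    Centres, widths and weights are placeholder initial values."""
    def __init__(self, n_in=4, n_hidden=9, seed=0):
        rng = np.random.default_rng(seed)
        self.r = rng.standard_normal((n_hidden, n_in))  # centres r_i
        self.sigma = np.ones(n_hidden)                  # widths sigma_i
        self.w = rng.standard_normal(n_hidden)          # weights w_i

    def hidden(self, x):
        d2 = np.sum((x - self.r) ** 2, axis=1)          # d_i^2
        return np.exp(-d2 / self.sigma ** 2)            # f(d_i)

    def forward(self, x):
        return float(self.w @ self.hidden(x))           # single output neuron

net = RBF491()
y = net.forward(np.array([0.1, 0.2, 0.3, 0.4]))         # one-step prediction
```

The input vector would be a delay vector of the reconstructed phase space, and the output the one-step-ahead prediction.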
Algorithm. The steps of chaotic time series learning and prediction with the adaptive RBF neural network filtering predictive model are as follows:
Step 1) Based on Takens' delay-coordinate phase-space reconstruction theory, the number of input neurons m
© 2013 ACADEMY PUBLISHER