

The delay time $\tau$ is commonly determined by the autocorrelation function method, the multiple correlation function method, or the mutual information method. The embedding dimension $m$ is calculated by the GP algorithm, the pseudo-nearest-point method, the correlation integral method, or the Cao method.
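As an illustration only (not part of the original method description), a minimal Python sketch of the autocorrelation function method for choosing $\tau$: the delay is taken as the first lag at which the normalized autocorrelation drops below $1 - 1/e$, one common convention (the first zero crossing is another).

```python
import numpy as np

def delay_by_autocorrelation(x, max_lag=100, threshold=1 - 1/np.e):
    """Pick tau as the first lag where the normalized autocorrelation
    of the (mean-removed) series falls below the given threshold."""
    x = np.asarray(x, dtype=float)
    x = x - x.mean()
    var = np.dot(x, x)
    for lag in range(1, max_lag):
        acf = np.dot(x[:-lag], x[lag:]) / var
        if acf < threshold:
            return lag
    return max_lag  # fallback if the threshold is never crossed

# usage with a hypothetical noisy sine series
t = np.arange(0, 100, 0.1)
series = np.sin(t) + 0.05 * np.random.randn(t.size)
tau = delay_by_autocorrelation(series)
```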

Chaotic time series prediction is based on Takens' delay-coordinate phase-space reconstruction theory. If the time series of one of the variables is available, then, because the interaction between the variables is such that every component contains information on the complex dynamics of the system, a smooth function can be found to model the portraits of the time series. If the chaotic time series is $\{x(t)\}$, the reconstructed state vector is

$$\mathbf{x}(t) = \big( x(t),\, x(t+\tau),\, \ldots,\, x(t+(m-1)\tau) \big)$$

where $m$ ($m = 2, 3, \ldots$) is called the embedding dimension ($m = 2d + 1$, where $d$ is the number of degrees of freedom of the system dynamics), and $\tau$ is the delay time. The predictive reconstruction of a chaotic series is essentially an inverse problem of the system dynamics: there exists a smooth function defined on the reconstructed manifold in $R^m$ that describes the dynamics, $\mathbf{x}(t+T) = F(\mathbf{x}(t))$, where $T$ ($T > 0$) is the forward prediction step length and $F(\cdot)$ is the reconstructed predictive model.
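A minimal sketch (not from the paper) of building the delay-coordinate state vectors $\mathbf{x}(t) = (x(t), x(t+\tau), \ldots, x(t+(m-1)\tau))$ from a scalar series, with $m$ and $\tau$ assumed to have been chosen by the methods above:

```python
import numpy as np

def delay_embed(x, m, tau):
    """Return an array whose rows are the reconstructed state vectors
    (x(t), x(t+tau), ..., x(t+(m-1)*tau))."""
    x = np.asarray(x, dtype=float)
    n_vectors = len(x) - (m - 1) * tau
    if n_vectors <= 0:
        raise ValueError("series too short for this m and tau")
    return np.column_stack([x[i * tau: i * tau + n_vectors] for i in range(m)])

# usage with hypothetical values m = 5, tau = 12:
# states = delay_embed(series, m=5, tau=12)   # shape (n_vectors, 5)
```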

B. RBF Neural Network Function Approximation Theory

Takens' embedding theorem states that there is a smooth mapping $F$ such that

$$\mathbf{x}(t+\tau) = F(\mathbf{x}(t)) \qquad (1)$$

that is,

$$[\,x(t+\tau),\, x(t),\, \ldots,\, x(t-(m-2)\tau)\,] = F\big([\,x(t),\, x(t-\tau),\, \ldots,\, x(t-(m-1)\tau)\,]\big).$$

For the purposes of calculation, equation (1) can be rewritten as

$$x(t+\tau) = f\big[\,x(t),\, x(t-\tau),\, \ldots,\, x(t-(m-1)\tau)\,\big] \qquad (2)$$

where $f$ is the mapping from $R^M$ to $R^L$. Chaos theory suggests that a chaotic time series is predictable in the short term, and the essence of prediction is to obtain a good approximation $\hat{f}$ of the function $f$. A chaotic time series is determined by an internal regularity; this regularity comes from the nonlinearity of the system and shows up in the delayed states of the series, which makes the system appear to have some kind of memory. At the same time, it is difficult to capture such regularity with analytic methods, and this type of information processing is exactly what neural networks are suited to. The Kolmogorov continuity theorem in neural network theory provides the theoretical guarantee for nonlinear function approximation by neural networks.
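Before stating that theorem, note that equation (2) already fixes how training data for such an approximation is formed: each delay vector is paired with the next value of the series. A hedged Python sketch (the helper name and layout are illustrative, not from the paper):

```python
import numpy as np

def make_training_pairs(x, m, tau):
    """Inputs are [x(t), x(t-tau), ..., x(t-(m-1)*tau)];
    targets are x(t+tau), following equation (2)."""
    x = np.asarray(x, dtype=float)
    X, y = [], []
    start = (m - 1) * tau              # earliest t with a full delay history
    for t in range(start, len(x) - tau):
        X.append([x[t - i * tau] for i in range(m)])
        y.append(x[t + tau])
    return np.array(X), np.array(y)
```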

Theorem (Kolmogorov continuity theorem). Let $\varphi(x)$ be a non-constant, bounded, monotonically increasing continuous function, let $M$ be a compact subset of $R^n$, and let $f(x) = f(x_1, x_2, \ldots, x_n)$ be a continuous real-valued function on $M$. Then for every $\varepsilon > 0$ there exist a positive integer $N$ and real numbers $C_i$, $\varpi_{ij}$, $\theta_j$ such that

$$\hat{f}(x_1, x_2, \ldots, x_n) = \sum_{i=1}^{N} C_i \,\varphi\Big( \sum_{j=1}^{n} \varpi_{ij} x_j - \theta_j \Big) \qquad (3)$$

satisfies

$$\max_{M} \big|\, f(x_1, x_2, \ldots, x_n) - \hat{f}(x_1, x_2, \ldots, x_n) \,\big| < \varepsilon. \qquad (4)$$
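As a concrete, purely illustrative reading of equation (3), the sketch below evaluates $\hat{f}$ for given coefficients $C_i$, weights $\varpi_{ij}$, and thresholds, with a sigmoid chosen for $\varphi$; the threshold is indexed per hidden term here, the usual reading, and the numbers are arbitrary and only show the structure of the superposition.

```python
import numpy as np

def phi(z):
    """A non-constant, bounded, monotonically increasing continuous function."""
    return 1.0 / (1.0 + np.exp(-z))

def f_hat(x, C, W, theta):
    """Evaluate the superposition of equation (3):
    sum_i C_i * phi( sum_j W[i, j] * x_j - theta_i )."""
    x = np.asarray(x, dtype=float)
    return np.dot(C, phi(W @ x - theta))

# arbitrary illustrative parameters: N = 3 hidden terms, n = 2 inputs
C = np.array([0.5, -1.2, 0.8])
W = np.array([[1.0, 0.3], [-0.7, 2.0], [0.4, -1.5]])
theta = np.array([0.1, -0.2, 0.05])
value = f_hat([0.3, 0.7], C, W, theta)
```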

By the above theorem, the nonlinear time series prediction process using a neural network can be regarded as dynamic reconstruction, which is an inverse process. Namely, there exists a three-layer network, with hidden units whose output function is $\varphi$ and a linear input-output relation at the output layer, such that the network's input-output relation $\hat{f}$ can approximate $f$. Therefore, the theorem guarantees, from a mathematical point of view, the feasibility of chaotic time series prediction by neural networks.

C. Realized Architecture of the Adaptive RBF Neural Network Filtering Predictive Model

After reconstructing the phase space, the RBF neural network adopts the three-layer structure of Figure 1, where the input layer has $m$ neurons and feeds the second layer directly, without any weighting. $r_i$ ($i = 1, 2, \ldots, L$) are the reference vectors and $\varpi_i(k)$ ($i = 1, 2, \ldots, L$) are the adjustable parameters of the adaptive RBF neural network filter, which makes the filter more flexible in learning nonlinear functions. What distinguishes this network from traditional neural networks is that the activation function is an RBF rather than the sigmoid function; it is usually chosen as a Gaussian or spline function $f(d_i(k))$, where $d_i(k) = \| x(k) - r_i(k) \|$. In the adaptive RBF neural network filter, the output $\hat{y}(k)$ is expressed as

$$\hat{y}(k) = f_2\Big( \sum_{i=0}^{L-1} \varpi_i(k)\, f(d_i(k)) \Big), \qquad i = 0, 1, \ldots, L-1,$$

where $f_2(\cdot)$ is the activation function of the output signal.
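A minimal sketch of this forward pass (assumed details, not the paper's code), using a Gaussian basis $f(d_i(k)) = \exp(-d_i(k)^2 / 2\sigma^2)$ and a linear output activation for $f_2$; the width $\sigma$ and the choice of a linear $f_2$ are illustrative assumptions.

```python
import numpy as np

def rbf_filter_output(x_k, R, w_k, sigma=1.0, f2=lambda s: s):
    """Adaptive RBF filter output:
    y_hat(k) = f2( sum_i w_i(k) * f(d_i(k)) ),
    with d_i(k) = ||x(k) - r_i|| and a Gaussian basis f."""
    x_k = np.asarray(x_k, dtype=float)
    R = np.asarray(R, dtype=float)            # shape (L, m): reference vectors r_i
    d = np.linalg.norm(R - x_k, axis=1)       # distances d_i(k)
    h = np.exp(-d**2 / (2.0 * sigma**2))      # basis outputs f(d_i(k))
    return f2(np.dot(w_k, h))

# usage with hypothetical sizes (m = 5 inputs, L = 10 hidden units):
# y_hat = rbf_filter_output(x_k=states[k], R=np.random.randn(10, 5),
#                           w_k=np.zeros(10))
```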

[Figure 1 depicts the input $x(k)$ passing through a chain of unit delays $z^{-1}$, a hidden layer of reference units $r_0, r_1, r_2, \ldots, r_L$ with adjustable weights $\varpi_0(k), \varpi_1(k), \varpi_2(k), \ldots, \varpi_L(k)$, and a summation node producing the output $\hat{y}(k)$.]

Figure 1. Structure of adaptive RBF neural network filtering

Generally, the learning of the RBF neural network filter proceeds in three steps. If the gradient method and the Gaussian activation function are adopted, the update formulas of the RBF are as follows:
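(As a generic illustration only, not the specific update formulas given next: one possible squared-error gradient step on the output weights, assuming the Gaussian basis and a linear output layer as in the sketch above.)

```python
import numpy as np

def gradient_step(x_k, y_k, R, w_k, sigma=1.0, mu=0.01):
    """One illustrative gradient-descent update of the weights w(k)
    for the cost 0.5 * (y(k) - y_hat(k))^2 with a linear output layer."""
    d = np.linalg.norm(np.asarray(R) - np.asarray(x_k), axis=1)
    h = np.exp(-d**2 / (2.0 * sigma**2))      # basis outputs f(d_i(k))
    e = y_k - np.dot(w_k, h)                  # prediction error e(k)
    return w_k + mu * e * h                   # w(k+1) = w(k) + mu * e(k) * f(d(k))
```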

