
1492 JOURNAL OF COMPUTERS, VOL. 8, NO. 6, JUNE 2013

$$\dot V_\theta \le -\rho V_\theta + \psi_\theta \tag{31}$$

where $\rho = \sigma\gamma_\theta$. Eq. (31) implies that for $V_\theta > \psi_\theta/\rho$, $\dot V_\theta < 0$ and, therefore, $\tilde\theta$ is bounded. By integrating (31), we can establish that:

$$\|\tilde\theta\|^2 \le \|\tilde\theta(0)\|^2 e^{-\rho t} + \frac{2\gamma_\theta\psi_\theta}{\rho} \tag{32}$$

From (32), we have

$$\|\tilde\theta\| \le \|\tilde\theta(0)\| e^{-0.5\rho t} + \sqrt{\frac{2\gamma_\theta\psi_\theta}{\rho}} \tag{33}$$
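The step from (32) to (33) rests on the elementary fact that $\sqrt{a+b} \le \sqrt{a} + \sqrt{b}$ for nonnegative $a, b$. A quick numeric sanity check of that step, using made-up scalar stand-ins for the norms and parameters (none of these values come from the paper):

```python
import math
import random

# Illustrative check that the bound (33) dominates the square root of the
# bound (32): sqrt(a + b) <= sqrt(a) + sqrt(b), applied with
# a = ||theta~(0)||^2 * e^{-rho t} and b = 2*gamma_theta*psi_theta/rho.
random.seed(0)
for _ in range(1000):
    theta0 = random.uniform(0.0, 10.0)   # stand-in for ||theta~(0)||
    gamma = random.uniform(0.01, 5.0)    # stand-in for gamma_theta
    psi = random.uniform(0.01, 5.0)      # stand-in for psi_theta
    rho = random.uniform(0.01, 5.0)      # stand-in for rho = sigma*gamma_theta
    t = random.uniform(0.0, 50.0)
    rhs32 = theta0**2 * math.exp(-rho * t) + 2.0 * gamma * psi / rho
    rhs33 = theta0 * math.exp(-0.5 * rho * t) + math.sqrt(2.0 * gamma * psi / rho)
    assert math.sqrt(rhs32) <= rhs33 + 1e-12
print("bound (33) dominates the square root of bound (32) on all samples")
```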

Using (33) and the fact that $\delta(z)$ and $h_{u_\lambda}$ are bounded, we can write

$$
\begin{aligned}
\left|\beta(\xi,\eta)h_{u_\lambda}\left(\phi^T(z)\tilde\theta + \delta(z)\right)\right|
&\le \left|\beta(\xi,\eta)h_{u_\lambda}\phi^T(z)\tilde\theta\right| + \left|\beta(\xi,\eta)h_{u_\lambda}\delta(z)\right| \\
&\le \left\|\beta(\xi,\eta)h_{u_\lambda}\phi^T(z)\right\|\,\|\tilde\theta\| + \left|\beta(\xi,\eta)h_{u_\lambda}\delta(z)\right| \\
&\le \left\|\beta(\xi,\eta)h_{u_\lambda}\phi^T(z)\right\|\,\|\tilde\theta(0)\|\,e^{-0.5\rho t}
 + \left\|\beta(\xi,\eta)h_{u_\lambda}\phi^T(z)\right\|\sqrt{\frac{2\gamma_\theta\psi_\theta}{\rho}}
 + \left|\beta(\xi,\eta)h_{u_\lambda}\delta(z)\right| \\
&\le \psi_0 e^{-0.5\rho t} + \psi_1
\end{aligned} \tag{34}
$$

where $\psi_0$, $\psi_1$ are some finite positive constants.

Lemma 1: The following inequality holds for all $\Xi > 0$ and $\varsigma \in \mathbb{R}$ with $K_c = 0.2785$:

$$0 \le |\varsigma| - \varsigma\tanh\!\left(\frac{\varsigma}{\Xi}\right) \le K_c\Xi \tag{35}$$
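Lemma 1 can be verified numerically: the gap $|\varsigma| - \varsigma\tanh(\varsigma/\Xi)$ equals $\Xi \cdot y(1 - \tanh y)$ for $y = |\varsigma|/\Xi$, whose maximum over $y \ge 0$ is just below $0.2785$. A grid check (illustrative values of $\Xi$, not from the paper):

```python
import math

# Check Lemma 1: 0 <= |s| - s*tanh(s/Xi) <= Kc*Xi with Kc = 0.2785,
# scanned over a grid of s for a few values of Xi.
KC = 0.2785
for xi in (0.1, 1.0, 3.0):
    worst = 0.0
    s = -10.0 * xi
    while s <= 10.0 * xi:
        gap = abs(s) - s * math.tanh(s / xi)
        assert -1e-12 <= gap <= KC * xi + 1e-9
        worst = max(worst, gap)
        s += 0.001 * xi
    # the bound is nearly tight: the worst gap approaches Kc*Xi
    # (attained near s ~ 0.64*Xi)
    assert worst > 0.27 * xi
print("Lemma 1 inequality verified on the grid")
```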

Theorem 1: Suppose that Assumptions 1–3 are satisfied for the system (1). Then the neural network controller and the adaptation law given by (24) guarantee the convergence of the neural network parameters and the uniform ultimate boundedness of all the signals in the closed-loop system.

Proof: Consider the Lyapunov function candidate:

$$V(e,\eta) = e^T P e + \mu V_0(\eta) \tag{36}$$

where $\mu > 0$ is a design parameter. Considering (4), (19), (20), (34) and Lemma 1, and differentiating $V(e,\eta)$ with respect to time, we obtain

$$
\begin{aligned}
\dot V(e,\eta) &= e^T\!\left(A_c^T P + P A_c\right)e - 2b^T P e\,\lambda\tanh\!\left(\frac{b^T P e}{\Xi}\right) + 2b^T P e\,h_{u_\lambda}\!\left(u^* - u\right) + \mu\frac{dV_0(\eta)}{dt} \\
&= -e^T Q e - 2b^T P e\,\lambda\tanh\!\left(\frac{b^T P e}{\Xi}\right) + \mu\frac{dV_0(\eta)}{dt} + 2b^T P e\,h_{u_\lambda}\!\left(\phi^T(z)\tilde\theta + \delta(z)\right) \\
&\le -e^T Q e - 2b^T P e\,\lambda\tanh\!\left(\frac{b^T P e}{\Xi}\right) + \mu\frac{dV_0(\eta)}{dt} + 2\left|b^T P e\right|\left(\psi_0 e^{-0.5\rho t} + \psi_1\right)
\end{aligned} \tag{37}
$$

If the design parameter $\lambda$ is large enough to make $\lambda \ge \psi_1$, then considering Assumption 4, we have

$$
\begin{aligned}
\dot V(e,\eta) &\le -e^T Q e + 2\left|b^T P e\right|\psi_0 e^{-0.5\rho t} + 2\psi_1 K_c\Xi
 + \mu\frac{\partial V_0(\eta)}{\partial\eta}\left[q(0,\eta,u_\eta) + q(\xi,\eta,u_\eta) - q(0,\eta,u_\eta)\right] \\
&\le -e^T Q e - \mu\sigma_3\|\eta\|^2 + \mu\sigma_4 L_\xi\|\xi\|\|\eta\| + \mu\sigma_4 L_q\|\eta\|
 + 2\left|b^T P e\right|\psi_0 e^{-0.5\rho t} + 2\psi_1 K_c\Xi
\end{aligned} \tag{38}
$$

Then, considering Assumption 2 and the inequalities

$$2\left|b^T P e\right|\psi_0 e^{-0.5\rho t} \le 0.5\|e\|^2 + 2\left\|b^T P\right\|^2\psi_0^2 e^{-\rho t}$$

$$\|\xi\| \le \|e\| + \left\|\left(y_d, y_d^{(1)}, \ldots, y_d^{(r-1)}\right)^T\right\| \le \|e\| + b_d$$

we have

$$
\begin{aligned}
\dot V(e,\eta) &\le -\left(\lambda_{\min}(Q) - 0.5\right)\|e\|^2 - \mu\sigma_3\|\eta\|^2
 + \mu\sigma_4 L_\xi\|e\|\|\eta\| + \mu\sigma_4\left(L_\xi b_d + L_q\right)\|\eta\| \\
&\quad + 2\left\|b^T P\right\|^2\psi_0^2 e^{-\rho t} + 2\psi_1 K_c\Xi
\end{aligned} \tag{39}
$$

Using the inequalities

$$\mu\sigma_4 L_\xi\|e\|\|\eta\| \le \frac{1}{2}\mu\sigma_4 L_\xi\varepsilon_1\|\eta\|^2 + \frac{1}{2\varepsilon_1}\mu\sigma_4 L_\xi\|e\|^2 \tag{40}$$

$$\mu\sigma_4\left(L_\xi b_d + L_q\right)\|\eta\| \le \mu^2\left(\sigma_4\varepsilon_2\left(L_\xi b_d + L_q\right)\right)^2\|\eta\|^2 + \frac{1}{4\varepsilon_2^2} \tag{41}$$
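Both (40) and (41) are instances of Young's inequality, $ab \le \varepsilon a^2 + b^2/(4\varepsilon)$. A randomized numeric check with made-up scalar stand-ins for the norms and constants (none of these values are from the paper):

```python
import random

# Randomized check of the Young-type splittings (40) and (41):
#   (40): mu*s4*L_xi*|e|*|n| <= 0.5*mu*s4*L_xi*eps1*|n|^2
#                               + mu*s4*L_xi*|e|^2/(2*eps1)
#   (41): mu*s4*Cq*|n| <= mu^2*(s4*eps2*Cq)^2*|n|^2 + 1/(4*eps2^2),
# where Cq = L_xi*b_d + L_q, and e, n stand in for ||e||, ||eta||.
random.seed(1)
for _ in range(2000):
    mu = random.uniform(0.01, 3.0)
    s4 = random.uniform(0.01, 3.0)       # sigma_4
    L_xi = random.uniform(0.01, 3.0)
    b_d = random.uniform(0.01, 3.0)
    L_q = random.uniform(0.01, 3.0)
    e = random.uniform(0.0, 5.0)
    n = random.uniform(0.0, 5.0)
    eps1 = random.uniform(0.01, 2.0)
    eps2 = random.uniform(0.01, 2.0)
    # (40): a*b <= (eps/2)*a^2 + b^2/(2*eps), with a = |n|, b = |e|
    lhs40 = mu * s4 * L_xi * e * n
    rhs40 = 0.5 * mu * s4 * L_xi * eps1 * n**2 + mu * s4 * L_xi * e**2 / (2 * eps1)
    assert lhs40 <= rhs40 + 1e-9
    # (41): a*b <= eps^2*a^2 + b^2/(4*eps^2), with a = mu*s4*Cq*|n|, b = 1
    Cq = L_xi * b_d + L_q
    lhs41 = mu * s4 * Cq * n
    rhs41 = mu**2 * (s4 * eps2 * Cq)**2 * n**2 + 1.0 / (4 * eps2**2)
    assert lhs41 <= rhs41 + 1e-9
print("inequalities (40) and (41) hold on all samples")
```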

Then (39) satisfies

$$
\begin{aligned}
\dot V(e,\eta) &\le -\left(\lambda_{\min}(Q) - 0.5 - \frac{1}{2\varepsilon_1}\mu\sigma_4 L_\xi\right)\|e\|^2
 - \mu\left[\sigma_3 - \frac{1}{2}\sigma_4 L_\xi\varepsilon_1 - \mu\left(\sigma_4\varepsilon_2\left(L_\xi b_d + L_q\right)\right)^2\right]\|\eta\|^2 \\
&\quad + 2\left\|b^T P\right\|^2\psi_0^2 e^{-\rho t} + 2\psi_1 K_c\Xi + \frac{1}{4\varepsilon_2^2}
\end{aligned} \tag{42}
$$

where $\varepsilon_1$, $\varepsilon_2$ are suitable positive constants. We adjust $\varepsilon_1$, $\varepsilon_2$ to make $\sigma_3 - \frac{1}{2}\sigma_4 L_\xi\varepsilon_1 - \mu\left(\sigma_4\varepsilon_2\left(L_\xi b_d + L_q\right)\right)^2 > 0$.
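Such a choice is always possible for small enough $\varepsilon_1$, $\varepsilon_2$ (and $\mu$), since the subtracted terms vanish as those parameters shrink. A sketch of the tuning check with made-up parameter values (hypothetical, not from the paper):

```python
# Illustrative tuning check: for (42) to be negative definite outside a
# residual set, both squared-term coefficients must be positive:
#   sigma3 - 0.5*sigma4*L_xi*eps1 - mu*(sigma4*eps2*(L_xi*b_d + L_q))**2 > 0
#   lambda_min_Q - 0.5 - mu*sigma4*L_xi/(2*eps1) > 0
sigma3, sigma4 = 2.0, 1.0
L_xi, b_d, L_q = 0.5, 1.0, 0.5
mu = 0.1
lambda_min_Q = 2.0
eps1, eps2 = 0.5, 0.5

eta_coeff = sigma3 - 0.5 * sigma4 * L_xi * eps1 \
    - mu * (sigma4 * eps2 * (L_xi * b_d + L_q)) ** 2
e_coeff = lambda_min_Q - 0.5 - mu * sigma4 * L_xi / (2 * eps1)
assert eta_coeff > 0 and e_coeff > 0
print(f"eta coefficient = {eta_coeff:.4f}, e coefficient = {e_coeff:.4f}")
```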

© 2013 ACADEMY PUBLISHER
