
and the weight adaptation equation is given by

$$W_{k+1} = W_k + \Delta W_k . \qquad (16.24)$$

The feedback loop in fig. 16.3 first generates $Y_0$ arbitrarily and later makes corrections through a change in $W_k$, which subsequently helps to determine the controlled $X_{k+1}$ and hence $Y_{k+1}$. $W_k^{-1}$ denotes the fuzzy compositional inverse (strictly speaking, the pre-inverse) of the matrix $W_k$. The stability of the learning model is guaranteed, vide Theorem 16.3.
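
As a concrete sketch of one pass around this loop, the following Python fragment assumes that $\circ$ is the usual max-min composition and that the correction term has the form $\alpha\, E_k \circ X_{k+1}^{T} / ( X_{k+1}^{T} \circ X_{k+1} )$ used in the proof of Theorem 16.3 below; the function names and the numerical values are illustrative only and do not appear in the text.

```python
import numpy as np

def max_min_compose(a, b):
    """Fuzzy max-min composition: (a o b)[i, j] = max_k min(a[i, k], b[k, j])."""
    return np.max(np.minimum(a[:, :, None], b[None, :, :]), axis=1)

def adapt_weights(W_k, E_k, X_next, alpha):
    """One adaptation step W_{k+1} = W_k + dW_k of eq. (16.24), taking
    dW_k = alpha * (E_k o X_next^T) / (X_next^T o X_next) as in the proof
    of Theorem 16.3.

    W_k    : (n, m) fuzzy relational (weight) matrix
    E_k    : (n, 1) error vector D - Y_k
    X_next : (m, 1) controlled input X_{k+1}
    """
    numer = max_min_compose(E_k, X_next.T)      # (n, m) correction direction
    denom = max_min_compose(X_next.T, X_next)   # (1, 1) normalizing scalar
    return W_k + alpha * numer / denom.item()

# Illustrative values only: a 2x2 weight matrix, a target D and one input X_{k+1}.
W = np.array([[0.3, 0.7],
              [0.6, 0.2]])
X = np.array([[0.5], [0.9]])
D = np.array([[0.8], [0.4]])
Y = max_min_compose(W, X)                    # current response Y_k
W = adapt_weights(W, D - Y, X, alpha=0.5)    # corrected weights W_{k+1}
```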

Theorem 16.3: The error vector $E_k$ in the learning model converges to a stable point for $0 < \alpha < 2$, and the steady-state value of the error is inversely proportional to $\alpha$.

Proof: We have

$$
\begin{aligned}
Y_{k+1} &= W_{k+1} \circ X_{k+1} \\
        &= (W_k + \Delta W_k) \circ X_{k+1} \\
        &= \{\, W_k + \alpha\, E_k \circ X_{k+1}^{T} / ( X_{k+1}^{T} \circ X_{k+1} ) \,\} \circ X_{k+1} \\
        &= \{\, W_k + \alpha\, E_k \circ X_{k+1}^{T} / ( X_{k+1}^{T} \circ X_{k+1} ) \,\} \circ ( W_k^{-1} \circ Y_k ) \\
        &\le Y_k + \{\, \alpha\, E_k \circ X_{k+1}^{T} / ( X_{k+1}^{T} \circ X_{k+1} ) \,\} \circ ( W_k^{-1} \circ Y_k ) \\
        &= Y_k + \{\, \alpha\, E_k \circ X_{k+1}^{T} / ( X_{k+1}^{T} \circ X_{k+1} ) \,\} \circ X_{k+1} \\
        &\approx Y_k + \alpha\, E_k .
\end{aligned}
$$

Thus,

$$Y_{k+1} \le Y_k + \alpha E_k \qquad (16.25)$$

$$\phantom{Y_{k+1} \le{}} = Y_k + \alpha (D - Y_k)$$

$$\Rightarrow\; Y_{k+1} \le \alpha D + (I - \alpha I)\, Y_k \qquad (16.26)$$

$$\Rightarrow\; [\, E\, I - (I - \alpha I) \,]\, Y_k = (\alpha - \alpha')\, D, \quad \text{for } 0 < \alpha' \le \alpha \qquad (16.27)$$

where $E$ is the extended difference operator and $I$ is the identity matrix.
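
To see intuitively where the bound $0 < \alpha < 2$ comes from, relation (16.26) can be read, in the simplest scalar case and with the inequality treated as an equality, as $y_{k+1} = (1 - \alpha)\, y_k + \alpha d$: the error $d - y_k$ then shrinks by a factor $|1 - \alpha|$ at every step, which is a contraction exactly when $0 < \alpha < 2$. The short check below simulates this simplified recurrence; the values of $d$, $y_0$, the step count and the sampled $\alpha$'s are arbitrary illustrative choices, and the simplification ignores the residual error that the inequality in (16.25) leaves behind in the fuzzy model.

```python
# Scalar reading of relation (16.26), with the inequality treated as an equality:
# y_{k+1} = (1 - alpha) * y_k + alpha * d.
def residual_error(alpha, d=0.8, y0=0.1, steps=50):
    y = y0
    for _ in range(steps):
        y = (1.0 - alpha) * y + alpha * d   # one step of the simplified recurrence
    return abs(d - y)                       # remaining error after 'steps' iterations

for alpha in (0.5, 1.0, 1.9, 2.5):
    print(f"alpha = {alpha}: error after 50 steps = {residual_error(alpha):.3e}")
# For 0 < alpha < 2 the factor |1 - alpha| is below 1, so the error decays
# geometrically; at alpha >= 2 the factor reaches magnitude >= 1 and the
# iteration no longer converges.
```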
