
CHAPTER 4  NEURAL NETWORK BASED SYSTEM IDENTIFICATION

By applying Equation (4.52) to (4.49) with $A = \lambda(t)R(t-1)$, $B = (1-\lambda(t))\psi(t)$, $C = 1$ and $D = \psi^{T}(t)$, and introducing the term $P(t) = (1-\lambda(t))R^{-1}(t)$, the generalised rGN algorithm in Equations (4.48)-(4.51) is rewritten as follows to avoid the full inversion of the Hessian matrix:

\begin{align}
P(t) &= \left[\, P(t-1) - L(t)\,S(t)\,L^{T}(t) \,\right] / \lambda(t) \tag{4.53} \\
S(t) &= \psi^{T}(t)\,P(t-1)\,\psi(t) + \lambda(t)\,\hat{\Lambda}(t) \tag{4.54} \\
L(t) &= P(t-1)\,\psi(t)\,S^{-1}(t) \tag{4.55} \\
\hat{\theta}(t) &= \hat{\theta}(t-1) + L(t)\,e(t) \tag{4.56}
\end{align}

Note that computing $R^{-1}(t)$ no longer requires the inversion of a full $d \times d$ matrix; only the $n \times n$ matrix $S(t)$ has to be inverted, where $n$ denotes the number of outputs predicted by the model.
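As an illustration of how this reduction works in practice, the recursion in Equations (4.53)-(4.56) can be written as a single update function. The following NumPy sketch is not part of the original text: the function name, the array shapes (a $d$-dimensional parameter vector, $\psi(t)$ stored as a $d \times n$ matrix) and the explicit call to numpy.linalg.inv are assumptions made purely for illustration.

    import numpy as np

    def rgn_step(theta, P, psi, e, lam, Lambda_hat):
        """One recursive Gauss-Newton update, Equations (4.53)-(4.56).

        theta      : (d,)   previous parameter estimate theta_hat(t-1)
        P          : (d, d) matrix P(t-1)
        psi        : (d, n) gradient of the n predicted outputs w.r.t. theta
        e          : (n,)   prediction error e(t)
        lam        : scalar forgetting factor lambda(t)
        Lambda_hat : (n, n) estimated noise covariance Lambda_hat(t)
        """
        # (4.54): S(t) is only n x n, so inverting it is cheap compared
        # with inverting the full d x d matrix R(t).
        S = psi.T @ P @ psi + lam * Lambda_hat
        # (4.55): gain matrix L(t) = P(t-1) psi(t) S^{-1}(t)
        L = P @ psi @ np.linalg.inv(S)
        # (4.56): parameter update driven by the prediction error e(t)
        theta_new = theta + L @ e
        # (4.53): update of P(t) with the forgetting factor lambda(t)
        P_new = (P - L @ S @ L.T) / lam
        return theta_new, P_new

Only the $n \times n$ matrix $S(t)$ is inverted in this sketch, which is the point of the reformulation when $n \ll d$.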

Equations (4.53)-(4.56) indicate that the initial value $P(0)$ (a $d \times d$ matrix) and the initial parameter vector $\hat{\theta}(0)$ need to be supplied by the user at the beginning of the iteration. The initial parameter vector $\hat{\theta}(0)$ is usually selected as random values or as pre-determined weights resulting from off-line training. A common choice for $P(0)$ is $P(0) = \rho I$, with $\rho$ being a large positive number (e.g. $10^{2}$ to $10^{4}$) indicating little confidence in $\hat{\theta}(0)$. This causes the parameter estimates to change rapidly during a short transient period $\delta t$ [Ljung and Soderstrom, 1983].
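As a minimal sketch of this initialisation in the same NumPy setting as above (the values of d and rho below are illustrative choices, not values prescribed in the text):

    import numpy as np

    d = 25                            # number of adjustable parameters (illustrative)
    rho = 1e3                         # large positive number, inside the suggested 10^2 to 10^4 range
    theta = 0.1 * np.random.randn(d)  # random initial weights, or weights from off-line training
    P = rho * np.eye(d)               # P(0) = rho * I, reflecting little confidence in theta(0)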

As the parameter estimates are quite poor at the beginning of the iteration, a lower forgetting factor $\lambda(t)$ should be selected at the initial stage for rapid adaptation, approaching unity as time increases. The following strategy, introduced by Ljung and Soderstrom [1983], is used to update the forgetting factor:
\[
\lambda(t) = \lambda_{o}\,\lambda(t-1) + (1 - \lambda_{o}) \tag{4.57}
\]
where $\lambda_{o}$ and $\lambda(0)$ are design variables. The typical value of $\lambda_{o}$ is 0.99, and $\lambda(0)$ is chosen in the range $0.95 < \lambda(0) < 1$.
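A minimal sketch of the recursion in Equation (4.57), using the typical design value $\lambda_{o} = 0.99$ and taking $\lambda(0) = 0.97$ as one illustrative choice from the stated range:

    lam_o = 0.99   # design variable lambda_o (typical value)
    lam = 0.97     # lambda(0), one choice from the range 0.95 < lambda(0) < 1

    for t in range(1, 201):
        # Equation (4.57): lambda(t) = lambda_o * lambda(t-1) + (1 - lambda_o)
        lam = lam_o * lam + (1.0 - lam_o)

With these values the forgetting factor starts low for rapid adaptation and approaches unity within a few hundred samples, after which old data are discounted only slowly.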

According to Ljung and Soderstrom [1983], the recursion of Equation (4.53) is
