
Neural Networks - Algorithms, Applications, and ... - Csbdu.in

326 Adaptive Resonance Theory

13. Repeat steps 6 through 10.

14. Modify bottom-up weights on the winning F2 unit:

        z_Ji = u_i / (1 - d)

15. Modify top-down weights coming from the winning F2 unit:

        z_iJ = u_i / (1 - d)

16. Remove the input vector. Restore all inactive F2 units. Return to step 1 with a new input pattern.

8.3.8 ART2 Processing Example

We shall be using the same parameters and input vector for this example that we used in Section 8.3.2. For that reason, we shall begin with the propagation of the p vector up to F1. Before showing the results of that calculation, we shall summarize the network parameters and show the initialized weights.

We established the following parameters earlier: a = 10, b = 10, c = 0.1, θ = 0.2. To that list we add the additional parameter d = 0.9. We shall use N = 6 units on the F2 layer.

The top-down weights are all initialized to zero, so z_ij(0) = 0, as discussed in Section 8.3.5. The bottom-up weights are initialized according to Eq. (8.49):

        z_Ji = 0.5 / ((1 - d)√M) = 2.236

since M = 5.

Using I = (0.2, 0.7, 0.1, 0.5, 0.4)^t as the input vector, before propagation to F2 we have p = (0.206, 0.722, 0, 0.516, 0.413)^t.
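The initialization and the bottom-up propagation above can be checked numerically. The following Python sketch is an illustration, not the book's program; it obtains p via an equilibrium shortcut valid for this example (normalize I, zero the component that falls below θ, renormalize) rather than by iterating the full F1 sublayer equations.

```python
import math

# Parameters taken from the example in the text
theta, d = 0.2, 0.9
M = 5  # number of F1 units

# Bottom-up weight initialization, Eq. (8.49): z_Ji = 0.5 / ((1 - d) * sqrt(M))
z0 = 0.5 / ((1.0 - d) * math.sqrt(M))
print(round(z0, 3))  # 2.236

# Equilibrium shortcut for p (NOT the full F1 iteration):
# normalize I, apply the threshold f(.), renormalize.
I = [0.2, 0.7, 0.1, 0.5, 0.4]
n = math.sqrt(sum(v * v for v in I))
x = [v / n for v in I]
v = [vi if vi >= theta else 0.0 for vi in x]     # third component is suppressed
nv = math.sqrt(sum(vi * vi for vi in v))
p = [vi / nv for vi in v]                        # with zero top-down weights, p = u
print([round(pi, 3) for pi in p])  # [0.206, 0.722, 0.0, 0.516, 0.413]

# Propagate p to F2: T_j = sum_i z_Ji * p_i, identical for every unit initially
T = z0 * sum(p)
print(round(T, 3))  # 4.151
```

Note that suppressing the sub-threshold component and renormalizing is what makes p differ from a simple normalization of I.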
Propagating this vector forward to F2 yields a vector of activities across the F2 units of

        T = (4.151, 4.151, 4.151, 4.151, 4.151, 4.151)^t

Because all of the activities are the same, the first unit becomes the winner and the activity vector becomes

        T = (4.151, 0, 0, 0, 0, 0)^t

and the output of the F2 layer is the vector (0.9, 0, 0, 0, 0, 0)^t.

We now propagate this output vector back to F1 and cycle through the layers again. Since the top-down weights are all initialized to zero, there is no change on the sublayers of F1. We showed earlier that this condition will not result in a reset from the orienting subsystem; in other words, we have reached a resonant state. The weight vectors will now update according to the appropriate equations given previously. We find that the bottom-up weight matrix is

        | 2.063  7.220  0.000  5.157  4.126 |
        | 2.236  2.236  2.236  2.236  2.236 |
        | 2.236  2.236  2.236  2.236  2.236 |
        | 2.236  2.236  2.236  2.236  2.236 |
        | 2.236  2.236  2.236  2.236  2.236 |
        | 2.236  2.236  2.236  2.236  2.236 |
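The first row of that matrix follows directly from the fast-learning update in step 14, z_Ji = u_i / (1 - d). A short Python sketch reproducing it (again an illustration, using the same equilibrium shortcut for u as in this example; only the winning unit's row changes, the other rows keep their initial value 2.236):

```python
import math

theta, d = 0.2, 0.9

# u for this input via the equilibrium shortcut used in the example:
# normalize I, zero the below-threshold component, renormalize.
I = [0.2, 0.7, 0.1, 0.5, 0.4]
n = math.sqrt(sum(v * v for v in I))
x = [v / n for v in I]
v = [vi if vi >= theta else 0.0 for vi in x]
nv = math.sqrt(sum(vi * vi for vi in v))
u = [vi / nv for vi in v]

# Fast-learning update for the winning F2 unit J (step 14): z_Ji = u_i / (1 - d)
row_J = [ui / (1.0 - d) for ui in u]
print([round(w, 3) for w in row_J])  # [2.063, 7.22, 0.0, 5.157, 4.126]
```

With d = 0.9, the update amounts to scaling u by 10, which is why the suppressed third component stays at zero in the learned weights.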
