
Neural Networks - Algorithms, Applications, and ... - Csbdu.in

9.2 Architectures of Spatiotemporal Networks (STNs)

The first input vector, Q_{11}, matches z_1 exactly, so x_1 begins to rise quickly. We shall assume that all other units remain quiescent. If Q_{11} remains long enough, x_1 will saturate. In any case, as soon as Q_{11} is removed and Q_{12} is applied, x_1 will begin to decay. At the same time, x_2 will begin to rise because Q_{12} matches the weight vector z_2. Moreover, since x_1 will not have decayed away completely, it will contribute to the positive input value to unit 2. When Q_{13} is applied, both x_1 and x_2 will contribute to the input to unit 3.

This cascade continues while all input vectors are applied to the network. Since the input vectors have been applied in the proper sequence, unit input values tend to be reinforced by the outputs of preceding units. By the time the final input vector is applied, the network output, represented by y = x_m, may already have reached saturation, even though contributions from units very early in the network may have decayed away.

To illustrate the effects of a pattern mismatch, let's examine the situation where the patterns of Q_1 are applied in reverse order. Since Q_{1m} matches z_m, x_m will begin a quick rise toward saturation, although its output is not being reinforced by any other units in the network. When Q_{1,m-1} is applied, x_{m-1} turns on and sends its output value to contribute to x_m.
The total input to the mth unit is d*x_{m-1}, which, because d < 1, is unlikely to overcome the threshold Γ. Therefore, x_m will continue to decay away. By the time the last input vector is applied, x_m may have decayed away entirely, indicating that the network has not recognized the STP. Note that we have assumed here that Q_{1,m-1} is orthogonal to the weight vector z_m, and thus there is no contribution to the input to the mth unit from the dot product of these two vectors. This assumption is acceptable for this discussion to illustrate the concepts behind the network operation. In practice, these vectors will not necessarily be orthogonal.

Figure 9.7 shows a graphic example of unit outputs for a pattern recognition by a simple, four-unit STN. Results from applying the identical input vectors, but in a random order, are shown in Figure 9.8. Notice how the activity pattern appears to flow smoothly from left to right in the first figure.

Because of the relatively rapid rise time of the unit activity, followed by a longer decay time, the STN has the property of being somewhat insensitive to the speed at which the STP is presented. We know that different speakers pronounce the same word at slightly different rates; thus, tolerance of time variation is a desirable characteristic of STNs. Figure 9.9 shows the results of presenting an STP at two different rates to the same network.

Recall that the network we have been describing is capable of recognizing only a single STP.
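The cascade and mismatch behavior described above can be sketched in a small simulation. The following is a minimal illustration, not the book's exact unit equations: each unit compares the current input vector with its weight vector z_i and also receives the attenuated output d*x_{i-1} of the preceding unit; when the total exceeds the threshold Γ the unit rises quickly toward saturation, and otherwise it decays slowly. All parameter values (d, the threshold gamma, and the rise and decay rates) are arbitrary choices for the demonstration.

```python
import numpy as np

def run_stn(sequence, weights, d=0.5, gamma=0.8,
            rise=0.6, decay=0.9, steps_per_input=5):
    """Simulate a chain of m STN units over a sequence of input vectors.

    For unit i: net_i = q . z_i + d * x_{i-1}.  If net_i exceeds the
    threshold gamma, the unit's activity rises quickly toward saturation
    (1.0); otherwise it decays slowly toward zero.  The returned vector's
    last element is the network output y = x_m.
    """
    m = len(weights)
    x = np.zeros(m)
    for q in sequence:
        for _ in range(steps_per_input):
            prev = np.concatenate(([0.0], x[:-1]))   # x_{i-1}; zero for unit 1
            net = weights @ q + d * prev
            x = np.where(net > gamma, x + rise * (1.0 - x), decay * x)
    return x

# Four orthonormal weight vectors: unit i responds to input vector i.
weights = np.eye(4)
stp = [weights[i] for i in range(4)]          # the stored STP, in order

y_forward = run_stn(stp, weights)[-1]         # correct temporal order
y_reversed = run_stn(stp[::-1], weights)[-1]  # same vectors, reversed

print(f"forward: {y_forward:.2f}, reversed: {y_reversed:.2f}")
```

In the correct order, each unit's attenuated output reinforces its successor and the final unit saturates. In reverse order, x_m fires only while its own pattern is present and then decays for the rest of the sequence, so the final output is small, mirroring the mismatch discussion above.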
To distinguish words in some specified vocabulary, we would have to replicate the network for each word that we want the system to recognize. Figure 9.10 illustrates a system that can distinguish N words.

There are many aspects of the speech-recognition problem that we have overlooked in our discussion. For example, how do we account for both small words and large words? Moreover, some words are subsets of other words; will the system distinguish between a subset word spoken slowly and a superset
