Neural Networks - Algorithms, Applications, and ... - Csbdu.in

Introduction to ANS Technology

converts the net input value, net_i, to the node's output value, x_i. In this text, we shall consistently use the term output function for f_i(·) of Eqs. (1.3) and (1.4). Be aware, however, that the literature is not always consistent in this respect.

When we are describing the mathematical basis for network models, it will often be useful to think of the network as a dynamical system; that is, as a system that evolves over time. To describe such a network, we shall write differential equations that describe the time rate of change of the outputs of the various PEs. For example, ẋ_i = g_i(x_i, net_i) represents a general differential equation for the output of the ith PE, where the dot above the x refers to differentiation with respect to time. Since net_i depends on the outputs of many other units, we actually have a system of coupled differential equations.

As an example, let's look at the equation

    ẋ_i = -x_i + f_i(net_i)

for the output of the ith processing element. We apply some input values to the PE so that net_i > 0. If the inputs remain for a sufficiently long time, the output value will reach an equilibrium value, when ẋ_i = 0, given by

    x_i = f_i(net_i)

which is identical to Eq. (1.4). We can often assume that input values remain until equilibrium has been achieved.

Once the unit has a nonzero output value, removal of the inputs will cause the output to return to zero. If net_i = 0, then

    ẋ_i = -x_i

which means that x_i → 0.

It is also useful to view the collection of weight values as a dynamical system. Recall the discussion in the previous section, where we asserted that learning is a result of the modification of the strength of synaptic junctions between neurons.
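As an illustrative sketch (not part of the text), the single-PE equation ẋ_i = -x_i + f_i(net_i) can be integrated numerically with simple Euler steps. Here the output function is assumed to be tanh, a common choice for which f(0) = 0, so the same code also exhibits the decay to zero described above when the input is removed; the step size and step count are arbitrary.

```python
import math

def f(net):
    # Assumed output function: tanh. Note f(0) = 0, so removing the
    # input (net = 0) reduces the dynamics to x' = -x, pure decay.
    return math.tanh(net)

def simulate(net, x0=0.0, dt=0.01, steps=2000):
    # Euler integration of x' = -x + f(net) for a single PE with a
    # constant net input held for the whole simulation interval.
    x = x0
    for _ in range(steps):
        x += dt * (-x + f(net))
    return x

# Held positive input: x relaxes to the equilibrium x = f(net),
# i.e., the static output function of Eq. (1.4).
x_eq = simulate(net=2.0)          # ≈ tanh(2) ≈ 0.964

# Input removed (net = 0): starting from the old equilibrium,
# the output decays back toward zero.
x_decay = simulate(net=0.0, x0=x_eq)
```

With the input held, the trajectory settles at f(net) regardless of the starting value; with net = 0 it shrinks geometrically toward zero, matching the two cases discussed in the text.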
In an ANS, learning usually is accomplished by modification of the weight values. We can write a system of differential equations for the weight values, ẇ_ij = G_i(w_ij, x_i, x_j, ...), where G_i represents the learning law. The learning process consists of finding weights that encode the knowledge that we want the system to learn. For most realistic systems, it is not easy to determine a closed-form solution for this system of equations. Techniques exist, however, that result in an acceptable approximation to a solution. Proving the existence of stable solutions to such systems of equations is an active area of research in neural networks today, and probably will continue to be so for some time.

1.2.2 Vector Formulation

In many of the network models that we shall discuss, it is useful to describe certain quantities in terms of vectors. Think of a neural network composed of several layers of identical processing elements. If a particular layer contains n
