Neural Nets



Simulating Neural Networks
Lawrence Ward
P465A


1. Neural network (or Parallel Distributed Processing, PDP) models are used for a wide variety of roles, including recognizing patterns, implementing logic, making decisions, etc.
1.1 They rescued artificial intelligence research from the dead end of symbolic AI and are now one part of the foundation of the new AI (the other is agent-based processing).
1.2 They learn, making them one of the most powerful tools for psychological modeling.
1.3 They have limitations, however, one of which is that representations of psychologically meaningful concepts are difficult to see in the model.
1.4 They are inherently dynamic, but the dynamics (e.g., the learning process) are not emphasized, only the outcome (responses after learning).
1.5 They are to be contrasted with spiking neural networks (or pulse-coupled neural networks), where dynamics is the focus.
1.6 They implement only some of the many functional characteristics of neurons.


2. The basic model element: the neuron.
[Figure: incoming activations A_1 … A_n arrive over synaptic weights w_1 … w_n, are combined by an adder function, passed through a threshold activation function, and sent on as outgoing activation.]
The adder function is
x = Σ_{i=1}^{n} A_i w_i
and the threshold activation function is
f(x) = 1 / (1 + e^{−(x−a)/b})
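As a minimal sketch (not from the slides), this model element can be written in a few lines of Python; the threshold parameters a and b are given illustrative values of 0 and 1, since the slides do not fix them:

```python
import math

def neuron(activations, weights, a=0.0, b=1.0):
    """One model neuron: adder function followed by a threshold activation function."""
    # Adder function: x = sum over i of A_i * w_i
    x = sum(A_i * w_i for A_i, w_i in zip(activations, weights))
    # Threshold (sigmoid) activation function: f(x) = 1 / (1 + e^(-(x - a)/b))
    return 1.0 / (1.0 + math.exp(-(x - a) / b))

print(neuron([1.0, 0.5, 0.0], [0.2, -0.4, 0.7]))  # outgoing activation
```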


3. Synapse layer model: an input layer connected to an output layer. All of the network's knowledge is stored in the weights (w_ij) of the connections (representing the synapses in a real neural network). We will store the connection weights in a matrix, call it W. For example, for a two-layer, 3-element network:
W = | w11 w12 w13 |
    | w21 w22 w23 |
    | w31 w32 w33 |
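For illustration, propagating activations through such a weight matrix is just the adder function applied to each output neuron; the sketch below uses made-up weight values and assumes the sigmoid activation of section 2 with a = 0, b = 1:

```python
import numpy as np

# Connection weights w_ij (input neuron i -> output neuron j); example values only
W = np.array([[0.1, -0.3, 0.5],
              [0.7,  0.2, -0.6],
              [-0.4, 0.8, 0.1]])

A_in = np.array([1.0, 0.0, 1.0])       # input-layer activations
x = A_in @ W                           # summed input to each output neuron: sum_i A_i * w_ij
A_out = 1.0 / (1.0 + np.exp(-x))       # sigmoid activation (a = 0, b = 1 assumed)
print(A_out)
```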


4. Learning law: a scheme to adjust the connection weights so that they reflect the desired input-output relation. For example, a network is trained to identify various input patterns by giving a particular output whenever a particular pattern is input.
4.1 If the weight adjustments are based on some abstract rule it is called unsupervised learning, but if the weight adjustments are based on an error signal calculated from a desired target output, then it is called supervised learning. Supervised learning is the most efficient, most reliable, and most used (but unsupervised learning is how humans learn a lot of the time).
4.2 Error correction law:
Δw_ij = α a_i (c_j − b_j)
where α is the learning rate, a_i is the activation of the ith input-layer neuron, b_j is the activation of the jth output-layer neuron, and c_j is the desired activation of that neuron.
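A sketch of one application of the error-correction law, with made-up activations and targets, and a linear output layer assumed for simplicity:

```python
import numpy as np

alpha = 0.5                               # learning rate
a = np.array([1.0, 0.0, 1.0])             # input-layer activations a_i
c = np.array([1.0, 0.0, 0.0])             # desired output activations c_j
W = np.zeros((3, 3))                      # connection weights w_ij (input i -> output j)

b = a @ W                                 # actual output activations b_j (linear output assumed)
W += alpha * np.outer(a, c - b)           # delta w_ij = alpha * a_i * (c_j - b_j)
print(W)
```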


5. Multilayer networks
5.1 Two-layer networks are limited: they cannot learn nonlinear mappings. For example, they cannot learn the XOR problem, where the XOR truth table is:
Input 1   Input 2   Output
0         0         0
0         1         1
1         0         1
1         1         0
[Figure: the four input points plotted in a third dimension D3, where D3 = 1 iff D1 = 1 and D2 = 1; the separating plane is at D3 = 1.]
The third dimension is provided by a "hidden layer" that does not directly interact with the outside world. Below is a network that emulates the XOR function. Here the activation function for hidden and output neurons is
f(x) = { 1 if x > 0
       { 0 if x ≤ 0
where
x = Σ_{i=1}^{n} A_i w_i


As an example, consider the inputs x_1 = 1 and x_2 = 0. Then for hidden element y_1 the argument of the activation function is (1×1) + (−1×0) = 1, and so its activation is f(1) = 1. For y_2 it is (−1×1) + (1×0) = −1, and so f(−1) = 0. Thus the inputs to the output neuron are 1 and 0, making its argument (1×1) + (1×0) = 1, and since f(1) = 1 its output is 1, as required. Figure out the other possibilities as an exercise.
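A small sketch that checks all four input patterns against the hand-wired XOR network described above (hidden weights ±1, output weights 1, hard-threshold activation):

```python
def f(x):
    # Hard-threshold activation: 1 if x > 0, else 0
    return 1 if x > 0 else 0

def xor_net(x1, x2):
    y1 = f(1 * x1 + (-1) * x2)   # hidden unit y1: weights (1, -1)
    y2 = f((-1) * x1 + 1 * x2)   # hidden unit y2: weights (-1, 1)
    return f(1 * y1 + 1 * y2)    # output unit: weights (1, 1)

for x1 in (0, 1):
    for x2 in (0, 1):
        print(x1, x2, "->", xor_net(x1, x2))
```

Running it prints the XOR truth table, matching the worked example above.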


6. Backpropagation saves neural network models
6.1 Minsky & Papert (Perceptrons, 1969) criticized multilayer networks because they could see no way to correct the weights of the connections between the input layer and the hidden layer. This, and the limitations of 2-layer perceptrons, contributed to a long hiatus in neural net research.
6.2 The solution to the problem is to backpropagate the error from the output layer to calculate an error for each hidden-layer neuron: a weighted sum of the output-layer errors multiplied by the activation of the hidden-layer neuron. This penalizes the hidden-layer neuron in proportion to how much it contributed to the output-layer error. These hidden-layer errors are then used to calculate changes to the weights of the input-hidden connections, just as is done directly for the hidden-output connections. Output-layer error can be backpropagated like this over any number of hidden layers.


7. The basic backpropagation model


7.1 Basic parameters: weights (2 layers), learning rate 0 < α < 1, momentum m (how much a previous weight change affects the current weight change), tolerance t (criterion for accepting an output as good).
7.2 Setup: assign random numbers between −1 and +1 to the two sets of weights. Assign values for α, m, and t.
7.3 Learning forward pass:
(1) Compute the activation of each hidden-layer neuron for the selected input pattern. For each hidden neuron the summed input is
y_j = Σ_{i=1}^{n} A_i W1_ij + hbias_j
where A_i is the input from the ith input neuron, and the activation is
f(y_j) = 1 / (1 + e^{−(y_j−a)/b}) = A_j
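A sketch of step (1) in Python, assuming a = 0 and b = 1 in the sigmoid and a 2-input, 2-hidden-unit network:

```python
import numpy as np

def sigmoid(x, a=0.0, b=1.0):
    # f(y) = 1 / (1 + e^(-(y - a)/b)); a = 0, b = 1 are assumed values
    return 1.0 / (1.0 + np.exp(-(x - a) / b))

A_in = np.array([1.0, 0.0])                     # one input pattern (A_i)
W1 = np.random.uniform(-1, 1, size=(2, 2))      # input->hidden weights W1_ij, random in [-1, 1]
hbias = np.zeros(2)                             # hidden-layer biases

y = A_in @ W1 + hbias                           # summed input y_j = sum_i A_i * W1_ij + hbias_j
A_hidden = sigmoid(y)                           # hidden activations A_j = f(y_j)
print(A_hidden)
```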


(2) Compute the activation of each output-layer neuron. The summed input is
z_k = Σ_{j=1}^{p} A_j W2_jk + obias_k
where A_j = f(y_j) is the input from the jth hidden-layer neuron, and the output activation is
f(z_k) = 1 / (1 + e^{−(z_k−a)/b}) = A_k
This vector of output-layer neuron activations is the output you check for goodness.
7.4 Backpropagation of error:
(1) Compute the output-layer error:
d_k = A_k (1 − A_k)(T_k − A_k)
where A_k is the activation of the kth neuron in the output layer and T_k is its target activation.
(2) Compute the hidden-layer error:
e_j = A_j (1 − A_j) Σ_{k=1}^{m} W2_jk d_k
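Continuing the sketch for step (2) and the two error terms, with example hidden activations and weights (again a = 0, b = 1 assumed):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))              # a = 0, b = 1 assumed

A_hidden = np.array([0.6, 0.4])                  # hidden activations A_j from the forward pass
W2 = np.array([[0.3], [-0.7]])                   # hidden->output weights W2_jk (example values)
obias = np.zeros(1)                              # output-layer bias
T = np.array([1.0])                              # target output T_k

z = A_hidden @ W2 + obias                        # summed input z_k
A_out = sigmoid(z)                               # output activations A_k = f(z_k)

d = A_out * (1 - A_out) * (T - A_out)            # output-layer error d_k
e = A_hidden * (1 - A_hidden) * (W2 @ d)         # hidden-layer error e_j
print(d, e)
```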


(3) Adjust the weights and bias for the hidden-output synapses:
W2_jk(t) = W2_jk(t−1) + ΔW2_jk(t)
where
ΔW2_jk(t) = α A_j d_k + m ΔW2_jk(t−1)
obias_k = obias_k + α d_k
(4) Adjust the weights and bias for the input-hidden synapses:
W1_ij(t) = W1_ij(t−1) + ΔW1_ij(t)
where
ΔW1_ij(t) = α A_i e_j + m ΔW1_ij(t−1)
hbias_j = hbias_j + α e_j
(5) Repeat until |T_k − A_k| < t for every output neuron and every input pattern.
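A sketch of steps (3)-(5); the learning rate, momentum, and the values carried over from the previous steps are illustrative:

```python
import numpy as np

alpha, m = 0.5, 0.9                               # learning rate and momentum (example values)

# quantities carried over from the forward pass and the error computation
A_in = np.array([1.0, 0.0])                       # input activations A_i
A_hidden = np.array([0.6, 0.4])                   # hidden activations A_j
d = np.array([0.08])                              # output-layer errors d_k
e = np.array([0.02, -0.01])                       # hidden-layer errors e_j

W2 = np.zeros((2, 1)); obias = np.zeros(1)        # hidden->output weights and bias
W1 = np.zeros((2, 2)); hbias = np.zeros(2)        # input->hidden weights and bias
delW2_old = np.zeros_like(W2); delW1_old = np.zeros_like(W1)

# (3) hidden->output update: delW2_jk = alpha * A_j * d_k + m * delW2_jk(old)
delW2 = alpha * np.outer(A_hidden, d) + m * delW2_old
W2 += delW2; obias += alpha * d; delW2_old = delW2

# (4) input->hidden update: delW1_ij = alpha * A_i * e_j + m * delW1_ij(old)
delW1 = alpha * np.outer(A_in, e) + m * delW1_old
W1 += delW1; hbias += alpha * e; delW1_old = delW1

# (5) in the full program these updates repeat over all patterns until |T_k - A_k| < t everywhere
print(W1, W2)
```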


7.5 Recall: repeat the steps for the forward pass. The computed output is the recalled pattern.
7.6 General approach: train on clear input patterns, test on the same input patterns, test on variants of the input patterns.


The assignment
• 2-hidden-unit XOR
• 4-hidden-unit XOR
• Discuss
  – the effect of the number of hidden units on learning rate
  – the differences and similarities of the two networks in terms of the patterns of synaptic weights between the input and hidden layers, and between the hidden and output layers


The program
Define variables (arrays are easiest): alpha, tol, M.
Input patterns x, stored as patterns(pindex(i), j):
0 0
0 1
1 0
1 1
Pattern index (pindex): 0 1 2 3
Target outputs: 0 1 1 0
W1 (input->hidden) and W2 (hidden->output): weight matrices initialized with small random values (the slide shows examples such as −.50, .04, .2, 1.68, −.37, .57, −.98, −.33, .92, .38).
hbias: 0 0 0 0


Working arrays, all initialized to zero:
Hidden activation (y, by neuron)
Output activation (z, by pattern)
d (by pattern)
e (by hidden neuron)
delW2 and delW2old
delW1 and delW1old


Program flow:
Initialize weights
Scramble pindex
For each pattern:
  Compute hidden activation (hard threshold)
  Compute output activation
  Compute output error (d)
  Compute hidden error (e)
  Compute new hidden->output weights
  Compute new input->hidden weights
After all 4 patterns: if not converged, repeat (rescramble pindex); if converged, done.
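Putting the pieces together, here is a sketch of the whole program for the 2-hidden-unit XOR network, following the flow chart above; α, momentum, tolerance, the sigmoid parameters (a = 0, b = 1), and the epoch cap are assumed values, and a sigmoid rather than a hard threshold is used for the hidden activation so that the error formulas of section 7.4 apply:

```python
import numpy as np

rng = np.random.default_rng(0)
alpha, m, tol = 0.5, 0.9, 0.1                        # learning rate, momentum, tolerance (assumed)

patterns = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
targets = np.array([0, 1, 1, 0], dtype=float)        # XOR target outputs

n_hidden = 2
W1 = rng.uniform(-1, 1, (2, n_hidden)); hbias = np.zeros(n_hidden)   # initialize weights
W2 = rng.uniform(-1, 1, (n_hidden, 1)); obias = np.zeros(1)
delW1_old = np.zeros_like(W1); delW2_old = np.zeros_like(W2)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))                  # a = 0, b = 1 assumed

for epoch in range(20000):                           # cap added so the sketch always terminates
    for i in rng.permutation(len(patterns)):         # scramble pindex
        A_in, T = patterns[i], targets[i:i + 1]
        A_hid = sigmoid(A_in @ W1 + hbias)           # compute hidden activation
        A_out = sigmoid(A_hid @ W2 + obias)          # compute output activation
        d = A_out * (1 - A_out) * (T - A_out)        # compute output error (d)
        e = A_hid * (1 - A_hid) * (W2 @ d)           # compute hidden error (e)
        delW2 = alpha * np.outer(A_hid, d) + m * delW2_old
        W2 += delW2; obias += alpha * d; delW2_old = delW2     # new hidden->output weights
        delW1 = alpha * np.outer(A_in, e) + m * delW1_old
        W1 += delW1; hbias += alpha * e; delW1_old = delW1     # new input->hidden weights
    outputs = sigmoid(sigmoid(patterns @ W1 + hbias) @ W2 + obias).ravel()
    if np.all(np.abs(targets - outputs) < tol):      # converge?
        break

print(outputs.round(2))                              # recall: forward pass on the trained network
```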
