the system level. Our results show that arbitration is the best choice for neuromorphic systems whose activity is sparse in space and in time. A Poisson model has been used for collision analysis in feed-forward communication systems [12]. For this pulse coupled recurrent system, we propose a periodic model, which is shown to give better modelling than the Poisson model, especially for a network whose size is small or which contains neurons with significantly different activities. These results should be applicable to the implementation of the recurrent cortical competition and other complex networks with pulse coupled techniques.

1.3 Thesis Organization

The remainder of this thesis is organized as follows. In chapter 2 we introduce the continuous WTA models on which the pulse coupled WTA neural networks are based. Pulse based communication schemes are presented in chapter 3, where related works are also listed and compared. In chapter 4, the system level models of the communication schemes used in this work are described. Chapter 5 elaborates the pulse coupled model in detail. Analysis of the behavior of the model with different design parameters is also covered. Performances of pulse coupled winner-take-all networks under different communication schemes are compared, with simulation results and discussions. A periodic model is proposed for collision analysis. Conclusions are summarized in chapter 6.


2. Continuous Winner-Take-All Networks

2.1 What is Winner-Take-All?

In neural networks with mutual inhibition, only significant activities can survive the competition among neurons, and such competitive behavior has been considered to provide a functional basis for neural information processing by the brain. To describe the different competitive behavior of such neural networks, three types of behavior are classified: winner-take-all (WTA), winners-share-all (WSA), and variant winner-take-all (VWTA) [7]. These solutions are classified according to the number of active neurons, which we call winners, and the dependence of the actual winners on the initial conditions of the neural activities.

WTA is characterized by the fact that the neuron receiving the largest external input is the only winner. There is only a single stable fixed point (attractor) in the network, describing the selection of a maximal input.

WSA describes the solution in which at least two neurons remain active as winners, in the order of external input strength. The number of winners systematically changes with the strength ratio of the different forms of inhibition.

Like the WTA network, a VWTA network has only a single neuron active at steady state. However, the winner is not necessarily the neuron receiving the largest input. In fact, any neuron can be the winner if the input to the neuron satisfies some requirement. This implies that the actual winner depends on the initial conditions of the neural


activities. The basin of attraction of an attractor should be larger for a neuron receiving a larger external input.

For both WTA and WSA, an important feature of the competitive behavior is that the solutions do not depend on the initial conditions of the neuron activities. Such initial-condition-independent behavior seems to be particularly useful for applications, especially WTA, since it can be used to distinguish a particular signal from others by estimating scalar values conveyed by the signals. For example, a decision-making process is thought to be the selection of one from many possible choices based on the evaluation of each choice with a certain criterion.

2.2 The WTA Models

There are various WTA models proposed in the literature. Here we only study one class among them, described below.

We consider a network with N neurons, as depicted in Fig. 2.1, with the output of each neuron connected to all the other neurons and itself, in the same way as Maxnet [1]. A connection is excitatory if it is self-feedback, and inhibitory if it is between different neurons. For the ith neuron, i = 1, 2, …, N, the input of the neuron is denoted by I_i. The state potential and the activity output of the neuron are denoted by v_i and f(v_i), respectively. For simplicity, we assume that f(v) = max(v, 0), a piecewise linear function.

Fig. 2.1 The leaky integrator and the input-output characteristic of f


at steady state, f(v_k) = I_k/(1 − a) and f(v_j) = 0.

Fig. 2.2 shows the three operation regimes on the a-b plane for δ_1 = 0.1. The upper line corresponds to b = (1 − a)/(1 − δ_1), and the lower line corresponds to b = (1 − a)(1 − δ_1). The region above the upper line is the VWTA region, the region below the lower line is the WSA region, and the region in between is the WTA region. From another point of view, given a, b, define the system resolution to be

δ = max( (a + b − 1)/b , (1 − a − b)/(1 − a) ).

The minimum relative difference that the WTA network can discern is lower bounded by δ. In other words, the network exhibits WTA behavior when δ_1 > δ.

Fig. 2.2 Operation regions on the a-b plane
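As a small illustration of this classification, the following sketch (illustrative C++, one of the two simulation languages named later in section 5.3; the function and variable names are mine, not the thesis's) evaluates the resolution δ from the recurrent strengths a, b and reports the regime into which a given input resolution δ_1 falls, using the boundaries stated above.

#include <algorithm>
#include <cstdio>

// System resolution from the recurrent strengths (assumes 0 < a < 1, b > 0).
double resolution(double a, double b) {
    return std::max((a + b - 1.0) / b, (1.0 - a - b) / (1.0 - a));
}

// Classify the operation regime for a given input resolution delta1.
const char* regime(double a, double b, double delta1) {
    if (b > (1.0 - a) / (1.0 - delta1)) return "VWTA";
    if (b < (1.0 - a) * (1.0 - delta1)) return "WSA";
    return "WTA";
}

int main() {
    double a = 0.5, b = 0.5, delta1 = 0.1;
    std::printf("delta = %.4f, regime = %s\n", resolution(a, b), regime(a, b, delta1));
    return 0;
}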


It is interesting to note that which regime the network will work in is solely determined by the recurrent strengths a, b and δ_1, where δ_1 = (I_1 − I_2)/I_1. Obviously, for any potential winner to become the winner in the end, it has to beat all the others in the network. In this case, since the inhibition is global, it becomes a competition between two neurons, the neuron with the largest input and the neuron with the second largest input. Consequently, the network size N does not determine the competitive behavior of the WTA network.

It has to be mentioned that as δ_1 decreases, the WTA region shrinks, and if δ_1 → 0, the WTA region becomes the line a + b = 1. So when we choose a + b = 1, then 0 < a < 1 and (1 − a)(1 − δ_1) < b < (1 − a)/(1 − δ_1) will always hold; therefore the WTA model should be able to pick out the right winner at any resolution level.

2.3 Properties

The WTA model has a close relationship with the recurrent cortical models, and can be viewed as a simplified version of a recurrent cortical model. One model for an orientation hypercolumn in the primary visual cortex is given by [2]:

τ v̇_i = −v_i + I_i^LGN + Σ_{j=1}^{N} φ_exc(i, j) f(x_j) − Σ_{j=1}^{N} φ_inh(i, j) f(x_j),

where τ is the membrane time constant, I_i^LGN is the input from the lateral geniculate nucleus (LGN), φ_exc(i, j) and φ_inh(i, j) are the excitatory and inhibitory connection strengths, and f(x) = β·max(0, x) is the presynaptic firing rate.


Let φ_exc(i, j) = a for i = j and 0 for i ≠ j, and φ_inh(i, j) = b for i ≠ j and 0 for i = j, with β = 1 and a, b > 0, such that there are only self-excitatory connections and global inhibitory connections. We then get the WTA model studied in this work:

τ v̇_i = −v_i + I_i + a·max(v_i, 0) − b·Σ_{j≠i} max(v_j, 0).

Consequently, the study of the WTA network is the first step towards the study of the recurrent cortical networks.

Let us present some properties of the WTA model which are useful for the later discussion. A simple WTA model without external input or self-decay has been analyzed in [1], with its properties proved. Although our model is more complex, with external input and self-decay, we can still verify that it has similar properties to the simple WTA model. The analysis here follows their notation.

To simplify the discussion, it is assumed that the external inputs can be arranged in a strictly descending order I_π1 > I_π2 > … > I_πN, for a suitable index set {π_1, …, π_N}, where δ_1 = (I_π1 − I_π2)/I_π1. The recurrent strengths a, b are chosen such that the system resolution δ ≤ δ_1, where δ = max( (a + b − 1)/b , (1 − a − b)/(1 − a) ), to ensure the WTA behavior of the network.

• The trajectory of the neural network is bounded.

Lemma 1: ∀ε > 0, ∀i = 1, …, N, ∃T_i < ∞, ∀t > T_i, f_i(t) < I_i/(1 − a) + ε.

Proof: (see appendix).


Although there is no saturation in the activation function f, for the WTA network with self-excitatory connections the important property of boundedness of the neural activities can be guaranteed by the global inhibition.

• Order preserving.

No matter what the initial conditions of the neural activities are, at steady state the state potentials of the neurons are ordered in the same order as the external input strengths. Consider any two neurons in the network: if their initial state potentials are in the same order as their input strengths, this order is invariant during the whole dynamic evolution (see Lemma 2 in the appendix). If their initial state potentials are in the reverse order of their input strengths, their state potentials will eventually become ordered in the same order as their external input strengths, under the condition that the relative difference of the two inputs is greater than the resolution of the network δ (see Lemma 3 in the appendix). If the same condition holds, then independent of the initial conditions, at steady state the neuron receiving the larger external input remains active, while the state potential of the neuron receiving the smaller external input is below zero, corresponding to no activity (see Lemma 4 in the appendix). This implies the initial-condition-independent property.

Theorem 1: If I_π1 > I_π2 > … > I_πN and v_π1(0) ≥ v_π2(0) ≥ … ≥ v_πN(0), then v_π1(t) > v_π2(t) > … > v_πN(t) for all t > 0.

Proof: The proof follows directly from Lemma 2 in the appendix. If the initial state potentials are in the same order as the input strengths, this order is invariant during the whole dynamic evolution.

Theorem 2: If I_π1 > I_π2 > … > I_πN and δ_1 ≥ δ, then there exists T < ∞ such that ∀t > T, v_π1(t) > v_π2(t) > … > v_πN(t).


Proof: From Lemma 2 and Lemma 3, ∃T < ∞ such that for t > T, v_π1(t) > v_πi(t) for all i ≠ 1. From Lemma 4, we have ∃T_i < ∞, ∀t > T_i, v_i(t) < 0 for i ≠ π_1, which implies that neuron π_1 is the only active neuron in the network. It can be easily shown that no matter what the initial conditions are, at steady state the state potentials are ordered in the same order as the external input strengths.

Based on the above analysis, if δ_1 > δ, then there exists T_i < ∞ such that ∀t > T_i, v_i(t) < 0 for all i ≠ π_1. So f_i(t) = 0 for all i ≠ π_1. Consequently, except for the winner neuron π_1, the activities of all the other neurons are completely suppressed to zero.

Theorem 3: If I_π1 > I_π2 > … > I_πN, then there exists T_i < ∞ such that ∀t > T_i, f_i(t) = 0 for all i ≠ π_1.

Proof: The proof follows directly from Lemma 3 and Lemma 4.

Theorem 4: If I_π1 > I_π2 > … > I_πN and v_π1(0) = v_π2(0) = … = v_πN(0), then there exists ∞ > T_π2 > T_π3 > … > T_πN > 0 such that f_πi(t) = 0, ∀t > T_πi.

Proof: According to Theorem 1, we obtain that v_π1(t) > v_π2(t) > … > v_πN(t) for all t > 0. From Theorem 3, we know that all the losers will decrease to zero in the end. Obviously, it is not possible for a neuron with a larger input to settle down faster than a neuron with a smaller input. Thus all the losers settle in the reverse order of the external inputs.


• Response time of the system.

As the WTA is an important component in many unsupervised learning models, it is important to investigate its response time. The response time of the system, t_s, is defined as the total time it takes for all the losers to decrease to zero. It is one of the criteria for selecting design parameters.

For a network described in section 2.2, assume v_1(0) = v_2(0) = … = v_N(0) = 0, define

X_n = Σ_{i=1}^{n} v_i,    Y_n = Σ_{i=1}^{n−1} v_i − (n − 1) v_n,    n = 1, 2, …, N,

and let N⁺ denote the number of active neurons at time t. According to Theorem 4, the losers settle in the reverse order of the inputs; therefore, if N⁺ = n, then v_1, v_2, …, v_n are active, so

τ Ẋ_{N⁺}(t) = [a − (N⁺ − 1)b − 1] X_{N⁺}(t) + Σ_{i=1}^{N⁺} I_i

τ Ẏ_{N⁺}(t) = [a + b − 1] Y_{N⁺}(t) + Σ_{i=1}^{N⁺−1} I_i − (N⁺ − 1) I_{N⁺}

N⁺ decreases from N to 1 as time goes by, and the decrement of N⁺ from n to n − 1 happens at t_n, where t_n is the solution to X_n(t) = Y_n(t). Therefore the response time t_s is the solution to X_2(t) = Y_2(t).


For example, let I_2 = … = I_N and (I_1 − I_2)/I_1 = δ_1, with v_1(0) = … = v_N(0) = 0. Then we have

τ Ẋ_N(t) = [a − (N − 1)b − 1] X_N(t) + I_1 [1 + (N − 1)(1 − δ_1)]

τ Ẏ_N(t) = [a + b − 1] Y_N(t) + I_1 δ_1.

Further suppose a + b = 1; it becomes

τ Ẋ_N(t) = −Nb X_N(t) + I_1 [1 + (N − 1)(1 − δ_1)]

τ Ẏ_N(t) = I_1 δ_1.

Define y = t/τ and let y_s be the solution to

δ_1 y = ( [1 + (N − 1)(1 − δ_1)] / (Nb) ) (1 − e^{−Nby}).

Hence the response time is t_s = y_s τ.

For example, if we choose a = b = 0.5, then we get the WTA model:

τ v̇_i(t) = −v_i(t) + I_i + 0.5·max(v_i(t), 0) − 0.5·Σ_{j≠i} max(v_j(t), 0).

Further fix the largest input I_1 = 1.0 and τ = 1 ms. For δ_1 = 0.1 and N = 2, theoretically, v_1(t) = I_1/(1 − a) = 1.0/(1 − 0.5) = 2.0 at steady state and t_s ≈ 19 ms. These can be verified by computer simulation (Matlab Simulink), as shown in Fig. 2.3, which shows the consistency between the simulation results and the predictions.
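The transcendental equation for y_s above has no closed form, but it is easy to solve numerically. The following sketch (illustrative C++, not the thesis's code; it assumes that a fixed-point iteration started at y = 1 converges to the positive root, which holds for the parameter values used here) reproduces t_s ≈ 19 ms for N = 2 and t_s ≈ 18.2 ms for N = 10.

#include <cmath>
#include <cstdio>

// Solve delta1*y = C*(1 - exp(-N*b*y)) with C = (1 + (N-1)*(1-delta1))/(N*b)
// by fixed-point iteration y <- C*(1 - exp(-N*b*y))/delta1.
double response_time(int N, double b, double delta1, double tau_ms) {
    double C = (1.0 + (N - 1) * (1.0 - delta1)) / (N * b);
    double y = 1.0;                       // initial guess, in units of tau
    for (int k = 0; k < 200; ++k)
        y = C * (1.0 - std::exp(-N * b * y)) / delta1;
    return y * tau_ms;                    // t_s = y_s * tau
}

int main() {
    std::printf("t_s (N=2)  = %.1f ms\n", response_time(2, 0.5, 0.1, 1.0));   // ~19 ms
    std::printf("t_s (N=10) = %.1f ms\n", response_time(10, 0.5, 0.1, 1.0));  // ~18.2 ms
    return 0;
}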


Fig. 2.3 Two-neuron WTA model: neural activities versus time for the winner and the loser

Although a closed-form solution is not available for t_s, qualitatively, as δ_1 increases, t_s decreases nearly inversely proportionally to δ_1, while t_s is insensitive to the change of N. This can be shown by computer simulations. For δ_1 = 0.1, we calculate the response time for different network sizes N, as shown in Fig. 2.4. It shows that when N is larger than 50, the response time is very insensitive to the change of N.


Fig. 2.4 Response time versus network size N

2.4 Related Works

Lazzaro's circuit [8] was the first hardware model of a winner-take-all network; it requires only O(N) interconnect. Each cell suppresses the outputs of all other cells through a global nonlinear inhibition. Since then, many improvements and variations of this network, with the addition of positive feedback and lateral connections, have been described.

2.4.1 J. Lazzaro

In Lazzaro's design, a global nonlinear inhibition is computed by a single wire. Each cell contributes to this global inhibition, and each cell receives the same global


inhibition. Due to the fact that each cell contributes to inhibiting itself without any compensation, this circuit has a low resolution of about 10%. The final outputs of the circuit are decided solely by the input conditions.

2.4.2 Add self-excitation

Lazzaro's circuit is enhanced through the addition of a current mirror to each cell to realize excitatory feedback (self-excitation). For example, J. A. Starzyk and X. Fang proposed such a circuit, which improves both the speed and the resolution of the original WTA circuit.

To mediate the competition between potential winners, hysteresis can be added to the WTA circuit by adding local feedback to the winning node in the array. The addition of hysteresis enhances the present winning stimulus so that it resists the selection of a new winner unless a stimulus has a value much larger than the present winner [9]. This has been used for an analog VLSI selective attention system.

2.4.3 Add lateral excitation

After lateral excitation is added to the WTA circuits through resistive networks, the winner is no longer restricted to the neuron that has the largest input, but depends on the spreading inputs in the neighborhood around the neuron. This kind of competition is rather close to the cortical competition. There are some implementations which can make it difficult for the network to leave the WTA region [10][11].


3. Pulse Based Communication Scheme

3.1 Address-Event-Representation (AER)

Originally envisioned by Mahowald and Sivilotti, the address-event-representation is an asynchronous point-to-point communication protocol for silicon neural systems. In their scheme, as depicted in Fig. 3.1, to transmit pulses, or spikes, from an array of neurons on one chip to the corresponding locations in an array on a second chip, an address encoder generates a unique binary address for each neuron whenever it spikes. A common bus transmits these addresses to the receiving chip, where an address decoder selects the corresponding location. The received pulses form the inputs of the second chip's neurons and are accumulated locally.

Fig. 3.1 The Address-Event-Representation

AER has the characteristics of event-driven, multiplexed pulse-frequency modulation, in which the address of the node that is the source of an event is broadcast during the pulse to all computational nodes within a defined region. This representation has the biological communication style: no energy is allocated to transmit "useless"


information. Thus the power consumption is minimized: the communication channel is accessed, and energy is expended, by active neurons only. Because generally very few neurons within a network are active at any one time, AER is more efficient at transmitting this sparse representation of data across the neural population than non-event-driven multiplexing methods such as the scanning used in earlier neuromorphic work.

Different from the biological communication channel, contention will occur if two or more neurons attempt to spike simultaneously when random access to the shared common bus is provided. Thus collisions must be expected and handled properly. We can simply detect and ignore the collided pulses, or use an arbitration scheme to serialize simultaneous events. We can hold all the waiting pulses in a queue until they are selected by the arbiter, or discard aging pulses when they are no longer of interest and can be regarded as noise. According to the way the collisions are handled, the reported AER communication schemes can be classified into two main groups: non-arbitered and arbitered.

3.1.1 Non-arbitered

In the non-arbitered scheme, all output units have access to one common bus, on which the code identifying each neuron is wired. When activity in a neuron determines a pulse emission, and if no other pulses are simultaneously emitted, the bus configuration carries the identity of the emitting neuron for the duration of a pulse. Pulses are decoded by the receiver and directed to the units on the target chips. Whenever two or more neurons attempt to access the channel at the same time, the coding ensures that the resulting bus configuration is not valid and is automatically ignored by the decoder.

To test the performance of a communication system exploiting the non-arbitered scheme, Mortara has reported a system in [12] as a variation of the original AER of Mahowald. The goal of the system is to map the activities of an array of cells in the


transmitter chip onto a receiver with the same dimensions, as shown by the block diagram in Fig. 3.2.

Every event is sent onto the bus preserving the timing information, by avoiding all sorts of handshaking and buffering. To detect collisions, the bus performs a bitwise wired-OR operation on the colliding codes. In their implementation, the addresses are coded with the same number k of "ones", such that a collision, by bitwise ORing two or more codes, results in a code with at least k + 1 "ones". For example, if we choose k = 2, suppose the pulse from neuron A is coded by "1010" and the pulse from neuron B is coded by "0011"; the bitwise OR of the two codes when they collide is "1011", which has 3 "ones" rather than 2. This code is not valid and is automatically ignored by the decoder, which results in the loss of the colliding pulses.

Fig. 3.2 The non-arbitered communication system (activity-to-frequency conversion, encoder, bus, decoder and pulse accumulation between transmitter and receiver chips; a collision between 1010 and 0011 appears on the bus as 1011)
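The constant-weight coding described above makes collisions detectable by a simple popcount test. The sketch below (an illustrative reconstruction in C++, not code from the thesis) encodes neuron addresses as words with exactly k ones and flags any bus word whose weight differs from k as a collision.

#include <bitset>
#include <cstdint>
#include <cstdio>

constexpr int K = 2;  // every valid address carries exactly K ones

// A word read from the bus is a valid address only if its weight equals K;
// a wired-OR collision of two different valid addresses has weight > K.
bool is_valid_address(std::uint32_t word) {
    return std::bitset<32>(word).count() == K;
}

int main() {
    std::uint32_t a = 0b1010;           // neuron A
    std::uint32_t b = 0b0011;           // neuron B
    std::uint32_t bus = a | b;          // simultaneous emission: wired OR
    std::printf("A valid: %d, B valid: %d, collision word 0b1011 valid: %d\n",
                is_valid_address(a), is_valid_address(b), is_valid_address(bus));
    return 0;
}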


Let ∆ be the minimum time necessary to generate a suitable pulse at the receiver. If a neuron starts firing at time t, there will be a collision if any other neuron fires at any time between t − ∆ and t + ∆. This will result in correct bus configurations, corresponding to the firing neuron's code, for a time shorter than ∆: not enough to drive the receiver.

This non-arbitered communication scheme is mainly justified by the fact that events that are indistinguishable by biological structures are sufficiently separated in the VLSI context. In biological systems like the retina, which respond vigorously and immediately to changes, many events co-occur within a few milliseconds. In a common VLSI system, simultaneity means events separated by a time interval on the order of ten to a hundred nanoseconds.

As illustrated, this scheme has the virtue of simplicity and permits high-speed operation. Provided the system has few neurons and low bus traffic, few collisions will occur. However, for a random (Poisson) firing process, the collisions increase exponentially as the spiking activity increases, and the collision rates are even more prohibitive when neurons fire in synchrony. Therefore, scaling to larger systems would lead to a significant number of collisions, which will result in a degradation of the bandwidth of the bus.

3.1.2 Arbitered

The arbitered approach [14][15] completely avoids collisions on the bus by handshaking and arbitration. An arbiter is introduced between the output nodes and the address encoder. The arbiter detects potential collisions and ensures that only one of the contending output nodes gains access to the encoder at any time. The outputs of the rejected nodes can be ignored and discarded (partially arbitered), or queued until they are selected by the arbiter (fully arbitered). Intermediate queuing strategies, which


queue a limited number of events, or discard aging events, have also been investigated [16].

To illustrate this arbitered communication scheme, here we give one of the fully arbitered implementations, proposed by Boahen in [14].

A full four-phase handshake is performed over a pair of wires between sender and receiver, as shown in Fig. 3.3, which guarantees synchronization between chips. The data lines communicate the address of the sender to the receiver. A data buffer is used to queue unacknowledged events. Thus simultaneous events are serialized onto the single communication bus.

Fig. 3.3 The arbitered communication scheme (communication model between sender S and receiver R with request r, acknowledge a and data lines, and the corresponding signals and timing)


The sender initiates the sequence by driving its data onto the bus and taking r high. The receiver acknowledges by taking a high, after latching the data. This makes queueing and pipelining straightforward: you make a neuron wait, or stall a pipeline stage, simply by refusing to acknowledge it.

Pipelining is a well known approach to reduce the time of arbitration by breaking the communication cycle up into a sequence of smaller steps that execute concurrently. As shown in Fig. 3.4, concurrency allows several address-events to be in various stages of transmission at the same time, reducing the cycle time to the length of the longest step.

Fig. 3.4 Pipelined communication channel. Communication cycle involving four-phase handshakes between sending neuron, arbiter, address encoder, address decoder, and receiving neuron. White and black boxes indicate the duration of the set and reset halves. In the pipelined channel, we do not wait for the next stage to acknowledge us before we acknowledge the previous


stage; similarly, we do not wait for it to withdraw its acknowledge before we withdraw ours.

The data-buffer pipeline stage performs FIFO (first-in, first-out) queuing, which means the events to be acknowledged are queued by age.

In this arbitered scheme, arbitration preserves the integrity of the addresses that are transmitted, but the statistics and the temporal structure of the events may be distorted by the queuing.

3.1.3 Summary

The selection of a communication scheme depends on the task that must be solved by the neuromorphic system. A simple non-arbitered design, which discards spikes clobbered by collisions, offers higher throughput (bus utilization) if high spike loss rates are tolerable. In contrast, a complex arbitered design, which makes neurons wait their turn, offers higher throughput when low spike loss rates are desired. Arbitration lengthens the cycle time, reducing the channel capacity, whereas queuing causes temporal dispersion, degrading timing information.

3.2 Related Works on AER

Since it was first used to transmit visual signals out of a silicon retina, the AER representation has been strengthened and formalized. Several variants of the original scheme have emerged in the last few years. Boahen has interfaced two silicon retinas to three receiver chips to implement binocular disparity-selective elements [24]. Venier and his colleagues have used an asynchronous interface to a silicon retina to implement orientation-selective fields [15].


Here, Table 1 compares the pulse coupled communication schemes reported by the listed authors: Mortara/Venier [12], Douglas [13], Boahen [14], and Mahowald/Sivilotti/Lazzaro [13].

TABLE 1. Comparison of reported pulse communication schemes

Coding scheme: PFM (all four schemes)
Neuron model: integrate-and-fire (all four schemes)
AER scheme: non-arbitered (Mortara/Venier, Douglas); arbitered (Boahen, Mahowald/Sivilotti/Lazzaro, which exists in a hard-wired version and an arbitered version [1])
Collision handling: OR operation with loss of all participating events (Mortara/Venier); four-phase handshake with queuing of new events (Boahen)
Encoder/decoder: address encoder with (1 + log2 N) wires; [log2(X)]-bit and [log2(Y)]-bit decoders
Reported performance: T = 20 ms; 30 ns; 2 µs (four-cycle handshake); not available for one scheme


4. Modelling of Pulse Based Communication Scheme

4.1 System Structure

Unlike most of the reported pulse based communication schemes using the AER representation for interchip communication, which perform only feed-forward communication, we propose a system incorporating recurrent connections, as depicted in Fig. 4.1. All cells have access to a single parallel bus; each cell can not only broadcast its own firing state on the common bus but also obtain the firing states of the other neurons from the bus.

Fig. 4.1 Communication system block diagram (activity-to-frequency conversion, encoder, bus, decoder and pulse director)

An integrate-and-fire neuron is used to perform the activity-to-frequency conversion. Whenever the neuron fires and no other neuron fires at the same time, the address of its


source cell will be broadcast on the bus. Collision handling will be discussed in section 4.2.

4.1.1 Integrate-and-fire neuron

The IF neuron may be regarded as a caricature of a real neuron that captures some essence of its firing or spiking properties. The IF model considered in this work is a model of a non-leaky, current-clamped membrane in terms of a state variable U(t). The output is a sequence of firing events, defined as those times at which U(t) reaches some threshold θ. Immediately after a firing event the state variable is reset to some resting level, which is chosen to be 0 here. With external input I(t), its dynamics can be described as

dU(t)/dt = I(t)

subject to the reset

lim_{δ→0} U(T_n − δ) = θ,    lim_{δ→0} U(T_n + δ) = 0,

where T_n is the time when U(t) reaches θ.

The evolution of the integrate-and-fire neuron is shown in Fig. 4.2.

Fig. 4.2 The integrate-and-fire neuron
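A minimal sketch of this non-leaky IF model is given below (illustrative C++; the forward-Euler step dt is my assumption, not part of the thesis). The state is integrated, compared against the threshold θ, and reset to zero at each firing time.

#include <vector>

// Non-leaky integrate-and-fire neuron: dU/dt = I(t), reset to 0 when U reaches theta.
struct IFNeuron {
    double U = 0.0;
    double theta;
    explicit IFNeuron(double th) : theta(th) {}

    // Advance one Euler step of size dt with input I; return true if the neuron fires.
    bool step(double I, double dt) {
        U += I * dt;
        if (U >= theta) { U = 0.0; return true; }
        return false;
    }
};

// Example: collect the firing times of a neuron driven by a constant input.
std::vector<double> firing_times(double I, double theta, double dt, double t_end) {
    IFNeuron n(theta);
    std::vector<double> times;
    for (double t = 0.0; t < t_end; t += dt)
        if (n.step(I, dt)) times.push_back(t);
    return times;
}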


4.1.2 Address-encoder

The encoder codes the address of the firing neuron. Different coding schemes are available. One possibility is described in section 3.1.1, where the collided part of all the participating pulses is lost. Another code has been studied in [17], where a collision results in the loss of only one of the participating pulses, but the scheme introduces a bias favoring some of the codes, which is undesirable.

There is no coding scheme capable of systematically and unambiguously separating several overlapping addresses.

4.1.3 Address-decoder

Whenever the decoder detects a valid address on the bus, a fixed-height, fixed-width pulse is generated and directed to the corresponding locations. Weighting is applied as the recurrent strength.

As shown in Fig. 4.3, A and T_p represent the pulse amplitude and width respectively. The constraint A·T_p = θ is set by the fact that we must keep the integral of the continuous input signal to be represented the same as the integral of the encoded pulse stream; in other words, we must keep their amplitude averages the same.

Fig. 4.3 The fixed-height, fixed-width pulse (A·T_p = θ)


At this point, it seems appropriate to point out that the pulse amplitude A has to satisfy A > I_max, otherwise the pulse generator will saturate. T_p determines the channel capacity and plays an important role in the performance of the system.

4.2 System Level Model of Pulse Coupled Communication Schemes

Since the transmission delay is so small, normally on the scale of nanoseconds, it is reasonable to ignore it in the simulations and to assume that whenever the address code of a firing neuron is on the bus, that firing event is detected by the receiver at the same time.

To compute a network in discrete time, the continuous time is divided into intervals of a constant duration ε. The period ε is commonly referred to as the time slice. Within a time slice the new state of the network is calculated, where integration is done by forward Euler. That means all neurons are computed based on the inputs they receive and their internal state variables. The result of this computation is a new state of the network, including the output spikes generated within this time slice.

During each time slice the new state of the network is computed and updated. We can divide the computation of a time slice into four phases:

1. Input phase — execute InputFunction for all neurons:

v̇_i(n) = −v_i(n − 1) + I_i + a·s_i(n − 1) − b·Σ_{j≠i} s_j(n − 1)

2. Update phase — execute UpdateFunction for all neurons:

v_i(n) = v_i(n − 1) + v̇_i(n)·ε


3. Output phase — execute OutputFunction for all neurons:

U̇_i(n) = max(v_i(n), 0);

emit a spike if U_i(n) exceeds the threshold.

4. Arbitration phase — execute ArbitrationFunction for all events:

set s_i(n) according to the arbitration scheme.

These phases represent a way of structuring the computation of a new state of the network. They will be used in the following sections when describing the simulation procedures.

4.2.1 Non-arbitered

To model the non-arbitered communication scheme proposed by Mortara [12], we assume that as long as the address code on the bus is valid, the decoder can decode it correctly, as shown in Fig. 4.4. The overlapping part of each colliding pulse is ignored, while the rest can go through.

In the digital simulation, suppose TS time steps of a network with N neurons are to be simulated. First, the inputs and all the state variables are initialized. For each time step in TS, we calculate the new state of the network: for each neuron in N, we update all the variables and detect the potential spikes. If there is no potential spike, no spike is encoded; if there is only one potential spike, it is encoded. Otherwise, a collision is


detected and all the potential spikes involved are discarded. The flow chart is given in Fig. 4.5.

Fig. 4.4 The non-arbitered communication scheme (the overlapping part of colliding pulses on the bus is lost at the receiver)

Fig. 4.5 Flow chart for the simulation of the non-arbitered scheme: after initialization, for each time step ε in TS and each neuron in N, execute InputFunction(), UpdateFunction() and OutputFunction(); on a collision, set all s_i to zero, otherwise set s_i to A for the firing neuron and to zero for the non-firing neurons.

Inside each dashed frame is the body of the corresponding loop. It models exactly the non-arbitered communication scheme illustrated in Fig. 4.4.
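The flow chart can be read as the following simulation loop (an illustrative C++ sketch under the assumptions of this section; the variable names are mine, and for simplicity the pulse width T_p is taken to be one time slice ε, so s_i is either A or 0 for a whole slice). It applies the four phases of section 4.2 with forward-Euler integration and discards every potential spike of a slice whenever more than one neuron wants to fire.

#include <algorithm>
#include <utility>
#include <vector>

// One time slice of the non-arbitered scheme (section 4.2.1).
struct NonArbiteredSim {
    int N;
    double a, b, A, theta, eps;
    std::vector<double> I, v, U, s;

    NonArbiteredSim(int n, double a_, double b_, double A_, double th, double eps_,
                    std::vector<double> inputs)
        : N(n), a(a_), b(b_), A(A_), theta(th), eps(eps_),
          I(std::move(inputs)), v(n, 0.0), U(n, 0.0), s(n, 0.0) {}

    void step() {
        // Input and update phases: forward-Euler update of the state potentials.
        std::vector<double> v_new(N);
        double s_sum = 0.0;
        for (int j = 0; j < N; ++j) s_sum += s[j];
        for (int i = 0; i < N; ++i) {
            double vdot = -v[i] + I[i] + a * s[i] - b * (s_sum - s[i]);
            v_new[i] = v[i] + vdot * eps;
        }
        v = v_new;

        // Output phase: integrate the IF neurons and collect the potential spikes.
        std::vector<int> firing;
        for (int i = 0; i < N; ++i) {
            U[i] += std::max(v[i], 0.0) * eps;
            if (U[i] >= theta) { firing.push_back(i); U[i] = 0.0; }
        }

        // Arbitration phase (non-arbitered): a collision discards every spike.
        std::fill(s.begin(), s.end(), 0.0);
        if (firing.size() == 1) s[firing.front()] = A;
    }
};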


4.2.2 Arbitered

Ignoring the cycle time for handshaking, it is assumed that whenever a colliding pulse is detected, it is held in the queue. When the bus is not busy, the oldest pulse is fired and removed from the queue. As shown in Fig. 4.6, in this scheme, first of all, collisions are allowed and no pulse is lost. Secondly, collided pulses are queued according to age; the oldest pulse has the first priority. This describes the same arbitration scheme as Boahen's design in [14].

Fig. 4.6 The arbitered communication scheme (colliding pulses are serialized on the bus and reconstructed at the receiver)

As shown in Fig. 4.7, the procedure for the arbitered communication scheme in the digital simulation is similar to that of the non-arbitered scheme, except that a queue is maintained and the ArbitrationFunction is different. All the potential spikes are queued by age; when the bus is busy, no new spike is fired, while when the bus is not busy, the first spike in the queue is fired and then removed from the queue, first in first out. Thus it is equivalent to the bus carrying only valid address codes.


Fig. 4.7 Flow chart for the simulation of the arbitered scheme: after initialization, for each time step ε in TS and each neuron in N, execute InputFunction(), UpdateFunction() and OutputFunction(); queue all the spikes, and if the bus is busy fire no new spike, otherwise fire and remove the first spike in the queue.

Inside each dashed frame is the body of the corresponding loop. It models the fully arbitered communication scheme by Boahen [14].
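A sketch of the corresponding ArbitrationFunction is shown below (illustrative C++ under the same one-slice-per-pulse assumption as the previous sketch; the FIFO discipline is the point being shown). New potential spikes are appended to a queue, and at most one queued spike is placed on the bus per time slice, oldest first.

#include <algorithm>
#include <deque>
#include <vector>

// Fully arbitered arbitration phase: spikes wait in a FIFO queue and are
// serialized onto the bus, oldest first, one per time slice in this sketch.
struct FifoArbiter {
    std::deque<int> queue;   // indices of neurons with pending spikes, oldest first

    // firing: neurons that produced a potential spike in this slice.
    // s: recurrent pulse signals, set to A for the neuron granted the bus.
    void arbitrate(const std::vector<int>& firing, std::vector<double>& s, double A) {
        for (int i : firing) queue.push_back(i);          // queue new events by age
        std::fill(s.begin(), s.end(), 0.0);
        if (!queue.empty()) {                             // bus free: serve oldest event
            s[queue.front()] = A;
            queue.pop_front();
        }
    }
};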


5. Pulse Coupled Winner-Take-All Model

5.1 Basic Model

The basic pulse coupled WTA model we investigate can be described as follows:

τ v̇_i(t) = −v_i(t) + I_i + a·s_i(t) − b·Σ_{j≠i} s_j(t)

where s_i(t) = A·Σ_j [u(t − t_ij) − u(t − t_ij − T_p)] represents a fixed-width, fixed-height pulse stream, with A·T_p = θ;

U̇_i(t) = max(v_i(t), 0) describes a non-leaky integrate-and-fire neuron;

u(t) is the step function, and {t_ij} is the time sequence at which U_i(t) = θ, whereupon U_i(t) is reset to zero.

It is a pulse coupled form of the continuous WTA network described in chapter 2. PFM (pulse frequency modulation) is used as the modulation format, where each cell in the network has a non-leaky IF neuron for activity conversion.

The architecture of a single neuron is shown below in Fig. 5.1, and Fig. 5.2 shows the general communication process when there is no collision involved.


Fig. 5.1 The block diagram for a single neuron in the network (activity-to-frequency conversion with an integrator, self-excitation a, and global inhibition from the bus)

Fig. 5.2 The signal flow of the pulse based communication scheme: the integrate-and-fire state U(t), the address code broadcast on the digital bus, and the pulse s(t) of amplitude A and width T_p directed to the receiving location


As we will discover in this chapter, this pulse coupled WTA network cannot in general guarantee the qualitative behavior of the continuous network; however, if the design parameters are properly chosen, the qualitative behavior can be achieved.

5.2 Properties

Assume the communication channel has infinite capacity, such that there is no collision. Then the pulse coupled model qualitatively preserves the following properties of the continuous model.

• The trajectory of the neural network is bounded.

Similar to the continuous network, the pulse coupled WTA network also exhibits boundedness of the neural activities. The peak and bottom values of each neuron's state potential in steady state are denoted by v̄_i and v_i, respectively. Define ṽ_i(t) as the average of v_i(t) between two successive spikes, as depicted in Fig. 5.3:

ṽ_i(t) = (1/(T_{n+1} − T_n)) ∫_{T_n}^{T_{n+1}} v_i(τ) dτ,   where t ∈ (T_n, T_{n+1}].


Fig. 5.3 Signal representations in steady state (spikes of amplitude A and width T_p from the winner with period T, and the resulting dynamics of a loser between T_n and T_{n+1})

Lemma 1: ∀ε > 0, ∀i = 1, …, N, ∃T_i < ∞, ∀t > T_i, ṽ_i(t) < I_i/(1 − a) + ε.

Proof: (see appendix).

• The WTA model will never work in the VWTA region.

Proof: For the model described in section 5.1, suppose that at steady state the winner is not the neuron with the largest external input, neuron 1, but neuron k (k ≠ 1).


If the model exhibits WTA behavior in the continuous case, that is, it satisfies a < 1 and (1 − a)(1 − δ_1) < b < (1 − a)/(1 − δ_1), then, letting y = θ/τ and β = T_p/T < 1, the peak state value of the neuron with the largest input will be

v̄_1 = I_1 − (b I_k)/((1 − a)β) · (e^{βy(1−a)/I_k} − 1)/(e^{y(1−a)/I_k} − 1).

Since neuron 1 is one of the losers, it should keep silent; that is, v̄_1 ≤ 0 must always hold.

Define φ(x) = (e^{βx} − 1)/(e^x − 1), where x > 0. Then

φ(x) = ( Σ_{n=1}^{∞} (βx)^n/n! ) / ( Σ_{n=1}^{∞} x^n/n! ) < ( Σ_{n=1}^{∞} β x^n/n! ) / ( Σ_{n=1}^{∞} x^n/n! ) = β

⇒ v̄_1 > I_1 − (b I_k)/((1 − a)β) · β = I_1 − b I_k/(1 − a).

From (1 − a)(1 − δ_1) < b < (1 − a)/(1 − δ_1) we have b/(1 − a) < 1/(1 − δ_1) ≤ 1/(1 − δ_k), where δ_k = (I_1 − I_k)/I_1, so

v̄_1 > I_1 − b I_k/(1 − a) > I_1 − I_k/(1 − δ_k) = 0.

This contradicts the assumption that v̄_1 ≤ 0, so the assumption does not hold.

Since the neuron with the largest input will always fire, for any WTA network its pulse-coupled counterpart will never exhibit VWTA competitive behavior, where the winner is not the neuron with the largest input but some other neuron. In other words,


the network will either work in the WTA regime or in the WSA regime. Consequently the network will not make a wrong decision by choosing the wrong winner; at worst, it cannot make a decision.

5.3 Simulation Environment

• Programming environment

Matlab (Simulink) and C++ are chosen as the programming languages.

• Arithmetic precision

For simulations on workstations, floating point is a natural representation.

• Network size

Since the computational load increases dramatically as the size of the network increases, it is hard to investigate a network with a large population of neurons. We choose the network size N = 10.

• Random initial states of the IF integrator

To avoid initial synchronization, the initial conditions of the IF integrators are randomized between −θ/2 and θ/2.

• Noise

After each iteration, uniform deviate noise is added to the system at the level of 1% of the maximum input.


5.4 Examples of Pulse Coupled Realization

Without considering implementation issues, we assume the capacity of the pulse based communication channel is infinite, such that all the pulses can be put through.

Let us look at the two-neuron model first. To make sure the recurrent strengths are chosen such that the system works in the WTA region, we choose a = b = 0.5, θ = 0.4τ, A = 4 and T_p = 0.1τ; then we get the concrete WTA model:

τ v̇_1(t) = −v_1(t) + I_1 + 0.5·s_1(t) − 0.5·s_2(t)

τ v̇_2(t) = −v_2(t) + I_2 − 0.5·s_1(t) + 0.5·s_2(t)

where s_1(t) and s_2(t) follow the definition in section 5.1.

With the same input conditions as the example analyzed in chapter 2, where I_1 = 1.0, δ_1 = 0.1 ⇒ I_2 = 0.9 and τ = 1 ms, according to the previous analysis and simulation, with the continuous model at steady state we have

ṽ_1 = I_1/(1 − a) = 1.0/(1 − 0.5) = 2.0,   ṽ_2 = 0,   t_s ≈ 19 ms.
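This two-neuron pulse coupled model can be simulated directly; the following sketch is an illustrative C++ reimplementation (not the thesis's Simulink/C++ code; the Euler step dt and the particular initial IF states are my assumptions). Each neuron keeps an IF integrator and an active-pulse timer of width T_p, so the temporal averages of v_1 and v_2 can be compared with the continuous predictions above.

#include <algorithm>
#include <cstdio>

int main() {
    const int    N = 2;
    const double tau = 1.0, a = 0.5, b = 0.5;          // ms, recurrent strengths
    const double theta = 0.4, A = 4.0, Tp = 0.1;       // A*Tp = theta
    const double I[N] = {1.0, 0.9};                    // I1 = 1.0, delta1 = 0.1
    const double dt = 0.001, t_end = 30.0;             // Euler step and duration (assumed)

    double v[N] = {0.0, 0.0};           // state potentials
    double U[N] = {0.1, -0.1};          // IF integrators, within (-theta/2, theta/2)
    double pulse_left[N] = {0.0, 0.0};  // remaining width of the current output pulse

    for (double t = 0.0; t < t_end; t += dt) {
        double s[N];
        for (int i = 0; i < N; ++i) s[i] = (pulse_left[i] > 0.0) ? A : 0.0;

        for (int i = 0; i < N; ++i) {
            double inhib = 0.0;
            for (int j = 0; j < N; ++j) if (j != i) inhib += s[j];
            double vdot = (-v[i] + I[i] + a * s[i] - b * inhib) / tau;
            v[i] += vdot * dt;

            U[i] += std::max(v[i], 0.0) * dt;           // non-leaky IF integration
            if (U[i] >= theta) { U[i] = 0.0; pulse_left[i] = Tp; }
            pulse_left[i] = std::max(0.0, pulse_left[i] - dt);
        }
    }
    std::printf("v1 = %.2f, v2 = %.2f at t = %.0f ms\n", v[0], v[1], t_end);
    return 0;
}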


The computer simulation results for this pulse coupled model are depicted in Fig. 5.4.

Fig. 5.4 The pulse coupled realization compared with the continuous model (neural activities of the winner and the loser versus time)

In this example, we can observe that the pulse coupled network preserves the same competitive behavior as the continuous network. "It shows nicely that information about a time dependent signal can indeed be conveyed by spike timing" [12]. In this case, the temporal averages of the outputs are very close to the outputs of the continuous model. Consequently it is possible to retain all the desirable properties of the continuous network with pulse coupled realizations.

However, if we choose θ = 0.8τ, A = 8 and T_p = 0.1τ instead, the pulse coupled model no longer exhibits the same competitive behavior, as shown in Fig. 5.5.


Fig. 5.5 The pulse coupled realization with θ = 0.8τ (neural activities of the winner and the loser versus time)

It turns out that the potential winner cannot suppress the activity of the loser completely. At steady state, the potential winner is not the only active neuron. Therefore, the operation regime switches from WTA to WSA.

As shown, the discrete coding cannot guarantee the same competitive behavior of the network. The design parameters θ, A and T_p have to satisfy certain criteria to ensure that the qualitative behavior is preserved. The rest of this chapter focuses on finding these criteria. As long as the parameters are chosen properly, the temporal average of the outputs of the pulse coupled realization can be very close to the outputs of the continuous model, even at a fairly low spiking rate where the communication load is easily afforded.


5.5 Steady State Analysis

As demonstrated in the previous section, three design parameters are involved, and they have to satisfy certain criteria to achieve the qualitative behavior. Since A·T_p = θ, there are two degrees of freedom. Given T_p, there must exist θ_l(T_p) and θ_h(T_p), which correspond to the lower and upper bound of the threshold θ respectively. The analysis of the dynamics of pulse coupled networks is generally complicated. However, in steady state it is feasible to predict the network dynamics. Therefore, we can find θ_l(T_p) and θ_h(T_p) for steady state, which are necessary conditions for the pulse coupled networks to exhibit WTA behavior.

Assume that the network can reach the WTA state. Define the peak and bottom values of a loser's state potential at steady state to be v̄_i and v_i respectively, as shown in Fig. 5.6. For the simple N-neuron network described in section 5.1, given a, b and τ, the steady-state average value of the winner is I_1/(1 − a); thus the average firing rate of the winner is 1/T, where T = θ(1 − a)/I_1. As stated in section 4.1.2, define A_min = I_1/(1 − a); then the pulse amplitude A has to satisfy A > A_min:

θ/T_p > I_1/(1 − a)  ⇒  θ > I_1 T_p/(1 − a).


Since A = θ/T_p = (θ/τ)/(T_p/τ), both θ and T_p can be normalized by τ; for the rest of the discussion, let τ be the unit of θ and T_p. So θ_l(T_p) = I_1 T_p/(1 − a): the threshold must satisfy θ > θ_l(T_p) for any given T_p. θ_l(T_p) is proportional to I_1, the largest external input.

Fig. 5.6 Steady state (state potentials versus time; at steady state a loser's state potential oscillates between its peak value v̄_i and its bottom value v_i)

Since the winner is the only active neuron in steady state, the dynamics of all the losers are governed by the spikes from the winner. The state potential of each loser oscillates between its peak value and its bottom value with the period T = θ(1 − a)/I_1, for


all the losers. To ensure that all the losers remain silent in steady state, their peak values should always be below zero. As derived in the appendix, we obtain

v̄_i = I_i − bA·(e^{T_p/τ} − 1)/(e^{T/τ} − 1),

which has to satisfy v̄_i ≤ 0. Since neuron 2 is the neuron with the second largest input, the condition becomes v̄_2 ≤ 0, with

v̄_2 = I_2 − bA·(e^{y/A} − 1)/(e^{y(1−a)/I_1} − 1),   where y = θ/τ.

If instead v̄_2 > 0, that is,

I_2 (e^{y(1−a)/I_1} − 1) > bA (e^{y/A} − 1),

then the loser cannot remain silent: when θ/τ is greater than a certain value, even if the network reaches the steady state, whatever the pulse form is, it will not stay at the steady state.

Suppose a + b = 1 holds, and let y denote θ/τ. The necessary condition for the network to exhibit WTA behavior is

I_2 (e^{yb/I_1} − 1) ≤ bA (e^{y/A} − 1).


Given y, if A_max is the solution to I_2(e^{yb/I_1} − 1) = bA(e^{y/A} − 1), then A_max is the upper limit of the pulse amplitude corresponding to y. The intersection of A_max and A = θ/T_p gives the upper bound of θ, defined as θ_h(T_p).

Consider the extreme case when T_p is very small (0 < T_p ≪ 1):

v̄_2 = I_2 − bA·(e^{T_p/τ} − 1)/(e^{T/τ} − 1) < I_2 − bA·(T_p/τ)/(e^{T/τ} − 1) = I_2 − b(θ/τ)/(e^{θ(1−a)/(τ I_1)} − 1).

Define θ_m as the solution to I_1(1 − δ_1)(e^{y(1−a)/I_1} − 1) = by, where y = θ/τ. Thus θ_m is the lower limit of θ_h(T_p).

If θ_l(T_p) < θ < θ_m, then I_2(e^{θ(1−a)/(τ I_1)} − 1) ≤ b·θ/τ for all T_p.

For a simple WTA model with a = b = 0.5, I_1 = 1.0 and δ_1 = 0.1, the necessary conditions for the network to exhibit WTA are given in Fig. 5.7.
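The bounds above are defined implicitly, but they are easy to evaluate numerically. The sketch below (illustrative C++; the bisection brackets are my assumptions) solves for θ_m by bisection, and obtains θ_h(T_p) by substituting A = θ/T_p into the boundary condition and solving for θ. With these parameters it reproduces θ_l(0.01) = 0.02 and θ_h(0.01) ≈ 0.44, the values quoted in section 5.6.1.

#include <cmath>
#include <cstdio>

// Bisection on a bracket [lo, hi] over which f changes sign.
template <typename F>
double bisect(F f, double lo, double hi) {
    double flo = f(lo);
    for (int k = 0; k < 100; ++k) {
        double mid = 0.5 * (lo + hi), fmid = f(mid);
        if ((flo < 0.0) == (fmid < 0.0)) { lo = mid; flo = fmid; } else { hi = mid; }
    }
    return 0.5 * (lo + hi);
}

int main() {
    const double a = 0.5, b = 0.5, I1 = 1.0, delta1 = 0.1, I2 = I1 * (1.0 - delta1);

    // theta_m solves I2*(exp(y*(1-a)/I1) - 1) = b*y (nonzero root).
    double theta_m = bisect([&](double y) {
        return I2 * (std::exp(y * (1.0 - a) / I1) - 1.0) - b * y;
    }, 0.1, 2.0);

    // theta_h(Tp): substitute A = y/Tp into I2*(exp(y*b/I1) - 1) = b*A*(exp(y/A) - 1).
    double Tp = 0.01;
    double theta_h = bisect([&](double y) {
        return I2 * (std::exp(y * b / I1) - 1.0)
             - b * (y / Tp) * (std::exp(Tp) - 1.0);
    }, 0.1, 2.0);

    double theta_l = I1 * Tp / (1.0 - a);
    std::printf("theta_l = %.3f, theta_h = %.3f, theta_m = %.3f\n",
                theta_l, theta_h, theta_m);
    return 0;
}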


Fig. 5.7 The necessary conditions to exhibit WTA (pulse amplitude versus normalized threshold, for T_p = 0.01 and T_p = 0.05, with the line A = A_min and the boundary θ = θ_m). The non-shaded area is the region that can exhibit WTA behavior, while the shaded area cannot.

For a fixed T_p, to make the pulse coupled network exhibit WTA behavior, on the one hand the pulse amplitude A must be greater than A_min; the intersection of the two lines A = A_min and A = θ/T_p determines the lower bound of the threshold, θ_l(T_p). On the other hand, A must be less than A_max; the intersection of the curve A_max and the line A = θ/T_p determines the upper bound θ_h(T_p). Only when the threshold is chosen such that θ_l(T_p) < θ < θ_h(T_p) can the WTA behavior be preserved.


Otherwise the qualitative behavior is not preserved, as some loser will never stop firing, although it fires sparsely; consequently the network switches to WSA behavior.

It also shows that as T_p decreases, the intersection points shift to the left, that is, both θ_l(T_p) and θ_h(T_p) also decrease, until at last θ_h(T_p) → θ_m and θ_l(T_p) → 0. Therefore, the range of the available threshold shifts to the left as T_p decreases. Also, when T_p ≪ 1, the applicable range of the threshold can be approximated by (0, θ_m). Because of the assumption underlying the whole analysis, the above constraints are necessary but not sufficient.

At this point, it seems appropriate to stress that the relative strength of self-excitation and lateral inhibition is crucial to the competitive behavior of the network. We keep a + b = 1 constant and choose b = 0.4, 0.5, 0.6 respectively. Fig. 5.8 shows that θ_h(T_p) increases as b decreases. That is, the constraints on the threshold are loosened as a/b increases. This is straightforward: the ratio a/b determines the degree of WTA behavior. The larger the ratio, the harder the WTA, and the weaker the constraints on the threshold selection.


Fig. 5.8 The effects of the recurrent strengths (necessary conditions to exhibit WTA for b = 0.4, 0.5, 0.6 with T_p = 0.01)

5.6 Response Time Study

As stated before, the response time is an important parameter for a WTA network, yet only a few publications provide in-depth analysis of the network response time. In this section, we look at the response time of the pulse coupled WTA network. We consider the same model as described previously, with a = b = 0.5, I_1 = 1.0, δ_1 = 0.1, and N = 10 with equally strong losers, I_2 = … = I_10 = I_1(1 − δ_1). The response time calculated for the continuous model is t_s = 18.2.


It has to be stressed that, unlike traditional coding of information, the pulse stream must be viewed as a statistical representation in which the statistical properties must be preserved, while loss of data may be tolerated. Since it is statistical, we must have a large enough sample set in order to evaluate the behavior of the pulse coupled WTA network. In this work, the size of the sample set is 25.

5.6.1 Response time under different communication schemes

We study the response time for the following three cases.

• No collision consideration

When not taking the collisions into account, the channel capacity is assumed to be infinite, so this undoubtedly gives the best performance we can achieve with the pulse coupled model.

• Non-arbitered

By handling the collisions as illustrated in section 4.2.1, we simulate the non-arbitered communication scheme by Mortara.

• Arbitered

To simulate the fully arbitered communication scheme by Boahen, we handle the collisions using the arbitration scheme demonstrated in section 4.2.2.

We estimate the response time for T_p = 0.01. Based upon the steady state analysis, in this case

θ_l(0.01) = 0.02,   θ_h(0.01) ≅ 0.44.


The response time is estimated in the applicable range of the threshold (θ_l(T_p), θ_h(T_p)). The simulation results are shown in Fig. 5.9.

Fig. 5.9 The response time versus threshold under the three schemes (no collision, arbitered and non-arbitered), compared with the continuous-model value t_s = 18.2

From the simulation results, we observe that, even without collision consideration, outside the applicable range of the threshold (θ_l(T_p), θ_h(T_p)) the network cannot settle down, which is consistent with the steady state analysis in section 5.5. It also shows that the smaller the threshold, the smaller the response time. This is straightforward, since a larger threshold gives a larger neural latency, which contributes to the degradation of the performance. When the threshold θ is small enough, it may take less time than the continuous case for the losers to settle. Intuitively, we prefer selecting a threshold as low as possible. However, there is no free lunch: a smaller threshold requires a larger communication load.


When collisions are considered, for both the non-arbitered and the arbitered scheme, the performance is degraded remarkably near the lower bound of the threshold, θ_l(T_p), due to the high collision rate and consequently the high loss of data. Outside the applicable range of the threshold, the network ceases functioning as WTA. The upper bound of the threshold is the same as θ_h(T_p), which is set by the theoretical analysis. However, the lower bound of the threshold is larger than θ_l(T_p); the actual lower bound is set by the largest collision rate that can be tolerated.

The similarities between the two schemes are: with the same pulse width, they have the same threshold upper bound set by the theoretical analysis, and they give the minimum response time when θ takes a moderate value in the applicable range. This is because the response time is governed by two factors: on the one hand, as θ increases, the response time increases according to the curve measured without consideration of collisions; on the other hand, as θ decreases, the collision rate increases, which contributes to the increase of the response time. Hence we have to choose a moderate threshold to balance the two effects.

Near the lower bound of the threshold, the two communication schemes give similar performance, except that the arbitered scheme can tolerate a slightly larger collision rate. However, when θ is larger, or in other words when the average firing rate is smaller, the arbitered scheme gives much better performance than the non-arbitered scheme in terms of the response time.

This is consistent with what Boahen said: "Arbitration is the best choice for neuromorphic systems whose activity is sparse in space and in time." [13]


5.6.2 Further study on the non-arbitered scheme

To study the effects of the pulse width T_p in the non-arbitered communication scheme, we compare the response time for different T_p: T_p = 0.01 and T_p = 0.005. The results are shown in Fig. 5.10.

The figure shows that T_p is critical in determining the response time of the pulse coupled network. The smaller the T_p, the smaller the response time. This is easy to understand, because T_p determines the capacity of the pulse communication channel. Also, the optimal threshold θ shifts to the left as T_p decreases: a decrease in T_p lessens the collision rate effects for the same θ when doing the selection.

Fig. 5.10 Response time versus threshold under the non-arbitered scheme for T_p = 0.01 and T_p = 0.005


The degradation in response time due to collisions can be explained by a gain loss analysis. The loss of the pulses can be regarded as a loss in the recurrent strengths a, b.

When the neurons fire frequently enough, the pulse coupled network can be treated as a continuous network with smaller recurrent strengths, which compensates for the loss due to collisions. In this way, we have different equivalent a, b for different collision rates.

The equivalent recurrent strengths are

a* = b* = 0.5·(1 − p_l),

where p_l denotes the average collision rate.

To work in the WTA regime, the equivalent recurrent strengths must satisfy b* > (1 − a*)(1 − δ), which can be reduced to

a* > (1 − δ)/(2 − δ) = 0.9/1.9 = 0.4737.

We study how the response time changes with the normalized average equivalent recurrent strength a*/a when the firing rate is high, and compare the simulation results with those of the continuous model (related data are listed in Table 2 in the appendix). As shown in Fig. 5.11, the non-arbitered pulse coupled model exhibits the same qualitative characteristics as the continuous model; although they are not matched well, they have the same trend. It is necessary to note that the equivalent recurrent strengths vary with time in the non-arbitered pulse coupled model, while the strengths in the continuous model are kept constant. This may account for some of the mismatch.
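A small helper along these lines (illustrative C++, not from the thesis) maps an average collision rate p_l to the equivalent recurrent strength and checks the WTA condition derived above; for δ = 0.1 the network leaves the WTA regime once p_l exceeds roughly 5%.

#include <cstdio>

// Equivalent recurrent strength under an average collision rate p_l,
// assuming nominal strengths a = b = 0.5 as in the thesis example.
double equivalent_gain(double p_l) { return 0.5 * (1.0 - p_l); }

// WTA condition on the equivalent strengths: a* > (1 - delta)/(2 - delta).
bool stays_in_wta(double p_l, double delta) {
    return equivalent_gain(p_l) > (1.0 - delta) / (2.0 - delta);
}

int main() {
    double delta = 0.1;
    for (double p_l = 0.0; p_l <= 0.10; p_l += 0.02)
        std::printf("p_l = %.2f  a* = %.4f  WTA: %s\n",
                    p_l, equivalent_gain(p_l), stays_in_wta(p_l, delta) ? "yes" : "no");
    return 0;
}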


We study how the response time changes with the normalized average equivalent recurrent strength a*/a when the firing rate is high, and compare the simulation results with those of the continuous model (the related data are listed in Table 2 in the appendix). As shown in Fig. 5.11, the non-arbitered pulse coupled model exhibits the same qualitative characteristics as the continuous model; although the two curves do not match closely, they follow the same trend. It is necessary to note that the equivalent recurrent strengths vary with time in the non-arbitered pulse coupled model, whereas the strengths in the continuous model are kept constant. This may account for some of the mismatch.

Fig. 5.11 Response time versus normalized equivalent gain, for the continuous and the non-arbitered pulse coupled models

This suggests that when the collision rate is high enough, the pulse coupled neural network can be viewed as a continuous network with weaker recurrent strengths, and this effect then dominates the response time of the pulse coupled network.

Collision analysis is therefore important in selecting the design parameters; we investigate this issue in the following section.

5.7 Collision Analysis

It is reasonable to model the spiking process as a Poisson process for a large population of independently firing cells, and this has been experimentally verified for feed-forward connections by Mortara [12]. But does it still hold for a pulse coupled recurrent network with feedback connections? Can we find a way to refine the model such that it becomes even closer to the real process?


In the following discussions, we define the collision rate as the fraction of the potential pulses that is lost due to collisions:

    p_col = (number of pulses lost) / (total number of pulses that should be transmitted).

The throughput is defined as the usable fraction of the channel capacity.

5.7.1 Poisson model

Define the point process as "beginning of a pulse emission anywhere in the network". By modelling this process as a Poisson process whose rate is determined by the average activity in the network, we can make the following performance estimates. Consider a network containing N cells. Let α·f_0 be the average cell pulse rate, where f_0 is the frequency corresponding to the maximum activity and α is the average activity of the network, 0 < α < 1.

• Collision rate

The Poisson model can only provide an average collision rate of the network over all the neurons. To transmit a spike whose duration is T_p without collision, the previous spike must occur at least T_p earlier and the next spike must occur at least T_p later; otherwise at least part of the spike will be lost. Therefore the average probability of a safe emission is


    p = (1/T_p) ∫_0^{T_p} e^{−Nαf_0(τ + T_p)} dτ = e^{−λ}(1 − e^{−λ})/λ,

where λ = Nαf_0·T_p. If λ ≪ 1, then the above equation becomes p ≈ e^{−λ}.

The average collision rate is

    p_col = 1 − p = 1 − e^{−λ}(1 − e^{−λ})/λ.

If λ ≪ 1, p_col ≈ 1 − e^{−λ}.

• Throughput

Assuming the spiking neurons are described by Poisson processes, the throughput is

    S = λp = e^{−λ}(1 − e^{−λ}),

since the probability of a safe transmission is p = e^{−λ}(1 − e^{−λ})/λ. The throughput can be approximated by S ≈ λe^{−λ} if λ ≪ 1.
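As a quick numerical illustration of these formulas (a sketch, not code from the thesis), the snippet below evaluates the Poisson estimates of the collision rate and throughput for a few arbitrary example values of λ, together with the small-λ approximation.

    # Quick illustration (not thesis code) of the Poisson-model estimates above;
    # the values of lambda are arbitrary examples.
    import math

    def poisson_collision_rate(lam):
        """p_col = 1 - e^-lam (1 - e^-lam) / lam."""
        return 1.0 - math.exp(-lam) * (1.0 - math.exp(-lam)) / lam

    def poisson_throughput(lam):
        """S = lam * p = e^-lam (1 - e^-lam)."""
        return math.exp(-lam) * (1.0 - math.exp(-lam))

    if __name__ == "__main__":
        for lam in (0.05, 0.1, 0.5, 1.0):   # lam = N * alpha * f0 * Tp
            print(lam,
                  round(poisson_collision_rate(lam), 4),
                  round(1.0 - math.exp(-lam), 4),     # small-lambda approximation
                  round(poisson_throughput(lam), 4))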


5.7.2 Periodic model

According to the operation of the pulse coupled model, two spikes from the same cell never collide with each other. However, if the process is modelled as a Poisson process, every spike is treated equally, no matter which neuron it comes from. Therefore, we should get a more accurate collision analysis if this characteristic is added to the model. A periodic model is presented below for collision analysis.

Suppose that in a certain time interval all neurons can be assumed to fire periodically with periods T_i respectively, and that these pulse streams are independent, as shown in Fig. 5.12.

Fig. 5.12 The periodic model: N independent pulse streams s_1, …, s_N of amplitude A, pulse width T_p and periods T_1, T_2, …, T_N

Suppose ϕ_i = T_p·f_i = T_p/T_i denotes the fraction of time for which stream i is active. Sampling the streams at a random instant, the amplitudes are then distributed as


    p(s_i) = (1 − ϕ_i) δ(s_i)   for s_i = 0,
    p(s_i) = ϕ_i δ(s_i − A)     for s_i = A,

where p(s_1, s_2, …, s_N) is the multidimensional probability density function of the amplitudes of all the pulse streams and δ(s) is the Dirac delta function.

• Collision rate

For the periodic model, the probability that a spike from neuron i collides with other spikes is

    p_c(i) = 1 − ∏_{j≠i} (1 − ϕ_j).

From the above equation, the spikes from the most active neuron have the smallest collision rate, since p_c(i) is minimum when ϕ_i > ϕ_j for all j ≠ i. The most inactive neuron has the largest collision rate.

Assume that the average firing rate of every neuron is the same:

    f_1 = f_2 = … = f_N = f.

We then get the simple relationship

    p_c = 1 − (1 − ϕ)^{N−1},

where ϕ = T_p·f. If T_p·f ≪ 1 and N ≫ 1, we have


    p_c ≈ 1 − e^{−(N−1)T_p f},

which is very close to the estimate given by the Poisson model when N → ∞.

If the average firing rate of every neuron except the winner is the same,

    f_2 = f_3 = … = f_N = f,

then the collision rate of the winner is

    p_cw = 1 − (1 − T_p f)^{N−1}.

If T_p·f ≪ 1 and N ≫ 1, we obtain

    p_cw ≈ 1 − e^{−(N−1)T_p f}.

• Throughput

The throughput of the communication channel is

    S = Σ_{i=1}^{N} ϕ_i ∏_{j≠i} (1 − ϕ_j).

A numerical comparison of the periodic and Poisson estimates is sketched below.
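The following sketch is illustrative only; the firing rates used are hypothetical placeholders rather than measured values. It evaluates the periodic-model collision rates of a winner and a loser, the periodic-model throughput, and the corresponding Poisson-model network average, using the formulas above.

    # Sketch (illustrative; hypothetical rates) comparing the periodic-model
    # collision rates of a winner and a loser with the Poisson-model average.
    import math

    def periodic_collision_rate(i, rates, Tp):
        """p_c(i) = 1 - prod_{j != i} (1 - Tp * f_j)."""
        p_safe = 1.0
        for j, f in enumerate(rates):
            if j != i:
                p_safe *= (1.0 - Tp * f)
        return 1.0 - p_safe

    def periodic_throughput(rates, Tp):
        """S = sum_i phi_i * prod_{j != i} (1 - phi_j), with phi_i = Tp * f_i."""
        return sum(Tp * f * (1.0 - periodic_collision_rate(i, rates, Tp))
                   for i, f in enumerate(rates))

    def poisson_collision_rate(rates, Tp):
        """Average collision rate from the Poisson model, lambda = Tp * sum(f)."""
        lam = Tp * sum(rates)
        return 1.0 - math.exp(-lam) * (1.0 - math.exp(-lam)) / lam

    if __name__ == "__main__":
        Tp = 0.01
        rates = [10.0] + [1.0] * 9       # one active "winner", nine slow "losers"
        print("winner  p_c:", round(periodic_collision_rate(0, rates, Tp), 4))
        print("loser   p_c:", round(periodic_collision_rate(1, rates, Tp), 4))
        print("Poisson p_c:", round(poisson_collision_rate(rates, Tp), 4))
        print("throughput :", round(periodic_throughput(rates, Tp), 4))

With these placeholder rates the winner's predicted collision rate is markedly lower than the losers', while the Poisson model can only return a single network-wide average, which is the distinction discussed in the text.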


5.7.3 Simulation results

We perform computer simulations for T_p = 0.01. The average firing rates and collision rates are measured and listed in Table 2 of the appendix. According to the results, as θ increases, the collision rate decreases and the normalized equivalent recurrent strengths become closer to one. Consequently, the response time is no longer dominated by the collision rate but by the neural latency.

These results show that the collision rate of the winner is crucial to the performance of the system, in terms of both the operation regime and the response time. They also show that the winner's collision rate p_cw is consistently smaller than the losers' collision rate p_cl (see Table 2).


Fig. 5.14 The average collision rate of the loser versus threshold: measured, estimated by the periodic model, and estimated by the Poisson model

It turns out that the periodic model gives better estimates than the Poisson model, especially for the winner. The estimates of the winner's collision rate by the periodic model match the measured values almost perfectly. When the average collision rate is high enough, the estimates of the loser's collision rate by the periodic model still give a smaller error than the Poisson model; however, when the collision rate is quite small, the estimates by the periodic model show a larger error. It must be pointed out that we care about the collision rate only when the average collision rate is fairly large, since otherwise the performance of the system is no longer governed by the collision rate. For this reason, the periodic model is preferable to the Poisson model.


5.7.4 Summary

Compared with the Poisson model, the periodic model tends to reduce the randomness by excluding collisions between two spikes from the same neuron, and it is a better model for collision analysis, especially for small networks or networks containing different populations of neurons.

The simulation results show that the more active the neuron, the lower its collision rate. In other words, the pulse based communication scheme favours the active neurons rather than the passive ones, which is the beauty of the AER representation as well: more channel capacity is allocated to the more active cells. This phenomenon is consistent with the theoretical analysis based on the periodic model.

The difference between the two models is remarkable only when the network size is small or when the neurons can be classified into different populations according to their degree of activity. When the network size is large enough and all the neurons in the network exhibit similar activities, there is only a nuance between them.

To scale up the network size N, we can simply scale down the pulse width T_p while keeping the same collision rate; thus the same performance can be preserved. However, T_p is limited by the minimum pulse duration Δ, which is necessary to generate a suitable spike. Therefore the network size N is limited by Δ, with other conditions unchanged.

As shown, for the given pulse coupled WTA network with N = 10, we can find a suitable threshold to achieve the desired competitive behavior if T_p = 0.01. It is reasonable to expect that the same qualitative behavior can be achieved for a network size


of up to N_max = N·T_p/Δ, where T_p is expressed in absolute time. If Δ = 100 ns and τ = 10 ms (so that T_p = 0.01 corresponds to 100 μs), then N_max = 10000, which is large enough for normal applications.
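A quick numerical check of this estimate, using the values quoted above (a sketch, not thesis code):

    # Worked check of the scaling estimate N_max = N * Tp / delta.
    tau = 10e-3            # time constant: 10 ms
    Tp_normalized = 0.01   # pulse width normalized by tau
    N = 10                 # simulated network size
    delta_min = 100e-9     # minimum pulse duration: 100 ns

    Tp_absolute = Tp_normalized * tau                 # 100 microseconds
    N_max = int(N * Tp_absolute / delta_min)          # = 10000
    print(N_max)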


6. Conclusions

In this thesis, a class of pulse coupled WTA neural networks is investigated. We find that it qualitatively preserves some of the properties of its continuous counterpart; however, the discrete coding cannot always guarantee the same qualitative behavior of the network. Hence the selection of design parameters is critical. By selecting the parameters properly, it is feasible to implement pulse coupled WTA neural networks at low cost while preserving the same competitive behavior as the continuous networks.

To summarize, the following parameters are involved:

I_1: the maximum external input;
δ_1: the minimum resolution required;
a: the self-excitatory recurrent strength;
b: the lateral inhibition strength;
τ: the time constant;
N: the network size;
θ: the threshold of the integrate-and-fire neuron, normalized by τ;
A: the amplitude of the pulse;
T_p: the duration of the pulse at the receiver, normalized by τ;
t_s: the response time, normalized by τ.


Define θ_m as the solution to

    I_1(1 − δ)(e^{θ(1 − a)/I_1} − 1) = bθ.

Define θ_h(T_p) as the solution to

    I_1(1 − δ)(e^{θ(1 − a)/I_1} − 1) = bθ(e^{θ/A} − 1),  with A = θ/T_p.

Define A_min = I_1/(1 − a) and θ_l(T_p) = A_min·T_p.

Constraints on selecting the design parameters are given as follows:

• To make the model work in the WTA regime, a and b have to satisfy

    a < 1,  (1 − a)(1 − δ_1) < b < (1 − a)/(1 − δ_1).

• Given T_p, the necessary condition on the threshold is θ_l(T_p) < θ < θ_h(T_p).

A numerical sketch of the simpler of these checks is given below.
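The following sketch is illustrative only: the parameter values are assumed, and it covers only the closed-form bounds, not the transcendental equations that define θ_m and θ_h.

    # Minimal sketch (assumed example values): check the a, b window for the WTA
    # regime and compute the lower threshold bound theta_l(Tp) = A_min * Tp.

    def wta_gain_window(a, delta_1):
        """Admissible range of b: (1 - a)(1 - delta_1) < b < (1 - a)/(1 - delta_1)."""
        return (1.0 - a) * (1.0 - delta_1), (1.0 - a) / (1.0 - delta_1)

    def theta_lower_bound(I1, a, Tp):
        """theta_l(Tp) = A_min * Tp with A_min = I1 / (1 - a)."""
        return I1 / (1.0 - a) * Tp

    if __name__ == "__main__":
        a, b, delta_1, I1, Tp = 0.5, 0.5, 0.1, 1.0, 0.01   # assumed example values
        b_lo, b_hi = wta_gain_window(a, delta_1)
        assert a < 1.0 and b_lo < b < b_hi, "a, b outside the WTA regime"
        print("theta must exceed theta_l(Tp) =", theta_lower_bound(I1, a, Tp))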


In this thesis, we consider only one class of WTA neural networks. However, the analysis and results should be applicable to more complex networks and lead to a successful implementation of a versatile visual system. While applications of interchip communication are still in their early stages, the results so far are quite promising: not only feed-forward but also feedback interchip communication can be implemented in VLSI. "Perhaps the foundation of intelligence is our ability to communicate ideas" [23]. If we can match nature's communication efficiency, a large scale behaving system is well within reach.


7. Appendix

7.1 Proof for Properties of the Continuous WTA Model

For a class of networks connected in the same way as Maxnet, with excitatory self-feedback and inhibitory lateral connections, the dynamics can be described as

    τ v̇_i(t) = −v_i(t) + I_i + a f_i(t) − b Σ_{j≠i} f_j(t),

where f_i(t) = max(v_i(t), 0), for all i = 1, 2, …, N and a, b > 0.

Assume that the inputs can be arranged in a strictly descending order, i.e., I_{π1} > I_{π2} > … > I_{πN} for a suitable index set {π_1, π_2, …, π_N}, and that the recurrent strengths a, b are chosen such that the network exhibits WTA behavior, that is, 0 < a < 1 and (1 − a)(1 − δ_1) < b < (1 − a)/(1 − δ_1), where δ_1 = (I_{π1} − I_{π2})/I_{π1}.

Lemma 1: ∀ε > 0, ∀i = 1, …, N, ∃T_i < ∞, ∀t > T_i, f_i(t) < I_i/(1 − a) + ε.

Proof: For each neuron in the network, its dynamics obey

    τ v̇_i(t) = −v_i(t) + I_i + a f_i(t) − b Σ_{j≠i} f_j(t) ≤ −v_i(t) + I_i + a f_i(t).

For the right hand side of the inequality, we obtain that


    v_i(t) ≤ (I_i/(1 − a))(1 − e^{−t/τ}) + v_i(0) e^{−t/τ}.

If v_i(0) < I_i/(1 − a), let T_i = 0; then ∀t > T_i, f_i(t) < I_i/(1 − a).

If v_i(0) > I_i/(1 − a), let T_i = τ ln{[v_i(0) − I_i/(1 − a)]/ε}; then ∀t > T_i, f_i(t) < I_i/(1 − a) + ε.

Lemma 2: ∀i, j ∈ {1, …, N}, if I_i > I_j and v_i(0) ≥ v_j(0), then ∀t > 0, v_i(t) > v_j(t).

Proof:

    τ d/dt [v_i(t) − v_j(t)] = −[v_i(t) − v_j(t)] + (I_i − I_j) + (a + b)[f_i(t) − f_j(t)]

There are three cases to be considered.

Case I: v_i(t) ≥ v_j(t) > 0

    τ d/dt [v_i(t) − v_j(t)] = (a + b − 1)[v_i(t) − v_j(t)] + (I_i − I_j)

If a + b − 1 ≥ 0, then τ d/dt [v_i(t) − v_j(t)] > I_i − I_j > 0 ⇒ v_i(t) − v_j(t) > v_i(0) − v_j(0) ≥ 0;

If a + b − 1 < 0, then

    v_i(t) − v_j(t) = [(I_i − I_j)/(1 − a − b)](1 − e^{−(1 − a − b)t/τ}) + [v_i(0) − v_j(0)] e^{−(1 − a − b)t/τ} > 0.


Case II: 0 > v_i(t) ≥ v_j(t)

    τ d/dt [v_i(t) − v_j(t)] = −[v_i(t) − v_j(t)] + (I_i − I_j)
    v_i(t) − v_j(t) = (I_i − I_j)(1 − e^{−t/τ}) + [v_i(0) − v_j(0)] e^{−t/τ} > 0

Case III: v_i(t) ≥ 0, v_j(t) ≤ 0

    τ d/dt [v_i(t) − v_j(t)] = −[v_i(t) − v_j(t)] + (I_i − I_j) + (a + b) v_i(t)
    τ d/dt [v_i(t) − v_j(t)] > −[v_i(t) − v_j(t)] + (I_i − I_j) ⇒ v_i(t) − v_j(t) > 0

∴ v_i(t) > v_j(t).

Lemma 3: ∀i, j ∈ {1, …, N}, if v_i(0) ≥ v_j(0) and (I_i − I_j)/I_i ≥ δ, then ∃T_j < ∞, ∀t > T_j, v_j(t) < 0.

Proof:

    τ d/dt [v_i(t) − v_j(t)] = −[v_i(t) − v_j(t)] + (I_i − I_j) + (a + b)[f_i(t) − f_j(t)]

There are three cases to be considered.

Case I: v_i(t) ≥ v_j(t) > 0

    τ d/dt [v_i(t) − v_j(t)] = (a + b − 1)[v_i(t) − v_j(t)] + (I_i − I_j)


If a + b − 1 ≥ 0, then τ d/dt [v_i(t) − v_j(t)] > I_i − I_j > 0. From Lemma 1 there is an upper limit on v_i(t), while [v_i(t) − v_j(t)] keeps increasing as long as v_j(t) > 0; so there must exist T_j < ∞ such that ∀t > T_j, v_j(t) < 0.

If a + b − 1 < 0, then

    v_i(t) − v_j(t) = [(I_i − I_j)/(1 − a − b)](1 − e^{−(1 − a − b)t/τ}) + [v_i(0) − v_j(0)] e^{−(1 − a − b)t/τ}.

Since b > (1 − a)(1 − δ) ⇒ 1 − a − b < (1 − a)δ,

    (I_i − I_j)/(1 − a − b) > (I_i − I_j)/[(1 − a)δ] ≥ I_i δ/[(1 − a)δ] = I_i/(1 − a)
    ⇒ ∃ε_1 > 0 such that (I_i − I_j)/(1 − a − b) = I_i/(1 − a) + ε_1.

From Lemma 1, ∀ε > 0, ∃T_i < ∞, ∀t > T_i, v_i(t) < I_i/(1 − a) + ε, so

    ∀ε_2 < ε_1, ∃T_i < ∞, ∀t > T_i, v_i(t) < I_i/(1 − a) + ε_2,
    v_j(t) ≤ ε_2 − ε_1 + (I_i/(1 − a) + ε_1) e^{−(1 − a − b)t/τ} − [v_i(0) − v_j(0)] e^{−(1 − a − b)t/τ}.

For the right hand side of the inequality, let

    T_j = [τ/(1 − a − b)] ln{[I_i/(1 − a) + ε_1]/(ε_1 − ε_2)};

then ∀t > T_j, v_j(t) < 0.

Case II: 0 > v_i(t) ≥ v_j(t)


    τ d/dt [v_i(t) − v_j(t)] = −[v_i(t) − v_j(t)] + (I_i − I_j),

which shows that v_j(t) will either stay below zero or the system returns to Case I.

Case III: v_i(t) > 0, v_j(t) < 0

    τ d/dt [v_i(t) − v_j(t)] = −[v_i(t) − v_j(t)] + (I_i − I_j) + (a + b) v_i(t),

which shows that v_j(t) will either stay below zero or the system returns to Case I.

Lemma 4: ∀i, j ∈ {1, …, N}, if v_i(0) < v_j(0) and (I_i − I_j)/I_i ≥ δ, then ∃T_i < ∞, ∀t > T_i, v_i(t) > v_j(t).

Proof:

    τ d/dt [v_i(t) − v_j(t)] = −[v_i(t) − v_j(t)] + (I_i − I_j) + (a + b)[f_i(t) − f_j(t)]

There are three cases to be considered.

Case I: v_j(t) > v_i(t) > 0

    τ d/dt [v_i(t) − v_j(t)] = (a + b − 1)[v_i(t) − v_j(t)] + (I_i − I_j)

If a + b − 1 ≤ 0, then τ d/dt [v_i(t) − v_j(t)] > I_i − I_j > 0; as long as v_j(t) ≥ v_i(t), τ d/dt [v_i(t) − v_j(t)] > I_i − I_j > 0 will always hold, so there must exist T_i < ∞ such that ∀t > T_i, v_i(t) > v_j(t).


If a + b − 1 > 0: since v_i(t) > 0, ∃ε_1 > 0 such that v_i(t) ≥ ε_1. From Lemma 1, ∀ε > 0, ∃T_j < ∞, ∀t > T_j, v_j(t) < I_j/(1 − a) + ε, so ∀ε_2 < ε_1, ∃T_j < ∞, ∀t > T_j, v_j(t) < I_j/(1 − a) + ε_2. Hence

    τ d/dt [v_i(t) − v_j(t)] > (1 − a − b) I_j/(1 − a) + (I_i − I_j) + (a + b − 1)(ε_1 − ε_2)   for all t > T_j.

Since b < (1 − a)/(1 − δ) ⇒ 1 − a − b > (a − 1)δ/(1 − δ), and I_j ≤ I_i(1 − δ),

    τ d/dt [v_i(t) − v_j(t)] > [(a − 1)δ/(1 − δ)]·[I_i(1 − δ)/(1 − a)] + I_i δ + (a + b − 1)(ε_1 − ε_2)
    ∴ τ d/dt [v_i(t) − v_j(t)] > (a + b − 1)(ε_1 − ε_2) > 0.

So there must exist T_i, with T_j < T_i < ∞, such that ∀t > T_i, v_i(t) > v_j(t).

Case II: 0 > v_j(t) > v_i(t)

    τ d/dt [v_i(t) − v_j(t)] = −[v_i(t) − v_j(t)] + (I_i − I_j) > I_i − I_j > 0

Case III: v_j(t) > 0, v_i(t) < 0

    τ d/dt [v_i(t) − v_j(t)] = −[v_i(t) − v_j(t)] + (I_i − I_j) − (a + b) v_j(t)
                             = −v_i(t) + (I_i − I_j) + (1 − a − b) v_j(t)


If 1 − a − b ≥ 0, then τ d/dt [v_i(t) − v_j(t)] > I_i − I_j > 0.

If 1 − a − b < 0: since −v_i(t) > 0, ∃ε_1 > 0 such that −v_i(t) ≥ ε_1. From Lemma 1, ∀ε > 0, ∃T_j < ∞, ∀t > T_j, v_j(t) < I_j/(1 − a) + ε, we obtain that ∃ε_2 < ε_1, ∃T_j < ∞, ∀t > T_j, v_j(t) < I_j/(1 − a) + ε_2/(a + b − 1). Hence

    τ d/dt [v_i(t) − v_j(t)] > (1 − a − b) I_j/(1 − a) + (I_i − I_j) + (ε_1 − ε_2)   for all t > T_j.

Since b < (1 − a)/(1 − δ) ⇒ 1 − a − b > (a − 1)δ/(1 − δ), and I_j ≤ I_i(1 − δ),

    τ d/dt [v_i(t) − v_j(t)] > [(a − 1)δ/(1 − δ)]·[I_i(1 − δ)/(1 − a)] + I_i δ + (ε_1 − ε_2)
    ∴ τ d/dt [v_i(t) − v_j(t)] > (ε_1 − ε_2) > 0.

To summarize, ∃T_i < ∞, ∀t > T_i, v_i(t) > v_j(t).
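As an illustrative cross-check of the continuous-model properties proved above (not part of the thesis), the sketch below integrates the dynamics with forward Euler for assumed example inputs and gains that satisfy the WTA constraints, and prints the final activities; only the neuron with the largest input should remain positive.

    # Illustrative numerical cross-check (assumed parameters, not thesis code):
    # forward-Euler integration of tau*dv_i/dt = -v_i + I_i + a*f_i - b*sum_{j!=i} f_j.
    import numpy as np

    def simulate_continuous_wta(I, a=0.5, b=0.5, tau=1.0, dt=1e-3, steps=20000):
        v = np.zeros_like(I)
        for _ in range(steps):
            f = np.maximum(v, 0.0)
            lateral = f.sum() - f                 # sum of f_j over j != i
            v = v + (dt / tau) * (-v + I + a * f - b * lateral)
        return v

    if __name__ == "__main__":
        I = np.array([1.0, 0.9, 0.8, 0.7])        # delta_1 = 0.1 for these inputs
        v = simulate_continuous_wta(I)
        print(np.maximum(v, 0.0))                 # winner converges to I_1/(1 - a)

With a = b = 0.5 and these inputs, the constraints a < 1 and (1 − a)(1 − δ_1) < b < (1 − a)/(1 − δ_1) hold, and the winner's activity settles at I_1/(1 − a) = 2 while the other activities are suppressed, as the lemmas predict.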


7.2 Proof for Properties of the Pulse Coupled WTA Model

The proposed pulse coupled model can be described as

    v̇_i(t) = −v_i(t) + I_i + a s_i(t) − b Σ_{j≠i} s_j(t),
    V̇_i(t) = max(v_i(t), 0),

where V_i(t) is a non-leaky integrator and s_i(t) = A[u(t − t_ij) − u(t − t_ij − T_p)] represents a pulse of fixed width and fixed height. Here u(t) is the step function and {t_ij} is the time sequence at which V_i(t) reaches θ, with θ = A·T_p; at those instants V_i(t) is reset to zero.

Define the peak and bottom values of each neuron in steady state as v_i^max and v_i^min respectively, and let ṽ_i(t) be the average of v_i(t) between two successive spikes, as depicted in Figure 5.4:

    ṽ_i(t) = [1/(T_{n+1} − T_n)] ∫_{T_n}^{T_{n+1}} v_i(τ) dτ,   where t ∈ (T_n, T_{n+1}].

Lemma 1: ∀ε > 0, ∀i = 1, …, N, ∃T_i < ∞, ∀t > T_i, ṽ_i(t) < I_i/(1 − a) + ε.

Proof:

    v̇_i(t) = −v_i(t) + I_i + a s_i(t) − b Σ_{j≠i} s_j(t) < −v_i(t) + I_i + a s_i(t)

Consider the steady state of the right hand side of the inequality:

    v_i^max = (I_i + aA)(1 − e^{−T_p/τ}) + v_i^min e^{−T_p/τ},
    v_i^min = I_i(1 − e^{−(T − T_p)/τ}) + v_i^max e^{−(T − T_p)/τ},


    v_i^max = I_i + aA(1 − e^{−T_p/τ})/(1 − e^{−T/τ}),
    v_i^min = I_i + aA e^{−(T − T_p)/τ}(1 − e^{−T_p/τ})/(1 − e^{−T/τ}),

    θ = ∫_0^{T_p} [(I_i + aA)(1 − e^{−t/τ}) + v_i^min e^{−t/τ}] dt
        + ∫_{T_p}^{T} [I_i(1 − e^{−(t − T_p)/τ}) + v_i^max e^{−(t − T_p)/τ}] dt = I_i T + aA T_p.

Since θ = A·T_p, it follows that ṽ_i = I_i/(1 − a).

7.3 Data for collision analysis

For T_p = 0.01, the average firing rates and collision rates are measured through the simulation and listed in Table 2. N·f̄ and f_w denote the total average firing rate of the network and the firing rate of the winner; p_ct, p_cl and p_cw denote the average collision rates of all the neurons, of the losers, and of the winner; a_l*/a and a_w*/a denote the average normalized equivalent recurrent strengths for the losers and the winner respectively.

TABLE 2. Collision analysis

θ     t_s    N·f̄    f_w    p_ct(%)  p_cl(%)  p_cw(%)  a_l*/a(%)  a_w*/a(%)
0.05  -      52.76  10.89  38.89    39.95    34.83    60.05      65.17
0.10  93.38  20.60  10.28  14.72    19.99     9.43    80.01      90.57
0.11  68.84  18.48   9.77  13.24    19.03     8.08    80.97      91.92
0.12  58.22  16.74   9.06  11.94    17.53     7.20    83.47      92.80
0.13  46.79  15.27   8.33  10.57    15.73     6.27    84.23      93.73
0.14  45.74  14.05   7.81   9.54    14.46     5.61    85.54      93.39
0.15  42.24  13.14   7.18   9.72    14.56     5.70    85.44      94.30
0.16  39.13  12.17   7.06   8.30    13.30     4.68    86.70      96.32
0.17  39.09  11.47   6.50   8.38    13.08     4.79    86.92      96.21
0.18  37.45  10.79   6.30   7.86    12.91     4.26    87.09      96.74
0.19  36.77  10.20   5.86   7.54    12.01     4.23    87.99      96.77
0.20  34.21   9.60   5.68   6.37    10.57     3.47    89.43      96.53
0.25  35.26   7.67   4.63   5.69     9.68     3.07    90.32      96.93
0.30  39.94   6.39   3.99   5.11     8.85     2.86    91.15      97.14
0.35  45.93   5.48   3.56   4.52     8.14     2.57    91.86      97.43
0.40  51.80   4.80   3.27   3.62     7.17     1.96    92.83      98.08


8. References

1. John P. F. Sum, Chi-Sing Leung, Peter K. S. Tam, "Analysis for a Class of Winner-Take-All Model", IEEE Trans. Neural Networks, Vol. 10, No. 1, Jan. 1999.
2. Kwabena Boahen, "Retinomorphic Vision Systems", Proceedings of MicroNeuro'96.
3. Charles M. Higgins and Christof Koch, "Multi-chip Neuromorphic Motion Processing", Proceedings of the 20th Anniversary Conference on Advanced Research in VLSI, IEEE Comput. Soc., 1999, pp. 309-323, Los Alamitos, CA, USA.
4. Peter Adorjan, Lars Schwabe, "Recurrent cortical competition: Strengthen or weaken?", Advances in Neural Information Processing Systems 12, MIT Press, Cambridge, MA, 2000.
5. Zhaoping Li, "A Neural Model of Contour Integration in the Primary Visual Cortex", Neural Computation 10, 903-940 (1998).
6. R. L. T. Hahnloser, "On the piecewise analysis of networks of linear threshold neurons", Neural Networks 11 (1998) 691-697.
7. Tetsuya Asai, Masashiro Ohtani, and Hiroo Yonezu, "Analog Integrated Circuits for the Lotka-Volterra Competitive Neural Networks", IEEE Trans. Neural Networks, Vol. 10, No. 5, Sep. 1999.
8. J. Lazzaro, S. Ryckebusch, M. A. Mahowald, and C. A. Mead, "Winner-take-all Networks of O(N) Complexity", Advances in Neural Information Processing Systems 1, 1988.
9. T. G. Morris and S. P. DeWeerth, "Analog VLSI Excitatory Feedback Circuits for Attention Shifts and Tracking", Analog Integrated Circuits and Signal Processing, 13, 79-91 (1997).


10. "WTA networks with lateral excitation", Analog Integrated Circuits & Signal Processing, Vol. 13, No. 1-2, May-June 1997, pp. 185-193.
11. "CMOS Current Mode WTA Circuit with distributed…", Electronics Letters, Vol. 31, No. 13, 22 June 1995, pp. 1051-1053.
12. Alessandro Mortara, Eric A. Vittoz and Philippe Venier, "A Communication Scheme for Analog VLSI Perceptive Systems", IEEE Journal of Solid-State Circuits, Vol. 30, No. 6, June 1995.
13. Maass and Bishop (eds.), Pulsed Neural Networks, Chapters 6 & 7, MIT Press, 1999.
14. Kwabena A. Boahen, "Point-to-Point Connectivity Between Neuromorphic Chips using Address-Events", IEEE Trans. on Circuits & Systems, 1999.
15. John Lazzaro and John Wawrzynek, "A Multi-Sender Asynchronous Extension to the AER Protocol", Proceedings of the Sixteenth Conference on Advanced Research in VLSI, IEEE Comput. Soc. Press, 1995, pp. 158-169, Los Alamitos, CA, USA.
16. Jan-Tore Marienborg, Tor Sverre Lande, "Neuromorphic Analog Communication", ICNN'96, The 1996 IEEE International Conference on Neural Networks, IEEE, Part Vol. 2, 1996, pp. 920-925, New York, NY, USA.
17. A. Mortara and E. A. Vittoz, "A communication architecture tailored for analog VLSI neural networks: Intrinsic performance and limitation", IEEE Trans. Neural Networks, Vol. 5, No. 3, pp. 459-466, May 1994.
18. P. Venier, A. Mortara, X. Arreguit, and E. Vittoz, "An Integrated Cortical Layer for Orientation Enhancement", IEEE Journal of Solid-State Circuits, 32(2):177-186, Feb. 1997.
19. Robert W. Adams, "Spectral Noise-Shaping in Integrate-and-Fire Neural Networks", 1997 IEEE International Conference on Neural Networks, Proceedings, IEEE, Part Vol. 2, 1997, pp. 953-958, New York, NY, USA.
20. Alessandro Mortara, "A Pulsed Communication/Computation Framework for Analog VLSI Perceptive Systems", Analog Integrated Circuits & Signal Processing, Vol. 13, No. , May/June 1997.


21. Wolfgang Maass, "Networks of Spiking Neurons: The Third Generation of Neural Network Models", Neural Networks, Vol. 10, No. 8, pp. 1659-1671, 1997.
22. L. M. Reyneri, "Theoretical and Implementation Aspects of Pulsed Streams: an Overview", Proceedings of the Seventh International Conference on Microelectronics for Neural, Fuzzy and Bio-inspired Systems, IEEE Comput. Soc., 1999, pp. 78-89, Los Alamitos, CA, USA.
23. Thomas Lindblad and Jason M. Kinser, Image Processing using Pulse-Coupled Neural Networks, Springer-Verlag London Limited, 1998.
24. K. Boahen, NSF Neuromorphic Engineering Workshop Report, Telluride, CO, 1996.
