
Bernal S D_2010.pdf - University of Plymouth


3.4. EXISTING MODELS

3.4.1.3 Local inference circuits model

Concurrently with the publication of the previous model, Litvak and Ullman (2009) published an alternative implementation of belief propagation using spiking neurons. The latter employs populations of standard leaky integrate-and-fire neurons to implement the belief revision algorithm in pairwise Markov random fields. Belief revision is analogous to belief propagation except that it replaces the sum operation with the max operation, i.e. it uses the max-product instead of the sum-product algorithm, thus obtaining the maximum-a-posteriori estimate (also called the most probable explanation) instead of the posterior marginal probability. Additionally, the algorithm is implemented in the log domain, which leads to a final neuronal implementation based on a max-sum scheme, called belief consolidation. Pairwise Markov random fields are a type of undirected graphical model, which share many properties with directed graphical models (Bayesian networks), but are not interchangeable (see Section 3.J.3).
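The contrast between the two message updates can be sketched as follows. This is an illustrative example with hypothetical two-state potentials (`log_phi_i`, `log_psi_ij` are invented values), not the authors' implementation: the max-sum message keeps the single best source-state configuration (in the log domain), while the sum-product message sums over all of them.

```python
import numpy as np

# Hypothetical two-state example: message from node i to node j.
# phi_i: local evidence at node i; psi_ij[s_i, s_j]: pairwise potential.
log_phi_i = np.log(np.array([0.7, 0.3]))
log_psi_ij = np.log(np.array([[0.9, 0.1],
                              [0.2, 0.8]]))

def max_sum_message(log_phi, log_psi, log_incoming):
    """Belief-revision (max-sum) message in the log domain: for each
    target state, take the max over source states of the summed
    log-potentials, yielding the MAP-oriented update."""
    return np.max(log_psi + (log_phi + log_incoming)[:, None], axis=0)

def sum_product_message(phi, psi, incoming):
    """Standard sum-product message (probability domain) for comparison:
    sums over source states instead of maximizing."""
    return psi.T @ (phi * incoming)

# Node i has no other neighbours here, so the incoming product is trivial
# (zeros in the log domain, ones in the probability domain).
log_incoming = np.zeros(2)
m_max = max_sum_message(log_phi_i, log_psi_ij, log_incoming)
m_sum = sum_product_message(np.exp(log_phi_i), np.exp(log_psi_ij),
                            np.exp(log_incoming))
```

Exponentiating `m_max` recovers the max-product message, so the two schemes differ only in replacing the sum over source states with a maximum.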

To implement the belief consolidation algorithm, the model uses building blocks called local inference circuits (LINCs). Each neuronal LINC is connected to other LINCs according to the graph structure, and propagates the same message to all neighbours. Each LINC roughly implements the operations performed locally by each node in the graph, using smaller elementary circuits that approximate the two mathematical operations: a linear summation circuit and a maximization circuit. The model uses populations of leaky integrate-and-fire neurons to implement these computations. The synaptic weights between the different elementary circuits define their specific functional properties. The mean rate of the neural populations during short periods of time (a few tens of milliseconds) represents the values of messages computed during the inference process.
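The log-domain computation that these elementary circuits approximate can be sketched abstractly as below. This is a minimal illustration, not the spiking implementation: the weighted-maximization step and the summation step stand in for the corresponding neural circuits, and the example potentials are invented. Following the text, the same outgoing message is combined from all neighbours.

```python
import numpy as np

def linc_message(incoming, log_weights):
    """Abstract sketch of one LINC update (rate-coded in the real model).
    incoming: list of length-S log-domain messages, one per neighbour.
    log_weights: list of (S, S) log-domain pairwise weight matrices.
    Each weighted-maximization circuit adds the log weight to an incoming
    message and maximizes over source states; a summation circuit then
    combines the per-neighbour results into the outgoing message."""
    S = incoming[0].shape[0]
    out = np.zeros(S)
    for m, W in zip(incoming, log_weights):
        # weighted maximization: for each target state, max over sources
        out += np.max(W + m[:, None], axis=0)  # summation circuit combines
    return out

# Hypothetical example: two neighbours, two states.
W = np.log(np.array([[0.8, 0.2],
                     [0.2, 0.8]]))
msgs = [np.log(np.array([0.5, 0.5])),
        np.log(np.array([0.9, 0.1]))]
out_msg = linc_message(msgs, [W, W])
```

Because everything lives in the log domain, multiplying potentials becomes adding them, which is what makes a purely sum-and-max neuronal implementation possible.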

Each neuronal LINC uses N (number of neighbours) × S (number of states) weighted maximization circuits, which compute the maximum value for each state of the input nodes. Before finding the maximum value, the circuit uses a linear summation element to add the corresponding weight to each input message (in the log domain, weights are additive). The weighted maximum results for each state are then combined in the N corresponding summation circuits. The vector of single-valued outputs of each summation circuit represents the output message of

