
3.4. EXISTING MODELS

attention.

The main contribution of this model is that it managed to implement Bayesian inference using equations representing a recurrently connected network of spiking neurons. However, the main limitation of the model is that it does not offer a general solution to implementing belief propagation with spiking neurons, but rather very specific and simple examples with heuristic implementations. The main model consists of just a single layer containing 30 neurons, which does not capture the complexities of belief propagation, nor its many benefits, such as a local and distributed implementation; furthermore, it does not capture the complexities inherent in visual processing. Additionally, the implementation in the log domain requires the use of an approximation to the conditional probability weights, which has not been proven to provide accurate results when the system is scaled up.
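To make this limitation concrete: exact belief propagation multiplies incoming evidence and then marginalizes, so a log-domain message update takes a log-sum-exp form that a linear recurrent network cannot realize exactly. A schematic version of the approximation involved (the notation here is illustrative, not taken from the model itself) is

$$\log m_i(t+1) \;=\; \log \sum_j P(x_i \mid x_j)\, m_j(t) \;\approx\; \sum_j w_{ij} \log m_j(t),$$

where the recurrent weights $w_{ij}$ stand in for the conditional probability entries; whether this linear surrogate remains accurate as the network grows is precisely the open question noted above.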

3.4.1.2 Liquid state machine model<br />

A more recent model of belief propagation in networks of spiking neurons was provided by Steimer et al. (2009). The model approximates belief propagation in Forney factor graphs, a type of graphical model that is considered more general than Bayesian networks, and can therefore capture all of their properties. The model makes use of liquid state machines composed of liquid pools of spiking neurons to represent the function nodes in the factor graph, similar to the conditional probability functions in Bayesian networks. The internal dynamics of each pool of neurons allows it to combine the incoming messages from the corresponding input nodes. Messages from one node to another are transmitted using readout populations of neurons, which extract the output information from the liquid pools. The readout populations need to be calibrated and trained to map the input synaptic current to the desired output message (a probability from 0 to 1), encoded using an average population rate. Figure 3.10 shows the neural implementation of belief propagation in a factor graph using the liquid and readout populations of a liquid state machine.
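For reference, the quantity that each liquid pool together with its readout population is trained to approximate is the standard sum-product message sent by a function node. The sketch below is a plain numerical implementation of that update, not the spiking machinery of Steimer et al. (2009); the function name and the toy conditional probability table are illustrative.

```python
import numpy as np

def factor_to_variable_message(factor, incoming, out_axis):
    """Sum-product message from a function node along edge `out_axis`.

    factor   : ndarray holding the local function (e.g. a conditional
               probability table), one axis per connected edge
    incoming : dict {axis: 1-D message array} for the other edges
    """
    msg = factor.astype(float)
    for axis, m in incoming.items():
        shape = [1] * factor.ndim
        shape[axis] = m.size
        msg = msg * m.reshape(shape)      # multiply each message onto its axis
    other = tuple(ax for ax in range(factor.ndim) if ax != out_axis)
    out = msg.sum(axis=other)             # marginalize the remaining edges
    return out / out.sum()                # normalize to a probability message

# Toy function node P(y | x), axes (x, y):
P = np.array([[0.9, 0.1],
              [0.2, 0.8]])
m_x = np.array([0.5, 0.5])                # message arriving on the x edge
print(factor_to_variable_message(P, {0: m_x}, out_axis=1))  # -> [0.55 0.45]
```

In the spiking version, the normalized output vector would instead be encoded as the average firing rate of the trained readout population.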

The model was evaluated using two simple examples: a classical inference problem dealing with the transmission of binary information over an unreliable channel, and a more biologically grounded example dealing with the integration of psychophysical information to elucidate the
