Bernal S D_2010.pdf - University of Plymouth

4.4. FEEDFORWARD PROCESSING

Figure 4.13 shows the K-L divergence between the true and approximate likelihood distributions, averaged over 500 trials, as a function of M_max and the total number of parents N. The likelihood distributions are assumed to have K = 100 states. For comparison, the K-L divergence between the exact likelihood and a randomly generated likelihood distribution is also plotted.

The results show that as the coefficient M_max/N increases, the goodness of fit between the approximation and the exact solution increases. Also, as the total number of input messages, N, increases, the goodness of fit decreases. The relative difference between the K-L divergence of the approximate and the random distributions suggests that for values of M_max above a given threshold the approximate distribution provides a good fit to the exact solution. It is important to note that in the real model data, the input λ messages are correlated (due to the overlap in receptive fields) and are therefore likely to present more similarities between them than the randomly generated λ messages of the statistical test. Additionally, a subset of the discarded distributions will typically present near-flat distributions, as they originate from blank regions of the image. Consequently, the approximation in the model will constitute a better fit to the exact distribution than that suggested by this empirical test.
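The empirical test described above can be sketched as follows. This is a minimal illustration, not the thesis code: the dimensions (K, N, M_max) are placeholders, and keeping the first M_max messages stands in for whatever selection criterion the model actually applies to discard input messages.

```python
import numpy as np

def kl(p, q):
    """K-L divergence D(p || q) between two discrete distributions."""
    p = p / p.sum()
    q = q / q.sum()
    return float(np.sum(p * np.log(p / q)))

def product_of_messages(msgs):
    """Multiplicative combination of lambda messages, renormalized.

    Computed in log-space for numerical stability.
    """
    log_prod = np.sum(np.log(msgs), axis=0)
    prod = np.exp(log_prod - log_prod.max())
    return prod / prod.sum()

rng = np.random.default_rng(0)
K, N, M_max, trials = 100, 8, 4, 500   # placeholder sizes

kl_approx, kl_random = [], []
for _ in range(trials):
    msgs = rng.random((N, K)) + 1e-6           # N random lambda messages
    exact = product_of_messages(msgs)          # combine all N messages
    approx = product_of_messages(msgs[:M_max]) # combine only M_max of them
    random_q = rng.random(K)
    random_q /= random_q.sum()                 # random baseline distribution
    kl_approx.append(kl(exact, approx))
    kl_random.append(kl(exact, random_q))

print(np.mean(kl_approx), np.mean(kl_random))
```

Because the M_max retained messages are factors of the exact product, the approximation tends to share its peaks, which is what the comparison against the random baseline is meant to expose.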

• The messages (probability distributions) are sum-normalized to 1 and then re-weighted so that the minimum value of the distribution is never below V_min = 1/(10 · K_x). All elements of the message that are below V_min are set to V_min. The overall increase in the sum of the elements of the resulting distribution is then compensated by proportionally decreasing the remaining elements (those which were not set to V_min). Consequently, the resulting distribution will still be sum-normalized to 1, while having a minimum value equal to V_min. The distribution will have a profile equivalent to that of the original one, except for those elements that were originally below V_min, which will now exhibit higher relative values.

This adjustment <strong>of</strong> the message probabilily distributions eliminales all values under V„i„,<br />

thus allowing multiplicative combination <strong>of</strong> a greater number <strong>of</strong> input messages, i.e. Mmat<br />
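The re-weighting step described in the bullet above can be sketched as follows. This is a minimal sketch under stated assumptions: the function name is illustrative, not from the thesis, and the edge case where proportional shrinking pushes a previously untouched element below V_min is left unhandled.

```python
import numpy as np

def floor_and_renormalize(message, k_x=None):
    """Sum-normalize a message, floor it at V_min = 1/(10 * K_x), and
    proportionally shrink the untouched elements so the sum stays 1.

    Illustrative name; sketches the re-weighting step, not thesis code.
    """
    m = np.asarray(message, dtype=float)
    if k_x is None:
        k_x = m.size                      # assume K_x is the message length
    v_min = 1.0 / (10.0 * k_x)

    m = m / m.sum()                       # sum-normalize to 1
    below = m < v_min
    n_below = int(below.sum())

    out = m.copy()
    out[below] = v_min                    # raise small values to the floor
    # Compensate: the untouched elements must now sum to
    # 1 - n_below * v_min, so scale them down proportionally.
    remaining_target = 1.0 - n_below * v_min
    out[~below] *= remaining_target / m[~below].sum()
    return out
```

For example, `floor_and_renormalize([0.5, 0.5, 0.0, 0.0])` floors the two zero elements at V_min = 1/40 = 0.025 and scales the two large elements down to 0.475 each, so the result still sums to 1 while keeping the original profile.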

