Bernal S D_2010.pdf - University of Plymouth


240 prototypes or S3 states. This increases the invariance to position at the top level.

The learning process for the 4-layer architecture is not described in detail, as it is a trivial extension of the methodology employed for the 3-layer architecture.

4.4 Feedforward processing<br />

4.4.1 Approximation to the selectivity and invariance operations

For the feedforward recognition results presented in Chapter 4 we therefore assume that the network is a singly-connected tree, so that the λ messages can propagate to the root node without being affected by top-down messages (see Figure 3.9 in Section 3.3.3 for details). This is the same strategy used during the learning process. It facilitates the approximation to the HMAX operations and greatly reduces the computational cost of processing each image, which is especially important when testing a large image dataset over a large parameter space.
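As a rough sketch of the idea above (not the thesis implementation; the node sizes, CPTs, and evidence values below are hypothetical toy numbers), bottom-up λ message passing in a singly-connected tree of discrete nodes can be written as a weighted sum through each CPT, with a node's λ being the product of its children's messages:

```python
# Toy sketch of bottom-up lambda-message propagation in a
# singly-connected tree. All CPTs and evidence values are invented
# for illustration; they are not taken from the thesis.

def lambda_message(child_lambda, cpt):
    """Message from a child to its parent:
    msg(parent) = sum_child P(child | parent) * lambda(child)."""
    n_parent = len(cpt)  # cpt[parent][child] = P(child | parent)
    return [sum(cpt[p][c] * child_lambda[c]
                for c in range(len(child_lambda)))
            for p in range(n_parent)]

def combine(messages):
    """A node's lambda is the elementwise product of the
    messages arriving from all of its children."""
    out = [1.0] * len(messages[0])
    for m in messages:
        out = [a * b for a, b in zip(out, m)]
    return out

# Two leaf children send evidence up to a binary root node.
cpt = [[0.9, 0.1], [0.2, 0.8]]   # P(child | parent), rows sum to 1
leaf1 = [1.0, 0.0]               # hard evidence: child 1 in state 0
leaf2 = [0.7, 0.3]               # soft evidence from child 2
root_lambda = combine([lambda_message(leaf1, cpt),
                       lambda_message(leaf2, cpt)])
```

Because the tree is singly connected, each message is computed exactly once on the way to the root, which is what keeps the feedforward pass cheap compared to loopy propagation.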

Note that even if the root nodes and π messages were initialized to a flat distribution, they would still modulate the bottom-up λ messages, as π messages are multiplied by the CPT before being combined with the λ messages. In other words, as long as there is bottom-up evidence, it will be modulated by top-down messages even if the latter exhibit flat distributions. This was illustrated

in Figure 3.7. Several recognition simulations were also performed without this assumption, in other words, with flat top-down messages that modulated the feedforward λ messages, in order to compare results and establish its validity. These revealed that a similar invariant recognition performance can be obtained even when including the feedback π messages (using loopy belief propagation), but performing a detailed systematic test over the complete dataset and parameter space is infeasible due to the high computational resources required.
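The modulation effect described above can be illustrated with a small sketch (the CPT and evidence values are assumed toy numbers, not taken from the thesis): a flat π message, once multiplied by a non-uniform CPT, is no longer flat, so it still reweights the bottom-up λ when the belief is formed as the normalized product λ · π.

```python
# Toy illustration: a flat top-down pi message still modulates the
# bottom-up lambda, because pi is passed through the CPT first.
# All numbers are hypothetical.

def pi_from_parent(pi_msg, cpt):
    """pi(child) = sum_parent P(child | parent) * pi_msg(parent)."""
    n_child = len(cpt[0])
    return [sum(cpt[p][c] * pi_msg[p] for p in range(len(cpt)))
            for c in range(n_child)]

def belief(lam, pi):
    """Normalized product of bottom-up and top-down terms."""
    b = [l * p for l, p in zip(lam, pi)]
    z = sum(b)
    return [x / z for x in b]

cpt = [[0.7, 0.3], [0.4, 0.6]]   # P(child | parent), rows sum to 1
flat_pi = [0.5, 0.5]             # uninformative top-down message
lam = [0.2, 0.8]                 # bottom-up evidence

pi = pi_from_parent(flat_pi, cpt)  # no longer uniform after the CPT
b = belief(lam, pi)                # differs from normalized lam alone
```

Here `pi` comes out non-uniform even though `flat_pi` was flat, so the resulting belief is not simply the normalized λ: the CPT itself injects top-down structure.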

The singly-connected tree assumption allows the selectivity operation in HMAX to be approximated as shown in Equation 4.9. Note that the original Radial Basis Function operation has been replaced by an approximately equivalent dot-product operation, as proposed by Serre (2006) and Serre et al. (2005a), and this dot-product operation is then approximated using the belief propagation equation. More precisely, the weighted sum over S1 locations and features is approximated as a sum over the features and a product over the locations. This can be interpreted as

