

S2 parameters
    Scale band:           1       2       3       4
    RF size(a), ΔN_S2:    4       8       12      16
    S2 types, K_S2:       1000 (same for all scale bands)

C2 parameters
    Band pooling, Σ_C2:   all bands (1-8)
    Grid size, ΔN_C2:     0.5 × total S2 units
    Sampling, ε_C2:       0.25 × total S2 units
    C2 types, K_C2:       1000

S3 parameters
    RF size(b), ΔN_S3:    2
    S3 types, K_S3:       2 × 2 × 60 = 240

(a) S2 prototype elements = ΔN_S2 × ΔN_S2 × K_C1 (4 orientations). Same for all scale bands.
(b) S3 prototype elements = ΔN_S3 × ΔN_S3 × K_C2 (1000 features).

Table 4.3: Parameters of the alternative 3-layer architecture based on Yamane et al. (2006).
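
To make the table concrete, the quantities it implies can be computed directly. The sketch below is illustrative only: the variable names are mine rather than the thesis', and the numbers are taken from Table 4.3 and its footnotes.

```python
# Hedged sketch: deriving prototype sizes and unit counts implied by Table 4.3.
K_C1 = 4                              # C1 types: 4 orientations (footnote a)
K_C2 = 1000                           # C2 types (feature count used in footnote b)
rf_s2 = {1: 4, 2: 8, 3: 12, 4: 16}    # RF size ΔN_S2 per scale band

# Footnote (a): each S2 prototype has ΔN_S2 × ΔN_S2 × K_C1 elements.
s2_prototype_elements = {band: n * n * K_C1 for band, n in rf_s2.items()}
# e.g. band 1 -> 4*4*4 = 64 elements, band 4 -> 16*16*4 = 1024 elements

# Footnote (b): each S3 prototype has ΔN_S3 × ΔN_S3 × K_C2 elements.
rf_s3 = 2
s3_prototype_elements = rf_s3 * rf_s3 * K_C2      # 2*2*1000 = 4000 elements

# S3 types: one prototype per C2 location per object category.
num_categories = 60
K_S3 = rf_s3 * rf_s3 * num_categories             # 2*2*60 = 240
```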

to just 2 × 2 C2 units, and to learn 2 × 2 prototypes (one for each location) per object category, which leads to greater position invariance at the top layer. As a consequence, the learned high-level prototypes of objects contain some information about the spatial arrangement of their constituent parts, in agreement with the results shown by Yamane et al. (2006). This was also discussed in Section 2.1.1 and illustrated in Figure 2.3b.
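
As a rough illustration of this point (not code from the thesis; the array shapes and function name below are assumptions), pooling an S2-like response map into a 2 × 2 grid of C2 units keeps one maximum per quadrant, so a per-category prototype defined over those four locations still records the coarse arrangement of parts:

```python
import numpy as np

def pool_to_2x2(s2_map: np.ndarray) -> np.ndarray:
    """Max-pool an (H, W, K) S2-like response map into a 2 x 2 grid of C2 units.

    Each C2 unit covers one quadrant, so the resulting 2 x 2 x K tensor keeps
    a coarse trace of which quadrant each feature response came from.
    """
    h2, w2 = s2_map.shape[0] // 2, s2_map.shape[1] // 2
    quadrants = [
        s2_map[:h2, :w2], s2_map[:h2, w2:],
        s2_map[h2:, :w2], s2_map[h2:, w2:],
    ]
    return np.stack([q.max(axis=(0, 1)) for q in quadrants]).reshape(2, 2, -1)

# A prototype learned per object category is then a 2 x 2 x K template:
# position invariant within each quadrant, yet still recording the spatial
# arrangement of parts across quadrants.
s2 = np.random.rand(8, 8, 1000)   # toy S2 map: 8 x 8 grid, 1000 feature types
c2 = pool_to_2x2(s2)              # shape (2, 2, 1000)
```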

Additionally, the smaller pooling ranges of each unit help to reduce the large fan-in of λ messages and to increase the specificity of feedback. All feedback results presented in the thesis are based on this architecture.
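
For a concrete sense of the fan-in argument (a toy count under an assumed grid size, not figures from the thesis): each C2 parent receives one λ message per S2 child it pools over, so pooling over a quadrant instead of the whole S2 grid divides the number of incoming messages by four.

```python
def lambda_fan_in(pooled_rows: int, pooled_cols: int) -> int:
    """Number of S2 children (hence incoming lambda messages) for one C2 parent
    that pools over a pooled_rows x pooled_cols region of S2 units."""
    return pooled_rows * pooled_cols

# Toy comparison on a hypothetical 8 x 8 S2 grid:
full_grid = lambda_fan_in(8, 8)      # one C2 unit pooling everything: 64 children
one_quadrant = lambda_fan_in(4, 4)   # one of the 2 x 2 C2 units: 16 children
print(full_grid, one_quadrant)       # 64 16
```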

4.2.3 Four-level architecture<br />

This architecture is based on the version of HMAX described in Serre et al. (2007b), and includes two extra layers. The parameters of this architecture are shown in Table 4.4 and the resulting Bayesian network is illustrated in Figure 4.6 (only layers above S2, as layers S1 and C1 are equivalent to those shown in Figure 4.4). The main advantage of this architecture is the further processing by the two extra layers with smaller pooling ranges, which leads to greater position and scale invariance and can increase selectivity in highly detailed images. However, these two extra layers also lead to greater complexity and higher approximation errors when implementing the model as a Bayesian network. For this reason, the four-level architecture was

