
Chapter 10

The result is that each image is now represented by a single array of features of the same size (the number of clusters; in our case, 256). Therefore, we can use our standard classification methods. Using logistic regression again, we now get 62 percent, a 7 percent improvement. We can combine all of the features together and obtain 67 percent, more than 12 percent over what was obtained with texture-based methods.
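The pipeline described above can be sketched as follows. This is a minimal illustration, not the book's exact code: the descriptor arrays here are random stand-ins for real local features (such as SURF descriptors), and the vocabulary size is reduced from 256 to 16 so the example runs quickly.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.RandomState(0)

# Hypothetical stand-in for per-image local descriptors:
# a list of (n_descriptors x 64) arrays, one array per image.
descriptors = [rng.rand(rng.randint(50, 100), 64) for _ in range(40)]
labels = rng.randint(0, 2, size=40)

# Step 1: cluster all descriptors into a visual vocabulary.
k = 16  # the chapter uses 256 clusters; smaller here for speed
km = KMeans(n_clusters=k, n_init=10, random_state=0)
km.fit(np.vstack(descriptors))

# Step 2: represent each image as a histogram of cluster
# assignments, so every image becomes a fixed-length feature
# vector of size k, regardless of how many descriptors it had.
features = np.array([
    np.bincount(km.predict(d), minlength=k) for d in descriptors
])

# Step 3: any standard classifier now applies; the chapter
# uses logistic regression.
clf = LogisticRegression()
scores = cross_val_score(clf, features, labels, cv=5)
```

Combining feature sets, as mentioned above, amounts to horizontally stacking the bag-of-words histograms with the texture-based feature vectors (for example, with `np.hstack`) before fitting the classifier.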

Summary

We learned the classical feature-based approach to handling images in a machine learning context by reducing a million pixels to a few numeric dimensions. All the technologies that we learned in the other chapters suddenly become directly applicable to image problems. This includes classification, which is often referred to as pattern recognition when the inputs are images, as well as clustering and dimensionality reduction (even topic modeling can be performed on images, often with very interesting results).

We also learned how to use local features in a bag-of-words model for classification. This is a very modern approach to computer vision and achieves good results while being robust to many irrelevant aspects of the image, such as overall illumination and even uneven illumination within the same image. We also used clustering as a useful intermediate step in classification rather than as an end in itself.

