
Both objects appear against natural backgrounds and have large smooth areas inside them. We therefore expect that texture features will not perform very well here. When we use the same features as before, we achieve 55 percent accuracy in cross-validation using logistic regression. This is not too bad for four classes, but it is not spectacular either.
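As a rough sketch of how such a cross-validated figure can be obtained with scikit-learn, the following assumes that the feature matrix features and the label array labels hold the texture features and class labels computed earlier (one row per image); the five-fold split is an arbitrary illustrative choice:

from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# features: (n_images, n_features) array of texture features (assumed available)
# labels:   (n_images,) array of class labels (assumed available)
clf = LogisticRegression()
scores = cross_val_score(clf, features, labels, cv=5)
print('Cross-validation accuracy: {:.1%}'.format(scores.mean()))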

Let's see whether we can use a different method to do better. In fact, we will see that we need to combine texture features with other methods to get the best possible results. But, first things first: we look at local features.

Local feature representations

A relatively recent development in the computer vision world is the family of local-feature-based methods. Local features are computed on a small region of the image, unlike the features we considered previously, which were computed on the whole image. Mahotas supports computing one type of these features: Speeded Up Robust Features, also known as SURF (there are several others, the best known being the original proposal, the Scale-Invariant Feature Transform (SIFT)). These local features are designed to be robust against rotation or illumination changes (that is, their values change only slightly when the illumination changes).

When using these features, we have to decide where to compute them. There are three possibilities that are commonly used:

• Randomly
• In a grid
• Detecting interesting areas of the image (a technique known as keypoint detection or interest point detection)

All of these are valid and will, under the right circumstances, give good results. Mahotas supports all three. Using interest point detection works best if you have a reason to expect that the interest points will correspond to areas of importance in the image. This depends, naturally, on what your image collection consists of. Typically, it is found to work better on man-made images than on natural scenes. Man-made scenes have stronger angles, edges, or regions of high contrast, which are the typical regions marked as interesting by these automated detectors.
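If you would rather sample descriptors on a regular grid instead, mahotas provides a dense variant in the same submodule. A minimal sketch, assuming image is a greyscale NumPy array that has already been loaded and using a spacing of 16 pixels chosen purely for illustration, could look like this:

from mahotas.features import surf

# Compute SURF descriptors on a regular grid, one every 16 pixels
# (image is assumed to be a 2D greyscale array; 16 is an arbitrary spacing).
dense_descriptors = surf.dense(image, 16)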

Since we are using photographs of mostly natural scenes, we are going to use the interest point method. Computing them with mahotas is easy: import the right submodule and call the surf.surf function:

from mahotas.features import surf
descriptors = surf.surf(image, descriptors_only=True)
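Note that surf.surf works on a single-channel (greyscale) image. A slightly fuller sketch, using a placeholder filename and converting to greyscale first, might look as follows; each row of the result should be one 64-dimensional SURF descriptor:

import numpy as np
import mahotas as mh
from mahotas.features import surf

# 'scene00.jpg' is a placeholder filename; substitute one of your own images.
image = mh.colors.rgb2grey(mh.imread('scene00.jpg'), dtype=np.uint8)
descriptors = surf.surf(image, descriptors_only=True)
print(descriptors.shape)  # (number of detected points, 64)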

