The Gixel Array Descriptor (GAD) for Multi-Modal Image Matching

Figure 10. More results for matching with GAD (left) and recall vs. 1 − precision curve comparison (right). (a) Matching with JPEG compression (images 1 and 2 from the “Ubc” series [1]). (b) Matching with illumination change (images 1 and 4 from the “Leuven” series [1]).

likely helps in reducing the impact of JPEG compression. In Fig. 10(b), with illumination change, the GAD has a recall rate slightly lower than that of the other descriptors, but it still finds a large number of correct matches with almost no errors.
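The recall vs. 1 − precision curves in Fig. 10 follow the standard evaluation protocol of [1]: putative matches are ranked by descriptor distance, an acceptance threshold is swept, and at each threshold recall is the fraction of ground-truth correspondences recovered while 1 − precision is the fraction of accepted matches that are false. The sketch below shows one way to compute such a curve; it is an illustration only, and the inputs distances, is_correct, and n_correspondences are hypothetical names, not part of this paper's code.

import numpy as np

def recall_vs_one_minus_precision(distances, is_correct, n_correspondences):
    """Sweep a descriptor-distance threshold and return (1 - precision, recall) pairs.

    distances         -- descriptor distance of each putative match (hypothetical input)
    is_correct        -- boolean array: does the match agree with ground truth?
    n_correspondences -- number of ground-truth correspondences between the two images
    """
    order = np.argsort(distances)          # accept matches from best to worst
    correct = np.asarray(is_correct, dtype=bool)[order]
    tp = np.cumsum(correct)                # correct matches accepted so far
    fp = np.cumsum(~correct)               # false matches accepted so far
    total = tp + fp
    recall = tp / float(n_correspondences)
    one_minus_precision = fp / np.maximum(total, 1)
    return one_minus_precision, recall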

5.5. Processing Time

GAD's computation process is time-consuming compared to state-of-the-art descriptors, but no efforts at optimization have been made yet. For example, Fig. 1 (size 512x512) takes GAD 8.9 s, while SURF needs 0.7 s; Fig. 10(b) (size 900x600) takes GAD 19.5 s, while SURF needs 1.3 s. At this point, speed is not a primary concern in our research, but we will pursue optimizations in future work.
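For readers who want to reproduce the SURF baseline timings, a minimal sketch is below. It assumes OpenCV with the contrib modules (opencv-contrib-python), since SURF lives in xfeatures2d and may be absent from builds that exclude patented algorithms; the file name and Hessian threshold are illustrative, and the GAD implementation itself is not reproduced here.

import time
import cv2

def time_descriptor(image_path, extractor):
    """Time keypoint detection + description on one image (wall clock)."""
    img = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    t0 = time.perf_counter()
    keypoints, descriptors = extractor.detectAndCompute(img, None)
    return time.perf_counter() - t0, len(keypoints)

# SURF requires the contrib build of OpenCV; hessianThreshold=400 is a common default.
surf = cv2.xfeatures2d.SURF_create(hessianThreshold=400)
elapsed, n = time_descriptor("leuven_img1.png", surf)   # hypothetical file name
print(f"SURF: {n} keypoints described in {elapsed:.2f} s")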

6. Conclusion

We introduce a novel descriptor unit called a Gixel, which uses an additive scoring method to extract surrounding edge information. We show that a circular array of Gixels samples edge information in overlapping regions, which makes the descriptor more discriminative while remaining invariant to rotation and scale. Experiments demonstrate the superiority of the Gixel Array Descriptor (GAD) for multi-modal matching, while it maintains performance comparable to state-of-the-art descriptors on traditional single-modality matching.
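To make the circular-array idea concrete, the toy sketch below places sampling points ("Gixels") on concentric rings around a keypoint and sizes each sampling region so that neighbouring regions overlap. Every parameter here (ring count, spacing, region radius) is invented for illustration and does not reproduce the paper's actual layout or additive scoring.

import numpy as np

def circular_gixel_layout(center, scale, n_rings=3, gixels_per_ring=8):
    """Illustrative layout: Gixel centers on concentric rings around a keypoint.

    Returns (centers, radii); the constants are hypothetical, chosen only so
    that adjacent sampling regions overlap, as described in the text.
    """
    cx, cy = center
    centers, radii = [], []
    for r in range(1, n_rings + 1):
        ring_radius = r * scale                       # rings grow with keypoint scale
        for k in range(gixels_per_ring):
            theta = 2.0 * np.pi * k / gixels_per_ring
            centers.append((cx + ring_radius * np.cos(theta),
                            cy + ring_radius * np.sin(theta)))
            # Sampling radius set to 0.6 x ring radius, which exceeds half the
            # gap to the next Gixel on the same ring and on adjacent rings,
            # so neighbouring sampling regions overlap.
            radii.append(0.6 * ring_radius)
    return np.array(centers), np.array(radii)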

The GAD still has some limitations in its current state of development. We have put little effort into optimization, so its run time is slow. In addition, though GAD exhibits rotation and scale invariance, large viewpoint changes may reduce its performance, and we have not addressed that issue yet. Finally, as a feature built solely on edges, GAD may not perform well in situations where edges are rare. These issues will be investigated in our future work.

References

[1] K. Mikolajczyk and C. Schmid. A performance evaluation of local descriptors. IEEE Transactions on Pattern Analysis and Machine Intelligence, 27(10):1615–1630, 2005.

[2] H. Bay, A. Ess, T. Tuytelaars, and L. Van Gool. SURF: Speeded Up Robust Features. Computer Vision and Image Understanding, 110(3):346–359, 2008.

[3] R. Zabih and J. Woodfill. Non-parametric local transforms for computing visual correspondence. Proceedings of the European Conference on Computer Vision, pp. 151–158, 1994.

[4] A. Johnson and M. Hebert. Object recognition by matching oriented points. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 684–689, 1997.

[5] S. Belongie, J. Malik, and J. Puzicha. Shape matching and object recognition using shape contexts. IEEE Transactions on Pattern Analysis and Machine Intelligence, 24(4):509–522, 2002.

[6] D. Lowe. Distinctive image features from scale-invariant keypoints. International Journal of Computer Vision, 60(2):91–110, 2004.

[7] Y. Ke and R. Sukthankar. PCA-SIFT: A more distinctive representation for local image descriptors. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 511–517, 2004.

[8] M. Calonder, V. Lepetit, C. Strecha, and P. Fua. BRIEF: Binary Robust Independent Elementary Features. Proceedings of the European Conference on Computer Vision, 2010.

[9] E. Rublee, V. Rabaud, K. Konolige, and G. Bradski. ORB: An efficient alternative to SIFT or SURF. Proceedings of the IEEE International Conference on Computer Vision, 2011.

[10] F. Tang, S. H. Lim, N. L. Chang, and H. Tao. A novel feature descriptor invariant to complex brightness changes. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 2631–2638, 2009.

[11] A. Bosch, A. Zisserman, and X. Munoz. Image classification using random forests and ferns. Proceedings of the IEEE International Conference on Computer Vision, 2007.

[12] E. Shechtman and M. Irani. Matching local self-similarities across images and videos. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2007.

[13] S. Leutenegger, M. Chli, and R. Siegwart. BRISK: Binary Robust Invariant Scalable Keypoints. Proceedings of the IEEE International Conference on Computer Vision, 2011.

[14] S. Winder, G. Hua, and M. Brown. Picking the best DAISY. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 178–185, 2009.

[15] G. Yang, C. V. Stewart, M. Sofka, and C. L. Tsai. Registration of challenging image pairs: Initialization, estimation, and decision. IEEE Transactions on Pattern Analysis and Machine Intelligence, 29(11):1973–1989, 2007.
