
albeit faster. Table I uses the best times obtained in the work of Nuñez et al. [6]. Both methods are divided into two steps: the first step builds a "special" representation, and the second step compares this representation (called Detection). In the case of [6], the representation is built using a simplification method and GMM estimation; in our method, the representation is tailored to building implicit volumes. Change detection is achieved using structural matching in [6] and a Boolean operation in our approach.
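To make the two-step pipeline concrete, the sketch below builds a volumetric representation from a reference point cloud and then performs the comparison step as a Boolean difference. It is a minimal illustration only: the binary occupancy grid stands in for the implicit-volume representation, and the function names and voxel size are assumptions, not the actual implementation.

```python
# Minimal sketch of the two-step pipeline: (1) build a volumetric
# representation, (2) compare via a Boolean difference. The occupancy grid
# is only a stand-in for the implicit volumes described in the text.
import numpy as np

def build_occupancy(points, voxel=0.1):
    """Step 1: build a volumetric representation from an (N, 3) point cloud."""
    keys = np.floor(points / voxel).astype(np.int64)
    return {tuple(k) for k in keys}           # set of occupied voxel indices

def detect_change(reference, present, voxel=0.1):
    """Step 2: Boolean difference -- points occupying voxels absent from the reference."""
    ref_occ = build_occupancy(reference, voxel)
    keys = np.floor(present / voxel).astype(np.int64)
    return np.array([tuple(k) not in ref_occ for k in keys])  # mask over present points

# Example: a point far from the reference cloud is flagged as a change.
reference = np.random.rand(1000, 3)
present = np.vstack([reference, [[5.0, 5.0, 5.0]]])
mask = detect_change(reference, present)
print(mask.sum(), "changed points")           # -> 1
```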

Notice that, <strong>for</strong> all datasets, our method is substantially<br />

faster to build the required representation. The ma<strong>in</strong> limitation<br />

of the method proposed by Nuñez et al. [6] is related<br />

to the Gaussian Mixture Models us<strong>in</strong>g the Expectation-<br />

Maximization algorithm, which has lower computational per<strong>for</strong>mance<br />

and is quite sensitive to the number of Gaussians,<br />

which is one of the parameters of the EM algorithm.<br />
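To illustrate the sensitivity discussed above, the short script below times EM-based GMM fitting for different numbers of Gaussians on a synthetic cloud. It assumes scikit-learn's GaussianMixture as a generic EM implementation; it is not the implementation evaluated in [6], and the point counts are arbitrary.

```python
# Small illustration: EM-based GMM fitting cost grows with the number of
# Gaussians, which is a user-set parameter of the algorithm.
import time
import numpy as np
from sklearn.mixture import GaussianMixture

points = np.random.rand(20000, 3)             # synthetic stand-in for a 3D scan

for n_components in (5, 20, 80):
    start = time.perf_counter()
    GaussianMixture(n_components=n_components, max_iter=50).fit(points)
    print(f"{n_components:3d} Gaussians: {time.perf_counter() - start:.2f} s")
```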

Besides the improvement in processing time, we empirically observe from additional results, shown in Figure 9, that our method performs quite robustly in the presence of parallax and highly complex geometry. In the work of [6], the datasets obtained by the robot show the walls and some objects, but the ceiling was also acquired, where pipes and other objects are present, as shown in Figure 8-a. However, these data are not considered in their experiment because of the high data complexity due to parallax and the absence of meaningful changes. In another experiment, we used the "complete" information available from this dataset, describing the data with all the points included. Figure 9-a shows one example of a "complete" point cloud obtained using the Pioneer robot (see Fig. 8-a), where one person produces the change (see Fig. 8-b). The blue dots represent the points detected as unchanged, and the red dots the change points detected by our algorithm. Some red points lie on the ceiling, due to the parallax effect in the acquisition and to the misalignment between the reference dataset (without change) and the present dataset (with change). Figure 9-b shows the clustered volume obtained with our implicit approach for the complete dataset. This point cloud is approximately twice as large as the partial one, i.e. it has approximately 150k points. We were able to correctly detect changes for this dataset in less than one second.

V. CONCLUSIONS AND FUTURE WORK

This paper described a novel method to detect changes in a real 3D environment for robot navigation. Results obtained with two different experimental platforms were shown. The new approach, based on implicit volumes, allows us to detect changes using a Boolean operation and to localize the novelties in the scene, i.e. to segment them. Results of the proposed algorithm demonstrate the reliability and efficiency of the approach. The algorithm was compared with a state-of-the-art approach and obtained substantially better results in terms of computational cost, while being comparable as far as detection and accuracy are concerned.

Future <strong>in</strong>vestigation will focus on the extension of the<br />

method to work iteratively, with the data be<strong>in</strong>g captured<br />

onl<strong>in</strong>e by the robot. Another important improvement is to<br />

(a) (b)<br />

Fig. 9. <strong>Change</strong> detection algorithm comparison: a) example of po<strong>in</strong>t cloud<br />

obta<strong>in</strong>ed us<strong>in</strong>g Pioneer robot (see Fig. 8-a) with one person as change (see<br />

Fig. 8-b). <strong>Change</strong>s detected by the algorithm is <strong>in</strong>dicated by red po<strong>in</strong>ts,<br />

where is possible to see some outliers <strong>in</strong> the roof. b) Implicit volumes<br />

obta<strong>in</strong>ed from po<strong>in</strong>t cloud <strong>in</strong> a) generated by our approach.<br />

per<strong>for</strong>m the registration between clouds concurrently with<br />

Boolean operation. This will ultimately allow us to correlate<br />

and align similar po<strong>in</strong>ts and obta<strong>in</strong> good registration even<br />

<strong>for</strong> significant changes <strong>in</strong> environmental and acquisition<br />

conditions.<br />

REFERENCES

[1] Henrik Andreasson, Martin Magnusson, and Achim J. Lilienthal. Has something changed here? Autonomous difference detection for security patrol robots. In Proc. IROS, pages 3429–3435, San Diego, USA, 2007.

[2] Jules Bloomenthal and Brian Wyvill, editors. Introduction to Implicit Surfaces. Morgan Kaufmann Publishers Inc., San Francisco, CA, USA, 1997.

[3] Paulo Drews Jr, P. Núñez, R. Rocha, M. Campos, and J. Dias. Novelty detection and 3D shape retrieval using superquadrics and multi-scale sampling for autonomous mobile robots. In Proc. ICRA, pages 3635–3640, May 2010.

[4] Ralf Kaestner, Sebastian Thrun, Michael Montemerlo, and Matt Whalley. A non-rigid approach to scan alignment and change detection using range sensor data. In Proc. FSR, pages 179–194, 2005.

[5] Thomas Lewiner, Hélio Lopes, Antônio Wilson Vieira, and Geovan Tavares. Efficient implementation of marching cubes cases with topological guarantees. JGT, 8(2):1–15, December 2003.

[6] P. Núñez, P. Drews, A. Bandera, R. Rocha, M. Campos, and J. Dias. Change detection in 3D environments based on Gaussian mixture model and robust structural matching for autonomous robotic applications. In Proc. IROS, pages 2633–2638, October 2010.

[7] P. Núñez, P. Drews, R. Rocha, M. Campos, and J. Dias. Novelty detection and 3D shape retrieval based on Gaussian mixture models for autonomous surveillance robotics. In Proc. IROS, pages 4724–4730, Piscataway, NJ, USA, 2009. IEEE Press.

[8] A. Ricci. A constructive geometry for computer graphics. The Computer Journal, 16(2):157–160, 1973.

[9] T. C. Sprenger, R. Brunella, and M. H. Gross. H-blob: a hierarchical visual clustering method using implicit surfaces. In Proc. VIS, pages 61–68, Los Alamitos, CA, USA, 2000. IEEE Computer Society Press.

[10] Sebastian Thrun, Wolfram Burgard, and Dieter Fox. Probabilistic Robotics. The MIT Press, 2005.

[11] Sebastian Thrun, Dirk Hahnel, David Ferguson, Michael Montemerlo, Rudolph Triebel, Wolfram Burgard, Christopher Baker, Zachary Omohundro, Scott Thayer, and William Whittaker. A system for volumetric robotic mapping of abandoned mines. In Proc. ICRA, pages 4270–4275, 2003.

[12] Hugo Vieira Neto and Ulrich Nehmzow. Visual novelty detection for autonomous inspection robots. In Yoshihiko Takahashi, editor, Service Robot Applications, pages 309–330. I-Tech Education and Publishing, Vienna, Austria, 2008.

[13] Xujia Qin, Weihong Wang, and Qu Li. Practical Boolean operations on point-sampled models. In Proc. ICCSA, volume 3980, pages 393–401, 2006.
