
TABLE I
COMPARATIVE STUDY OF DIFFERENT CHANGE DETECTION ALGORITHMS.

                         | Number of Points   | Build Ref. Map      | Build Cur. Map      | Detection
Dataset                  | Ref. Map  Cur. Map | Nuñez et al.   Our  | Nuñez et al.   Our  | Nuñez et al.   Our
Real Data - Test Area 1  | 79171     79633    | 177.21         1.12 | 164.56         1.07 | 0.014          0.146
Real Data - Test Area 2  | 79171     81134    | 177.36         1.16 | 110.86         1.05 | 0.014          0.149
Real Data - Test Area 3  | 79171     80112    | 167.88         1.14 | 109.22         1.06 | 0.028          0.166

detection, we are able to detect changes even for small-sized changes (see Figure 5-b).

Figure 6-b shows the processing time of the proposed algorithm. The processing time is obtained considering the total time for each of the three steps: (1) building the implicit volume for the reference point cloud, (2) building the implicit volume for the point cloud with changes, and (3) performing the Boolean operation (change detection). The results are shown with mean time and standard deviation. This graph shows that increasing the grid size increases the computational time without any gain in accuracy for grid sizes greater than 80 × 80 × 80, for which high accuracy is obtained in less than half a second.
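The excerpt does not include source code for these three steps. As a rough illustration only, the sketch below uses binary occupancy grids as a simplified stand-in for the implicit volumes, a fixed axis-aligned bounding box, and the 80 × 80 × 80 resolution mentioned above; all function and variable names are ours and not the authors' implementation.

```python
import numpy as np

def occupancy_grid(points, bounds_min, bounds_max, resolution=80):
    """Binary occupancy grid over a fixed bounding box (simplified stand-in for an implicit volume)."""
    extent = bounds_max - bounds_min
    # Map each point to a voxel index in [0, resolution - 1].
    idx = np.floor((points - bounds_min) / extent * resolution).astype(int)
    idx = np.clip(idx, 0, resolution - 1)
    grid = np.zeros((resolution, resolution, resolution), dtype=bool)
    grid[idx[:, 0], idx[:, 1], idx[:, 2]] = True
    return grid

def detect_changes(ref_points, cur_points, bounds_min, bounds_max, resolution=80):
    """Steps (1)-(3): voxelize both clouds, then a Boolean operation yields the changes."""
    ref = occupancy_grid(ref_points, bounds_min, bounds_max, resolution)   # step (1)
    cur = occupancy_grid(cur_points, bounds_min, bounds_max, resolution)   # step (2)
    appeared = cur & ~ref    # occupied now, free in the reference map
    removed = ref & ~cur     # free now, occupied in the reference map
    return appeared, removed # step (3), Boolean difference

# Usage with synthetic data:
rng = np.random.default_rng(0)
ref_cloud = rng.uniform(0.0, 1.0, size=(80_000, 3))
cur_cloud = np.vstack([ref_cloud, rng.uniform(0.4, 0.5, size=(500, 3))])  # insert a small object
lo, hi = np.zeros(3), np.ones(3)
appeared, removed = detect_changes(ref_cloud, cur_cloud, lo, hi)
print(appeared.sum(), "voxels gained,", removed.sum(), "voxels lost")
```

In such a scheme each point maps to exactly one voxel and the Boolean operation visits each voxel once, so for a fixed grid the cost grows linearly with the number of points, which is consistent with the linear-time behaviour reported below.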

Fig. 6. Results obtained using different datasets acquired with the Kinect device, with objects of different sizes and shapes and for different poses. (a) The statistical assessment κ index of agreement as the grid size changes. (b) Processing time for each dataset as the grid size changes.
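The excerpt does not spell out how the κ index is computed. Assuming it is the usual Cohen's kappa evaluated per voxel against a ground-truth change mask (our assumption, not stated here), it could be computed as in this minimal sketch:

```python
import numpy as np

def kappa_index(detected, ground_truth):
    """Cohen's kappa between detected-change voxels and ground-truth change voxels.

    Both inputs are boolean grids of the same shape.
    """
    d = detected.ravel()
    g = ground_truth.ravel()
    p_o = np.mean(d == g)                                        # observed agreement
    p_e = d.mean() * g.mean() + (1 - d.mean()) * (1 - g.mean())  # agreement expected by chance
    return (p_o - p_e) / (1 - p_e)
```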

In order to experimentally show the linear time complexity of our method, we computed changes for point clouds of different sizes. For this experiment, nine sets were used, for which the number of points varied from 17k to 144k. For each dataset, five different acquisitions were processed to present the mean and standard deviation of the processing time, as shown in Figure 7.

Fig. 7. Computation time for different point cloud sizes. Nine sets with different sizes, from 17k to 144k points, were used to show the linear time complexity in the number of points.
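A benchmark of this kind could be run roughly as follows; this reuses the hypothetical detect_changes sketch above, and only the five repetitions per dataset come from the text, everything else is ours:

```python
import time
import numpy as np

def time_change_detection(cloud_pairs, runs=5, resolution=80):
    """Mean and standard deviation of the total pipeline time for each (ref, cur) pair."""
    results = []
    for ref_cloud, cur_cloud in cloud_pairs:
        lo = np.minimum(ref_cloud.min(axis=0), cur_cloud.min(axis=0))
        hi = np.maximum(ref_cloud.max(axis=0), cur_cloud.max(axis=0))
        samples = []
        for _ in range(runs):
            t0 = time.perf_counter()
            detect_changes(ref_cloud, cur_cloud, lo, hi, resolution)
            samples.append(time.perf_counter() - t0)
        results.append((len(cur_cloud), np.mean(samples), np.std(samples)))
    return results  # list of (number of points, mean time, std dev)
```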

B. Comparison with a state-of-the-art approach

We compare our method with the one presented in [6] on their dataset. That dataset was acquired in the hallways of a building of the Federal University of Minas Gerais using a Pioneer 2-AT robot equipped with two SICK LMS-200 laser scanners mounted orthogonally, in order to localize and acquire 3D maps. For the experiments, three different novelties were included in the robot's workspace: an identification cylinder, a person and a printer box. The environment and the objects used are shown in Fig. 8-a and 8-b.

Considering the results shown in [6], we are able to compare only the processing time. The accuracy of the method is not available because different metrics were used in the evaluation processes of each method. As in [6], we are also able to detect changes in all three scenarios,

Fig. 8. Experimental Setup 2: (a) Environment and experimental setup composed of a Pioneer 2-AT robot equipped with two orthogonally mounted laser scanners acquiring 3D data. (b) Three different changes were inserted in the robot's environment.
