

Component analysis to compute a representative feature vector. Using these landmarks, map building is described as an off-line approach. For map building, a robot equipped with a camera and a laser range finder is steered through the environment. Planar landmarks are detected by the previously described method. For each landmark, a 3D reconstruction of the contour is performed from two successive frames. The reconstructed landmark is then transferred into the global coordinate system using the robot pose information from the laser range finder.
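Although no implementation details are given here, the contour reconstruction from two successive frames amounts to standard two-view triangulation followed by a transformation into the map frame. The following is a minimal sketch of that step, not the implementation of the cited work: it assumes a calibrated camera with intrinsics K, camera-to-world poses of both frames derived from the laser-based robot pose, and already matched contour points; the function and variable names are illustrative only.

```python
import numpy as np
import cv2

def triangulate_contour(K, pose_w_c1, pose_w_c2, pts1, pts2):
    """Triangulate matched contour points from two calibrated views and
    return them directly in the world (map) frame.

    K            -- 3x3 camera intrinsic matrix
    pose_w_c1/2  -- 4x4 camera-to-world poses of the two frames
                    (assumed to come from the laser-based robot pose)
    pts1, pts2   -- Nx2 arrays of matched contour points in pixels
    """
    # A projection matrix maps world points into an image: P = K [R|t],
    # where [R|t] is the world-to-camera transform, i.e. the inverse of
    # the given camera-to-world pose.
    P1 = K @ np.linalg.inv(pose_w_c1)[:3, :]
    P2 = K @ np.linalg.inv(pose_w_c2)[:3, :]

    # OpenCV expects 2xN point arrays and returns homogeneous 4xN points.
    X_h = cv2.triangulatePoints(P1, P2,
                                pts1.T.astype(np.float64),
                                pts2.T.astype(np.float64))
    X = (X_h[:3] / X_h[3]).T   # Nx3 contour points in world coordinates
    return X
```

Building the projection matrices from the world-frame camera poses yields the reconstructed contour directly in the global coordinate system, which corresponds to the transfer step described above.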

Robot localization then works by matching planar landmarks detected in the current view with the map landmarks. For a matched landmark, the four corner points are used to create 3D-2D point correspondences. The pose of the robot is then computed from the four 3D-2D point correspondences using the planar P4P method and a subsequent iterative refinement [45].
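The planar P4P algorithm of [45] is not reproduced here. As an illustration of how a pose can be recovered from four coplanar 3D-2D correspondences, the sketch below uses OpenCV's solvePnP with the IPPE solver for coplanar points, followed by a Levenberg-Marquardt refinement, as a stand-in for the planar P4P method and its subsequent iterative refinement; the function and parameter names are hypothetical, and the use of OpenCV (>= 4.1 for solvePnPRefineLM) is an assumption of this sketch.

```python
import numpy as np
import cv2

def pose_from_landmark(corners_3d, corners_2d, K, dist=None):
    """Estimate the camera pose from the four corner correspondences of a
    matched planar landmark.

    corners_3d -- 4x3 array of landmark corners in world coordinates
    corners_2d -- 4x2 array of the detected corners in the image (pixels)
    K          -- 3x3 camera intrinsic matrix
    dist       -- lens distortion coefficients (zeros if undistorted)
    """
    obj = corners_3d.astype(np.float64).reshape(-1, 1, 3)
    img = corners_2d.astype(np.float64).reshape(-1, 1, 2)
    K = np.asarray(K, dtype=np.float64)
    dist = np.zeros(5) if dist is None else dist

    # Closed-form pose for coplanar points (IPPE), standing in for the
    # planar P4P method referenced in the text.
    ok, rvec, tvec = cv2.solvePnP(obj, img, K, dist, flags=cv2.SOLVEPNP_IPPE)
    if not ok:
        raise RuntimeError("pose estimation failed")

    # Subsequent iterative (Levenberg-Marquardt) refinement of the pose.
    rvec, tvec = cv2.solvePnPRefineLM(obj, img, K, dist, rvec, tvec)

    R, _ = cv2.Rodrigues(rvec)              # world-to-camera rotation
    cam_pos_world = (-R.T @ tvec).ravel()   # camera position in the world frame
    return R, tvec, cam_pos_world
```

With only four coplanar points the closed-form solution is sensitive to noise in the detected corners, which is one reason a subsequent iterative refinement of the reprojection error is useful.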

A key concept of this approach is the visibility map for localization. It is assumed that a path-planning process defines a trajectory in the world coordinate frame. According to this path, the best-suited landmarks are selected for the different sections of the path. Localization is performed in this approach from a single landmark. The active camera is used to keep the landmarks selected from the visibility map in the field of view. The quadrangular landmarks, however, impose a very strong restriction on the potential landmarks; even for indoor environments this is a severe limitation. Moreover, the localization algorithm relies on all four corner points, so a single occluded corner point renders the whole landmark invalid. For practical applications this constraint will be too rigid.

A different SLAM approach using plane features has been proposed by Molton et al. [77]. The pose is computed by image alignment between learned and detected landmarks.

2.5 Summary

The state-of-the-art presented in the previous sections focused on methods which allow the construction of a world map and global localization therein. Both SLAM approaches and batch approaches were covered. A SLAM approach allows incremental map building during operation, whereas in a batch approach the map has to be created prior to robot operation. This does not mean that the map building cannot be automated: a mobile robot navigating with a laser range finder could traverse the environment and acquire image data, and the world map could then be constructed off-line from these data. Afterwards this map could be used for localization and navigation by other robots equipped only with a digital camera. This makes sense because digital cameras are much cheaper than laser range finders, and the robots which operate later on need only a digital camera. In general a SLAM approach is more challenging to develop, but in most cases it would be possible to extend existing batch versions to full SLAM approaches. The approaches proposed by Se et al. [96], Karlsson et al. [56] and Davison et al. [21] are SLAM approaches; the others are batch approaches. The approach of Kosaka [59] even requires a manually constructed CAD model of the world. All of the presented SLAM approaches use a metric map containing 3D point features. Extraction and reconstruction of point landmarks can be done very fast, whereas other landmarks such as lines, planes or vanishing points require more complex detection and reconstruction methods and might not be applicable for the real-time updating of landmarks required in a SLAM framework. Until now the influence of the map features on navigation and path planning has been neglected; however, it is worth discussing.

For path planning the robot has to know which locations are obstructed and which are free. Clearly, this information should be provided by the world representation. The described SLAM approaches construct sparse world representations consisting of distinct point features only. A path planning algorithm does not know whether the space between the landmarks is occupied or free.
