
ithm are used to register two consecutive scans, assuming very small pose changes. Finally, a GraphSLAM approach is used to perform loop closing and to compensate for accumulated registration errors. They evaluated their system by moving the robot along a 10 m path; mapping accuracy was assessed via the distance between two walls, resulting in a deviation of 0.4 m.

In summary, the main drawbacks of using a TOF camera for SLAM are the limited field of view, the low resolution and the limited capture range. Hence, localization and map building are restricted to small indoor environments. A forward-looking camera with a field of view of 40° and a range of 5 m might capture no more than empty space in a typical office corridor. A side-looking camera typically observes a single wall, which constrains camera motion in only one direction. Using more than one camera to increase the field of view increases the overall complexity and runs into problems of synchronization, calibration, excessive data transfer and interference between the cameras.

In this work we use a TOF camera in an omnidirectional configuration to improve its localization capabilities. We use a number of planar mirrors to split and redirect the light source and the field of view. The number of mirrors is thereby scalable and limited only by the image area. Calibration of the camera extrinsics is particularly simple and does not require a large calibration device.

II. OMNIDIRECTIONAL TOF VISION

A. Imaging Principle

The mode of operation of a TOF camera is based on measuring the phase shift between an emitted amplitude-modulated light signal and its reflection.

The system consists of a light source, which is usually coaxial to the camera viewing direction. The emitted light is amplitude-modulated with a frequency $f_m$. In most cases, LEDs in the near-infrared spectrum around 850 nm are used for this purpose. Objects in the scene reflect the light, and the incoming signal is measured in each pixel and compared to the original modulation signal. Depending on the camera-object distance, the light travel time affects the phase shift between the original and the received signal. Both signals are correlated within a sensor pixel. By sampling four times per modulation period, the phase shift can be reconstructed as

\Phi = \arctan\frac{A_3 - A_1}{A_0 - A_2}, \quad (1)

where $A_0 \ldots A_3$ are the four samples, as shown in Figure 2.

Physically, the samples are measured as the charge difference in neighboring CCD pixels, where one is charged with the original signal and the other with the reflected signal.

Phase shift is related to scene depth through

d = \frac{\Phi}{4\pi f_m}\, c, \quad (2)

where $c$ is the speed of light.

Fig. 2. Measurement principle of a Time-of-Flight camera.
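A standard consequence of Eq. (2), worth noting here: since $\Phi$ wraps around at $2\pi$, the unambiguous measurement range is $d_{\max} = \frac{c}{2 f_m}$. At a typical modulation frequency of $f_m = 30\,\mathrm{MHz}$ this yields $d_{\max} = 5\,\mathrm{m}$, which is consistent with the limited capture range discussed above.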


Furthermore, most TOF cameras return the reflected signal amplitude, calculated by

A = \frac{\sqrt{(A_3 - A_1)^2 + (A_2 - A_0)^2}}{2}, \quad (3)

and the signal DC offset by

B = \frac{1}{2} \sum_i A_i. \quad (4)
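As a compact summary of Eqs. (1)-(4), here is a minimal per-pixel decoding sketch in Python (not taken from the paper); the array-valued samples and the 30 MHz default modulation frequency are assumptions, and arctan2 is used as the common quadrant-aware form of Eq. (1).

    import numpy as np

    def decode_tof(A0, A1, A2, A3, f_m=30e6, c=299792458.0):
        """Decode phase, depth, amplitude and DC offset from the four
        correlation samples of Eqs. (1)-(4); inputs are per-pixel arrays."""
        # Eq. (1): arctan2 recovers the full [0, 2*pi) phase range.
        phi = np.mod(np.arctan2(A3 - A1, A0 - A2), 2.0 * np.pi)
        # Eq. (2): depth is proportional to the phase shift.
        d = phi / (4.0 * np.pi * f_m) * c
        # Eq. (3): reflected signal amplitude.
        A = np.sqrt((A3 - A1) ** 2 + (A2 - A0) ** 2) / 2.0
        # Eq. (4): signal DC offset, with the 1/2 factor as given above.
        B = 0.5 * (A0 + A1 + A2 + A3)
        return phi, d, A, B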

B. Omnidirectional Geometry

In creating an omnidirectional device with a TOF camera, one has to account for the difference between the light source and the projection center: the focused imaging rays of the camera as well as the coaxial light source need to be redirected. Small parabolic mirrors, which are commonly used in omnidirectional devices, are not suitable, because the LED projection centers differ from the camera center and their rays are reflected in different directions. Fisheye lenses will not work for the LEDs either. Further, considering the application of indoor SLAM, we are not interested in covering the entire upper hemisphere with a modest resolution of 176 × 144 pixels.

Therefore, we propose to use four planar mirrors assembled as a pyramid, as sketched in Figure 1(a). This setup has several advantages: the imaging rays are redirected mainly to the sides, front and rear, where most geometric information should be present in an indoor environment. Furthermore, in contrast to classical omnidirectional devices, it is comparatively easy to align and calibrate planar mirrors in front of a camera.

We decided to place the pyramid diagonally with respect to the robot's motion direction, to capture most of the surrounding structure (see Figure 1(b)).
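To make the redirection concrete, here is a minimal Python sketch (not from the paper) of how a single planar mirror face maps an imaging ray; the 45° face orientation and the offset used in the example are hypothetical placeholders.

    import numpy as np

    def reflect_ray(origin, direction, n, d):
        """Reflect a camera ray at the mirror plane n . x = d (unit normal n).
        Returns the hit point on the mirror and the reflected direction."""
        s = (d - n @ origin) / (n @ direction)       # ray-plane intersection
        hit = origin + s * direction
        out = direction - 2.0 * (n @ direction) * n  # Householder reflection
        return hit, out

    # Hypothetical 45-degree pyramid face: an upward ray is bent sideways.
    n = np.array([np.sqrt(0.5), 0.0, -np.sqrt(0.5)])
    hit, out = reflect_ray(np.zeros(3), np.array([0.0, 0.0, 1.0]), n, -0.05)
    # out == [1, 0, 0]: the optical axis is redirected to the side.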

C. Calibration

Calibration of omnidirectional cameras traditionally involves the construction of an accurate, large and concave calibration target, inside which the camera is placed. Other approaches follow the traditional way of placing a planar reference target around the camera, which is feasible if the imaging model is mainly radially symmetric.

Our imaging model differs from this assumption. There are essentially four independent virtual cameras to calibrate, which differ in their extrinsics and share the same intrinsics, as the sketch below illustrates.
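As an illustration, the following Python sketch builds the virtual cameras under the usual planar-mirror model, where each virtual camera is the real camera reflected about one mirror plane; the intrinsic matrix K and the plane parameters are hypothetical, and this is not necessarily the paper's exact parametrization.

    import numpy as np

    def virtual_cameras(K, mirror_planes):
        """One virtual camera per mirror: all share the intrinsics K, and
        each takes its extrinsics from the reflection about its mirror
        plane n . x = d (unit normal n, in the real camera frame)."""
        cams = []
        for n, d in mirror_planes:
            R = np.eye(3) - 2.0 * np.outer(n, n)         # reflect directions
            t = 2.0 * d * n                              # shift across plane
            cams.append(K @ np.hstack([R, t[:, None]]))  # P_i = K [R_i | t_i]
        return cams

    # Hypothetical intrinsics for a 176 x 144 sensor and one 45-degree face.
    K = np.array([[250.0, 0.0, 88.0], [0.0, 250.0, 72.0], [0.0, 0.0, 1.0]])
    P = virtual_cameras(K, [(np.array([0.0, np.sqrt(0.5), -np.sqrt(0.5)]), -0.05)])[0]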

Because the mirrors are oriented in a divergent manner, it is
