
DAGM 2008, LNCS 5096, pp. 284–293

3D Body Scanning in a Mirror Cabinet

Sven Molkenstruck, Simon Winkelbach, and Friedrich M. Wahl

Institute for Robotics and Process Control, Technical University of Braunschweig,
Mühlenpfordtstr. 23, D-38106 Braunschweig, Germany
{S.Molkenstruck, S.Winkelbach, F.Wahl}@tu-bs.de

Abstract. Body scanners offer significant potential for use in many applications such as the clothing industry, orthopedics, surgery, healthcare, monument conservation, art, and the film and computer game industries. In order to avoid distortions due to body movements (e.g. breathing or balance control), it is necessary to scan the entire surface in one pass within a few seconds. Most commercially available body scanners require several scans to capture all sides, or they consist of several sensors, which makes them unnecessarily expensive. We propose a new body scanning system that is able to fully scan a person's body or an object from three sides at the same time. By taking advantage of two mirrors, we require only a single grayscale camera and at least one standard line laser, making the system far more affordable than previously possible. Our experimental results demonstrate the efficiency and usability of the proposed setup.

1 Introduction

The first systems for three-dimensional body scanning were developed more than ten years ago (see e.g. [1], [2]), but the exploitation of all their promising applications is still in its very early stages. Modern body scanners can capture the whole shape of a human in a few seconds. Such systems offer significant potential for use in e.g. the clothing industry, orthopedics, surgery, healthcare, monument conservation, art, and the film and computer game industries. Recently, Treleaven and Wells published an article that thoroughly emphasizes the expected "major impact on medical research and practice" [3]. They exemplify the high capability of 3D body scanners to improve e.g. scoliosis treatment, prosthetics, drug dosage, and cosmetic surgery. Most publications dealing with body scanners are motivated by the increasing importance of fast and automatic body measuring systems for the clothing industry (see e.g. [1], [4], [5]), since such systems may enable the industry to produce mass customized clothing.

In recent years a few approaches for contact-free 3D body scanning have been proposed, and some commercial products are already available. Most systems rely on well-known acquisition techniques like coded light, phase shift, structured light, laser triangulation, and time-of-flight (see [6] for a review of different range sensors). Surface acquisition techniques are state of the art (see e.g. [7], [8], [9]), but when they are applied to body scanning, there are still some restrictive conditions that have to be taken into consideration. In order to avoid problems with body movement (e.g. due to breathing or balance control), it is necessary to scan the entire surface in one pass within a few seconds. Whole-body scanners must capture the surface from multiple viewing directions to obtain all sides of a person. Therefore most of them consist of several sensors, which makes them very expensive.

Fig. 1. Proposed body scanner setup consisting of two mirrors, at least one camera, and multiple line laser modules which are mounted on a linear slide.

Alternative low-cost solutions are in great demand. We propose a new body scanner system that is able to scan a person's body or an object from all sides at the same time. By making intelligent use of two mirrors, we require only a single grayscale camera and at least one standard line laser, making the system far more affordable than was previously possible.

2 System Setup

Our proposed body scanner setup is illustrated in Fig. 1. It consists of two mirrors, at least one camera, and multiple line laser modules mounted on a linear slide. Surface acquisition is based on standard triangulation (i.e. intersection of camera rays and laser planes, see e.g. [7], [10]). The key element of our setup is the pair of mirrors behind the person, which enclose an angle of about 120°. This 'mirror cabinet' not only lets each camera capture three viewing directions of the subject (see Fig. 2), but additionally reflects each laser plane into itself, yielding a coplanar illumination from three directions and a visible 'laser ring' on the subject's surface. This enables a simultaneous measurement of multiple sides of the person with minimal hardware requirements.

Details, possible variations, and recommendations about the setup are discussed in Section 4.


Fig. 2. View of the upper camera. Each camera captures three viewing directions of the subject: two mirrored views in the left and right side of the image, and one direct view in the middle.

2.1 Camera/Mirror Calibration

A precise camera calibration is an essential precondition for accurate body scanning. To estimate the intrinsic and extrinsic camera parameters, the calibration process requires points in 3D world coordinates and corresponding points in 2D pixel coordinates. These points should preferably span the entire working space (i.e. the scan volume). Therefore, we built a calibration target with visible markers on three sides, as shown in Fig. 3. Since each camera in our mirror setup captures images in three viewing directions, we derive three camera models for each camera: one model for the center view and two for the left and right reflected views. In this way we can act as if we used three separate cameras that see three different sides of the body. Our enhanced calibration procedure consists of three steps:

1. Separate calibration of the reflected left, reflected right, and unreflected center camera.
2. Coarse pose estimation of the left and right mirror plane.
3. Combined fine calibration of the overall mirror-camera setup.

The first step applies the standard camera calibration approach of Tsai [11] three times, except that the reflected cameras require the x-component to be reflected whenever we map from image to world coordinates or vice versa. The results are three calibrated camera models representing the three viewing directions of one real camera.
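
To illustrate the x-reflection, here is a minimal pinhole projection sketch (our own simplification, ignoring lens distortion, not the paper's exact implementation): for a view observed via a mirror, the normalized x-coordinate is negated before applying the intrinsics, so the standard calibration machinery can be reused unchanged.

```python
import numpy as np

def project_pinhole(K, R, t, X, mirrored=False):
    """Project world point X with a simple pinhole model (intrinsics K,
    rotation R, translation t). For a view seen via a mirror, the image
    is left/right flipped, so the normalized x-coordinate is negated
    before applying the intrinsics."""
    Xc = R @ X + t                       # world -> camera coordinates
    x, y = Xc[0] / Xc[2], Xc[1] / Xc[2]  # perspective division
    if mirrored:
        x = -x                           # undo the mirror's left/right flip
    u = K[0, 0] * x + K[0, 2]            # pixel coordinates
    v = K[1, 1] * y + K[1, 2]
    return u, v
```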

Fig. 3. Calibration target consisting of three white panels mounted at a precise angle of 120° and having a total of 48 markers: (left) schematic layout; (right) shot from the upper camera.

The second step estimates an initial pose of each mirror, using the fact that each mirror lies in the midplane between two camera focal points. For example, the left mirror plane is given by the plane equation

$(\mathbf{f}_c - \mathbf{f}_l) \cdot \left(\mathbf{x} - (\mathbf{f}_l + \mathbf{f}_c)/2\right) = 0, \quad \mathbf{x} \in \mathbb{R}^3 \qquad (1)$

with focal point $\mathbf{f}_c$ of the unreflected center camera and focal point $\mathbf{f}_l$ of the reflected left camera.
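
A minimal NumPy sketch of this midplane construction from Eq. (1); the function name and the (normal, offset) plane representation are our own conventions.

```python
import numpy as np

def initial_mirror_plane(f_center, f_reflected):
    """Coarse mirror pose as in Eq. (1): the mirror lies in the midplane
    between the real focal point and its mirrored counterpart.
    Returns the plane as (unit normal n, offset d) with {x : n.x = d}."""
    n = f_center - f_reflected            # plane normal, as in Eq. (1)
    n = n / np.linalg.norm(n)
    midpoint = (f_center + f_reflected) / 2.0
    return n, float(n @ midpoint)

# e.g. left mirror: n, d = initial_mirror_plane(f_c, f_l)
```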

In theory, these two calibration steps are sufficient for the following triangulation approach. However, a well-known weakness of fixed camera calibration is the close relationship between the focal length $f$ and the camera-object distance $T_z$: small changes in $f$ and small changes in $T_z$ have very similar effects on the image, which leads to measurement inaccuracy. This is the main reason for the slight differences in the focal lengths of the three camera models (which should be equal). Therefore, the calibration is refined in the third step by performing a numerical optimization of both mirror poses and all center camera model parameters, incorporating all 48 markers. Finally, optimized left and right camera models can be derived from the optimized center camera by reflection at the left and right mirror plane. A comparison of scans with and without this last optimization step is given in the last section (Fig. 5).
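
The paper does not prescribe a particular optimizer; as one plausible realization, the joint refinement can be expressed as a nonlinear least-squares problem over the reprojection errors of all markers, e.g. with SciPy. The parameter packing and the `project` helper below are hypothetical.

```python
import numpy as np
from scipy.optimize import least_squares

def refine_setup(params0, observations, project):
    """Step 3 sketch: jointly refine the center-camera parameters and both
    mirror poses by minimizing reprojection error over all 48 markers.
    `params0` packs the free parameters (focal length, camera pose, two
    mirror planes); `project(params, X, view)` is a hypothetical function
    that projects world point X into the given view ('center', 'left',
    'right'), reflecting X at the corresponding mirror plane first."""
    def residuals(params):
        res = []
        for X, view, u, v in observations:   # (3D marker, view, 2D pixel)
            u_hat, v_hat = project(params, X, view)
            res.extend([u_hat - u, v_hat - v])
        return np.asarray(res)
    return least_squares(residuals, params0).x  # refined parameter vector
```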

3 3D Scanning

Computation of surface data consists of three steps: (i) extraction of the laser line(s) from the camera images, (ii) determination of the laser slide position for each camera frame, and (iii) triangulation of surface points and filtering. We suggest performing only the first step during scanning; all further processing steps can be carried out afterwards, when time is no longer critical.


3.1 Laser Line Extraction

From each camera image, the laser line(s) need to be extracted for later processing. In our case an efficient column-wise line detection algorithm can be applied, since the laser lines run roughly horizontally (see Fig. 2). In order to obtain the peak position of a laser line with subpixel precision, we use a conventional center-of-mass estimation: for each image column x, the subpixel-precise y coordinate of a laser line is calculated as the average y coordinate of all "bright" pixels within a moving window, weighted by their corresponding pixel intensities. (See [12] for a comparison of different subpixel peak detection methods.) The resulting peak coordinates per image column are collected for each camera frame and are used in the following steps.
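
A minimal single-laser sketch of this center-of-mass extraction; the threshold and window size are tuning assumptions, and with n lasers up to n peaks per column would be extracted the same way.

```python
import numpy as np

def extract_laser_peaks(img, threshold=50, window=5):
    """Column-wise subpixel laser peak extraction by center of mass.
    For every image column, return the intensity-weighted mean y of the
    bright pixels around the brightest pixel, or NaN if none is bright."""
    h, w = img.shape
    peaks = np.full(w, np.nan)
    for x in range(w):
        col = img[:, x].astype(float)
        y0 = int(np.argmax(col))            # brightest pixel in this column
        if col[y0] < threshold:
            continue                        # no laser line in this column
        lo, hi = max(0, y0 - window), min(h, y0 + window + 1)
        ys = np.arange(lo, hi)
        weights = col[lo:hi]
        peaks[x] = (ys * weights).sum() / weights.sum()  # center of mass
    return peaks
```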

3.2 Laser Calibration

There are several possible ways to obtain the laser slide position (i.e. the height of the laser light planes) for each camera frame: (i) synchronized camera(s) and a servo drive system for the laser movement, (ii) stereo vision using synchronized cameras, or (iii) a laser calibration target (a diffusely reflective object/surface) at a precisely known location, such that at least one laser line is always visible on it.

The latter method has been implemented in our experimental setup, as it has the advantage of requiring neither hardware synchronization nor expensive servo drive systems. In our setup we simply use the two flat panels behind the mirrors as the laser calibration target, since these panels run parallel to the previously calibrated mirror planes. 3D points along the laser lines that lie on the panels are obtained by ray-plane intersection. To be robust with respect to occlusions and noise, we assume that the laser moves at constant speed. This allows us to estimate a linear function that represents the laser height over time, e.g. by using the Hough transform for lines [13], [14]. After this processing step, the position (height) of the laser slide is precisely known for each camera frame, allowing the triangulation of 3D surface points described in the following.
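
The paper uses the Hough transform for this line fit; the sketch below substitutes a simple RANSAC line fit, which serves the same purpose of robustly estimating height(t) = a·t + b from the per-frame panel measurements. The tolerance and iteration count are assumptions.

```python
import numpy as np

def fit_height_over_time(frames, heights, n_iter=500, tol=2.0):
    """Robust linear fit of laser height over frame number, assuming the
    slide moves at constant speed; `tol` (same unit as heights) bounds the
    inlier distance. Returns slope a and intercept b of height = a*t + b."""
    frames = np.asarray(frames, float)
    heights = np.asarray(heights, float)
    rng = np.random.default_rng(0)
    best_inliers = np.zeros(len(frames), bool)
    for _ in range(n_iter):
        i, j = rng.choice(len(frames), size=2, replace=False)
        if frames[i] == frames[j]:
            continue                         # degenerate sample, skip
        a = (heights[j] - heights[i]) / (frames[j] - frames[i])
        b = heights[i] - a * frames[i]
        inliers = np.abs(a * frames + b - heights) < tol
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    # final least-squares refit on the inlier set
    a, b = np.polyfit(frames[best_inliers], heights[best_inliers], 1)
    return a, b
```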

3.3 Triangulation and Filtering

Triangulation of 3D surface points from calibrated cameras and known laser planes is simple if we use a single laser plane. But since we want to use several laser planes simultaneously, we have to solve the pixel-to-plane correspondence problem (i.e. identify the laser plane number that corresponds to a given illuminated pixel). Here we propose two heuristics, which can be combined:

Bounding Volume Constraint. When the scanned subject is a human body and its approximate position in the scene is known, its surface can be roughly approximated by a bounding volume, e.g. by a surrounding vertical cylinder c with a certain height and radius. Triangulation with all n possible laser planes yields n different surface points, which lie on the corresponding light ray to the camera. In a reasonable setup, only one of them lies inside c (see Fig. 4).

Fig. 4. Intersection of a camera ray with the light planes of laser 1 and laser 2 produces two possible surface points $p_1$ and $p_2$. However, only the correct point $p_2$ lies inside the bounding cylinder.

The maximum radius of c, i.e. the scanning region, depends on the distance between the laser planes and the triangulation angle between the laser planes and the camera view. Obviously these parameters need to be adjusted to the scanned subject's size.
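
A sketch of this disambiguation under assumed conventions (z-axis up, laser planes stored as (n, d) pairs): the camera ray is intersected with every laser plane, and the unique candidate inside the cylinder is kept.

```python
import numpy as np

def triangulate_in_cylinder(ray_o, ray_d, planes, axis_xy, radius, z_range):
    """Pixel-to-plane correspondence via the bounding cylinder: intersect
    the camera ray (origin ray_o, direction ray_d) with each laser plane
    (n, d), i.e. {x : n.x = d}, and keep the point inside the vertical
    cylinder (axis at axis_xy, given radius and height range z_range)."""
    for n, d in planes:
        denom = n @ ray_d
        if abs(denom) < 1e-9:
            continue                      # ray (almost) parallel to plane
        t = (d - n @ ray_o) / denom
        if t <= 0:
            continue                      # intersection behind the camera
        p = ray_o + t * ray_d
        if (np.hypot(p[0] - axis_xy[0], p[1] - axis_xy[1]) <= radius
                and z_range[0] <= p[2] <= z_range[1]):
            return p                      # the unique candidate inside c
    return None                           # no valid point on this ray
```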

Column-Wise Laser Counting. Another possible way of determining the correct laser for each illuminated surface point can be integrated into the laser line extraction algorithm (Section 3.1): if the number of detected laser peaks in an image column equals the number of lasers n, their respective laser plane indexes i can easily be determined by simple counting.

However, this is not possible in all columns due to occlusion, reflections, image noise, etc. Therefore, we suggest using a specialized labeling algorithm that assigns the same label to illuminated points that are direct neighbors in space or time (i.e. frame number). All points with the same label have thus most probably been illuminated by the same laser line. The index of this laser line can be determined by a majority vote over all available indexes i, as sketched below.
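
A compact sketch of such a labeling step using connected components over the (frame, column) grid; representing the peaks as a boolean mask and the counted indexes as an integer array (with -1 for "unknown") is our own encoding, not the paper's.

```python
import numpy as np
from scipy import ndimage

def label_and_vote(peak_mask, plane_index):
    """Assign one laser index per connected blob of detected peaks.
    `peak_mask[t, x]` marks a peak at column x in frame t; `plane_index`
    holds the counted index where counting succeeded, -1 elsewhere.
    Neighbors in time (t) or space (x) receive the same label; each blob
    then takes the majority vote of its known indexes."""
    labels, n = ndimage.label(peak_mask)   # 4-connectivity over (t, x)
    result = np.full_like(plane_index, -1)
    for lab in range(1, n + 1):
        member = labels == lab
        known = plane_index[member & (plane_index >= 0)]
        if known.size:
            vals, counts = np.unique(known, return_counts=True)
            result[member] = vals[np.argmax(counts)]  # majority vote
    return result
```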

Note: The same principle of majority voting can be used to decide whether points with a specific label have been seen directly, via the left mirror, or via the right mirror. This can be necessary if some directly visible parts of the subject (e.g. the arms, see Fig. 2) reach into the left and/or right image region, occluding the mirrors.

3.4 Postprocessing

Finally, the resulting surface fragments are merged into a closed triangle mesh using Poisson reconstruction [15]. If they do not fit perfectly, e.g. due to imprecisions in the setup or calibration, surface registration techniques like ICP [16], [17] can be used to improve their alignment.
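
As one possible modern toolchain (not the paper's original implementation), both steps are available in Open3D; the correspondence distance and Poisson depth below are assumptions.

```python
import numpy as np
import open3d as o3d

def merge_fragments(fragments):
    """Align each surface fragment to the growing model with point-to-point
    ICP, fuse all points, and mesh them with Poisson reconstruction."""
    target = fragments[0]
    for src in fragments[1:]:
        reg = o3d.pipelines.registration.registration_icp(
            src, target, 5.0,              # max correspondence distance (mm)
            np.eye(4),                     # initial alignment: identity
            o3d.pipelines.registration.TransformationEstimationPointToPoint())
        src.transform(reg.transformation)  # apply the estimated alignment
        target += src                      # accumulate into one point cloud
    target.estimate_normals()              # Poisson needs oriented normals
    target.orient_normals_consistent_tangent_plane(30)
    mesh, _ = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(
        target, depth=9)
    return mesh
```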

4 Experiments and Conclusion

4.1 Experimental Setup

The approach described in this paper allows several setup variations. One parameter is the number of parallel laser planes. Increasing this number has the advantage of decreasing the duration of a whole surface scan, but it makes the pixel-to-plane correspondence problem (discussed in Section 3.3) more difficult. In our experiments, we have used one to three lasers (650 nm, 16 mW) with a spacing of 20 cm. The lasers are mounted on a linear slide, which is driven at constant speed. Images are captured by two FireWire grayscale cameras (1024 × 768 pixels, 30 fps), set up as depicted in Fig. 1. With three lasers we can capture up to 3 × 30 = 90 body slices per second, which should be sufficient for most applications.

Depending on the scanned object, a single camera may be sufficient; but if some surface areas are occluded, a second camera may be useful to cover them. Since only the laser lines are extracted from the camera images during scanning, it may be useful, depending on the environmental light conditions, to attach a bandpass filter matching the laser wavelength to the cameras.

In our prototypical setup, we use two ordinary hall mirrors from a do-it-yourself store (62 cm × 170 cm each). Although those mirrors are not perfectly flat (especially the one on the right), we obtained satisfying results. However, industrial front surface mirrors should be preferred in a professional setup.

4.2 Results and Conclusion

To verify the absolute precision of our setup, we have scanned a precisely known object: a cuboid of 227.7 mm × 316.8 mm × 206.6 mm, mounted on a tripod at about chest height (Fig. 5). The obtained scan has an absolute dimensional error of approx. 1 mm. However, the influence of our slightly distorted right mirror can be seen at the right side of Fig. 5(b): the right edge of the cuboid is not precisely parallel to the left one.

Furthermore, we have analyzed the surface noise of the measured cuboid faces (i.e. the distances of the captured surface points to the corresponding planes). The scan data of the front side, which is directly visible in the camera image, have the lowest noise: the 17025 captured surface points exhibit a standard deviation of 0.63 mm, and their maximum distance is less than 2.2 mm. As the other three faces are not directly visible (only as reflections in the mirrors), their pixel resolution is lower and the noise is slightly stronger: we obtained 5993 to 8558 surface points per side, with a standard deviation of 0.82 mm to 0.93 mm and a maximum outlier distance of 3.9 mm.
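
Such a noise figure can be reproduced by fitting a plane to the points of one face and evaluating the point-to-plane distances; a small sketch of this evaluation (our own, using a PCA plane fit):

```python
import numpy as np

def face_noise(points):
    """Fit a plane to the surface points of one cuboid face via PCA and
    return (standard deviation, maximum absolute point-to-plane distance)."""
    c = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - c)  # last right singular vector is
    n = vt[-1]                            # the direction of least variance
    dist = (points - c) @ n               # signed point-to-plane distances
    return dist.std(), np.abs(dist).max()
```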

Some qualitative results from our experiments can be seen in Fig. 7 and Fig. 8. Fig. 6 shows a corresponding camera view directly before scanning and an overlay of multiple camera images during scanning. The 3D data have only been filtered with a 3 × 3 averaging filter. On closer inspection of the surface one can see some small horizontal waves that are caused by slight body motion, which is almost impossible to avoid. In future work, we will try to eliminate this waviness by adjusting the balance point of each body slice. Gaps appear in the 360° view on overly specular or dark surface parts (e.g. hair), and in regions which are occluded from the lasers or cameras by other body parts.

Fig. 5. Unfiltered scan of a 206.6 mm × 316.8 mm × 227.7 mm cuboid. (a) Horizontal slice (seen from above) after performing a standard camera calibration. (b) The same horizontal slice after performing our refined calibration (see step 3 in Section 2.1). The divergence at the right side is caused by our slightly distorting right mirror. (c) 3D rendering of the result.

Fig. 6. Shots of the upper camera: (left) person in the mirror cabinet; (right) overlaid laser lines of every 10th camera frame.

Our results demonstrate the usability and efficiency of the suggested body scanner setup. Its precision is sufficient for many applications in healthcare or the clothing industry, and the use of mirrors and the related algorithms makes our setup very cost-efficient compared to previous techniques, because each mirror saves the cost of at least one camera, one laser, and one linear slide.


Fig. 7. Scan results of the person in Fig. 6: (a) front and rear view of a scan using the lower camera only; (b) merged data of the lower and upper camera, front view; (c) rear view.

Fig. 8. Further results: (a) front view of a female; (b) rear view. (c) Front view of a male; (d) rear view.


References

1. Jones, P.R.M., Brooke-Wavell, K., West, G.: Format for human body modelling from 3D body scanning. Int. Journal of Clothing Science and Technology 7(1) (1995) 7–15
2. Horiguchi, C.: BL (Body Line) scanner - the development of a new 3D measurement and reconstruction system. International Archives of Photogrammetry and Remote Sensing 32(5) (1998) 421–429
3. Treleaven, P., Wells, J.: 3D body scanning and healthcare applications. Computer, IEEE Computer Society 40(7) (2007) 28–34
4. Istook, C.K., Hwang, S.J.: 3D body scanning systems with application to the apparel industry. Journal of Fashion Marketing and Management 5(2) (2001) 120–132
5. Guerlain, P., Durand, B.: Digitizing and measuring of the human body for the clothing industry. Int. Journal of Clothing Science and Technology 18(3) (2006) 151–165
6. Blais, F.: Review of 20 years of range sensor development. Journal of Electronic Imaging 13(1) (2004)
7. Pipitone, F.J., Marshall, T.G.: A wide-field scanning triangulation rangefinder for machine vision. International Journal of Robotics Research 2(1) (1983) 39–49
8. Winkelbach, S., Wahl, F.M.: Shape from single stripe pattern illumination. In Van Gool, L., ed.: Pattern Recognition, 24th DAGM Symposium. Lecture Notes in Computer Science 2449, Springer (2002) 240–247
9. Winkelbach, S., Molkenstruck, S., Wahl, F.: Low-cost laser range scanner and fast surface registration approach. In Franke, K., Müller, K.-R., Nickolay, B., Schäfer, R., eds.: Pattern Recognition, 28th DAGM Symposium. Lecture Notes in Computer Science 4174, Springer (2006) 718–728
10. Hall, E.L., Tio, J.B.K., McPherson, C.A.: Measuring curved surfaces for robot vision. Computer 15(12) (1982) 42–54
11. Tsai, R.Y.: An efficient and accurate camera calibration technique for 3D machine vision. In: IEEE Conf. Computer Vision and Pattern Recognition (1986) 364–374
12. Fisher, R., Naidu, D.: A comparison of algorithms for subpixel peak detection. In Sanz, J.L.C., ed.: Image Technology, Springer (1996) 385–404
13. Hough, P.V.C.: Method and means for recognizing complex patterns. U.S. Patent 3,069,654 (December 1962)
14. Duda, R.O., Hart, P.E.: Use of the Hough transformation to detect lines and curves in pictures. Communications of the ACM 15(1) (1972) 11–15
15. Kazhdan, M., Bolitho, M., Hoppe, H.: Poisson surface reconstruction. In: Eurographics Symposium on Geometry Processing (2006) 61–70
16. Besl, P.J., McKay, N.D.: A method for registration of 3-D shapes. IEEE Trans. Pattern Anal. Machine Intell. 14(2) (1992) 239–258
17. Dalley, G., Flynn, P.: Pair-wise range image registration: a study in outlier classification. Comput. Vis. Image Underst. 87(1-3) (2002) 104–115
