
Real-time depth map generation using an FPGA-implemented stereo camera

Istvan Andorko, Dr. Peter Corcoran, Dr. Petronel Bigioi

College of Engineering and Informatics, NUI Galway; Tessera (Ireland) Ltd.

i.andorko1@nuigalway.ie; peter.corcoran@nuigalway.ie; petronel.bigioi@nuigalway.ie

Abstract

An FPGA-implemented stereo camera system is proposed whose aim is to generate accurate depth maps in real time at VGA resolution.

1. Introduction and progress of current research

The aim of our current research is to generate real-time, accurate depth maps. There are two main types of depth map generation algorithms. The first type produces depth maps that are not accurate enough and contain a lot of noise, but it can be implemented in hardware for real-time applications [1]. The second type is computationally expensive, but it can generate very accurate depth maps [2].
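To make the distinction concrete, the sketch below shows the brute-force, window-based SAD (sum of absolute differences) matcher that the first class of algorithms is built around. This is our own illustrative Python, with an assumed window size and disparity search range; it is not the exact configuration used in our tests.

    import numpy as np

    def sad_disparity(left, right, max_disp=64, win=7):
        """Naive window-based SAD block matching (illustrative sketch only).

        left, right : rectified grayscale images as 2-D float arrays.
        max_disp    : assumed disparity search range in pixels.
        win         : assumed (odd) matching window size.
        Returns a left-referenced disparity map of the same size.
        """
        h, w = left.shape
        half = win // 2
        disparity = np.zeros((h, w), dtype=np.float32)
        for y in range(half, h - half):
            for x in range(half, w - half):
                best_cost, best_d = np.inf, 0
                patch_l = left[y - half:y + half + 1, x - half:x + half + 1]
                # Search along the same row (rectified epipolar line).
                for d in range(min(max_disp, x - half) + 1):
                    patch_r = right[y - half:y + half + 1,
                                    x - d - half:x - d + half + 1]
                    cost = np.abs(patch_l - patch_r).sum()
                    if cost < best_cost:
                        best_cost, best_d = cost, d
                disparity[y, x] = best_d
        return disparity

FPGA implementations typically restructure this triple loop with line buffers and running window sums so that each pixel is read a fixed number of times; that regularity is what makes the SAD family hardware-friendly, at the cost of the noise mentioned above.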

Based on our study so far, almost all researchers have created test scenes that suit their algorithms best and that cannot be found in a “real-life” environment. Some of these pictures can be seen in figure 1.

Figure 1. Tsukuba stereo pair

Our idea was to create test setups that are very likely to occur when the user tries to create depth maps with the handheld stereo camera. An example of this can be seen in figure 2.

Figure 2. Stereo image of a face for the “real-life” scenario

The tests that we carried out used the less computationally expensive algorithms. For each setup we took pictures from four different distances and under four different illumination conditions. Figures 3.a, 3.b and 3.c show the differences between algorithms and setup conditions.


Figure 3.a. Difference between SAD and NCC algorithms

Figure 3.b. SAD algorithm, different illumination

Figure 3.c. SAD algorithm, similar illumination and different distance
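For reference, the two matching costs compared in figure 3.a can be written as follows, where W is the matching window centred on the pixel and d is the candidate disparity (the notation is ours, added only to define the abbreviations):

\[
\mathrm{SAD}(x,y,d) = \sum_{(i,j)\in W} \bigl| I_L(x+i,\,y+j) - I_R(x+i-d,\,y+j) \bigr|
\]

\[
\mathrm{NCC}(x,y,d) = \frac{\sum_{(i,j)\in W} I_L(x+i,\,y+j)\, I_R(x+i-d,\,y+j)}{\sqrt{\sum_{(i,j)\in W} I_L(x+i,\,y+j)^2 \;\sum_{(i,j)\in W} I_R(x+i-d,\,y+j)^2}}
\]

SAD needs only additions and absolute values, which keeps it cheap; NCC normalizes by the local energy of each window, making it more tolerant of gain differences between the two views but more expensive, since it requires multiplications, a division and a square root per candidate disparity.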

2. Future work

Regarding our future work, the plan is to find an algorithm that works well and gives similar results under different conditions (illumination, distance). The second step will be to make it work well on human faces, since in the consumer electronics industry most camera features are developed for faces. Finally, the algorithm will be implemented in a SoC and optimized to work in real time (30 fps) at VGA (640x480) resolution.
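As a rough, back-of-the-envelope check of what this target implies (our own figure, not taken from the measurements above), a 640x480 stream at 30 fps corresponds to 640 × 480 × 30 ≈ 9.2 million pixels per second, and the matching logic must additionally evaluate every candidate disparity for each of those pixels.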

3. Acknowledgement

The project is financed by the Irish Research Council for Science, Engineering and Technology (IRCSET) and Tessera (Ireland) Ltd.

4. References

[1] C. Georgulas et al., “Real-time disparity map computation module”, Microprocessors and Microsystems, vol. 32, pp. 159-170, 2008.

[2] V. Kolmogorov, R. Zabih, “Computing visual correspondence with occlusions via graph cuts”, International Conference on Computer Vision, pp. 508-515, 2001.
