
[Studies in Computational Intelligence 481] Artur Babiarz, Robert Bieda, Karol Jędrasiak, Aleksander Nawrat (auth.), Aleksander Nawrat, Zygmunt Kuś (eds.) - Vision Based Systems for UAV Applications (2013, Springer)


152 A. Babiarz, R. Bieda, and K. Jaskot

• Create two vectors with a common origin at the mass center of the object. The initial length of these vectors should be 1, and they should lie at two fixed, directed angles to the main axis of inertia, one on each side; angles are measured clockwise.

• If the first vector points to a background pixel and the second vector points to a pixel belonging to the object, the orientation is taken along the main axis of inertia.

• Conversely, if the second vector points to a background pixel and the first vector points to a pixel belonging to the object, the orientation is taken in the opposite direction (rotated by 180°).

• If both vectors point to pixels belonging to the object, the length of both vectors is increased by one and the whole procedure is repeated.

The algorithm is simple and efficient; however, it requires that the shape produced by the background subtraction algorithm does not differ significantly from the reference design presented in Figure 6.
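The probing procedure above can be sketched as follows. This is a minimal illustration only: the binary-mask representation, the function names, and the probe angles are assumptions made for the sketch, not the authors' implementation.

```cpp
#include <cmath>
#include <vector>

constexpr double kPi = 3.14159265358979323846;

// Binary object mask produced by background subtraction:
// mask[y][x] is true for pixels belonging to the object.
using Mask = std::vector<std::vector<bool>>;

// Returns true if the tip of a vector of the given length and absolute
// angle (radians, measured clockwise with the y axis pointing down),
// anchored at (cx, cy), lands on an object pixel.
static bool probeHitsObject(const Mask& mask, double cx, double cy,
                            double angle, double length) {
    int x = static_cast<int>(std::lround(cx + length * std::cos(angle)));
    int y = static_cast<int>(std::lround(cy + length * std::sin(angle)));
    if (y < 0 || y >= static_cast<int>(mask.size()) ||
        x < 0 || x >= static_cast<int>(mask[0].size()))
        return false;  // outside the image counts as background
    return mask[y][x];
}

// Resolves the 180-degree ambiguity of the main axis of inertia.
// axisAngle is the undirected axis orientation; alpha1 and alpha2 are
// the directed offsets of the two probe vectors from that axis.
double resolveOrientation(const Mask& mask, double cx, double cy,
                          double axisAngle, double alpha1, double alpha2) {
    for (double len = 1.0; ; len += 1.0) {
        bool hit1 = probeHitsObject(mask, cx, cy, axisAngle + alpha1, len);
        bool hit2 = probeHitsObject(mask, cx, cy, axisAngle + alpha2, len);
        if (hit1 && !hit2) return axisAngle;        // axis direction kept
        if (!hit1 && hit2) return axisAngle + kPi;  // flipped by 180 deg
        if (!hit1 && !hit2) return axisAngle;       // degenerate shape: give up
        // both probes still hit the object: lengthen and repeat
    }
}
```

Termination relies on the caveat noted above: for a well-formed marker shape, lengthening the probes eventually moves at least one of them onto the background.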

3 Obtained Results

The vision application works under Linux. It is designed as a centralized vision system providing vision information to any client that requests it. This goal was achieved by implementing the application as a TCP/IP server to which clients can connect and request vision information (Figures 8 and 9).

The application was tested on a PC running Ubuntu Linux, equipped with a Pentium 4 3.00 GHz processor with Hyper-Threading, 512 MB of DDR RAM, and a GeForce FX 5200 graphics card. Input images were acquired with a Samsung SCC-331 CCD camera fitted with an Ernitec lens (6-12 mm, 35.5 mm, 1:1.4) and digitized by a simple BT878-based frame-grabber card. Lighting was provided by four 500 W halogen lamps installed 2 m above the corners of the playing field.

The application was compiled with the GNU C++ compiler version 4 using the optimization flags -O3 -funroll-loops. On the test hardware the application uses approximately 50% of CPU time and about 20 MB of RAM. The frame rate is constant at 30 frames per second, which is the limit imposed by the frame-grabber hardware; with a better frame-grabber it should be possible to reach about 60 frames per second without any code refactoring or optimization. Precise measurements of the time needed to complete particular tasks of the vision algorithm are presented in Table 1; the figures were obtained by averaging 100 measurements. The vision server also responds very quickly: a typical response arrives in under 1 ms.
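The averaging scheme behind such per-task figures can be sketched as follows; this is a generic illustration using std::chrono with a placeholder task, not the chapter's actual instrumentation.

```cpp
#include <chrono>
#include <numeric>
#include <vector>

// Runs `task` `runs` times and returns the mean wall-clock duration in
// microseconds, mirroring the "average of 100 measurements" scheme.
template <typename Task>
double averageMicroseconds(Task task, int runs = 100) {
    std::vector<double> samples;
    samples.reserve(runs);
    for (int i = 0; i < runs; ++i) {
        auto start = std::chrono::steady_clock::now();
        task();  // the vision-algorithm stage being timed
        auto stop = std::chrono::steady_clock::now();
        samples.push_back(
            std::chrono::duration<double, std::micro>(stop - start).count());
    }
    return std::accumulate(samples.begin(), samples.end(), 0.0) / runs;
}
```

steady_clock is used rather than system_clock because it is monotonic and therefore immune to wall-clock adjustments during the measurement.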

A high frame rate and quick server response are crucial for the stability of all control algorithms that use vision information in the feedback loop, so there is always room for further research here; however, the delays introduced by the current version of the vision application seem acceptable for a broad range of control algorithms.
