[Studies in Computational Intelligence 481] Artur Babiarz, Robert Bieda, Karol Jędrasiak, Aleksander Nawrat (auth.), Aleksander Nawrat, Zygmunt Kuś (eds.), Vision Based Systems for UAV Applications (Springer, 2013)

Vision System for Group of Mobile Robots

2 Implementation of Vision Algorithm

The algorithm operates in two modes: Learning Mode and Running Mode. This is a typical approach in the domain of computer vision, although there are efforts to eliminate the learning phase and build algorithms capable of autonomous online learning [1].

Before the vision algorithm can be used, it requires a short learning phase. The learning is supervised, but the tasks the user has to perform are simple: setting proper thresholds for the shadow-removal and background-subtraction algorithms, and placing the robots and the ball at specific places on the playing field so that the unique color labels denoting every object of interest can be discovered. As the algorithm is robust, the thresholds tolerate a wide range of values and the defaults are usually sufficient. A block diagram of the learning phase of the algorithm is presented in Figure 2.

Fig. 2. Schematic diagram of the steps of the learning phase of the vision algorithm
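As a rough illustration, the learning phase amounts to two steps: building a background model of the empty field, and sampling the color label of each object placed at a known position. The sketch below is a minimal interpretation of those steps; the frame averaging and the sampling-window size are illustrative assumptions, not the chapter's exact procedure:

```python
import numpy as np

def learn_background(frames):
    """Build a background model by averaging several frames of the
    empty playing field (illustrative choice of model)."""
    return np.mean(np.stack(frames).astype(np.float32), axis=0)

def sample_color_label(frame, x, y, radius=3):
    """Mean RGB color in a small window around a known object
    position; this mean serves as the object's color label."""
    patch = frame[y - radius:y + radius + 1, x - radius:x + radius + 1]
    return patch.reshape(-1, 3).mean(axis=0)
```

In this sketch the user's only duties match the text: place each robot (and the ball) at a known spot, then let the system record its label.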

Once learning is finished, the algorithm is immediately ready for operation. A block diagram of the running mode of the algorithm is presented in Figure 3.

The algorithm starts by acquiring an RGB image at a resolution of 320x240 pixels. This size stems from limitations of the frame-grabber hardware used during the experiments: 640x480 is available only in interlaced mode, and 640x240 has non-square pixels (twice as tall as they are wide). The next operation performed on the image is shadow removal according to the current background model and the selected threshold. Details of this algorithm are described below in a separate section.
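The chapter's own shadow-removal algorithm is described in a later section; as a placeholder intuition only, a common heuristic classifies a pixel as shadow when its brightness drops relative to the background model while its chromaticity stays nearly unchanged. The sketch below implements that generic heuristic; the brightness threshold `tau` and the chromaticity tolerance are illustrative assumptions, not the chapter's values:

```python
import numpy as np

def remove_shadows(frame, background, tau=0.5):
    """Mark pixels as shadow where brightness drops relative to the
    background while normalized RGB (chromaticity) stays close to it.
    This is a generic heuristic, not necessarily the chapter's method."""
    f = frame.astype(np.float32) + 1.0       # +1 avoids division by zero
    b = background.astype(np.float32) + 1.0
    ratio = f.sum(axis=2) / b.sum(axis=2)    # per-pixel brightness ratio
    chroma_f = f / f.sum(axis=2, keepdims=True)
    chroma_b = b / b.sum(axis=2, keepdims=True)
    chroma_dist = np.abs(chroma_f - chroma_b).sum(axis=2)
    shadow = (ratio > tau) & (ratio < 1.0) & (chroma_dist < 0.05)
    cleaned = frame.copy()
    cleaned[shadow] = background[shadow]     # paint shadow pixels as background
    return cleaned, shadow
```

A darkened patch of the field passes the test (same chromaticity, lower brightness) and is replaced by the background, while a colored robot fails the chromaticity check and survives into the foreground stage.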

The improved, shadow-free input image is then used to extract foreground objects by simply thresholding its difference to the current background model. The resulting binary image is subjected to the morphological dilation operator, which significantly improves the continuity of foreground objects; this continuity usually degrades when an object is placed over a white line drawn on the black playing field. The dilation operator uses a 4-neighbourhood and a mask of size 3x3.
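These two steps can be sketched directly: threshold the summed per-channel difference to the background, then dilate with a cross-shaped (4-neighbourhood) 3x3 element. The difference threshold value below is an illustrative assumption:

```python
import numpy as np

def foreground_mask(frame, background, threshold=30):
    """Binary foreground mask: a pixel is foreground when the absolute
    difference to the background model, summed over RGB, exceeds the
    threshold."""
    diff = np.abs(frame.astype(np.int16) - background.astype(np.int16))
    return diff.sum(axis=2) > threshold

def dilate_cross(mask):
    """3x3 dilation with a 4-neighbourhood (cross-shaped) structuring
    element: a pixel becomes foreground if it or any of its four
    edge-neighbours is foreground."""
    out = mask.copy()
    out[1:, :] |= mask[:-1, :]   # neighbour above
    out[:-1, :] |= mask[1:, :]   # neighbour below
    out[:, 1:] |= mask[:, :-1]   # neighbour to the left
    out[:, :-1] |= mask[:, 1:]   # neighbour to the right
    return out
```

The cross-shaped element is exactly the 3x3 mask restricted to the 4-neighbourhood, so one application grows every foreground pixel by one pixel along each axis, bridging the thin gaps that white field lines cut into object blobs.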
