
5. Algorithms, Master Thesis, Björn Ostermann, page 87 of 126

5.5.1 Calculating the distance between the current pose and the goal pose

In order to implement a simple path control, steering the robot's movement from a start point through several sub-goal points to an end point, the program needs to be able to compare the robot's current position with that of its next goal. This allows the program to determine whether a goal has been sufficiently reached and the next goal should be activated (see chapter 4.4.1).

This comparison is achieved by building a vector between the two positions and calculating its length. Equation 8 shows the formula used to compute the distance from the robot's current position to the given goal position.

$$\left|\vec{v}\right| = \sqrt{\left(x_{goal} - x_{current}\right)^2 + \left(y_{goal} - y_{current}\right)^2 + \left(z_{goal} - z_{current}\right)^2}$$

Equation 8: Length of a vector; the three-dimensional distance of two points [61]

If the distance between the current position and the goal position is below a certain threshold, the next goal position is activated in the program.
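The distance check and sub-goal activation described above can be sketched as follows. This is a minimal illustration, not the thesis's actual implementation; the function names and the threshold value are assumptions chosen for the example.

```python
import math

def distance(current, goal):
    """Length of the vector between two 3D points (Equation 8)."""
    return math.sqrt(sum((g - c) ** 2 for c, g in zip(current, goal)))

def goal_reached(current, goal, threshold=5.0):
    """True if the robot is close enough to activate the next sub-goal.
    The threshold value is illustrative, not taken from the thesis."""
    return distance(current, goal) < threshold

# Example: robot at the origin, sub-goal at (3, 4, 0)
print(distance((0.0, 0.0, 0.0), (3.0, 4.0, 0.0)))        # 5.0
print(goal_reached((0.0, 0.0, 0.0), (3.0, 4.0, 0.0), 6.0))  # True
```

In a path-control loop, `goal_reached` would be evaluated each cycle, and on success the next sub-goal point would be taken from the list of path points.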

5.5.2 Transformation of coordinate systems

Two coordinate systems exist in the given workplace: the robot's coordinate system and the camera's coordinate system. One common coordinate system is needed for the path planning algorithms, so both systems have to be merged. The easiest way to accomplish this is the transformation of one system into the other. For this thesis the robot's coordinate system has been chosen as the base system, because the robot has to be controlled, whereas the objects in the camera's view only have to be observed. Using the robot's coordinate system results in fewer transformations of coordinates and thus in less computational effort.

To transform coordinates from the camera's coordinate system to the robot's coordinate system, the shift in the origin and the rotation of the systems towards each other have to be determined.

The transformation of the individual coordinates from one system into the other takes place by multiplying the position vectors with a homogeneous transformation matrix (see Equation 9).

$$\begin{pmatrix} x \\ y \\ z \\ 1 \end{pmatrix}_{goal} = M_{4 \times 4} \cdot \begin{pmatrix} x \\ y \\ z \\ 1 \end{pmatrix}_{current}, \qquad M_{4 \times 4} = \begin{pmatrix} R_{3 \times 3} & T_{3 \times 1} \\ 0 \; 0 \; 0 & 1 \end{pmatrix} = \begin{pmatrix} a & b & c & k \\ d & e & f & l \\ g & h & i & m \\ 0 & 0 & 0 & 1 \end{pmatrix}$$

Equation 9: Transformation Matrix M [61]

This transformation matrix consists of a rotation matrix $R_{3 \times 3}$, giving the rotation from one coordinate system into the other, and a translation vector $T_{3 \times 1}$, giving the shift of the origin of the coordinate systems.
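The structure of Equation 9 can be sketched in code as follows. The rotation and translation values below are made-up illustrative numbers, not the calibration values determined in the thesis; the example only shows how a homogeneous transformation matrix is assembled from R and T and applied to a point.

```python
import math

def make_transform(rotation, translation):
    """Assemble a 4x4 homogeneous transformation matrix (Equation 9)
    from a 3x3 rotation matrix R and a 3-element translation vector T."""
    m = [row[:] + [t] for row, t in zip(rotation, translation)]
    m.append([0.0, 0.0, 0.0, 1.0])  # bottom row: 0 0 0 1
    return m

def transform_point(m, point):
    """Map a point from one coordinate system into the other by
    multiplying its homogeneous position vector with the matrix."""
    x, y, z = point
    v = [x, y, z, 1.0]
    return tuple(sum(m[r][c] * v[c] for c in range(4)) for r in range(3))

# Illustrative (assumed) values: camera frame rotated 90 degrees about
# the z axis and shifted by (100, 0, 50) relative to the robot frame.
angle = math.pi / 2
R = [[math.cos(angle), -math.sin(angle), 0.0],
     [math.sin(angle),  math.cos(angle), 0.0],
     [0.0,              0.0,             1.0]]
T = [100.0, 0.0, 50.0]

M = make_transform(R, T)
print(transform_point(M, (10.0, 0.0, 0.0)))  # approximately (100, 10, 50)
```

Because the bottom row of M is fixed at (0 0 0 1), the multiplication applies the rotation first and then adds the translation, which is exactly the composition of R and T described above.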
