DOOMBA
ME 4451: Robotics
December 14, 2010
Dr. Harvey Lipkin and Dr. Nader Sadegh
Chad Norton, Ryan Lober, Zachary Van Schoyck, Ben Coburn
Initial Plan
For the Doomba project the primary objective was to create a mobile robot capable of tracking a human in a dynamic environment. The only variable input by the user would be the specific distance to maintain from the target. The robot, in this case the iRobot Create with a webcam, would then calculate the angle and distance to the target from image processing, determine the movements necessary to reorient itself, and execute those movements to arrive at the specified distance with a zero-degree angle difference (i.e., directly facing the target). Figure 1 below is a graphical representation of these operations.
Figure 1: Physical representation of tracking and subsequent reorientation
Accomplishing this overall task required three operations: first, the system would have to be manually calibrated to the environment; second, the camera would have to locate the subject being tracked; and third, the locations obtained from the tracking would have to be transformed into robot movements.
Initially this sequence of operations was to be completed using live human tracking algorithms and real-time video. Real-time tracking was accomplished using the Simulink block set; unfortunately, Simulink would not communicate with the Create, and we were forced to switch from video tracking to image grabbing in Matlab. Switching to Matlab as the processing platform reduced the speed of image analysis, but the overall theme of the project was maintained.
To accomplish human tracking in Matlab, the target to be tracked had to wear two green circles spaced 9” center to center. Using the calibration functions, we would manually choose the tracked colors from an image grab. After calibration, Matlab would grab images from the video and apply a series of image filters: select the chosen target color, calculate the properties of the image, filter out noise, determine which objects were circular, and output the coordinates of the two desired circle centroids. Finally, a function would take these centroid locations and calculate the angle and distance to the target based on camera-specific parameters. These measurements would be compared against the desired follow distance and the current orientation, and the appropriate robot motions would be executed using the Create Toolbox functions. This process would repeat after every move, allowing the robot to follow a target.
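The geometry of the angle-and-distance calculation follows the pinhole camera model. The sketch below is an illustrative Python version, not the Matlab code from Appendix 1; the focal length and image width are hypothetical stand-ins for the camera-specific parameters.

```python
import math

FOCAL_PX = 600.0      # hypothetical focal length, in pixels
IMAGE_WIDTH = 640     # hypothetical image width, in pixels
DOT_SPACING_IN = 9.0  # known center-to-center spacing of the green dots

def angle_and_distance(c1, c2):
    """From the two dot centroids (x, y) in pixels, return the bearing to
    the target (degrees, 0 = straight ahead) and its range (inches).

    Range uses the pinhole model: a true separation s maps to a pixel
    separation p at range d with p / f = s / d, so d = f * s / p.
    """
    # Bearing: horizontal offset of the pair's midpoint from image center.
    mid_x = (c1[0] + c2[0]) / 2.0
    angle = math.degrees(math.atan2(mid_x - IMAGE_WIDTH / 2.0, FOCAL_PX))
    # Range: pixel separation of the vertically stacked centroids.
    pixel_sep = math.hypot(c1[0] - c2[0], c1[1] - c2[1])
    distance = FOCAL_PX * DOT_SPACING_IN / pixel_sep
    return angle, distance

# Dots centered in the image, 60 px apart vertically:
print(angle_and_distance((320, 200), (320, 260)))  # (0.0, 90.0)
```

With these placeholder parameters, a 60-pixel separation puts the target 90 inches away, directly ahead.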
Robot Kinematics
The bulk of the project revolved around image processing; only minimal robot kinematics made it into the final product, in the form of the movements performed by the iRobot. The only parameters given to the robot were the distance and angle changes needed to maintain the proper orientation to the target. These commands were executed using turnAngle and travelDist. Supplying the rotation angle and the distance difference between the current and desired locations to these two functions allowed us to effectively relocate the robot without extensive code or calculations. Although inelegant, it was the simplest solution to the kinematics issue, and it removed the need to focus our energy on this problem.
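Each tracking cycle therefore reduced to two numbers. A minimal sketch of that reduction (Python for illustration; the actual commands were the Matlab Create Toolbox functions turnAngle and travelDist):

```python
def plan_move(angle_deg, measured_dist_m, desired_dist_m):
    """Reduce one tracking cycle to the two parameters handed to the
    Create Toolbox calls: a rotation (turnAngle) and a straight-line
    displacement (travelDist)."""
    turn = angle_deg                           # rotate to face the target
    travel = measured_dist_m - desired_dist_m  # positive = approach, negative = back up
    return turn, travel

# e.g. target 15 degrees off-axis at 2.0 m, following at 0.5 m:
print(plan_move(15.0, 2.0, 0.5))  # (15.0, 1.5)
```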
After achieving reasonable image processing results, we invested some time in writing the code necessary to solve the velocity kinematics for the robot, in order to use smoother motions to arrive at the final destination. To do this we calculated the change in x, y, and angle θ from one frame to the next. Dividing these values by the processing time between frames gave us their rates of change. Using these values, an inverse velocity kinematics analysis could be performed to calculate the turning speed of each wheel. For a differential drive such as the Create, the standard relations are

φ̇_r = (v + ωb/2) / r    φ̇_l = (v − ωb/2) / r

where v = √(ẋ² + ẏ²) is the robot's forward speed, ω = θ̇ is its angular rate, b is the distance between the wheels, and r is the wheel radius.
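The wheel-speed calculation can be illustrated numerically as follows. This is a Python sketch, not the project's Matlab code; the wheelbase value is a placeholder, not the Create's actual dimension, and it returns linear wheel velocities of the kind handed to a drive-wheels command.

```python
import math

WHEEL_BASE = 0.26  # placeholder distance between the wheels (m)

def wheel_speeds(dx, dy, dtheta, dt):
    """Inverse velocity kinematics for a differential drive: from the
    change in pose (dx, dy in m, dtheta in rad) over dt seconds, recover
    the forward and angular rates and split them into left/right wheel
    velocities (m/s)."""
    v = math.hypot(dx, dy) / dt   # forward speed
    w = dtheta / dt               # angular rate
    v_right = v + w * WHEEL_BASE / 2.0
    v_left = v - w * WHEEL_BASE / 2.0
    return v_left, v_right

# Pure translation: 0.5 m forward in 1 s, no rotation:
print(wheel_speeds(0.5, 0.0, 0.0, 1.0))  # (0.5, 0.5)
```

A pure rotation (dx = dy = 0, dtheta > 0) gives equal and opposite wheel velocities, spinning the robot in place.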
Using the command SetDriveWheelsCreate, we could send the calculated wheel velocities to the robot and have it move in a smooth curved path toward its target location. The fundamental drawback was that the calculation of these velocities depended on the CPU time needed to process each image; this value did not match what we expected, causing our code to miscalculate the necessary velocities. Another issue was that SetDriveWheelsCreate had to be stopped by resetting the wheel speeds to zero. Because of the unnecessary problems involved with smooth trajectory motion, we decided to use the proven turn-and-go mechanism described above, for simplicity.
Challenges/Solutions
The original plan for this project was to make the iRobot follow arbitrary “new” objects with the webcam. The idea was that we could use or modify an existing Simulink package to do this, allowing relatively simple programming and letting us move on to the rest of the design. This turned out to be unworkable due to the structure of the original Simulink code.
The Simulink package tracks objects by taking a picture of the background with no objects of interest present, and then assuming any differences from that background are interesting objects to be tracked. This works quite well under the right conditions, but it requires that the camera be stationary. If the camera moves at all, every point on the image changes, making the program try to track the entire image. This obviously does not work, but it seemed like a good way to at least acquire the image to be tracked, if it could somehow be modified to account for a moving robot.
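The background-subtraction idea, and its failure under camera motion, can be illustrated with a short sketch (Python/NumPy here; this is not the Simulink package's actual code):

```python
import numpy as np

def moving_object_mask(background, frame, threshold=30):
    """Flag pixels that differ from the stored background image by more
    than `threshold` gray levels. With a stationary camera the mask covers
    only new objects; if the camera moves, nearly every pixel differs and
    the entire image is flagged."""
    diff = np.abs(frame.astype(np.int16) - background.astype(np.int16))
    return diff > threshold

# Stationary camera: one bright object appears on a flat background.
bg = np.zeros((4, 4), dtype=np.uint8)
fr = bg.copy()
fr[1:3, 1:3] = 200
print(moving_object_mask(bg, fr).sum())  # 4 pixels flagged
```

If the camera shifts so that every pixel's intensity changes, the mask becomes all-True and the tracker locks onto the whole image, which is exactly the failure mode that ruled this approach out.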
There are two basic ways to handle a moving camera: attempt to update the background picture as the robot moves, allowing the continued use of the original algorithm, or switch to a completely different tracking approach. The first appears to require a full 3D model of the background environment, because the change in position of a given image feature depends on both the relative movement and the relative distance of the camera and the object in question.
On the other hand, tracking by simple color selection and threshold operations merely requires giving up on tracking arbitrary objects. It also helps with another problem: determining the distance to an object with just one camera.
In order to follow an object with the iRobot, the control system needs two basic parameters: the angle and the distance to the target. The angle comes essentially free with any camera tracking program, but the distance requires more complicated processing. The solution was to place a pair of colored dots on the target and determine its distance from the angle subtended between them. This basic approach also reduces general target tracking to a relatively easy problem.
The initial difficulty with this approach was our choice of target dots. With a small camera very low to the ground, in an area lit primarily from above such as the robotics lab in the MRDC, the image saturates and appears washed out; all relatively bright colors seem to blend together. This caused problems with our original choice of bright orange dots on a white background as a target to track: the dots appeared identical in color to the background of the shirt, making tracking impossible. The solution had two basic parts: the orange dots were replaced with a matte green color, reducing issues with washout, and the camera was elevated significantly above the floor, improving the image and reducing saturation, as shown in Figure 3 and Figure 4, respectively.
Figure 3: Object Tracking Dots
Figure 4: Elevated Camera Stand to improve image quality
This still left the problem of selecting threshold values that simultaneously detected the target circles and did not pick up excessive noise. This turned out to be difficult, partially because the appropriate values vary with lighting and with details such as the camera's distance from the target, and partially because there is significant overlap between the value necessary to find the target and a value low enough to find nothing else. The solution was to select what would normally be an excessively large threshold value, accepting somewhat more noise, and then to select target blobs by their shape rather than simply taking the largest two.
Implementing this required several filters on the blob tracking. First, we kept only those blobs whose ratio of area to perimeter squared was close to that of a circle and whose area was non-negligible. From these, we chose the two with the smallest horizontal offset, matching the vertically aligned target dots. Finally, to reduce the possibility of the robot moving after picking an incorrect pair of centroids, a check was inserted to stop motion and repeat the image analysis loop if the centroids were too far apart horizontally or not far enough apart vertically.
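These filters can be sketched as follows (illustrative Python; the threshold values are hypothetical stand-ins for those tuned in the Matlab code). A perfect circle has an area-to-perimeter-squared ratio of 1/(4π) ≈ 0.0796, which is the reference value used:

```python
import math

def select_target_pair(blobs, min_area=50.0, circ_tol=0.02,
                       max_dx=40.0, min_dy=20.0):
    """Pick the two blobs most likely to be the stacked target dots.

    `blobs` is a list of dicts with 'area', 'perimeter', and 'centroid'
    (x, y), in pixels. Returns the two centroids, or None if no safe pair
    exists (in which case the robot stops and grabs another frame).
    """
    circle_ratio = 1.0 / (4.0 * math.pi)  # area/perimeter^2 of a circle
    # Filter 1: keep only roughly circular blobs of non-negligible area.
    round_blobs = [
        b for b in blobs
        if b["area"] >= min_area
        and abs(b["area"] / b["perimeter"] ** 2 - circle_ratio) <= circ_tol
    ]
    if len(round_blobs) < 2:
        return None
    # Filter 2: of all pairs, take the one with the smallest horizontal
    # offset, matching the vertically aligned target dots.
    pairs = [(a, b) for i, a in enumerate(round_blobs)
             for b in round_blobs[i + 1:]]
    a, b = min(pairs,
               key=lambda p: abs(p[0]["centroid"][0] - p[1]["centroid"][0]))
    # Sanity check: reject pairs too far apart horizontally or not far
    # enough apart vertically, so the analysis loop runs again.
    dx = abs(a["centroid"][0] - b["centroid"][0])
    dy = abs(a["centroid"][1] - b["centroid"][1])
    if dx > max_dx or dy < min_dy:
        return None
    return a["centroid"], b["centroid"]
```

Returning None instead of a bad pair is what lets the robot stop and re-run the image analysis rather than drive toward a false detection.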
The parts of the project not directly related to image tracking were relatively simple. The basic commands for moving the iRobot were already written and worked reasonably well. Developing the trailer to carry the laptop and the stand to elevate the camera were both fairly easy. The movement code did include a limit to prevent the target from being moved out of frame and to limit the potential trouble caused by picking an incorrect pair of centroids.
Achievements/Results
As described previously, the overall goal of this project was to have the iRobot track an object, determine its relative location, and then navigate towards that object while maintaining a desired distance. In essence, the final product met each of these requirements; however, the degree to which the goals were met varies.
Using the code developed in Appendix 1, the iRobot can reliably distinguish the appropriate target from the background. The success of this routine is highly dependent upon the ambient lighting conditions, the light saturation levels, and the resolution of the camera used. While the final product is fairly robust, it is still susceptible to erroneous identification. An advantage of the developed code is that the robot continues to look for the object even when it is out of frame; if the object returns to the frame, the robot immediately resumes tracking it.
The code in Appendix 1 was also used to determine the object's relative position. The overall tolerance of the trailing distance is not of significant importance; even so, the system is accurate to within centimeters of the desired input distance. As seen in Figure #, the trailer used to tow the laptop adds an excessive amount of mass to the rear end of the iRobot. This added weight can result in the vehicle either over-rotating, because of the added torque, or under-rotating, because of the excessive weight the iRobot is required to turn. This not only hinders the linear positioning of the robot but can also leave it not facing the target object. While the deviations were on average very small, they were present.
Learning Experiences
Based upon the achievements reached during the project, and its shortcomings, our group concluded that several adjustments could lead to improvements in the design. For example, the robot's wheel motors were not sufficient to turn the robot-and-trailer system accurately. A possible solution would be to replace the trailer with a lighter version or to eliminate it altogether. Removing the trailer would require the use of a wireless camera; given the shortcomings of the camera used, this change would be welcome. The current camera was found to provide no distinct advantage in capture rate or pixel resolution, and its field of view limited the capability of the tracking system. Replacing the camera could therefore resolve several issues.
The susceptibility of the image processing to slight variations in color and lighting was a non-trivial issue. In the final design, the image processing attempted to track two large circles; if the lighting was too saturated or the background was similar in color, the tracking could be fooled. If the tracking system were instead configured to detect a light-emitting diode (LED), it might track more effectively. Without a more effective means of tracking, the system will remain unacceptably unreliable.
Class Suggestion
The final project could be made a more significant portion of the final grade in the class. For the amount of time spent designing, building, and testing the project, the grade could reflect that time with a higher percentage of the overall grade. A student may spend on average 15 hours studying for a midterm and twice that on the project, yet the midterm is worth nearly twice the percentage that the entire project is worth.
Appendix
Code to initialize the procedure: start.m
Code to monitor camera vision: test.m
Code to send commands to Roomba: Drun.m
The following functions were used in support of the three codes listed above:
vidsearch.m
ColorDetectHSVimage.m
selectpixelsandsetHSV.m
turnAngle.m
travelDist.m