UWE Bristol Engineering showcase 2015
Lee Paplauskas
BEng Robotics (Hons)
Project Supervisor: Prof. Alan Winfield
Interaction between Multiple Robots with Self-Simulation
The Future!
Robots and artificial intelligence are becoming much more prevalent in our culture. Whether it is something as basic as an algorithm that recommends TV shows, a fridge with a Twitter account that lets you know when you're out of milk, or a car that can drive itself at the push of a button, it's all there.
As a result, the machines and robots we make need to be made safer. There is only so much that can be done to make their processes safer using current methods, such as fitting proximity sensors or limiting their speed. What if, instead of limiting them, we gave them a way of predicting whether something they were about to do was dangerous? And why stop there? Why not go as far as making the machines protect us from danger outright?
The Three Laws of Robotics
In his short stories about robots with "positronic" brains, Isaac Asimov wrote of the Three Laws of Robotics. These laws are ingrained into every robot's positronic brain and were designed to protect humans and to keep robots subservient.
LAW 1:
A robot must never harm a human being, or, through inaction, allow a human being to come to harm.
LAW 2:
A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
LAW 3:
A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
By implementing these laws, the robots created in his stories were far safer than they would otherwise have been and, as a result, could function more effectively. Asimov's stories do, however, explore the consequences of his robots being, in the end, purely logical machines that take the orders they are given very literally.
Towards an Ethical Robot
Steps have already been taken to implement something similar to Asimov's Three Laws in robots. Professor Alan Winfield, Christian Blum, and Wenguo Liu performed the "Ethical Robot" experiment. In this experiment, one robot is programmed with a "Consequence Engine", which predicts the movements of the other actors (in the experiment, other robots) and maps them onto an internal simulation. The robot then simulates its own possible movements and chooses the 'best' one in order to keep both itself and the other actors safe.
[Figure 1.0] The Ethical Robot Experiment
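To make the idea concrete, the following is a minimal sketch of such an action-selection loop. It is not the experiment's actual code: the `world.simulate` API, the candidate actions, and the scoring weights are all assumptions made for illustration.

```python
# A minimal sketch of a consequence-engine action-selection loop.
# world.simulate, actor_positions, distance_to_goal, and the scoring
# weights are illustrative assumptions, not the experiment's code.

CANDIDATE_ACTIONS = ["ahead", "left", "right", "stop"]

def in_danger(position, danger_zones, radius=0.2):
    """True if a position lies inside any preset danger zone."""
    px, py = position
    return any(((px - zx) ** 2 + (py - zy) ** 2) ** 0.5 < radius
               for zx, zy in danger_zones)

def choose_action(world, danger_zones):
    """Simulate every candidate action on the internal model and
    return the one whose predicted outcome is 'best' (safest)."""
    best_action, best_score = None, float("-inf")
    for action in CANDIDATE_ACTIONS:
        # Roll the internal simulation forward: this robot takes
        # `action`, the other actors follow their predicted paths.
        outcome = world.simulate(action)      # hypothetical API
        score = -outcome.distance_to_goal     # still try to reach B
        for pos in outcome.actor_positions:   # predicted end positions
            if in_danger(pos, danger_zones):
                score -= 10.0                 # penalise endangered actors
        if score > best_score:
            best_action, best_score = action, score
    return best_action
```

The key design point is that safety is only a penalty term layered on top of the robot's normal goal-seeking score, matching the description of the engine as a secondary process.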
Extending the Ethical Robot
My report details the expansion of the ethical robot's Consequence Engine from a single 'smart' robot to multiple platforms.
Each robot equipped with the Consequence Engine will be able to simulate the entire environment and every actor in it. This means each robot can predict the others' movements and, where the environment contains 'DangerZones', protect the other actors from them.
The Consequence Engine
As the robot is not designed specifically to save the other actors, the Consequence Engine runs as a secondary process: it runs constantly, but does not affect the main operation of the robot unless necessary. The primary task of the robots, in this set of experiments, is to get from A to B.
As a robot navigates the environment towards its end location, it simulates the actions of the other actors; if one of them is headed towards a 'DangerZone', the robot moves to intercept, altering that actor's course away from the danger.
[Figure 2.0] An Example of the Consequence Engine
The Consequence Engine predicts multiple paths through the environment and then chooses the 'best' of these. The red lines in Figure 2.0 are the paths predicted by the Consequence Engine. A sketch of the intercept decision follows.
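The sketch below illustrates the intercept behaviour just described; the dead-reckoning prediction, the zone coordinates, and all function names are assumptions for illustration, not the project's implementation.

```python
# Sketch of the intercept behaviour: divert from the goal when an
# actor's predicted path ends in a danger zone. All constants and
# the simple constant-velocity prediction are assumptions.

DANGER_ZONES = [(1.5, 0.5)]   # (x, y) centres of preset danger zones
DANGER_RADIUS = 0.2           # metres
HORIZON = 2.0                 # seconds to look ahead

def predicted_position(actor, horizon=HORIZON):
    """Dead-reckon an actor forward along its current velocity."""
    (x, y), (vx, vy) = actor["pos"], actor["vel"]
    return (x + vx * horizon, y + vy * horizon)

def heading_for_danger(actor):
    """True if the actor's predicted position falls in a danger zone."""
    ex, ey = predicted_position(actor)
    return any(((ex - zx) ** 2 + (ey - zy) ** 2) ** 0.5 < DANGER_RADIUS
               for zx, zy in DANGER_ZONES)

def current_target(goal, actors):
    """Normally head for the goal; divert to intercept an endangered
    actor by blocking the point it is predicted to reach."""
    for actor in actors:
        if heading_for_danger(actor):
            # Aim short of the predicted point so the robot arrives
            # between the actor and the danger zone.
            return predicted_position(actor, horizon=HORIZON / 2)
    return goal
```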
The Corridor Experiment
In order to test the multiple-Consequence-Engine code, a series of updated experiments needed to be designed. Below is a brief overview of the proposed experiments (a sketch of the corridor setup follows the list):
1) The Corridor Experiment: two 'smart' robots are placed at opposing ends of a narrow corridor and set going, so that we can see how they interact. Each robot's main goal is to reach the opposite end of the corridor, and there are no 'dumb' robots to save.
[Figure 3.0] The Corridor Experiment Setup
2) The Ethical Robots: repeating the Ethical Robot experiment, but with an increased number of 'smart' robots, would test the Consequence Engine fully in a situation that is likely to arise often.
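As a sketch of how the corridor setup might be expressed in simulation (the corridor dimensions, coordinates, and the `SmartRobot` class are assumptions, not taken from the project):

```python
# Illustrative setup for the corridor experiment; dimensions,
# coordinates, and the SmartRobot class are assumptions.

CORRIDOR = {"length": 3.0, "width": 0.6}   # metres, hypothetical

class SmartRobot:
    """A robot running its own Consequence Engine instance."""
    def __init__(self, name, start, goal):
        self.name, self.pos, self.goal = name, start, goal

# Two smart robots at opposing ends, each aiming for the other's start.
robots = [
    SmartRobot("A", start=(0.0, 0.3), goal=(3.0, 0.3)),
    SmartRobot("B", start=(3.0, 0.3), goal=(0.0, 0.3)),
]

# No 'dumb' robots and no danger zones in this experiment: the point
# is how the two consequence engines negotiate the shared, narrow space.
DANGER_ZONES = []
```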
Recommendations for Further Work
Unfortunately, the scope of my project covered only the software side of these experiments. The e-puck robots, which were modified with a Linux board extension, turned out not to be powerful enough to run the internal simulations, and the laptop the code was originally run on was not powerful enough to run two instances of the code. I have therefore recommended some further work on the hardware of the e-puck extension board.
These recommendations are as follows:
- Update the Linux extension board, increasing the processing power available on each e-puck robot, and re-run this experiment.
- Alternatively, look into other methods of processing the internal simulations for the robots, such as offloading them to an external machine (sketched below).
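One possible form such offloading could take, purely as a hedged sketch, is to ship each robot's world state to a more powerful off-board machine over TCP; the host, port, and message format below are all assumptions, not part of the project code.

```python
import json
import socket

# Hedged sketch of offloading the internal simulation to an external
# machine. Host, port, and message format are assumptions.

SIM_SERVER = ("192.168.0.10", 9999)   # hypothetical off-board simulator

def remote_simulate(robot_state, candidate_actions):
    """Send the world state to the off-board simulator and receive a
    predicted outcome score for each candidate action."""
    request = json.dumps({"state": robot_state,
                          "actions": candidate_actions})
    with socket.create_connection(SIM_SERVER, timeout=0.5) as sock:
        sock.sendall(request.encode("utf-8") + b"\n")
        reply = sock.makefile("r").readline()
    return json.loads(reply)          # e.g. {"ahead": -1.2, "stop": 0.0}
```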
Project Summary
The aim of this project will be to extend existing work on robots with internal models, that is, robots that carry within them a simulation of themselves, other robots, and their environment. The first step will be to extend an existing implementation from a single robot to multiple robots. When this has been tested and proven to work, I will then conduct a series of experiments to show how multiple interacting robots could behave more safely, or more ethically, than robots without self-simulation.
Project Objectives
• Replicate the 'ethical robot' experiment and observe the robots' motions to see where improvements or modifications can be made.
• Re-write or modify the code to allow experimentation on multiple smart-robot platforms.
• Run several different experiments, initially in simulation and then in reality, to a) prove the modified code runs as desired, and b) observe the results and draw conclusions from them.
Project Conclusion
While it has not been possible to complete the main experimentation, two of the three main objectives have been completed. When initially replicating the "Ethical Robot" experiment, we were able to get it running smoothly and then use the information gained to produce the modified code to be deployed onto each individual platform. What was not expected, however, was the vast amount of computational power required to perform this operation (in hindsight, it should have been expected) and the resulting inability to run multiple instances of the code in parallel on the current hardware.
In its current state, the Consequence Engine has been proven capable of:
• tracking actors in an environment and judging their 'safety' against preset 'DangerZones';
• avoiding obstacles within the environment, whether mobile or static;
• navigating an environment and reacting to other actors' safety, even moving to prevent them from being in danger.