A Programming Framework for Multi-Agent Coordination of Robotic Ecologies

M. Dragone¹, S. Abdel-Naby¹, D. Swords¹, and M. Broxvall²

¹ University College Dublin, Dublin, Ireland, mauro.dragone@ucd.ie
² Örebro University, Fakultetsgatan 1, SE-70182 Örebro, Sweden

Abstract. Building smart environments with robotic ecologies made up of distributed sensors, actuators and mobile robot devices extends the type of applications that can be considered, and reduces the complexity and cost of such solutions. While the potential of such an approach makes robotic ecologies increasingly popular, many fundamental research questions remain open. One such question is how to make a robotic ecology self-adaptive, so that it adapts to changing conditions and evolving requirements, and consequently reduces the amount of preparation and pre-programming required for its deployment in real-world applications. In this paper we present a framework for integrating an agent programming system with the traditional robotic and middleware approach to the development of robotic ecologies. We illustrate how these approaches can complement each other and how they provide an avenue for pursuing adaptive robotic ecologies.

Keywords: robotic ecologies, multiagent systems, agent and component-based software engineering

1 Introduction

This paper describes the integration between an agent programming system and a middleware supporting the development of Robotic Ecologies - networks of heterogeneous robotic devices pervasively embedded in everyday environments. Robotic ecologies are an emerging paradigm which crosses the borders between the fields of robotics, sensor networks, and ambient intelligence (AmI).
Central to the robotic ecology concept is that complex tasks are not performed by a single, very capable robot (e.g., a humanoid robot butler); instead, they are performed through the collaboration and cooperation of many networked robotic devices (including mobile robots, static sensors or actuators, and automated home appliances) performing several steps in a coordinated and goal-oriented fashion.

One of the key strengths of such an approach is the possibility of using alternative means to accomplish application goals when multiple courses of action are available. For instance, a robot seeking to reach the user in another room may decide to localize itself with its on-board sensors, or to avail itself of the more accurate location information from a ceiling-mounted localization system.
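The choice between such alternative means can be pictured as a simple run-time selection among candidate services. The following Java sketch is purely illustrative - the class and service names and the accuracy-based selection criterion are our own assumptions, not part of any framework discussed in this paper:

```java
import java.util.Comparator;
import java.util.List;
import java.util.Optional;

/**
 * Illustrative sketch only: picking, at run time, one of several alternative
 * ways of obtaining the robot's position. All names are hypothetical.
 */
public class LocalizationChooser {

    /** A candidate localization source and its expected error. */
    public record LocalizationService(String name, boolean available, double accuracyMetres) {}

    /**
     * Choose the available service with the smallest expected error,
     * mirroring the robot's choice between on-board sensors and a
     * ceiling-mounted localization system.
     */
    public static Optional<LocalizationService> choose(List<LocalizationService> candidates) {
        return candidates.stream()
                .filter(LocalizationService::available)
                .min(Comparator.comparingDouble(LocalizationService::accuracyMetres));
    }

    public static void main(String[] args) {
        List<LocalizationService> options = List.of(
                new LocalizationService("on-board-odometry", true, 0.30),
                new LocalizationService("ceiling-tracker", true, 0.05));
        // The ceiling tracker wins because it is available and more accurate.
        System.out.println(choose(options).orElseThrow().name());
    }
}
```

A single fixed criterion is of course an oversimplification; the point of the paper is precisely that which alternative is preferable may change with the situation, which is what motivates an adaptive, goal-oriented coordination layer.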
However, while having multiple options is a potential source of robustness and adaptability, the combinatorial growth of possible execution traces makes it difficult to scale to complex ecologies. Adapting, within tractable time frames, to dynamically changing goals and environmental conditions is made more challenging when these conditions fall outside those envisioned by the system designer.

In the EU FP7 project RUBICON (Robotic UBIquitous COgnitive Network) [1][2] we tackle these challenges by seeking to develop goal-oriented robotic ecologies that exhibit a tightly coupled, self-sustaining learning interaction among all of their participants. Specifically, we investigate how all the participants in the RUBICON ecology can cooperate in using their past experience to improve their performance, by autonomously and proactively adjusting their behaviour and perception capabilities in response to a changing environment and user needs.

An important pre-requisite of such an endeavour, which is addressed in this paper, is the software infrastructure subtending the specification, integration, and distributed management of the operations of robotic ecologies. Specifically, this work builds upon Self-OSGi [3][4], a modular and lightweight agent system built over Java technology from the Open Service Gateway initiative (OSGi) [5], and extends it to operate across distributed platforms by integrating it with the PEIS middleware, previously developed as part of the Ecologies of Physically Embedded Intelligent Systems project [6].
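The coordination style that such a middleware enables can be sketched in a few lines. The Java class below is a minimal, purely illustrative mock-up of tuple-space coordination with subscription-based notification; it is NOT the real PEIS kernel API (PEIS is a C library with bindings for Java and other languages), and every name in it is our own invention:

```java
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.function.BiConsumer;

/**
 * Illustrative sketch only: a shared key/value blackboard where devices
 * subscribe to keys and are notified when tuples change. This mimics the
 * style of coordination a tuple-space middleware offers, not any real API.
 */
public class TupleSpace {
    private final Map<String, String> tuples = new ConcurrentHashMap<>();
    private final Map<String, List<BiConsumer<String, String>>> subscribers =
            new ConcurrentHashMap<>();

    /** A device registers interest in a key; it is called back on every update. */
    public void subscribe(String key, BiConsumer<String, String> callback) {
        subscribers.computeIfAbsent(key, k -> new CopyOnWriteArrayList<>()).add(callback);
    }

    /** Writing a tuple stores it and pushes it to all subscribers of that key. */
    public void setTuple(String key, String value) {
        tuples.put(key, value);
        subscribers.getOrDefault(key, List.of()).forEach(cb -> cb.accept(key, value));
    }

    public String getTuple(String key) {
        return tuples.get(key);
    }

    public static void main(String[] args) {
        TupleSpace space = new TupleSpace();
        // A mobile robot subscribes to the position published by a ceiling tracker.
        space.subscribe("robot1.position", (k, v) -> System.out.println("robot1 sees " + v));
        space.setTuple("robot1.position", "3.2,1.7");
    }
}
```

The value of this indirection is that producer and consumer never hold references to each other: a robot can start consuming the ceiling tracker's position tuples without either side being reprogrammed, which is the kind of loose coupling the integration described in this paper relies on.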
The result described in this paper is a distributed programming framework for the specification and development of robotic ecologies.

The remainder of the paper is organized in the following manner: Section 2 provides an overview of state-of-the-art techniques for the coordination of robotic ecologies, with an emphasis on those pursued within the PEIS initiative - the starting point for the control of RUBICON robotic ecologies. Section 3 presents the Self-OSGi component- and service-based agent framework, and the way it has recently been extended and integrated with PEIS. Section 4 illustrates the use of the resulting multi-agent framework with a robotic ecology experiment. Finally, Section 5 summarizes the contributions of this paper and points to some of the directions to be explored in future research.

2 PEIS

The PEIS kernel [7] and related middleware tools are a suite of software, previously developed as part of the PEIS project [6], that enables communication and collaboration between heterogeneous robotic devices.

The PEIS kernel is written in pure C (with bindings for Java and other languages) and with as few library and RAM/processing dependencies as possible, in order to fit on a wide range of devices.

PEIS includes a decentralized mechanism for collaboration between separate processes running on separate devices, which allows for automatic discovery, high-level communication and collaboration through subscription-based connections. It also offers a shared tuple space blackboard that allows for high-level collaboration and dynamic self-configuration between different devices through the
Proceedings of the Tenth Internatio