random choice (OR), conditional choice (IF) and iteration (FOR, WHILE) plan operators. In order to facilitate the programming of autonomous robots able to achieve complex goals in parallel, different mechanisms are necessary to deal with temporal and functional constraints related to a robot's tasks and its physics, and for a proper parallel use of a robot's resources.

Robotic research has developed many specialized execution languages to represent and execute plans that are generated manually by robotic software developers or automatically by planning systems [38]. Such languages provide many advanced mechanisms for synchronizing, coordinating and monitoring the execution of plans. This section discusses different plan execution control mechanisms needed by autonomous robots. A checklist of such requirements is presented, based on generalizing from an analysis of the plan execution control functionalities provided by the TDL [34], PLEXIL [37], APEX [13], SMARTTCL [33], PRS [15] and PRS-lite [23] plan execution languages.

6.1 Representation of Complex Plans

To perform complex behaviors, different plan operators are needed for synchronizing the execution of actions/plans in complex arrangements, beyond the simple sequential and parallel settings provided by existing agent programming languages. For example, in the assistant robot application scenario, when NAO wants to check whether there is enough drug in the box, it needs to go in front of the box (location L), orient its head camera toward the box (orientation O), and then take a picture to analyze whether the box is empty.
To achieve this goal efficiently, NAO should be able to perform both the Move To(L) and Orient Head(O) actions in parallel, and then take a picture only after both actions have been successfully performed. Moreover, it might be necessary for the camera to wait a few seconds after the robot has arrived at the location and stopped walking before taking the picture. As this example shows, developing cognitive robotic applications requires agent programming languages to be enriched with different mechanisms for synchronizing the execution of actions/plans in order and in time. Current execution languages provide support for the following mechanisms:

– Hierarchical task decomposition: composing a complex plan from a set of other plans (i.e. subplans) in sequential and parallel orderings at different levels of a hierarchy.
– Controllability of the execution of a plan at different levels of its hierarchy.
– Supporting conditional contingencies, loops, temporal constraints and floating contingencies (i.e. event-driven task execution) in the task tree decomposition: governing the execution of subplans (i.e. when to start, stop, suspend, resume/restart or abort a plan) by different conditions related to temporal constraints on the absolute time, constraints on the execution status of other subplans, the occurrence of certain events, constraints on a robot's beliefs, and also by direct access from other subplans (e.g. coordination using shared variables).
– Supporting both blocking and non-blocking intention dispatching for a new subgoal: the former places the newly generated plan at the front of the execution path of the intention which generated the subgoal; the latter intends the newly generated plan as a new intention.
– Control over the expansion of a subplan, such as complete expansion before execution or incremental expansion at runtime.
– Priority for the execution of intentions.
– Supporting atomic and continuous actions (actions which provide feedback) in blocking and non-blocking modes.

6.2 Monitoring and Resource Management

A cognitive robot has different goals and receives different events. In a BDI architecture, the robot generates different plans to achieve those goals and to react to those events. To provide a good level of autonomy and intelligence, a robot should clearly be able to follow its different plans in parallel. For example, when NAO is moving toward the drug box to check whether it is empty, it should at the same time remain responsive to requests from its users (e.g. Task 5). The problem is that the parallel execution of different plans can be conflicting, due to a robot's functional and resource constraints, and should therefore be coordinated based on the priorities of the different plans. For example, consider a use case in which NAO has picked up a piece of trash and is going to put it into the trash can. Suddenly, NAO hears a user asking for help. To be able to help the user, NAO should go to the Red Button and have empty hands to press it. As can be noticed, this plan has two conflicts with NAO's previous plan (i.e. walking to the trash bin and having trash in hand).
As helping the user has the highest priority, NAO should drop the trash and start walking toward the Red Button immediately.

To facilitate the use of agent programming languages for implementing the control systems of autonomous robots, these languages should be extended with different mechanisms, and corresponding programming constructs, necessary for coordinating the parallel execution of different plans. Moreover, the execution of plans should be monitored, and their failures should be handled properly. Current execution languages provide support for the following mechanisms:

– Monitoring the execution of a plan at different levels of its hierarchy.
– Monitoring different stages of the execution of a plan to guarantee its safe execution: some conditions should be checked before starting/resuming the plan, some should be checked continuously during plan execution, and some should be checked after the execution of the plan has finished.
– Monitoring different conditions such as temporal constraints on the absolute time, constraints on the execution statuses of other subplans, the occurrence of certain events, and constraints on a robot's beliefs.
– Representing and determining conflicts between different plans (e.g. explicit representation by denoting the resources they require, or by providing shared variables and locking mechanisms).
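The join-style synchronization required by the drug-box example of Sect. 6.1 — run Move To(L) and Orient Head(O) in parallel, wait for both to succeed, respect a settling delay, then take the picture — can be sketched as follows. This is a minimal illustration using Python's asyncio, not the API of any of the cited execution languages; all function names and the 0.5 s delay are hypothetical.

```python
import asyncio

# Hypothetical action primitives standing in for the robot's real API.
async def move_to(location: str) -> None:
    await asyncio.sleep(0.2)   # stand-in for locomotion time

async def orient_head(orientation: str) -> None:
    await asyncio.sleep(0.1)   # stand-in for head-motor time

async def take_picture() -> str:
    return "image-of-drug-box"  # stand-in for a camera capture

async def check_drug_box() -> str:
    # Start both preparatory actions in parallel and wait until BOTH
    # have completed successfully (a join-style synchronization).
    await asyncio.gather(move_to("L"), orient_head("O"))
    # Temporal constraint: let the robot settle after walking
    # before the camera is used.
    await asyncio.sleep(0.5)
    return await take_picture()

image = asyncio.run(check_drug_box())
print(image)
```

An execution language would express the same pattern declaratively (e.g. a parallel node whose children must all succeed, followed by a timed child), but the control flow is the same: fork, join, delay, act.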
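The last mechanism above — explicit resource representation combined with plan priorities — can be sketched in a few lines. In this hypothetical example (the `Plan` class, the `dispatch` policy, and the resource names are all illustrative, not taken from TDL, PLEXIL, or the other cited languages), each plan declares the resources it requires, and a new plan preempts running plans that hold conflicting resources only when it has strictly higher priority, mirroring the trash/Red-Button scenario:

```python
from dataclasses import dataclass

@dataclass
class Plan:
    name: str
    priority: int
    resources: frozenset  # resources this plan needs exclusively

def conflicts(a: Plan, b: Plan) -> bool:
    # Two plans conflict iff they require a common resource.
    return bool(a.resources & b.resources)

def dispatch(running: list, new: Plan) -> list:
    """Start `new`, aborting any running plan that conflicts with it
    and has strictly lower priority; otherwise reject `new`."""
    blockers = [p for p in running if conflicts(p, new)]
    if any(p.priority >= new.priority for p in blockers):
        return running  # a blocker wins: keep current intentions
    return [p for p in running if p not in blockers] + [new]

throw_trash = Plan("throw_trash", priority=1,
                   resources=frozenset({"legs", "hands"}))
help_user = Plan("help_user", priority=5,
                 resources=frozenset({"legs", "hands"}))

running = dispatch([], throw_trash)
running = dispatch(running, help_user)
print([p.name for p in running])  # the higher-priority plan preempts
```

Real execution languages refine this policy considerably (e.g. suspending rather than aborting blockers, or queuing the new plan until resources free up), but the core decision — compare resource sets, then compare priorities — is the one sketched here.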