representative of the query and update performance of a broad range of agent programs 'in the wild'. Table 1 summarises the agents, environments and agent platforms that were used in the experiments.

Environment     Agent Platform   Agent LoC   Deliberation Cycles
Blocks World    Jason                   34   104-961
                2APL                    64   186-1590
                GOAL                    42   16-144
Elevator Sim    2APL                   367   3187-4010
                GOAL                    87   2292-5844
MAPC 2006       Jason                  295   2664
MAPC 2011       GOAL                  1588   30
Wumpus World    Jason                  294   292-443

Table 1: Agents and Environments

3.1 Experimental Setup

To perform the experiments, we extended the logging functionality of the three agent platforms, and analysed the resulting query and update patterns in the execution traces for each agent/environment combination. The extended logging functionality captured all queries and updates delegated to the knowledge representation used by the agent platform, and the cost of performing each query or update.

In the case of 2APL and GOAL, which use a third-party Prolog engine, we recorded the cost of each query or update delegated to the respective Prolog engine. In these languages, Prolog is used to represent and reason with percepts, messages, knowledge, beliefs, and goals. Action preconditions and test goals are also evaluated using Prolog. Queries and updates to the Prolog database therefore account for all costs involved in the query and update phases of an update cycle. In the case of Jason, the instrumentation is less straightforward, and involved modifying the Jason belief base to record the time required to query and update percepts, messages, knowledge and beliefs.[4] The time required to process other types of Jason events, e.g., those related to the intentions or plans of an agent, was not recorded.

[4] In contrast to 2APL and GOAL, Jason does not have declarative goals.
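To make the instrumentation concrete, the sketch below shows one way such timing and logging could be layered over a belief base. It is a minimal illustration under assumed names: the BeliefBase interface, its methods, and the string representation of beliefs and queries are hypothetical simplifications, not the actual Jason, 2APL or GOAL APIs.

```java
import java.io.PrintWriter;
import java.util.function.Supplier;

// Hypothetical, simplified belief-base interface; the real platform
// interfaces differ. Beliefs and queries are plain strings here.
interface BeliefBase {
    boolean query(String formula);  // evaluate a belief or goal query
    void add(String belief);        // add a belief
    void delete(String belief);     // delete a belief
}

// Decorator that times every operation delegated to the underlying belief
// base and writes one log line per operation, in the format of Figure 4.
class LoggingBeliefBase implements BeliefBase {
    private final BeliefBase delegate;
    private final PrintWriter log;

    LoggingBeliefBase(BeliefBase delegate, PrintWriter log) {
        this.delegate = delegate;
        this.log = log;
    }

    // Run the operation, measure its cost in microseconds, and log it.
    private <T> T timed(String op, String arg, Supplier<T> call) {
        long start = System.nanoTime();
        T result = call.get();
        long micros = (System.nanoTime() - start) / 1000;
        log.printf("%s %s %d%n", op, arg, micros);
        return result;
    }

    public boolean query(String formula) {
        return timed("query", formula, () -> delegate.query(formula));
    }

    public void add(String belief) {
        timed("add", belief, () -> { delegate.add(belief); return null; });
    }

    public void delete(String belief) {
        timed("del", belief, () -> { delegate.delete(belief); return null; });
    }
}
```

Wrapping the belief base in this way keeps the timing code out of the platform's deliberation logic, and yields one log line per query or update, as in the log files discussed next.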
We ran each of the agent/environment/platform combinations listed in Table 1 until the pattern of queries and updates stabilised (i.e., disregarding any 'start up' period when the agents, e.g., populate their initial representation of the environment). Different agent environments required different numbers of deliberation cycles to reach this point (listed in the Deliberation Cycles column of Table 1). For example, fewer deliberation cycles are required to complete a task in the Blocks World than in other environments, whereas the Elevator Sim environment requires thousands of deliberation cycles to reach steady state. For the real-time Multi-Agent Programming Contest cases, the simulations were run for 1.5 minutes, which is sufficient to collect a representative number of cycles while keeping the amount of data that needs to be analysed to manageable proportions.

For each agent/environment/platform run, the time required to perform each query or update resulting from the execution of the agent's program was logged, resulting in log files as illustrated in Figure 4. Here, add and del indicate updates and query indicates a belief or goal query; each line records the belief or goal updated, or the query performed, followed by the time required in microseconds.

...
add on(b2,table) 30
add on(b9,b10) 43
del on(b4,b6) 21
query tower('.'(X,T)) 101
query tower('.'(b7,[])) 51
...

Fig. 4: Example log file

3.2 Experimental Results

In this section we briefly present the results of our analysis and highlight some observations relating to the query and update patterns that can be seen in this data. We stress that our aim is not a direct comparison of the performance of the agent programs or platforms analysed. The performance results presented below depend on the technology used for knowledge representation and reasoning, as well as on the machine architecture used to obtain the results. As such, the figures provide some insight into how these technologies perform in the context of agent programming, but cannot be used directly to compare different technologies. Rather, our main focus concerns the patterns that can be observed in the queries and updates performed by all programs and platforms, and the potential performance improvement that might be gained by caching queries on each agent platform.

We focus on the update cycles that are executed during a run of an agent. Recall that these cycles may differ from the deliberation cycle of an agent: update cycles do not correspond one-to-one to deliberation cycles, and in particular both Jason and 2APL agents execute significantly more deliberation cycles than update cycles, as can be seen by comparing Table 1 with the tables below. An update cycle consists of a phase in which queries are performed, followed by a phase in which updates are performed on the databases that an agent maintains. The phases are extracted from the log files by grouping consecutive query and add/del lines.

We analysed the log files to derive values for all the parameters in the abstract model introduced in Section 2, including the average number of queries performed in each update cycle N, the average number of unique queries performed in an update cycle K, the average number of times that the same query is performed in an update cycle N/K, the average cost of a query c_qry, the average number of updates performed in an update cycle U, and the average cost of an update c_upd.
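As a concrete illustration of this analysis, the sketch below derives these averages from a log file in the format of Figure 4, treating a maximal run of query lines followed by a maximal run of add/del lines as one update cycle, following the grouping described above. The class name and command-line handling are assumptions made for illustration.

```java
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

// Minimal sketch: parse a log file of "<op> <term> <microseconds>" lines
// and compute the averages N, K, N/K, c_qry, U and c_upd over all cycles.
public class LogAnalysis {
    public static void main(String[] args) throws Exception {
        List<String[]> entries = new ArrayList<>();
        for (String line : Files.readAllLines(Paths.get(args[0]))) {
            String[] parts = line.trim().split("\\s+");
            if (parts.length == 3) entries.add(parts);  // skip "..." etc.
        }

        long cycles = 0, queries = 0, uniqueQueries = 0, updates = 0;
        long queryCost = 0, updateCost = 0;
        int i = 0;
        while (i < entries.size()) {
            // Query phase: a maximal run of consecutive 'query' lines.
            Set<String> unique = new HashSet<>();
            long n = 0;
            while (i < entries.size() && entries.get(i)[0].equals("query")) {
                unique.add(entries.get(i)[1]);
                queryCost += Long.parseLong(entries.get(i)[2]);
                n++; i++;
            }
            // Update phase: a maximal run of 'add'/'del' lines.
            long u = 0;
            while (i < entries.size() && (entries.get(i)[0].equals("add")
                                       || entries.get(i)[0].equals("del"))) {
                updateCost += Long.parseLong(entries.get(i)[2]);
                u++; i++;
            }
            if (n == 0 && u == 0) { i++; continue; }  // unrecognised line
            cycles++;
            queries += n;
            uniqueQueries += unique.size();
            updates += u;
        }

        System.out.printf("N = %.1f, K = %.1f, N/K = %.2f%n",
                (double) queries / cycles, (double) uniqueQueries / cycles,
                (double) queries / uniqueQueries);
        System.out.printf("c_qry = %.1f us, U = %.1f, c_upd = %.1f us%n",
                (double) queryCost / queries, (double) updates / cycles,
                (double) updateCost / updates);
    }
}
```

Note that N/K is obtained here as the ratio of the totals, which is a close proxy for the per-cycle average of N/K.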