intentions and plans that can be viewed as queries and updates, these operations vary widely from platform to platform and typically do not involve logical inference.

The first key KRT functionality is querying a database. A query assumes the presence of some inference engine to perform the query. In many agent platforms, a distinction is made between performing a query to obtain a single answer and performing one to obtain all answers. In what follows, we abstract away from the details of the particular inference engines provided by different agent platforms and represent queries by the (average) time required to perform the query.

The second key functionality is that of modifying or updating the content of a database. With the exception of recent work on the semantic web and on theory progression in the situation calculus, update has not been a major concern for classical (non-situated) reasoners. However, it is essential for agents, as they need to be able to represent a changing and dynamic environment. All the agent platforms we have investigated use a simple form of updating which involves simply adding or removing facts. In cases where the agent platform adopts the open world assumption, one needs to be slightly more general and support the addition and removal of literals (positive and negated facts).

Based on this model, we can derive an analysis of the average case performance of a single update cycle of an agent. Our analysis distinguishes between the costs associated with the query phase and the update phase of an update cycle. We assume that the agent performs on average N queries in the query phase of an update cycle.
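The two database operations described above (membership queries, and updates that add or remove literals) can be sketched as follows. This is a minimal illustration of the abstract model only, not the implementation used by any of the platforms discussed here; the class and method names are our own.

```python
# Minimal sketch of the query/update interface assumed by the cost model.
# A belief base stores ground literals; under the open world assumption
# both positive facts and negated facts (written with a "~" prefix here)
# can be added and removed independently.

class BeliefBase:
    def __init__(self):
        self.literals = set()

    def query(self, literal):
        """Query phase: check whether a literal is recorded in the database."""
        return literal in self.literals

    def add(self, literal):
        """Update phase: add a (possibly negated) fact."""
        self.literals.add(literal)

    def remove(self, literal):
        """Update phase: remove a (possibly negated) fact."""
        self.literals.discard(literal)


beliefs = BeliefBase()
beliefs.add("on(a,b)")      # positive literal
beliefs.add("~clear(b)")    # negated literal (open world assumption)

print(beliefs.query("on(a,b)"))    # True
print(beliefs.query("clear(b)"))   # False: not recorded as true
beliefs.remove("~clear(b)")        # e.g. after perceiving that b is now clear
```

In the cost model, each `query` call contributes one of the N queries in the query phase, and each `add` or `remove` call contributes one of the updates in the update phase.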
If the average cost of a query is c_qry, then the average total cost of the query phase is given by

    N · c_qry

In general, the same query may be performed several times in a given update cycle. (We provide support for the fact that queries are performed multiple times in a cycle below.) If the average number of unique queries performed in an update cycle is K, then on average each query is performed n = N/K times per cycle.

The total average cost of the update phase of an update cycle can be derived similarly. In logic-based agent programming languages, updates are simple operations which only add or remove facts (literals) from a database, so it is reasonable to consider only the total number of updates when estimating the cost of the update phase. If U is the average number of updates (i.e., adds and deletes) per cycle and c_upd is the average cost of an update, then the average total cost of the update phase is given by

    U · c_upd

Combining the query and update phase costs yields:

    N · c_qry + U · c_upd    (1)

3 Experimental Analysis

To quantify typical values for the parameters in our abstract performance model, we performed a number of experiments using different agent platforms and agent and environment implementations. We stress that our aim was not to determine the absolute
or relative performance of each platform, but to estimate the relative average number of queries and updates performed by ‘typical’ agent programs, and their relative average costs on each platform, in order to determine to what extent caching may be useful as a general strategy for logic-based agent programming languages. To this end, we selected three well-known agent platforms (Jason [6], 2APL [7] and GOAL [8]) and five existing agent programs/environments (Blocks World, Elevator Sim, Multi-Agent Programming Contest 2006 & 2011, and Wumpus World).

The agent platforms were chosen as representative of the current state of the art in logic-based agent programming languages, and span a range of implementation approaches. For example, both 2APL and GOAL use Prolog engines provided by third parties for knowledge representation and reasoning. 2APL uses the commercial Prolog engine JIProlog [10], implemented in Java, whereas GOAL uses the Java interface JPL to connect to the open source SWI-Prolog engine (v5.8), which is implemented in C [11]. In contrast, the logical language used in Jason is integrated into the platform and is implemented in Java.

The agent programs were chosen as representative of ‘typical’ agent applications, and span a wide range of task environments (from observable and static to partially observable and real-time), program complexity (measured in lines of code, LoC), and programming styles. The Blocks World is a classic environment in which blocks must be moved from an initial position to a goal state by means of a gripper. The Blocks World is a single-agent, discrete, fully observable environment in which the agent has full control.
Elevator Sim is a dynamic environment that simulates one or more elevators in a building with a variable number of floors (we used 25 floors), where the goal is to transport a pre-set number of people between floors [12]. Each elevator is controlled by an agent, and the simulator controls people who randomly appear, push call buttons and floor buttons, and enter and leave elevators upon arrival at floors. The environment is partially observable, as elevators cannot see which buttons inside other elevators are pressed nor where these other elevators are located. In the 2006 Multi-Agent Programming Contest scenario (MAPC 2006) [13], teams of 5 agents explore grid-like terrain to find gold and transport it to a depot. In the 2011 Multi-Agent Programming Contest scenario (MAPC 2011) [14], teams of 10 agents explore ‘Mars’ and occupy valuable zones. Both MAPC environments are discrete, partially observable, real-time multi-agent environments in which agent actions are not guaranteed to have their intended effect. Finally, the Wumpus World is a discrete, partially observable environment in which a single agent must explore a grid to locate gold while avoiding being eaten by the Wumpus or trapped in a pit.

For some of the environments we also varied the size of the problem instance the agent(s) have to deal with. In the Blocks World the number of blocks determines the problem size, and in the Elevator Sim an important parameter determining the size of a problem instance is the number of people to be moved between floors. The sizes of the problem instances we used can be found in the first column of Tables 2 through 6.

It is important to stress that, to avoid any bias due to agent design in our results, the programs were not written specially for the experiments.
While our selection was therefore necessarily constrained by the availability of pre-existing code (in particular, versions of each program were not available for all platforms), we believe our results are