Query Caching in Agent Programming Languages

Natasha Alechina¹, Tristan Behrens², Koen Hindriks³, and Brian Logan¹

¹ School of Computer Science, University of Nottingham, Nottingham NG8 1BB, UK
² Department of Informatics, Clausthal University of Technology, Germany
³ Delft University of Technology

Abstract. Agent programs are increasingly widely used for large scale, time-critical applications. In developing such applications, the performance of the agent platform is a key concern. Many logic-based BDI agent programming languages rely on inferencing over some underlying knowledge representation. While this allows the development of flexible, declarative programs, repeated inferencing triggered by queries to the agent's knowledge representation can result in poor performance. In this paper we present an approach to query caching for agent programming languages. Our approach is motivated by the observation that agents repeatedly perform queries against a database of beliefs and goals to select possible courses of action. Caching the results of previous queries (memoization) is therefore likely to be beneficial. We develop an abstract model of the performance of a logic-based BDI agent programming language. Using our model together with traces from typical agent programs, we quantify the possible performance improvements that can be achieved by memoization.
Our results suggest that memoization has the potential to significantly increase the performance of logic-based agent platforms.

1 Introduction

Belief-Desire-Intention (BDI) based agent programming languages facilitate the development of rational agents specified in terms of beliefs, goals and plans. In the BDI paradigm, agents select a course of action that will achieve their goals given their beliefs. To select plans based on their goals and beliefs, many logic-based BDI agent programming languages rely on inferencing over some underlying knowledge representation. While this allows the development of flexible, declarative programs, repeated inferencing triggered by queries to the agent's knowledge representation can result in poor performance. When developing large scale, time-critical multiagent applications, such performance issues are often a key concern, potentially adversely impacting the adoption of BDI-based agent programming languages and platforms as an implementation technology.

In this paper we present an approach to query caching for agent programming languages. Our approach is motivated by the observation that agents repeatedly perform queries against a database of beliefs and goals to select possible courses of action.
Caching the results of previous queries (memoization) is therefore likely to be beneficial. Indeed, caching as used in algorithms such as Rete [1] and TREAT [2] has been shown to be beneficial in a wide range of related AI applications, including cognitive agent architectures, e.g., [3], expert systems, e.g., [4], and reasoners, e.g., [5]. However, that work has focused on the propagation of simple ground facts through a dependency network. In contrast, the key contribution of this paper is to investigate the potential of caching the results of arbitrary logical queries in improving the performance of agent programming languages. We develop an abstract model of the performance of a logic-based BDI agent programming language, defined in terms of the basic query and update operations that form the interface to the agent's knowledge representation. Using our model together with traces from typical agent programs, we quantify the possible performance improvements that can be achieved by memoization. Our results suggest that memoization has the potential to significantly increase the performance of logic-based agent platforms.

The remainder of the paper is organised as follows. In Section 2 we introduce an abstract model of the interface to a logic-based BDI agent's underlying knowledge representation and an associated performance model. Section 3 presents experimental results obtained from traces of typical agent programs and several key observations regarding query and update patterns in these programs. Section 4 introduces two models to exploit these observations and improve the efficiency of the use of Knowledge Representation Technologies (KRTs) by agent programs.
Section 5 discusses related work, and Section 6 concludes the paper.

2 Abstract Performance Model

In this section, we present an abstract model of the performance of a logic-based agent programming language as a framework for our analysis. The model abstracts away details that are specific to particular agent programming languages (such as Jason [6], 2APL [7], and GOAL [8]), and focuses on key elements that are common to most, if not all, logic-based agent programming languages.

The interpreter of a logic-based BDI agent programming language repeatedly executes a 'sense-plan-act' cycle (often called a deliberation cycle [9] or agent reasoning cycle [6]). The details of the deliberation cycle vary from language to language, but in all cases it includes processing of events (sense), deciding on what to do next (plan), and executing one or more selected actions (act). In a logic-based agent programming language, the plan phase of the deliberation cycle is implemented by executing the set of rules comprising the agent's program. The rule conditions consist of queries to be evaluated against the agent's beliefs and goals (e.g., plan triggers in Jason, the heads of practical reasoning rules in 2APL), and the rule actions consist of actions or plans (sequences of actions) that may be performed by the agent in a situation where the rule condition holds. In the act phase, we can distinguish between two different kinds of actions. Query actions involve queries against the agent's beliefs and goals and do not change the agent's state. Update actions, on the other hand, are either actions that directly change the agent's beliefs and goals (e.g., 'mental notes' in Jason, belief update …)
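The query/update interface at the heart of this model, together with the memoization idea, can be sketched as follows. This is a minimal illustrative sketch, not the interface of any of the cited platforms: the class and method names (`BeliefBase`, `query`, `update`) are hypothetical, and a membership test stands in for real logical inference. Because query actions are read-only, their results can be cached; because update actions change the agent's state, they must invalidate the cache:

```python
class BeliefBase:
    """Sketch of a KR interface with query memoization.

    Names are illustrative; real platforms (Jason, 2APL, GOAL)
    expose richer query languages over beliefs and goals.
    """

    def __init__(self):
        self._facts = set()   # ground beliefs
        self._cache = {}      # query -> memoized result

    def query(self, q):
        """Query action: read-only, so the result may be memoized."""
        if q in self._cache:
            return self._cache[q]
        result = q in self._facts  # stand-in for real inference
        self._cache[q] = result
        return result

    def update(self, add=(), remove=()):
        """Update action: changes state, so the cache is flushed."""
        self._facts |= set(add)
        self._facts -= set(remove)
        self._cache.clear()  # coarse-grained invalidation


bb = BeliefBase()
bb.update(add={"at(home)"})
assert bb.query("at(home)")      # computed, then cached
assert bb.query("at(home)")      # served from the cache
bb.update(add={"at(work)"}, remove={"at(home)"})
assert not bb.query("at(home)")  # cache was invalidated
```

Flushing the whole cache on every update is the simplest policy; a cache pays off precisely when, as the trace data discussed later suggests, queries are repeated far more often than updates occur.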
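The sense-plan-act cycle described above can be sketched schematically as follows. This is a hypothetical skeleton for illustration only (the `Agent` and `Rule` types and a plain set of beliefs are assumptions, and rule conditions are matched by simple membership rather than logical inference); it shows where queries arise in the plan phase:

```python
from dataclasses import dataclass, field


@dataclass
class Rule:
    condition: str  # query to evaluate against the beliefs
    plan: list      # actions to execute when the condition holds


@dataclass
class Agent:
    beliefs: set = field(default_factory=set)
    rules: list = field(default_factory=list)
    executed: list = field(default_factory=list)

    def deliberation_cycle(self, percepts):
        # Sense: incorporate percepts (update actions).
        self.beliefs |= set(percepts)
        # Plan: evaluate rule conditions (query actions).
        applicable = [r for r in self.rules
                      if r.condition in self.beliefs]
        # Act: execute the plan of the first applicable rule.
        if applicable:
            self.executed.extend(applicable[0].plan)


agent = Agent(rules=[Rule("hungry", ["eat"]), Rule("tired", ["sleep"])])
agent.deliberation_cycle(percepts={"tired"})
assert agent.executed == ["sleep"]
```

Since every pass of the plan phase re-evaluates the conditions of all rules, any query whose answer has not changed since the previous cycle is recomputed needlessly, which is exactly the repetition that memoization targets.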