 1  agent-script ACMEThermostat implements Thermostat in SmartHome {
 2
 3    cond: Conditioner;
 4    therm: Thermometer;
 5    threshold: double;
 6    condSpeed: double;
 7
 8    plan-for Init (conditioner: Conditioner, thermometer: Thermometer) {
 9      cond = conditioner; therm = thermometer; threshold = 1; condSpeed = 0.50;
10    }
11
12    plan-for AchieveTemperature using: cond, therm, log@mainRoom {
13      log(msg: "achieving temperature " + targetTemp + " from " + currentTemp)
14      completed-when: java.lang.Math.abs(targetTemp - currentTemp) < threshold {
15        every-time currentTemp > (targetTemp + threshold) && !isCooling
16          => startCooling(speed: condSpeed)
17        every-time currentTemp < (targetTemp - threshold) && !isHeating
18          => startHeating(speed: condSpeed)
19      }
20      if (isHeating || isCooling) {
21        stop()
22      }
23    }
24
25    plan-for KeepTemperature using: cond, therm, userView {
26      quitPlan: boolean = false;
27
28      completed-when: quitPlan {
29        do-task AchieveTemperature(targetTemp: desiredTemp)
30
31        every-time changed desiredTemp => atomically: {
32          if (is-doing-any AchieveTemperature) {
33            drop-all-tasks AchieveTemperature
34          }
35          do-task AchieveTemperature(targetTemp: desiredTemp)
36        }
37
38        every-time changed currentTemp : !is-doing-any AchieveTemperature
39          => do-task AchieveTemperature(targetTemp: desiredTemp)
40      }
41
42      every-time told newThreshold => {
43        threshold = newThreshold
44      }
45
46      every-time changed thermostatStatus : thermostatStatus == ThermostatStatus.OFF
47        => atomically: {
48          if (isCooling || isHeating) {
49            stop()
50          }
51          drop-all-tasks AchieveTemperature
52          quitPlan = true
53        }
54    }
55
56
57    plan-for SelfTest() using: log@mainRoom, cond {
58      log(msg: "Simple thermostat " + myName)
59      log(msg: "Using the conditioner " + name in cond)
60    }
61  }

Fig. 3. Definition of a script in simpAL.
Defining Agent Scripts in simpAL (Fig. 3)

The behavior of an agent can be programmed in simpAL through the definition of scripts, which are loaded and executed by agents at runtime. Here we give a very brief account, directly using the ACMEThermostat example of Fig. 3. The definition of an agent script includes the script name, an explicit declaration of the roles played by the script, and then the script body, which contains the declaration of a set of beliefs and the definition of a set of plans. Beliefs in simpAL are like simple variables, characterized by a name, a type and an initial value. The ACMEThermostat script has four beliefs: two to keep track of the conditioner and thermometer artifacts used to do its job, one for the threshold, and one for the speed to be used when driving the conditioner. Beliefs declared at the script level are a sort of long-term memory of the agent, useful to keep track of information that can be accessed and updated by any plan in execution, and whose lifetime is equal to that of the agent (script). Plans contain the recipes to execute tasks. The ACMEThermostat script has four plans: one to achieve a certain temperature value (lines 12-23), one to maintain a temperature value (lines 25-54), one to perform a self test (lines 57-60), and a pre-defined plan to initialize the script, which is executed by default when the script is loaded. To do the AchieveTemperature task, the plan starts cooling or heating – using the conditioner – as soon as the current temperature is too high or too low compared to the target one (taking the threshold into account); the current temperature is observed through the thermometer.
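The switching logic just described is essentially a hysteresis controller. The following is a minimal sketch in Python (not simpAL; the function name and the returned action labels are illustrative only) of one decision step of the AchieveTemperature plan:

```python
# Illustrative hysteresis step mirroring the AchieveTemperature plan.
# simpAL expresses this with action rules; names here are hypothetical.

def step(current_temp, target_temp, threshold, is_cooling, is_heating):
    """Return the action the plan would take for one perceived temperature."""
    if abs(target_temp - current_temp) < threshold:
        # completed-when condition holds: stop the conditioner if it is active
        return "stop" if (is_cooling or is_heating) else "done"
    if current_temp > target_temp + threshold and not is_cooling:
        return "startCooling"   # too hot: first rule fires
    if current_temp < target_temp - threshold and not is_heating:
        return "startHeating"   # too cold: second rule fires
    return "wait"  # no rule fires; keep observing the thermometer
```

For instance, with a target of 21, a threshold of 1 and a current temperature of 25, the first rule fires and cooling starts.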
The belief about the target temperature (targetTemp) derives from the corresponding attribute (parameter) of the task, while the one about the current temperature (currentTemp) is bound to the observable property of the therm thermometer used in the plan. As soon as the current temperature is within the desired range, the plan completes, stopping the conditioner if it was working. To do the KeepTemperature task, the plan achieves the desired temperature by immediately executing the sub-task AchieveTemperature (line 29), which is executed again as soon as the desired temperature changes (lines 31-36) or the current temperature changes while the agent is not already achieving the temperature (lines 38-39). The belief about the desired temperature (desiredTemp) comes from the observable property of the userView artifact used in the plan. Also, as soon as a message about a new threshold is told by some other agent, the internal value of the threshold is updated. The plan quits when the agent perceives from the userView artifact that the user has switched off the thermostat; in that case, before quitting the plan, the conditioner is stopped if it was working. Finally, to do the SelfTest task, the agent simply logs on the console some information: its identifier and the name of the conditioner it is using.

Some explanations of key elements of the syntax and semantics of plans follow; a more comprehensive description can be found in [15] and in the simpAL technical documentation. The definition of a plan includes the specification of the type of task for which the plan can be used and a plan body, which is an action rule block.
The action rule block contains the declaration of a set of local beliefs – visible only inside the block, as a kind of short-term memory – and a set of action rules specifying when to execute which action. In the simplest case, an action rule is just an action, and a block can be a flat list of actions. In that case, the actions are executed in sequence, i.e. every action in the list is executed only after perceiving the event that the previous one has completed. In the most general case, an action rule is of the kind: every-time | when Event : Condition => Action, meaning that the specified action can be executed every time (every-time) or only once (when) the specified event occurs and the specified condition – a boolean expression over the agent's belief base – holds. If not specified, the default value of the condition is true. Events concern percepts related to one of: (a) the environment, (b) messages sent by agents, (c) action execution, (d) time passing. All events are actually uniformly modeled as changes to some belief belonging to the agent's belief base, given that observable properties, messages sent, action state variables and the internal clock of the agent are all represented as beliefs. The syntax for specifying events related to a change of an observable property is changed ObsProp (e.g. line 38); the one for specifying the update of a belief about a piece of information told by another agent is told What (e.g. line 42). If no event is specified, the pre-defined meaning is that the rule can be triggered immediately, but only once. Given that, the execution of a flat list of actions can be obtained by a sequence of action rules with only the action specified.
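The difference between every-time and when rules can be mimicked with a tiny event dispatcher. This is a sketch in Python, not the simpAL runtime; the Rule class, its fields and the event strings are invented for illustration:

```python
# Minimal sketch of every-time vs. when rule semantics:
# a 'when' rule fires at most once; an 'every-time' rule fires on each match.

class Rule:
    def __init__(self, event, action, once=False, condition=lambda beliefs: True):
        self.event = event          # event pattern, e.g. "changed currentTemp"
        self.action = action        # action to execute when the rule fires
        self.once = once            # True models 'when', False models 'every-time'
        self.condition = condition  # boolean expression over the belief base
        self.fired = False

def dispatch(rules, event, beliefs):
    """Deliver one event (modeled as a belief change) to the rule set."""
    executed = []
    for r in rules:
        if r.event == event and r.condition(beliefs) and not (r.once and r.fired):
            r.fired = True
            executed.append(r.action)
    return executed
```

For example, a `when`-style rule for `told newThreshold` executes its action on the first matching event and stays silent afterwards, while an `every-time` rule keeps reacting to each change.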
Actions can be either external actions to affect the environment, i.e. operations provided by some artifact, communicative actions to directly interact with some other agent (to tell some belief, to assign a task, etc.), or pre-defined internal actions – to update internal beliefs, to manage tasks in execution, etc. An action can also be an action rule block {...}, which makes it possible to nest action blocks. Finally, the definition of an action rule block may specify some predefined attributes, for instance: the using: attribute, to specify the list of the identifiers of the artifacts used inside the block (an artifact can be used/observed only if explicitly declared); the completed-when: attribute, to specify the condition under which the execution of the action rule block can be considered completed; and the atomically: attribute, to specify that the action rule block must be executed as a single action, without being interrupted or interleaved with blocks of other plans in execution (when the agent is executing multiple tasks at a time).
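The effect of the atomically: attribute can be pictured with a toy round-robin scheduler that interleaves two plans action by action, except that a block marked atomic is consumed as one uninterruptible step. This is purely illustrative; the actual simpAL semantics is defined over the agent's action rules, and the data representation below is invented:

```python
# Toy illustration of atomically:. Plans are interleaved one action at a
# time, but a list tagged "atomic" runs as a single uninterruptible step.

def run(plans):
    """plans: list of lists; each item is an action (str) or ('atomic', [actions])."""
    trace = []
    queues = [list(p) for p in plans]
    while any(queues):
        for q in queues:               # round-robin over the plans in execution
            if not q:
                continue
            step = q.pop(0)
            if isinstance(step, tuple) and step[0] == "atomic":
                trace.extend(step[1])  # whole block runs without interleaving
            else:
                trace.append(step)
    return trace
```

Running `run([["a1", ("atomic", ["a2", "a3"])], ["b1", "b2"]])` keeps `a2` and `a3` adjacent in the trace even though the two plans are otherwise interleaved.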