a cognitive layer can connect as a client (thus facilitating the swapping of control from one robot to another).
– Debugger Monitor provides several GUIs that are useful for debugging robot programs, enabling developers to visualize sensory data and allowing them to set specific function parameters. This component also includes a Wizard of Oz interface for conducting human-robot interaction experiments.
– Action Execution, for instructing a robot to perform concrete behaviours and actions. The actions include motions such as walking and turning around, while the behaviours include predefined body gestures such as sitting down, standing up and raising the arms of humanoid robots.

Environment Interface The environment interface layer, built using EIS [8], provides support for interfacing the behavioural layer with the cognitive layer. The core components in this layer include an Environment Model that establishes the connection between cognitive robotic agents and external environments, an Environment Management component that initializes and manages the interface, and an Environment Interface component that provides the actual bridge between a cognitive agent and the behavioural layer.

Deliberative Reasoning and Decision Making The cognitive control layer provides support for reasoning and decision making. In our architecture, we employ the Goal agent programming language for programming the high-level cognitive agent that controls the robot. Goal is a language for programming logic-based, cognitive agents that use symbolic representations for their beliefs and goals, from which they derive their choice of action. Due to space limitations, we do not describe Goal agent programs in any detail here but refer the interested reader to [5, 6] for more information.

3.3 Information & Control Flow in the Architecture

A key issue in any robot control architecture concerns the flow of information and control. Each component in such an architecture needs access to the relevant information in order to function effectively. In line with the general architecture of Figure 1, and with layered architectures in general, different types of data are associated with each of the layers, where EIS acts as a mediating layer between the behavioural and cognitive layers. At the lowest level, all "raw" sensor data is processed, yielding information that is used in the behavioural layer. The information present in the behavioural layer is abstracted and translated into knowledge by means of EIS, and this knowledge is used in the cognitive layer. Actions, in turn, are decided upon in the cognitive layer. These actions are translated by EIS into behaviours that can be executed at the behavioural layer. Finally, these behaviours are translated into motor control commands at the lowest layer. An illustrative sketch of this mediation is given below.
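To make this two-way translation concrete, the following Java sketch shows the kind of mediation the EIS layer performs: behavioural-layer data is abstracted into symbolic percepts for the cognitive layer, and symbolic actions are mapped onto behaviours. The sketch is purely illustrative; SymbolicPercept, BehaviourLayer and the action names are our own assumptions and do not reproduce the actual EIS API [8].

    import java.util.ArrayList;
    import java.util.List;

    // Minimal sketch of the mediating role of an environment interface layer.
    // All names are illustrative assumptions; this is not the actual EIS API.
    public class InterfaceLayerSketch {

        // A symbolic percept, e.g. face("alice"), as seen by the cognitive layer.
        record SymbolicPercept(String functor, String... args) {
            @Override public String toString() {
                return functor + "(" + String.join(",", args) + ")";
            }
        }

        // Hypothetical handle on the behavioural layer (the C++ side).
        interface BehaviourLayer {
            List<String> recognizedFaces();          // output of face recognition
            void startBehaviour(String name, String... params);
        }

        // Bottom-up: abstract behavioural-layer information into symbolic percepts.
        static List<SymbolicPercept> percepts(BehaviourLayer layer) {
            List<SymbolicPercept> result = new ArrayList<>();
            for (String name : layer.recognizedFaces()) {
                result.add(new SymbolicPercept("face", name));
            }
            return result;
        }

        // Top-down: translate a symbolic action, e.g. goto(roomA), into a behaviour.
        static void performAction(BehaviourLayer layer, String action, String... args) {
            switch (action) {
                case "goto" -> layer.startBehaviour("navigate", args);
                case "say"  -> layer.startBehaviour("textToSpeech", args);
                default     -> throw new IllegalArgumentException("unknown action: " + action);
            }
        }
    }

The essential point is that the cognitive layer only ever sees the symbolic terms produced and consumed at this boundary, never the underlying C++ data structures or urbiscript code.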
Bottom-up Data Processing and Top-down Action Execution The processing of data starts at the bottom of the architecture and follows a strict bottom-up processing scheme, whereas for action execution the order is reversed and a strict top-down scheme is followed. Different types of representations are used at each of the layers: urbiscript at the lowest layer, C++ data structures in the behavioural layer, and Prolog representations in the logic-based cognitive layer. Data processing starts by capturing values from various sensors. This sensory data is then analysed to extract useful features and to classify the data using basic knowledge of the environment. Fusion of different data sources is used to increase reliability. Finally, by using more high-level, semantic knowledge about the environment, symbolic representations are obtained by combining, abstracting and translating the information present at the behavioural layer by means of the EIS layer. The EIS layer provides symbolic "percepts" as input to the cognitive layer. Different aspects of the environment are handled by different techniques. For example, in order to recognize human faces, camera images need to be captured and processed. First, several image frames are captured by the lowest layer for further processing in the behavioural layer, which runs face detection or recognition algorithms. Using the information obtained about the exact regions where a face is located in an image, further processing is needed to match these regions against a database of faces. Upon recognition, finally, a face can be associated with a qualitative description, i.e. a name, which is sent to the cognitive layer. The cognitive layer then uses this information and decides, e.g., to say "Hello, name.".

The cognitive layer is the center of control and decides on the actions that the robot will perform. Actions are translated into one or more, possibly complex, behaviours at the behavioural layer. Of course, such behaviours, and thus the actions that triggered them, take time, and it is important to monitor the progress of the behaviours that are used to execute a high-level action. The cognitive layer delegates most of the detailed control to the behavioural layer but needs to remain informed in order to be able to interrupt or correct things that go wrong at this layer. One clear example concerns navigation. At the cognitive layer, a decision to perform a high-level action goto(roomA) may be made, which is then passed on to a navigation module at the behavioural layer. The latter layer figures out details such as the exact destination position of roomA on the 2D grid map and then plans an optimal path to the destination. Due to the unreliability of walking (e.g. foot slippage), the optimal path needs to be continuously re-evaluated and re-planned in real time. This happens in the behavioural layer. However, at the same time, the cognitive layer needs to monitor whether the planning at the behavioural layer yields the expected progress, and it may interrupt by changing target locations. A sketch of such monitored delegation is given below.
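The monitoring loop described above can be sketched as follows. Again, this is a hypothetical illustration in Java rather than the system's actual code: the NavigationModule interface, the progress threshold and the timeout are all assumptions we introduce for the example.

    // Illustrative sketch of monitored delegation of a goto action; the
    // NavigationModule interface and all thresholds are hypothetical.
    public class GotoMonitorSketch {

        interface NavigationModule {
            void setTarget(String room);   // plans, and continuously re-plans, a path
            double distanceToTarget();     // current estimated distance in metres
            boolean isActive();            // false once the target has been reached
        }

        // The cognitive layer delegates navigation but keeps monitoring progress;
        // if no progress is made for too long, it interrupts by changing the target.
        static void gotoRoom(NavigationModule nav, String room, String fallbackRoom,
                             long timeoutMillis) throws InterruptedException {
            nav.setTarget(room);
            double lastDistance = nav.distanceToTarget();
            long lastProgress = System.currentTimeMillis();
            while (nav.isActive()) {
                Thread.sleep(500);                   // monitoring interval
                double d = nav.distanceToTarget();
                if (d < lastDistance - 0.1) {        // progress of at least 10 cm
                    lastDistance = d;
                    lastProgress = System.currentTimeMillis();
                } else if (System.currentTimeMillis() - lastProgress > timeoutMillis) {
                    nav.setTarget(fallbackRoom);     // interrupt: change target location
                    return;
                }
            }
        }
    }

Note that the path (re-)planning itself stays entirely inside the behavioural layer; the cognitive layer only observes coarse progress and intervenes at the level of target locations.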
Synchronization of Perception and Action Part of the job of the EIS intermediate layer is to manage the "cognitive" load that needs to be processed in the cognitive layer. One issue is that the Environment Configurator in the behavioural layer will typically produce a large number of (more or less) identical percepts. For example, assuming a camera frame rate of 30 fps, if a robot stares at an object for only one second, this component will generate 30 identical percepts. To prevent such a stream of redundant percepts from being generated, the EIS layer acts as a filter (sketched below). However, this only works if objects in the behavioural layer can reliably be differentiated and
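A duplicate-suppressing filter of this kind can be sketched as follows. This is a minimal illustration under the assumption that percepts are plain terms with well-defined equality; it is not the filter actually implemented in the EIS layer.

    import java.util.HashSet;
    import java.util.List;
    import java.util.Set;
    import java.util.stream.Collectors;

    // Minimal sketch of a redundant-percept filter: only percepts that were not
    // already delivered in the previous cycle are forwarded to the cognitive layer.
    public class PerceptFilterSketch {

        private Set<String> previousCycle = new HashSet<>();

        // Given the raw percepts of one cycle (e.g. 30 identical face("alice")
        // percepts per second at 30 fps), forward each distinct percept only once.
        public List<String> filter(List<String> rawPercepts) {
            Set<String> currentCycle = new HashSet<>(rawPercepts);   // drop in-cycle duplicates
            List<String> fresh = currentCycle.stream()
                    .filter(p -> !previousCycle.contains(p))         // drop cross-cycle repeats
                    .collect(Collectors.toList());
            previousCycle = currentCycle;
            return fresh;
        }
    }

Such a filter presupposes a stable symbolic identity for the percepts being compared, which is exactly why the objects reported by the behavioural layer must be reliably differentiated.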