First National Workshop on Control Architectures of Robots
April 6-7, 2006 - Montpellier

Organization

This first national workshop is aimed at addressing important aspects of robot control architectures, with a specific emphasis on software aspects. It brings together researchers and practitioners from universities, institutions and industry working in this field. It is intended as a meeting to expose and discuss gathered expertise, identified trends and issues, as well as new scientific results and applications around software control architectures, through plenary invited papers. It consists of 19 invited talks and 2 round tables.

This 2-day event has been co-organized by LIRMM¹ and DGA². The organizing committee wishes to thank the following organizations for their financial support for the workshop:
• LIRMM CNRS-UM2,
• University of Montpellier 2 (UM2),
• STICS Research Department of UM2,
• Robotics Department of LIRMM.

Steering Committee
• David Andreu, LIRMM, University of Montpellier 2, France - andreu@lirmm.fr
• Aurélien Godin, ETAS, DGA, France - aurelien.godin@dga.defense.gouv.fr

Local Organization
• Céline Berger, LIRMM, France - berger@lirmm.fr
• Stéphanie Belin, LIRMM, France - belin@lirmm.fr

Web
• Robin Passama, LIRMM, University of Montpellier 2, France - passama@lirmm.fr

1. Laboratoire d'Informatique, de Robotique et de Microélectronique de Montpellier: http://www.lirmm.fr
2. Délégation Générale pour l'Armement: http://www.defense.gouv.fr/dga/


Theme

Due to their increasing complexity, today's intervention robots, that is to say those dedicated for instance to exploration, security or defence applications, raise major scientific and commercial issues. Whatever the environment considered (terrestrial, aerial, marine or even space), this complexity mainly derives from the integration of multiple functionalities: advanced perception, planning, navigation and autonomous behaviours, in parallel with teleoperation or robot coordination, make it possible to tackle ever more difficult missions.

But robots can only be equipped with such functions if an appropriate hardware and software structure is embedded: software architectures will hence be the main concern of this workshop.

As noted above, the control architecture is a necessary element for the integration of a multitude of works; it also makes it possible to cope with technological advances that continually offer new devices for communication, localisation, computing, etc. It should therefore be modular, reusable, scalable and even readable (i.e., easy to analyse and understand). Such properties also ease the sharing of competencies within the robotics community, as well as with computer scientists and control specialists, since the domain is inherently multidisciplinary.

Numerous solutions have been proposed, based on the "classical" three-layer architecture or on more "modern" approaches such as object- or component-oriented programming. In practice, almost every robot embeds its own architecture; the workshop will thus be a real opportunity to share reflections on these solutions, but also on related needs, especially standardization needs, which are of particular importance in military applications for instance.

Hence, this first national workshop on control architectures of robots aims at gathering a large number of robotics actors (researchers, manufacturers, as well as state institutions) in order to highlight the multiple issues, key difficulties and potential sources of progress.

David Andreu, LIRMM, Robotics Department
Aurélien Godin, DGA, Angers Technical Centre


Contents

Organization .......................................................... 4
Theme ................................................................. 5
Program ............................................................... 8

Papers

Session “Institutions”

DGA-ETAS .............................................................. 10
Pleading for open modular architectures in robotics
A. Godin, O. Evain.

DGA-SPART ............................................................. 18
Integrating human/robot interaction into robot control architectures for defence applications
D. Dufourd, A. Dalgalarrondo.

ONERA-CERT ............................................................ 37
ProCoSA: a software package for autonomous system supervision
M. Barbier, J.F. Gabard, D. Vizcaino, O. Bonnet-Torrès.

DGA-GESMA ............................................................. 48
Goal driven planning and adaptivity for AUVs
H. Ayreault, F. Dabe, M. Barbier, S. Nicolas, G. Kermet.

IFREMER ............................................................... 56
Advanced Control for Autonomous Underwater Vehicles
M. Perrier.

Session “Industrials”

THALES ................................................................ 64
Compared Architectures of Vehicle Control System (Vetronics) and application to an UXV
J.P. Quin.

INTEMPORA ............................................................. 72
RT-MAPS: a modular software for rapid prototyping of real-time multisensor applications
N. Dulac.

ECA ................................................................... 76
DES (Data Exchange System), a publish/subscribe architecture for robotics
C. Riquier, N. Ricard, C. Rousset.

ROBOSOFT .............................................................. 85
Modular distributed architecture for robotics embedded systems
P. Pomiers, V. Dupourqué.

ECA (CYBERNETIX) ...................................................... 91
Remote operation kit with modular conception and open architecture: the SUMMER concept
L. Walle.


Session “Academics”

LVR - Université d'Orléans ........................................... 100
A multi-level architecture controlling robots from autonomy to teleoperation
C. Novales, G. Mourioux, G. Poisson.

LAAS-CNRS ............................................................ 120
LAAS architecture: Open Robots
F. Ingrand.

LISYC - Université de Bretagne Occidentale ........................... 121
Architectures logicielles pour la robotique et sûreté de fonctionnement
L. Nanatchamda.

INRIA Rhône-Alpes .................................................... 131
Orccad, a framework for safe control design and implementation
D. Simon, R. Pissard-Gibollet, S. Arias.

LIRMM-CNRS - Université Montpellier 2 ................................ 145
Overview of a new Robot Controller Development Methodology
R. Passama, D. Andreu, T. Libourel, C. Dony.

LIP6 - Université Pierre et Marie Curie Paris VI ..................... 164
An Asynchronous Reflection Model for Object-oriented Distributed Reactive Systems
J. Malenfant.

LST - Université de Technologie de Belfort-Montbéliard ............... 183
Reactive Multi-Agent approaches for the Control of Mobile Robots
O. Simonin.

VALORIA - Université Bretagne Sud .................................... 192
Horocol language and Hardware modules for robots
D. Duhaut, C. Gueganno, Y. Le Guyadec, M. Dubois.

CEMAGREF ............................................................. 203
A Real-Time, Multi-Sensor Architecture for fusion of delayed observations: Application to Vehicle Localisation
C. Tessier, C. Cariou, C. Debain, F. Chausse, R. Chapuis, C. Rousset.

List of speakers ..................................................... 208
List of participants ................................................. 209


Program

April 6, 2006

8:30-9:00 Reception (coffee)
9:00-9:30 Welcome LIRMM-DGA: F. Pierrot (LIRMM Assistant Director), R. Zapata (Robotics Department Manager)

9:30-12:30 Session « Institutions »
9:30-10:00 DGA-ETAS, A. Godin
10:00-10:30 DGA-SPART, D. Dufourd
10:30-11:00 Coffee Break
11:00-11:30 ONERA-CERT, M. Barbier
11:30-12:00 DGA-GESMA, H. Ayreault
12:00-12:30 IFREMER, M. Perrier
12:30-14:00 Lunch

14:00-16:30 Session « Industrials »
14:00-14:30 THALES, J.P. Quin
14:30-15:00 INTEMPORA, N. Dulac
15:00-15:30 ECA, C. Rousset
15:30-16:00 ECA (Cybernetix), L. Walle
16:00-16:30 Robosoft, P. Pomiers
16:30-17:00 Coffee Break
17:00-18:30 Round table « Software control architecture: trends and issues »
20:00 Dinner

April 7, 2006

8:30-9:00 Reception (coffee)

9:00-12:00 Session « Academics »
9:00-9:30 LVR, C. Novales
9:30-10:00 LAAS, F. Ingrand
10:00-10:30 LISYC, L. Nanatchamda
10:30-11:00 Coffee Break
11:00-11:30 INRIA, D. Simon
11:30-12:00 LIRMM, R. Passama
12:00-14:00 Lunch
14:00-14:30 LIP6, J. Malenfant
14:30-15:00 UTBM, O. Simonin
15:00-15:30 VALORIA, D. Duhaut
15:30-16:00 CEMAGREF, C. Tessier
16:00-16:30 Coffee Break
16:30-18:00 Round table « Software robot control architecture: what's going on? »


Session « Institutions »


Pleading for open modular architectures

Aurélien Godin, Olivier Evain
French Department of Defence
Angers Technical Centre / Vetronics & Robotics Group
{aurelien.godin,olivier.evain}@dga.defense.gouv.fr

Abstract: Since the first research in the field of robotics architectures, many advances have been achieved. Today's software offers modular functionalities, whose benefits will be recalled in section I, and actually satisfies most needs, as shown in section II. However, none of the mentioned approaches has managed to spread widely or succeeded in federating the works developed by the very numerous robotics actors, not even those that can already be considered mature. We argue here that porting these works from one architecture to another is a major difficulty and that only an effort towards standardisation can overcome this drawback.

I. REASONS FOR DEFENDING MODULAR OPEN DESIGNS

We here define the architecture as the structured organisation of components (or "framework") embedded on a system, which enables their simultaneous and correct execution by offering the basic services needed by all of them. In this paper, we will more specifically consider software aspects. Modularity will be defined as the ability, for this software, to receive new components that were not included in the original release, thus enabling its functionalities to be extended a posteriori. It will moreover be "open" if the interfaces for writing these modules are public, so that someone other than the developer who originally implemented the code can produce them. Linux, for instance, is such a system, in which hardware drivers can easily be added by third parties. And so is Windows: these two characteristics are thus not contradictory with commercial or proprietary policies and do not mean that the original sources must be unveiled.
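To make this definition concrete, here is a minimal sketch, in Python and with purely illustrative names (nothing here comes from the paper), of what such an open module interface could look like: the host publishes an abstract interface, and a third party can register a new module against it without ever seeing the host's sources.

```python
# Hypothetical sketch of an "open" module interface. All names are
# illustrative assumptions, not an actual architecture's API.
from abc import ABC, abstractmethod


class RobotModule(ABC):
    """Public interface a third-party module must implement."""

    @abstractmethod
    def name(self) -> str: ...

    @abstractmethod
    def step(self, inputs: dict) -> dict:
        """Consume sensor inputs, produce outputs for one control cycle."""


class Host:
    """The architecture core: it only knows the public interface."""

    def __init__(self) -> None:
        self.modules: list[RobotModule] = []

    def register(self, module: RobotModule) -> None:
        # Modules unknown at release time can be plugged in a posteriori.
        self.modules.append(module)

    def cycle(self, inputs: dict) -> list[dict]:
        return [m.step(inputs) for m in self.modules]


# A module written by someone other than the original developer:
class ObstacleStop(RobotModule):
    def name(self) -> str:
        return "obstacle-stop"

    def step(self, inputs: dict) -> dict:
        return {"speed": 0.0} if inputs.get("obstacle") else {}
```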

Recent articles on architectures often insist on their modularity. This must not be dismissed as a mere marketing announcement: software that satisfies this property does offer many benefits. First of all, since it can receive all sorts of modules, provided that the latter respect the defined interfaces, it can potentially attract many robotics actors, which eases its adoption. The direct corollary of a widespread architecture is to provide its user community with a common framework, enhancing the possibilities of sharing and exchanging competencies. It also makes it possible to validate competing approaches under the same conditions and environment, for more relevant comparisons.

Besides, modularity permits one to focus development on particular aspects only, while working algorithms can be reused. Hence, when conducting a specific piece of research, there is no longer any need to redevelop an entire system to test it: one can take advantage of an existing complete framework in which to integrate and evaluate one's own work.

But the benefits are not limited to these. Indeed, the vehicle (more generally, the platform) in which the modular framework is embedded will allow fast integration of different modules or their easy replacement. This is interesting for all users: laboratories benefit from the flexibility of such structures, and manufacturers can easily provide updates or new functionalities to their clients. Finally, end-users, such as the Department of Defence, can both take advantage of such open targets to support their research programmes and, depending on mission needs, obtain reconfigurable operational systems. Recently, the French Army confirmed that the MiniRoC concept of ground robots (see [10]), which presents such modular characteristics, was of great interest, as it would allow soldiers to carry only the modules strictly necessary for their actual task.

By extension, if on the one hand open architectures compel developers to conform to given software interfaces, on the other hand they make no assumptions regarding the underlying hardware. Such frameworks are, ideally, completely independent from platforms, processors or electronics. In other words, they offer the possibility to evolve as technologies progress: the framework remains up to date. For end-users especially, this is a guarantee of durability and, consequently, maintainers need to be trained on only one type of software. This durability is, however, achievable only by ensuring backward compatibility, so that modules running on older versions of the software can still be executed in the latest releases.
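As a rough illustration of this constraint, the following sketch (our own convention, not one from the paper) shows a version check under which a module built against an older minor revision of the interface still loads on a newer host, while a change of major revision breaks compatibility.

```python
# Minimal sketch (illustrative assumption) of a backward-compatibility
# rule: a module built for interface version (major, minor) loads on any
# later host that keeps the same major interface version.
def is_compatible(host: tuple[int, int], module_built_for: tuple[int, int]) -> bool:
    host_major, host_minor = host
    mod_major, mod_minor = module_built_for
    # Same major interface, and the host is at least as recent as the module.
    return host_major == mod_major and host_minor >= mod_minor


assert is_compatible((3, 2), (3, 0))      # old module, newer host: accepted
assert not is_compatible((4, 0), (3, 2))  # major revision changed: rejected
```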

For the sake of completeness, the same arguments that speak in favour of modular architectures on a single robot also hold in the multi-robot context, as modularity helps to gather a number of agents, not fixed in advance, within a collaborating team. This property is then known as extensibility or scalability.

Thus, from the above discussion, the main requirements for an architecture to be modular can be summarised as:
• permit normalised data exchange through the definition of public interfaces and common communication mechanisms;
• enable extensibility by making no assumptions about the underlying platform or candidate peripheral hardware;
• be flexible by making no assumptions about the missions that will be given to the robot, since new ones will inevitably be imagined during the system's life.

As will be discussed in section II, many architectures developed over the past twenty years or so satisfy these characteristics. Besides, whereas the nature of the framework (reactive, deliberative or hybrid) is often a central concern, we note here that modularity is independent of this issue and can be ensured in all cases.

II. PREVIOUS WORK

A. Evolution in the conception of architectures

In the second half of the 1980s, Brooks introduced an architecture that can perhaps be considered the first modular one [7]. Contrary to the software structures common at that time, which often used sequential processing to achieve an action, an organisation based on layers was proposed. Each layer can operate in parallel and corresponds to a particular task to be accomplished (see figure 1). Today's designs have inherited this point of view, as modules often implement specific behaviours. However, in Brooks' system, interactions between layers are based on the subsumption principle (upper layers can block and replace the outputs of lower ones) and not on truly standardised data exchange. This can lead to a complex organisation of links.

Fig. 1. Whereas traditional approaches presented a sequential organisation (upper diagram), Brooks proposed a structure in which layers, often corresponding to a given level of competence, execute in parallel and influence the global robot behaviour by subsuming the outputs of lower layers. Figures taken from [7].
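The subsumption principle can be summarised in a few lines of illustrative Python (the behaviours and thresholds are invented for the example): layers are ordered by level of competence, and a higher layer that produces an output suppresses everything below it.

```python
# Sketch of the subsumption principle: upper layers block and replace the
# outputs of lower ones. Behaviours and numbers are illustrative only.
def wander(sensors):      # lowest level of competence
    return {"v": 0.5, "w": 0.1}

def avoid(sensors):       # higher layer: subsumes wander near obstacles
    if sensors["range"] < 1.0:
        return {"v": 0.0, "w": 1.0}
    return None           # no opinion: lower layers pass through

LAYERS = [avoid, wander]  # ordered from highest to lowest priority

def control(sensors):
    for layer in LAYERS:
        out = layer(sensors)
        if out is not None:
            return out    # the first (highest) non-None output wins
    return {"v": 0.0, "w": 0.0}

print(control({"range": 0.4}))  # avoid suppresses wander -> turn in place
```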

Enhanced exchange schemes are present in most subsequent works. Modules become real "independent computational units", executing concurrently, that can use given communication services, imposed by the architecture, to exchange information. Compared to the subsumption architecture, the level of competence (i.e., the priority of the module) is not fixed a priori, since all the blocks are considered equal. The choice of the appropriate output can be made, depending on the implementation, by a global referee or by another module written by the system designer. DAMN [18] is a remarkable example of the first category. Each block, called a "behaviour" in Brooks' sense, issues votes in favour of specific candidate actions, which are then collected by an arbiter (figure 2). The final effective command is, schematically, a weighted sum of these votes.

Fig. 2. The DAMN architecture as described in [18].
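A minimal sketch of this voting scheme, with invented behaviours and weights (not DAMN's actual ones), could look as follows: each behaviour scores every candidate action, and the arbiter picks the action with the highest weighted sum of votes.

```python
# Sketch of DAMN-style arbitration: behaviours vote over a common set of
# candidate actions; the arbiter applies a weighted sum. Values invented.
CANDIDATE_TURN_RATES = [-1.0, -0.5, 0.0, 0.5, 1.0]

def follow_road(action):      # votes in [-1, 1] per candidate action
    return 1.0 if action == 0.0 else -0.2

def avoid_obstacle(action):
    return 1.0 if action >= 0.5 else -1.0

BEHAVIOURS = [(follow_road, 0.4), (avoid_obstacle, 0.6)]  # (votes, weight)

def arbitrate(candidates):
    scores = {a: sum(w * b(a) for b, w in BEHAVIOURS) for a in candidates}
    return max(scores, key=scores.get)  # highest weighted sum of votes wins

print(arbitrate(CANDIDATE_TURN_RATES))  # the obstacle behaviour dominates
```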

Another famous approach is the one developed at Georgia Tech by Arkin [5]. The principles are close to those of DAMN: behaviours, here called "motor schemas", provide their commands to a process that sums and normalizes them using the potential fields method. A slight difference is introduced, however, since a homeostatic control system is added: it can be thought of as a bus that collects state variables from the robot's internal sensors and broadcasts them to all of the motor schemas. The resulting monitoring information is used both to influence the internal parameters of the behaviours and their relative weights. The structure is synthesized in figure 3.

Fig. 3. AuRA principle. This figure shows both the fusion process between two motor schemas and the homeostatic control system that regulates the performance of the overall architecture.
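The fusion mechanism can be sketched as follows; the schemas, gains and field shapes are our illustrative choices, not AuRA's actual ones: each motor schema emits a velocity vector computed with a potential field, and the vectors are summed with gains and normalised.

```python
# Sketch of motor-schema fusion with the potential fields method.
# Gains and field shapes are illustrative assumptions.
import math

def goal_attraction(pos, goal):
    dx, dy = goal[0] - pos[0], goal[1] - pos[1]
    d = math.hypot(dx, dy) or 1.0
    return (dx / d, dy / d)                 # unit vector towards the goal

def obstacle_repulsion(pos, obstacle, radius=2.0):
    dx, dy = pos[0] - obstacle[0], pos[1] - obstacle[1]
    d = math.hypot(dx, dy)
    if d == 0.0 or d > radius:
        return (0.0, 0.0)
    gain = (radius - d) / radius            # stronger when closer
    return (gain * dx / d, gain * dy / d)

def fuse(vectors, gains):
    x = sum(g * v[0] for v, g in zip(vectors, gains))
    y = sum(g * v[1] for v, g in zip(vectors, gains))
    n = math.hypot(x, y) or 1.0
    return (x / n, y / n)                   # summed then normalised command

pos, goal, obs = (0.0, 0.0), (10.0, 0.0), (1.0, 0.5)
print(fuse([goal_attraction(pos, goal), obstacle_repulsion(pos, obs)], [1.0, 2.0]))
```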

However, in theory, modularity does not apply only to behaviours (i.e., reactive modules) but also to the deliberative capabilities of robots. While the complete AuRA framework already integrates a planning component (as well as a layer responsible for user-robot interaction that can convey human decisions), the reasoning capacities are fixed once and for all. But there exist practical cases in which these capacities are more flexible.


An army is a typical example of an efficient organisation in which deliberative agents are not gathered in a centralized structure: each soldier is not only able to act but also to learn, acquire experience, and use complex reasoning to succeed in his elementary task. And a robot architecture is, in some way, comparable: in both cases, a higher objective (the goal of the robot or of the army) can be achieved by getting elementary agents (modules or men) to work in a coordinated fashion (i.e., respecting the rules imposed by the architecture or by the hierarchy). Such comparisons naturally lead to new robot frameworks in which deliberative capacities, and not only reactive behaviours, are also designed in a modular manner.

Albus' research on 4-D/RCS¹ has been conducted, in part, on the basis of these reflections and has led to a node-oriented architecture. Each elementary component, called a node, integrates sensory processing and reactive parts, like "classical" modules, but can also simultaneously gather modelling, learning and reasoning capabilities, as shown in figure 4. Each node is then arranged within a global hierarchy modelled on military structure. A natural way of implementing this architecture is to grant more deliberative responsibilities to nodes placed high in the hierarchy, whereas lower nodes are rather dedicated to information processing. Furthermore, by construction, this framework is multi-robot-ready: from a macroscopic point of view, a robot can itself be considered a node, and teams of robots can hence be constituted the same way nodes are structured inside each robot. More detailed explanations can be found in [2].

Fig. 4. A typical 4-D/RCS node.

¹ 4-D/RCS is the architecture embedded on the Demo III Experimental Unmanned Vehicle (XUV), a project supported by the American DoD. This probably explains the origin of the parallel between robot architecture and military organisation.
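The recursive character of this organisation can be suggested with a short sketch (class and method names are ours, not from 4-D/RCS): a node decomposes a goal for its subordinates, and a robot or a whole team is just another node.

```python
# Illustrative sketch of a recursive node hierarchy: a whole robot, or a
# team, is itself just another node. Names and decomposition are invented.
class Node:
    def __init__(self, name, children=()):
        self.name = name
        self.children = list(children)  # subordinate nodes in the hierarchy

    def plan(self, goal):
        # Higher nodes decompose the goal; leaf nodes act on it directly.
        if not self.children:
            return [(self.name, goal)]
        tasks = []
        for i, child in enumerate(self.children):
            tasks.extend(child.plan(f"{goal}/part-{i}"))
        return tasks

# A team is structured the same way nodes are structured inside each robot:
robot_a = Node("robot-a", [Node("mobility-a"), Node("payload-a")])
robot_b = Node("robot-b", [Node("mobility-b")])
team = Node("section", [robot_a, robot_b])
print(team.plan("recon-area"))
```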

The emergence of modern programming methods, i.e. object-oriented (OO) approaches, contributed hugely to easing the implementation of the above concepts. Inheritance and related mechanisms (polymorphism, method overriding) are directly useful for deriving efficient modules and permit robots' capacities to be extended easily, whereas the encapsulation of properties and methods enables objects to expose only the useful interfaces. But these approaches did not only prove interesting for programmers: they also inspired the conception of recent architectures. The most appealing one is probably CLARAty, in which the whole functional layer is conceived as a hierarchy of objects. A simple example is shown in figure 5; very detailed explanations, including coding considerations, can be found in [19]. One strength of OO conception is that it provides all the mechanisms needed to build proper extensible interfaces, without imposing any limitations on the way objects are internally implemented. COSARC is a very recent example of this trend: it uses an extension of OO methods (the component-based approach) to define four types of components whose internal structure is described with Petri nets; please refer to [4] for details. This is a relevant illustration of the ability of object-based languages both to ease the development of open frameworks satisfying modularity requirements and to benefit from any other recognized approach for modelling component behaviour, here Petri nets.

Fig. 5. This example illustrates the object-oriented design of the functional layer of CLARAty. Note that, like all modern frameworks, it also provides a decision layer, not shown here. Although most other architectures split the executive and planning levels, these functionalities are here gathered within the same layer, for consistency reasons. See [20] for a discussion of this point.
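The following sketch is inspired by this style of design, but is in no way CLARAty's actual code: it shows how inheritance and method overriding let specialised hardware reuse a generic interface, so callers see one stable public interface whatever the platform.

```python
# Illustrative OO sketch (our own, not CLARAty's code): generic behaviour
# lives in a base class; specialised hardware overrides only what differs.
class Locomotor:
    """Generic locomotion interface exposed to the rest of the system."""

    def move_to(self, x: float, y: float) -> None:
        path = self._plan(x, y)
        self._drive(path)

    def _plan(self, x, y):
        return [(x, y)]                  # trivial default planning

    def _drive(self, path):
        raise NotImplementedError        # hardware-specific part


class SkidSteerLocomotor(Locomotor):
    def _drive(self, path):
        for x, y in path:
            print(f"skid-steer wheels towards ({x}, {y})")


class LeggedLocomotor(Locomotor):
    def _plan(self, x, y):               # overriding extends the capability
        return [(x / 2, y / 2), (x, y)]  # insert an intermediate footstep

    def _drive(self, path):
        for x, y in path:
            print(f"step towards ({x}, {y})")


for robot in (SkidSteerLocomotor(), LeggedLocomotor()):
    robot.move_to(3.0, 4.0)              # same call, different platforms
```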

As noted above, the same requirements of modularity and extensibility are needed in the multi-robot context. Here, the main constraint is to enable the addition and removal of agents within the team. Even if some architectures, like 4-D/RCS, inherently have the capacity to manage several robots, they should also tackle the fault-tolerance problem: the loss of a robot, or of the communication channel, during the mission must not cause the failure of the whole team. Most of the time, specific coordination schemes are thus added to the upper layers of the architectures and run all the processes needed to manage a team (especially scalability mechanisms, communication strategies and task allocation).

Among all the approaches proposed, let us quote ALLIANCE and M+. The first mainly focuses on determining an efficient fault-tolerant scheme; details can be found in [17]. The second [6] offers an alternative based on negotiations between the robots. It has been successfully integrated into the LAAS-CNRS framework [1], which hence provides a complete open architecture gathering all functionalities, from reactive behaviours to decisional capacities, for a single robot or a team of agents.

Besides, none of the works quoted above remains a purely theoretical concept. Many of them have been ported to vehicles and have proved to be relevant candidates for real applications.

B. Introducing Technology Readiness Levels

Technology Readiness Levels (TRLs) were initially created by NASA in 1995 and were officially adopted by the American Department of Defense in 2001. This reference scale aims at assessing the maturity of technologies so as to reduce the risks related to acquisition programmes. It consists of nine levels of increasing maturity that apply to individual technologies (not to entire systems). The TRL grid, copied from [8], is given in table I.

TABLE I
TECHNOLOGY READINESS LEVELS

Low maturity
1. Basic principles of technology observed & reported
2. Technology concept and/or application formulated
3. Analytical and laboratory studies to validate analytical predictions

Medium maturity
4. Component and/or basic sub-system technology valid in lab environment
5. Component and/or basic sub-system technology valid in relevant environment
6. System/sub-system technology model or prototype demo in relevant environment

High maturity
7. System technology prototype demo in an operational environment
8. System technology qualified through test & demonstration
9. System technology 'qualified' through successful mission operations

Current results and experiments show that levels 5-6 can now be reached. The 4-D/RCS framework has thus been ported to the American XUV (Experimental Unmanned Vehicle), and its ability to host functional modules has been demonstrated. Besides, Albus' report [3] supports our evaluation of the maturity of this architecture when asserting that "the tests are designed to determine whether the Demo III XUVs have achieved technology readiness level six".

In France, the same encouraging conclusions can be drawn. Throughout the SYRANO² project, industry has shown its ability to deploy a modular architecture (although a proprietary one) on a prototype tank, and its capacity to incrementally add new functionalities as they become available. Demonstrations with this vehicle took place at the Mourmelon military camp, in a quasi-operational context, as related in [16]. Because only teleoperated functions were used, not all the possibilities of the architecture were extensively tested, and these demonstrations "only" correspond to TRL 5.

² SYRANO is a teleoperated prototype vehicle, with simple autonomous behaviours, for reconnaissance and detection of potential adverse targets. It is intended to evolve in open areas.

However, a vehicle such as SYRANO is TRL-6-ready since, in theory, advanced autonomous modules can be added in the same way. Moreover, results achieved by several laboratories support this point of view. For instance, experiments conducted by LAAS-CNRS in autonomous navigation demonstrated the ability of a robot to automatically explore an unknown area. During these tests, all functionalities were used: planning, supervision, management of autonomous modules, as well as human-robot interfaces (at least to send mission reports and receive high-level orders from the operator). The multi-robot scheme is itself under 'validation' since, in some current projects, additional aerial information is provided by a blimp UAV to assist the robot in its navigation task.

Since 2004, this laboratory has also been running a robot equipped with the same architecture [1] at the Cité de l'Espace in Toulouse. This robot serves as a guide for visitors, interacting with them to collect their questions and then leading them to the desired spot. Since these people are obviously neither roboticists nor technicians, the environment of the robot can be considered operational. This experiment perhaps cannot claim TRL 7, which would require very high reliability, but it definitely proves that this level is now within reach.

Of course, qualitatively judging the architecture on an experiment that involves the whole system is quite difficult, and its exact contribution to the overall performance is hard to isolate. But it at least means that the services expected from the software are functional and that internal communications between framework components work. When, moreover, the architecture has been ported to heterogeneous systems (the SYRANO architecture was previously integrated on a robotised jeep, called DARDS, and is being reused for a future demining system), it can be argued that modularity and portability have been validated. In conclusion, based on the experiments quoted above, stating TRL 6 as the currently achieved maturity level seems reasonable.

Complementary methods also exist to assess the maturity of a whole system: System Readiness Levels. A brief review of SRLs ([9] and table II) confirms that these vehicles have reached levels 6-7, meaning that demonstrations have been conducted successfully in representative environments.


TABLE II
SYSTEM READINESS LEVELS

1. User requirements defined
2. System requirements defined
3. Architectural design refined
4. Detailed design is nominally complete
5. Sub-systems verification in laboratory environment
6. Sub-system verification in representative integration environment
7. System prototype demonstration in a representative integration environment
8. Pre-production system completed and demonstrated in a representative operational environment
9. System proven through successful representative mission profile

Nevertheless, the previous section raises a major concern, as it shows that numerous architectures have been developed concurrently: while each one, separately, does satisfy the modularity requirements, components developed for one of them cannot be ported to another. A higher level of specification is still missing that would ensure interoperability, a property of real importance, especially in a military context. Consequently, as upcoming programmes cannot systematically rely on a standard, they only correspond to SRL 2. It is thus quite urgent to focus research effort on this point, as will be discussed in section III. We will first examine a relevant American example, JAUS, then present some French research programmes that tackle the issue.

III. TOWARDS STANDARDISATION

A. American proposals

1) History: Over the last 20 years, a large number of unmanned systems have been developed by US companies in response to the American DoD, but most of them are task-specific and non-interoperable. Therefore, in 1994, JAUGS, an effort to avoid the pitfalls of "eaches" in an expanding domain, was initiated. It was still limited to ground systems (the "G" actually stands for "ground").

In 1998, the OUSD (Office of the Under Secretary of Defense) chartered a working group, consisting of members from government, industry and academia, to develop an architecture for unmanned systems. It set itself five targets:
• support all classes of unmanned systems;
• advocate rapid technology insertion;
• provide interoperable Operating Control Units (OCUs);
• provide interchangeable/interoperable payloads;
• provide interoperable unmanned systems.

The resulting Joint Architecture for Unmanned Systems (JAUS), available for use by the defence, academic and commercial sectors, is an upper-level design for the interfaces within the domain of unmanned vehicles. It aims at being independent from technology, computer hardware, operator use and vehicle platforms, and isolated from the mission. It is a component-based, message-passing framework that specifies data formats and methods of communication between the computing entities of unmanned systems. Its final goal is to reduce development and integration times and ownership costs, and to enable an expanded range of vendors by providing a framework for technology insertion.

2) JAUS specifications: To date, two documents describe the JAUS architecture: the Reference Architecture specification (RA) [13] and the Domain Model (DM) [12].

a) Domain model: The analysis conducted in the DM on the five targets above, together with a study of the constraints of military contracts, led to five main requirements on messages within the architecture: they need to be independent from (1) the vehicle platform, (2) the mission, (3) computing resources, (4) technology and (5) operator use. This document is also a tool with which customers/users can define both near- and far-term operational requirements for unmanned systems, based on mission needs; and, by defining far-term capabilities, the JAUS Domain Model can actually be considered a "road map" helping developers focus research and design efforts on supporting these future requirements.

In a word, the domain model is a common language containing three distinct elements: functional capabilities (FC), informational capabilities (IC) and device groups (DG). The first, all documented in the DM so as to ease the dialogue between users and developers, are sets of capabilities with similar functional purposes. Eleven categories are identified that describe the abilities of an unmanned system: command, manœuvre, navigation, communication, payload, safety, security, resource management, maintenance, training and automatic configuration.

and automatic configuration. In parallel with the functional description,<br />

In<strong>for</strong>mational Capabilities provide a representation<br />

Fig. 6. JAUS domain model representation.


First National Workshop on Control Architectures of Robots - April 6,7 2006 - Montpellier<br />

of what unmanned systems (should) know. They are groupings<br />

of similar types of in<strong>for</strong>mation. Five categories are depicted in<br />

DM: vehicle status, world model, library, logistics, time/date.<br />

Finally, device groups are a classification of sensors and/or<br />

effectors that are used <strong>for</strong> similar functions. Functional and<br />

in<strong>for</strong>mational capabilities may interface with device groups,<br />

but the JAUS domain model does not define these interfaces.<br />

Figure 6 summarises the DM representation.<br />

b) Reference architecture specification: In the development cycle, the specification of capabilities described in the DM always precedes their appearance in the RA, whose main purpose is to detail all the functions and messages that shall be employed to design new components. All currently defined messages, as well as the rules that govern messaging, are also laid out in this second document. It is worth noting that messaging is the sole accepted method of communication between components.

The RA specification comprises three parts. The first, the architecture framework, provides a description of the structure of JAUS-based systems. Unmanned systems are seen as a hierarchical topology, shown in figure 7. A system is a logical grouping of one or more subsystems, which are independent and distinct units. A node, in such a topology, is a 'black box' containing all the hardware and software necessary to provide a complete service, for example a mobility or payload controller. A component is the lowest level of decomposition in the JAUS hierarchy: it is an executable task or process. All the components that may be found within an unmanned system are listed in this first part of the RA.

Fig. 7. The reference architecture from JAUS.

The topology defined above is very flexible, since the only requirement is that a subsystem be composed of component software distributed across one or more nodes. Interoperability between intelligent systems is achieved by defining functional components with supported messages. Therefore, the only constraint for being JAUS-compliant is that all messages passing between components, over networks or via airwaves, shall be JAUS-compatible messages. No other rules are imposed on system engineers. Besides, messages coming from and/or sent to non-JAUS components can use their own protocol.

The definition and format of those messages are the subject of the second and third parts of the RA. In the second, message definition, the different classes of messages (command, query, inform, event setup, event notification) and the message composition (classically, a header and data fields) are defined. The messaging protocol is also described, with the routing strategy, the way to send large data messages, the way to establish a connection between two components, as well as various messaging rules. The third and final part of the RA, the Message Set, presents the details of command code usage for each message (the command code is a piece of information included in the message header).
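To fix ideas, here is a hypothetical sketch of such a message in Python. The field names, sizes and values are our guesses for illustration only; the normative formats are those defined in the RA [13]. The address structure simply mirrors the subsystem/node/component topology described above.

```python
# Hypothetical sketch of the "header + data fields" message composition
# described above. Every name and value here is an illustrative assumption.
from dataclasses import dataclass
from enum import Enum


class MessageClass(Enum):
    COMMAND = 1
    QUERY = 2
    INFORM = 3
    EVENT_SETUP = 4
    EVENT_NOTIFICATION = 5


@dataclass
class Address:
    subsystem: int   # independent, distinct unit (e.g. one vehicle)
    node: int        # black box providing a complete service
    component: int   # executable task or process


@dataclass
class Message:
    command_code: int          # identifies the message, carried in the header
    msg_class: MessageClass
    source: Address
    destination: Address
    data: bytes                # message-specific data fields


stop = Message(
    command_code=0x0002,       # illustrative value only
    msg_class=MessageClass.COMMAND,
    source=Address(1, 1, 1),
    destination=Address(1, 2, 33),
    data=b"",
)
```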

c) Other documents: Additional documents support the JAUS standards: a Document Control Plan (DCP) and the Standard Operating Procedures (SOP) [14]. The first defines the process for updating the JAUS DM and RA, whereas the second establishes the charter and organisation of the JAUS working group. A compliance plan [11], a transport plan and a user's handbook are under development.

3) Conclusion on JAUS: JAUS is not an architecture as defined at the beginning of this article. It is rather a process to ease communication between users and developers and to standardise data exchanges within the software embedded in an autonomous system. However, JAUS is now mandated for use by all of the programmes in the Joint Robotics Program (JRP) [15], and numerous American manufacturers are beginning to follow its requirements. For example, the EOD (explosive ordnance disposal) Man-Transportable Robotic System (MTRS), PackBot, produced by iRobot, is JAUS-compliant. Moreover, JAUS is now recognized as a technical committee within the SAE Aerospace Council's Aviation Systems Division (ASD), under the name AS-4 Unmanned Systems.

B. French government effort

For several years now, the French DoD has been preparing a number of studies to help standards emerge, so that future architectures embedded in military demonstrators will be ready for interoperability needs. In the ground robotics field, two major research programmes have been, or are about to be, launched.

At the end of 2005, the "Démonstrateur BOA" programme was awarded to an industrial group (TGS: Thales, GIAT Industries, SAGEM). Roughly speaking, BOA is the equivalent of the American Network Centric Warfare. It aims at proposing new organisations for ground forces (including aerial devices operating in their support) with a high degree of interoperability between the different units, through advanced communication means. UGVs and mini-UAVs will naturally be part of this structure, since they represent a privileged way to retrieve information, even during high-intensity actions, without exposing soldiers, enabling new combat strategies such as indirect firing (see figure 8). The missions they should respond to are manifold and heterogeneous, from urban fighting to logistics or reconnaissance. Hence, the challenges are:


• integrating unmanned vehicle communications in an already very constrained electromagnetic environment;
• enabling information sharing between robots and with human units;
• obtaining highly reconfigurable robots/UAVs, so that they can quickly be adapted to the actual mission.

Fig. 8. Official illustration of the BOA concept, showing candidate systems, the data exchanges between them and the missions consequently achievable.

The third point can only be achieved through the use of a modular architecture. Moreover, since BOA is still a prospective concept, all possible missions have probably not been exhaustively identified; in addition, some specific functions dealing with the autonomous capacities of ground vehicles will be provided by other programmes, whose actors are not necessarily involved in BOA. These two supplementary aspects imply that the software framework also be open, to allow the desired extensibility. But the second point above is perhaps the most critical: the diversity of information sources, and the number of actors that will access them, strongly encourages the adoption of standards for data exchanges.

As a consequence, the DoD insisted on the architecture part of the robots and firmly required that modularity be achieved at all levels (software as well as hardware) and that interfaces be open and communicable to third parties. The interoperability constraints were tackled by explicitly asking for the adoption of a standard for data exchanges: JAUS, while obviously not imposed, was quoted as an acceptable solution, so as to clearly illustrate the DoD's expectations. Finally, it is worth noting that the information provided by unmanned vehicles is often of the same nature (localisation, detection or intelligence data) as that conveyed by Battlefield Management Systems, which will be of paramount importance in BOA. Robots are thus natural candidates to feed these systems, and analysing the data structures used by the latter can also be a relevant source of inspiration.

Although all the previous examples are taken from the ground robotics field, the open modular architecture is actually an important subject for near-future UAV systems. The French DoD is currently conducting two studies in parallel. The aim of both is the "definition of an open, standardised, modular and evolutionary architecture for a generic and interoperable UAV system".

The problem of a UAV system is large and very complex because of the multiple interfaces, including onboard (aircraft, payloads, airworthiness, air traffic management...), ground (command, control and exploitation station, recovery, launch...) and system (communications, certification, subsystem interfaces, real-time synchronization, critical software...) constraints. For the time being, the studies cover the three different categories of UAV systems: tactical, MALE (Medium Altitude Long Endurance) and HALE (High Altitude Long Endurance). The main technical axes of the studies can be summed up as follows:
• define the best configurable and generic architecture;
• improve the system's performance by exploiting new hardware or software technologies;
• allow interchangeability of payloads following the "plug and play" concept;
• obtain a secure, certifiable and durable architecture.

An "open, standardized, modular and evolutionary" architecture is the challenge for future French UAV system programmes. Thanks to this, interoperability will exist throughout the UAV system's whole life cycle (15 to 20 years).

However, one can still argue that these efforts towards standardisation are limited either to ground or to aerial vehicles, and that a federative framework is still lacking. This is precisely the goal of the OISAU research programme. Initiated by the ground robotics experts of the DoD, this study on "open and interoperable autonomous systems" makes no assumptions concerning the type of the candidate platforms, which can be aerial, ground or even marine vehicles. For the first time, it explicitly gathers within a single coherent programme all the requirements presented above, asking that the resulting architecture enable:
• platform and hardware independence;
• cost reduction thanks to standardisation (thus allowing acquisition and maintenance savings);
• easy integration and replacement of functional modules;
• the ability to carry out such integrations or replacements incrementally, to allow systems to evolve.

This programme is probably of paramount importance in the effort to obtain operational robots that can be introduced into the armed forces.

IV. CONCLUSION

Indeed, many reasons, technical, commercial or practical, lead to increasing the modularity of robot architectures. Over the past ten years, a number of solutions have been proposed, and open extensible frameworks are now available to robot developers and users. Some of these concepts have even been successfully ported to real systems, demonstrating their relevance and maturity.


However, the robotics community still lacks a truly federative standard to make unmanned systems interoperable. Following the American JAUS example, and since interoperability is of crucial importance for the introduction of robots within the armed forces, the French DoD has decided to support the development of new normative frameworks. This effort concerns all fields of robotics, from ground to aerial vehicles, and is at the heart of current research programmes. A major objective of these works is to increase technology and system readiness levels, i.e. to obtain more mature and reliable robots.

But if modularity and standardisation are necessary conditions for architectures to meet all the requirements discussed above, they are not sufficient. To date, robots are not completely autonomous systems and, even in the future, at least supervisory control will be retained. Work therefore remains to determine what role humans should keep and what level of autonomy will be granted to robots. The conclusions of such work will impact the exchanges, data structures and interfaces needed between the framework components. That is, the basic characteristics that enable modularity will deeply depend on the role of the human in unmanned systems.

ACKNOWLEDGMENT

We would like to thank Agnès Lechevallier, French DoD, for her contribution to the UAV part. Special thanks go to Jérôme Lemaire, head of our department, for his help and valuable advice on the content of this paper.

Also note that the research works quoted in this paper would probably deserve much more attention than could be granted here. They have all brought relevant results, if not breakthroughs, to the field of control architectures. Please refer to the original articles for details and in-depth discussions of their specificities.

REFERENCES

[1] R. Alami, R. Chatila, S. Fleury, M. Ghallab and F. Ingrand, An Architecture for Autonomy, International Journal of Robotics Research, 17(4), April 1998.
[2] J.S. Albus, 4-D/RCS: A Reference Model Architecture for Demo III, NISTIR 5994, National Institute of Standards and Technology, Gaithersburg, MD, March 1997.
[3] J.S. Albus, Metrics and Performance Measures for Intelligent Unmanned Ground Vehicles, in proceedings of the Performance Metrics for Intelligent Systems (PerMIS) workshop, 2002.
[4] D. Andreu and R. Passama, COSARC: COmponent based Software Architecture of Robot Controllers, in proceedings of the CAR'06 workshop, Montpellier, April 2006.
[5] R. Arkin and T. Balch, AuRA: Principles and practice in review, Journal of Experimental and Theoretical Artificial Intelligence, vol. 9, no. 2-3, pp. 175-188, 1997.
[6] S.C. Botelho and R. Alami, M+: a scheme for multi-robot cooperation through negotiated task allocation and achievement, in proceedings of the IEEE International Conference on Robotics and Automation, vol. 2, pp. 1234-1239, Michigan, May 1999.
[7] R.A. Brooks, A robust layered control system for a mobile robot, IEEE Journal of Robotics and Automation, vol. RA-2, no. 1, pp. 14-23, April 1986.
[8] Future Business Group, Technology Readiness Levels (TRLs) guidance, FBG/36/10, January 11th 2005.
[9] Future Business Group, System maturity assessment using System Readiness Levels guidance, FBG/43/01/01, v3.2, February 16th 2006.
[10] L. Gillet and Y.L. Lunel, Des robots pour assister le combattant débarqué en zone urbaine : les démonstrateurs MiniRoC, L'armement, no. 85, March 2004.
[11] JAUS, The Joint Architecture for Unmanned Systems, Compliance Specification, v. 1.1, March 10th 2005.
[12] JAUS, The Joint Architecture for Unmanned Systems, Domain Model, volume I, v. 3.2, March 10th 2005.
[13] JAUS, The Joint Architecture for Unmanned Systems, Reference Architecture Specification, volume II, parts 1-3, v. 3.2, August 13th 2004.
[14] JAUS, The Joint Architecture for Unmanned Systems, Standard Operating Procedures, v. 1.5, October 10th 2002.
[15] JRP, Joint Robotics Program, Master Plan, FY2005.
[16] J.G. Morillon, O. Lecointe, J.-P. Quin, M. Tissedre, C. Lewandowski, T. Gauthier, F. Le Gusquet and F. Useo, SYRANO: a ground robotic system for target acquisition and neutralization, in proceedings of SPIE Aerosense, Unmanned Ground Vehicle Technology V, vol. 5083, pp. 38-51, September 2003.
[17] L.E. Parker, ALLIANCE: An Architecture for Fault Tolerant Multi-Robot Cooperation, IEEE Transactions on Robotics and Automation, vol. 14, no. 2, pp. 220-240, 1998.
[18] J.K. Rosenblatt, DAMN: A Distributed Architecture for Mobile Navigation, in proceedings of the 1995 AAAI Spring Symposium on Lessons Learned from Implemented Software Architectures for Physical Agents, H. Hexmoor & D. Kortenkamp (Eds.), Menlo Park, CA: AAAI Press.
[19] R. Volpe, I.A.D. Nesnas, T. Estlin, D. Mutz, R. Petras and H. Das, CLARAty: Coupled Layer Architecture for Robotic Autonomy, technical report, Jet Propulsion Laboratory, December 2000.
[20] R. Volpe, I.A.D. Nesnas, T. Estlin, D. Mutz, R. Petras and H. Das, The CLARAty architecture for robotic autonomy, in proceedings of the IEEE Aerospace Conference, Montana, March 2001.

Integrating human / robot interaction into robot control architectures for defense applications

Delphine Dufourd (a) and André Dalgalarrondo (b)

(a) DGA / Service des Programmes d'Armement Terrestre, 10, Place Georges Clemenceau, BP 19, 92211 Saint-Cloud Cedex, France
(b) DGA / Centre d'Essais en Vol, base de Cazaux, BP 416, 33260 La Teste, France

Further author information: D.D.: E-mail: delphine.dufourd@dga.defense.gouv.fr, phone: +33 (0)1 47 71 45 86; A.D.: E-mail: andre.dalgalarrondo@dga.defense.gouv.fr, phone: +33 (0)5 57 15 47 62.
ABSTRACT

In the near future, military robots are expected to take part in various kinds of missions in order to assist mounted as well as dismounted soldiers: reconnaissance, surveillance, supply delivery, mine clearance, retrieval of injured people, etc. However, operating such systems is still a stressful and demanding task, especially when the teleoperator is not sheltered in an armoured platform. Therefore, effective man / machine interactions and their integration into control architectures appear as a key capability in order to obtain efficient semi-autonomous systems. This article first explains human / robot interaction (HRI) needs and constraints in several operational situations. Then it describes existing collaboration paradigms between humans and robots and the corresponding control architectures which have been considered for defense robotics applications, including behavior-based teleoperation, cooperative control or sliding autonomy. In particular, it presents our work concerning the HARPIC control architecture and the related adjustable autonomy system. Finally, it proposes some perspectives concerning more advanced co-operation schemes between humans and robots and raises more general issues about insertion and a possible standardization of HRI within robot software architectures.

Keywords: Ground robotics, human robot interaction, control architectures, defense applications, reconnaissance robot.

1. INTRODUCTION

Given recent advances in robotics technologies, unmanned ground systems appear as a promising opportunity for defense applications. In France, teleoperated vehicles (e.g. AMX30 B2DT) will soon be used for mine breaching operations. In current advanced studies launched by the DGA (Délégation Générale pour l'Armement, i.e. the French Defense Procurement Agency), such as PEA Mini-RoC (Programme d'Etudes Amont Mini-Robots de Choc) or PEA Démonstrateur BOA (Bulle Opérationnelle Aéroterrestre), robotic systems are considered for military operations in urban terrain as well as in open environments, in conjunction with mounted or dismounted soldiers. To fulfill the various missions envisioned for unmanned platforms, the needs for human / robot interaction (HRI) increase, so as to benefit from the complementary capacities of both the man and the machine.

In section 2, we list several missions considered for military land robots and explain the related HRI issues. In section 3, we present an overview of existing HRI paradigms and describe a few existing applications in the field of defense robotics. In section 4, we focus on an example concerning reconnaissance missions in urban warfare and present the work on the HARPIC architecture performed at the Centre d'Expertise Parisien of the DGA. Finally, in section 5, we raise more general questions about the introduction of HRI into control architectures and about a potential standardization of these interactions, before concluding in section 6.

2. OPERATIONAL CONTEXT

2.1. Missions for military robots

Unmanned ground systems can now be considered as key technologies for future military operations. Firstly, they remove humans from harm's way by reducing their exposure on the battlefield. Moreover, they can provide various services and spare humans dull, dirty or difficult tasks such as surveillance or heavy load carrying. More generally, in the future, they should be used as force multipliers and risk reducers in land operations.

Figure 1. A prospective view of future French network centric warfare.

As a result, the range of missions for unmanned ground systems in the defense and security field is widening. Among them, let us mention reconnaissance and scout missions, surveillance, target acquisition and illumination, supply delivery, mule applications, obstacle clearance, remote manipulation, demining, explosive ordnance disposal, retrieval of injured people, communication relays, etc. These missions include different time steps, from planning and deployment, mission fulfilment with human supervision and possible re-planning, to redeployment and maintenance, as well as training sessions. They take place on various terrains, ranging from structured urban environments to destructured (destroyed) places and ill-structured or unstructured open areas. Moreover, defense operations present a few specificities compared to civil applications. Unmanned vehicles will often have to deal with challenging ill-known, unpredictable, changing and hostile environments. In urban warfare for instance, robots will face many uncertainties and will have to intermix with friends, foes and bystanders. As a partner or as an extension of a warfighter, a robot must also conform to different constraints and doctrines: it must neither increase the risk for the team – for instance by cancelling the "surprise effect" or by triggering traps – nor impose an important workload on the human supervisor. Moreover, some decisions such as engaging a target cannot be delegated to a robot: it is crucial that man stays in the loop in such situations. Finally, robots will be inserted into large organizations (systems of systems designed for network-centric warfare – cf. Fig. 1) and will have to co-operate with other manned or unmanned entities, following a command hierarchy. Therefore, efficient and scalable man / machine collaboration appears as a necessity.

2.2. Why introduce HRI into military operations?

On the one hand, it is desirable that robots operate as autonomously as possible, in order to reduce the workload of the operator and to be robust to communication failures with the control unit. For example, rescue operations at the World Trade Center performed using the robots developed for the DARPA (U.S. Defense Advanced Research Projects Agency) TMR (Tactical Mobile Robotics) program showed that it was difficult for a single operator to ensure both secure navigation for the robot and reliable people detection [1].

On the other hand, technology is not mature enough to produce autonomous robots which could handle the various military situations on their own. Basic capacities that have been demonstrated on many robots include obstacle detection and avoidance, waypoint and goal-oriented navigation, detection of people, detection of threats, information retrieval (image, sound), localization and map building, exploration and coordination with other robots. Most of them can be implemented on a real robot with good reliability in a reasonably difficult environment, but this is not enough to fulfill military requirements. For instance, so far, no robot has been able to move fully autonomously in the most difficult and unstructured arenas of the NIST (US National Institute of Standards and Technology) in the context of search and rescue competitions [2]. Moreover, some high-level doctrines and constraints (such as discretion) may be difficult to describe and formalize for autonomous systems.

Finally, humans and robots have complementary capabilities [3]: humans are usually superior in perception (especially in environment understanding), in knowledge management and in decision making, while robots can be better than humans in quantitative low-level perception, in precise metric localization and in displacement in confined and cluttered spaces. Above all, robots can stand dull, dirty and dangerous jobs. Therefore, it seems that today's best solutions should rely on a good collaboration between humans and robots, e.g. where robots act as autonomously as possible but remain under human supervision.

2.3. What kind of HRI is needed for military operations?

The modalities of human / robot interaction may vary depending on the situation, the mission or the robotic platform. For instance, during many demining operations, the teleoperation of the robot can be performed with a direct view of the unmanned vehicle. However, the teleoperation of a tracked vehicle like the French demonstrator SYRANO (Système Robotisé d'Acquisition et de Neutralisation d'Objectifs) is performed with an Operator Control Unit (OCU) set up in a shelter, which can be beyond line of sight as long as the transmission with the robot is ensured. In urban environments, the dismounted soldier in charge of the robot may need to protect himself and to operate the robot under stressful conditions, either in close proximity or at a distance varying from a few meters to hundreds of meters. Concerning the effectors and the functions controlled by the human / robot team, some missions may imply precise remote manipulation with a robotic arm or the use of specific payloads (pan / tilt observation sensors, diversion equipment such as smoke-producing devices...), while others would only include effectors for the navigation of the platform. Finally, some missions may be performed in a multi-robot and multi-entity context, using either homogeneous or heterogeneous vehicles, with issues ranging from authority sharing to multi-robot cooperation and insertion into network-centric warfare organizations. Therefore, defining general HRI systems for robots in defense applications may be quite challenging.

3. EXISTING HUMAN / ROBOT INTERACTION SYSTEMS

3.1. Overview of existing paradigms for HRI

Many paradigms have been developed for human robot interaction. They allow various levels of interaction, with different levels of autonomy and dependence. To introduce our work concerning the HARPIC architecture as well as related work in the defense field, we briefly list below the main paradigms, from the least autonomous (teleoperation) to the most advanced (mixed-initiative), where humans and robots co-operate to determine their goals and strategies.

Teleoperation is the most basic and mature mode. In this mode the operator has full control of the robot and must assume total responsibility for mission safety and success. This mode is suitable in complicated or unexpected situations which no algorithm can deal with. In return, it often means a heavy workload for the teleoperator, who needs to focus his attention on the ongoing task. With Supervisory control [4], the operator (called the supervisor) orders a subordinate (the robot) to achieve a predefined plan. In this paradigm, the robot is merely a tool executing tasks under operator monitoring. This interaction mode can adapt to low bandwidth and high-level control, but it needs constant vigilance from the operator, who must react in case of failures (to perform manual mission re-planning, for example). This mode is only suitable for static environments where planning is really effective.

Behavior-based teleoperation [5] replaces fine-grained control by robot behaviors which are locally autonomous. In comparison to supervisory control, this paradigm brings safety and provides the robot with the opportunity to react to its own environment perception (e.g. to avoid obstacles). Thus, it allows the operator to neglect the robot for longer periods. Adjustable autonomy [6] refers to the adjustment of the level of autonomy while the system operates, initiated by the operator, by another system or by the system itself. The goal of this paradigm is generally to increase the neglect time allowed to the operator while maintaining the robot at an acceptable safety and effectiveness level.

In Traded control [7], the operator controls the robot during one part of the task and the robot behaves autonomously during the other part. This paradigm may lead to an equivalent of the kidnapped robot problem: while the robot is controlled by the operator, it can lose its situation awareness and face difficulties when coming back to autonomous control. In Shared control [8], part of the robot functions are autonomous and the remaining ones are controlled by the operator. It is important to notice that this mode requires constant attention from the operator. If the robot is only in charge of some safety mechanisms, this paradigm is equivalent to safeguarded teleoperation.

Mixed-initiative [9] is characterized by a continuous dialogue between the operator and the robot, where both of them share decision and responsibility. Collaborative control [10] may be described as a restrictive implementation of mixed-initiative: in this approach, the human / robot dialogue is mainly reduced to a predefined set of questions and answers.

All these paradigms may be considered as subsets of the human / robot teaming domain, with significant overlap. Therefore, many robotic experiments use a mix of these paradigms, including our work on HARPIC and the related projects described in the next section.

3.2. Existing applications in defense robotics

Most applications for military unmanned ground vehicles are based on existing control architectures, and the HRI mechanisms often reflect this initial approach. Most of these architectures are hybrid, but some of them are more oriented towards the deliberative aspect while others focus mostly on their reactive components. As a result, some HRI mechanisms are more oriented towards high-level interaction at the planning phase (deliberative) while others tend to be more detailed at the reactive level.

For instance, the small UGV developed by the Swedish Royal Institute of Technology [11] for military operations in urban terrain is inspired by the SSS (Servo, Subsumption, Symbolic) hybrid architecture. The individual components correspond to reactive behaviors such as goto, avoid, follow me, mapping, explore, road-follow and inspect. During most tasks, only a single behavior is active, but when multiple behaviors are involved, a simple superposition principle is used for command fusion (following the SSS principles). The activation or deactivation of a behavior seems to be achieved using the planning performed before the mission. The interface is state-based (modeled as a simple finite automaton) so that the user can choose a task from a menu and use simple buttons to control it. However, the authors describe precisely neither the planning module (the deliberative level) nor its relationship with the direct activation of behaviors (more reactive). Thus, it seems that the stress has been laid on the lower levels of the architecture (Servo, Subsumption).

The software architecture of the US Idaho National Engineering and Environmental Laboratory (INEEL), which has been tested by the US Space and Naval Systems Center in the scope of the DARPA TMR (Tactical Mobile Robotics) project [12], was partially inspired by Brooks' subsumption architecture, which provides a method for structuring reactive systems from the bottom up using layered sets of rules. Since this approach can be highly robust in unknown or dynamic environments (because it does not depend on an explicit set of actions), it has been used to create a robust routine for obstacle avoidance. Within INEEL's software control architecture, obstacle avoidance is a bottom-up layer behavior, and although it underlies many different reactive and deliberative capabilities, it runs independently from all other behaviors. This independence aims at reducing interference between behaviors and lowering overall complexity. INEEL also incorporated deliberative behaviors which function at a level above the reactive behaviors. Once the reactive behaviors are "satisfied", the deliberative behaviors may take control, allowing the robot to exploit a world model in order to support behaviors such as area search, patrol perimeter and follow route. These capabilities can be used in several different control modes available from INEEL's OCU (and mostly dedicated to urban terrain). In safe mode, the robot only takes initiative to protect itself and the environment, but otherwise lets the user drive it. In shared mode, the robot handles the low-level navigation, but accepts intermittent input, often at the robot's request, to help guide it in general directions. In autonomous mode, the robot decides how to carry out high-level tasks such as follow that target or search this area without any navigational input from the user. Therefore, this system can be considered as a sliding autonomy concept.

The US Army Demo III project is based on the 4D/RCS reference architecture designed by the NIST (based on the German 4D and on the NIST's former RCS architectures), which decomposes the robot mission planning into numerous hierarchical levels [13]. This architecture can be considered as hybrid in the sense that it endows the robot with both reactive behaviors (in the lower levels) and advanced planning capabilities. Theoretically speaking, the operator can explicitly interact with the system at any hierarchical level. The connections to the OCU should enable a human operator to input commands, to override or modify system behavior, to perform various types of teleoperation, to switch control modes (e.g. automatic, teleoperation, single step, pause) and to observe the values of state variables, images, maps and entity attributes. This operator interface could also be used for programming, debugging and maintenance. However, in the scope of the Demo III program, only the lower levels of the architecture have been fully implemented (servo, primitive and autonomous navigation subsystems), but they led to significant autonomous mobility capacities tested during extensive field exercises at multiple sites [14]. The OCUs feature touch screens, map-based displays and context-sensitive pull-down menus and buttons. They also propose a complete set of planning tools as well as built-in mission rehearsal and simulation capabilities. In the future, the NIST and the Demo III program managers intend to implement more tactical behaviors involving several unmanned ground vehicles, thus addressing higher levels of the 4D/RCS hierarchy. This will enable the developers to effectively test the scalability and modularity of such a hierarchical architecture.

The German demonstrator PRIMUS-D [15], dedicated to reconnaissance in open terrain, is also compatible with the 4D/RCS architecture. The authors provide various details about the data flows between subsystems, which gives indications about the way HRI components could interact and be linked with robot subsystems. Within the PRIMUS OCU, all inputs and outputs are organized in the "Man-Machine Interface" segment, which enables the operator to perform mission planning, command and control of the vehicle. Inputs from the operator flow to a "coordination" segment which mainly performs plausibility checks, command management and command routing. Commands or entire mission plans are transmitted to the robot vehicle by the "communication" segment, if "coordination" decides that the command can be executed by the robot. An additional software module called "payload" manages the RSTA payload. Concerning the outputs from the robot, "communication" forwards received data, depending on the data type, to "coordination" (robot state information) or to "signal and video management". On the robot side, commands are received by a "communication" segment and forwarded to "coordination" for plausibility checks. If the command can be executed by the robot, it is pipelined to the next segment: either "payload" if it concerns the RSTA module, or "driving" and "perception" if it is related to the navigation of the platform. The PRIMUS robot can be operated according to five modes: autonomous driving with waypoint navigation, semi-autonomous driving (with high-level commands such as direction, range and velocity), remote control (assisted with obstacle detection and collision avoidance), road vehicle following and autonomous vehicle following. It can thus be considered as an adjustable autonomy concept, including simple or behavior-based teleoperation and autonomous modes. However, it is unclear how modular the system is and how new capabilities could be added to it.

In the scope of the DARPA MARS (Mobile Autonomous Robotic System) program, Fong, Thorpe and Baur have developed a very rich HRI scheme based on collaborative control, where humans and robots work as partners and assist each other to achieve common goals. So far, this approach has been mainly applied to robot navigation: the robot asks the human questions to obtain assistance with cognition and perception during task execution. An operator profile is also defined so as to perform autonomous dialog adaptation. In this application, collaborative control has been implemented as a distributed set of modules connected by a message-based architecture [10]. The main module of this architecture seems to be the safeguarded teleoperation controller, which supports varying degrees of cooperation between the operator and the robot. Other modules include an event logger, a query manager, a user adapter, a user interface and the physical mobile robot itself.

Finally, concerning multirobot applications, the DARPA TMR program also led to the demonstration of multirobot capabilities in urban environments based on the AuRA control architecture, in relation with the MissionLab project [16]. This application relies on schema-based behavioral control (where the operator is considered as one of the available behaviors for the robots [17]) and on a usability-tested mission specification system. The software architecture includes three major subsystems: 1) a framework for designing robot missions and a means for evaluating their overall usability; 2) an executive subsystem which represents the major focus for operator interaction, providing an interface to a simulation module, to the actual robot controllers, to premission specification facilities and to the physical operator ground station; 3) a runtime control subsystem located on each active robot, which provides an execution framework for enacting reactive behaviors, acquiring sensor data and reporting back to the executive subsystem to provide situation awareness to the team commander. A typical mission starts with a planning step through the MissionLab system. The mission is compiled through a series of languages that bind it to a particular robot (Pioneer or Urbie). It is then tested in faster-than-real-time simulation and loaded onto the real robot for execution. During execution, a console serves as monitoring and control interface: it makes it possible to intervene globally on the mission execution (stop, pause, restart, step by step...), on the robot groups (using team teleautonomy, formation maintenance, bounding overwatch, by directing robots to particular regions of interest or by altering their societal personality) and on the individual robots (activating behaviors such as obstacle avoidance, waypoint following, moving towards goals, avoiding enemies, seeking hiding places, all cast into mission-specific assemblages, using low-level software or hardware drivers such as movement commands, range measurement commands, position feedback commands, system monitoring commands, initialization and termination). The AuRA architecture used in this work can be regarded as mostly reactive: this reactivity appears mainly through the robot behaviors, while other modules such as the premission subsystem look more deliberative (but they seem to be activated beforehand).

To conclude on these various experiences: most of these systems implement several control modes for the robot, corresponding to different autonomy levels; it seems that a single HRI mode is not sufficient for the numerous tasks and contexts military robots need to deal with. Most of them are based on well-known architectures (some of which were originally designed for autonomous systems rather than semi-autonomous ones), leading to various HRI mechanisms. However, it is still difficult to compare all these approaches, since we lack precise feedback (except for the few systems which were submitted to – heterogeneous but – extensive performance evaluations, e.g. the teleoperation / autonomy ratio for Demo III [18], MissionLab's usability tests [19] or urban search and rescue competitions [20]) and since they address different missions and different contexts (mono-robot vs multi-robot, urban terrain vs ill-structured environments, etc.). Moreover, it is hard to assess their scalability and modularity in terms of HRI: for instance, does the addition of a new HRI mode compel the developer to modify large parts of the architecture? Do they allow easy extensions to multi-robot and multi-operator configurations?

In the next section, we will describe our work on the HARPIC architecture so as to illustrate and give a detailed account of an adjustable autonomy development: this description will raise more general issues about scalability and modularity, thus leading to open questions concerning the insertion of HRI within robot software architectures.

4. PRESENTATION OF OUR WORK ON HARPIC

In November 2003, the French defense procurement agency (Délégation Générale pour l'Armement) launched a prospective program called "PEA Mini-RoC" dedicated to small unmanned ground systems. This program focuses upon platform development, teleoperation and mission modules. Part of this program aims at developing autonomous functions for robot reconnaissance in urban terrain. In this context, the Centre d'Expertise Parisien (CEP) of the DGA has conducted studies to demonstrate the potential of advanced control strategies for small robotic platforms during military operations in urban terrain. Our goal was not to produce operational systems but to investigate several solutions on experimental platforms, in order to suggest requirements and to be able to build specifications for future systems (e.g. the demonstrator for adjustable autonomy resulting from PEA TAROT).

We believe that in the short term robots will not be able to handle some uncertain situations, and that the most challenging task is to build a software organization which provides a pertinent and adaptive balance between robot autonomy and human control. For good teaming, it is desirable that robots and humans share their capacities along a two-way dialogue. This is the approximate definition of the mixed-initiative control mode but, given the maturity of today's robots and the potential danger for soldiers, we do not think that mixed-initiative can be the solution for now. Therefore adjustable autonomy, with a wide range of control modes – from basic teleoperation to even limited mixed-initiative – appears as the most pragmatic and efficient way.

Thus, based on previous work concerning our robot control architecture HARPIC, we have developed a man-machine interface and software components that allow a human operator to control a robot at different levels of autonomy. In particular, this work aims at studying how a robot could be helpful in indoor reconnaissance and surveillance missions in hostile environments. In such missions, a soldier faces many threats and must protect himself while looking around and holding his weapon, so that he cannot devote his attention to the teleoperation of the robot. Therefore, we have built a software system that allows dynamic swapping between control modes (manual, safeguarded and behavior-based) while automatically performing map building and localization of the robot. It also includes surveillance functions like movement detection and is designed for multirobot extensions.

We first explain the design of our agent-based robot control architecture and discuss the various ways to control and interact with a robot. The main modules and functionalities implementing those ideas in our architecture are detailed. Some experiments on a Pioneer robot equipped with various sensors are also briefly presented, as well as promising directions for the development of robots and user interfaces for hostile environments.

4.1. General description of HARPIC

HARPIC is a hybrid architecture (cf. figure 2) which consists of four blocks organized around a fifth: perception processes, an attention manager, a behavior selector and action processes. The core of the architecture (the fifth block) relies on representations.

Sensors yield data to perception processes which create representations of the environment. Representations are instances of specialized perception models. For instance, for a visual wall-following behavior, the representation can be restricted to the coordinates of the edge detected in the image, which stands for the wall to follow. To every representation are attached references to the process which created it, its date of creation and various data related to the sensor (position, focus...). The representations are stored with a given memory depth.

The perception processes are activated or inhibited by the attention manager and receive information on the current behavior. This information is used to foresee and check the consistency of the representation. The attention manager has three main functions: it updates representations (on a periodical or on an exceptional basis), it supervises the environment (detection of new events) and the algorithms (prediction / feedback control), and it guarantees an efficient use of the computing resources. The action selection module chooses the robot's behavior depending on the predefined goal(s), the current action, the representations and their estimated reliability. Finally, the behaviors control the robot's actuators in closed loop with the associated perception processes.

Figure 2. Functional diagram of the HARPIC architecture.

The key ideas of this architecture are:

• The use of sensorimotor behaviors binding perceptions and low-level actions, both internally and externally: the internal coupling compares a prediction of the next perception (estimated from the previous perception and the current control) with the perception obtained after application of the control, in order to decide whether the current behavior runs normally or should be changed; the external coupling is the classic control loop between perception and action.

• The use of perception processes aiming at creating local situated representations of the environment. No global model of the environment is used; however, less local and higher-level representations can be built from the instantaneous local representations.

• The quantitative assessment of every representation: every algorithm is associated with evaluation metrics which assign to every constructed representation a numerical value expressing the confidence which can be given to it. We regard this assessment as an important feature, since any processing algorithm has a limited domain of validity and its internal parameters are best suited for some situations only: there is no perfect algorithm that always yields "good" results (a minimal data structure supporting this idea is sketched after this list).

• The use of an attention manager: it supervises the execution of the perception processing algorithms independently from the current actions. It takes into account the processing time needed for each perception process, as well as the cost in terms of computational resources. It also looks for new events due to the dynamics of the environment, which may signify a new danger or opportunities leading to a change of behavior. It may also trigger processes in order to check whether the sensors operate nominally, and it can receive error signals coming from the current perception processes. It is also able to invalidate representations due to malfunctioning sensors or misused processes.

• The behavior selection module chooses the sensorimotor behaviors to be activated or inhibited depending on the predefined goal, the available representations and the events issued by the attention manager. This module is the highest level of the architecture. It should be noted that the quantitative assessment of the representations plays a key role in the decision process of the behavior selection.
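To make this idea concrete, the following minimal C++ sketch shows what such an assessed, time-stamped representation and its bounded history could look like. All type and field names here are our own illustrative assumptions, not the actual HARPIC code.

    #include <cstddef>
    #include <ctime>
    #include <deque>
    #include <string>
    #include <vector>

    // Hypothetical sketch of a HARPIC-like representation record; field
    // names are illustrative assumptions, not the actual HARPIC types.
    struct Representation {
        std::string creator;        // perception agent which built it
        std::time_t creation_date;  // when it was built
        double confidence;          // evaluation metric, e.g. in [0, 1]
        std::vector<double> data;   // e.g. edge coordinates for wall following
    };

    // Representations are stored with a given memory depth: a bounded history.
    class RepresentationStore {
    public:
        explicit RepresentationStore(std::size_t depth) : depth_(depth) {}
        void push(const Representation& r) {
            history_.push_back(r);
            if (history_.size() > depth_) history_.pop_front();
        }
        const Representation& latest() const { return history_.back(); }
    private:
        std::size_t depth_;
        std::deque<Representation> history_;
    };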

4.2. HARPIC implementation

Fundamental capacities of our architecture encompass modularity, encapsulation, scalability and parallel execution. To fulfill these requirements, we decided to use a multi-agent formalism, which naturally fits our need for encapsulation into independent, asynchronous and heterogeneous modules. The communication between agents is realized by messages. Object-oriented languages are therefore well suited for programming agents: we chose C++. We use POSIX Threads to obtain parallelism: each agent is represented by a thread in the overall process. For us, multi-agent techniques are an interesting formalism; although our architecture could be implemented without them, they led to a very convenient and scalable framework.

All the agents have a common structure inherited from a basic agent and are then specialized. The basic agent can communicate by sending messages, has a mailbox where it can receive messages and runs its own process independently from other agents. Initially, all present agents have to register with a special agent called the administrator, which records all information about agents (name, type, representation storage address...). All these data can be consulted by any agent. Then, when an agent is looking for another one for a specific job to do, it can access it and its results. This is what happens, for example, when an action agent has to use a representation coming from a perception agent.
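As an illustration of this structure, here is a minimal C++ sketch of a basic agent with a mailbox and its own thread. It uses std::thread for brevity where the original relies on POSIX Threads, and every name is an illustrative assumption rather than the actual HARPIC code.

    #include <condition_variable>
    #include <mutex>
    #include <queue>
    #include <string>
    #include <thread>

    struct Message { std::string sender, content; };

    // Minimal mailbox: agents communicate exclusively by messages.
    class Mailbox {
    public:
        void post(const Message& m) {
            { std::lock_guard<std::mutex> lk(mu_); box_.push(m); }
            cv_.notify_one();
        }
        Message wait() {  // blocks until a message arrives
            std::unique_lock<std::mutex> lk(mu_);
            cv_.wait(lk, [this] { return !box_.empty(); });
            Message m = box_.front(); box_.pop(); return m;
        }
    private:
        std::mutex mu_;
        std::condition_variable cv_;
        std::queue<Message> box_;
    };

    // Basic agent: owns a mailbox and runs its own loop in a dedicated thread.
    class Agent {
    public:
        explicit Agent(std::string name) : name_(std::move(name)) {}
        virtual ~Agent() { if (thread_.joinable()) thread_.join(); }
        void start() { thread_ = std::thread([this] { run(); }); }
        Mailbox& mailbox() { return mailbox_; }
    protected:
        virtual void run() = 0;  // specialized by perception, action... agents
        std::string name_;
        Mailbox mailbox_;
        std::thread thread_;
    };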

Perception and action agents follow this global scheme. An action agent is activated by a specific request coming from the behavior selection agent. The selection agent orders it to work with a perception agent by sending its reference. The action agent sends in turn a request to the proper perception agent. Perception agents are passive: they only run upon request, perform a one-shot execution and then wait for a new message. Furthermore, a perception agent can activate another agent and build a more specific representation using its complementary data. Many action and perception agents run at the same time but most are waiting for messages. Only one behavior (composed of a perception agent and an action agent) is active at a time. Within a behavior, it is up to the action agent to analyze the representations coming from the perception agent and to establish the correct control orders for the platform.

The attention agent supervises the perception agents. It has a look-up table where it finds the types of perception agents it has to activate depending on the current behavior. It is also in charge of checking the perception results and of declaring new events to the behavior selection agent when necessary. The advantage of this organization is detailed in a previous paper [21].
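The look-up table itself can be as simple as a map from behavior names to the perception agent types to trigger; the following lines are purely illustrative assumptions about its shape, not the actual implementation.

    #include <map>
    #include <string>
    #include <vector>

    // Hypothetical look-up table: which perception agents the attention
    // agent must activate for each behavior (all names are illustrative).
    using AttentionTable = std::map<std::string, std::vector<std::string>>;

    AttentionTable makeDefaultTable() {
        return {
            {"wall_following",     {"laser_edge_extractor", "image_edge_extractor"}},
            {"corridor_following", {"laser_corridor_detector"}},
            {"obstacle_avoidance", {"laser_free_space"}},
        };
    }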

The selection agent has to select and activate the behavior suited to the robot mission. This agent may be totally autonomous or constitute the process that runs the human computer interface. In this work, it is the latter case; this agent is detailed in paragraph 4.4.

We use two specific agents to bind our architecture to the hardware. The first one is an interface between the software architecture and the real robot. This agent awaits a message from an action agent before translating the instructions into orders comprehensible to the robot. Changing the real robot requires the use of a specific agent but no change in the overall architecture. The second agent acquires images from the grabber card at a regular rate and stores them in computer memory, where they can be consulted by perception agents.

Finally, we use a specific agent for IP communication with distant agents or other architectures. This agent has two running loops: an inner loop in which it can intercept messages from other agents belonging to the same architecture, and an external loop to get data or requests from distant architectures. This agent supervises the (dis)appearance of distant agents or architectures. It allows the splitting of an architecture across several computers, or the communication between several architectures. For example, this agent is useful when agents are distributed between the robot onboard computer and the operator control unit.

4.3. Agents for SLAM and path planning

Map building is performed by a perception agent that takes laser sensor data and odometry data as input and outputs a representation which contains a map of the environment made of registered laser scans. The map building technique combines Kalman filtering and scan matching based on histogram correlation [22]. This agent is executed whenever new laser data are available (e.g. at 5 Hz), but it adds data to the map only when the robot has moved at least one meter since the last map update.

Localization is performed by a perception agent that takes odometry, laser data and the map representation as input and outputs a representation containing the current position of the robot. In its current implementation, this agent takes the position periodically estimated by the mapping algorithm and interpolates between these positions using odometry data, so as to provide an anytime estimate of the robot position. This agent is executed upon request by any other agent that has to use the robot position (e.g. mapping, planning, human computer interface...).
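A common way to realize this, sketched below under our own assumptions about the data types, is to compose the last SLAM pose with the odometric displacement accumulated since that pose was produced (standard SE(2) composition):

    #include <cmath>

    // 2D pose (x, y in meters, theta in radians).
    struct Pose2D { double x, y, theta; };

    // Compose a base pose with a relative displacement expressed in the
    // base pose's own frame (standard SE(2) composition).
    Pose2D compose(const Pose2D& base, const Pose2D& delta) {
        const double c = std::cos(base.theta), s = std::sin(base.theta);
        return { base.x + c * delta.x - s * delta.y,
                 base.y + s * delta.x + c * delta.y,
                 base.theta + delta.theta };
    }

    // Anytime position: last SLAM estimate corrected by the odometric
    // displacement measured since that estimate was produced.
    Pose2D anytimePose(const Pose2D& lastSlamPose, const Pose2D& odomSinceSlam) {
        return compose(lastSlamPose, odomSinceSlam);
    }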

Finally, path planning is carried out by an action agent that takes the map and position representations as inputs and outputs motor commands that drive the robot toward the current goal. This agent first converts the map into an occupancy grid; then, using a value iteration algorithm, it computes a potential that gives, for every cell of the grid, the length of the shortest path from this cell to the goal. The robot movements are then derived by gradient descent on this potential from the current robot position. The path planning agent is used in the Navigation control mode of the operator control unit (described below).
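The sketch below illustrates the general idea on a 4-connected grid, under our own simplifying assumptions about the grid encoding and unit cell costs: value iteration sweeps propagate the distance-to-goal through free cells, and the robot then follows the steepest descent of the resulting potential.

    #include <vector>

    const double INF = 1e18;

    // potential[i] = shortest path length (in cells) from cell i to the goal;
    // occupied cells keep an infinite potential. Plain value iteration:
    // sweep until no cell improves.
    std::vector<double> computePotential(const std::vector<int>& occupied,
                                         int width, int height, int goal) {
        std::vector<double> v(occupied.size(), INF);
        v[goal] = 0.0;
        const int dx[4] = {1, -1, 0, 0}, dy[4] = {0, 0, 1, -1};
        bool changed = true;
        while (changed) {
            changed = false;
            for (int y = 0; y < height; ++y)
                for (int x = 0; x < width; ++x) {
                    int i = y * width + x;
                    if (occupied[i]) continue;
                    for (int k = 0; k < 4; ++k) {
                        int nx = x + dx[k], ny = y + dy[k];
                        if (nx < 0 || nx >= width || ny < 0 || ny >= height) continue;
                        double cand = v[ny * width + nx] + 1.0;
                        if (cand < v[i]) { v[i] = cand; changed = true; }
                    }
                }
        }
        return v;
    }

    // One gradient descent step: move to the 4-neighbor with lowest potential.
    int descentStep(const std::vector<double>& v, int width, int height, int cell) {
        int x = cell % width, y = cell / width, best = cell;
        const int dx[4] = {1, -1, 0, 0}, dy[4] = {0, 0, 1, -1};
        for (int k = 0; k < 4; ++k) {
            int nx = x + dx[k], ny = y + dy[k];
            if (nx < 0 || nx >= width || ny < 0 || ny >= height) continue;
            if (v[ny * width + nx] < v[best]) best = ny * width + nx;
        }
        return best;  // equals `cell` when the goal (a local minimum) is reached
    }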

4.4. HCI agent

In this implementation, our human computer interface is a graphical interface managed by the behavior selection agent of HARPIC. It is designed for a small 320x240-pixel touch-screen, such as the one that equips most personal digital assistants (PDAs). With this interface, the user selects a screen corresponding either to the control mode he wants to activate or to a view of the environment measures and representations built by the robot. These screens are described below.

Figure 3. Interface for laser-based (left), image-based (center) teleoperation and goal navigation (right).

Figure 4. Image screen in normal (left) and in low-light condition (right) with overlaid polygonal view.

The Teleop screen corresponds to a teleoperation mode where the operator controls the translational and the rotational speed of the robot (see figure 3 (left)) by defining on the screen the end of the speed vector. The free space measured by the laser scanner can be superimposed on the screen, where it appears in white. The operator can also activate anti-collision and obstacle avoidance functions. Messages announcing obstacles are also displayed. This screen allows full teleoperation of the robot displacement (with or without direct view of the robot, thanks to the laser free-space representation) as well as safeguarded teleoperation. As a result, the operator has full control over the robot, in order to deal with precise movements in cluttered spaces or with autonomous mobility failures. In return, he must keep defining the robot speed vector, otherwise the robot stops and waits. A functionality related to the reflexive teleoperation concept [12] also enables the operator to activate behaviors depending on the events detected by the robot. Indeed, clicking on "wall" or "corridor" messages appearing on the screen makes the system trigger "wall following" or "corridor following" behaviors, thus activating a new control mode.

The Image screen allows the operator to control the robot by pointing at a location or defining a direction within the image acquired by the onboard robot camera. Like the previous mode, this one enables full control or safeguarded teleoperation of the robot displacement. Two sliders operate the pan and tilt unit of the camera. When the GoTo XY function is enabled, the location selected by the operator in the image is translated into a robot displacement vector by projection onto the ground plane, with respect to the camera calibration and orientation angles. The Direction function moves the robot when the operator defines the end of the speed vector on the screen. The selectable Laser function draws a polygonal view of the free space in front of the robot (in an augmented reality fashion), built from the projection of the laser range data into the image. This "Tron-like" representation allows the operator to control the robot in the image whenever there is insufficient light for the camera. Incidentally, this function provides a visual check of the correspondence between the laser data and the camera data. Figure 4 illustrates the effect of this function. If the GoTo XY or Direction functions are not enabled and the robot is in an autonomous mode, this screen can be used by an operator to supervise the robot action by viewing the images of the onboard camera. The operator can still stop the robot in case of emergency.
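As an illustration of this projection, here is a hedged C++ sketch assuming a simple pinhole camera at height h, pitched down by a tilt angle, in a robot frame with X forward, Y left and Z up; the camera model and all names are our own assumptions, not the actual implementation.

    #include <cmath>
    #include <optional>

    // Hypothetical camera model: focal lengths fx, fy and principal point
    // (cx, cy) in pixels; camera at height h (m), pitched down by tilt (rad),
    // looking along the robot's forward axis.
    struct Camera { double fx, fy, cx, cy, h, tilt; };

    struct GroundPoint { double x, y; };  // in the robot frame, on the ground

    // Project an image click (u, v) onto the ground plane Z = 0.
    // Returns nothing when the ray points at or above the horizon.
    std::optional<GroundPoint> clickToGround(const Camera& c, double u, double v) {
        // Ray through the pixel, in camera coordinates (x right, y down, z forward).
        const double rx = (u - c.cx) / c.fx;
        const double ry = (v - c.cy) / c.fy;
        // Same ray in the robot frame, accounting for the downward tilt.
        const double st = std::sin(c.tilt), ct = std::cos(c.tilt);
        const double dX = ct - st * ry;   // forward
        const double dY = -rx;            // left
        const double dZ = -st - ct * ry;  // up (negative when aiming at the ground)
        if (dZ >= -1e-9) return std::nullopt;  // horizon or sky: no intersection
        const double s = c.h / -dZ;            // scale so the ray reaches Z = 0
        return GroundPoint{ s * dX, s * dY };
    }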

The Navigation screen shows a map of the area already explored by the robot. In this control mode, the operator has to point at a goal location in the map to trigger an autonomous displacement of the robot up to this goal. The location can be changed dynamically (whenever the previous goal has not yet been reached). The planning process is nearly immediate (a few seconds). When new areas are discovered, the map is automatically updated. As shown in figure 3 (right), the map is an occupancy grid where bright areas correspond to locations which have been observed as free more often than darker areas. Three sliders can translate or zoom the displayed map. This is an autonomous control mode where the operator can select a goal and forget the robot.

Figure 5. Interface for agent (left), program (center) and robot (right) selection.

The Agents screen allows the composition and activation of behaviors by selecting an appropriate pair of perception and action agents (see fig. 5 (left)). For example, it allows the execution of behaviors like obstacle avoidance, wall following or corridor following with different perception or action agents (each one corresponding to a specific sensor or algorithm) for the same behavior. For example, a wall following behavior can result from a perception agent using the camera, from another one using the laser scanner and from various algorithms. This control mode corresponds to the behavior-based teleoperation paradigm. However, this screen has been mainly designed for expert users and development purposes. It lacks simplicity, but it will easily be reduced to a small number of buttons once the most effective set of behaviors has been determined.

The Prog screen corresponds to predefined sequences of behaviors. For example, it enables the robot to alternate obstacle avoidance and wall or corridor following when, respectively, an obstacle, a wall or a corridor appears in the robot trajectory (see fig. 5 (center)). This example is a kind of sophisticated wander mode. More generally, this mode allows autonomous tasks that can be described as a sequence of behaviors, like exploration or surveillance, where observation and navigation behaviors are combined. The list of these sequences can easily be augmented to adapt the robot to a particular mission. In this control mode, the robot is autonomous and the operator workload can be null.

A MultiRobot screen allows the operator to select the robot controlled by his control unit. Indeed, our interface and software architecture are designed to address more than one robot. In a platoon with many robots, this capacity opens the way to the sharing of each robot's data or representations and to the exchange of robot control between soldiers.

Figure 6. Interface for local map (left), global map (center) and moving object detection (right).

The Local Map and Global Map screens show the results of the SLAM agents described in 4.3 (see fig. 6 (left and center)). The first one is a view of the free zone determined from each laser scan. The second one displays the global map of the area explored by the robot and its trajectory. The circle on the trajectory indicates the location where the SLAM algorithm has added new data. The current position of the robot is also shown on the map. As on some other screens, sliders allow the operator to translate and zoom the display. These screens may be used when the robot is in any autonomous mode to supervise its movements.

The Detection screen displays the trajectories of moving objects in the map built by the robot (see fig. 6 (right)). This screen is intended for surveillance purposes. The algorithm used is based on movement detection in the laser range data and Kalman filtering. This screen shows that the interface is not limited to the displacement control of the robot but can be extended to many surveillance tasks.

The transition between any autonomous or teleoperation screens causes the ending of the current action or behavior. These transitions have been designed to appear natural to the operator. However, when one of these modes is activated, it is still possible to use the screens that display robot sensor data or representations without deactivating them. This feature is valid for the operator control unit but also for other control units: in a soldier team, images and representations from a given robot can thus be viewed by a team member who is not responsible for operating that robot.
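A minimal sketch of this transition rule, under our own naming assumptions, could look as follows: selecting a screen that takes over the platform first ends the current behavior, while display-only screens leave it running.

    #include <set>
    #include <string>

    // Hypothetical screen dispatcher for the HCI agent. Control screens
    // (which take over the platform) end the current behavior; display-only
    // screens (maps, images, detection) leave it untouched.
    class ScreenDispatcher {
    public:
        void select(const std::string& screen) {
            static const std::set<std::string> controlScreens = {
                "Teleop", "Image", "Navigation", "Agents", "Prog"};
            if (controlScreens.count(screen)) stopCurrentBehavior();
            current_ = screen;
        }
    private:
        void stopCurrentBehavior() {
            // e.g. send a "stop" message to the active action agent
        }
        std::string current_;
    };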

4.5. Experimentation

The robot used in both our indoor and outdoor experiments is a Pioneer 2AT from ActivMedia, equipped with sonar range sensors, a color camera (with motorized lens and pan-tilt unit) and an IBEO laser range sensor. Onboard processing is done by a rugged Getac laptop running Linux, equipped with an IEEE 802.11 wireless link, a frame grabber card and an Arcnet card for the connection to the laser (cf. figure 7). We also use another laptop with the same wireless link that plays the role of the operator control unit. The agents of the robot control architecture are distributed on both laptops. We did not use any specialized hardware or real-time operating system.

Figure 7. The experimental platform.

Several experiments have been conducted in the rooms and corridors of our building and have yielded good results. In such an environment, with long linear walls or corridors, the autonomous displacement of the robot using the implemented behaviors is effective. However, in this particular space, a few limitations of the SLAM process have been identified. They mainly come from the laser measurements when the ground plane hypothesis is not valid and in the presence of glass surfaces.

The largest space we have experimented in so far was the exhibition hall of the JNRR'03 conference [23]. Figure 8 shows the global map and the robot trajectory during this demonstration. It took place in the machine-tool hall of the Institut Français de Mécanique Avancée (IFMA) and in the presence of many visitors. As can be seen in figure 8, these moving objects introduced some sparse and isolated edges on the map but did not disturb the map building process. The robot travelled an area about 60 × 60 m large, with loops and abrupt heading changes. The robot displacement was mainly done in the safeguarded teleoperation mode, because the building lacked main structures and directions for the robot to follow and because of the presence of people.

These experiments have revealed some missing functions in our interface (e.g. mission map initialization, manual limitation of the map space, a goto-starting-point behavior...) but no deep change requirement in the software architecture has been discovered.

4.6. Future work

Our human-computer interface runs on a PC laptop with Linux, using the Qt C++ toolkit for the graphical interface. It is currently being ported to a PDA with Linux and the Qtopia environment. Moreover, new functions and behaviors are being integrated into the platform, such as exploration, go-back-home and assistance in crossing narrow spaces like doors. More mission-oriented behaviors such as surveillance and people or vehicle recognition would also enhance our system and make it more directly suited to operational contexts. In the meantime, we keep improving existing behaviors to make them as robust as possible. Development of extended multirobot capacities, interaction and cooperation is also planned as a second step.

Concerning HRI, beyond considerations about the ergonomics and usability of the interface, we are considering working on semi-autonomous transition mechanisms between the various control modes, thus extending the simple reflexive teleoperation functionalities described above. These transitions could be triggered by changes in environment complexity, for instance: knowing the validity domain of the behaviors, it seems possible to activate better adapted behaviors, to suggest a switch towards another mode24 or to request the help of the human operator. This would also lead to more sophisticated interaction modes such as cooperative control. On-line evaluation of the overall performance of behaviors or autonomy modes24 also appears as a promising direction, all the more so as such evaluation mechanisms are already integrated within our architecture.25

Finally, it could be interesting to introduce more direct human/robot interactions for the various kinds of agents (beyond the interface agent). Perception agents might benefit from human cognition in order to confirm object identification, for instance. Action agents might request human assistance when the robot cannot deal autonomously with challenging environments, while the attention agent could be interested in new human-detected events which would warn the robot about potential dangers or opportunities.

5. PERSPECTIVES AND OPEN ISSUES

The various HRI mechanisms existing in the literature, including our work on HARPIC, raise many general questions. For instance, generally speaking, how can we modify existing control architectures so as to introduce efficient HRI? What features would make some architectures better adapted to HRI than others? What kind of HRI is supported by existing architectures? Is it possible to conceive general standard architectures allowing any kind of HRI? What about scalability and modularity for HRI mechanisms? Which HRI modalities can be considered most efficient within defense robotics applications? All these questions can still be considered open issues. However, based on the examples described in the previous sections and on recent advances in software technologies, we can provide a few clues concerning these topics.



Figure 8. Map and robot trajectory generated during a JNRR'03 demonstration in the IFMA hall. Each circle on the robot trajectory represents the diameter of the robot, which is about 50 cm.

5.1. HRI within a robot control architecture

5.1.1. HRI modes

Humans and robots have different perception, decision and action abilities. Therefore, it is crucial that they should help each other in a complementary way. Depending on the autonomy control mode, the respective roles of the human and the robot may vary (see 26 for instance for a description of these roles according to the control mode). However, in existing interaction models, humans define the high-level strategies, which are almost never transmitted to the robot: in the most advanced cases, the robot only knows the task schedules or behaviors it must execute.

We have already described eight different HRI modes in section 3.1. Some variants such as safeguarded teleoperation or reflexive teleoperation could also be mentioned. Moreover, other approaches have been proposed to characterize autonomy, e.g. ALFUS27 or ANS,28 which could lead to other HRI mode definitions. In the context of military applications, we have seen that adjustable autonomy can be considered a promising mode. However, adjustable autonomy can lead to various mechanisms concerning control mode definitions and transition mechanisms between modes: these complicated issues are currently being addressed in the scope of PEA TAROT (Technologies d'Autonomie décisionnelle pour la RObotique Terrestre), for example.

5.1.2. Functions and functional levels concerned by HRI

Any function of the robot may be concerned by HRI, whether it be perception, decision or action. In any architecture, a human-robot interface can theoretically replace any component receiving information and sending commands. However, in many cases it is not meaningful to make such a replacement, since some of these components can be handled (automated or computed) by machines very effectively. Indeed, the control of a mobile robot can globally operate at three different functional levels. At the lowest level, it consists of direct control of the effectors with sensory feedback and/or a direct view of the robot (teleoperation). At the next level, the operator is in charge of the selection of tasks or behaviors. The upper level corresponds to supervision, where the operator takes part in the mission planning and monitors its execution. Depending on the design approach (respectively reactive, behavioral or deliberative), a control architecture can be modified in order to integrate a specific HRI operating either on the actuator commands, on the behaviors or on the mission planning/execution. This ordering might be difficult to bypass. For instance, in a very deliberative architecture including a planning module dedicated to robot management, it would not be desirable for an HRI to let an operator intervene at the actuator or behavior level (if the latter exists). Indeed, it might seriously disturb the plan execution (leading to the kidnapped robot syndrome).

Thus, within some architectures, interaction with other agents is only allowed at the highest levels. For instance, an extension of the three-tier architecture to the RETSINA MAS2 only plans communications with the high-level reasoning layer. In specific contexts such as HRI in close proximity, LAAS is currently developing sophisticated and dedicated interaction mechanisms (based on common human/robot goals) within the decisional layer of its architecture.29 In the DORO control architecture,30 the user can interact with the higher module (the executive layer) during the planning phase (posting new goals, modifying planning parameters or the generated plan, etc.) and with the middle functional layer during the executive phase (e.g. deciding where the rover has to go or when some activities should stop). Nevertheless, the operator is not supposed to communicate with the lower physical layer. On the other hand, in simple reactive architectures like DAMN, command arbitration allows behaviors as diverse as human teleoperation and robot autonomous safety modules to co-exist.31 Finally, in a hybrid architecture such as HARPIC, where behavior chaining or planning is only considered as advice, there is no major obstacle to the introduction (or the modification) of an HRI at any functional level. In general hierarchical architectures like NASREM,32 4D/RCS13 or 3T,6 HRI is also planned at every level, so that autonomous behaviors can possibly override human commands or, conversely, be interrupted through human intervention.31

Moreover, we can notice that hybrid architectures seem well adapted to the insertion of HRI thanks to the concept of sensorimotor behaviors, which are usually quite meaningful for humans. These behaviors also resemble the "elementary actions" of soldiers, which also makes them good candidates for defense applications.

5.1.3. Human/robot interfaces

Human/robot interfaces are key components of semi-autonomous robots and can be considered part of their architecture. Roughly speaking, interfaces must provide the operator with information (or raw measurements) collected by the robot sensors and allow the operator to send orders. The semantic level of information and commands may vary, but today it is still inferior to that currently manipulated by humans. The interface can however display global and high-level information built by other human actors (such as the tactical situation). More details can be found in 26, for instance, concerning classical basic and additional services for human/robot interfaces.

5.2. Constraints and limitations for the HRI design

5.2.1. Security constraints

In any given architecture, it seems possible (and sometimes necessary) to replace, double or duplicate a component (receiving information and sending commands) with a human/robot interface whenever the human operator's expertise or responsibility cannot be dispensed with, for instance to ensure the safety of people and goods: a robot must not endanger people in its vicinity.

5.2.2. Technical constraints about communication bandwidth and real time

Every functional level can be associated with a frequency of order exchange and information feedback between the human operator and the robot, especially within dynamic environments. For instance, controlling at the actuator level using video feedback requires a high-bandwidth and very steady communication flow between the human and the robot. On the other hand, control at the mission planning level can accommodate sparse and even irregular communications with low bandwidth. The choice of software and hardware communication technologies between the interface and the command receiver/information provider systems will depend on this constraint.


5.2.3. Constraints concerning ergonomics and human capacities

In the multirobot case, or more generally when robots outnumber humans, it is not conceivable that robots should only be controlled at the actuator level, since human operators would suffer from a heavy workload. Higher-level commands (at the behavior or mission levels) are necessary. Moreover, it might be useful to reduce (filter) the information provided to the interfaces (e.g. within limited-reality concepts) and to endow robots with autonomous behaviors so that they can carry on the mission on their own when they have not received any order.

The notion of user profile could also be used to adapt the content of an interface, as well as its level of control, so that it becomes better suited to a given human operator.10 This process may refer to adaptation to the operator's preferences, capacities or states (using pulse monitoring, temperature...). The interface should also adapt to the environment, e.g. by simplifying displays, showing only the most important information in emergency situations or selecting the information depending on the context.

5.2.4. Hardware constraints

At the hardware level, a human/robot interface consists of both an information output device and an order input device. The standardization constraint on an interface mainly lies in the fact that one must avoid devices whose integration and ergonomics have been specifically designed and adapted to a particular application, such as a pilot joystick providing all the degrees of freedom and buttons corresponding to the robot functions. In the near future, the most standard interface will probably be composed of a keyboard and a screen (possibly tactile) or a simple joystick. A specific interface can be simulated by a standard one (e.g. a tactile screen), despite reduced ergonomics and a probable loss of operator efficiency when sending orders.

5.2.5. HRI dependence on missions, platforms, payloads and interface devices

Concerning general HRI modes, independence seems quite challenging, since the context may induce very different HRI needs. For instance, dismounted soldiers will probably not be able to bear as much workload as mounted soldiers sheltered in an armoured control unit. They may also operate in close vicinity to the robot, which will lead to specific collaboration schemes, for security reasons for instance (which might be similar to service robotics approaches), while remote operation of the robot will imply other relationships with it, with awareness problems similar to those encountered in urban search and rescue (USAR) challenges.

Each mission and each environment can be associated with specific platform, payload and HRI device categories. In a dynamic and hostile environment, a dismounted soldier should use a simple, intuitive and small interface, which will be quite different from the one used by his unit commander in his command and control shelter. However, all this equipment can be built based on the same software technologies and on common communication interfaces.

5.3. New software technologies for HRI

Like everyday civilian systems, military systems are evolving from very centralized computing towards distributed computing. Future warriors will be equipped with numerous sensors, will wear "smart" clothes and will team with various semi-autonomous systems. They will rely on technologies such as ubiquitous, ambient and pervasive computing. Well-accepted and efficient computing needs to be context-sensitive, taking into account individual, collective as well as social dimensions.

Generally speaking (beyond robotic applications), multi-agent systems (MAS) enable coordination between distributed operations and the creation of open systems with increasing complexity and increasing abstraction levels. This paradigm was developed around notions such as autonomy, interaction, organization and emergence to emphasize the relationship between the user and the system. Practically speaking, MAS combine several emerging technologies: distributed computing such as grid computing (investigating how to use distributed and dynamically accessible computing resources in an efficient way), artificial intelligence, constraint programming, learning, data mining, planning, scheduling and web semantics. Thus, MAS make it possible to move from a traditional, centralized, closed and passive structure to a decentralized, open and pro-active conception. They are characterized by notions such as extensibility, portability, robustness, fault or failure tolerance, and reliability.

Real-time issues, as well as complex, constrained and embedded systems, are still not fully addressed by the MAS community. However, agents can be created to address specific capabilities. Concerning real-time issues, numerous studies propose anytime algorithms providing results which become more satisfying as the algorithm execution time increases. These algorithms can be interrupted at any time, the quality of the result depending on the time allocated to their execution. Within HARPIC, this capability has been implemented using several agents dedicated to the same task, corresponding to algorithms with different costs. The selection of a specific agent thus depends on the available computing time and on its cost.25
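As an illustration of this selection scheme, the following sketch (in C++, the implementation language mentioned below) picks, among several agents implementing the same task at different costs, the best one that fits the available computing time. The agent names, costs and quality scores are hypothetical, not HARPIC's actual agents.

    #include <algorithm>
    #include <iostream>
    #include <string>
    #include <vector>

    // One agent variant for a given task: an algorithm with a known cost
    // (estimated running time) and an expected result quality.
    struct AgentVariant {
        std::string name;
        double cost_ms;   // estimated execution time
        double quality;   // expected quality of the result (higher is better)
    };

    // Pick the best-quality variant whose cost fits in the time budget,
    // falling back to the cheapest one so the task is never skipped.
    const AgentVariant& select(const std::vector<AgentVariant>& variants,
                               double budget_ms) {
        const AgentVariant* best = nullptr;
        for (const auto& v : variants)
            if (v.cost_ms <= budget_ms && (!best || v.quality > best->quality))
                best = &v;
        if (!best)  // nothing fits: degrade gracefully to the cheapest agent
            best = &*std::min_element(variants.begin(), variants.end(),
                [](const AgentVariant& a, const AgentVariant& b) {
                    return a.cost_ms < b.cost_ms; });
        return *best;
    }

    int main() {
        // Hypothetical perception agents for one and the same task.
        std::vector<AgentVariant> obstacleDetectors = {
            {"coarse_grid", 5.0, 0.4},
            {"edge_based", 20.0, 0.7},
            {"full_segmentation", 80.0, 0.95},
        };
        std::cout << select(obstacleDetectors, 25.0).name << "\n";  // edge_based
        std::cout << select(obstacleDetectors, 2.0).name << "\n";   // coarse_grid
    }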


As a result, it seems that the numerous capacities and technologies involved in MAS are well suited to new operational concepts such as network-centric warfare, where humans and manned systems are supposed to interact with semi-autonomous and robotic entities. The multi-agent paradigm also appears as an interesting framework for the development of robot control architectures designed for defense applications. Indeed, multi-agent systems make it possible to build applications bottom-up, without being constrained by evolving technical requirements. The conception of military systems (or systems of systems), whose life cycle may turn out to be long, could thus benefit from the multi-agent paradigm. Indeed, agents can be considered as services which can be used by the system depending on its current needs.

Multi-threading and object-oriented languages can also facilitate the development of software architectures for robots, especially in a multi-agent framework. Beyond the similarity between the object and agent concepts, the object-oriented language C++ has allowed us to develop common structures for agent hierarchies, inheriting mechanisms such as communication by messages between agents. For example, without resorting to multi-agent systems, the designers of the CLARAty architecture (considered as NASA's future architecture) have also opted for an object-oriented approach in order to facilitate code reusability and extension.33 Indeed, this approach favors a hierarchical and modular decomposition of the system at different levels of abstraction. For instance, a class can provide a locomotion function which becomes more and more specialized as it comes closer to the effectors (wheels, tracks...). This approach could also favor a hierarchical decomposition of the HRI, depending on the precision of the information or orders exchanged between both entities.
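A minimal sketch of the kind of hierarchical decomposition meant here, using hypothetical class names (neither CLARAty's nor HARPIC's actual interfaces): a generic locomotion interface progressively specialized towards the effectors.

    #include <iostream>

    // Generic locomotion interface: callers only know how to ask for motion.
    class Locomotion {
    public:
        virtual ~Locomotion() = default;
        virtual void move(double linear_mps, double angular_rps) = 0;
    };

    // Specialization for wheeled platforms: converts the motion request
    // into left/right wheel speeds (differential drive).
    class WheeledLocomotion : public Locomotion {
    public:
        explicit WheeledLocomotion(double track_width_m) : track_(track_width_m) {}
        void move(double v, double w) override {
            double left  = v - w * track_ / 2.0;
            double right = v + w * track_ / 2.0;
            driveWheels(left, right);
        }
    protected:
        // Closest to the effector: a concrete motor driver would override
        // this to talk to the actual hardware bus.
        virtual void driveWheels(double left_mps, double right_mps) {
            std::cout << "wheels: " << left_mps << " / " << right_mps << "\n";
        }
    private:
        double track_;
    };

    int main() {
        WheeledLocomotion robot(0.5);   // 50 cm track width (illustrative)
        Locomotion& loco = robot;       // higher layers use the generic interface
        loco.move(0.3, 0.1);
    }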

As for most parts of the software architecture, the development of HRI mechanisms will probably rely on various tools and tool sets. These tools are likely to be unified within integration platforms covering both the architecture and its environment and containing all the functionalities necessary to create interfaces and interaction modes, ranging from specification assistance to simulation and validation tools (see 16 for instance).

5.4. HRI scalability

Scalability can be viewed as an extension capability avoiding:
• extensive studies concerning the design of the architecture when adding new functionalities;
• incompatibilities, disturbances or deadlocks between system components;
• reduced performance due to coordination issues between the system components or to communication bottlenecks.

To tackle (and possibly solve) this problem, military organizations rely on specialized yet adaptive units for every activity (based on human adaptation capabilities), on communication and local negotiation or re-organization capabilities for every component, as well as on a strong hierarchy wherein orders and reports flow rapidly.

Specialization, communication and hierarchy are quite easy to reproduce in software systems. Adaptation (including learning), negotiation and re-organization remain more open issues. Some of these capabilities have been developed for communication purposes, for example, where QoS (quality of service) or transmission bandwidth is negotiated depending on the quality of the medium. They are also at the core of an emblematic advanced project by IBM concerning "Autonomic Computing", where a MAS-based system aims at delegating to software systems part of their own management, using self-healing and self-analysing agents. Moreover, agent hierarchization and prioritization mechanisms have already been implemented on MAS, which appear as a promising paradigm in terms of adaptation, negotiation and re-organization. Indeed, such capabilities are part of the motivations for creating MAS.

Concerning the information or representations which are to be manipulated within future large systems and systems of systems (containing both humans and robots), it seems difficult today to plan everything in advance. It should be possible to define the level of detail for every entity. The information which could be delivered by entities of higher, similar or lower hierarchical rank should be subject to queries and answers, according to context-dependent rules or hierarchical decisions. One could thus envision future military network-centric systems as a kind of web service endowed with hierarchical rights and improved security.

5.5. HRI standardization

Current design approaches for robot software architectures may not exploit the full possibilities of HRI (e.g. a given architecture may not adapt to any HRI mode): the emergence of reference and standardized architectures might help reduce these specificities. Moreover, standards for architectures and for the associated HRI would probably facilitate the extension of current systems, the specification and development of new architectures, as well as the validation of their design.


However, HRI standardization goes beyond the simple fact of using standard technologies. It requires modular conception and component scalability as well. Thus, important issues remain regarding HRI design. Is it possible to build generic logical views of control architectures with associated HRI components, including the communication mechanisms between these components? Is there a limit to this standardization, given the wide range of robot missions and the various HRI modes?

Concerning the specific components that should appear in generic logical views of architectures including HRI, it seems that, depending on the approach, new components may arise. For instance, in the context of service robotics, LAAS has defined interaction agents (representing interacting humans at the robot's decisional level) as well as task delegates (monitoring task execution towards the goal and the level of engagement of interaction agents through specific observers).29 In Fong's approach to collaborative control,3 specific segments have also been added, such as a query manager or a user adapter. The challenge would thus consist in building views general enough to avoid context-specific solutions and at the same time detailed enough to be useful during the specification, development and validation phases of new architectures.

At the software level, the standardization of human/robot interfaces, including their communication with surrounding systems, is not very different from the standardization of any human/machine interface communicating with a wider application: this subject has been extensively studied in the field of distributed computing. For instance, the 3-tier architecture divides an application into three layers: a presentation layer, a processing layer and a data access layer. The presentation layer, i.e. the display and the dialog with the user, can be based on HTML (using a web browser) or on WML (using a GSM terminal). Service-oriented architectures (SOA), which have become popular through web services, make it possible to create services (software components with a strong inner coherence) with weak external coupling. These architectures rely on standards such as XML (Extensible Markup Language) for data description, WSDL (Web Services Description Language) for the description of service interfaces, UDDI (Universal Description Discovery and Integration) for service index management and SOAP (Simple Object Access Protocol) for service calls.

In military information systems, standards (Link 16 or Link 22, MIDS...) have already been developed to exchange numerical messages, concerning tactical situations for instance. In MAS, interaction languages between agents have been specified within the multi-agent community, e.g. KQML or ACL (Agent Communication Language) from FIPA (the Foundation for Intelligent Physical Agents).

These technologies and the associated mechanisms could be used for software standardization purposes, for instance within networks where robots would interact with human operators and various other systems. It seems that these standards could also be considered as references for future HRI purposes in the defense field. For example, in a very prospective approach, similar to common web-service practice, would it be interesting to integrate a service index at the battlefield network level, allowing the negotiation of services between all entities? Will we see robots and soldiers naturally sharing information, helping each other to do their jobs and fulfill their mission?

6. CONCLUSION

In this article, we first listed some operational missions which could involve unmanned vehicles and explained various needs for HRI. In the next section, we mentioned a few robotic systems designed for military applications and described the HRI component within their software architectures. Then we presented our work concerning the development of effective software for controlling a reconnaissance robot: we have created a robot control architecture based on a multiagent paradigm which allows various levels of autonomy and interaction between an operator and the robot. Given the state of the art in robot autonomy in hostile environments, we think that adjustable autonomy is one of the most pertinent choices for the next generation of military robots. Our work shows that many control modes with seamless transitions can be integrated fairly easily in a quite simple operator control unit. It has also confirmed the good behavior of our software architecture HARPIC and the great advantage of the flexible multiagent approach when adding many functions. Moreover, this kind of interface is a good means to experiment with, evaluate or demonstrate autonomous robot behaviors and HRI, which will hopefully allow us to gather more military requirements and to improve the specification of future systems. Finally, based on the lessons learned from the development of this software architecture and from other related work, we discussed more general issues concerning the adaptation of existing architectures to HRI and the possible standardization of HRI within reference architectures. We also proposed a few guidelines for the practical development of software architectures suited to HRI.

To conclude, military contexts offer a wide range of situations for robots, and tackling these issues could lead to numerous innovations in the field of HRI. Even though some of these problems are currently addressed in prospective programs launched by the DGA, such as PEA Mini-RoC, PEA TAROT or PEA Démonstrateur BOA, a major effort is still needed in order to meet tomorrow's needs.

ACKNOWLEDGEMENTS

The work concerning HARPIC was entirely performed within the Geomatics-Imagery-Perception department of the Centre d'Expertise Parisien of the DGA. The authors would like to thank S. Moudere, E. Sellami, N. Sinegre, F. Souvestre, D. Lucas, R. Cornet, G. Calba, C. Couprie and R. Cuisinier for their contributions to the software development of this work. They are also very grateful to D. Filliat, M. Lambert, E. Moline and D. Luzeaux, who shared their reflections about autonomy and human/robot interaction and helped to improve the HARPIC system.

REFERENCES

1. S. Pratt, T. Frost, A. M. Shein, C. Norman, M. Ciholas, G. More, and C. Smith, "Applications of TMR technology to urban search and rescue: lessons learned at the WTC disaster," in SPIE AeroSense, Unmanned Ground Vehicle Technology IV, (Orlando, USA), 2002.
2. I. R. Nourbakhsh, K. Sycara, M. Koes, M. Yong, M. Lewis, and S. Burion, "Human-robot teaming for search and rescue," in Pervasive Computing, 2005.
3. T. Fong, C. Thorpe, and C. Baur, "Collaboration, dialogue, and human-robot interaction," in Proceedings of the 10th International Symposium of Robotics Research, (Lorne, Victoria, Australia), November 2001.
4. P. Backes, K. Tso, and G. Tharp, "The web interface for telescience," in IEEE International Conference on Robotics and Automation, ICRA'97, (Albuquerque, NM), 1997.
5. M. Stein, "Behavior-based control for time-delayed teleoperation," Tech. Rep. 378, GRASP Laboratory, University of Pennsylvania, 1994.
6. G. Dorais, R. Bonasso, D. Kortenkamp, B. Pell, and D. Schreckenghost, "Adjustable autonomy for human-centered autonomous systems on Mars," in 6th International Joint Conference on Artificial Intelligence (IJCAI), Workshop on Adjustable Autonomy Systems, 1999.
7. D. Kortenkamp, R. Bonasso, D. Ryan, and D. Schreckenghost, "Traded control with autonomous robots as mixed initiative interaction," in 14th National Conference on Artificial Intelligence, (Rhode Island, USA), July 1997.
8. T. Röfer and A. Lankenau, "Ensuring safe obstacle avoidance in a shared-control system," in Seventh International Conference on Emergent Technologies and Factory Automation, ETFA'99, (Barcelona, Spain), 1999.
9. G. Ferguson, J. Allen, and B. Miller, "TRAINS-95: Towards a mixed-initiative planning assistant," in Third International Conference on AI Planning Systems, AIPS-96, 1996.
10. T. Fong, C. Thorpe, and C. Baur, "Collaborative control: a robot-centered model for vehicle teleoperation," in AAAI Spring Symposium on Agents with Adjustable Autonomy, Technical Report SS-99-06, (Menlo Park), 1999.
11. H. I. Christensen, J. Folkesson, A. Hedström, and C. Lundberg, "UGV technology for urban navigation," in SPIE Defense and Security Conference, Unmanned Ground Vehicle Technology VI, (Orlando, USA), 2004.
12. E. B. Pacis, H. R. Everett, N. Farrington, and D. Bruemmer, "Enhancing functionality and autonomy in man-portable robots," in SPIE Defense and Security Conference, Unmanned Ground Vehicle Technology VI, (Orlando, USA), 2004.
13. J. Albus, "4D/RCS: a reference model architecture for intelligent unmanned ground vehicles," in SPIE AeroSense Conference, Unmanned Ground Vehicle Technology IV, (Orlando, USA), 2002.
14. J. A. Bornstein, "Army ground robotics research program," in SPIE AeroSense Conference, Unmanned Ground Vehicle Technology IV, (Orlando, USA), 2002.
15. G. Schaub, A. Pfaendner, and C. Schaefer, "PRIMUS: autonomous navigation in open terrain with a tracked vehicle," in SPIE Defense and Security Conference, Unmanned Ground Vehicle Technology VI, (Orlando, USA), 2004.
16. R. C. Arkin, T. R. Collins, and Y. Endo, "Tactical mobile robot mission specification and execution," in Mobile Robots XIV, pp. 150–163, (Boston, MA), September 1999.
17. K. S. Ali and R. C. Arkin, "Multiagent teleautonomous behavioral control," Machine Intelligence and Robotic Control (MIROC) 1(1), pp. 3–17, 2000.
18. B. A. Bodt and R. S. Camden, "Technology readiness level 6 and autonomous mobility," in SPIE AeroSense Conference, Unmanned Ground Vehicle Technology VI, (Orlando, USA), April 2004.
19. Y. Endo, D. C. MacKenzie, and R. C. Arkin, "Usability evaluation of high-level user assistance for robot mission specification," in Special Issue on Human-Robot Interaction of the SMC Transactions, Part C, 2004.
20. J. Scholtz, B. Antonishek, and J. Young, "Evaluation of human-robot interaction in the NIST reference search and rescue test arenas," in Performance Metrics for Intelligent Systems (PerMIS'04), 2004.
21. D. Luzeaux and A. Dalgalarrondo, "Hybrid architecture based on representations, perception and intelligent control," in Studies in Fuzziness and Soft Computing: Recent Advances in Intelligent Paradigms and Applications, ISBN 3-7908-1538-1, Physica-Verlag, Heidelberg, 2003.
22. T. Röfer, "Using histogram correlation to create consistent laser scan maps," in IEEE International Conference on Robotics Systems (IROS-2002), (EPFL, Lausanne, Switzerland), 2002.
23. "Quatrièmes Journées Nationales de Recherche en Robotique (JNRR'03)," (Clermont-Ferrand, France, http://lasmea.univ-bpclermont.fr/jnrr03/), 8–10 October 2003.
24. M. Baker and H. A. Yanco, "Autonomy mode suggestions for improving human-robot interaction," in IEEE Conference on Systems, Man and Cybernetics, October 2004.
25. A. Dalgalarrondo, Intégration de la fonction perception dans une architecture de contrôle de robot mobile autonome. PhD thesis, Université de Paris-Sud, Centre d'Orsay, January 2001.
26. A. Dalgalarrondo, "De l'autonomie des systèmes robotisés," Technical report, ref. CTA/02 350 108/RIEX/807, 2003.
27. H.-M. Huang, K. Pavek, J. Albus, and E. Messina, "Autonomy levels for unmanned systems (ALFUS) framework: an update," in SPIE Defense and Security Conference, Unmanned Ground Vehicle Technology VII, (Orlando, USA), 2005.
28. G. M. Kamsickas and J. N. Ward, "Developing UGVs for the FCS program," in SPIE AeroSense, Unmanned Ground Vehicle Technology V, (Orlando, USA), 2003.
29. R. Alami, A. Clodic, V. Montreuil, E. A. Sisbot, and R. Chatila, "Task planning for human-robot interaction," in Joint sOc-EUSAI Conference, October 2005.
30. A. Finzi and A. Orlandini, "A mixed-initiative approach to human-robot interaction in rescue scenarios," in International Conference on Automated Planning and Scheduling (ICAPS), Printed Notes of the Workshop on Mixed-Initiative Planning and Scheduling, pp. 36–43, 2005.
31. T. Fong, C. Thorpe, and C. Baur, "Multi-robot remote driving with collaborative control," IEEE Transactions on Industrial Electronics 50(4), August 2003.
32. J. Albus, R. Lumia, J. Fiala, and A. Wavering, "NASREM: the NASA/NBS standard reference model for telerobot control system architecture," Technical report, Robot Systems Division, National Institute of Standards and Technology, 1987.
33. F. Ingrand, "Architectures logicielles pour la robotique autonome," in Quatrièmes Journées Nationales de Recherche en Robotique (JNRR'03), (Clermont-Ferrand, France), October 2003.


ProCoSA: a software package for autonomous system supervision

Magali BARBIER 1 – Jean-François GABARD 1 – Dominique VIZCAINO 2 – Olivier BONNET-TORRÈS 1,2

1 ONERA/DCSD – 2 av. Edouard Belin – 31055 Toulouse cedex 4 – FRANCE
{ Magali.Barbier, Jean-Francois.Gabard, Olivier.Bonnet }@onera.fr

2 SUPAERO/LIA – 10 av. Edouard Belin – 31055 Toulouse cedex 4 – FRANCE
Dominique.Vizcaino@supaero.fr

Abstract

Autonomy is required onboard uninhabited vehicles that move in partially known and dynamic environments. This autonomy is made possible by the use of embedded decisional software architectures. This paper presents ProCoSA, an asynchronous Petri net-based tool for implementing such architectures. It allows procedures to be programmed and executed in autonomous systems. Vehicle behaviours are modelled using Petri nets; a Petri net player runs the model and manages the links with the other software components. Several decisional architectures developed using ProCoSA for different types of vehicles - Autonomous Underwater Vehicles (AUVs), autonomous Uninhabited Aerial Vehicles (UAVs) and autonomous Uninhabited Ground Vehicles (UGVs) - are described in this paper, together with associated experiments and results.

1. Introduction

Research on autonomy is performed for Uninhabited Ground Vehicles (UGVs), Uninhabited Aerial Vehicles (UAVs), Autonomous Underwater Vehicles (AUVs) and space vehicles. Autonomy is characterised by the level of interaction between the vehicle and a human operator: the higher the level of the operator's decisions, the more autonomous the vehicle. Between tele-operation (no autonomy) and full autonomy (no operator intervention), there are several ways to allow a vehicle to control its own behaviour during the mission [1].

Autonomous vehicles that move in partially known and dynamic environments have to deal with asynchronous disruptive events. Hence the need for implementing onboard decision capabilities that allow the vehicle to perform the mission even when the initial plan prepared offline is no longer valid. Decision capabilities, which guarantee the adaptability of the vehicle to variable environmental conditions, must be implemented in a dedicated architecture able to manage the components of the whole control loop {perception, situation assessment, decision, and action}. The capacity to integrate environmental information given by sensors and to evaluate the current state is indeed essential for the vehicle to ensure its own safety and the desired level of autonomy.

In an embedded decisional software architecture, a high-level function is required to control mission execution: it supervises nominal execution and triggers reactions to disruptive events. This function includes interactions with the physical system through dedicated control algorithms and deliberative task management.

The supervision function considered in this paper, sometimes called the mission execution control function, does not include offline mission preparation, task allocation on computers or actuator control, nor does it replace the underlying real-time operating system. Its central role is shown in Figure 1.


Figure 1 – Central role of the supervision function

The embedded decisional software architecture has to integrate all the data relative to the mission (its objectives), vehicle behaviour monitoring, connection to deliberative software programs, communication with the ground station and other vehicles, and reaction to disruptive events. The main features required for such an architecture are: robustness, reliability, modularity, flexibility, genericity (with regard to mission, vehicle and environment), independence from software components, and easy interfacing.

Several types of architectures exist in the literature. In this paper, we focus on the ProCoSA software package, used by ONERA for controlling and monitoring highly autonomous systems.

2. ProCoSA software package

ProCoSA, which stands for "Programmation et Contrôle de Systèmes à forte Autonomie", was first developed in 1993 at ONERA during research in the field of mobile robotics. Several enhancements and its rewriting in an interpreted language led to its official registration in 1999.

ProCoSA was designed to provide the developer with an integrated package putting together and synchronising the various functions achieving system autonomy, among which:
• data processing (sensor data, situation assessment data, operator input);
• nominal mission monitoring and control (vehicle and payload control actions);
• decision (management of disruptive events, replanning).

These functions are often developed as separate subsystems. They have to co-operate in order to fulfil the autonomous system behaviour requirements for the specified missions. More precisely, the needs are the following:
• offline tasks: specification of the nominal and non-nominal procedures, including co-operation between procedures and software programs, and software program coding for embedded operation; a software program includes a set of functions that can be called by procedures;
• online tasks: procedure execution, event monitoring, and management of the dialog with the operator.

ProCoSA is based on the Petri net graphical and mathematical modelling tool for discrete event systems. It includes the following components:
• EdiPet, a graphical user interface for Petri nets, used both by the developer for procedure design and by the operator for execution monitoring;
• JdP, the Petri net player of ProCoSA:
  - it executes the procedures: watching for the occurrence of events, it fires the event-triggered transitions of the Petri nets and runs the associated actions;
  - it supervises the dialog between procedures and software programs;
  - it manages the communication with systems outside the vehicle;
• Tiny, a Lisp interpreter specially dedicated to distributed embedded applications, which is the development language of JdP. The communication protocol for data exchange is socket-based (TCP/IP).

Tiny and JdP were developed at the computer science and automatic control departments of ONERA respectively. Prolexia Company developed EdiPet.

The following subsections describe:
• the Petri net formalism used in ProCoSA;
• the Petri net player (JdP);
• EdiPet;
• the property verification process.

2.1. ProCoSA Petri nets

A Petri net [2] is a bipartite graph with two types of nodes: P is a finite set of places and T is a finite set of transitions. Arcs are directed and represent the forward incidence function F: P × T → N (from a place to a transition) and the backward incidence function B: P × T → N (from a transition to a place) respectively. The marking of a Petri net is defined as a function from P → N and symbolised by tokens: a given marking is associated with a given state of the system modelled by the Petri net. The evolution of tokens within the net is achieved through transition firing (Figure 2), which obeys the following rules:
• a transition is enabled if its input places contain at least the number of tokens given by the forward incidence function F;
• an enabled transition can be fired; if it is fired, this number of tokens is removed from its input places;
• a number of tokens given by the backward incidence function B is added to its output places.

Petri nets allow sequencing, parallelism and synchronisation to be easily represented.


Figure 2 – Example of a Petri net transition firing sequence: t1 t2 t1 t3
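As a minimal sketch of these firing rules, restricted to the unary arcs used by ProCoSA and written in C++ for illustration only (ProCoSA itself is implemented in an interpreted Lisp dialect, as described below):

    #include <iostream>
    #include <vector>

    // A Petri net with unary arcs: F[t] lists the input places of transition t,
    // B[t] its output places; the marking counts tokens per place.
    struct PetriNet {
        std::vector<std::vector<int>> F, B;  // forward / backward incidence
        std::vector<int> marking;            // tokens per place

        bool enabled(int t) const {
            for (int p : F[t])
                if (marking[p] < 1) return false;   // missing input token
            return true;
        }

        bool fire(int t) {
            if (!enabled(t)) return false;
            for (int p : F[t]) --marking[p];        // consume input tokens
            for (int p : B[t]) ++marking[p];        // produce output tokens
            return true;
        }
    };

    int main() {
        // Two places, one transition moving a token from place 0 to place 1.
        PetriNet net{{{0}}, {{1}}, {1, 0}};
        net.fire(0);
        std::cout << net.marking[0] << " " << net.marking[1] << "\n";  // 0 1
    }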

The Petri nets used by ProCoSA are interpreted nets: triggering events and actions are attached to transitions; an enabled transition is then fired if and only if the associated triggering event occurs, and the associated actions are executed. They are also "safe" nets: only unary arcs are used, and places may not contain more than one token. Special places, called "global places", have been introduced in order to ease synchronisation between nets while preserving modularity: a global place is "shared" between different nets, thus enabling the behaviour modelled by a given net to take into account a state of the system modelled in another net. This feature is particularly suitable for handling disruptive events. Timers can be programmed within ProCoSA: a special action enables a timer variable to be instantiated, which allows actions with a limited duration to be modelled. Finally, the hierarchical modelling features offered by ProCoSA enable the developer to structure the whole application in a generic way: at the highest description level, nets model generic behaviours, regardless of the characteristics of a given vehicle; at the lowest level, they model the sequences of elementary actions to be performed by the vehicle or payload. This modular approach eases quick adaptation to system changes (e.g. taking into account a new payload).

Several types of actions can be associated with transitions:
• activation of a Petri net (a Petri net is activated when it receives its initial marking);
• deactivation of a Petri net (when a Petri net is deactivated, it loses its marking);
• emission of an event;
• emission of a request towards JdP (e.g. a timer initialisation);
• emission of a message towards a software program.

Several parameters can be associated with an event and used by the actions associated with the transition. This makes it possible to establish a limited data flow between the different software programs activated by the Petri nets: when a software program ends, it sends an event towards a Petri net (usually the one that launched it) with a set of output parameters. Those parameters can be immediately transferred by the receiving transition to the next software program activated by this transition.

2.2. The JdP Petri net player

JdP was developed in the Tiny language. Tiny is a Lisp interpreter designed for distributed embedded applications and includes a library implementing the TCP/IP communication protocol. An important feature of ProCoSA lies in the fact that there is no code translation step between the Petri net procedures and their execution: they are directly interpreted by the Petri net player, thus avoiding any additional source of errors.
any supplementary error causes.<br />

When a ProCoSA application is launched, JdP first<br />

reads the Petri net structures and establishes the socket<br />

connections with EdiPet (if used during the execution<br />

phase) and software programs. Specified Petri nets are<br />

activated (they receive their initial markings), and the<br />

internal JdP loop is ready to receive the incoming events.<br />

2.3. EdiPet graphical user interface

Prolexia Company developed the EdiPet graphical user interface (Figure 3). This tool is used both for the development of a ProCoSA project and for execution monitoring. A project in EdiPet is defined by the set of Petri nets, the set of software function names and their relations. EdiPet thus allows:
• the connection, inside the whole project, between JdP, nets and software programs;
• the graphical creation of Petri nets; several editor windows display, and allow the modification of, the attributes associated with each object (net, place, transition, event, action);
• the generation of the relevant interfaces between Petri nets and software programs; EdiPet generates the function prototypes, which then have to be filled in by the software developer;
• the display of the net states during execution; when activated (which means that one token is present), places and transitions are displayed in red.

During the execution phase, EdiPet can be used in the ground station of the autonomous vehicle, provided a communication link is established.
communication link is established.


Figure 3 – EdiPet graphical user interface

The example shown in Figure 4 and Figure 5 models a simple project for the supervision of a UAV. The objective of the mission is to reach a sequence of waypoints. Several payloads are available onboard, and the activation of a given payload is associated with each waypoint. The MISSION Petri net models the main mission phases: roll, takeoff, climb, transit to each waypoint, approach and landing. The GUI software program simulates the guidance of the vehicle. In nominal execution, the DEC decisional software program gives the next waypoint to reach. The EVENTS Petri net models two examples of non-nominal reactions. If a payload fails, MISSION is deactivated, DEC is called and computes another list of waypoints that does not use the faulty payload. The replanning transition of the MISSION net is then fired and the vehicle continues the mission with the new list of waypoints. In case of engine failure, MISSION is also deactivated, DEC is called and computes the nearest emergency site. The EVENTS net directly supervises the transit to this site by calling GUI.

Figure 4 – Example of an EdiPet project

Figure 5 – Examples of EdiPet Petri nets

2.4. Verification process

ProCoSA includes a verification tool, which makes use of well-known Petri net analysis techniques to check that certain "good" properties are satisfied by the procedures, both at the single-procedure level and at the whole-project level (that is to say, taking into account inter-net connections).

The following properties are checked:
• place safety (no more than one token per Petri net place);
• detection of dead markings (deadlocks);
• detection of cyclic firing sequences (loops).

The principle of this analysis relies on the automatic generation of the reachability graph, which contains all the reachable states of each net and of the whole set of interconnected nets. As the nets are safe, this set is necessarily finite, and its analysis makes it possible to verify the above properties.
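A minimal sketch of such an analysis, reusing the PetriNet structure from the earlier sketch and again written in C++ purely for illustration: a breadth-first enumeration of the reachable markings that reports dead markings (deadlocks). Safety and cyclic firing sequences can be checked from the same graph.

    #include <queue>
    #include <set>
    #include <vector>

    // Reuses the PetriNet struct from the earlier sketch (F, B, marking,
    // enabled(), fire()). Enumerates all reachable markings breadth-first;
    // because the nets are safe, this set is finite and the loop terminates.
    std::vector<std::vector<int>> deadMarkings(const PetriNet& net) {
        std::set<std::vector<int>> seen = {net.marking};
        std::queue<std::vector<int>> todo;
        todo.push(net.marking);
        std::vector<std::vector<int>> dead;

        while (!todo.empty()) {
            PetriNet s{net.F, net.B, todo.front()};
            todo.pop();
            bool anyEnabled = false;
            for (std::size_t t = 0; t < net.F.size(); ++t) {
                PetriNet next = s;
                if (next.fire(static_cast<int>(t))) {   // try each transition
                    anyEnabled = true;
                    if (seen.insert(next.marking).second)
                        todo.push(next.marking);        // new reachable marking
                }
            }
            if (!anyEnabled) dead.push_back(s.marking); // deadlock state
        }
        return dead;
    }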


3. Applications

Several projects are ongoing with ProCoSA:
• with DGA/GESMA, on an Autonomous Underwater Vehicle [3];
• with EADS DS SA, on an Uninhabited Aerial Vehicle [4];
• at SUPAERO, a French engineering school, on Uninhabited Ground Vehicles, for team operation [1].

3.1. AUV project

GESMA, in co-operation with ONERA and PROLEXIA, is developing a software architecture with four levels of autonomy, from tele-operated missions to fully autonomous, goal-driven missions. Tele-operation is seen as level 0: the operator uses a control box to move the vehicle. The main tests of the vehicle (battery, communication, sonar and other sensors) were performed at this level. At the 1st autonomy level, the mission is described by an ordered set of elementary controls prepared by the operator. Sixteen controls combining monitoring and modification of the main variables (duration, speed, heading, immersion and altitude) have thus been implemented. In May 2005, sea trials conducted with the Redermor AUV successfully validated the pilot software program. At the 2nd autonomy level, a mission is defined by a set of segments (straight-line trajectories).

A ProCoSA-based architecture has been implemented for the 3rd autonomy level: the mission is defined by a set of mission areas where a survey procedure is performed. At the end of these operations, the vehicle reaches the end area. The environment is defined by bathymetry, currents, forbidden areas and non-navigable water data. The planning software program then has to compute the 2D itinerary (the order in which to visit the mission areas), the 4D trajectory between mission areas, and the survey plan.

3.1.1. Experimental configuration and<br />

decisional architecture overview<br />

Experiments are conducted with the Redermor AUV<br />

(Figure 6). Three computers and thirteen distributed Can<br />

interfaces with computation capabilities are installed on the<br />

plat<strong>for</strong>m. Serial link, Can Bus, I2C and Ethernet<br />

connections are available <strong>for</strong> payload integration and data<br />

exchange. OA1 computer is in charge of complex vehicle<br />

functions, supervision and mission planning; it thus<br />

includes the decisional software architecture. OA2 and<br />

OA3 computers are used <strong>for</strong> sonar payload controls and<br />

treatments like Computer Aided Detection and<br />

Classification algorithms <strong>for</strong> mine warfare. The embedded<br />

architecture is shown on Figure 7.<br />

Figure 6 – Redermor AUV

Figure 7 – AUV embedded architecture (diagram: on OA1, the ProCoSA Petri Player executes the vehicle behaviour Petri nets and connects the Planning (PLN), Guidance (GUI), Data manager (GDD) and event (EVT) programs to the Pilot (IDC) and the OA_NAVIO data server; OA2 and OA3 host the payload drivers and treatments; sensors and actuators are reached through the CAN bus drivers)

For mission supervision, the decisional architecture in<br />

OA1 computer includes:<br />

• the Petri net player of ProCoSA;<br />

• vehicle behaviour modelling through Petri nets, <strong>for</strong><br />

nominal and non-nominal situations;<br />

• four software programs connected to ProCoSA:<br />

planning (PLN), guidance (GUI), dynamic data<br />

manager (GDD) and event listener (EVT);<br />

• the pilot program software (IDC) that computes controls<br />

sent to the engine;<br />

• the data server (OA_NAVIO) developed by GESMA<br />

that carries out bi-directional communication links with<br />

the hardware architecture.<br />

3.1.2. Nominal scenario<br />

The behaviour of the vehicle during the execution of a<br />

mission is described in eleven Petri nets. This description is<br />

hierarchical (Figure 8):<br />

• In Mission net, at level 1, two places model the stop and<br />

the ongoing states of the mission;<br />

• Mission_Phases net at level 2 models main phases of<br />

the ongoing mission: planning initialisation, the loop<br />

structure enabling the vehicle to join each mission area<br />

(transit to the area and survey) and transit to the end<br />

area;


• Three Petri nets model level 3: Transit_to_Area net <strong>for</strong><br />

transiting to the next mission area and Operation net <strong>for</strong><br />

survey achievement; Initialisation net runs itinerary and<br />

operation planning when starting the mission;<br />

• Level 4 is devoted to computation: Itinerary_Planning<br />

net computes an itinerary <strong>for</strong> the saved mission taking<br />

into account non-navigable areas; Trajectory_Planning<br />

net computes a trajectory between two areas modelled<br />

by their centroid waypoint; Operation_Planning net<br />

asks <strong>for</strong> vehicle state and computes the operative<br />

sequence; both a trajectory and an operative sequence<br />

are composed of linear trajectory followings and course<br />

changes;<br />

• Level 5 executes the mission: Trajectory net asks <strong>for</strong> the<br />

next trajectory and runs it; Survey net asks <strong>for</strong> the<br />

operative sequence and runs it; Planning_and_<br />

course_change net computes the required gyrations and<br />

heading following sequence and executes it.<br />

Figure 8 – Hierarchy of Petri nets (diagram: Mission, Mission Phases, Transit to Area, Operation, Initialisation, Itinerary planning, Trajectory planning, Operation planning, Trajectory, Survey, and Planning and course change nets, annotated with the software programs they call: PLN planning, GUI guidance, GDD data manager)

3.1.3. Non-nominal scenario<br />

Many events can affect AUV missions and require<br />

onboard replanning. At present, three types of events are<br />

implemented:<br />

• an alarm event <strong>for</strong>ces the vehicle to move directly to the<br />

end area: a new itinerary to avoid non-navigable areas<br />

and a new trajectory are computed;<br />

• when arriving on an objective area, the real current is<br />

different from the predicted one and invalidates the<br />

already-computed survey: a new operative sequence is<br />

thus computed;<br />

• the operator asks <strong>for</strong> a local operation of inspection, <strong>for</strong><br />

example to inspect a suspicious object. A specific<br />

operative sequence is planned be<strong>for</strong>e the vehicle<br />

resumes its mission.<br />

These events are considered in the architecture through<br />

three new Petri nets. The Decision net implements the<br />

decisions that the vehicle must make according to the type<br />

of event, e.g. return to the end area in case of an alarm<br />

event. The Action net executes the decisions; it can run<br />

nominal nets. The Inspection net executes the inspection<br />

asked by the operator.<br />


3.1.4. Lab bench tests<br />

A bench test has been developed to test the whole<br />

decisional architecture. The OA_NAVIO data server has<br />

been connected on the one hand to a Redermor simulator, and on the other hand to the IOVAS interface. The simulation of several missions made it possible to validate the desired behaviour of the vehicle (Petri nets), the decisional functions of PLN, the management of dynamic data in GDD, the guidance and piloting of the vehicle (GUI and IDC) and the reception of disruptive events (EVT), together with the supervision on the IOVAS operator interface (Figure

9). Nominal and non-nominal scenarios were both<br />

successfully simulated.<br />

Figure 9 – Supervision of a simulated AUV mission.<br />

Survey area is blue, <strong>for</strong>bidden area is red, planning trace is<br />

yellow, vehicle simulated trace is green.<br />

3.1.5. Sea experiments<br />

Recent sea experiments have been conducted in<br />

Douarnenez Bay. Three missions were carried out. The<br />

vehicle is followed by acoustic means, and only a few<br />

points are currently available (Figure 10). Vehicle<br />

immersion, transit to the survey area, line following at a<br />

given altitude and return to the end area were successfully<br />

per<strong>for</strong>med. These experiments validated the embedded use<br />

of ProCoSA. Emphasis should now be put on guidance<br />

accuracy, perception function, classification of disruptive<br />

events, situation monitoring and assessment functions.<br />

Figure 10 – Supervision of a real AUV mission.<br />

Acoustic vehicle trace is green.


3.2. UAV project<br />


EADS and ONERA are involved in a national project<br />

that aims at testing an architecture designed <strong>for</strong> mission<br />

supervision in a UAV and demonstrating the relevance of<br />

such <strong>architectures</strong> in future uninhabited vehicles. As all<br />

categories of UAVs have to per<strong>for</strong>m their missions in<br />

complex environments with the same types of constraints,<br />

the embedded architecture has to be generic, i.e. not<br />

dedicated to a given mission, environment or vehicle. As<br />

the mission may be disrupted by internal or external events,<br />

e.g. failures, weather situation, interfering aircraft, and<br />

threats, onboard plan monitoring and replanning are<br />

required in order to deal with nominal or disruptive events,<br />

avoid systematic return to base and proceed with the<br />

mission as well as possible given the new constraints.<br />

3.2.1. Experimental configuration and<br />

decisional architecture overview<br />

Experiments are conducted on a light plane, a Dyn’Aero<br />

MCR-4S (Figure 11). Two computers are devoted to the<br />

control part of the plane, and a third one to the decision<br />

part, i.e. mission management. The first control computer is<br />

directly linked to the plane sensors and actuators (e.g. the<br />

automatic pilot) and to the ground station, while the second<br />

one acts as an interface between the previous real time<br />

control computer and the decision computer: it sends<br />

<strong>for</strong>matted frames and interprets elaborated orders.<br />

Figure 11 – Light plane used <strong>for</strong> experiments in UAV<br />

project<br />

The role of the software decisional architecture<br />

implemented on the decision computer through ProCoSA<br />

(Figure 12) is thus to monitor the main mission phases of<br />

the nominal scenario, to manage the dialog with the<br />

operator (payload use), and to generate control decisions when disruptive events occur. In order to elaborate the pre-specified events used by ProCoSA from the telemetry

frame data, an additional interface software layer was<br />

developed (Figure 13).<br />
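The kind of reduction this interface layer performs can be pictured with a short sketch; the telemetry fields, thresholds and event names below are invented for illustration, and only the principle of turning frame data into pre-specified discrete events such as “ready for takeoff” comes from the text.

```python
# Hypothetical sketch of the interface layer's event elaboration: raw
# telemetry fields (names invented for illustration) are reduced to the
# discrete, pre-specified events the Petri net player consumes.
from dataclasses import dataclass

@dataclass
class TelemetryFrame:
    engine_rpm: float
    ground_speed: float      # m/s
    flaps_set: bool
    on_runway: bool

def elaborate_events(frame: TelemetryFrame) -> list[str]:
    events = []
    # "ready for takeoff" is emitted only once all its conditions hold,
    # so the Petri nets never see raw, noisy telemetry values
    if frame.on_runway and frame.flaps_set and frame.engine_rpm > 2200:
        events.append("ready_for_takeoff")
    if frame.ground_speed < 0.1 and not frame.on_runway:
        events.append("vehicle_stopped")
    return events

print(elaborate_events(TelemetryFrame(2400.0, 0.0, True, True)))
# -> ['ready_for_takeoff']
```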

Figure 12 – UAV embedded decisional software architecture (diagram: on the decision computer, the ProCoSA Petri Player (JdP) executes the Petri nets edited with EdiPet, with subsystem software connections for decision computation; the interface software layer, with its mission, environment and UAV databases, handles the dialog windows, event processing and frame receipt/emission towards the control computer)

Figure 13 – Example of the “ready for takeoff” event elaboration

3.2.2. Nominal scenarios<br />


A four-level mission modelling architecture was<br />

defined in order to guarantee a generic approach:<br />

• level 0: initialisation of the communication protocols<br />

between ProCoSA and the other software layers;<br />

• level 1: global state of the mission and modes<br />

monitoring (nominal - non nominal);<br />

• level 2: main nominal phases of the mission (from pre-flight tests and takeoff to touchdown and end-of-flight

tests);<br />

• level 3: sub-phases of the mission.<br />

Level 3 corresponds to less generic procedures, i.e.<br />

more specific to the vehicle type or to mission and payload<br />

characteristics. The Petri net shown on Figure 14 details the<br />

linking of the different steps within the operational area:<br />

this net clearly shows the looped structure enabling the set<br />

of pre-programmed tasks to be achieved, and includes<br />

communication requests to the operator. ProCoSA timers<br />

are used to limit the time allowed <strong>for</strong> the operator’s answer.



Figure 14 – Modelling of the linking of operational tasks<br />

3.2.3. Non-nominal scenarios<br />

In order to be able to apply a generic approach to deal with disruptive events, they were classified into four categories (a short dispatch sketch follows the list):

• catastrophic events lead to mission abortion, and cannot<br />

be recovered; when such an event occurs, the<br />

processing of any other kind of events is aborted and no<br />

further incoming event can be processed; example:<br />

engine total failure;<br />

• safety-related events lead to modifying the flight profile<br />

or the flight plan - e.g. change route <strong>for</strong> a while - which<br />

may induce delays or new constraints on the use of the<br />

payload; examples: interfering aircraft, new <strong>for</strong>bidden<br />

area, turbulence...<br />

• mission-related events only have consequences on the<br />

mission itself; replanning amounts to adapting the mission

to the new constraints, e.g. remove waypoints;<br />

examples: camera failure, violated temporal constraint,<br />

new mission goal...<br />

• communication-related events are related to<br />

communication breakdowns between the UAV and the<br />

ground; such events result in the UAV being fully<br />

“autonomous”; it therefore has to proceed with the

mission as planned; example: telemetry failure.<br />
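As announced above, a hedged sketch of this classification as a dispatch table follows; the event names and handler actions are invented, while the four categories and their consequences come from the text.

```python
# Illustrative sketch of the four-category event classification; the
# category semantics follow the text, the concrete events and actions
# are invented for the example.
from enum import Enum, auto

class Category(Enum):
    CATASTROPHIC = auto()       # abort, stop processing further events
    SAFETY = auto()             # modify the flight profile / flight plan
    MISSION = auto()            # replan the mission under new constraints
    COMMUNICATION = auto()      # proceed autonomously with the current plan

CLASSIFICATION = {
    "engine_total_failure": Category.CATASTROPHIC,
    "interfering_aircraft": Category.SAFETY,
    "new_forbidden_area":   Category.SAFETY,
    "camera_failure":       Category.MISSION,
    "telemetry_failure":    Category.COMMUNICATION,
}

def supervise(events):
    for name in events:
        cat = CLASSIFICATION[name]
        if cat is Category.CATASTROPHIC:
            print(name, "-> abort mission")
            break                         # no further event is processed
        elif cat is Category.SAFETY:
            print(name, "-> amend flight profile or plan")
        elif cat is Category.MISSION:
            print(name, "-> replan mission goals")
        else:
            print(name, "-> continue mission as planned")

supervise(["interfering_aircraft", "camera_failure"])
```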

According to this classification, one Petri net was designed for each disruptive event category; an example is given by the engine failure Petri net shown on Figure 15: one can note the use of the ProCoSA “global places” feature (see section 2.1), which makes it possible to adapt the reconfiguration actions to the current state of the mission. This reconfiguration process is achieved through software function activation requests, which make it possible to build a set of control orders to be sent to the control computer.

Figure 15 – Engine failure reaction modelling<br />

3.2.4. Ground and flight tests<br />

Two series of field tests are planned in March and May<br />

2006. A test series will be organised as a two-step process:<br />

during the first week, ground tests will be conducted in<br />

order to prepare and simulate the scenarios, which will be<br />

run on the plane. Flight tests will be conducted during the<br />

second week, with a pilot and an operator onboard the<br />

plane.<br />

The nominal scenario will be tested first. Non-nominal scenarios involving a single disruptive event will be considered afterwards, and finally scenarios including

two or three cumulative disruptive events.<br />

A double-check process will be carried out during each

flight test. Flight data (telemetry frames) and the<br />

corresponding Petri net states will be registered onboard.<br />

The ProCoSA layer of the decision computer will be<br />

duplicated on the ground station that will also receive the<br />

real-time telemetry frames, thus enabling system state to be<br />

monitored on the ground.<br />

3.3. UGVs project<br />

The computer science and automation lab (LIA) at<br />

SUPAERO and the Systems Control and Flight Dynamics<br />

Department (DCSD) at ONERA are involved in a cooperation<br />

on mobile robotics to answer a national need to<br />

integrate robots into military operations. The project goal is<br />

to operate several autonomous robots. In these studies, the<br />

choice of a centralised architecture was made.<br />

3.3.1. Experimental configuration and<br />

decisional architecture overview<br />

Robots used in the project are Pekee robots, developed<br />

by Wany Robotics (Montpellier, France). They feature



three individual racks <strong>for</strong> computer cards (<strong>for</strong><br />

communication via WiFi, and/or camera and image<br />

management), a shock detection sensor and are surrounded<br />

by an Infra Red sensor ring (Figure 16).<br />

Figure 16 - A Pekee robot. Note the WiFi antenna, the IR<br />

sensor ring and the camera, as well as two occupied racks<br />

Two libraries were developed:<br />

• the movement library stores elements such as free<br />

translation and rotation but also half-controlled<br />

displacements such as translation until obstacle or<br />

controlled movements such as following a wall, a<br />

corridor, a sinuous route...<br />

• the picture library is based on <strong>open</strong>CV primitives and<br />

allows in particular the detection of obstacles and<br />

localisation of markers.<br />

These library elements constitute a set of services that<br />

the agent may use in order to achieve the mission goals.<br />

Several groups of students worked on these<br />

experiments. The last objective was to implement the<br />

control of the basic moves <strong>for</strong> two robots in a known<br />

environment. The mission (Figure 17) consists in virtually<br />

changing the position of several coloured rings following a<br />

defined order. As robot moves are independent, a planning<br />

algorithm was developed to manage area occupancy<br />

conflicts.<br />

Figure 17 – UGVs mission (diagram: start areas, load areas and a bottleneck area in the labyrinth crossed by the Pekee robots)

The control architecture <strong>for</strong> each robot heavily relies on<br />

ProCoSA. The architecture is composed of three layers<br />

(Figure 18):<br />

• the ProCoSA layer models actions chronology; it is<br />

centralised;<br />


• the interface layer translates ProCoSA orders into robot<br />

controls and robot sensor data into ProCoSA events; it<br />

is written using the Visual C++ development tool;<br />

controls are mainly related to speed and heading;<br />

• the robot layer executes controls and sends sensor data<br />

coming from IR sensors and camera image.<br />

Figure 18 – UGV architecture (diagram: ProCoSA exchanges events and orders with the Visual C++ interface, which exchanges sensor data and controls with the Pekee robot)

3.3.2. Lab tests<br />


Eight elementary moves have been modelled: they<br />

allow the robot to move in the labyrinth and to take into<br />

account the bottleneck area (go <strong>for</strong>ward, go backward, turn<br />

right, turn left, follow right wall, follow left wall, enter<br />

bottleneck, exit bottleneck). Only one robot may enter this area at a time. Seven Petri nets were developed, three per robot and one for their synchronisation, i.e. the management of the conflict in the bottleneck area according to the planning algorithm.
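The conflict management can be pictured as a shared one-token resource, which is what the synchronisation net provides: a robot must take the token to enter the bottleneck and gives it back on exit. The sketch below illustrates this idea with invented names; it is not the project's actual code.

```python
# Sketch of the bottleneck synchronisation idea: the shared area is
# modelled as a single token a robot must hold to enter, exactly what a
# one-token "free" place gives in the synchronisation Petri net.
import threading
import time

bottleneck_free = threading.Semaphore(1)   # one token: one robot inside

def run_robot(name, moves):
    for move in moves:
        if move == "enter bottleneck":
            bottleneck_free.acquire()      # take the token (wait if held)
            print(name, "enters bottleneck")
        elif move == "exit bottleneck":
            print(name, "exits bottleneck")
            bottleneck_free.release()      # give the token back
        else:
            print(name, move)
        time.sleep(0.01)

plan_a = ["go forward", "enter bottleneck", "go forward", "exit bottleneck"]
plan_b = ["follow right wall", "enter bottleneck", "exit bottleneck"]
threads = [threading.Thread(target=run_robot, args=(n, p))
           for n, p in (("robot_1", plan_a), ("robot_2", plan_b))]
for t in threads: t.start()
for t in threads: t.join()
```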

The mission was successfully executed by the robots. This validated the use of ProCoSA for synchronising a two-robot mission.

3.3.3. Generic supervision approach<br />

Work [5] is being carried out on designing a more

generic supervision architecture that would take advantage<br />

of the <strong>modular</strong> nature of the robot and run the controller<br />

and the diagnosis module (Figure 19).<br />

Figure 19 - Robot embedded architecture<br />

Figure 20 proposes a Petri net model for

the controller. The initial planning creates the plan. The<br />

execution phase runs the plan that in its turn executes<br />

actions from the libraries. A replanning step is triggered at



the occurrence of a disruptive event: the robot is set in a<br />

reaction mode (safety mode) while the event is analysed.<br />

The consequences of the event are then dealt with during<br />

the repair that recreates the plan. Once the repair is<br />

calculated, it is adjusted so as to smoothly switch from<br />

reaction to repaired plan execution.<br />

Figure 20 - ProCoSA plan controller<br />

3.3.4. Future experiments<br />

The mission will consist in a search and rescue<br />

operation in a partially known urban environment. The<br />

team is composed of two aerial robots and two terrestrial<br />

robots. The robots have knowledge of possible routes to<br />

move in the environment. The uncertainty parameters, such<br />

as obstacles barring expected paths, robot or service<br />

failure... are handled through event firings that trigger<br />

replanning phases on parts of the plan.<br />

The experimental set-up uses four Pekee robots, a labyrinth<br />

modelling the environment and a transparent pane to bear<br />

the “aerial” robots (Figure 21). The two aerial robots are equipped with a downward-looking camera in order to detect

ground obstacles and deliver accurate global localisation<br />

<strong>for</strong> ground robots. All communications are WiFi-based.<br />


Figure 21 - Experimental set-up. Note the two aerial robots<br />

on the Plexiglas pane with downward camera<br />

4. Conclusions<br />

The current version of ProCoSA allows the design of embedded decisional software architectures: Petri nets

describe the execution logic <strong>for</strong> the various specified<br />

autonomous vehicle behaviours, including both nominal<br />

mission phases and reactions to disruptive events. It also<br />

manages connections with software programs associated with

specific functionalities such as situation assessment,<br />

planning, guidance. Those behaviours are directly<br />

interpreted and executed by the Petri net player, without<br />

any intermediate code translation step, and on-line<br />

execution monitoring is possible with EdiPet. Significant<br />

validation steps can be achieved during the design phase<br />

thanks to Petri net <strong>for</strong>mal analysis properties.<br />

ProCoSA software has been successfully used in<br />

several projects <strong>for</strong> the control of autonomous vehicles.<br />

First sea experiments validated its use in an embedded architecture. Aerial tests are planned by the end of this

year. Research on its implementation in several mobile<br />

robots composing a team is ongoing.<br />

The main objective of the proposed embedded<br />

decisional software architecture is to supervise mission<br />

execution whatever occurs. It thus deals with<br />

environmental uncertainties: reactions to disruptive events<br />

are implemented in Petri nets that call deliberative tasks.<br />

Deliberative tasks and mission data are independent of ProCoSA, which gives modularity and genericity to the whole architecture. Indeed, the specificities of the vehicle are taken into account in databases and in the interfaces

connected to the control computer.<br />

Current results point out several possible ways of<br />

improvement:<br />

• a perception function, for which methods to elaborate and update real-world sensor data must be studied and developed, would help to make appropriate decisions; of course, good data quality is required as well;

• a situation monitoring and assessment function that<br />

estimates the system parameters and predicts their


evolution could help to anticipate the arrival of<br />

disruptive events; this should also increase the security level of the vehicle; image processing is also a difficult

task to implement onboard;<br />

• a generic study of all types of events, their classification<br />

and identification of associated reactions are necessary<br />

in any autonomous system, as emphasised in the UAV

experiments;<br />

• all studies drew attention to the necessity of enhanced planning algorithms for autonomous vehicles; research has to be conducted to improve the proposed algorithms with regard to duration constraints; mission objectives

could also be selected onboard (objective planning<br />

function) according to collected environmental data;<br />

• all possible communications between the ground<br />

operator and the vehicle have to be defined properly,<br />

especially the operator’s decisions and the associated<br />

reactions onboard. This communication protocol gives<br />

the vehicle its level of autonomy. An onboard<br />

architecture adapting its autonomy level according to<br />

the types of disruptive events could also be considered;<br />

• simulation remains essential to validate the autonomy architecture before its implementation; the use of a

bench test be<strong>for</strong>e sea experiments in the AUV project<br />

led to architecture validation during the first<br />

autonomous missions;<br />

• studies on the collaboration between several<br />

autonomous vehicles have to continue, as this is the<br />

main point in future operational theatres;<br />

• the operator’s role evolves as the autonomy level<br />

increases, and ground systems have to evolve as well,<br />

e.g. with implementation of decision support systems.<br />

References<br />


[1] B.T. Clough, Metrics, Schmetrics ! How the heck do<br />

you determine a UAV’s autonomy anyway ?<br />

Per<strong>for</strong>mance Metrics <strong>for</strong> Intelligent Systems<br />

Workshop, 2002, Gaithersburg, MD, USA.

[2] Murata, T. (1989). Petri nets: properties, analysis and<br />

applications. Proceedings of the IEEE, 77(4), pp. 541-580.

[3] F. Dabe, H. Ayreault, M. Barbier, S. Nicolas, Goal<br />

Driven Planning and Adaptivity <strong>for</strong> AUVs, UUST 05<br />

Unmanned Untethered Submersible Technology, 21-24<br />

August 2005, Durham, NH, USA.<br />

[4] M. Barbier, J.F. Gabard, J.H. Llareus, C. Tessier, J.<br />

Caron, H. Fortrye, L. Gadeau, G. Peiller,<br />

Implementation and Flight Testing of an Onboard<br />

Architecture for Mission Supervision, 21st IUAVS

International Unmanned Air Vehicle Systems<br />

Conference, April 3-5, 2006, Bristol, UK.<br />

[5] O. Bonnet-Torrès and C. Tessier, Cooperative Team<br />

Plan: Planning, Execution and Replanning, AAAI'06<br />

Spring Symposium on Distributed Schedule and Plan<br />

Management, March 2006, Stanford, CA, USA.



Abstract<br />


GOAL DRIVEN PLANNING AND ADAPTIVITY FOR AUVS<br />

Hervé Ayreault, Frederic Dabe<br />

GESMA, Groupe d’Etudes Sous-Marines de l’Atlantique<br />

BP42 – 29240 Brest Armées – France<br />

{herve.ayreault, frederic.dabe}@dga.defense.gouv.fr<br />

Magali Barbier<br />

ONERA, Office National d’Etudes et de Recherches Aérospatiales, Toulouse Center<br />

BP 4025 – 31055 Toulouse cedex 4 – France<br />

Magali.Barbier@onera.fr<br />

Stéphane Nicolas, Gaël Kermet<br />

PROLEXIA, 865 avenue de Bruxelles – 83500 La Seyne-sur-mer – France<br />

{snicolas, gkermet}@prolexia.fr<br />

Environmental knowledge is increasing every<br />

day but it is neither comprehensive nor<br />

perfect. For long unsupervised missions, it is difficult to cope with these environmental uncertainties. Therefore, an embedded system

is required to allow automatic re-planning<br />

with respect to real environmental data<br />

collected during the mission. In order to<br />

achieve this re-planning function, the<br />

in<strong>for</strong>mation system must be able to deal with<br />

the goals that led to the initial mission<br />

planning.<br />

GESMA, in cooperation with ONERA and<br />

PROLEXIA, develops a software architecture<br />

with four levels of autonomy from<br />

teleoperated mission to fully autonomous goal<br />

driven mission. The 3rd level is described in

this paper; at this level, the mission is defined<br />

by a set of objective areas where a survey<br />

procedure is per<strong>for</strong>med. The onboard<br />

hardware architecture is implemented on three<br />

computers whereas the software architecture<br />

has been developed around ProCoSA. The<br />

Man Machine Interface on one hand<br />

facilitates the offline preparation of a mission<br />

and on the other hand supervises the online<br />

tracking of the vehicle. Planning algorithms compute online new itineraries and

trajectories according to the occurrence of<br />

critical events. On-going tests are per<strong>for</strong>med<br />

both by simulation and at sea.<br />

The paper will present the different autonomy<br />

levels of the onboard architecture, the way<br />

they are implemented on the AUV, optimization algorithms, the operator's MMI and

results of on-going tests.<br />

Introduction<br />

Research on autonomy <strong>for</strong> unmanned vehicles<br />

is per<strong>for</strong>med in robotic, aerial and spatial<br />

fields. The autonomy is characterized by the<br />

level of interaction between the vehicle and<br />

the operator (human in general): the more<br />

abstract the operator decisions are, the more<br />

autonomous the vehicle is. Between teleoperated vehicles (no autonomy) and fully autonomous vehicles (no operator intervention), there are several ways to allow a system to control its own behavior during its mission

[1].


When the vehicle moves in a partially known<br />

and dynamic environment, one way to make<br />

the vehicle autonomous is to implement<br />

onboard decisional capabilities. They allow<br />

the vehicle to per<strong>for</strong>m its mission even when<br />

the initial plan prepared offline is no longer valid. Decision capabilities, which guarantee

the adaptivity of the vehicle, must be<br />

implemented in an architecture in order to<br />

close the loop {perception, situation<br />

evaluation, decision, and action}. The<br />

capacity to integrate environmental<br />

in<strong>for</strong>mation via sensors and to evaluate the<br />

current state is indeed essential <strong>for</strong> the vehicle<br />

to assure its own safety and a minimum level<br />

of autonomy.<br />

Figure 1 - Redermor AUV<br />


GESMA has been working on autonomy projects for several years. One is devoted to the

development of an autonomous system <strong>for</strong> the<br />

execution of missions in a partially known<br />

environment. The choice was made to<br />

implement several levels of increasing<br />

autonomy while conducting in parallel sea<br />

trials to validate each implementation. Sea<br />

trials are conducted with the Redermor AUV.<br />

In this project, ONERA develops the onboard<br />

decisional architecture, whereas PROLEXIA<br />

develops the man machine interface.<br />


The paper first describes the selected levels of<br />

autonomy in relation with the type of mission<br />

to be performed. The second part describes the architecture for a high decisional autonomy level. A man-machine interface to prepare the mission and to supervise it is then presented in the third part. The fourth part introduces the

event generation by data acquisition and<br />

treatment. The main decisional task (fifth<br />

part) is the computation of a new plan when<br />

the current plan fails: optimization algorithms<br />

are implemented to allow the online reaction<br />

to events; therefore, the goal-driven deliberative planning produces a plan of actions based on a set of high-level goals and constraints. On-going tests given in the sixth part highlight the interest of the architecture when events occur that modify the safety, the mission goal or the measurement process. We conclude on this research, which could lead to an operational product.

Selected levels of autonomy and missions<br />

Teleoperation is seen as level 0: the

operator uses a control box to move the<br />

vehicle. Main tests of the vehicle have been<br />

per<strong>for</strong>med within this level: battery,<br />

communication, sonar, and other sensors…<br />

At the 1st level, an ordered set of elementary

controls prepared by the operator describes<br />

the mission. Sixteen controls combining the monitoring and modification of the main

variables have thus been implemented. Main<br />

variables are duration, speed, heading,<br />

immersion and altitude; examples of controls are “follow the current heading for X seconds”, “go to the X immersion with the

same heading”, “turn until the X heading”.<br />

Then, the onboard system, through a commands interface software program, computes online the setpoints sent to the actuators of the

vehicle.<br />

At the 2nd level, the operator defines a set of

segments (straight line trajectories). Onboard,<br />

a planning software computes the course<br />

changes between the end of a segment and the


beginning of the next one. A guidance

software program computes 1st level

elementary controls to follow the different<br />

parts of the obtained plan. An emergency plan<br />

is also regularly updated to allow the vehicle<br />

to join one of the pre-defined recovery

areas if an emergency event occurs.<br />

At the 3rd level, the mission is defined by a set

of mission areas where a survey procedure is<br />

per<strong>for</strong>med. The environment is defined by<br />

bathymetry, currents, <strong>for</strong>bidden areas and<br />

non-navigable water data. The planning<br />

software program has then to compute the 2D<br />

itinerary (the order to join the mission areas),<br />

the 4D trajectory between mission areas, and<br />

the survey planning. The guidance is similar<br />

to the one of the 2 nd level, and the navigation<br />

to the one of the 1 st level. As <strong>for</strong> the 2 nd level,<br />

in order to be adaptive, the onboard<br />

architecture must be able to react to events,<br />

which modify the initial planning.<br />

Full autonomy is seen as the 4th level: the

vehicle does its best to per<strong>for</strong>m the global<br />

mission without communication with the<br />

operator. In real situations, this autonomy level is currently not applicable for the whole

duration of a mission. Some critical decisions<br />

have to be validated by an operator; some<br />

delicate tasks also require human<br />

intervention. However, the needs in terms of<br />

autonomy vary during a mission, and a<br />

solution could be to adjust the level according<br />

to the evaluated situation. For example, the 4th

level is required when communication links<br />

are – intentionally or not – cut off. These<br />

adjusts could thus be per<strong>for</strong>med by the<br />

vehicle or by the operator [2].<br />

This paper focuses on the 3rd autonomy level.

Onboard architecture<br />


Three computers and thirteen distributed CAN interfaces

with computation capabilities are installed on<br />

the plat<strong>for</strong>m. Serial link, Can Bus, I2C and<br />

Ethernet connections are available <strong>for</strong> all<br />

types of payload integration and data<br />

exchange.<br />

The thirteen CAN interfaces are dedicated to the vehicle itself and support the level 0 and level 1 functions, for example fin controllers or battery monitoring.

One of the computers (OA1) is in charge of complex and real-time vehicle functions. Supervision and mission planning are implemented in another computer (OA2).

Levels 2, 3 and 4 are executed on that<br />

plat<strong>for</strong>m.<br />

Two computers (OA2 & OA3) are used <strong>for</strong><br />

sonar payload controls and treatments like<br />

Computer Aided Detection and Classification<br />

algorithms <strong>for</strong> mine warfare. This program<br />

has already been tested and can generate Mine<br />

Like Contacts as events <strong>for</strong> future replanning.<br />

The mission control software has Ethernet<br />

interfaces and a specific driver allows<br />

communication with the Can bus. It has a<br />

<strong>modular</strong> design in order to facilitate<br />

development, integration and tests. Its<br />

architecture physically separates levels 0 and 1 from the others. On Figure 2, the yellow

boxes model the decisional architecture, the<br />

blue box models the direct relationship<br />

between this architecture and the action level,<br />

the green box models the computation of<br />

consigns (1st level program), and the pink<br />

box models the event generation by data<br />

acquisition and treatment. The bottom boxes<br />

model the communication with the hardware<br />

architecture.<br />

Figure 2 - Onboard architecture



The decisional architecture is based on the<br />

ProCoSA program [3], which was developed<br />

<strong>for</strong> programming and execution monitoring of<br />

autonomous systems. Behavior of the vehicle<br />

during the mission is described by Petri nets<br />

[4] (directed graphs with two kinds of nodes,<br />

called places and transitions); in ProCoSA,<br />

places represent the considered behavior<br />

internal states and transitions indicate the phenomena that change the behavior execution state. The Petri Player of ProCoSA is the automaton that runs the system in accordance with the Petri nets and

computation software such as the planning<br />

and guidance programs.<br />
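The token game such a player performs can be sketched in a few lines; the following illustration (not ProCoSA's implementation) fires the transition labelled by an incoming event whenever all its input places are marked.

```python
# Minimal sketch of the "token game" a Petri net player performs,
# assuming each transition is labelled by the event that fires it.
# An illustration of the principle only, not ProCoSA itself.

class PetriPlayer:
    def __init__(self, transitions, marking):
        # transitions: event -> (input places, output places)
        self.transitions = transitions
        self.marking = set(marking)

    def on_event(self, event):
        pre, post = self.transitions[event]
        if pre <= self.marking:            # is the transition enabled?
            self.marking -= pre            # consume the input tokens
            self.marking |= post           # produce the output tokens
            print(f"{event}: marking is now {sorted(self.marking)}")
        else:
            print(f"{event}: ignored (not enabled)")

# toy behaviour net: idle -(start)-> transit -(arrived)-> survey
player = PetriPlayer(
    {"start":   ({"idle"},    {"transit"}),
     "arrived": ({"transit"}, {"survey"})},
    marking={"idle"})
player.on_event("arrived")   # ignored: no token in 'transit' yet
player.on_event("start")
player.on_event("arrived")
```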

Man-Machine Interface<br />

The Man Machine Interface, named IOVAS,<br />

provides several graphical tools to prepare,<br />

check and supervise missions of different<br />

kinds of vehicles: AUVs, ships.<br />

The preparation tool (Figure 3 and Figure 4)<br />

makes it possible to graphically define the vehicles' configuration and constraints and to design

the tasks sequence to be executed by each of<br />

them. Each task definition is dynamically<br />

checked with the environment data (altitude<br />

to ground, <strong>for</strong>bidden areas, current) and with<br />

the vehicle’s constraints like max autonomy,<br />

max immersion, min altitude, max speed etc.<br />

The preparation tool displays bathymetric<br />

lines, currents (arrows) and <strong>for</strong>bidden areas to<br />

help the operator to prepare the mission.<br />

During the preparation process, at any time,<br />

the operator can validate the mission by<br />

running the planning algorithm. The predicted<br />

trajectory is then displayed over the map.<br />
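This dynamic checking can be pictured with a small sketch in which each task is validated against the vehicle constraints and the environment before being accepted; the constraint names and values are illustrative assumptions, not IOVAS's actual rules.

```python
# Hypothetical sketch of the preparation tool's dynamic task checking:
# each task is validated against the vehicle's constraints and the
# environment before being accepted. Field names and values invented.
from dataclasses import dataclass

@dataclass
class VehicleConstraints:
    max_immersion: float   # m
    min_altitude: float    # m above the seabed
    max_speed: float       # m/s

@dataclass
class Task:
    immersion: float
    speed: float
    seabed_depth: float    # local bathymetry at the task position
    in_forbidden_area: bool

def check_task(task: Task, limits: VehicleConstraints) -> list[str]:
    errors = []
    if task.immersion > limits.max_immersion:
        errors.append("immersion exceeds vehicle maximum")
    if task.seabed_depth - task.immersion < limits.min_altitude:
        errors.append("altitude to ground below minimum")
    if task.speed > limits.max_speed:
        errors.append("speed exceeds vehicle maximum")
    if task.in_forbidden_area:
        errors.append("task lies in a forbidden area")
    return errors            # an empty list means the task is accepted

limits = VehicleConstraints(max_immersion=100.0, min_altitude=5.0,
                            max_speed=2.0)
print(check_task(Task(immersion=80.0, speed=1.5,
                      seabed_depth=83.0, in_forbidden_area=False), limits))
# -> ['altitude to ground below minimum']
```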

Two levels of tasks can be defined:<br />

− 1st level tasks: definition of elementary

controls like “Follow heading ALPHA at<br />

an immersion Z <strong>for</strong> time T”.<br />

− 3rd level tasks: definition of objectives (in

our case geographic areas to survey with<br />

associated payloads).<br />


Figure 3 - Preparation interface<br />

Figure 4 - MMI environmental data<br />

The supervision tool (Figure 5) is used to<br />

track the vehicles positions in real-time using<br />

data coming from acoustic links, short base<br />

line tracking system or GPS serial links.<br />

It displays each vehicle’s trajectory and<br />

detailed status if available (heading,<br />

immersion, altitude, energy). The real<br />

trajectories can be compared in real time with<br />

the planned ones.


Figure 5 - Supervision interface<br />


Events generation, Data treatment<br />

AUV mission planning is per<strong>for</strong>med <strong>for</strong><br />

specific tasks or goals. Its efficiency depends<br />

on its capacity to integrate environmental information and sensor status, and is therefore related to ensuring AUV safety, optimizing the mission and ensuring data quality.

For AUV safety, the mission planning must consider bathymetry, known obstacles and density variations. Currents are also important information to guarantee the energy consumption and therefore the feasibility of the mission (the AUV must have enough energy to come back). The bathycelerimetry distribution might also influence the acoustic communication for supervised missions.

Mission optimization is important <strong>for</strong> mission<br />

efficiency which is a high operational<br />

requirement. Assuming that the mission is<br />

defined by a set of operation areas, the<br />

optimization can be as simple as calculating<br />

the geometric shortest path or as complex as<br />

computing a path while optimizing energy<br />

consumption related to tides and currents.<br />

Data quality relates to exhaustiveness, accuracy and confidence. The mission planning must guarantee perfect sonar

coverage and overlap whatever the<br />

bathymetry is. It must take into account<br />

navigation errors due to seabed characteristics (Doppler doesn’t like mud) or too strong currents that might induce instability.

Another requirement would be to have<br />

heading perpendicular to sand ripples <strong>for</strong><br />

sonar acquisition.<br />

Therefore, many events can affect AUV missions and require onboard re-planning. At present, GESMA's efforts mainly focus on

two different kinds of events.<br />

− Real time currents assessment is one of<br />

them as it might influence survey heading<br />

and energy consumption. Different<br />

solutions to get current values are<br />

evaluated like parameters identification,<br />

DVL/ADCP sensor, and electromagnetic<br />

probe.<br />

− Regarding mine warfare, it is very<br />

important to increase the level of<br />

efficiency of Computer Aided Detection<br />

algorithms. Onboard re-planning <strong>for</strong><br />

multi-aspect sonar acquisition of mine like<br />

contacts is one of the main GESMA<br />

objectives in a near future. Mission replanning<br />

to take into account low level of<br />

efficiency due to environmental<br />

conditions (sand ripples or reverberation<br />

<strong>for</strong> examples) will be the next step.<br />

Planning algorithms<br />

The objective of the planning function is to<br />

compute the movements of the vehicle <strong>for</strong> the<br />

achievement of the mission. This computation<br />

is per<strong>for</strong>med offline during the preparation of<br />

the mission to allow the operator to see its<br />

feasibility and the estimated vehicle behavior.<br />

Onboard and online, the function gives<br />

autonomy to the vehicle. A global online<br />

computation of all the movements by only<br />

one algorithm hasn’t been considered <strong>for</strong><br />

several reasons: the number of constraints is<br />

high and could lead to a complex algorithm,<br />

the computation duration has to be relatively<br />

short, some problems could be locally solved.<br />

The decision was then taken to develop<br />

several planning algorithms.


As the mission is defined by a set of<br />

objectives, the first algorithm computes an itinerary, that is, it orders the objectives. In this

2D search, objectives are modeled by their<br />

geometric centroid waypoint. A mission<br />

graph is built with the objective waypoints,<br />

the start and the end waypoints. The costs of<br />

the edges are computed taking into account<br />

the current and the non-navigable areas. For<br />

each pair of waypoints in the mission graph, a<br />

Dijkstra algorithm iteratively finds the shortest path, which is built on a reduced visibility graph that allows avoiding

known obstacles (Figure 6). The cost matrix<br />

is not symmetric because of the current. A<br />

Little algorithm [5] then looks for a

Hamiltonian path of lowest cost in the<br />

mission graph.<br />
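Under the structure just described, the two-stage search can be sketched as follows; the toy graph and costs are invented, and a brute-force enumeration stands in for Little's branch-and-bound algorithm [5], which the onboard planner uses.

```python
# Sketch of the two-stage itinerary search: (1) Dijkstra gives the cost
# between every pair of mission waypoints over a navigation graph (the
# stand-in for the reduced visibility graph), possibly asymmetric
# because of the current; (2) the ordering step finds the cheapest
# start-to-end Hamiltonian path (brute force here for illustration).
import heapq
from itertools import permutations

def dijkstra(graph, source):
    """graph: node -> {neighbour: edge_cost}; returns costs from source."""
    dist = {source: 0.0}
    heap = [(0.0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue                      # stale heap entry
        for v, w in graph[u].items():
            if d + w < dist.get(v, float("inf")):
                dist[v] = d + w
                heapq.heappush(heap, (d + w, v))
    return dist

def best_itinerary(graph, start, end, objectives):
    cost = {u: dijkstra(graph, u) for u in [start, *objectives]}
    best, best_cost = None, float("inf")
    for order in permutations(objectives):
        path = [start, *order, end]
        c = sum(cost[a][b] for a, b in zip(path, path[1:]))
        if c < best_cost:
            best, best_cost = path, c
    return best, best_cost

# toy mission: asymmetric costs model a current pushing from A towards B
graph = {
    "start": {"A": 4.0, "B": 7.0},
    "A": {"B": 2.0, "end": 6.0, "start": 4.0},
    "B": {"A": 5.0, "end": 3.0, "start": 7.0},
    "end": {"A": 6.0, "B": 3.0},
}
print(best_itinerary(graph, "start", "end", ["A", "B"]))
# -> (['start', 'A', 'B', 'end'], 9.0)
```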

Figure 6 - Shortest path between two waypoints of the mission graph

The second algorithm computes the actions to<br />

achieve a survey operation. The operative<br />

sequence is composed of linear trajectory<br />

followings and course changes. To consider<br />

sonar constraints, the survey is made at steady<br />

altitude. The main direction of the survey<br />

depends on the direction of the estimated<br />

current.<br />
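What such an operative sequence can look like is sketched below for a rectangular area: steady-altitude legs separated by an assumed sonar swath width, with the leg direction derived from the estimated current. The geometry and parameter names are illustrative assumptions, not the project's actual planner.

```python
# Hedged sketch of an operative-sequence generator: alternating
# "follow line" legs and "course change" hops (a lawnmower pattern),
# with the main direction chosen from the estimated current.
import math

def survey_sequence(width, height, swath, current_heading_deg):
    """Return alternating follow-line / course-change actions."""
    main = (current_heading_deg + 180.0) % 360.0   # run legs into the current
    seq, n_legs = [], math.ceil(width / swath)
    for i in range(n_legs):
        heading = main if i % 2 == 0 else (main + 180.0) % 360.0
        seq.append(("follow_line", heading, height))   # steady-altitude leg
        if i < n_legs - 1:
            seq.append(("course_change", swath))       # hop to the next leg
    return seq

for action in survey_sequence(width=400.0, height=1000.0,
                              swath=100.0, current_heading_deg=90.0):
    print(action)
```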

The third algorithm computes the 4D<br />

trajectory between each pair of the itinerary.<br />

The itinerary is followed at nominal steady<br />

speed. Between each pair of objective areas,<br />

we assume that the vehicle follows a steady immersion, even when it avoids non-navigable areas (Figure 7). This modeling allows limiting the risk due to bathymetry uncertainties. The trajectory avoids non-navigable areas with relevant course changes.

The slope of the ascents and descents<br />


considers the constraints of the vehicle. If the<br />

maximum slope can’t be respected, course<br />

changes are added to the trajectory.<br />
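The slope rule can be illustrated with a small helper: when the depth change over a leg exceeds what the maximum slope allows, it returns the extra horizontal distance that the added course changes must provide. The function and the numbers are assumptions for illustration.

```python
# Illustrative helper for the slope constraint: extra horizontal path
# needed when a leg's depth change exceeds the maximum slope.
import math

def extra_distance_for_slope(horizontal_m, depth_change_m, max_slope_deg):
    # horizontal distance required to absorb the depth change at the
    # maximum allowed slope; zero if the leg is already feasible
    needed = abs(depth_change_m) / math.tan(math.radians(max_slope_deg))
    return max(0.0, needed - horizontal_m)

# a 60 m descent over a 150 m leg with a 15 degree slope limit:
print(round(extra_distance_for_slope(150.0, 60.0, 15.0), 1))
# -> about 73.9 m of added path before the slope becomes feasible
```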

Figure 7 - Trajectory immersion (m) vs. trajectory length (m) for a 3-objective mission (plot: steady-immersion legs over surveys 1, 2 and 3 with an obstacle avoidance excursion)

The itinerary and operation planning<br />

algorithms are also implemented in the Man<br />

Machine Interface to allow an operator to<br />

prepare a mission. Figure 8 shows the<br />

application of the planning function on the 2D<br />

map of the MMI. Objective areas in blue are surveyed; forbidden areas in red and non-navigable areas (coastline in red, coast area in

green) are avoided. Numerous tests have<br />

validated the planning function in typical and<br />

untypical missions.<br />

Figure 8 - Example of planning computation<br />

The previous planning function gives its<br />

autonomy to the vehicle when used in<br />

response to the occurrence of events that invalidate the current plan (either initial or

already recomputed). Events are managed by<br />

the onboard architecture, which calls the<br />

different planning algorithms according to<br />

their level of criticality and the current status of

the vehicle.


On-going tests<br />


In May 2005, level 1 was successfully validated at sea. It was a very important step, as this level has direct interaction with the vehicle itself. It guarantees an easy software integration of all the other levels, which have already been evaluated by simulation.

The 3rd level architecture has been validated

by simulation, in nominal and re-planning<br />

situations. Three types of events have been<br />

successfully simulated:<br />

− An alarm event <strong>for</strong>ces the vehicle to move<br />

directly to the end waypoint: a new<br />

itinerary to avoid non-navigable areas<br />

then a new trajectory are computed.<br />

− When arriving on an objective area, the<br />

real current is different from the predicted<br />

one and invalidates the already-computed<br />

survey: an operative sequence is<br />

computed taking into account new data.<br />

− The operator asks <strong>for</strong> a local operation of<br />

inspection, <strong>for</strong> example to inspect a<br />

suspicious object. A specific operative<br />

sequence is planned then the vehicle<br />

resumes its mission.<br />

The 3rd level will be tested at sea in March and April 2006. An example of a test will concern

the current. False current data will be used to<br />

generate the initial planning. Once on the<br />

survey zone, real time measurement should<br />

start the re-planning process.<br />

Conclusion and future work<br />

While goal-driven planning has proven to be feasible in simulation, the recent

developments made by GESMA, ONERA<br />

and PROLEXIA push towards a fully<br />

operational system taking into account:<br />

− Environmental data from hydrographic<br />

database and standards,<br />

− A <strong>modular</strong> embedded architecture with<br />

evolutionary potential <strong>for</strong> onboard<br />

supervision and re-planning,<br />


− A user friendly MMI giving advanced<br />

operator functions <strong>for</strong> goal mission<br />

preparation and supervision.<br />

GESMA future works on autonomy will<br />

mainly focus on REA AUVs. Their missions<br />

are mainly characterized by the following<br />

points:<br />

- The efficiency of the mission relies on<br />

the quality of the environmental data<br />

collected and the percentage of<br />

surveyed area.<br />

- The REA AUVs are obviously operated in unknown environments (or even hostile ones for some military purposes). This leads to increased security and autonomy requirements compared to survey missions.

As a response to these difficulties, GESMA<br />

will conduct works on:<br />

- Re-planning capacity (adaptivity)

that will take into account<br />

per<strong>for</strong>mance mapping. This<br />

per<strong>for</strong>mance mapping will assess in<br />

real time the quality of the collected<br />

environmental data. If a given<br />

threshold is not achieved, a new path<br />

planning is generated to improve the<br />

overall per<strong>for</strong>mance.<br />

- Real-time goal optimisation. Compared to classical goal-driven missions in mine warfare, in the case of REA the goals are modified in real time taking

into account the collected<br />

environmental data like currents,<br />

bathymetry, and water density. For<br />

example, near-shore missions cannot be geographically limited in advance

by an operator and the AUV needs to<br />

find the best reliable path to get as<br />

near as possible to the beach.



References<br />

[1] Clough, B.T. (2002). Metrics, Schmetrics ! How<br />

the heck do you determine a UAV’s autonomy<br />

anyway ? Per<strong>for</strong>mance Metrics <strong>for</strong> Intelligent Systems<br />

Workshop. Gaithersburg, MA, USA.<br />

[2] Goodrich, M.A., D.R. Olsen, J.W. Crandall and<br />

T.J. Palmer (2001). Experiments in adjustable<br />

autonomy. Workshop on Autonomy Delegation and<br />

Control. IJCAI 2001. Seattle WA.<br />

[3] http://www.cert.fr/dcsd/cd/PROCOSA<br />

[4] Murata, T. (1989). Petri Nets: properties, analysis<br />

and applications, Proceedings of the IEEE, 77(4), pp. 541-

580.<br />

[5] Little, J.D.C., K.G. Murty, D.W. Sweeney and C. Karel (1963). An algorithm for the Traveling Salesman Problem, Operations Research, 11(6), pp. 972-989.




ADVANCED CONTROL FOR AUTONOMOUS UNDERWATER VEHICLES<br />

Michel PERRIER<br />

Underwater Systems Department<br />

Ifremer Mediterranean Center<br />

mperrier@ifremer.fr<br />

ABSTRACT<br />

Ifremer has operated manned and unmanned submarines for scientific exploration for many years. The need for a more classical survey AUV (Autonomous Underwater Vehicle) has arisen within Ifremer’s scientific programs these past years. The maturity of AUV technology has made it possible to set up and launch an operational program for a fleet of coastal survey AUVs within Ifremer. The R&D activities performed by Ifremer over many years on advanced control, diagnosis and embedded software architecture find in this new generation of operational underwater vehicles a real field of application.

This paper describes current development on on-board supervision software architecture aiming at<br />

increasing the security of an AUV and its environment, while improving its per<strong>for</strong>mance.<br />

KEYWORDS: AUV, Intelligent Control, Embedded Architecture.<br />

1. INTRODUCTION<br />

Ifremer has been engaged <strong>for</strong> many years in underwater technologies and in the operational use of<br />

underwater systems within the French oceanographic fleet. Development of the deep sea ROV victor 6000,<br />

and a complete upgrade of the well-known manned submersible nautile, are some of the major activities<br />

undertaken recently by the Underwater Systems Department within Ifremer.<br />

In parallel to these operational vehicles, and in order to be ready with a new generation of underwater<br />

systems, R&D activities have been pursued in a wide spectrum of areas, and in particular within AUV<br />

technologies domain.<br />

In the context of deep water technology development, numerous projects mainly targeted <strong>for</strong> offshore<br />

applications have been conducted recently by Ifremer with European industrial and research partners. This<br />

effort started in 1998 with the development of the supervised AUV sirene, designed for accurate launch and deployment of a benthic station [1]. The technology developed in this project was then applied to the swimmer project [2], aiming at developing a hybrid AUV designed as a shuttle for carrying a classical ROV and deploying it once docked on a pre-installed bottom station.
The successful development of these projects led to going further within the domain of autonomous

intervention in the frame of the alive project aiming at the development of an Intervention-AUV (I-AUV)<br />

capable of advanced control <strong>for</strong> autonomous docking using vision- and sonar-based control. During this<br />

project, a significant breakthrough in both optical dynamic positioning in a local environment and precise<br />

autonomous docking was made [3][4].<br />

Beyond these technological advances, the need <strong>for</strong> a more classical survey AUV has arisen within<br />

Ifremer’s scientific programs. This has led to the set-up and launch of an operational program <strong>for</strong> a fleet of<br />

coastal survey AUVs.<br />

The industrial capability to provide survey vehicles was broad enough in 2002 to justify the launch of an<br />

international tender for a generic vehicle, which would allow Ifremer to retain control over the continuing development of other specific technologies related to navigation and positioning, communication, data

exploitation and <strong>modular</strong> payload development.<br />

2. AUV PROGRAM AT IFREMER

Requirements and basic specification

Scientific needs have arisen for multi-sensor underwater surveys. This is particularly true in the fields of physical oceanography, marine geology and fish stock evaluation in the continental shelf and margin regions, where socio-economic demands are fuelling the need for more detailed surveys. These needs led Ifremer in 2002 to launch an AUV program, with the aim of using autonomous vehicles for operational scientific surveys by 2005.

The analysis of scientific requirements, collected from several French and European oceanographic institutes, has led to the specification of a 600 to 800 kg, 3000 m depth-rated, modular vehicle with more than 100 km range and around 200 kg payload capacity. For coastal applications, this vehicle must be operated by a limited crew from small (…).

Figure 1. The aster x AUV © Ifremer

Control subsystem, networking and data logging

The VCC ("Vehicle Control Computer") is an industrial rack-mounted CompactPCI computer with built-in expansion capability. The vehicle I/O uses a combination of distributed and local devices: digital and analogue inputs/outputs, an Ethernet network switch and serial links.

A dedicated board provides a Network Time Protocol (NTP) service to the other network nodes, and allows subsystem synchronisation by generating TTL sequences. Time synchronisation is obtained from the PPS-GPS when satellite lock is available, and all data logged by the VCC carries this GPS-referenced UTC time. Two 100 Mb Ethernet ports on a network switch are provided to the payloads for data communication.

A 10 GB hard drive stores software executables, mission plans, and vehicle data log files. All critical vehicle mission information is logged to the hard drive during operation, at a configurable rate. The lists of parameters to be logged are edited on the SCC ("Surface Control Computer") and archived with the mission plan files for future reference. At the end of a mission, the data log files are uploaded to the SCC and also stored with the mission plan files.

Surface control and display systems

For flexibility, the surface consoles are portable. They contain the following equipment:

• the SCC ("Surface Control Computer");
• a positioning system;
• a DataLinc 2.4 GHz Ethernet radio.

The SCC interfaces with the following equipment:

• umbilical telemetry;
• acoustic telemetry;
• radio telemetry;
• the surface positioning system.

The operator interface displays vehicle information in both graphical and text forms, as appropriate.

Mission programming

Mission plans are defined by ASCII text files containing mission task verbs, built-in keywords and comments. The built-in keywords are "entry label" and "goto", which together can be used for "looping" and "jumping" within a mission. The task verbs fall into two categories: geographic and other. Geographic tasks are closely tied to a geographical position (latitude and longitude); other types of tasks are not closely bound to position and include, for example, the ability to turn equipment on or off for specific parts of the mission. Each geographic task has a configurable vertical mode (depth or altitude), a vertical setpoint, a speed mode and a speed setpoint. Using a series of task verbs with differing depth or altitude setpoints, it is possible to plan a mission with virtually any 3D trajectory. The current list of geographic task verbs includes "target", "line_follow" and "circle", but it can easily be expanded by editing a grammar definition file and adding the configuration needed to make the AUV carry out the new task.
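To fix ideas, a plan built from these elements might look like the sketch below. The paper does not give the actual file syntax, so the layout, the coordinates and the parameter spelling here are invented; only the keywords and task verbs named above come from the text.

    entry label start
    target      lat=43.100  lon=5.900   depth=50     speed=1.5
    line_follow lat=43.100  lon=5.900   lat=43.120  lon=5.950  altitude=10  speed=1.0
    circle      lat=43.120  lon=5.950   radius=100  altitude=10  speed=1.0
    goto start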

The MIMOSA software ("MIssion Management and Operation for Subsea Autonomous vehicles"), currently under development and test at Ifremer, will be an optimised tool offered to the scientific users of the AUV for mission description and programming.

Mission plan files are generated and simulated at the surface console (for final validation), then downloaded to the vehicle using any suitable data link, including the Ethernet radio modems.

3. ON-BOARD SOFTWARE ARCHITECTURE

Description

The controller of the vehicle is based on ISE's ACE ("Automated Control Engine") system and on the architecture already proven on ISE's Theseus vehicle [6]. The primary capabilities of the vehicle control software are mission execution, guidance, control, fault detection and energy management. In order to provide these capabilities, the system is broken down into subsystems. Each subsystem is assigned a specific task which is self-contained, testable and less complex than the system as a whole. The subsystems further break their tasks down into ACE software components. This approach maximises the flexibility of the control system: when a change is necessary, often only one ACE component in a subsystem is affected.

The VCC ("Vehicle Control Computer") design is hierarchical, with the mission plan manager at the top and the low-level control loops which drive the planes and the thruster at the bottom. The interconnection between subsystems is implemented by event propagation (a minimal sketch of this mechanism is given after the list below). External access to events, to inject or read values, is provided by various existing message-passing interfaces. The main subsystems are:

• Mode Manager: the top-level system controlling the overall state of the vehicle, enabling subsystems appropriately; it includes mission execution control as well as fault management and responses.
• Guidance: receives trajectory information and waypoint positions from the mission plan manager, as well as vehicle position information from the positioning subsystem.
• Positioning: interfaces with the vehicle navigation sensors and outputs the vehicle position in latitude and longitude.
• Energy Management: monitors past and predicted future energy usage and generates fault conditions to abort or modify the mission when problems occur.
• Control: closed-loop planes position control and closed-loop speed control.
• Telemetry: data is transferred between the SCC and the VCC via the telemetry module, which has been developed to use an acoustic modem as well as packet radio and Ethernet links.
• Logging: any data can be logged on the vehicle computer at a configurable rate.
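As a rough picture of the event-propagation mechanism mentioned above, the Python sketch below shows subsystems decoupled through a tiny event bus; the class, method and event names are invented for illustration and are not the ACE/VCC interfaces.

    # Minimal event bus: subsystems publish and subscribe to named events.
    from collections import defaultdict

    class EventBus:
        def __init__(self):
            self._handlers = defaultdict(list)

        def subscribe(self, event, handler):
            self._handlers[event].append(handler)

        def publish(self, event, value):
            # An external message-passing interface could call publish() to
            # inject a value, or subscribe() to read values, as described above.
            for handler in self._handlers[event]:
                handler(value)

    bus = EventBus()
    bus.subscribe("position", lambda fix: print("guidance received", fix))
    bus.publish("position", {"lat": 43.10, "lon": 5.90})  # from the positioning subsystem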

Emergency sub-system summary

For safety, the vehicle is fitted with several devices: a droppable recovery weight, a self-powered Novatech ST-400AR strobe light, a self-powered ORE 4336B acoustic pinger/transponder and a self-powered Novatech RF-700AR radio beacon.

The fault manager detects vehicle faults and takes pre-defined fault actions. These actions can be programmed in a look-up table and can differ between mission phases. Examples of fault responses for the vehicle include "STOP and surface", "STOP and park on the bottom", "Change mission step", or "Ignore the fault and continue".
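Such a phase-dependent look-up table can be pictured as in the Python sketch below. The phase and fault names, and their pairing with responses, are invented for illustration; only the four response strings come from the text.

    # Hypothetical fault look-up table: (mission phase, fault) -> pre-defined action.
    FAULT_ACTIONS = {
        ("transit", "leak_detected"):  "STOP and surface",
        ("survey", "low_energy"):      "Change mission step",
        ("survey", "payload_error"):   "Ignore the fault and continue",
        ("bottom_survey", "nav_lost"): "STOP and park on the bottom",
    }

    def fault_response(phase, fault):
        # Fall back to the most conservative action for unlisted pairs.
        return FAULT_ACTIONS.get((phase, fault), "STOP and surface")

    print(fault_response("survey", "low_energy"))  # Change mission step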

4. DESIGN OF THE ON-BOARD SUPERVISION CONTROLLER

R&D activities background

Ifremer was involved in the ADVOCATE and ADVOCATE II European projects, conducted from 1998 to 2005, which aimed to design and develop a modular embedded software architecture for the intelligent and advanced control of autonomous systems such as AUVs [7].

One of the main objectives of these projects was to specify and design a modular software architecture able to be plugged onto an existing control architecture. The main effort was put on the specification of the interface between the existing and the new software architectures. This study resulted in the definition of the data exchange protocols, of the nature of the data to be exchanged, and of the level of interaction of the supervision and diagnosis architecture over the existing piloting one.

The goal of this new software architecture is to detect and diagnose malfunctions of the monitored vehicle, in terms of subsystems (sensors, actuators, payload, energy source, ...) or behaviour (mission execution, high-level control, diagnosis and decision). This architecture and its intelligent components were successfully demonstrated at the end of the ADVOCATE II project on the Atlas Elektronik GmbH AUV MiniC, in April 2005 [8].

Description of PSE

The outcomes of this project have been exploited by Ifremer to improve the on-board control architecture of the aster x AUV by designing the PSE software module (PSE is the French acronym for "Embedded Supervised Piloting").

PSE is designed as an on-board supervision module able to monitor the functioning of the AUV and its subsystems (sensors, actuators, energy source), and to monitor and supervise the execution of the mission plan, with the objective of verifying the nominal execution of the programmed plan and of modifying it locally or globally when the actual behaviour and functioning of the AUV differ from the nominal ones.

The abnormal situations can be numerous, for instance:

• malfunction or failure of a subsystem (sensor, actuator, ...);
• AUV behaviour different from the expected one (e.g. an unforeseen evolution of environmental characteristics, such as the underwater current), which can lead to undesired situations such as increased energy consumption or a missed "rendezvous" in the mission (geographical, temporal, ...);
• detection of an obstacle.

PSE has been specified in such a way that its activation or deactivation does not impact the nominal functioning of the vehicle architecture. In any case, the "last word" always belongs to the fault management system of the AUV, which is in charge of critical faults. The objective of PSE is to limit or avoid undesired mission aborts when the encountered situation is not truly critical, and when a modification of the mission plan or of the vehicle configuration can optimise the use of the AUV, as evaluated through the scientific data collected.

Internal design of PSE

The PSE module is built around an expert system. All data generated by the AUV or useful for its monitoring are handled by PSE: sensor and actuator data, payload status, energy monitoring, mission plan execution, ... All types of data are supported (numerical and non-numerical).

A rule base is defined, subdivided into "diagnosis rules" and "decision rules" with different activation and analysis methods, managed by a dedicated and powerful inference engine. Diagnosis rules perform the diagnosis of the situation and produce input data for the decision rules, which are in charge of determining the best recovery actions to overcome the diagnosed abnormal situation. The set of recovery actions contains:

• modification of one or several parameters of the nominal mission: for instance, increasing the vehicle speed in order to be on time for an imperative "rendezvous", or changing the vehicle altitude to increase the quality of the data collected by the payload sensors;
• suspension of the nominal mission plan to execute a local trajectory (obstacle avoidance, for instance), then resumption of the initial mission;
• short-cutting the nominal mission plan, for instance when energy consumption is higher than planned;
• replacement of the nominal mission plan by a new one (built by PSE or taken from the PSE knowledge database).

Once the recovery action has been determined through the activation of the decision rules, PSE initiates a dedicated dialogue with the vehicle controller in order to apply its decision.
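In spirit, a diagnosis rule feeding a decision rule could look like the Python sketch below; the rule encoding, the threshold and the fact names are all invented, since the paper does not publish PSE's rule language.

    # Illustrative diagnosis/decision pair over a shared fact base (invented names).
    def diagnose_energy(facts):
        # Diagnosis rule: compare energy used with the planned consumption.
        if facts["energy_used_Wh"] > 1.2 * facts["energy_planned_Wh"]:
            facts["diagnosis"] = "energy_overconsumption"

    def decide_energy(facts):
        # Decision rule: consume the diagnosis and propose a recovery action.
        if facts.get("diagnosis") == "energy_overconsumption":
            facts["recovery"] = "short-cut the nominal mission plan"

    facts = {"energy_used_Wh": 2600, "energy_planned_Wh": 2000}
    for rule in (diagnose_energy, decide_energy):  # one naive inference pass
        rule(facts)
    print(facts["recovery"])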

Configuration of PSE

Configuring PSE consists in "programming" its intelligence: the different sets of rules for diagnosis and decision, as well as the integration of additional intelligent processes and functions as external functions. For instance, it is planned to integrate automatic mission planning algorithms, obstacle avoidance algorithms and advanced diagnosis modules [9].

The NEMO software suite

PSE is part of a complete software suite called NEMO which consists, in addition to the on-board PSE module, of a set of tools dedicated to development, configuration, analysis and monitoring:

• Configuration module: used to configure PSE (management of the diagnosis and decision rules, configuration files, selection of the monitored data, selection of the logged data, ...). This stage results in the production of "PSE configuration files" to be downloaded into the PSE module for execution on the AUV.
• Analyser module: used to replay and analyse logged data. These data are not the AUV data already logged in the VCC, but the internal data produced and manipulated by PSE, for instance the rules fired by the inference engine and the diagnoses and decisions made during mission execution.
• Monitoring module: used for the supervision and control of the internal and external functioning of PSE when a communication link exists with the AUV, or when running on the simulation platform. A dedicated man-machine interface then displays the evolution of the internal state of PSE (diagnosis and decision rules, data, ...).

Current development and objectives

A simulation platform is currently under development and will progressively be used for testing and validating the PSE module and the associated NEMO tools. This platform duplicates the complete computer configuration of the real AUV: SCC, VCC and communication links. Within the platform, the AUV behaviour is simulated by a dynamic model of the real vehicle. The vehicle states obtained from this model are then injected into sensor simulation software that provides message data in the same format as the real sensing devices. This allows the platform to run the same VCC software, with in particular the same protocols and communication links, as the real vehicle.

The development of the complete NEMO software suite is on-going and will be completed by the end of 2006. Progressive integration and testing of its software components will be performed on the simulation platform of the aster x AUV, allowing critical situations to be tested without any risk for the real vehicle. The first implementation and tests of the PSE module on the real AUV are planned for the end of 2006.

5. CONCLUSION

The PSE supervision module, part of the NEMO software suite, aims to supervise and monitor the aster x AUV during mission execution. The PSE module will be able to detect any malfunction of the AUV subsystems, and any abnormal situation during the execution of the programmed mission plan. The interaction of the PSE module with the vehicle control architecture relies on the real-time adaptation of mission parameters, or on the modification of the mission plan itself.

Preliminary results concerning the first use of the PSE module are expected on the simulation platform of the aster x vehicle by the end of 2006. Progressive integration and testing of PSE on the real AUV itself will be conducted later on.

6. REFERENCES

[1] Rigaud V., Semac D., Drogou M., Opderbecke J., Marfia C., "From SIRENE to SWIMMER – Supervised Unmanned Vehicles: Operational Feedback from Science to Industry", ISOPE'99, Brest, May 31 – June 4, 1999.
[2] Chardard Y., Rigaud V., "SWIMMER: French Group Developing Production Umbilical AUV", Offshore Magazine, October 1998, pp. 66-67.
[3] Marty P., "ALIVE: an Autonomous Light Intervention Vehicle", Advances in Technology for Underwater Vehicles Conference, Oceanology International, 2004.
[4] Perrier M., Brignone L., "Optical Stabilization for the ALIVE Intervention AUV", ISOPE 2004, Toulon, May 2004.
[5] Ferguson J., "Explorer – A Modular AUV for Commercial Site Survey", Underwater Technology 2000, Tokyo, May 23-26, 2000.
[6] Ferguson J., "The Theseus Autonomous Underwater Vehicle – An AUV Success Story", Unmanned Underwater Vehicle Showcase 98 Conference Proceedings, Southampton Oceanography Centre, Southampton, UK, p. 99.
[7] ADVOCATE Consortium, "ADVOCATE: ADVanced On-board diagnosis and Control of Autonomous sysTEms", IPMU'2002 (Information Processing and Management of Uncertainty in Knowledge-Based Systems), Annecy, 1-5 July 2002.
[8] Perrier M., Kalwa J., "Intelligent Diagnosis for Autonomous Underwater Vehicles using a Neuro-Symbolic System in a Distributed Architecture", OCEANS Europe 2005, Brest, France, June 2005.
[9] Perrier M., "Autonomous Robot Health Monitoring using a Neuro-Symbolic System", ISORA 2004, Seville, June 2004.


Session « Industrials »


Compared Architectures of Vehicle Control Systems (Vetronics) and Application to UXVs

Jean-Philippe QUIN

Thales

ABSTRACT

Many architectures of electronic systems, or vetronics, already exist in the civilian area (automotive and beyond) and in the military field (MBTs, IFVs, fighters, submarines, frigates…). These architectures control either the dynamics of the vehicle, or situation awareness and C4I communication. They rely on an onboard human presence able to play a significant part in system management and to fix any incoming failure.

What is relatively new for UXV architectures (UXV stands for UAV, UGV and UUV) is evident: the absence of a human, leaving the system to operate by itself in every functional mode, with only a weak link to human control. This article is mainly UGV-oriented, but common features with other robots may be found here.

All things considered, the basic architecture of a UXV must be designed with today's fundamentals (multiplexing, redundancy…), COTS/MOTS components, and the specific functional modes due to this particularity. This is a must for cost and time-to-market reasons, but it comes with a problem common in the military field: obsolescence.

An interesting question is the nature of a UXV: does it have to be a specifically designed robot, without any possibility of being human-driven, or is it a manned vehicle with a robotized option used as needed?

This presentation takes several models to explain what a UXV architecture could be, and how recent developments in the automotive industry can be used, with adaptation, in UXVs or in generic AFVs. The paper presents a short history of automotive development, then a compared view of automotive vetronics and UXV vetronics, and concludes with future trends.

A SHORT HISTORY OF AUTOMOTIVE AND VETRONICS

The 20s-70s age.

In the old times of automotive, the architecture was quite simple: a button linked to an actuator, a steering wheel linked mechanically to the wheels, a throttle pedal linked to the engine through a cable, a brake pedal linked to the drums and discs through hydraulic pipes. A one-to-one model.

The 80s-00s age.

New criteria appeared: the growing need for new functions, driven by marketing efforts, profitability and safety, pushed towards new designs with a better level of safety. Moreover, the complexity of the electrical harness was no longer sustainable when 2000 cars had to be manufactured per day, and quality problems were becoming intractable. In this new context, options such as ABS, air conditioning or electric windows sit at a higher level of complexity, due either to the wiring (the first problem) or to the need for a global system approach.

The design and integration of CAN and VAN buses was perceived as the best solution, but many bad designs led to problems, specifically in the H1 class. In parallel, military systems such as fighters, submarines, frigates and eventually tanks started to use their own standards which, after standardisation problems, worked but at a high cost; MIL-STD-1553B is an example. Note that the debugging tools were very poor, not designed for the system level. The non-deterministic behaviour of CAN made the approach difficult, and it is still a problem.

The 00-X0 age.

Automotive manufacturers now understand that a global approach is a must: many control bus standards have appeared, such as FlexRay, MOST and LIN. The physical link is copper or fibre-optic based, and the cost of a connection point is under 1 €.

The global system approach links the engine, brakes, airbags, steering wheel and suspension to provide a high level of safety, and it is certainly a must for quality and reliability. Consider the cost paid by a manufacturer obliged to recall a million vehicles, and the bad perception given to the customer (2005).

Such problems must not happen in a military system, especially in a weapon-equipped one. Just imagine a nuclear bomber having a serious problem in its navigation and attack system during a mission…

Another new trend in automotive is energy management. New functions (de-icing of the rear window, modulated air conditioning with partial zones, navigation equipment, electric steering assistance, even xenon lights) have led to a conclusion: in heavy traffic or cold weather, the car is not able to provide the required level of energy. There are several ways to resolve this issue:

• dual batteries, one for common use, the other for engine starting;
• 42 V instead of 12 V, combined with an alterno-starter (a combination of alternator and starter);
• a hybrid drivetrain, as used on the Prius or the DPE¹, with energy recuperation while braking. Note that this last concept is already in use on the Leclerc MBT turret.

¹ DPE: Démonstrateur à Propulsion Electrique, a contract conducted by GIAT Industries to develop a hybrid 6x6 fighting vehicle.

It has to be pointed out that these approaches may be combined. Military vehicles should use such energy management in the next vehicle generation, but reliability is the key issue: the global architecture and subsystems such as batteries have to be evaluated against the critical requirements of a military mission under a GAM-T1 specification (temperature, shocks, acceleration) and, nowadays, environment. The automotive industry has the same problem, only multiplied a million times…

In conclusion, a parallel between what is now available on the civilian market and what is needed for UXVs and military vehicles of any nature is relevant, with an adaptation of the specifications: COTS/MOTS components can be used with caution.

COMMON ARCHITECTURE DESIGN

The following sketches the design architecture of a 90s car: four buses (dynamics, safety, MMI and body), the body bus running at only half the speed of the others because of its functional needs. The BSI is the common computer acting as the central intelligence and as the gateway controller. The same architecture can be applied to a UXV, suppressing the MMI function and the airbag feature if the vehicle is not to be used by humans. Other functions, such as rear-view mirror control or door security, must be analysed more deeply, depending on whether the UXV is a pure robot or a robotized vehicle. Of course, this analysis is not applicable to UAVs or UUVs.

New cars are more and more designed for shared control between the driver and the car. The following picture emphasises brake-by-wire technology. For now, legislation obliges manufacturers to provide a degraded mode in which the brake pedal is directly linked to the brakes; the same issue arises for the steering wheel.


Chief<br />

Station<br />

Head Up<br />

Display<br />

Chief<br />

Power & AC<br />

Management<br />

ETHERNET<br />

Pilot<br />

Station<br />

Head Up<br />

Display<br />

Pilot<br />

BGIT<br />

PC2<br />

Supervisor<br />

VIDEO<br />

SOUND<br />

Distribution<br />

VME<br />

COM COM<br />

Gyrostabilised<br />

Gimball<br />

MAST &<br />

Mechanic Devices<br />

This is the architecture of the Syrano (right of the design) robot based on 1553B bus.<br />

Not car architecture but looking like <strong>for</strong> guidance and pilot functions very close from<br />

ESP. Difference is seen on the left side of the picture: human control through radio<br />

link, not a human in the vehicle (here it is an armored vehicle on a Wiesel 2 tracked<br />

chassis, a 4 tons class).<br />

Controlling a vehicle at any distance (10 km at max <strong>for</strong> Syrano) is an issue. Not the<br />

topic of this article, but a the big issue: teleoperation, semi-autonomy, autonomy,<br />

many words about a real problem: how to deal with a 60km/h vehicle running in all<br />

terrain navigation without being a little bit concerned about.<br />

Let us see about automotive control, as an inside driver.<br />

VME<br />

Bus Manager<br />

& Supervisor<br />

VIDEO<br />

SOUND<br />

Concentration<br />

Things are changing on automotive: many new cars have an electric handbrake<br />

(BMW, Renault…) and electric brakes, what can be seen as the ultimate<br />

development, is pushing.<br />

The other trend is wheel drive assistance. We started with nothing in olden time, a<br />

tractor view. We have now hydraulic and electric assistance, with a <strong>for</strong>ce modulated<br />

1553B<br />

Mast<br />

Computer<br />

PC2 VNH<br />

67<br />

M2<br />

Computer<br />

GUIDANCE PILOT<br />

EFFECTORS<br />

Con. Box<br />

3D LASER<br />

Sensor<br />

Perception<br />

Computer<br />

SENSORS<br />

Con. Box<br />

INU<br />

POWER<br />

MANAGEMENT<br />

&<br />

DISTRIBUTION


First National Workshop on Control Architectures of Robots - April 6,7 2006 - Montpellier<br />

with respect to speed, dynamic attitude and other conditions such as parking. As said above, the engine control is already done by wire. This control needs a sensor measuring the position and speed of the action on the steering wheel, as well as the 2D inertial attitude of the vehicle for the ESP function (3D for SUVs).

The combination of these controls enables control of the vehicle dynamics through the well-known ABS, ESP and other derived functions. An ABS system needs some 500 parameters to be tuned; how many for an ESP system?

All these sub-systems can be used in a UXV, and more specifically in a UGV: since each sub-system can be controlled through a bus, it can easily be reused on a UGV.

APPLICATION TO UGVs

Both the bus architecture and the reuse of existing sub-systems are applicable to UGVs. The idea is to see how a generic modern vehicle can be robotized at low cost with a dual function: human-driven or robot-driven. Examples are given for mobility control.

BUS

The CAN bus is now used in military vehicles (the VBCI of GIAT Industries and Renault Truck Defense), and the FRES programme (UK) uses MilCAN, an adapted version of the civilian standard.

The main adaptation is to change the non-deterministic arbitration of CAN into a deterministic one, for "hard" real-time behaviour. This evolution has two benefits: the guarantee of a fast-responding system within a determined time slot, and easier system integration, since a deterministic system does what you order within a timed schedule.
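The deterministic flavour can be pictured as a fixed cyclic schedule in which each node transmits only in its own time slot, so latency is bounded by construction. The toy Python sketch below illustrates that idea only; it is not MilCAN, and the node names and timings are invented.

    # Toy time-triggered cycle: each node owns a fixed slot, so frames never contend.
    import time

    CYCLE = [("engine", 0.000), ("brakes", 0.010), ("steering", 0.020)]  # (node, offset s)
    CYCLE_LENGTH = 0.030  # seconds

    def run(send, cycles=2):
        start = time.monotonic()
        for n in range(cycles):
            for node, offset in CYCLE:
                # Wait for this node's slot; real buses use hardware timers, not polling.
                while time.monotonic() < start + n * CYCLE_LENGTH + offset:
                    time.sleep(0.001)
                send(node)

    run(lambda node: print(node, "frame sent in its slot"))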

BRAKES

Safety is mandatory, even for a UGV, and so therefore is braking control.

The left part of the drawing shows the primary and secondary brake lines, provided for safety. On the right, the assistance given to the driver on the brake pedal comes from the vacuum booster. By fitting a pneumatic valve on one side of the diaphragm, one can easily control the braking force without any mechanical actuator, while still allowing a human to brake the vehicle simply by disabling the valve and leaving it closed.

ABS is to be added to the control of the robot, including emergency braking: these functions are fully compatible with a UGV, and one can reuse the odometers on each wheel without any extra cost.

STEERING

Hydraulics, with its pump, pipes and non-linear behaviour, is more and more being replaced by electric assistance of the steering wheel. The electric assistance is under the control of a computer linked to position and velocity sensors. It is used to lower the force needed on the steering wheel, but can also be combined with trajectory control (the ESP function) and parking aids.

The idea, for a UGV, is to revert the electric assistance: use the motor while inverting the mechanical gearing, since with no driver there is no force on the steering wheel. The amplifier may be used without any change; it just has to be controlled by another computer, the robot's. A simple by-pass switch ensures the commutation between human and robot driving.

However, to give more agility to a wheeled robot, it is better to disable the ESP function, for many reasons including a legal one: ESP must not control the steering while in action. Nevertheless, direct control of the steering while braking, with an ABS of at least the third generation, and by controlling the brakes on the same side of the vehicle, offers better dynamic control.

For a robot there is no comfort limitation on lateral g's, as long as dynamic stability is ensured. This is done with a 2-axis inertial unit, which can be built with FOGs or integrated silicon devices. For robotic purposes, an independent INU seems more convenient.

GAS PEDAL

Nothing more to say: new cars, and all diesel engines, have drive-by-wire throttle control, and the pedal has a position sensor. The control of such a device is straightforward.

GEAR BOX

A typical "analogue" gearbox is presented here, based on hydraulics: a pump, valves, sensors and very particular devices.

Evolution is on the way.

Automatic and manual gearboxes

Manual, automatic and now robotized gearboxes are found on current and coming vehicles. The best for UGVs, and for drivers, is of course the robotized gearbox, such as the latest Quickshift from Renault. Since it may be controlled through paddles, any embedded computer may control it. The main issue is to get at the relevant sensor outputs: engine speed, the gear engaged, etc.

We now have a complete car ready to be transformed into a UXV. But what is needed for such a car to be fully under control as a UXV? Intelligence. A great word, as far as we know.

Control and Command

As seen before, all the components are present in the automotive world. UXVs, and specifically UGVs, need more: shared control between the operator and the robot. A mandatory design choice for robots is a mode-oriented one: the robot has to be in one of its designed modes, no more, no less; a steady state in any condition.

[Figure: simplified mode-transition diagram linking the NOMINAL, DEGRADED, LEARNING and HARMONIZATION modes.]

Modes are designed to control the robot in any situation. The nominal mode is the normal one. The degraded mode operates when an incoming failure occurs; in this state, robot control must deal with diverse situations without any help, unlike in the automotive world, where the car is backed by human assistance. It is the most complex mode.

Steady state means a known level of control in any situation, and it is the grail of robotics control. Situation uncertainty is the common case and, to control it, a robust architecture must be implemented.
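A mode-oriented design is, at heart, a small state machine with a closed set of states and transitions. The Python sketch below illustrates this with the four modes of the figure; the transition events and their wiring are invented for illustration.

    # Minimal mode state machine; the four modes come from the figure,
    # the events and transitions are invented.
    MODES = {"NOMINAL", "DEGRADED", "LEARNING", "HARMONIZATION"}
    TRANSITIONS = {
        ("NOMINAL", "failure"): "DEGRADED",          # any incoming failure
        ("DEGRADED", "recovered"): "HARMONIZATION",  # re-align subsystems first
        ("HARMONIZATION", "done"): "NOMINAL",
        ("NOMINAL", "train"): "LEARNING",
        ("LEARNING", "done"): "NOMINAL",
    }

    def step(mode, event):
        assert mode in MODES
        # The robot is always in a designed mode: unknown events leave it unchanged.
        return TRANSITIONS.get((mode, event), mode)

    mode = "NOMINAL"
    for event in ("failure", "recovered", "done"):
        mode = step(mode, event)
    print(mode)  # back to NOMINAL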

Robot control has to be understood as a balanced system: either a fully controlled system under human supervision, or an autonomous one able to cope with any hazard it meets, such as a fixed obstacle, and harder still, moving obstacles or deceptive obstacles such as ponds or moving trees… And this addresses mobility only; the mission is another issue.

Robotics research is a passion, and the most that can be done for now is to propose what can be called a «shy» system. Present UGV science is not yet able to do everything, but it must be able to protect the robot itself and to serve what the robot is built for: the mission, as the only and final requirement.

Intempora S.A.

"Real Time, Multisensor, Advanced Prototyping Software"

The RT Maps software makes it easy to connect any type of sensor or actuator, and to acquire, record, process and transmit all data in real time.

• A generalised digital recorder: a platform for the acquisition, recording and processing of all data types, in real time.
• A software product combining a simple, intuitive interface with robust technology.
• A multisensor measurement instrument: powerful, precise and adaptive.
• The capacity to timestamp, replay and exchange large quantities of information with precision.
• A development platform for creating one's own components and adding them to the many others provided.
• A link between data, allowing their fusion and their transmission towards actuators.
• A methodology for programming complex applications graphically and easily.
• A link between theory and practice, between experimentation and applications.
• A tool to master the most innovative projects…

Have a good real time!


A New Multisensor Technology

In 2000, the Carsense European project, gathering industries (FIAT, BMW, Renault, Thales, Ibeo) and research labs (INRIA, LIVIC...), looked for a digital data logger and chose a solution developed by the Centre de robotique de l'Ecole des Mines de Paris: RT Maps. The objective was the perception of the objects around a moving vehicle; it was the first use of RT Maps. RT Maps has since been adopted by major industrial groups (Renault, PSA, Valeo...) and by national and European projects (Arcos, Puvame, REACT...).

Time and measurement play an essential part in industrial and robotics applications: RT Maps therefore precisely timestamps all data at their time of acquisition. This timestamping provides complete data-flow control during processing and replay. During tests, situations and behaviours can be recorded and analysed later; reproducing a situation is thus possible. RT Maps makes the link between the real and virtual worlds easy.

Connect, record and compare all types of sensors and actuators

Any device suitable for connection to a computer can be handled by RT Maps. Information from the sensors is acquired; processed data is sent to the actuators; in between, a working space is dedicated to the user. Connections between the various elements are made graphically without any difficulty, so substituting one sensor for another is quick. Comparing information obtained from different types of sensors or technologies is straightforward: video cameras, analogue and digital channels, CAN bus, GPS, radars, lasers... A recorder allows the simultaneous recording of several tracks of information. Information is stored in STDBs, Synchronized Timestamped Databases. When replayed, the sequence is reproduced identically thanks to the data timestamps, and it is possible to play the information back at the desired speed: accelerated, slowed down, step by step...
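The replay principle (reproducing a recorded sequence from its timestamps, at an arbitrary speed) can be pictured with the generic Python sketch below; it only illustrates the idea and is in no way the RT Maps STDB format or API.

    # Generic timestamped record/replay, illustrating the idea behind an STDB.
    import time

    log = [(0.00, "camera frame 0"), (0.05, "laser scan 0"), (0.10, "camera frame 1")]

    def replay(records, speed=1.0, emit=print):
        t0 = time.monotonic()
        for stamp, datum in records:
            # Wait until the (scaled) acquisition time has elapsed, so the
            # sequence is reproduced identically at the chosen speed.
            while time.monotonic() - t0 < stamp / speed:
                time.sleep(0.001)
            emit(f"{stamp:.2f}s: {datum}")

    replay(log, speed=2.0)  # replay twice as fast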

A controlled investment

The graphical interface allows fast installation and easy evolution of the different applications. Functional bricks (in the form of components) and data (STDBs) are exchangeable and reusable. The test and experimentation phases are simple and fast to implement. While saving a considerable amount of time, the use of RT Maps reduces development costs and time to market. In performance and efficiency, RT Maps' potential guarantees a fast and convincing return on investment.

[Figure: a «diagram» example in the RT Maps workspace, representing a complete application. Various sensors are connected, sending information in real time towards an actuator (here a wheel); all this information is displayed simultaneously.]

Fuse data in real time and prototype effectively

Fusing data from various sensors, whether of the same kind or not, increases the reliability and robustness of the results; it also allows the use of less expensive sensors. Several hypotheses can easily be tested in order to satisfy the most complex needs. It is easier to perform more tests in simulated situations with real data, to identify problems and to correct them progressively, before engaging the product in real conditions of experimentation or production.

Preserve and share knowledge and data

It is always difficult to collect data corresponding to real situations. With RT Maps, such sequences can be preserved and easily exchanged thanks to the STDBs; they are reusable at will. RT Maps lets users build large data banks in order to capitalise on and share all their experiments. Team work and project management are greatly simplified.

New releases and updates

Intempora permanently improves RT Maps. Updates are regularly posted on the Internet site and are accessible to all users under a maintenance contract. Major releases are the result of important technological, functional and/or ergonomic improvements. Migration towards each higher version is easy: the investments of each user are thus preserved, and databases and diagrams remain valid and can benefit from the new tools and evolutions. The third version of RT Maps has been installed and used by our clients since June 2005.


The Studio: graphical programming

The Studio is RT Maps' graphical interface. Applications are represented by diagrams made of components which can be parametrised. Efficient and simple to use, the Studio is one of the many advantages of the software: a few minutes are enough to set up a complex application. Components, libraries, diagrams, databases and scenarios can be exchanged and integrated.

Components and connections

The components, symbolised by blue boxes, are set up by a simple drag and drop onto the scene. They interface the sensors, represent the algorithms and connect the actuators. The mouse is used to draw «lines» connecting the output of one component to the input of another; the data flows are then established.

Settings

Many parameters are accessible through dialog boxes; these settings determine each component's behaviour.

Documentation

A simple click is enough to insert a comment in a diagram or to get help.

Modularity

When users wish to replace or add a component in a diagram, they do it graphically, without any coding.

Recording and replaying

A «record» button launches the recording process. The VCR-like player plays back the databases; the replay speed and direction can be chosen, and a cursor selects the position in the timeline.


Embedded technology

The graphical programming interface can be removed for a «hidden» use of the software. The orders for building and parametrising diagrams are then passed through scripts, and specific graphical interfaces can be developed to execute applications: they replace the usual workspace.

Synchronized distributed operation

RT Maps V3 breaks a technological barrier by allowing a distributed, synchronized platform to operate over several machines. A «master» system manages the whole application: a single clock supervises and synchronizes those of the various «slaves». The «master» clock can be that of the «master» host, or can come from an external source: the clock of an acquisition board, a GPS clock...

RT Maps technology is independent of the operating system used, even in a distributed configuration. Thanks to this new flexibility, RT Maps can satisfy the processing needs of the most demanding applications.
demanding applications.


The library: components ready to be employed

The RT Maps libraries are sets of components which provide the elementary functions necessary to most applications:

- data acquisition;
- standard protocol decoding;
- data processing;
- real-time display;
- data recording and replaying;
- data export;
- interfacing with third-party software;
- communication.

The software supports the majority of the sensors available on the market. Intempora provides many modules to interface sensors and actuators of very different natures and performances: if a piece of hardware is suitable for connection to a computer, its integration into an RT Maps application is possible.

Examples of supported sensors: webcams, DV camcorders, FireWire DCAM digital cameras, analogue and digital cameras, stereo-vision devices, GPS, inertial measurement units, radars, laser telemeters, CAN bus, analogue and digital input/output devices, microphones…

Examples of supported actuators: analogue and digital controls, electric motors, stepper motors, brakes and other car systems, barriers, hooters, lights, variable message signs…

New components are regularly added to the libraries.

Intempora

More information? Please contact us!

Marketing: Gilles MICHEL
Technical aspects: Nicolas du LAC
Tel: +33 1 41 90 03 59
Web site: www.intempora.com
Email: info@intempora.com

The SDK extension: breaking the limits

The «Software Development Kit» allows users to create their own components. Programming is done in C++ and is facilitated by skeleton code and macros. Moreover, a complete API (Application Programming Interface) gives access to all the engine's functions while remaining independent of the operating system (for the file system or real-time programming, for example).

Unless specified otherwise, each component runs in its own thread. The developer is thus freed from the data protection problems inherent to the concurrent accesses of multithreaded applications. Many data exchange policies between components are integrated (circular buffers, non-blocking exchange, sub-sampling, etc.), offering the behaviour that fits each application type (recording, real-time processing, data conversion, control...). The user can, for example, expose variable parameter settings or make the number of inputs/outputs offered by a graphical component dynamic.

The SDK includes the API's complete documentation, together with example and skeleton code for the development of specific components. Finally, integrated assistants are included in the development environments (such as Microsoft's Visual Studio); they facilitate the generation of compilation projects.

RT Maps: a responsible and durable choice, an opening towards new projects and a renewed effectiveness...

Test RT Maps Version 3…


DES (Data Exchange System), a publish/subscribe architecture for robotics

C. Riquier, N. Ricard, C. Rousset
ECA
Rue des Frères Lumière
83130 La Garde

Abstract

This paper presents ECA's software architecture for robotics projects such as Miniroc or AUVs. This architecture is made of two parts:

- the software architecture, the tool used to exchange data between processes: the DES (Data Exchange System);
- the functional architecture, the organization of the processes that fulfil the robot's functions.

This paper presents the DES layer and gives an example of its use. The DES is based upon a publish/subscribe design.

A process is a publisher of the data it "creates", and a subscriber to the data it needs. It does not need to know which process will publish the data it needs, nor which processes will use the data it publishes. Communications run over TCP/IP channels established directly between the publishers and the subscribers. A "Mediator" manages all communications between processes: every process asks for what it wants and tells what it can give, and the Mediator tells everyone who can give what they want. All communication links are then established directly between processes. At any time, a process can ask for something more, or stop sending a datum. The Mediator also deals with the disappearance or arrival of processes.

The same data can be published by several publishers with different priorities, and data can have a period of validity. Processes can be on different computers, and they do not need to know where the other processes are. This architecture allows the modular hot-plugging of payloads on robots.
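A minimal single-process sketch of this hand-shake is given below; the class and publication names are invented, and the real DES runs over TCP/IP between separate processes, so this is only a picture of the matching logic.

    # Toy mediator: it matches declared publications with declared interests,
    # then steps aside so data flows directly from publisher to subscriber.
    class Publisher:
        def __init__(self):
            self.links = {}  # publication name -> list of subscriber callbacks

        def send(self, name, value):
            for callback in self.links.get(name, []):  # direct link, no mediator
                callback(value)

    class Mediator:
        def __init__(self):
            self.publishers, self.interests = {}, {}

        def declare_publication(self, name, publisher):
            self.publishers[name] = publisher
            self._match(name)

        def declare_interest(self, name, callback):
            self.interests.setdefault(name, []).append(callback)
            self._match(name)

        def _match(self, name):
            # Tell the publisher who wants its data; afterwards the link is direct.
            publisher = self.publishers.get(name)
            if publisher:
                publisher.links[name] = list(self.interests.get(name, []))

    mediator, gps = Mediator(), Publisher()
    mediator.declare_interest("position", lambda v: print("pilot got", v))
    mediator.declare_publication("position", gps)
    gps.send("position", (43.12, 5.95))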

I - INTRODUCTION

Several years ago (last century, in fact!), all ECA robots were based upon point-to-point client-server architectures. Most of them had only two processes: one for the HMI and one embedded in the vehicle.

Around the year 2000, with the increasing complexity of robots (more sensors, more autonomous behaviours, more computers, ...), the need for a distributed, reusable and flexible architecture arose.

Among the available architecture concepts, we chose the publish-subscribe one; the first chapter compares three kinds of possible architectures.

The first publish-subscribe architecture we developed was named "BDC" (Broadcast Data Center), and our first AUVs were built around it. After two years of use, and with robots more and more demanding in terms of "real-time" performance, we specified some improvements to the BDC, which was then renamed "DES".

This paper describes only the DES architecture, which now equips our AUVs and the "Miniroc" ground robots (military robots for the DGA). The last chapter of the document illustrates the use of the DES with an example extracted from the Miniroc architecture.

II – COMPARISON OF SEVERAL COMMUNICATION ARCHITECTURES

Distributed real-time applications have unique communication requirements. They must handle different kinds of data flow, such as repetitive updates, single-event transactions and reliable transfers; many nodes intercommunicate, making the data flow complex; and dynamic configuration changes occur as nodes leave and join the network. Strict timing requirements further complicate the entire design.

Traditional client-server architectures route all communications through a central server, which makes them ill-suited to real-time data distribution. Publish-subscribe architectures, designed to distribute data to many nodes simultaneously and anonymously, have clear advantages for real-time application developers: they are more efficient, handle complex communication flow patterns, and map well onto underlying connectionless protocols such as multicast.

Distributed application developers have several choices for easing their communications effort:

• client-server, either in the traditional form of a central server node intermediating for a set of clients, or in its updated manifestation: distributed objects and object brokers;
• publish-subscribe, in the form of middleware that distributes data ("publications") anonymously among applications in one-to-many patterns.

II.1 Client-Server Architectures

Client-server communications generalize the data flow by allowing one server node to connect simultaneously to many client nodes; client-server is thus a many-to-one architecture. It works well when the server holds all the information. Examples of client-server applications include database servers, transaction processing systems and central file servers.

When the data is produced by multiple nodes for consumption by multiple nodes, client-server architectures are inefficient because they require an unnecessary transmission step: instead of going directly peer-to-peer, the data must pass through the server. The transmission to the server also adds an unknown delay to the system. Furthermore, the server can become a bottleneck, and it presents a single point of failure. Multiple-server networks are possible, but they are very cumbersome to set up, synchronize, manage and reconnect when failures occur; this resolves the bottleneck and point-of-failure exposures, but only at the price of increased inefficiency and bandwidth consumption.

II.2 Object Brokers

CORBA and DCOM are the best-known examples of distributed object architectures. Distributed object architectures are middleware that abstracts the complex network communication functions and promotes object re-usability, two features that substantially reduce the programming effort. However, object brokers do not address several data-flow characteristics of distributed real-time applications: they offer little support for controlling the properties that govern deterministic data delivery (especially important for signal data), and they are cumbersome and unwieldy when programming dynamic, many-to-many flow patterns. This largely derives from the inherent and fundamental reliance of distributed objects on a broker to route requests, and from their object management requirements.

II.3 Publish-Subscribe
The publish-subscribe architecture is designed to simplify one-to-many data-distribution requirements. In this model, an application "publishes" data and "subscribes" to data. Publishers and subscribers are also decoupled from each other. That is:
• Publishers simply send data anonymously; they do not need any knowledge of the number or network location of subscribers.
• Subscribers simply receive data anonymously; they do not need any knowledge of the number or network location of the publisher.
An application can be a publisher, a subscriber, or both. Publish-subscribe architectures are best suited to distributed applications with complex data flows.
The primary advantages of publish-subscribe for application developers are:
• Publish-subscribe applications are modular and scalable. The data flow is easy to manage regardless of the number of publishers and subscribers.
• An application subscribes to data by name rather than to a specific publisher or publisher location. It can thus accommodate configuration changes without disrupting the data flow.
• Redundant publishers and subscribers can be supported, allowing programs to be replicated (e.g. multiple control stations) and moved transparently.
• Publish-subscribe uses bandwidth much more efficiently than client-server.
Publish-subscribe architectures are not good at sporadic request/response traffic, such as file transfers. However, this architecture offers practical advantages for applications with repetitive, time-critical data flows.


III – DES: Data Exchange System
III-1 – General principles of publish-subscribe architectures
Several main features characterize all publish-subscribe architectures:
Distinct declaration and delivery. Communications occur in three simple steps:
• The publisher declares its intent to publish a publication.
• The subscriber declares its interest in a publication.
• The publisher sends a publication issue.
Named publications. PS applications distribute data using named publications. Each publication is identified by a name under which a publisher declares and sends the data and a subscriber declares its interest.
Many-to-many communications support. PS distributes each publication issue simultaneously in a one-to-many pattern. However, the model's flexibility helps developers implement complex, many-to-many distribution schemes quite easily. For example, different publishers can declare the same publication, so that multiple subscribers can get the same issues from multiple sources.
Event-driven transfer. PS communication is naturally event-driven. A publisher can send a datum as soon as it is ready; a subscriber can block until the datum arrives. The publish-subscribe services are typically made available to applications through middleware that sits on top of the operating system's network interface and presents an application programming interface (see Figure 1). The middleware exposes a publish-subscribe API, so that applications make just a few simple calls to send and receive publications, while the middleware performs the many complex network functions that physically distribute the data.
Figure 1. Generic Publish-Subscribe Architecture
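To make the three-step model concrete, here is a minimal, self-contained sketch of such a publish-subscribe layer. All the names (Middleware, declare_publication, subscribe, publish) are our own illustrative assumptions, not the DES API nor that of any real middleware, and the sketch fans issues out in-process rather than performing the network functions a real middleware would.

// Toy publish-subscribe middleware: three steps, named publications,
// anonymous one-to-many delivery. Purely illustrative.
#include <functional>
#include <map>
#include <string>
#include <vector>

struct Middleware {
    using Callback = std::function<void(const std::string&)>;
    std::map<std::string, std::vector<Callback>> subscribers;

    // Step 1: the publisher declares its intent to publish (a real
    // middleware would register the name with a mediator or daemon).
    void declare_publication(const std::string& name) { subscribers[name]; }

    // Step 2: a subscriber declares its interest in a named publication.
    void subscribe(const std::string& name, Callback cb) {
        subscribers[name].push_back(std::move(cb));
    }

    // Step 3: the publisher sends an issue; the middleware fans it out
    // to every subscriber, which never learns who published it.
    void publish(const std::string& name, const std::string& data) {
        for (auto& cb : subscribers[name]) cb(data);
    }
};

Note that the publisher and the subscribers only ever share the publication name, which is what makes the configuration changes discussed above transparent to the data flow.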


III-2 – DES overview
All the processes of the architecture are called "agents". One special agent is essential: the Mediator, the "heart" of the DES. This daemon is connected to all the agents running in the system. Any time a new data flow is required or an existing data flow disappears, the Mediator sends the agents concerned the pieces of information they need to establish or destroy the data flow. The Mediator itself thus neither sends nor receives any data flow; it just establishes the flows directly from the publisher(s) to the subscriber(s).
Figure 2 shows the life cycle of a data flow: all the transitions between its states are supervised by the Mediator. Figure 3 shows the data flow itself once it is established (in the "Publication" state of Figure 2).
Figure 2: life cycle of a data flow


[Figure 3: data flow publication. The figure shows a DES publisher, a DES subscriber and the Mediator daemon; once the flow is established, the publication of the data goes directly from the publisher to the subscriber.]
Three other system processes can be used:

SWC: SoftWare Controller
This service is a daemon which starts all the agents (including the Mediator) and monitors them from the OS point of view. Some of the agents are defined as "critical": if one of the critical agents crashes, the SWC halts the whole system properly.
DRC: Data Recording Center
This service is a special agent which subscribes to all the data you have configured and records them together with their dates.
NTS: Network Time Synchronization
This service is not really part of the DES architecture, but it is required for the dating of data as soon as the architecture is distributed over several CPUs. We use the NTP (Network Time Protocol) implementation provided with the OS.

III-3 – Different ways to exchange data through DES
The basic principle of publish-subscribe is that the subscriber does not decide when to receive the data: it receives the data when the publisher publishes it. However, the DES has a middle layer between data reception and the calls from the agent functions, which allows several ways to exchange data.

The event publication:
The publisher publishes its data. The subscriber's DES layer receives it and runs the associated callback of the subscriber agent. This lets you synchronize the subscriber's processing on the reception of data. You typically use this to synchronize a perception and guidance agent on the reception of the data from the sensor acquisition agent.
The unsynchronized publication:
The publisher publishes its data. The subscriber's DES layer receives and stores it. The subscriber agent can access the data whenever it needs it; only the last datum received is stored. You typically use this for parameters that you do not need at the moment they are published, but only when you start your own processing. It also lets you read a datum that was published before your agent was started: for example, the kind of robot you are running on, or the parameters of the current camera when you want to apply some vision processing to its images.
The event publication with FIFO:
Same principle as the event publication, but all the received data are stored in a FIFO buffer and one event is generated per datum, even if your previous processing is not yet completed. For example, the subscription to a fire order needs to receive the whole order sequence (FIRE followed by a CONFIRM).
The unsynchronized publication with FIFO:
Same principle as the unsynchronized publication, but all the received data are stored in a FIFO buffer, and each time the agent requests a datum, the oldest received datum is returned.
The event publication with a validity period:
The publisher defines a validity time (T) for its data. An event is generated in the subscriber agent when the datum is received; another event is generated T seconds after the reception, and the datum is then marked invalid. For example, the mobility commands published to the agent dealing with the robot drive use a validity period.
The unsynchronized publication with a validity period:
The publisher defines a validity time (T) for its data. The datum is available to the subscriber during the T seconds following its reception; after that delay, if the subscriber accesses the datum, an invalid access is returned.
The multiple publications without priority:
Several publishers can publish the same data. If you do not define any priority, the data published by all the publishers are received by the subscriber (all the preceding kinds of publication can be used).
The multiple publications with priorities:
Several publishers can publish the same data with different priorities. Only the data from the highest-priority publisher are received by the subscriber.
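As a summary of these exchange modes, the sketch below shows how a subscriber might select one of them. The enumeration and the commented subscribe() call are illustrative assumptions about the shape of such an API, not the actual DES interface.

// The six single-publisher exchange modes described above, expressed as
// subscriber-side options (illustrative).
enum class ExchangeMode {
    Event,                       // callback fired on each reception (last value kept)
    Unsynchronized,              // last value stored, read on demand
    EventFifo,                   // one callback per datum, FIFO buffered
    UnsynchronizedFifo,          // FIFO buffered, oldest datum returned on request
    EventWithValidity,           // callback on reception, invalidation event T s later
    UnsynchronizedWithValidity   // on-demand read, invalid after T s
};

// A fire-order handler must see every datum of the sequence (FIRE then
// CONFIRM), hence the FIFO event mode:
//   des.subscribe("fire_order", ExchangeMode::EventFifo, onFireOrder);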


IV – DES Use Example
A typical illustration concerns the mobility commands of a mobile robot. In our architecture, the agents interfacing with the hardware are named "EV" (for Virtual Equipment). The example consists of:
- the "Laser EV":
o it acquires the laser rangefinder data;
o it publishes them periodically, event-driven by the hardware acquisition.
- the "Vehicle EV":
o it subscribes to all the commands the vehicle is waiting for;
o it publishes all the data coming from the vehicle.
- a "Guidance Agent":
o it subscribes (event-driven subscription) to the Laser EV data; thus, the guidance processing and the mobility command publication are synchronized on the laser data publication;
o it subscribes to the odometry data (unsynchronized subscription): when the agent receives a laser datum, it accesses the last received odometry datum and uses the dates of the data to resynchronize the odometry with the laser data (see the sketch after this list);
o it publishes the mobility commands.
- a "Teleoperation Agent":
o it acquires the operator HMI commands;
o it publishes them periodically, but only when the operator wants to take over manual control of the vehicle.
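The timestamp-based resynchronization performed by the Guidance Agent can be sketched as follows. The paper only says that the dates of the data are used; the constant-velocity extrapolation and all the names below are our own assumptions for illustration.

// Bring the last received odometry datum to the laser acquisition time.
struct Stamped { double t; double x; double vx; };  // timestamp [s], position, velocity

double odometryAt(const Stamped& last_odo, double t_laser) {
    const double dt = t_laser - last_odo.t;   // offset between the two timestamps
    return last_odo.x + last_odo.vx * dt;     // constant-velocity extrapolation
}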

Let us focus on the mobility commands data flow:
- two publishers are able to publish these commands, with different priorities:
o higher priority for teleoperation: the operator can supervise the autonomous guidance and take over if a problem occurs;
o lower priority for autonomous guidance.
- the published data have a validity period, so that (as sketched below):
o the vehicle stops in case of communication loss with the HMI;
o the lower-priority publisher takes over again when the higher-priority publisher stops publishing.
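On the subscriber side, the combination of publisher priorities and validity periods amounts to the following arbitration logic. This is a hedged sketch of the behaviour described above, with illustrative names and types, not the DES implementation:

#include <chrono>
#include <map>
#include <optional>

using Clock = std::chrono::steady_clock;

struct MobilityCommand {
    double speed = 0.0;
    double steering = 0.0;
    Clock::time_point received{};
    std::chrono::milliseconds validity{0};
};

class CommandArbiter {
    std::map<int, MobilityCommand> latest_;  // key = priority (teleoperation > guidance)
public:
    void onReceive(int priority, const MobilityCommand& cmd) { latest_[priority] = cmd; }

    // Highest-priority command that has not expired; std::nullopt means
    // every command is stale, i.e. stop the vehicle.
    std::optional<MobilityCommand> select(Clock::time_point now) const {
        for (auto it = latest_.rbegin(); it != latest_.rend(); ++it)
            if (now - it->second.received < it->second.validity)
                return it->second;
        return std::nullopt;
    }
};

When the teleoperation publisher stops publishing, its last command simply expires and select() falls back to the guidance command, which is exactly the hand-back behaviour listed above.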

On this very small example, let us imagine some evolutions:
- You want to decrease the number of CPUs: copy your payload applications onto the vehicle CPU and plug your laser into the vehicle CPU. Everything still works, without any recompilation or reconfiguration.
- You want to use your payload on another robot: plug your whole payload into the Ethernet switch of the other robot and give it the address of the new vehicle's Mediator. If the new vehicle does not already have a DES Virtual Equipment, you only need to develop the hardware interface.

[Figure 4: example of data flow. The Laser EV and the Guidance Agent run on the payload CPU, the Teleoperation Agent on the HMI CPU, and the Vehicle EV and the Mediator on the vehicle CPU.]



Modular Distributed Architecture for Robotics Embedded Systems
Pierre Pomiers, Vincent Dupourqué
Robosoft, Technopole d'Izarbel, 64210 Bidart, France
Tel: +33 5 59 41 53 66
Fax: +33 5 59 41 53 79
E-mail: pierre@robosoft.fr
Abstract
Tomorrow's advanced transportation robotic applications will have to cope with various situations and perform many different tasks in dynamic, changing environments. Furthermore, to find concrete applications in a wide range of user-oriented industrial products, such systems, which embed several computing units, have to cope with an increasing demand for interactivity and to support numerous non-critical pieces of hardware and software. This whole set of capabilities needs to be performed reliably and safely over long periods of time. To this aim, not only advanced programming techniques but also appropriate control architectures are required. For these reasons, Robosoft proposes a set of hardware and software components, developed from our own experience in the field of automatic transportation of people and goods, which can easily be adapted to robotic solutions for outdoor risky interventions.

1 Overview
Most papers concerned with real-time embedded application design present experimental tests showing that the theoretical results obtained from formal analysis match the real-time behavior of the embedded system. But they do not consider the critical aspects of integrating, or interfacing, with other non-real-time-compatible processes such as complex high-level control systems or user-end applications. To master the complexity of computing such application algorithms, the control software is built using a dedicated software environment: iCORE. iCORE relies on the SynDEx 1 data-flow graph formalism (introduced by INRIA), whose objective is to provide rapid prototyping and error-free implementation procedures for distributed and heterogeneous applications.
From this flexible and reliable development approach, we present how our advanced mobile systems can easily be used and customized for implementing outdoor applications while guaranteeing integrity during risky interventions. Each point of the method is mainly discussed through one of the Robosoft transportation products: the robuCAB. It is a car-like mobile platform designed specifically for urban applications as well as for automated transport. Last, in order to illustrate the possible robot operating modes, we focus on software modules covering various needs: autonomous navigation, fleet management, remote control, specific HMI...
2 iCORE development environment
The robotics solutions we describe here make use of custom control architectures (composed of one Intel x86 Linux/RTAI machine and from one to eight Motorola MPC555-based control boards 2) with CAN buses as communication media. This section covers both the development environment and the embedded targets.
1 Synchronized Distributed Executive
2 cb555 boards manufactured by Robosoft as part of its own control system products


2.1 iCORE: an approach based on the SynDEx methodology
The application development method discussed here makes use of both the SynDEx tools and the Robosoft kernels. Developed by INRIA, SynDEx V6 is an interactive graphical software environment (see Fig. 1) with on-line documentation (refer to [3]), implementing the AAA 3 methodology. The services offered by the combination of SynDEx and the Robosoft proprietary kernels are:
• specification of an application algorithm as a conditioned data-flow graph (or interfacing with the compiler of one of the synchronous languages ESTEREL, LUSTRE or SIGNAL through the common format DC);
• specification of a multi-component architecture as a graph;
• heuristics for distributing and scheduling the algorithm on the multi-component architecture with response-time optimization;
• visualization of the predicted real-time performance for sizing the multi-component architecture;
• generation of dead-lock-free executives for real-time execution on the multi-component architecture, with optional real-time performance measurement. These executives are built from a processor-dependent executive kernel; SynDEx presently comes with executive kernels for various digital signal processors, microprocessors and micro-controllers.
The distributing and scheduling heuristics, as well as the predicted real-time diagram, help the user to parallelize his algorithm and to size the hardware while satisfying the real-time constraints. Moreover, as the executives are automatically generated by SynDEx, the user is relieved from low-level system programming and from distributed debugging. This allows optimized rapid prototyping and dramatically reduces the development cycle of distributed real-time applications.
3 Algorithm Architecture Adequation
Fig. 1: Application design example using the SynDEx CAD


The SynDEx development system described above is able to generate executive binaries for various types of targets, including the ones that compose the Robosoft control platform. A Robosoft control architecture typically embeds one or more MPC555-based boards and an Intel x86 real-time Linux computer.
2.2 Robosoft cb555 control board
The Robosoft cb555 board [4] (see Fig. 2) is a stand-alone four-axis controller designed for critical industrial process handling. Built around a 32-bit PowerPC architecture, it provides high computation performance (refer to Table 1 for a detailed description of the board's I/O capabilities).

Fig. 2: The cb555 Robosoft control board
2.3 Robosoft emPC and wsPC computers
Table 1: Robosoft control board connectors description
The context of real-time embedded application programming is quite different from the classical one a user usually meets: the notion of "real time" is not present in standard Linux kernels. Such real-time dedicated mechanisms can be added by installing an RTOS 4 on top of the standard Linux kernel [1][2]. Robosoft based both the emPC and wsPC product ranges (for embedded and workstation computers respectively) on RTAI, which is widely used in the embedded industry for prototyping and which is supported by very active companies.
The basic principle of RTAI is rather simple. RTAI provides deterministic and preemptive performance while still allowing the use of all the standard Linux drivers, applications and functions. To this aim, RTAI decouples the mechanisms of the real-time kernel from those of the general-purpose Linux kernel, so that each can be optimized independently and so that the real-time kernel can be kept small and simple. In our case, the primary function of the RTAI kernel is to provide real-time tasks with direct access to the raw hardware, so that they can execute with minimal latency and maximal processing resources when required.
4 Real-Time Operating System
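To give a flavor of the fixed-rate tasks such an RTOS hosts (for instance the 100 Hz speed and steering profile loop described in the next section), here is a minimal periodic-loop sketch. For portability it uses plain POSIX clock_nanosleep() with an absolute deadline instead of the actual RTAI task primitives; under RTAI the same structure would rely on the real-time kernel's periodic task services.

#include <time.h>

static void step_control_law(void) { /* read sensors, compute commands, write actuators */ }

int main(void) {
    const long period_ns = 10 * 1000 * 1000;  // 10 ms period -> 100 Hz
    struct timespec next;
    clock_gettime(CLOCK_MONOTONIC, &next);
    for (;;) {
        step_control_law();
        // Sleep to an absolute deadline so that jitter does not accumulate.
        next.tv_nsec += period_ns;
        if (next.tv_nsec >= 1000000000L) { next.tv_nsec -= 1000000000L; next.tv_sec += 1; }
        clock_nanosleep(CLOCK_MONOTONIC, TIMER_ABSTIME, &next, NULL);
    }
}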

3 The robuCAB implementation
The robuCAB platform (see Fig. 3) is a mobile robot offering great transportation capabilities, able to be driven in urban environments (such as cities, campuses...). Consequently, this platform fits a wide range of missions: autonomous transportation, tele-operation, fleet supervision... The platform control is handled by a heterogeneous architecture composed of both cb555 boards and an embedded PC. Fig. 4 shows how the hardware units are organized: two cb555 controllers are dedicated to the low-level critical loops, while the embedded PC focuses on advanced algorithms and on interfacing with asynchronous devices such as the wireless network, GPS or laser scanner... Real-time communication sequences between the cb555 boards and the PC rely on a CAN bus, while higher-level communications (towards supervisors, network databases, the web server, as well as any other non-real-time devices) are realized through classical Ethernet links or serial lines.

Fig. 3: The robuCAB platform
Fig. 4: robuCAB control architecture
Thanks to a very modular hardware and software structure, the robuCAB can handle a wide set of applications. As an illustration, let us focus on Figure 5, which represents the structure of a supervised transportation application. Several software levels are depicted, from the most critical 1 kHz task to the fully aperiodic web-oriented processes:
• 1 kHz control loops driving the four independent motors and the two independent steering jack servos;
• a 100 Hz loop dedicated to speed and steering profile computation;
• an asynchronous level (running under Linux) handling both the DGPS and the LMS 5;
• aperiodic supervision processes with both web page and database updates (running on one or more computers connected to a network).
Relying on the iCORE environment (merging the best of the SynDEx programming methodology and of the Robosoft modular proprietary kernels), the implementation of these software levels leads to highly predictable and safe application execution, as well as to safe (non-blocking) interactions between the software levels.
Fig. 5: robuCAB application structure
5 LMS SICK laser scanner

4 iCORE approach: a decisive step towards both hardware and software modularity
With the robuCAB example, we have shown that fairly complex and exhaustive software applications may be implemented, covering all the needs of transportation missions. The iCORE approach we propose appears to be very well adapted to robotics software development and, moreover, brings a very interesting modular concept. With the iCORE approach, modularity is twofold.
First, relying on the SynDEx methodology, iCORE provides users with hardware modularity. This means that once an application is written for a given architecture (composed of a set of cb555 boards, PCs and CAN buses), it is able to run on an extension of this architecture without any modification. Hence, extending the computing capabilities of a robotics platform is totally effortless: the application is automatically redistributed in order to exploit the new architecture resources.
Secondly, the iCORE approach also provides users with software modularity. As shown in Figure 1, in our context an application is designed as a block diagram. Each block contains either a basic feature (from the iCORE kernels) or another set of blocks implementing a more complex feature. Thus, the iCORE approach makes software parts easily reusable, simply by copying and pasting subsets of blocks.

Fig. 6: Example of possible platform modularity

Figure 6 gives a striking illustration of the possibilities offered by iCORE modularity. On the left side, a four-wheeled platform is shown, composed of two pods. Each pod is driven by the same piece of control software (running on its own cb555). Hence, programming the control of a six-wheeled version (with three pods) is nothing but duplicating a diagram subset and adding a new cb555 to the architecture graph; code distribution and execution require no other step. In the same way, assuming a 6-DOF robot arm control has previously been realized using the iCORE approach, adding such an arm to the six-wheeled platform is nothing but merging the two application diagrams together and modifying the architecture description in order to fit the right set of cb555 boards and PCs.

Conclusion
Relying on this flexible and reliable development methodology, the iCORE approach presented here brings efficient help to users and researchers interested in mastering the complexity of application implementations. Thanks to its own experience, Robosoft has adapted robotic solutions to the field of automatic transportation of people and goods, leading to a dedicated product range: rugged hardware components (the cb555 and various types of embedded PC), as well as a set of real-time software components (control loops, I/O primitives, laser and wire guidance, obstacle detection, ...). As shown through the given examples, the iCORE approach allows users to easily implement and customize transportation applications. Finally, making use of the programming methodology introduced by SynDEx, iCORE guarantees the integrity of application execution during risky interventions.

References
[1] E. Bianchi, L. Dozio, P. Mantegazza. DIAPM RTAI – Real Time Application Interface. Dipartimento di Ingegneria Aerospaziale, Politecnico di Milano.
[2] RTLinux – The Realtime Linux.
[3] Thierry Grandpierre, Christophe Lavarenne, Yves Sorel. Modèle d'exécutif distribué temps réel pour SynDEx. 1998.
[4] Motorola Semiconductor Technical Data. MPC555 Product Preview, PowerPC Microcontroller. Motorola Inc., 1998.


Remote operation kit with modular conception and open architecture: the SUMMER concept
L. WALLE, Defence R&D & Europe business manager
ECA – Defence & Offshore, Land systems
Bât Apollo, 4 rue René Razel, 91892 ORSAY CEDEX
lw@eca.fr – http://www.eca.fr
Abstract
This article presents an original command and control architecture, realized for the French MoD (DGA), aiming at operating any robot through a low-level controller acting as an abstraction layer between actuators and high-level orders. The architecture ensures the reliability of the remotely operated system while allowing high-level mechanisms to assist the operator in assisted mode, or to plan semi- or fully autonomous mobility actions while the operator concentrates on the inspection or observation mission, with the ability to keep control of the machine in any case.
The Man Machine Interface has also been a major achievement of this project, since profile settings allow all controls to be fully configurable and to be instantly modified or adjusted, and new functions to be added.
Lastly, a brief description of the company is provided to know more about ECA.

A. Reasons for modularity: the aim of the SUMMER concept
The Cybernetix company has for years been developing several prototypes aimed at inspection and observation of suburban regions, as well as at remote operation in harsh or hostile environments.
One of our current realizations concerns a global solution for the French MoD (DGA/ETAS), able to remotely operate different types of vehicles with the same command & control architecture, and providing the readiness and mechanisms to implement autonomy functions as well as to test new MMI configurations. The following pictures show the variety of vehicles to be adapted.
All photos courtesy of DGA
Picture 1: Summer vehicles
The problem is therefore of several kinds:
o size: because the range of vehicles goes from a truck to a quad, the solution has to be compact;
o ruggedness: because it must operate in all terrain;
o reliability: because it is ultimately intended for use by military forces in strategic operations, eliminating the possibility of any error;
o modularity: because the kit must be adapted and fitted to different platforms, and components must be replaced or added easily;
o openness: because the final objective is to find the right architecture and set of sensors, many tests must be done, so the task of implementation must be eased as much as possible;
o adaptivity: because, as a prototype, several configurations and improvements or modifications must be made in a short time to make it usable.
Finding the right compromise between all of these constraints has been the objective of the study phase, both in terms of forecasting what will be needed in the near future and in terms of ease of use for pilots, who are not specialists in using computers and a powerful but complicated MMI. The results are shown in the next paragraphs.

B. Command & Control architecture
Components
The command & control architecture is composed of several subsystems:
• the Command & Control System (C²S), i.e. the place from which the operator gives the orders or supervises what is happening. This subsystem is subdivided into mobility and mission dedicated work stations; further details will come in the next chapter.
• the on-board layers: a cPCI computer architecture is used to provide all the services needed, from the interpreter to the physical command of the actuators. It is divided into the following components:
o a communication interface, which deals with the transmission protocol between the C²S and the low-level on-board computer. Thus, a change in the protocol from cyclic to acyclic transmission, or a change in the delivery rate of the cyclic packets, will result only in a change in this layer (and likewise in the C²S, of course);
o a supervisor (SUP), in charge of the management of all the security aspects and of the arbitration of possible conflicts between the components;
o a low-level controller (LLC), whose role is the management of all the automatons for remote operation, meaning placing the commands in the correct...;
o a platform controller (PFC), which acts as an abstraction layer between the platform and the low-level orders. This layer is responsible for providing the link from the orders to the physical command of the actuators through the various installed boards (analog and digital I/O, counters, serial links, ...). This is the only platform-dependent part of the architecture, which needs to be adapted for each different type of vehicle (apart from the specific transmission);
o a high-level controller (HLC), which can communicate with the LLC to get information from the platform, run value-added algorithms and give orders back to the LLC for driving the robot, or simply give feedback to the user via the MMI through the SUP, to assist the driving or to raise an alert when, for example, automatic obstacle detection is triggered. To prevent any malfunction caused by a high-level function, and also to maximize the computation power available for the algorithms, a dedicated physical computer is provided for the high-level functions: in case of a hang-up of a function or a module, or in case of a severe software failure, the low level thus stays alive and allows the user to keep control of the platform.

The following scheme sums up the interactions between all the components of the system.
[Picture 2: the Command & Control architecture. The diagram shows the Command & Control System exchanging TC/TM with the on-board communication interface; the Supervisor manages the component exchanges and the security/reliability aspects; the Low Level Controller manages the automatons; the High Level Controller hosts the autonomy modules and the driving help; and the Platform Controller handles the vehicle-specific command management down to the actuator/sensor drivers.]

After identification, the functions have been analyzed to elaborate the state machines. Below is an example of such an automaton, showing the interactions and data exchanges between the components.

[Picture 3: automaton example, high-level module activation. The original sequence diagram (labels in French) shows the experienced pilot/teleoperator powering up, choosing a mode and sending telecommands through the communication interface; the Supervisor requests identification, activates or stops modules on the High Level Controller, configures the Low Level Controller and returns status reports.]

C. Implementation
Hereafter are shown the low-level and high-level computers, based on a cPCI architecture with all the acquisition and communication cards (analog and digital I/O, counters, RS232, RS485, RS422).
Picture 4: Low-level (left) and high-level (right) computers


In terms of on-board sensors, the vehicle carries:
- 4 driving cameras (1 front, 1 rear, 2 lateral)
- an observation camera (with zoom control)
- a pan & tilt unit with shock absorbers
- a 6-DOF inertial sensor giving speeds and accelerations along the X, Y and Z directions
- a differential GPS
- inclinometers in the axial and transverse directions
- odometry (ultrasound radar)
In the central unit, some space is left for additional sensors, and a dedicated connector is available, providing the needed digital and analog I/O as well as various types of power supply (+12V, +5V, ...), ensuring a quick connection and the testing of any sensor within a short time.

Concerning transmissions, the protocol used for TeleCommands (TC) and TeleMeasures (TM) is RS422 at 19.2 kbauds. At this rate, the maximum transmission lag is 100 ms, sufficient for driving a vehicle at 50 km/h (max). The installed system is analog and the reception range is about 500 m. To avoid any loss of control of the platform, an immediate stop is activated in case of transmission loss lasting longer than a threshold period (to protect against micro-cuts). The security chain also comprises an independent emergency stop besides the normal ES, watchdog, immediate stops, ... It uses the UHF band within the range [400..450] MHz.
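The threshold-based stop can be sketched as a simple software watchdog. The structure below is an illustrative assumption (including the 300 ms figure used in the usage comment); the paper does not give the actual threshold value or implementation:

#include <chrono>

using Clock = std::chrono::steady_clock;

class LinkWatchdog {
    Clock::time_point last_tc_ = Clock::now();
    std::chrono::milliseconds threshold_;
public:
    explicit LinkWatchdog(std::chrono::milliseconds threshold) : threshold_(threshold) {}
    void onTelecommand() { last_tc_ = Clock::now(); }  // call on every received TC frame
    bool mustStop() const { return Clock::now() - last_tc_ > threshold_; }
};

// In the platform controller's periodic loop, e.g. with a 300 ms threshold
// so that radio micro-cuts do not halt the vehicle:
//   LinkWatchdog watchdog{std::chrono::milliseconds(300)};
//   ...
//   if (watchdog.mustStop()) applyImmediateStop();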

The modular conception also allows the use of a digital transmission system with a 5 km range and a relay system, instead of the analog one. The only procedure to conduct is to connect a single plug containing all the cables on the vehicle, to connect the cables in the control station as well, and to change the transmission profile in the interface, easily done through the tactile screen.
Concerning the videos, 2 communication links are provided to send PAL-format videos at full rate:
- 1 for mobility purposes, consisting of a proprietary card connected to 1 to 4 cameras and allowing any layout of the pictures, thus making it possible to drive with the front video while keeping the lateral views in the corners to be alerted in case of danger. When the reverse gear is activated, the main picture switches from the front to the back camera.
- 1 for the observation camera, intended to be used by the navigator.
The range [2.2..2.4] GHz is used for the videos.

D. Adaptive and modular MMI implementation
Picture 5: from mock-up to realization


The building of a Man Machine Interface for such a complex system is obviously an iterative process, which has been conducted together with the final client to find the best ergonomic compromise, making the interface user-friendly while guaranteeing access to the security organs at any time through dedicated areas of the tactile screen and dedicated mechanisms (dead-man lever on the joystick, watchdog, emergency stop on loss of transmission, ...).
Picture 5 illustrates the matching between the various mock-up versions presented, both in terms of architecture and in terms of the graphic aspect of the buttons and their disposition across the different screens.
The composition below presents the various functions of the MMI.
Picture 6: MMI operated


E. Profile management
The aim of the profile management tool is to allow the user to configure his environment. The adopted philosophy is a Windows one: it uses the registry principles, with a list view containing the different categories and all the controls beneath. The following pictures illustrate the realization.
Depending on the mode in which the MMI is used, different privileges are granted, thus preventing hazardous modifications by inexperienced users. The visibility of the parameters thus follows the mode; a lock indicates a category that cannot be modified.
All elements can be customized, from the video size to the position of the bitmaps, or the alarm thresholds, and so on.
Picture 7: profile customization


F. Company profile
ECA (http://www.eca.fr) is a high-tech company located in Toulon, in the south of France, specialized in defence and civil robotics, industrial automation, systems and information. Established in 1936, ECA has six offices in France and four subsidiaries. Two of them are located in France (HYTEC, specialized in remote control systems for inspection and intervention in hostile environments, and ECA AERO, which is focused on automated systems for aeronautics), one in England (ECA CSIP, oriented towards automated systems for harsh environments: remote technology, defence, environmental systems), and one in Turkey (OD ECA, support and equipment for defence).
ECA currently employs around 300 people. The main customers are military clients, with more than 20 national navies, and companies in the aeronautics, automobile, offshore oil and gas, and nuclear power sectors. ECA is the market leader in subsea robotics for mine warfare.
ECA also builds ground robots for the French Army and intends to re-use the technology in civil applications: beach cleaning has been identified as a major application. To define the prototype, ECA provides the knowledge for all the electronic parts (the sensors and the artificial intelligence), the engine and the frame. As soon as the studies are completed and the prototypes developed, ECA uses the acquired expertise to develop and manufacture small series of repetitive products.
ECA has significant means in terms of Research & Development, including mechanical, hydraulic, electronic and information engineering and design, as well as specialised workshops (optics, magnetism...) permitting the company to conceive and fabricate demonstrators or prototypes. ECA collaborates with a number of research organisations and universities in France and abroad to maintain a very high level of innovation in its systems.
The Parisian premises of ECA, formerly known as the Cybernétix Homeland & Security branch, which will be in charge of the project, have the following activities:
• With a unique know-how in remote-operated systems and mobile robots for interventions in hostile environments, ECA offers standard systems and equipment as well as turnkey solutions. The strategy of ECA, as a robotics and automation specialist, is based on the complementarity between technological research and the production of robotized equipment and systems in market "niches" with strong potential.
ECA develops and manufactures autonomous and remote-controlled mobile robots for the French Defence and Civil Security Services, for operations in difficult environments and for industrial applications. ECA offers innovative solutions for extending human actions at inaccessible depths (acoustically guided AUVs, Autonomous Underwater Vehicles, for subsea applications, bringing significant savings in installation and maintenance operations) or in hostile environments (remote-controlled mobile robots especially dedicated to handling and neutralizing explosives, as well as to operations in the nuclear, bacteriological or chemical fields). Designed to carry out remote-controlled inspection and intervention operations in complete safety for the operator, these robots can easily be transported in a car, a helicopter or a ship. They are equipped with different types of manipulators and monitoring systems (measurement systems, camera and vision systems, location systems, etc.) depending on the application. ECA also has an important background in large transport systems for container manipulation, through several European RTD projects.


Session "Academic"



A multi-level architecture controlling robots from autonomy to teleoperation
Cyril Novales, Gilles Mourioux, Gerard Poisson
Laboratoire Vision et Robotique, 63 av. de Lattre de Tassigny, F-18000 Bourges
Cyril.Novales@bourges.univ-orleans.fr
Abstract
This paper presents a specific architecture, based on a multi-level formalism, to control either autonomous or teleoperated robots, developed at the Laboratoire Vision et Robotique. This formalism precisely separates each of the robot's functionalities (hardware and software) and provides a global scheme to position them and to model the data exchanges among them. Our architecture was originally built from the classical control loop. Two parts can thus be defined: the Perception part, which manages the processing of the incoming data (the sensor measurements) and the construction of models, and the Action part, which manages the processing of the controlled outputs. These two parts are divided into several levels and, depending on the robot, the control loops to be performed are located at different levels: from the basic ones, i.e. level 0, composed of the articulated mechanical structure, and level 1, which performs the actuator servoing, to the highest one, i.e. level 5, which manages the various missions of the robot. The higher the level, the longer the loop reaction time. Each level clusters, in its respective part, specific modules which pursue their own goals. This general scheme permits the integration of different modules issued from various robot control theories. This architecture has been designed to model and control autonomous robots. Furthermore, a third part, called the "teleoperated part", can be added and structured in levels similarly to the two other parts. Distant from the robot, this teleoperated part is managed by a human operator who controls the robot at various levels: from the first level (basic remote control) to the upper one (where the operator only controls the robot's mission).
Hence, this architecture merges two antagonistic concepts of robotics, teleoperation and autonomy, and allows a sharp distribution between these two fields. Finally, this architecture can be used as a main frame on which to build one's own control architecture, using only a few clusters with dedicated modules. Some examples and experimental results are given in this paper to validate the proposed formalism.

Introduction
When turning a robot on, the problem of its autonomy is quickly raised. However, several types of autonomy can be considered: energy autonomy, behavioural autonomy or smart autonomy. The designer has to choose the way he will give autonomy to his robot. He has mainly two orientations: "reactive" capacities or "deliberative" capacities. These two capacities are complementary in letting a robot perform a task autonomously. The designer must build a coherent assembly of the various functions achieving these capacities: this is the role of the control architecture of the robot. Designing an autonomous robot implies designing a control architecture, with its elements, its definitions and/or its rules.
One of the first authors who expressed the need for a control architecture was R.A. Brooks [1]. In 1986, he presented an architecture for autonomous robots called the "subsumption architecture". It was made up of various levels which fulfil separate, precise functions, processing data from the sensors in order to control the actuators with a notion of priority. It is a reactive architecture in the sense that there is a direct link between the sensors and the actuators (Figure 1). This architecture has the advantage of being simple and thus easy to implement; nevertheless, the priorities given to the different actions to perform are fixed in time and do not allow much flexibility.


Figure 1 – "Subsumption Architecture"
Various other architectures were then developed, based on different approaches, generally conditioned by the specific robot application that the architecture had to control.

The 4-D/RCS architecture developed by the Army Research Laboratory [2] has as its main characteristic that it is made up of multiple computational nodes called RCS (Real-time Control System). Each node contains four elements, performing the four following functionalities: Sensory Processing, World Modeling, Behavior Generation and Value Judgment. Some nodes contribute to perception; others contribute to planning and control. These nodes are structured in levels, in which one can find the influence of reactive behaviors in the lower levels and of deliberative behaviors in the higher levels. The general management is carried out via communications using a specific language, NML (Neutral Messaging Language), and according to the decisions made by the Value Judgment modules.

The Jet Propulsion Laboratory developed, in collaboration with NASA, its own control architecture called CLARAty [3]. Its principal characteristic is to free itself from the traditional three-level diagram (Functional, Executive, Path-Planner) and to develop a solution with only two levels, which represent the functional level and the decisional level. A specific axis integrates the concept of granularity of the architecture, compensating for the difficulties of understanding due to the reduction of the number of levels (Figure 2). One of the interests of this representation is to work, at the decisional level, on only one model emanating from the functional level. The decomposition of this functional level into objects is described with the UML formalism (Unified Modeling Language), which allows an easier realization of the decisional level.

Figure 2 – Two-level architecture
The LAAS architecture (LAAS Architecture for Autonomous Systems) [4] is made up of three levels: decisional, executive and functional (Figure 3). Its goal is to homogenize all mobile robotics developments and to enable the re-use of already designed modules.

Figure 3 – LAAS architecture
All the modules of the functional level are encapsulated in a module automatically generated by GenoM. These interact directly with the actuators and with the other modules of the functional level. The higher level is an execution controller (Requests & Resources Checker); its main function is to manage the various requests emitted by the functional level or the decisional level. The operator acts only at the decisional level, by emitting missions which depend on the information coming from the lower levels. This architecture has an important modularity, even if the final behavior is related to the programming of the execution controller.

R.C. Arkin describes and uses a hybrid control architecture called AuRA, for Autonomous Robot Architecture [6], including a deliberative part and a reactive part. It is made up of two parts, each using distinct methods to solve problems (Figure 4). The deliberative part, which uses artificial intelligence methods, contains a mission planner, a spatial reasoner and a plan sequencer. The reactive part is based on the sensors. A "schema controller" controls the behavioral processes in real time before they are sent to a "low-level control system" for execution. The deliberative level stays in standby mode and is activated only if the reactive part reports that the task execution is impossible. The levels are progressively activated according to the needs.
Figure 4 – AuRA architecture


A. Dalgalarrondo [7], from the DGA/CTA, proposed another architecture. It is a hybrid control architecture including four modules: perception, action, attention manager and behavior selector (Figure 5). It is based on sensor-based behaviors chosen by a behavior selector. The "perception" module builds models using processing that is activated or inhibited by the "attention manager" module. The "action" module consists of a set of behaviors controlling the robot's effectors. A loop is carried out with the information collected by the perception part; this is particularly necessary for low-level actions.
Figure 5 – DGA architecture
The "attention manager" module is the organizer of the control architecture: it checks the validity of the models, the occurrence of new facts in the environment, the various processing in progress, and finally the use of the processing resources. The "behavior selector" module must choose the robot's behavior according to all the information available and necessary to this choice: the fixed goal, the action in progress, the available representations, as well as the temporal validity of the information.

The DAMN architecture (Distributed Architecture for Mobile Navigation) results from work undertaken at Carnegie Mellon University (Figure 6). Its development was a response to navigation problems. The principle is as follows: multiple modules simultaneously share the robot control by sending votes, which are combined according to a system of weight attribution. The architecture then chooses the controls to send to the robot by a fusion of the possible solutions [20].
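The vote-fusion principle can be sketched in a few lines; the number of candidate commands, the weights and the names are illustrative assumptions, not DAMN's actual parameters:

#include <array>
#include <cstddef>
#include <vector>

constexpr std::size_t kCandidates = 5;          // e.g. hard left ... hard right
using Votes = std::array<double, kCandidates>;  // one vote in [-1, 1] per candidate

struct Behaviour {
    double weight;  // importance attributed by the arbiter
    Votes votes;    // this behaviour's preferences over the candidates
};

std::size_t arbitrate(const std::vector<Behaviour>& behaviours) {
    Votes total{};  // weighted sum of the votes per candidate
    for (const auto& b : behaviours)
        for (std::size_t i = 0; i < kCandidates; ++i)
            total[i] += b.weight * b.votes[i];
    std::size_t best = 0;
    for (std::size_t i = 1; i < kCandidates; ++i)
        if (total[i] > total[best]) best = i;
    return best;  // index of the command sent to the robot
}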

Figure 6 – DAMN architecture
This architecture imposes the dominating presence of a single module which decides the procedure to follow, which forces all the evolution capabilities of the robot to be concentrated there. This mode of control does not make it possible to understand or anticipate the probable behavior of the robot when faced with an unexpected situation.

Kim Hong-Ryeol et al. suggested a five-level hierarchy for an architecture patented in 2005 [8]. The physical level represents the robot; the level above is the function level, an assembly of software components. The "executive level" is level 3 and is composed of "local agents" which are managed by the "real-time manager". The level above is the "planning level", which includes on-line and off-line modes. At the top of the architecture is the "design level", which gives the designer the possibility of carrying out modifications of the architecture interactively.

Principle of the proposed architecture<br />

The architecture proposed here is based on the same architectural principles that have been suggested since the nineties. It relies on the concept of levels initially developed by R. Brooks, which also appears in architectures such as AuRA or the LAAS architecture. Similarly to the latter, we have an upstream flow of information coming from the robot and corresponding to its perception, and a downstream flow going down to the robot and corresponding to the control part. The specificity is to structure this robot control architecture in levels, similarly to communication architectures such as the ISO/OSI model (Figure 7). Each level can communicate with the higher or lower level through data transfers which follow a predefined format and processing according to the given robot application.

Figure 7 – Architecture of the Open Systems Interconnection (ISO) model

However, contrary to communication architectures where data must imperatively flow through all levels, in the proposed architecture data can either pass through the levels to be processed, or be transmitted directly at the same level from the upstream part to the downstream part. Thus, data do not have to follow a unique path: they can be routed via multiple paths to perform various control loops. Even when they involve different levels, all these control loops share a common path through the articulated mechanical system (AMS) (Figure 8). The articulated mechanical system corresponds to the physical part of a robot.

Figure 8 – Data flow inside a robot

The control loops are interwoven and pass through a varying number of levels. The loops of a low level are faster than the loops of a higher level, because their data are processed by fewer levels. We thus recover the concept of “deliberative” levels in the higher levels and of “reactive” levels in the lower levels.

Figure 9 – Robot = AMS part + Perception part + Action part
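As a rough illustration of this timing property (the loop rates below are our own assumptions, not values from the paper), a small scheduling sketch shows how the lower levels close their loops much more often than the upper ones:

```python
# Illustrative multi-rate scheduling: each level closes its control loop
# with its own period; lower levels fire far more often (assumed rates).

LOOP_PERIODS_MS = {1: 1, 2: 10, 3: 100, 4: 1000}  # level -> period (ms)

def due_levels(t_ms):
    """Return the levels whose control loop fires at time t (in ms)."""
    return [lvl for lvl, period in LOOP_PERIODS_MS.items() if t_ms % period == 0]

for t in (0, 10, 100, 1000):
    print(t, due_levels(t))
# 0    [1, 2, 3, 4]  all loops aligned
# 10   [1, 2]        only the fast, "reactive" loops
# 100  [1, 2, 3]
# 1000 [1, 2, 3, 4]  the slow, "deliberative" loop fires again
```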

An autonomous robot as a whole is then modelled as an articulated mechanical structure surmounted by two parts: an upstream part corresponding to the perception of this AMS, and a downstream part corresponding to the action on this AMS. These two parts are divided into levels along which the various control loops are closed (Figure 9). Each level of each part must be clearly specified to allow the designer of the robot to place the respective control loops (articular control, Cartesian control, visual control...). This architecture is embedded in the robot: it defines the autonomous architecture of the robot.

The autonomous parts

Basic levels

Let us start from a basic control loop of a robot: a reference signal, compared with the sensor measurements, is fed into a correction module before being applied to the input of the robot actuators (Figure 10a). When the electromechanical part (the articulated mechanical structure) is singled out, two other distinct parts of this control loop can be identified: the perception part, composed of the sensors and their controls, and the action part, which gathers all the processing steps (typically articular controls such as PIDs) applied to data that are then sent to the AMS (Figure 10b).

Figure 10 – a) classic representation of a servoing loop; b) our representation
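As an illustration of such a servoing loop, here is a toy sketch (the gains and the unit-inertia joint model are our own assumptions, not code from the paper):

```python
# A minimal level-1 articular servoing loop: a PID corrector compares the
# set point with the sensor measurement and drives the actuator (toy model).

def make_pid(kp, ki, kd, dt):
    state = {"integral": 0.0, "prev_err": 0.0}
    def step(set_point, measure):
        err = set_point - measure
        state["integral"] += err * dt
        derivative = (err - state["prev_err"]) / dt
        state["prev_err"] = err
        return kp * err + ki * state["integral"] + kd * derivative
    return step

pid = make_pid(kp=8.0, ki=0.5, kd=4.0, dt=0.001)    # 1 kHz loop, assumed gains
position, velocity = 0.0, 0.0                       # joint state (rad, rad/s)
for _ in range(2000):                               # 2 s of simulated time
    torque = pid(set_point=1.0, measure=position)
    velocity += torque * 0.001                      # unit-inertia joint model
    position += velocity * 0.001
print(round(position, 2))                           # settles close to 1.0 rad
```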

We call level 0 the articulated mechanical structure. Servoings and sensors are located at the same level, immediately above the articulated mechanical structure; they constitute level 1 of our architecture. This level corresponds to the shortest – and fastest – loop of the robot.

At this level, various loops run in parallel, for example the articular servoings of each robot joint. We thus define one module for each servoing/sensor system, and modules are clustered in each level of each part (action and perception).

The basic levels are levels 0 and 1. The level 1 cluster of the perception part gathers the sensor modules with their respective controls and filters. The level 1 cluster of the action part gathers the articular servoing modules (Figure 11). The set points of the servoings are the inputs of the level 1 cluster of the action part. The outputs of the level 1 cluster of the perception part are all the filtered sensor measurements. These measurements are also transmitted to the modules of the level 1 cluster of the action part in order to carry out the servoings. Typically, data from the exteroceptive sensors are simply processed and transmitted to the upper level of perception, while data from the proprioceptive sensors are transmitted to the servoings (the action cluster of the same level).

Figure 11 – Levels 0 and 1 of the architecture


Level 2: the pilot

To feed the input of level 1 of the action part, the servoing set points must be continuously provided. This is the role of the ‘Pilot’ cluster: it generates these set points (e.g. articular ones) from a trajectory provided as input. This trajectory is expressed in a space (e.g. Cartesian space) different from that of the set points. It is also a “set point”, but of a higher level, so we do not call it thus. This trajectory describes, over time, the position, kinematic and/or dynamic parameters of the robot in its workspace. The function of the pilot is to convert these trajectories into set points to be executed by the servoings. Typically, the pilot contains the Inverse Geometric/Kinematic/Dynamic Models of the robot (generally only one of them is present, according to the robot application). In our architecture, this pilot is positioned on level 2 of the action part.

However, this ‘Pilot’ cluster does not contain only an IKM module; it can also contain modules which give the robot the possibility of taking information about its environment into account. According to the concept of our architecture, this information comes from the same level 2, but from the perception part: this is the ‘Proximity Model’ cluster of the robot environment. This ‘Proximity Model’ cluster contains various modules which transform filtered measurements (coming from the ‘Sensor’ cluster of level 1 of perception) into a model expressed in the same space as the trajectory (e.g. Cartesian space). This transformation is performed on line using the sensor measurements; no temporal memorization is carried out. The ‘Proximity Model’ obtained is thus limited to the horizon of the sensor measurements. Typically, this ‘Proximity Model’ cluster contains modules which apply the Direct Geometric/Kinematic/Dynamic Model of the robot to the proprioceptive data, and which express the exteroceptive data in a local frame centred on the robot (Figure 12).

Figure 12 – The second level provides the second control loop
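A ‘Proximity Model’ module of this kind could be sketched as follows (our own toy example; the sensor pose and scan values are invented):

```python
# Express filtered exteroceptive measurements (angle, range) in a Cartesian
# frame centred on the robot, with no temporal memorization of past scans.

import math

def proximity_model(scan, sensor_pose=(0.10, 0.0, 0.0)):
    """scan: list of (angle_rad, range_m); sensor_pose: (x, y, yaw) on robot."""
    sx, sy, syaw = sensor_pose
    points = []
    for angle, rng in scan:
        a = syaw + angle
        points.append((sx + rng * math.cos(a), sy + rng * math.sin(a)))
    return points  # obstacle points in the robot-centred frame

print(proximity_model([(0.0, 1.0), (math.pi / 2, 2.0)]))
```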

Depending on the robot application, the ‘Pilot’ cluster uses data coming from the ‘Proximity Model’ cluster to carry out – or not – corrections on the reference trajectory provided to it as input. Typically, for a mobile robot, this consists in reflex avoidance of obstacles detected on the trajectory; for a manipulator robot, it consists in a servoing loop in Cartesian space.

Because its path crosses an additional layer, this level 2 loop is a fortiori longer – and thus slower – than the level 1 loop. Moreover, this loop does not exist in all robotic applications. There are manipulator robotics applications which use IKM and DKM modules for the two clusters of level 2 without any connection between them: there is no Cartesian servoing, and the processing of the Cartesian position/velocity is carried out in open loop. Level 1 – the articular servoing loop – carries out the motion alone.


In a dual way, there are robot applications where the control is carried out only in Cartesian space; this is represented in our architecture by the absence of a loop on level 1. The control loop is performed only after the DKM/IKM model of level 2. Mixed modes of articular and Cartesian servoings can also be represented in this architecture.

Level 3: the navigator

The ‘Pilot’ cluster must receive its trajectory from the upper level of the action part. We call ‘Navigator’ the cluster positioned on this level 3. It must generate the trajectories for the ‘Pilot’ cluster based on data received from the upper level. These input data are of a geometrical type, still in a Cartesian space, but not necessarily in the robot frame. Moreover, these data do not integrate dynamic or kinematic aspects: contrary to the trajectory, they do not strictly define the velocity, the acceleration or the force over time for the AMS. These input data are called a path – continuous or discontinuous – in Cartesian space. Temporal constraints can be attached to this path, such as an indicative travel time or indicative minimal/maximal speeds at specific points of the path.

The ‘Navigator’ cluster must translate a path into a trajectory. The path does not take the physical constraints of the AMS into account, but the trajectory it delivers must integrate them. Indeed, the path received by the navigator is closer to the task to be performed by the robot than to the mechanical constraints of the AMS. The navigator is situated at the interface between the “request” and the “executable”: it is the most noticeable level of the proposed architecture. According to the robot application, the modules gathered in this cluster can be based on various theoretical methods.
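One simple way a ‘Navigator’ module might turn a path into a trajectory is sketched below (an assumed constant-speed parameterization under a maximum-velocity constraint, not the method of the paper):

```python
# Time-parameterize a Cartesian path into a trajectory that respects a
# maximum velocity of the AMS (straight-line interpolation between waypoints).

import math

def path_to_trajectory(waypoints, v_max, dt):
    """waypoints: [(x, y), ...] -> trajectory [(t, x, y), ...] sampled at dt."""
    traj, t = [(0.0,) + tuple(waypoints[0])], 0.0
    for (x0, y0), (x1, y1) in zip(waypoints, waypoints[1:]):
        dist = math.hypot(x1 - x0, y1 - y0)
        steps = max(1, math.ceil(dist / (v_max * dt)))
        for k in range(1, steps + 1):
            t += dt
            u = k / steps
            traj.append((t, x0 + u * (x1 - x0), y0 + u * (y1 - y0)))
    return traj

traj = path_to_trajectory([(0, 0), (1, 0), (1, 1)], v_max=0.5, dt=0.1)
print(len(traj), traj[-1])  # 41 samples; ends near t = 4.0 s at (1.0, 1.0)
```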

On top of the mechanical constraints of the AMS, this ‘Navigator’ cluster must generate a trajectory consistent with the robot environment. Therefore, it needs information modelling this environment, coming from the same level, i.e. the level 3 cluster of the perception part. This perception cluster models the local environment of the robot beyond the range of its sensors, so that the ‘Navigator’ can test the validity of the trajectories before delivering them to the ‘Pilot’. This cluster is called the ‘Local Model’ of the environment. It uses the displacement of the robot to locally model the environment around the robot (Figure 13).

Figure 13 – The third control loop in level 3

This level 3 makes it possible to integrate various modules based on various theories in the ‘Navigator’ cluster as well as in the ‘Local Model’ cluster. Depending on the robot application, the control loop at this level may or may not exist. When it exists, it crosses six clusters and is longer – and slower – than the control loops of the lower levels.

Level 4: the path-planner

The ‘Navigator’ receives as input a path coming from the level 4 cluster of the action part, the ‘Path Planner’. This cluster generates the path using a goal, a task or a mission as input. This functionality is performed over a long time scale in order to project the path on a large temporal horizon. We are here in a high control loop of the architecture, which corresponds to the “deliberative” levels of similar architectures. To be valid, this path must imperatively be placed in a known environment. The path-planner uses information coming from the perception part of the same level: this cluster, named ‘Global Model’ of the environment, must provide the path-planner with an a priori map. Hence, the path-planner can validate its path on this pre-established map. This map does not need to be metrically accurate (absence of obstacles, errors of dimension or orientation, presence of icons out of scale...) but must be topologically correct (closed areas, connectivity, possible ways out, inaccessible areas...). This model can either be built on line, using data coming from the lower level (‘Local Model’ cluster), or be pre-established and updated by the lower level (Figure 14).

Figure 14 – The level 4
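Since the map only needs to be topologically correct, a ‘Path Planner’ module can work on a simple place graph, as in this sketch (the map contents are invented for the example):

```python
# Breadth-first search over a topologically correct (not metrically accurate)
# pre-established map: places and their connectivity form a graph.

from collections import deque

PLACES = {  # adjacency of the a priori map (hypothetical)
    "dock": ["corridor"],
    "corridor": ["dock", "lab", "hall"],
    "lab": ["corridor"],
    "hall": ["corridor", "exit"],
    "exit": ["hall"],
}

def plan(start, goal):
    """Return a list of places from start to goal, or None if unreachable."""
    frontier, parent = deque([start]), {start: None}
    while frontier:
        place = frontier.popleft()
        if place == goal:
            path = []
            while place is not None:
                path.append(place)
                place = parent[place]
            return path[::-1]
        for nxt in PLACES[place]:
            if nxt not in parent:
                parent[nxt] = place
                frontier.append(nxt)
    return None

print(plan("dock", "exit"))  # ['dock', 'corridor', 'hall', 'exit']
```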

Level 5: the mission generator


The ‘Mission Generator’ is the cluster of the highest level of the action part. It must generate a succession of goals, tasks or missions for the ‘Path-planner’, according to the general mission of the robot. It is the “ultimate” robot autonomy concept: the robot generates its own attitudes and its own actions by using its own decisions. The ‘Mission Generator’ cluster does not really have any input. It only needs general information on its environment, its state and its possible attitudes. This is provided by the cluster of the perception part of the same level. This cluster is a ‘General Model’ of the robot and its environment, and could be based on a database (Figure 15).

Figure 15 – The level 5

This level closes the highest control loop of the proposed architecture and does not have any input, as it corresponds to the highest decisional autonomy level. According to the robot application, this level is based on modules related to artificial intelligence. Although it must project its missions on a very large temporal horizon, the reaction time of this level is slow (this loop has many levels to cross). However, this level of autonomy corresponds to the “smart” or “clever” attitude of the robot. The remaining part of the robot autonomy is distributed over the lower levels of the architecture: the projection of the motion according to the obstacles is in the navigator, the reflex action is in the pilot level, the servoing level ensures the accuracy of the gesture...

The autonomous part of the architecture

The whole architecture, made up of two parts divided into five levels, represents the autonomy of the robot. These autonomous parts of the architecture are embedded in the robot: the two parts – ‘Action’ and ‘Perception’ – and the articulated mechanical structure (level 0) constitute the robot itself. The ‘Perception’ part corresponds to the upstream data flow from the AMS, and the ‘Action’ part corresponds to the downstream data flow to the AMS. Connections at each level close feedback loops of variable length allowing the control of the robot.

This architecture makes it possible to model any autonomous robot, whatever its application or its degree of autonomy. Its functioning is ensured by modules located in the different clusters. Depending on the application, not all modules are required in all clusters, and data flows may vary. Conversely, to produce a specific robot for a dedicated application, this architecture can be used to specify each module of each cluster before carrying out the programming and the implementation.

The teleoperated part

This architecture makes it possible to model and control a robot with various degrees of autonomy, but so far it does not take the remote control of a robot into account. Indeed, tele-operation is usually considered as antagonistic to autonomy: a robotized system is regarded either as autonomous or as tele-operated. In the proposed architecture, we have divided the autonomy of a robot into several degrees. In a similar way, we propose to divide the tele-operation of a robot into levels. This levelled tele-operation complements the levels of autonomy of a robot, substituting itself for the missing degrees of autonomy.

Hence, we define a third part, called ‘Tele-operation’, remote from the robot (Figure 16). In order to modulate the degree of tele-operation, this part is also organised in levels, similarly to the ‘Action’ and ‘Perception’ parts. Each level of the ‘Tele-operation’ part receives data coming from the corresponding level of the ‘Perception’ part, and can replace the corresponding level of the ‘Action’ part by generating in its place the data necessary to the lower level of the ‘Action’ part.

Figure 16 – The third, ‘distant’ part: the tele-operation

Thus, several possible levels of tele-operation can be identified:

- Level 1 of tele-operation makes it possible for a human operator to actuate the articulated mechanical structure directly (in open loop mode). This level receives data from level 1 of the ‘Perception’ part. It allows an operator to replace the autonomous loop of level 1. This is a remote control in open loop, where the robot does not have any autonomy (Figure 17).

Figure 17 – The tele-operation supplying level 1 of the action part

- Level 2 of tele-operation uses the information of the ‘Proximity Model’ to make it possible for a human operator to replace level 2 of the ‘Action’ part, the ‘Pilot’. It thus delivers the set points necessary to level 1 to control the robot. This level of tele-operation is higher than the previous one, and can preserve the autonomous level 1 loop of the robot (Figure 18). Notice that there is no flow of information between levels 1 and 2 of the tele-operation, the reason being that they are mutually exclusive: when the robot is tele-operated, it is done either from level 1 or from level 2.

Figure 18 – The tele-operation supplying level 2 of the action part

- Level 3 of tele-operation allows an operator to generate a trajectory for the ‘Pilot’, hence taking the role of the ‘Navigator’. The human operator who carries out this task uses data coming from the ‘Local Model’ of the ‘Perception’ part. He thus acts in place of the level 3 autonomy loop, and tele-operates the robot while still using the lower degrees of autonomy (2 and 1). For example, when a reflex-reaction pilot is implemented in level 2 of the ‘Action’ part, it will act autonomously if the human operator sends a trajectory that is not acceptable for the robot (Figure 19).

Figure 19 – The tele-operation supplying level 3 of the action part

- Level 4 of tele-operation makes it possible for the human operator to send a path to the ‘Navigator’ using the ‘Global Model’ of the ‘Perception’ part. He thus replaces the autonomous loop of level 4 and is assisted in the control of the robot by the lower degrees of autonomy (Figure 20).


Figure 20 – The tele-operation supplying level 4 and assisted by the lower levels of autonomy

- Finally, level 5 of tele-operation gives a human operator the possibility of choosing a task or a mission that the robot then carries out autonomously, thanks to its levels of autonomy 4 to 1 (Figure 21).

Figure 21 – The tele-operation supplying level 5 of the action part


Depending on the robot application, only one level (or no level) of tele-operation is used by a human operator. However, the human operator can also choose his level of tele-operation according to the course of the robot mission. This dynamic evolution of the tele-operation makes it possible to keep human control over all the levels of an autonomous robot. To allow a dynamic evolution between the degrees of autonomy and tele-operation, a multiplexer module is needed in each cluster of the ‘Action’ part. This multiplexer module feeds the cluster either with the tele-operation data or with the data coming from the upper autonomous level.
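A minimal sketch of such a multiplexer module (an assumed interface, our own illustration):

```python
# Each 'Action' cluster input goes through a multiplexer that can be switched
# at run time between the upper autonomous level and the tele-operation part.

class ActionInputMux:
    def __init__(self):
        self.source = "autonomy"            # default: fully autonomous

    def select(self, source):
        assert source in ("autonomy", "teleoperation")
        self.source = source

    def route(self, autonomy_data, teleop_data):
        """Forward the selected input to the cluster below."""
        return teleop_data if self.source == "teleoperation" else autonomy_data

mux = ActionInputMux()
print(mux.route("trajectory from the Navigator", "trajectory from operator"))
mux.select("teleoperation")                 # the operator takes over this level
print(mux.route("trajectory from the Navigator", "trajectory from operator"))
```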

This dynamic evolution of the autonomy/tele-operation degree also provides operational security for the autonomous robot: a human operator can take over the robot when it cannot solve problems on its own, or in case of hardware or software failures.

Of course, the man-machine interface differs according to the tele-operation level at which the human operator acts. It can be a simple keyboard/joystick for the lower levels, a graphical environment for the median levels, or a textual analyser for the highest level. Finally, nothing prevents the tele-operation from being performed by computers instead of a human operator.

The complete PAT Architecture

The PAT architecture that we propose is thus based on three parts – “Perception”, “Action” and “Tele-operation” – layered in five levels. As has been shown, this PAT architecture forms a frame (Figure 22). A robot designer places on the PAT frame the modules he needs to carry out the mission, and he also determines the data flows between the modules.

Robotics is also a technology of integration, and the designer has to use various hardware and software modules. For example, he may have to integrate a GPS sensor or a DC power drive for which only the inputs and outputs are accessible, according to the manufacturer's specifications. Such heterogeneous devices can be placed in the PAT architecture. The designer organizes them first, then places the rest of the modules (generally software ones). Each module is developed afterwards according to the software (OS, languages, RT kernel…) and hardware (CPUs, networks…) choices. The choice of the theory behind each module is totally free: different theories/methods can be used and combined in the architecture (e.g. SLAM models, reactive behaviours based on neural networks and a navigator based on heuristics can be used together).
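As an illustration of the frame idea (our own sketch; the module names echo the Raoul example described below), the PAT frame can be seen as a set of (part, level) slots plus declared data flows:

```python
# The PAT frame as a data structure: the designer places modules in
# (part, level) slots and declares the data flows between them.

pat_frame = {(part, lvl): []
             for part in ("action", "perception", "teleoperation")
             for lvl in range(1, 6)}
flows = []  # (source_module, destination_module) pairs

def place(part, level, module):
    pat_frame[(part, level)].append(module)

def connect(src, dst):
    flows.append((src, dst))

place("perception", 1, "P1_Sicks")   # laser range finder driver
place("perception", 2, "P2_MapF")    # front local map
place("action", 2, "A2_React")       # reflex obstacle avoidance
connect("P1_Sicks", "P2_MapF")
connect("P2_MapF", "A2_React")
print(pat_frame[("perception", 2)], flows)
```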

114


First National Workshop on Control Architectures of Robots - April 6,7 2006 - Montpellier<br />

Figure 22 – PAT Architecture (frame)



Example of a mobile robot: “Raoul”

Several robots have been developed in our lab (LVR-Bourges), each of them with different hardware and software platforms. The PAT architecture was used to design, develop and control these totally different robots.

Raoul is an autonomous mobile robot able to run in an unknown environment, finding on its own a trajectory to perform while avoiding collisions with static or mobile obstacles. Raoul is built on a Robuter platform (Robosoft) with an additional computer (PC/RT-Linux) and exteroceptive sensors: two Sick laser range finders and one laser goniometer.

The PAT architecture was implemented on Raoul with the following modules (Figure 23):

Figure 23 – Architecture of the mobile robot Raoul

‘A1_Servo’ is the servoing module of the two motorized wheels; the Robuter platform is a standard differential-drive robot. This module uses the measurements coming from the proprioceptive sensors (motor ticks, odometry) of ‘P1_proprio’. Note that these two modules, which form the articular servoing loop, are implemented in the Albatros Robuter CPU. We cannot modify them; we only have access to their inputs (set points) and outputs (odometry, linear speed…).

‘P1_Sicks’ is the module driving the front and rear laser range finders. The data it delivers to the upper level is a list of polar distances as a function of the measurement angle. ‘P1_Gonio’ is the module driving the absolute laser goniometer; it delivers an absolute position/orientation in the plane.

The ‘Pilot’ cluster contains two modules. The ‘A2_IKM’ module converts the desired motion of the robot into set points for the servoings. The ‘A2_React’ module performs the desired trajectory while avoiding unforeseen obstacles, using the two local maps built by the ‘P2_MapF’ and ‘P2_MapB’ modules of the ‘Proximity Model’ cluster. ‘A2_React’ is based on the DVZ formalism [9] and provides the robot with a reflex obstacle-avoidance behavior.

The ‘Local Model’ cluster contains the ‘P3_Map’ module, which uses the method developed by J. Canou to build a global map on line, using the motion of the mobile robot to feed and refresh it [10]. The ‘A3_Geom’ module of the ‘Navigator’ cluster is a very simple module: it only places pre-processed trajectories of the robot in the local map, deletes the trajectories which intersect an obstacle and chooses the trajectory nearest to the desired path.
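The selection rule of ‘A3_Geom’ can be sketched as follows (a toy grid version of the idea; the candidate trajectories and obstacle cells are invented):

```python
# Drop the candidate trajectories that intersect an obstacle in the local
# map, then keep the one ending closest to the desired path's end point.

def a3_geom(candidates, obstacle_cells, desired_end):
    """candidates: lists of (x, y) samples; obstacle_cells: set of grid cells."""
    def hits(traj):
        return any((round(x), round(y)) in obstacle_cells for x, y in traj)
    def end_distance_sq(traj):
        (x, y), (gx, gy) = traj[-1], desired_end
        return (x - gx) ** 2 + (y - gy) ** 2
    free = [t for t in candidates if not hits(t)]
    return min(free, key=end_distance_sq) if free else None

candidates = [[(0, 0), (1, 0), (2, 0)],   # straight ahead: blocked
              [(0, 0), (1, 1), (2, 2)]]   # diagonal detour: free
print(a3_geom(candidates, obstacle_cells={(2, 0)}, desired_end=(2, 0)))
```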

‘A4_Fwd’ uses the absolute position of the mobile robot to generate a straight path of one meter in front of the robot; it does not take any obstacle of the environment into account. The ‘P4_PosAbs’ module is only a computation of the absolute position of the robot. A global map has not been integrated in this cluster. With this level 4, our goal is to validate the following concept: with only the level 2 and level 3 control loops, a mobile robot is autonomous enough to evolve in a totally unknown environment.

With this implementation, the Raoul mobile robot is able to run in an unknown environment, avoiding static and mobile obstacles. It is able to make maneuvers (backward-forward) to escape from dead-end zones.

Example of the tele-operated robot: “OTELO-1”

Otelo is a European project (IST-2001-32516), led by the LVR laboratory, on tele-echography using a light robot. During this project, three dedicated robots have been developed. The Otelo1 robot is a fully tele-operated robot holding an echography probe. It follows the motions that a medical expert performs at a distance with an input device. The expert receives the ultrasound image, modifies the probe-holder orientation and makes a diagnosis at a distance [12].

Otelo1 is programmed with the following architecture (Figure 24):

Figure 24 – Architecture of the tele-operated robot Otelo1

The ‘A1_Servo’ module performs the position servoing of the six axes of the Otelo1 robot. This dedicated robot is technologically complex: it has three DC motors, three step motors, incremental encoders, analog absolute encoders, LVDTs, a force sensor and various digital I/Os such as optical switches. The ‘P1_Proprio’ module drives all the inputs. In practice, the software of these two modules is distributed over many Visual C++ functions/objects, using the functionalities of the advanced PMAC boards. Level 1 ensures the articular servoing of the six axes of the robot.



An ultrasound probe of a sonographer is held by the robot. However, companies manufacturing ultrasound devices with advanced functionalities restrict access to the transducer signal: they only deliver dynamic ultrasound images. The ultrasound device is thus considered as an isolated device covering three levels of the ‘Perception’ part. The 2D ultrasound image is considered as a local model of the environment and is compressed by the ‘P3_Cprs’ module.

The compressed ultrasound image is transmitted to the ‘T3_GUI’ module of level 3 of the ‘Tele-operation’ part. This module displays the dynamic image on a screen for the medical expert. The expert uses a virtual probe as an input device – the ‘T3_InDev’ module – which transmits trajectories to the ‘Pilot’ cluster. The ‘Pilot’ cluster contains the ‘A2_IGM’ module, which translates the trajectories into articular set points for level 1.

Otelo1 was tested during trans-continental experiments (Cyprus-Spain or Morocco-France) using various communication links (ISDN, satellite, Internet), and has proved the validity of the tele-echography concept in the medical environment.

Another robot, Otelo3, based on a different mechanical structure, is currently being developed in the lab, and we are implementing some autonomous abilities to perform motions when problems occur on the transmission links (losses or delays). To achieve that goal, the loop of level 3 is closed and modules are added in the clusters of levels 3 and 4.

Conclusion

The main objective of the PAT architecture was to clearly specify the domain of application of each theory or method developed in robotics. This architecture allows several different methods to be used together.

We took the common concept of level and pushed it to its limit in order to move beyond the reactive/deliberative scheme. We designed the PAT architecture to accept a maximum of robotic methods, in control as well as in perception. The main consequence is that the PAT architecture is generic enough to model any robot, mobile or not, autonomous or tele-operated. The second strong asset of the PAT architecture is the possibility of managing both the tele-operation AND the autonomy of a robot; this mixed mode can be graduated and dynamically modified.

All the robot developments of the laboratory use this PAT architecture, even if the mechanical structure, the hardware and the software differ. We are thus building a library of modules for the various levels of the architecture, which can be reused in other robotic applications. Once a common hardware and software frame is fixed (CPU, languages, OS, RT kernel, network links…), this library becomes available to an open community where each user can develop his/her own modules or use existing ones.

Bibliography

[1] Brooks R.A., “A robust layered control system for a mobile robot”, IEEE Journal of Robotics and Automation, vol. 2, no. 1, pp. 14–23, 1986.

[2] Albus J.S., “4D/RCS: A Reference Model Architecture for Intelligent Unmanned Ground Vehicles”, Proceedings of the SPIE 16th Annual International Symposium on Aerospace/Defense Sensing, Simulation and Controls, Orlando, FL, April 1–5, 2002.

[3] Volpe R., et al., “CLARAty: Coupled Layer Architecture for Robotic Autonomy”, JPL Technical Report D-19975, Dec. 2000.

[4] Alami R., Chatila R., Fleury S., Ghallab M. and Ingrand F., “An architecture for autonomy”, The International Journal of Robotics Research, Special Issue on Integrated Architectures for Robot Control and Programming, vol. 17, no. 4, pp. 315–337, 1998.

[5] Rosenblatt J., “DAMN: A Distributed Architecture for Mobile Navigation”, Proceedings of the 1995 AAAI Spring Symposium on Lessons Learned from Implemented Software Architectures for Physical Agents, AAAI Press, Menlo Park, CA, 1995.

[6] Arkin R.C., “Behavior-based Robotics”, MIT Press, 1998.

[7] Dalgalarrondo A., “Intégration de la fonction perception dans une architecture de contrôle de robot mobile autonome”, PhD thesis, Université Paris-Sud, Orsay, 2001.

[8] Hong-Ryeol K., Dae-Won K., Hong-Seong P., Hong-Seok K. and Hogil L., “Robot control software framework in open distributed process architecture”, Korean and world patent, WO2005KR01391 20050512, IPC1-7: G06F19/00, May 2005.

[9] Zapata R., “Quelques aspects topologiques de la planification de mouvements et des actions réflexes en robotique mobile”, Thèse d'état, Université Montpellier II, July 1991.

[10] Canou J., Mourioux G., Novales C. and Poisson G., “A local map building process for a reactive navigation of a mobile robot”, ICRA 2004, IEEE International Conference on Robotics and Automation, April–May 2004, New Orleans.

[11] Mourioux G., Novales C. and Poisson G., “A hierarchical architecture to control autonomous robots in an unknown environment”, ICIT 2004, IEEE International Conference on Industrial Technology, December 2004, Hammamet.

[12] European OTELO project, “mObile Tele-Echography using an ultra Light rObot”, IST no. 2001-32516, led by the LVR (Sept. 2001 – Sept. 2004); consortium: LVR (F), ITI-CERTH (G), Kingston University (UK), CHU of Tours (F), CSC of Barcelona (E), Ebit (I), Sinters (F), Elsacom (I) and Kell (I).

LAAS architecture: Open Robots

F. Ingrand

LAAS-CNRS

Abstract

The LAAS architecture for autonomous systems has been developed over many years and is deployed on all of our mobile robots. It is worth noting that the genericity and programmability of this architecture allow a fast deployment and a good integration of the systems used (GenoM, OpenPRS, Exogen and IxTeT). We will present this architecture and the tools that form the Open Robots software suite:

• The decisional level: this highest level integrates the deliberative capacities, for example producing task plans, recognizing situations, detecting faults, etc. In our case, it includes:

• a procedural executive, OpenPRS, which is connected to the lower level, to which it sends requests that launch actions (sensors/actuators) or start processing operations. It is responsible for the supervision of the actions while remaining reactive to the events coming from the lower level and to the operator's commands. This executive has a guaranteed reaction time.

• a temporal planner/executive (in our case IxTeT-eXeC, an extension of IxTeT), which is in charge of producing and executing temporal plans. This system must be reactive and take into account new goals as well as execution failures (failure of an action and time-out).

• The functional level: this lowest level includes all the basic actions and perception functions of the agent. These control loops and data processing operations are encapsulated in controllable modules (developed with GenoM). Each module provides services and processing operations accessible through requests sent by the upper level or by another module. The module sends back a report when it terminates correctly or is interrupted. These modules are completely controlled by the upper level, and their temporal constraints depend on the type of processing they have to manage (servo control, localisation algorithms, etc.).

• The request control level: located between the two previous levels, the R2C (“Requests and Replies Checker”) checks the requests sent to the functional modules (by the procedural executive or between modules) and the use of the resources. It is synchronous with the modules (it knows all the requests sent and all the reports returned, and builds the state of the functional level on line). It acts as a filter that may reject requests depending on the state and on a formal model, given by the operator, specifying the allowed or forbidden states. The reports returned by the functional level are transmitted to the procedural executive after the internal state has been updated. The temporal constraints here are of the hard real-time type.


Software architectures for robotics and dependability

L. Nana

Laboratoire d'Informatique des SYstèmes Complexes (LISyC), EA3883
Université de Bretagne Occidentale
20 Avenue Le Gorgeu
C.S. 93837 – BP 809
29238 BREST Cedex 3
nana@univ-brest.fr

Abstract

Dependability, although it has reached a certain maturity from the hardware point of view, requires suitable solutions at the software level, and must be taken into account both in the languages intended for robot programming and in the design and implementation of robot programming environments. Software dependability is all the more important as software takes an ever greater place in robotic systems.

In this article, we propose to review software architectures for robotics and to examine more particularly how dependability is taken into account in robotic applications.

Keywords

Mission programming languages, software architectures, robotics, software dependability.

1 Introduction

Software takes an ever greater place in robotic systems. In this article, we focus on software dependability in the languages and software architectures used for programming robotic missions. Indeed, robotic systems are by essence critical systems. The integration and adaptation of dependability mechanisms, in particular software ones, into these systems is therefore of undeniable interest.

In the second section of this article, a brief overview of robotic languages and architectures is presented. The third section is devoted to the use of software dependability mechanisms in robotic applications. The fourth and fifth sections relate two experiences in the design and realization of software architectures for dependable robotic applications, namely the architecture associated with the PILOT language (Programming and Interpreted Language Of actions for Telerobotics) and a mission execution architecture from IFREMER. The first was designed and implemented within the LISyC and applies mainly to mobile robotics, while the second relates to underwater robotic applications. The article ends with a conclusion in the sixth section.

2 Brief overview of robotic languages and architectures

2.1 Mission programming languages

Mission programming languages are high-level languages compared with more classical robot programming languages, which are devoted to the detailed programming of effector motions. They rely on the existence of elementary actions provided by a lower layer, and allow robotic applications to be specified in terms of the scheduling of these elementary actions. To date, three techniques have been used in the design of mission programming languages: the extension of general-purpose languages (such as C, ADA or LISP) with robotics-oriented libraries, the creation of languages specific to the robotics domain, and the modification of control languages such as LUSTRE and SIGNAL.

The drawback of the first approach is its inadequacy from the point of view of specification and of execution determinism. The second has multiple benefits: such languages are closer to specification languages, capture the semantics of the domain better and consequently produce clearer and more concise programs. Numerous languages dedicated to the programming of manufacturing robotic applications were thus created from the end of the seventies onwards: LM, VAL, etc. Their expressive power is, however, strongly oriented towards the control of manipulator arms, which compromises their extensibility to a broader application domain such as mobile robotics. In the same category, other, more generic languages, in the sense that they are not dedicated to a single type of robot, have been created in the research world. This is for example the case of the language for handling robotic actions and intermediate goals used by the C-PRS subsystem of the decisional level of the LAAS robotic architecture. However, these languages do not take into account, in their semantics, the kinematic and dynamic aspects specific to robotics, such as the chaining of trajectories. Control languages have an expressiveness much closer to the programming of robotic missions than that of general-purpose languages.

2.2 Robotic architectures

The robotics community acknowledges that no architecture is perfect for all tasks, and that different tasks have different success criteria which lead to different architectures. Robot programming architectures can be grouped into four main categories:

- classical centralized architectures,
- hierarchical architectures,
- behavioral architectures, and
- hybrid architectures.

The first works on robot control architectures were inspired by artificial intelligence [27], i.e. organized around decisional processes and a symbolic state of the world and of the robot. The architectures designed along this philosophy belong to the category of classical centralized architectures. They place planning at the centre of the system and share the axiom according to which the central problem in robotics is cognition, i.e. the manipulation of symbols to maintain and act on a model of the world, the world being the environment with which the robot interacts. Among centralized architectures, we can cite the STRIPS planning system of the Shakey robot [28], in which the plan is static and the world is supposed to remain unchanged during the execution of the plan, and the Blackboard architectures [18], which accumulate data about the world and take immediate decisions based both on a priori variable objectives and on a changing world. For a given task, if the system can model the world sufficiently well, if the world obeys its model(s), and if the system can retrieve the information and integrate it into the core of the central planning, then a classical centralized architecture is a good choice for carrying out the task. Centralized architectures are well suited to tasks for which reactivity and reflexes are not essential criteria.

Hierarchical architectures decompose the programming of applications into increasingly abstract levels. The role of each level is to decompose a task recommended by the upper level into simpler tasks which will be ordered at the lower level. The highest level manages the global objectives of the application, while the lowest level commands the actuators of the robot. The best-known instance of this type of architecture is NASREM (NASA/NBS Standard REference Model) [24]. In the same family, we can cite the LIFIA architecture [17] and the SMACH architecture of I3S (Informatique, Signaux et Systèmes de Sophia-Antipolis) [35]. Hierarchical architectures such as NASREM still make the assumption that the best way to interact with the world is through the manipulation of, and reasoning about, world models, although they acknowledge that there must be different world models for reasoning about the different aspects of the world. What such a system can achieve, by using predictive models of the world, is very high precision. Each layer has a model of what will happen in the world given a set of inputs and outputs, and it is up to the lower layers to make sure that what was expected happens precisely. Hierarchical architectures are appropriate for tasks executed in a predictable environment and requiring high precision. Their main flaw is the taxonomy of the system modules, imposed a priori and artificially, which tends to restrict them rather than support them. Indeed, the way each module of the system is structured is not defined by the needs of the task, but by the place where it fits in the architecture. Hierarchical architectures generally have rather low reactivity: given the systematic decomposition of the programming, the chain going from the sensors to the actuators through the decisional processes able to respond to changes of the environment is complex, leading to long response times.

Behavioral architectures were born in the mid-eighties with the “subsumption” architecture proposed by Brooks [5]. They stem from the observation of simple animal behaviors, and are based on the idea that a complex and evolved behavior of a robot can emerge from the simultaneous composition of several simple behaviors. Brooks defines an elementary behavior as “a processing taking sensor inputs and acting on the actuators”. The DAMN architecture (Distributed Architecture for Mobile Navigation) proposed by Rosenblatt [31] at Carnegie Mellon University is another variant of Brooks' work. Brooks' work highlighted the asset of this type of architecture: the speed of the reaction of the system to external events or to specific situations. Behavioral architectures have proved themselves in numerous and sometimes spectacular experiments in mobile robotics, because the reactivity which characterizes them allows navigation in a dynamic environment to be tackled. However, the complexity of the applications relying on this approach rarely goes beyond navigation. Indeed, several behaviors often compete for the control of the actuators, and one cannot, a priori, guarantee the execution stability of complex control laws such as those required for the control of manipulator arms. These approaches have another limitation: since the behaviors are pre-established, the system hardly accommodates an unplanned change of mission.

Faced with the shortcomings of the two previous categories of architectures, some researchers have proposed hybrid architectures, which combine the reactive capacities of behavioral architectures with the reasoning capacities of hierarchical architectures. These architectures can be flexible and powerful enough for their domain of use, in terms of the variety of controlled robots and types of application, to justify their commercialization. Among them, we can cite: CONTROLSHELL [33], sold by the Californian company RTI (Real-Time Innovations); the LAAS architecture [ALA 98], which has proved itself in the field of mobile robotics, whether on the HILARE platforms or in the MARTHA experiment; and the ORCCAD architecture (Open Robot Controller Computer Aided Design system) of INRIA [6][19]. The ORCCAD architecture has the particularity of being independent of the system to be controlled. It also allows the specification and validation of missions in robotics.
3 Dependability in robotic languages and architectures

3.1 Brief overview of dependability mechanisms

Dependability is an important property at the various levels of the control and command process, both in telerobotics and in automated production systems. It touches on the various aspects of the realization of a mission, from the design of the plan to its execution on the controlled system, through the process of interpreting the plan or of generating the executable. Two main approaches are often used to implement dependability: error prevention and fault tolerance [22][20][21]. Error prevention aims at ruling out, a priori, the faults and errors which jeopardize the reliability of the system, before any regular use of the latter. To reach this objective, the main means are the use of formal specification methods and languages and the use of tests. Fault tolerance, for its part, is based on the principle that error prevention, although beneficial, cannot guarantee the total elimination of errors in the system. Its purpose is to allow the system to behave satisfactorily even in the presence of faults.

3.2 Solutions for system dependability

Taking software dependability into account in the design of robotic architectures and languages is still limited nowadays. A certain number of works have nevertheless been carried out in this field.

Regarding languages, control languages have rigorously established semantics (operational semantics) and simulation and/or verification and/or analysis tools, which represents an essential added value for the dependability of robotic applications. Their use in a robotic context nevertheless requires adaptations.

Zalewski et al. [38] have underlined the complexity of the solutions currently provided for the verification of computer-based control systems, which restricts their applicability to simple systems, whereas the complexity of critical applications is usually high and keeps increasing drastically with the progress of computer technologies. They studied two main approaches.

The first approach is based on Fault Tree Analysis (FTA) and Failure Mode and Effects Analysis (FMEA). These are safety analysis techniques used successfully in conventional (non-computer-based) systems. They are used during the design of the system and focus on the consequences of component failures. Adaptations have been proposed for the analysis of dependable software systems [23]. Zalewski et al. proposed an informal safety analysis method for software-based systems using FTA [38]; an application to the nuclear industry was carried out [25]. The advantage of this solution is that the underlying techniques are already widely used in numerous industrial applications, which allows safety engineers to adapt easily to their new versions. The drawback is that these techniques are mostly rather informal. An adaptation of these solutions to object-oriented software has also been proposed by Zalewski et al. It makes the assumption that the object-oriented models of the software components are provided with their formal specifications. This approach was applied to a railway traffic light control case study.

The second approach is based on the use of formal and semi-formal methods and models initially developed for the software domain: temporal logic, Petri nets, LOTOS, action-event models, etc. Zalewski et al. combined, in a single integrated system and through a common interface, traditional engineering tools such as those based on UML with formal-method tools such as model-checking tools (Statecharts, etc.) [2].

Garbajosa et al. proposed and implemented a tool that can serve as a front end for system testing and accepts test descriptions in natural language, in order to free test engineers from having to fully master both the physical systems for which the tests are defined and programming techniques, with which they are often unfamiliar [10].

Rutten proposed a toolbox for the safe programming of robotic applications [32], based on controller synthesis [30]. Various other works based on artificial intelligence techniques have addressed fault diagnosis [39] and supervision [13].

3.3 Implementation in robotic languages and architectures

Seabra Lopes et al. propose in [34] an architecture for assembly tasks that provides, at different levels of abstraction, functions for action scheduling, execution control, diagnosis and error recovery. The modelling of execution failures through taxonomies and causal networks plays a central role in diagnosis and recovery.

In the subsumption architecture, each behaviour is modelled by one (or several) augmented finite state machine(s). This modelling allows verification methods to be applied, but also fault tolerance mechanisms to be implemented by exploiting the suppressors and inhibitors.

In the LAAS architecture, the decisional level reacts to the execution reports of the lower levels. These reports can be exploited to implement recovery actions.

INRIA's ORCCAD architecture is one of the architectures that put an emphasis on application safety [36]:
o Rigorous real-time execution of the control laws.
o Use of the synchronous language ESTEREL for the specification of the control part.
o Use of formal verification tools for the control part of applications.

Safety in the accomplishment of robotic missions was also one of the main motivations of the « Architecture Logicielle pour la robotique mobile et téléopérée » (software architecture for mobile and teleoperated robotics) project, which led to the creation of the PILOT language and of its software architecture. In the following section, we present the work carried out in this context for the dependability of robotic applications. A brief description of the control architecture of the PILOT language is given first. The safety approach and the solutions implemented within this architecture are then discussed.

FIG. 1 – The PILOT control system (Human Machine Interface, communication server, rule generator, evaluator, interpreter and execution module, communicating through sockets and shared memory, with a wireless link to the robot)

4 Safety mechanisms of the PILOT architecture

4.1 The PILOT software architecture

The PILOT control system (FIG. 1) is the interface between the user and the controlled machine (the target robot) [9][26]. It comprises six modules: a Human Machine Interface (HMI), a Communication Server, a Rule Generator, an Evaluator, an Execution Module (or Driver) and an Interpreter. These modules run in parallel and communicate through sockets and shared memory. The control system can run either in centralised or in distributed mode; the execution mode is chosen statically (before compilation). The HMI provides means for building plans, creating actions dynamically (without recompiling the code), and modifying the plan both before and during its execution. It also integrates means for supervising the execution of the plan. The HMI stores the plan in a memory area shared with the interpreter. The latter reads the plan in shared memory and sends orders (request to evaluate the precondition of an action, order to start the action, etc.) to the other modules in order to carry out the execution of the plan. The communication server manages inter-module communications. The role of the rule generator is to transform the character strings of the precondition and monitoring rules into binary trees; it stores the result in a memory area shared with the evaluator. The latter evaluates the precondition and monitoring rules from the corresponding binary trees. The execution module is the interface between the robot and the control system: it translates the high-level orders of the plan into low-level orders understandable by the teleoperated machine. The execution module supports several communication protocols (serial link, Ethernet, FDDI).
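To make the rule-handling pipeline concrete, here is a minimal sketch of what the rule generator and evaluator pair could look like: a boolean rule string is parsed into a binary tree that is later evaluated against current sensor values. The operator syntax, the condition names and the use of Python are illustrative assumptions, not the actual PILOT rule language or its implementation.

```python
# Minimal sketch of a rule generator: parse a boolean rule string into a
# binary tree that an evaluator can later walk against sensor values.
# The operator set ('&', '|', '!') and syntax are illustrative assumptions.
import re
from dataclasses import dataclass
from typing import Optional

@dataclass
class Node:
    value: str                      # operator ('&', '|', '!') or condition name
    left: Optional["Node"] = None
    right: Optional["Node"] = None

def tokenize(rule: str):
    return re.findall(r"[A-Za-z_]\w*|[&|!()]", rule)

def parse(tokens):
    """Recursive-descent parser: or_expr := and_expr ('|' and_expr)*, etc."""
    def or_expr(i):
        node, i = and_expr(i)
        while i < len(tokens) and tokens[i] == "|":
            rhs, i = and_expr(i + 1)
            node = Node("|", node, rhs)
        return node, i
    def and_expr(i):
        node, i = atom(i)
        while i < len(tokens) and tokens[i] == "&":
            rhs, i = atom(i + 1)
            node = Node("&", node, rhs)
        return node, i
    def atom(i):
        if tokens[i] == "!":
            child, i = atom(i + 1)
            return Node("!", child), i
        if tokens[i] == "(":
            node, i = or_expr(i + 1)
            return node, i + 1        # skip the closing ')'
        return Node(tokens[i]), i + 1
    node, _ = or_expr(0)
    return node

def evaluate(node: Node, sensors: dict) -> bool:
    """The evaluator walks the shared binary tree against current sensor values."""
    if node.value == "&":
        return evaluate(node.left, sensors) and evaluate(node.right, sensors)
    if node.value == "|":
        return evaluate(node.left, sensors) or evaluate(node.right, sensors)
    if node.value == "!":
        return not evaluate(node.left, sensors)
    return bool(sensors.get(node.value, False))

tree = parse(tokenize("obstacle & (moving | !armed)"))
print(evaluate(tree, {"obstacle": True, "moving": False, "armed": False}))  # True
```

Storing the parsed tree in shared memory, as PILOT does, lets the evaluator re-evaluate the same rule cheaply at every monitoring cycle without re-parsing the string.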

4.2 Dependability with PILOT

4.2.1 Internal mechanisms

PILOT actions include precondition and monitoring rules. These rules are safety means for the PILOT application: an action can only be executed if its precondition is true, and when a monitoring rule is satisfied, the associated processing is carried out (the default processing is to stop the action). This mechanism is equivalent to an exception mechanism and is one way of implementing fault tolerance.

Consider, for example, the move-forward action of a mobile robot equipped with obstacle detectors: a precondition rule could be the test for the absence of an obstacle, and one of the monitoring rules could be the test for the presence of an obstacle, with the associated processing being to stop the execution of the action.
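A minimal sketch of this guard mechanism, under assumed names (the action, the sensor source and the rule contents are hypothetical): the precondition is checked once before start, then the monitoring rules are polled during execution, the default reaction being to stop the action, as described above.

```python
# Sketch of precondition / monitoring rules guarding an action: check the
# precondition once, then poll the monitoring rules at each step and stop
# the action when one fires. The action and sensors are stand-ins.
import itertools

def read_sensors(step):
    # Stand-in for real sensor acquisition: an obstacle appears at step 3.
    return {"obstacle": step >= 3}

def run_action(name, precondition, monitors, max_steps=10):
    if not precondition(read_sensors(0)):
        print(f"{name}: precondition false, action not started")
        return
    for step in itertools.count():
        if step >= max_steps:
            print(f"{name}: completed")
            return
        sensors = read_sensors(step)
        for rule, reaction in monitors:
            if rule(sensors):
                reaction(name)      # default reaction: stop the action
                return
        print(f"{name}: executing step {step}")

run_action(
    "move_forward",
    precondition=lambda s: not s["obstacle"],
    monitors=[(lambda s: s["obstacle"],
               lambda name: print(f"{name}: obstacle detected, action stopped"))],
)
```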

PILOT plans can be modified during their execution, which is a major asset for fault tolerance: in case of a malfunction during the execution of a plan, the operator can make modifications that bring the system back to a satisfactory operating state (continuation of the mission, or stopping in a safe state).

The interpreted nature of the language and the possibility of modifying plans during execution make it possible to execute incomplete plans. One can thus launch the execution of a plan whose main sequence has no end, or one containing a parallel structure whose execution cannot terminate as it stands because it is incomplete. To overcome this drawback, the PILOT control environment was equipped with a syntax-directed editing mechanism that guarantees the syntactic validity of the plan at every phase of its construction (insertion, modification, deletion of primitives). This approach preserves the benefits of dynamic plan modification: plan termination remains guaranteed, etc.

Syntax-directed editing does not address the semantic validity of the plan when it is modified during execution. An approach based on the controller synthesis formalism was therefore incorporated into the control architecture in order to make modifications during execution safe, in particular by managing aspects such as the deletion or insertion of primitives. Thanks to this work, it is now possible to perform "safe" compensation or error recovery actions during a mission.

4.2.2 Mechanisms related to the development process

The various modules of the PILOT control system were modelled with finite state automata, and interpretation algorithms were defined for the various primitives of the language. These elements provide a good basis for error prevention (application of formal verification methods). They also help avoid errors due to the distortion of information along the software development process.

To increase the robustness of the PILOT system, static and dynamic testing approaches were applied to its interpreter, which is one of its most critical modules. The reactive nature of robotic applications increases the complexity of testing, because one must take into account, in addition to the usual factors, the hardly controllable events generated by the robot. Another important point is the potential damage that tests carried out directly on the robot could cause. A simple robot simulator was therefore built for the testing operations.

Static testing consisted, on the one hand, in reading the source code in order to detect programming errors and, on the other hand, in analysing the source code against the interpretation algorithms and the semantics of PILOT.

Dynamic testing, for its part, consisted in defining test sets and applying them to the binary code of the interpreter. The test data were defined by combining a functional approach with the feedback from tests already performed. For the definition of a representative sample of test data, we adopted an incremental approach (sketched below): the empty sequence was tested first, then the other primitives of the language were tested individually. Three kinds of combinations of the language primitives were then considered:
• Combination in length, by increasing the number of elements in the sequences of the plan.
• Combination in width, by increasing the number of branches in parallel, preemption or conditional structures.
• Combination in depth, by increasing the nesting level.
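The incremental construction of test data might be sketched as follows; the primitive names and the encodings used for parallel and nested structures are placeholders introduced for illustration, not actual PILOT primitives.

```python
# Sketch of the incremental test-data generation described above: start from
# the empty sequence, test each primitive alone, then combine primitives
# pairwise in length (sequencing), width (parallel branches) and depth
# (nesting). Primitive names and plan encodings are placeholders.
from itertools import product

PRIMITIVES = ["action", "parallel", "preemption", "conditional"]

def test_plans():
    yield []                                    # empty main sequence
    for p in PRIMITIVES:                        # each primitive alone
        yield [p]
    for a, b in product(PRIMITIVES, repeat=2):  # pairwise combinations
        yield [a, b]                            # length: longer sequences
        yield [("par", a, b)]                   # width: more branches
        yield [(a, [b])]                        # depth: deeper nesting

for plan in test_plans():
    print(plan)
```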

In order to have a bounded set of test sets, we made a number of assumptions: actions of the same type are interchangeable; the set of sequences resulting from the combination of arbitrary pairs of primitives is representative of the set of sequences containing two or more primitives, except for memory issues; etc.

Each of the testing approaches applied to the interpreter detected errors of various kinds (design errors in the handling of software interrupts and in the handling of the termination of continuous actions, programming errors, etc.). These errors were corrected, and very few malfunctions have been observed in the use of the interpreter since this work.

Although the static and dynamic testing techniques described above proved very useful for detecting and correcting errors in the interpretation programs, they cannot guarantee that the interpretation of plans conforms to the operational semantics of the PILOT language. Complementary work was carried out to overcome this limitation. It consisted in modelling the interpretation algorithms and verifying their conformity with the operational semantics of the language, in order to correct possible malfunctions and to regenerate the interpreter code from the validated model. Coloured Petri nets (CPN) were used for the modelling and the verification, with Design/CPN (http://www.daimi.aau.dk/DesignCPN) as software support.

FIG. 2 – Global architecture (mission preparation and data exploitation on the ship / scientific laboratory; operational ship hosting the Surface Supervisor with checklist, diagnosis, vehicle configuration and the Surface Intelligent Diagnosis; Autonomous Vehicle hosting the Vehicle Controller, the on-board Intelligent Diagnosis and the archived data; cartography, scientific data, trajectories and actions, operational validation and dive data are exchanged over a) deck, b) optical fibre and c) acoustic links)

Petri nets, and more particularly coloured Petri nets, were chosen for several reasons. Their graphical nature offers the user-friendliness desired for later using the model as a communication medium between the various people involved in the development of the control software, so as to avoid errors due to the distortion of information. They make it possible to represent the various concepts of algorithmics and programming in a relatively simple way. The availability of tools for the simulation and verification of the models was also an important criterion.

The modelling showed that simplifications were possible, both in the internal representation of a plan and in the interpretation algorithms, and these simplifications were applied to the control system. Test plans were generated following the approach adopted during the dynamic testing of the interpretation algorithms. Simulating the execution of these plans revealed termination problems. Since simulation can in practice explore only part of the execution paths, complementary work was carried out: from the CPNs modelling the interpretation algorithms and from a test plan, the graph of reachable markings corresponding to the possible execution paths is generated with the Design/CPN tool. The marking graph and the test plan are then passed to the verification program, which examines, for each execution path, whether the operational semantics of the language is satisfied. The Design/CPN environment was extended to integrate our verification program.
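The role of the verification program can be illustrated with a toy sketch: enumerate the execution paths of a small marking graph and check each one against a predicate standing in for the operational semantics. The graph encoding and the conformance predicate are assumptions made for illustration; the actual check was implemented as a Design/CPN extension.

```python
# Sketch of the conformance check: enumerate the execution paths of a
# reachable-marking graph and test each one against a predicate that stands
# in for the operational semantics. The tiny graph and predicate are invented.

# Marking graph as adjacency lists: marking -> list of (fired_event, next_marking)
GRAPH = {
    "m0": [("start", "m1")],
    "m1": [("step", "m2"), ("abort", "m3")],
    "m2": [("end", "m3")],
    "m3": [],                      # terminal marking
}

def paths(graph, marking, trace=()):
    """Depth-first enumeration of all event sequences reaching a terminal marking."""
    if not graph[marking]:
        yield trace
        return
    for event, nxt in graph[marking]:
        yield from paths(graph, nxt, trace + (event,))

def conforms(trace):
    # Stand-in semantic rule: every execution must begin with 'start' and
    # must terminate either normally ('end') or by an explicit 'abort'.
    return trace[:1] == ("start",) and trace[-1] in ("end", "abort")

for trace in paths(GRAPH, "m0"):
    print(trace, "OK" if conforms(trace) else "VIOLATION")
```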

After this presentation of the safety mechanisms offered by the PILOT environment and of the testing and verification approaches adopted to strengthen its robustness, we present, in the next section, the study and integration of dependability mechanisms into a global architecture for the preparation, supervision and execution of missions of autonomous underwater vehicles (Fig. 2). This work was carried out in collaboration with the Robotics Division of IFREMER Toulon. We present the solutions proposed for safety, as well as the proposals made for their implementation at the various levels of the architecture.


5. Safety mechanisms for autonomous underwater vehicle missions

5.1 Mission preparation level

Two approaches are considered for safety: property verification and fault tolerance. We address them in the following subsections.

5.1.1 Property verification

FIG. 3 – Specification and verification system (an informal description of the mission and the vehicle's actions feed a formal specification system which, using cartographic data, the vehicle's characteristics and specific properties, produces a formal description of the mission and constraints; a formal verification system checks them and delivers the result)

The aim is threefold: to verify the adequacy between the constraints derived from the mission specification and the characteristics of the vehicle and of the environment (accessibility of the area to be explored, accuracy of the sensors and of the trajectory, measurement acquisition frequency, mission duration, energy, speed and altitude limits, sensors and payloads suited to the mission, calibration time, keeping the vehicle, if necessary, within the area to be explored); to verify the consistency, in terms of action sequencing and execution logic, of the mission specified by the scientist (for example, some executions cannot be performed in parallel, or the postconditions of one action and the preconditions of the next may be incompatible); and to perform diagnoses on the vehicle, the control system and their possible models (data from previous dives may be used to adjust some parameters of the diagnosis). The diagnosis phase may, for example, make it possible to verify the termination of the mission.

Figure 3 shows the structure of the specification and verification subsystem proposed at this level. Various tools can be considered for design and specification; we can mention the STOOD environment from TNI-Europe, which can moreover generate programs for several verification systems. At the verification level, both model-checking and theorem-proving approaches can be explored, and various tools are available. Some rely on the synchronous approach, widely used for the design of reactive systems, of which robotic systems are part: commercial tools such as SCADE and ESTEREL [8], SILDEX [11], or ATHYS's CELL CONTROL environment specialised in industrial automation [12]. Other graphical specification and verification formalisms, either synchronous and dedicated to control, such as STATECHARTS [16], SYNCCHARTS [3] and GRAFCET [1], or asynchronous, such as Petri nets, also offer interesting possibilities.

The theorem-proving approach has also given rise to various proof tools (provers) [14]: Isabelle [29], HOL [15], Coq [7], PVS [37], Boyer-Moore [4]. Most of these systems use higher-order logics, which are extremely flexible and expressive.

5.1.2 Tolérance aux fautes<br />

Il s’agit ici de prendre en compte les défaillances<br />

envisageables afin de permettre, au niveau de la définition<br />

de la mission, de prévoir des actions de recouvrement.<br />

L’environnement de conception de plans de missions<br />

devra fournir des moyens permettant d’intégrer les<br />

réactions aux défaillances (mécanismes de recouvrement).<br />

Dans l’élaboration du plan de mission, on pourra prévoir<br />

2 cas de figures:<br />

• Introduction de la redondance pour pallier certaines<br />

défaillances, par exemple modification de la<br />

trajectoire initiale suite à la détection d’un obstacle. Il<br />

s’agit dans ce cas de créer un plan de « repli » pour la<br />

partie jugée critique. La gestion d’un tel aspect<br />

dépend également de la richesse du système de<br />

contrôle qui peut déjà être équipé d’un mécanisme<br />

automatique de contournement d’obstacle.<br />

• Classement des actions par niveau de « criticité » de<br />

façon à prendre les actions de compensation<br />

appropriées en cas de dysfonctionnement (abandon et<br />

passage à l’action suivante, abandon de la mission,<br />

saut à une action spécifique ou à un point particulier<br />

du plan de mission, …).<br />

The approach proposed for integrating error-handling mechanisms consists, after the standard mission plan has been built (i.e. one taking into account no fault management other than that specified when the actions were created), in specifying for each action or primitive the corresponding fault handling. The primitive is selected, and the fault conditions and the associated reactions are defined. The specification system then uses the plan and the entered data to generate the mission plan file incorporating the fault handling.

At the action level, the initial fault conditions and fault handling of the action are extended when needed, to take new fault conditions into account and associate the desired handling with them. Fault conditions are logical expressions based on sensor values, on the execution states of actions or primitives, and on the fault states received from the low-level control system. The reactions associated with fault conditions are: stopping the action or the primitive (the interruptible or non-interruptible nature of the action must be taken into account), and/or jumping to a point of the plan, and/or executing a sequence to be defined.
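A possible shape for this extension-based specification is sketched below; the condition strings, the reaction vocabulary and the `Action`/`FaultRule` names are assumptions introduced for illustration.

```python
# Sketch of the extension-based fault specification: each action carries its
# initial, built-in fault rules, and the mission designer extends them per
# action or primitive with extra conditions and reactions. Condition strings
# and the reaction vocabulary (stop / jump / sequence) are illustrative.
from dataclasses import dataclass, field

@dataclass
class FaultRule:
    condition: str          # logical expression over sensors and execution states
    reaction: str           # e.g. "stop", "jump:wp4", "sequence:resurface"

@dataclass
class Action:
    name: str
    interruptible: bool = True
    fault_rules: list = field(default_factory=list)   # initial, built-in rules

    def extend(self, rule: FaultRule):
        """Add a mission-specific fault rule on top of the built-in ones."""
        if rule.reaction == "stop" and not self.interruptible:
            raise ValueError(f"{self.name} cannot be stopped by a fault reaction")
        self.fault_rules.append(rule)

survey = Action("survey_leg", fault_rules=[FaultRule("thruster_fault", "stop")])
survey.extend(FaultRule("altitude < 2.0", "sequence:climb_then_resume"))
survey.extend(FaultRule("leak_detected", "jump:abort_and_surface"))
print([r.reaction for r in survey.fault_rules])
```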

5.2 Surface supervisor level

Two aspects are considered at this level: mission supervision, and the verification of properties related to the progress of the mission. Mission supervision consists in retrieving information on the progress of the mission and making it available, by displaying it on screen and possibly storing it in a file accessible to the user, so that the operator can take decisions, in particular in case of malfunction. The retrieved data are sensor values, the execution states of the actions, and the possible failure states (alarms, etc.).

As regards property verification, an Intelligent Diagnosis System (SDI, from the French Système de Diagnostic Intelligent) performs more elaborate checks on the progress of the mission. It is made up of intelligent diagnosis modules, each with its own speciality (for example a particular artificial intelligence technique, or the management of a particular type of fault), and of a decision module. The decision module is in charge, on the one hand, of collecting the information useful for diagnosis and passing it to the appropriate intelligent diagnosis modules, and, on the other hand, of synthesising the diagnosis information received from the various intelligent diagnosis modules in order to send, to the modules requesting it (for example, the controlled system, or another module keeping a trace of the failures), the information on the detected anomalies or the correction orders.
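The two roles of the decision module, routing observations to the relevant diagnosis modules and synthesising their verdicts for the requesters, could be organised as in the following sketch; the module names, topics and data format are assumptions.

```python
# Sketch of the Intelligent Diagnosis System: specialised diagnosis modules
# each handle one fault family; a decision module routes incoming data to
# the relevant modules and synthesises their verdicts for the requesters.
# Module names, topics and the data format are illustrative assumptions.

class DiagnosisModule:
    def __init__(self, name, topics, diagnose):
        self.name, self.topics, self.diagnose = name, topics, diagnose

def battery_check(data):
    return "battery low" if data["battery"] < 20 else None

def link_check(data):
    return "acoustic link degraded" if data["link_loss"] > 0.3 else None

class DecisionModule:
    def __init__(self, modules):
        self.modules = modules

    def process(self, data):
        anomalies = []
        for m in self.modules:
            if m.topics & data.keys():            # route only the relevant data
                verdict = m.diagnose(data)
                if verdict:
                    anomalies.append((m.name, verdict))
        return anomalies                           # synthesis sent to requesters

sdi = DecisionModule([
    DiagnosisModule("energy", {"battery"}, battery_check),
    DiagnosisModule("comms", {"link_loss"}, link_check),
])
print(sdi.process({"battery": 12, "link_loss": 0.5}))
```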

The various diagnoses and recoveries envisaged at the SDI level are the following:
• Diagnosis of effector failures,
• Diagnosis of communication link failures,
• Battery monitoring (endurance),
• Monitoring of trajectory accuracy,
• Monitoring of the accuracy/quality of the measured data,
• Monitoring of the embedded control software (partial failure case).
For a total failure of the embedded control software, recovery procedures are predefined. Likewise, the monitoring of the surface computer and of its software is handled by the operator.

The diagnosis of AUV effector failures can be considered an ordinary classification problem. The availability of human expertise and of examples, however, points towards a solution based on mixing symbolic and neural approaches. As for the recovery of these failures, no example base exists for the moment. It is a control problem to which classical control theory cannot be applied, and whose recovery depends more on control strategies than on control approaches. Since the selection of control strategies may depend on several constraints that are sometimes difficult to measure or express, fuzzy logic control theory appears to be an interesting solution.

The diagnosis of the communication links and the monitoring of the AUV's battery are similar problems. They correspond more to risk management problems than to pure diagnosis methods. Consequently, the objective is not only to perform a diagnosis by studying a given static situation, but rather to observe the evolution of this situation over time, and then to evaluate the risk of a failure or of a conflict situation. Tools based on probabilistic methods, such as Bayesian networks, are particularly well suited to this type of problem.

5.3 Vehicle controller level

The safety subsystem at the vehicle controller level comprises an Intelligent Diagnosis System, whose principle is the same as that of the surface supervision level, a mode manager, a fault manager, an energy manager, an archiving system and a conversion module. A robust protocol is used for transferring the mission between the surface and the vehicle, so as to guard against any data corruption and against the execution of incomplete missions. Updating the plan must be possible at any time through an acoustic link (submerged mode) or a radio or telemetry link (surface and near-surface modes). Unlike the surface SDI, whose role is that of an agent in charge of analysing and diagnosing faults, providing the operator with a synthesis of the diagnosis results and, possibly, suggesting actions to overcome the failures (the operator alone is in charge of sending corrective action orders to the vehicle), the Vehicle Intelligent Diagnosis System (SDIE) operates autonomously: on the basis of the diagnoses it has made, it directly proposes the corrective actions to the vehicle's control system. The mode manager controls all the state transitions of the vehicle. The fault manager detects the vehicle's faults and takes the predefined on-fault actions; the possible fault responses are: "stop and surface", "change the mission step", "ignore the fault and continue". The energy manager keeps track of past energy consumption and predicts future consumption; when the energy reaches certain "switching" levels, a warning event and an alarm event are triggered. The archiving subsystem stores the data related to the mission and sends the SDIE information allowing it to check the consistency of the data and to detect possible anomalies. The conversion module converts the recovery actions requested by the SDIE into a sequence suited to the destination modules, and converts the information coming from the other modules into a format suited to the SDIE.
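The fault manager's three responses and the energy manager's switching levels lend themselves to a small sketch; only the response vocabulary comes from the text above, while the thresholds, fault names and consumption profile are invented for illustration.

```python
# Sketch of the vehicle-level fault manager and energy manager. The three
# fault responses ("stop and surface", "change mission step", "ignore and
# continue") are those listed above; thresholds and fault names are invented.

FAULT_RESPONSES = {
    "leak": "stop_and_surface",
    "sensor_dropout": "change_mission_step",
    "minor_glitch": "ignore_and_continue",
}

def handle_fault(fault):
    return FAULT_RESPONSES.get(fault, "stop_and_surface")  # conservative default

WARNING_LEVEL, ALARM_LEVEL = 30.0, 15.0   # percent of remaining energy

def energy_events(used_profile, capacity=100.0):
    """Track past consumption and raise warning/alarm events at switching levels."""
    remaining = capacity
    for step, used in enumerate(used_profile):
        remaining -= used
        level = 100.0 * remaining / capacity
        if level <= ALARM_LEVEL:
            yield (step, "ALARM", level)
        elif level <= WARNING_LEVEL:
            yield (step, "WARNING", level)

print(handle_fault("sensor_dropout"))
for event in energy_events([20, 25, 30, 15]):
    print(event)
```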

6. Conclusion

As far as software dependability is concerned, most of the work deals with fault diagnosis and fault handling using artificial intelligence techniques. Only a few software architectures integrate formal methods or have given rise to their use. Likewise, software fault tolerance mechanisms are little exploited. The growing use of distributed programming in these architectures nevertheless calls for the incorporation of mechanisms such as replication, which is very often handled at the hardware level only. One can also note how little experience is reported on rigorous testing in the realisation of these architectures; although this may be due to the very perception of testing activities, it reflects a lack of interest in the matter. Testing activities are nevertheless very important in the process of developing safe software. Overall, the elements mentioned above reinforce the need for an effort in applying dependability techniques to the design and implementation of robotic architectures.

The work carried out in the design of the PILOT software architecture has provided generic solutions for the dependability of robotic applications: a syntax-directed editing mechanism, error recovery through the possibility of modifying missions during execution, and a mechanism making modifications during execution safe. The static and dynamic testing approaches and the formal methods (modelling and verification with coloured Petri nets) applied to the plan interpreter can also be applied to other mission programming environments. As part of the study of safety mechanisms for the AUV mission programming architecture, an approach for fault management has been proposed. The novel idea in this approach is the hierarchical organisation of fault management, and the specification by extension, which makes it possible, on the one hand, to extend the initial fault management built into the action, thus avoiding redundant processing, and, on the other hand, to associate a fault management with each primitive. The solutions proposed at the various levels of the global mission preparation architecture are applicable to other similar environments, and even to robotic applications in non-maritime domains (terrestrial mobile or manufacturing robotics).

References

[1] ADEPA, Le Grafcet, Cépaduès Editions, Paris, France, 1992.
[2] Al-Daraiseh A., J. Zalewski and H. Toetenel, Software engineering in ground transportation systems, In Proceedings of SCI'01, 5th World Multiconference on Systemics, Cybernetics and Informatics, Orlando, FL, July 2001.
[3] André C., Representation and analysis of reactive behaviors: A synchronous approach, In CESA'96, IEEE-SMC, Lille, France, 1996.
[4] Bosscher D.J.B., I. Polak and F.W. Vaandrager, Verification of an audio control protocol, In H. Langmaak, W.P. de Roever and J. Vytopil, editors, Proceedings of the Third School and Symposium on Formal Techniques in Real-Time and Fault-Tolerant Systems, volume 863 of Lecture Notes in Computer Science, pages 170-192, Springer-Verlag.
[5] Brooks R.A., A robust layered control system for a mobile robot, IEEE Journal of Robotics and Automation, pages 14-23, March 1986.
[6] Castillo E., D. Simon, B. Espiau and K. Kapellos, Computer-aided design of a generic robot controller handling reactivity and real-time control issues, Research report 1801, INRIA, November 1992.
[7] Devillers M.C.A., W.O.D. Griffioen, J.M.T. Romijn and F.W. Vaandrager, Verification of a leader election protocol: formal methods applied to IEEE 1394, Report CSI-R9728, Computing Science Institute, Nijmegen, 1997.
[8] Dima C., A. Girault, C. Lavarenne and Y. Sorel, Off-line real-time fault-tolerant scheduling, In 9th Euromicro Workshop on Parallel and Distributed Processing, PDP'01, pages 410-417, Mantova, Italy, February 2001.
[9] Fleureau J.L., L. Nana Tchamnda, L. Marcé and L. Abalain, Remote-controlled vehicle using PILOT Language, In ANS'99, Pittsburgh, Pennsylvania, American Nuclear Society, 1999.
[10] Garbajosa J., O. Tejedor and M. Wolff, Natural language front end to test systems, Annual Review in Automatic Programming, vol. 19, pp. 261-267, 1994.
[11] Girault A., Sur la répartition de programmes synchrones, Thèse de Doctorat, INPG, Grenoble, France, January 1994.
[12] Girault A., Elimination of redundant messages with a two pass static analysis algorithm, Parallel Computing, 28(3):433-453, March 2002.
[13] Gomez P., S. Romero, P. Serrahima and I. Alarcon, A real time expert system for continuous assistance in process control: a successful approach, Annual Review in Automatic Programming, vol. 19, pp. 371-375, 1994.
[14] Groote J.F., F. Monin and J.C. van de Pol, Checking verification of protocols and distributed systems by computer, In D. Sangiorgi and R. de Simone, editors, Proceedings of Concur'98, Sophia Antipolis, LNCS 1466, pages 629-655, Springer-Verlag, 1998.
[15] Goldschlag D.M., Verifying safety and liveness properties of a delay insensitive fifo circuit on the Boyer-Moore prover, International Workshop on Formal Methods in VLSI Design, 1991.
[16] Harel D., STATECHARTS: A visual approach to complex systems, Science of Computer Programming, 8(3), 231-274, 1987.
[17] Hassoun M. and C. Laugier, Reactive motion planning for an intelligent vehicle, In Intelligent Vehicles'92 Symposium, pages 259-264, Detroit, July 1992.
[18] Hayes-Roth B., A blackboard architecture for control, Artificial Intelligence, 26: pp. 251-321, 1985.
[19] Kapellos K., D. Simon and B. Espiau, Control laws, tasks, procedures with ORCCAD; application to the control of an underwater arm, In 6th IARP (International Advanced Robotic Program), La Seyne sur Mer, France, 1996.
[20] Kermarrec Y., L. Nana and L. Pautet, Implementing recovery blocks in GNAT: a powerful fault tolerance mechanism and a transaction support, In ACM, editor, Proceedings of the TRI-Ada'95 Conference, Anaheim, California, November 1995.
[21] Kermarrec Y., L. Nana and L. Pautet, Providing fault-tolerant services to distributed Ada 95 applications, In ACM, editor, Proceedings of the TRI-Ada'96 Conference, Philadelphia, USA, December 1996.
[22] Laprie J.C., Sûreté de fonctionnement des systèmes informatiques et tolérance aux fautes: concepts de base, TSI, 4(5):419-429, September 1985.
[23] Leveson N., S.S. Cha and T.J. Shimeall, Safety and verification of Ada programs using software fault trees, IEEE Software, 8(7), 48-59, 1991.
[24] Lumia R., J. Fiala and A. Wavering, The NASREM robot control system and testbed, IEEE Journal of Robotics and Automation, 5(1), pp. 20-26, 1990.
[25] Maier T., FMEA and FTA to support safety design of embedded software in safety-critical systems, In Proceedings of the ENCRESS Conference on Safety and Reliability of Software Based Systems, Belgium, 1995.
[26] Nana Tchamnda L., J.L. Fleureau and L. Marcé, A control system for PILOT: software architecture and implementation issues, In ANS'01, ANS 9th International Topical Meeting on Robotics and Remote Systems, Seattle, Washington, March 2001.
[27] Nilsson N., A mobile automation: an application of artificial intelligence techniques, In Proc. Int. Joint Conf. on Artificial Intelligence, pp. 509-520, 1969.
[28] Nilsson N., Shakey the robot, Technical Report 323, SRI, Menlo Park, CA.
[29] Paulson L.C., On two formal analyses of the Yahalom protocol, Technical report 432, Computer Laboratory, University of Cambridge, 1997.
[30] Ramadge P.J. and W.M. Wonham, The control of discrete event systems, Proceedings of the IEEE, Special issue on dynamics of discrete event systems, vol. 77, no. 1, pages 81-98, 1989.
[31] Rosenblatt J., DAMN: A distributed architecture for mobile navigation, Journal of Experimental and Theoretical Artificial Intelligence, 9(2/3), pp. 339-360, 1997.
[32] Rutten E., A framework for using discrete control synthesis in safe robotic programming, Research report, INRIA, 2000.
[33] Schneider S., V. Chen, G. Pardo-Castellote and H. Wang, ControlShell: A software architecture for complex electro-mechanical systems, International Journal of Robotics Research, Special issue on Integrated Architectures for Robot Control and Programming, 1998.
[34] Seabra Lopes L. and L.M. Camarinha-Matos, Learning to diagnose failures of assembly tasks, Annual Review in Automatic Programming, vol. 19, pp. 97-103, 1994.
[35] Tigli J.Y., Vers une architecture de contrôle pour robot mobile orientée comportement, SMACH, Thèse de Doctorat, Université de Nice - Sophia Antipolis, January 1996.
[36] Turro N., MaestRo: Une approche formelle pour la programmation d'applications robotiques, Thèse de doctorat, Université de Nice - Sophia Antipolis, September 1999.
[37] Vitt J. and J. Hooman, Assertional specification and verification using PVS of the Steam Boiler Control System, In J.-R. Abrial et al., editors, Formal Methods for Industrial Applications: Specifying and Programming the Steam Boiler Control, volume 1165 of Lecture Notes in Computer Science, 1996.
[38] Zalewski J., W. Ehrenberger, F. Saglietti, J. Gorski and A. Kornecki, Safety of computer control systems: challenges and results in software development, Annual Reviews in Control, vol. 27, pp. 23-37, 2003.
[39] Zhang J., A.J. Morris and G.A. Montague, Fault diagnosis of a CSTR using fuzzy neural networks, Annual Review in Automatic Programming, vol. 19, pp. 153-158, 1994.
pp. 153-158, 1994.


ORCCAD, a framework for safe robot control design and implementation

Daniel Simon, Roger Pissard-Gibollet and Soraya Arias
INRIA Rhône-Alpes, 655 avenue de l'Europe
38330 MONTBONNOT ST MARTIN, FRANCE
http://sed.inrialpes.fr/Orccad/

Abstract

Robotic systems are typical examples of hybrid systems where continuous-time aspects, related to control laws, must be carefully merged with discrete-time aspects related to control switches and exception handling. These two aspects interact in real-time to ensure an efficient nominal behaviour of the system, together with safe and graceful degradation otherwise. In a mixed synchronous/asynchronous approach, ranging from user requirements to run-time code, Orccad provides formalised real-time control structures, the coordination of which is specified using the ESTEREL synchronous language. CAD tools have been developed and integrated to help users along the steps of the design, verification, implementation and exploitation processes.

1 Motivation

A fully fledged robotic system includes various subsystems coming from various fields of science and technology, such as mechanical engineering, automatic control, data processing and computer science. The goal of the control architecture is to organise all these subsystems coherently, so that the global system behaves in an efficient and reliable way and matches the end-user's requirements.

Robotics primarily deals with physical devices like arms or vehicles. These devices are governed by the laws of physics and mechanics. Compared with virtual, purely computing systems, they exhibit inertia and their models are never perfectly known. Usually their behaviour can be described by differential equations where time is a continuous variable. Their state can be measured using sensors of various kinds, which themselves are not perfect. Control theory provides a large set of methods and algorithms to govern their basic behaviour through closed-loop control, ensuring the respect of required performance and of crucial properties like stability.

Robots of any type interact with their physical environment. Although this environment can be sensed by exteroceptive sensors like cameras or sonars, it is only partially known and can evolve through robot actions or external causes. Thus a robot will face different situations during the course of a mission and must react to perceived events by changing its behaviour according to corrective actions. These abrupt changes in the system's behaviour fall within the theory of Discrete Event Systems.

Besides the logical correctness of computations, the efficiency and reliability of the system rely on many temporal constraints. The performance of control laws strongly depends on the respect of sampling rates and computing latencies. Corrective actions must usually be executed within a maximum delay to ensure mission success and system safety. The optimisation of computing resources is still a relevant problem, especially for remote autonomous robots like planet rovers [29] or underwater vehicles [31], where both on-board space and energy are severely limited.

Therefore robotic systems belong to the class of hybrid, reactive and real-time systems, in which different features need different methods and tools to be programmed and controlled. The ORCCAD [5] environment aims to provide users with a set of coherent structures and tools to develop, validate and encode robotic applications in this framework.


2 The Orccad architecture

2.1 Basic statements

The formal definition of a robotic action is a key point in the ORCCAD framework. It is based on the following basic statements:
• In many cases the physical tasks to be achieved by robots can be stated as automatic control problems, which can be efficiently solved in real-time by using adequate feedback control loops. Let us mention that the Task-Function approach [33] was specifically developed for robotic systems;
• The characterisation of the physical action is not sufficient to fully define a robotic action: starting and stopping times must be considered, as well as reactions to significant events observed during task execution; this is further used to design complex missions through action composition;
• Since the overall performance of the system relies on the existence of efficient real-time mechanisms at the execution level, particular attention must be paid to their specification and verification.

From the user's and programmer's point of view, the specification of a robotic application must be modular, structured and accessible to users with different domains of expertise. The end-user concerned with a particular application must be provided with high-level formalisms allowing them to focus on mission specification and verification issues; the control systems engineer needs an environment with efficient design, programming and simulation tools to express the control laws, which are then encapsulated for the end-user.

2.2 The Orccad approach

In ORCCAD, two entities are defined in order to capture the aforementioned requirements. The Robot-Task (RT) models basic robotic actions where control aspects are predominant; examples include the hybrid position/force control of a robot arm, the visual servoing of a mobile robot following a wall, or a constant-altitude survey of the sea floor by an underwater vehicle. The RT characterises in a structured way closed-loop control laws, along with their temporal features related to implementation and the management of associated events. These events are (see the sketch after this list):
• preconditions, which can be associated with measurements and watchdogs;
• events and exceptions of three types:
– synchronisation signals report the occurrence of a particular state in an RT; they can be used to synchronise different RTs;
– type 1 exceptions are locally processed in the RT, e.g. by tuning a parameter of the control law;
– type 2 exceptions signal that the RT cannot run to completion and must be switched for a new one, chosen by the supervision controller;
– type 3 exceptions are fatal for the application and must, as far as possible, drive the system to a safe recovery state;
• postconditions, which are emitted when the RT successfully terminates.
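As a rough illustration of this abstract view, the following sketch models an RT as a typed event interface with preconditions, the three exception types and a postcondition. In ORCCAD this behaviour is specified in ESTEREL and verified formally; the Python rendering and the method names are merely assumptions for exposition.

```python
# Minimal sketch of the abstract (external) view of a Robot-Task as a typed
# event interface: preconditions, three exception types and a postcondition.
# In ORCCAD this logic is written in ESTEREL; names here are assumptions.
from enum import Enum

class ExceptionType(Enum):
    T1_LOCAL = 1      # processed inside the RT (e.g. retune a parameter)
    T2_SWITCH = 2     # RT cannot complete, supervisor switches to another RT
    T3_FATAL = 3      # fatal for the application, drive to safe recovery state

class RobotTask:
    def __init__(self, name, preconditions):
        self.name = name
        self.preconditions = preconditions     # signals awaited before start

    def start(self, observed):
        if not all(p in observed for p in self.preconditions):
            raise RuntimeError(f"{self.name}: precondition not satisfied")
        print(f"{self.name}: STARTED")

    def signal(self, event, etype):
        if etype is ExceptionType.T1_LOCAL:
            print(f"{self.name}: handling {event} locally")
        elif etype is ExceptionType.T2_SWITCH:
            print(f"{self.name}: {event} -> request RT switch from supervisor")
        else:
            print(f"{self.name}: {event} -> safe recovery state")

    def terminate(self):
        print(f"{self.name}: postcondition Good_End emitted")

rt = RobotTask("KeepStableCam", preconditions={"camera_ready"})
rt.start({"camera_ready"})
rt.signal("UnStableCam", ExceptionType.T2_SWITCH)   # type 2, as in Figure 1
rt.terminate()
```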

For the mission designer, this set of signals and associated behaviours represents the abstract view of the RT, hiding all specification and implementation details of the control laws (Figure 1). Characterising the interface of an RT with its environment in a clear way, using typed input/output events, allows RTs to be composed easily in order to construct more complex actions, the Robot-Procedures (RPs). The RP paradigm is used to logically and hierarchically compose RTs and RPs in structures of increasing complexity. Usually, basic RPs are designed to fulfil a basic goal through several potential solutions, e.g. a mobile robot can follow a wall using predefined motion planning, visual servoing or acoustic servoing according to sensory data availability. RP design is hierarchical, so that common structures and programming tools can be used from basic actions up to a full mission specification.

These well-defined structures, associated with synchronous composition thanks to the use of the ESTEREL language [4], allow the formal verification of the expected controller behaviour to be systematised and thus automated. This is also a key to designing automatic code generators and partially automated verification. Formal definitions of RPs and RTs, together with the associated formal verification methods, may be found in [24].

Figure 1: Encapsulation of the control law in a reactive shell (the discretised-time control law, with its observers and driver ports, is wrapped in a reactive shell whose external view consists of discrete events: START, STARTED, ABORT, Suspend, Resume, Timeout, Good_End, T2_UnStableCam, T3_SENSOR_FAILED)

3 Specification of Robot-Tasks

The design of the robot controller begins with the design and test of basic actions, when the necessary actions do not pre-exist in the RT library. Let us now illustrate the design and validation process of an RT through an example of underwater robotics taken from [38], where the goal of the KeepStableCam RT is the stabilisation of an underwater vehicle using visual servoing.

Modular Specification in Continuous Time It should be emphasised that the RT designer may easily select the adequate models and tuning parameters in ORCCAD, since they belong to predefined classes in an object-oriented description of the control, available through the graphical interface depicted in Figure 2. The control algorithm is designed as a block diagram, where elementary algorithmic modules are connected through input/output ports, e.g. StableCam in Figure 2a. The values of the parameters must be set to instantiate the design. It is also necessary to enter the location of the data processing code (C files). The particular Physical-Resource module (VORTEX here) provides a gateway towards the physical system through its Driver ports. A complete description of more complex RTs and of sensor-based tasks may be found in [38].

Figure 2: a) Functional and temporal attributes of a Module – b) Specification of the reactive behaviour of an RT

Time-Constrained Specification The resulting specification defines the action from a continuous-time point of view, i.e. independently of time discretisation and other implementation-related aspects, which are considered in a subsequent design step. The passage from the above continuous-time specification to a description taking implementation aspects into account is done by associating temporal properties with modules, i.e. sampling periods, assumed worst-case execution times (WCET), and communications and synchronisations between the processes.

Modules can be synchronised in several ways:
• explicitly periodic modules are connected to a system clock through a hidden input port;
• an input port can be synchronised on an output port of another module, thus inheriting its period;
• an input port can be synchronised on an external source, e.g. an interrupt coming from a vision processing system.

By default, for basic control purposes, the designer specifies a single-loop, single-rate controller where all the algorithmic modules are gathered in a single real-time task for which a single sampling period is defined. More complex timing behaviours, e.g. multi-rate control, make use of more or less tight synchronisations between modules and clocks. In many cases simple design rules can be used to get a correct (deadlock-free) timing diagram [37]. For more complex designs, the synchronisation and timing diagram can be built interactively, and a Petri net model of the synchronisation skeleton of the RT is automatically extracted from the timed block diagram. Its logical correctness and timing analysis (using the provided WCETs of the modules) can thus be checked using the underlying model in the (max, plus) algebra [36]. Finally, feedback scheduling can be specified and implemented, provided that some additional instrumentation (for variable clock generation and accurate online execution time measurement) has been included in the real-time platform [40].

Specification of the reactive behaviour The logical behaviour of the RT is automatically encoded in ESTEREL through a graphical window (Figure 2c); here we specify that the UnStableCam signal must be treated as a type 2 exception. Thanks to the strong typing of events and exceptions, the code generator was proved to be correct and thus guarantees that crucial properties like safety and liveness hold [24]. Besides user-defined signals (pre- and post-conditions, exceptions), the code generator builds a large ESTEREL file where hidden system signals are also declared. These signals are used at run time to spawn, suspend or resume all the real-time threads necessary for the execution of the RT.

4 Design and Analysis of Procedures

After all necessary basic actions have been designed and validated, the user now wants to use them in more complex procedures to perform a useful underwater inspection mission [38]. The KEEPSTABLE procedure is specified in detail below, in order to illustrate the specification and analysis process proposed for the composition of basic actions.

4.1 Synchronous programming

However, even if these tools provide efficient run-time code, they remain difficult to use directly. They require expertise from designers and programmers who, as a consequence, spend time on programming tricks rather than concentrating on the specification and validation of actions and applications. Therefore, friendlier CAD systems, built over basic tools like C, ESTEREL or VxWorks, must be provided to improve both the programmer's efficiency and the programs' reliability.

From the verification side, the compositionality principle must preserve the coherence between the underlying mathematical models, in order to be able to perform formal computations at any level. As an example, the use of the single ESTEREL [4] synchronous reactive language as a target for automatic translation is a way of preserving a logical structure whatever the complexity of the application.

A consequence of this point of view is that the basic entities have to be carefully studied and that composition operators should have a proper semantics.

The formal definition of a robotic action is a key point of the ORCCAD approach. The Robot-Task (RT) models basic robotic actions where control aspects are predominant, like the hybrid position/force control of a robot arm or the visual servoing of a mobile robot. The characterisation of the interface of a RT with its environment through typed input/output events allows them to be composed easily in order to construct more complex actions, the so-called Robot-Procedures (RP), while hiding most implementation details. In its simplest expression, a RP coincides with a RT, while the most complex one might represent an overall mission. Briefly speaking, it specifies in a structured way a logical


and temporal arrangement of RTs in order to achieve an objective in a context-dependent and reliable way, providing predefined corrective actions in case of unsuccessful execution of RTs.

These formally defined structures [26], associated with synchronous composition thanks to the use of the ESTEREL language, allow formal verification of the expected controller behaviour to be systematised and therefore automated. Together with two other features of ORCCAD, i.e. the object-oriented model and the possibility of automatic code generation, this verification capability is a key point for meeting the safety requirements in the programming of critical applications.

4.2 Procedure specification

do
  SEQ( do
         KeepStableUS
       until Stabilized
       ;
       loop
         PAR( when T2_Stabilized
                do
                  KeepStableCam
                until UnStableCam
              when T2_UnStableCam
                do
                  KeepStableUS
                until Stabilized )
       end loop )
until Stop

Figure 3: Specification of a Procedure using: a) the GUI – b) the MAESTRO language

The KEEPSTABLE procedure aims at stabilising the underwater vehicle in spite of perturbations. The vehicle can be stabilised in two ways: it can be remotely locked on its target using either visual servoing or acoustic sensors. Here visual servoing (running the KeepStableCam RT) is considered as the nominal mode. However, if the camera loses the target, the controller must switch to sounder stabilisation (degraded mode), running the KeepStableUS RT until the vision tracking mode can be recovered. This redundancy is useful to increase the safety and efficiency of the system, and such a situation, where several RTs are exceptions of each other, is a typical structure in our procedures. Once again, the ESTEREL language is used to specify that actions run in sequence or in parallel, or must preempt each other. Thanks to the structure of ORCCAD programs, the specification of this procedure can be written in two ways, both of which spare the end-user from writing the full source ESTEREL file himself:

• The Robot-Procedure Editor displays the external view of selected RTs and RPs. The user just has to add some statements to express, e.g., sequencing (;), parallelism (||), or escape mechanisms (trap-exit) to complete the specification (these additional statements are highlighted in Figure 3a). As the designer only accesses the external views, he cannot jeopardise the properties of the previously defined actions.


• The MAESTRO language [10] has been developed to target ESTEREL (Figure 3b). This language has been designed to be more "natural" for robotic practitioners. At compile time it expands into ESTEREL statements and also performs some preliminary verifications, e.g. of liveness.

In this example the alternation between the two RTs is specified inside a parallel statement (PAR), in which each RT is guarded by a T2 exception emitted by the other RT. As these exceptions are mutually exclusive, the two RTs cannot run simultaneously: this property will in any case be formally verified in the next section.

At compile time the control code of this procedure is translated into an automaton whose output functions are automatically filled with calls to the underlying RTOS, thus alleviating the programmer's burden while preserving formal verification capabilities.

Robotic applications are built incrementally by adding new RTs and RPs whose behaviour, e.g. synchronisation, is encoded in the same way. The next example describes a RP used to synchronise the motions of the underwater vehicle and of its associated arm.

4.3 Logical Behaviour Verification

Figure 4: Structure of a RP synchronising the arm and the vehicle (the VKeepStable procedure: the RTs MovePA10, MoveArm, Inspect, KeepStableUS and KeepStableCam, linked by the signals Prr_Start, Prr_Start_MoveArm, Prr_Start_VKeepStable, T1_Centered, T2_Unstable, T2_Stabilized and Stop)

First, the satisfaction of crucial properties can be checked. Concerning the safety property (any fatal exception must always be correctly handled), the process is as follows: knowing the user's specification defining the fatal exceptions and the associated processing, a criterion is automatically built to define an abstract action. The abstraction of the global procedure automaton with respect to this criterion is then computed. The absence of the "Error" action in the resulting automaton proves that the safety property holds.

The liveness property (the RP always reaches its goal in a nominal execution) is proved in a similar way. The "Success" signal is emitted at the end of every successful achievement of a RP. The abstraction of the procedure automaton, crossed with an adequate criterion built from this signal, must be equivalent by bisimulation to a one-state automaton with a single action, "Success".

Conflict detection: We are interested here in checking that, during the RP evolution, there is no instant at which two different RTs compete for the same resource of the system. Both the physical resource controlled by the RTs (the underwater vehicle) and the software resources used by the controllers (real-time tasks) are considered. For example, one wants to verify that the RTs KEEPSTABLECAMERA and KEEPSTABLEUS never compete to apply different desired force inputs to the vehicle thrusters during the whole RP evolution. The global automaton is reduced to the only relevant signals, Activate... and CmdStopOK.... By clicking on the conflicts button one can then check that these two signals alternate during the RP life, ensuring that a control law can be started only after confirmation that the previous one has stopped, and that the transition is as fast as possible so as to preserve the system's stability (Figure 5).
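The alternation property itself is simple enough to be phrased as a runtime observer. The C++ sketch below is only an illustration of the idea: the event codes and the way the trace is obtained are hypothetical and do not correspond to the ORCCAD or ESTEREL APIs.

    #include <cassert>
    #include <vector>

    // Hypothetical event codes abstracted from the reduced automaton:
    // ACTIVATE stands for any Activate... signal, CMD_STOP_OK for any CmdStopOK...
    enum Event { ACTIVATE, CMD_STOP_OK };

    // Returns true iff the two signals strictly alternate, i.e. a control law
    // is started only after the previous one has confirmed its termination.
    bool alternates(const std::vector<Event>& trace) {
        bool expectActivate = true;              // a run must begin with an activation
        for (Event e : trace) {
            if (e == ACTIVATE) {
                if (!expectActivate) return false;  // two activations in a row: conflict
                expectActivate = false;
            } else {                                 // CMD_STOP_OK
                if (expectActivate) return false;    // stop confirmation with no law running
                expectActivate = true;
            }
        }
        return true;
    }

    int main() {
        std::vector<Event> ok  = {ACTIVATE, CMD_STOP_OK, ACTIVATE, CMD_STOP_OK};
        std::vector<Event> bad = {ACTIVATE, ACTIVATE, CMD_STOP_OK};
        assert(alternates(ok) && !alternates(bad));
    }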

Finally, the conformity of the RP behaviour with respect to the requirements is verified. For example, we want to certify that the vision-servoing and acoustic-servoing RTs can alternate an arbitrary number of times during the life of the stabilisation procedure. This property can be checked by observing the abstract view given by the ORCCAD graphical interface (Figure 6): after minimisation and abstraction through behavioural bisimulation ([6]), the original automaton


Figure 5: Checking for conflicts: a) using explicit automata – b) using observers

(which has 17 states and 168 transitions) becomes small enough to be analysed visually. As expected, the procedure begins with the KeepStableUS RT (arc 1); then the two RTs alternate in a loop, following arcs 2 and 3. The occurrence of the Stop signal is the only way to exit the procedure (arcs 4 and 5). To launch the verification process, the user just has to click on the names of the signals that are relevant for the property to be checked.

5 Implementation

5.1 Switching mechanism

Raising a type 2 exception or a "well-finished" signal requests the embedding RP to smoothly stop the running RT and start the next one, according to its own recovery program. Switching between two RTs must ensure that:

• The control laws are not conflicting, i.e. all the degrees of freedom of the system are controlled, and each only once;

• The boundary conditions of the successive control laws are compliant; in some cases it may be necessary to insert a special-purpose transition RT to gain full control of the transition process [34];

• The configuration of the set of real-time tasks is valid before starting the new control law;

• The switching time is small w.r.t. the plant's time constants. Ideally, the delay during which the system is not controlled should be in the range of one sampling period.

The steps of the switching mechanism have been identified as follows:

• Upon reception of a type 2 exception or postcondition from the running RT, choice of the next one to start and start of the transition phase;

• Initialisation of the new real-time tasks and instantiation of the communication ports. Depending on the processor load, it may be necessary to decrease the sampling rate of the running RT (a degraded mode allowing the system to be kept under control);

• After fulfilment of the preconditions of the second RT, stopping of the first control law and start of the second one. This delay must be minimised to fit within one sampling period;

• Suspension or destruction of the former real-time tasks; a rough sketch of this switching sequence is given below.

Figure 6: An abstract view of KeepStable
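The following C++ skeleton is a minimal sketch of this sequence; all names are invented for the illustration and none of them belongs to the code actually generated by ORCCAD.

    #include <atomic>

    // Hypothetical skeleton of the RT switching sequence described above;
    // the stand-in bodies only mark where the real mechanism acts.
    struct RobotTask {
        std::atomic<bool> ready{false};
        void spawnTasks()   { ready = true; }  // stand-in: create tasks, connect ports
        bool preconditionsMet() const { return ready; }
        void startControlLaw() {}              // stand-in: release the control threads
        void stopControlLaw()  {}              // stand-in: suspend the control threads
        void destroyTasks()    {}              // stand-in: delete tasks, free ports
    };

    // Steps of the switching mechanism, triggered by a type 2 exception
    // or a postcondition of the running RT.
    void switchRobotTasks(RobotTask& current, RobotTask& next) {
        next.spawnTasks();                     // init phase; current RT may run degraded
        while (!next.preconditionsMet()) {}    // current RT still controls the plant
        current.stopControlLaw();              // the stop/start pair must fit within
        next.startControlLaw();                //   about one sampling period
        current.destroyTasks();                // reclaim resources of the former RT
    }

    int main() { RobotTask a, b; a.spawnTasks(); a.startControlLaw(); switchRobotTasks(a, b); }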

In practice, it appears that such a complex mechanism is very difficult to set up and to port to different targets. Indeed, depending on the properties of the control laws and plant in use, the fully-fledged smooth transition protocol is not always necessary; in particular, a simpler transition procedure has been implemented and successfully experimented with in the framework of vision-based tasks ([2]).

5.2 Design and code generation toolset

Figure 7 depicts the compilation process and the integrated tools. The ORCCAD structures are instantiated as C++ classes (or C structures) using virtual system calls provided by the kernel. For a single-processor implementation, all the ESTEREL files corresponding to the logical behaviour of the RTs and RPs are gathered (i.e. put in parallel) in a single file, which can be further compiled into various formats. The SC format (sorted boolean equations) is post-processed in C and encapsulated in a high-priority real-time thread.

The application is finally compiled and linked with run-time libraries to make the downloadable code. Currently ORCCAD targets VxWorks and RTAI as hard real-time systems and Solaris and Linux as soft real-time systems (using Posix threads). Note that a real-time simulator running on the same target can easily be built using the same code generation toolset: this is simply done by calling a numerical integrator (itself calling a model of the robotic system) in the drivers of the Physical Resource [39]. However, running the full simulation in real time requires an oversized processor to get enough computing power. Various dedicated run-time interfaces are also provided to monitor the execution of the controller.

Figure 7: Code generation and associated tools (overall flow: module, RT and RP specifications entered through the graphical interfaces produce C files for the control laws and Esterel files for the RT behaviours and the mission specification; real-time analysis relies on Petri nets and the (max,plus) algebra; the Esterel compiler produces automata (SC format) used for XES simulation, source code animation and the checking of properties with observers for diagnosis; the C/C++ code generator, virtual OrcObjects and compilation with run-time libraries yield binaries for VxWorks, RTAI/Linux, Linux/Posix and Solaris, together with monitoring, parameterization and exploitation interfaces and a simulator built on robot modelling (ODE) and a numerical integrator)


5.3 Real-time structures

At compile time, the ESTEREL source file is translated into an automaton encoded in C. Input and output functions are associated with it to allow the automaton to receive and emit signals. These functions are used to interface the synchronous reactive program with the asynchronous execution environment, i.e. the operating system. As the compiler knows nothing about the environment, the output functions are empty and must be filled in by the user to make the program effective. Moreover, as ESTEREL is unable to perform numerical computations, all calculations related to the execution of control algorithms are called as external procedures. It is thus necessary to design an execution machine [1], the general structure of which is given by Figure 8a. Signals are collected to build the current event, which is given to the automaton. Output actions calling external calculation procedures must be carefully interfaced with the RTOS. Writing this execution machine by hand would be tricky and error-prone.
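To fix ideas, the C++ sketch below shows a toy two-state automaton in the style of the KEEPSTABLE alternation, with output functions standing in for the RTOS calls the execution machine must supply; the names and the encoding are ours, not the C code emitted by the ESTEREL compiler.

    #include <cstdio>

    // Illustrative two-state automaton with output hooks to be filled in by
    // the execution machine; all identifiers are invented for this example.
    enum State { US_STABILISATION, CAM_STABILISATION };
    enum Input { T2_STABILIZED, T2_UNSTABLE_CAM };

    // Output functions: left empty by the compiler, filled by the execution
    // machine with RTOS calls (spawn / suspend / resume of control threads).
    static void startKeepStableCam() { std::puts("resume KeepStableCam tasks"); }
    static void startKeepStableUS()  { std::puts("resume KeepStableUS tasks");  }

    // One synchronous reaction: consume the current input, emit outputs.
    State step(State s, Input in) {
        if (s == US_STABILISATION && in == T2_STABILIZED)    { startKeepStableCam(); return CAM_STABILISATION; }
        if (s == CAM_STABILISATION && in == T2_UNSTABLE_CAM) { startKeepStableUS();  return US_STABILISATION; }
        return s;  // other inputs are ignored in this state
    }

    int main() {
        State s = US_STABILISATION;
        s = step(s, T2_STABILIZED);      // camera lock acquired: switch to vision
        s = step(s, T2_UNSTABLE_CAM);    // target lost: fall back to the sounders
    }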

Figure 8: a) Structure and b) Implementation of the execution machine (a synchronous reactive machine — input processing and an event generator building the current input signals for the automaton, output processing triggering synchronous actions, information storage — coupled through an asynchronous interface to an execution manager, clock generation, observers and a FIFO manager driving, via logical and control signals and semaphores, the Robot-Procedure and its Robot-Tasks with their parameters, timers and controllers)

Thanks to the ORCCAD structures, this machine can be automatically generated by the system (Figure 8b). A main "system" task sets up the whole system and in particular generates the clocks used to trigger the periodic calculation modules. Real-time threads are made periodic by blocking their first input port on a semaphore which is released by clock ticks.
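A minimal POSIX rendition of this triggering scheme could look as follows; it is an illustration of the principle (with an arbitrary 10 ms period), not the code generated by ORCCAD.

    #include <pthread.h>
    #include <semaphore.h>
    #include <time.h>

    // The clock-generation task releases a semaphore at each tick; the
    // periodic control thread blocks on it, mimicking a blocked input port.
    static sem_t tick;

    static void* controlThread(void*) {
        for (;;) {
            sem_wait(&tick);   // "first input port" blocked until the clock tick
            /* run one step of the discretized control law here */
        }
        return nullptr;
    }

    int main() {
        sem_init(&tick, 0, 0);
        pthread_t th;
        pthread_create(&th, nullptr, controlThread, nullptr);

        timespec period = {0, 10'000'000};   // 10 ms sampling period (example value)
        for (int k = 0; k < 100; ++k) {      // stand-in for the clock-generation task
            nanosleep(&period, nullptr);
            sem_post(&tick);                 // release the periodic module
        }
        return 0;
    }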

The automaton is the highest-priority task: it is awakened by the occurrence of input signals related to preconditions, exceptions and postconditions. In reaction, it tells the RTOS which modules must be spawned, resumed or suspended. Although this automaton is crucial for a safe and successful behaviour of the application, it spends most of its time waiting for input events during the periodic execution of the control algorithms managed by the RTOS. Typically, a transition of the automaton takes a few microseconds, while the duration of robotic actions may range from seconds (e.g. manipulation tasks for a robot arm) to hours (e.g. a mapping mission for an autonomous underwater vehicle).

6 Related work

Many robot controller software frameworks inspired by Component-Based Software Engineering have been developed (Claraty [30], Cosarc [32], Oscar [22], Orca [7], Orocos [8], HRPOpen [23]). All these approaches come from research teams, are often dedicated to a single robot type (mobile robot, humanoid robot or robotic arm), and have not been adopted by a large community. To tackle the complexity of robot control software, it is mandatory to pool resources and efforts. However, although ORCCAD has been primarily developed and used for robot control purposes, it belongs to a more general family of model-driven design methods and tools for computer-controlled systems [11], [15].


Some other layered approaches to robot control show architectural similarities with ours. In the field of synchronous programming applied to robotics, let us cite [28], where data and control flows are gathered using the SIGNAL synchronous data-flow language. Using the same formalism for numerical calculations and tasking management allows the temporal coherency of data in the whole application to be checked. However, while SIGNAL is well suited to specifying signal processing algorithms, long computations like the explicit dynamics of a robot arm must be done by external functions written in a host language like C.

ControlShell (now Constellation) [35] proposes an architecture quite similar to ORCCAD. At the level of control actions it allows the construction of models of components such as PIDs or trajectory generators, which can be gathered through a block-diagram-oriented GUI. Event-driven aspects are handled by hand-made (and thus quite simple) Finite State Machines, for which no formal verification methods are provided.

Besides this commercial offer, the OROCOS project aims at building open-source software for robot control, from the lowest-level real-time servo control, through intelligent sensor processing, to high-level human-machine interfacing. The emphasis is on building components, not on designing architectures [8]. However, the proposed framework, ranging from control design to real-time code generation, also enforces the separation of data flow (periodic control-law computation) from execution flow controlled via Finite State Machines.

7 Summary and perspectives

In this paper we have described the ORCCAD approach to formalising control structures in controllers for robotic and other embedded systems. In this approach, the Robot-Task, where a discretized control law is encapsulated in a logical behaviour, is a hybrid structure at the frontier between continuous-time and discrete-time aspects. The formalisation of these structures allows for the formal verification of programs and the automatic generation of downloadable code.

In such control systems, the interleaving of events and activities in a real-life mission motivates our hierarchical design and verification process, where complex actions are designed from already validated basic blocks lying at a lower level. The ESTEREL synchronous language was found to be effective for specifying and encoding the coordination of actions, from the low-level logical management of real-time tasks up to the application layer. To further ease the specification process, ORCCAD provides friendly user interfaces:

• Starting from the specifications given through the ORCCAD GUI, the user is provided with control-oriented predefined structures. Automatic code generation allows him to take advantage of real-time tools and synchronous languages with a moderate programming effort. These structures are also a key to a partial automation of the verification process.

• The ESTEREL compiler does not provide the interface functions with the environment. To avoid messy and error-prone low-level interfacing work, ORCCAD automatically generates the execution machine managing both the synchronous automaton and the asynchronous real-time threads. Thus the user can concentrate on application specification and validation rather than on low-level programming tricks.

• However, compiling reactive programs into explicit automata is subject to size explosion, and behavioural verification through visual inspection of automata becomes impossible for industrial-size problems, even after reduction and bisimulation. An ongoing work consists in identifying classes of relevant properties to be encoded in generic observers: after parameterisation, these properties can be checked by the compiler itself (e.g. Figure 5b). A dual approach consists in using a discrete-event controller synthesis algorithm to build a safe-by-construction controller from the desirable properties, e.g. from a fault-tolerance specification.

ORCCAD has been used in laboratory applications such as programming the BIP 2000 biped control laws [3], real-time simulation of a teleoperation platform [39], high-level programming of an underwater robotic system [25] and feedback scheduling experiments [40].

The ORCCAD concepts and tools have been selected to support the development of a set of ESA projects. In particular, in the MUROCO (Multiple Robot Controller [18]) project, the ORCCAD structuring of Events, Actions and Tasks, as well as the use of ESTEREL as a mission specification language, are exploited in a dedicated tool developed by TRASYS Space. Up to 63 actions are composed into twelve tasks to specify the full ExoMars mission, ranging from 'deployment' to 'travelling' and 'experiment cycle' activities. In the ongoing VIMANCO project [19], TRASYS, in cooperation with INRIA and KUL, aims at improving the autonomy, safety and robustness of space robotic systems using vision. The developments include the implementation of a Vision Software Library enabling vision control for space robots, for which the ORCCAD controller is foreseen for the real-time execution of the vision-based tasks.



From a software engineering perspective, the ORCCAD approach allows robot software controllers to be designed using existing theory (control and discrete-event theories) and software tools based on technologies already used in the embedded and real-time domains (RTOS, object-oriented languages, synchronous languages, ...). Starting from existing paradigms, our goal is to provide an effective software framework and associated tools well suited to robotic applications and, more generally, to control and real-time co-design.

We think that the current ORCCAD approach is still valid but that the software tools must be upgraded and constantly updated. The current release has not been re-engineered for eight years, although many partial improvements (e.g. real-time multitasking) have been developed and tested but not cleanly integrated.

The ORCCAD tools must evolve according to the improvements of software engineering, such as Model Driven Architecture (MDA) (e.g. [15] for computer-based control) or synchronous languages [12], [13], [9]. However, this necessary evolution can still be based on existing design and control architectures dedicated to the application field.

In the middleware domain, the use of modern software engineering has led to the emergence of a large community willing to adopt standards (see, e.g., the success of the ObjectWeb consortium [21]): using such tools has drastically increased software productivity. Among other initiatives, the recently created robotics group of the Object Management Group (OMG [16]) aims at elaborating OMG standards (UML and CORBA are OMG standards, for example) to ease the integration of robotic systems using modular components.

The work on robotics standardisation must be done in connection with the embedded and real-time community. In particular, this community is involved in several collaborative projects to adapt UML and/or MDA standards taking performance and real-time constraints into account (e.g. [27], [20], [17], [14]). Designing a common architecture based on such standards, adopted by a large community, could now be the cornerstone for efficiently sharing robot control software development and practice.

References

[1] C. André, A. Ressouche, and J.M. Tanzi. Combining special purpose and general purpose languages in real-time programming. In IEEE Workshop on Programming Languages for Real-Time Industrial Applications, Madrid, 1998.

[2] S. Arias. Formalisation et intégration en vision par ordinateur temps réel. PhD thesis, Université de Nice Sophia Antipolis, 1999.

[3] C. Azevedo, N. Andreff, and S. Arias. Bipedal walking: from gait design to experimental analysis. Mechatronics, 14(6):639–665, 2004.

[4] G. Berry. The Esterel v5 language primer. Technical report, http://www-sop.inria.fr/esterel.org/, 2000.

[5] J.J. Borrelly, E. Coste-Manière, B. Espiau, K. Kapellos, R. Pissard-Gibollet, D. Simon, and N. Turro. The ORCCAD architecture. Int. Journal of Robotics Research, 17(4):338–359, April 1998.

[6] A. Bouali and R. de Simone. Symbolic bisimulation minimisation. In Computer Aided Verification, number 663 in Lecture Notes in Computer Science. Springer, 1993.

[7] A. Brooks, T. Kaupp, A. Makarenko, S. Williams, and A. Oreback. Towards component-based robotics. In Intelligent Robots and Systems (IROS), pages 163–168, August 2005. http://orca-robotics.sourceforge.net/.

[8] H. Bruyninckx. Open robot control software: the OROCOS project. In IEEE International Conference on Robotics and Automation (ICRA), Seoul, May 2001. http://orocos.org/.

[9] J-L. Colaço, B. Pagano, and M. Pouzet. A conservative extension of synchronous data-flow with state machines. In ACM International Conference on Embedded Software (EMSOFT'05), Jersey City, New Jersey, September 2005.

[10] E. Coste-Manière and N. Turro. The MAESTRO language and its environment: specification, validation and control of robotic missions. In 10th IEEE/RSJ International Conference on Intelligent Robots and Systems, Grenoble, 1997.

[11] J. El-Khoury, D. Chen, and M. Törngren. A survey of modelling approaches for embedded computer control systems (version 2.0). Technical Report TRITA-MMK 2003:36, ISSN 1400-1179, ISRN KTH/MMK/R-03/11-SE, Royal Institute of Technology (KTH), Stockholm, Sweden, 2003.

[12] D. Garlan. Software architecture: a roadmap. In Conference on the Future of Software Engineering, pages 91–101, Limerick, Ireland, June 2000.

[13] D. Garlan and D. Perry. Software architecture: practice, potential, and pitfalls. In 16th International Conference on Software Engineering, 1994.

[14] S. Gérard, N. Voros, C. Koulamas, and F. Terrier. Efficient system modeling for complex real-time industrial networks using the ACCORD/UML methodology. In International Workshop on Distributed and Parallel Embedded Systems (DIPES), October 2000.

[15] D. Henriksson, O. Redell, J. El-Khoury, M. Törngren, and K-E. Årzén. Tools for real-time control systems co-design — a survey. Technical Report ISRN LUTFD2/TFRT--7612--SE, Department of Automatic Control, Lund Institute of Technology, Sweden, April 2005.

[16] http://robotics.omg.org/.

[17] http://www.carroll-research.org.

[18] http://www.esa.int/esaMI/Aurora/SEMQA7A5QCE_0.html.

[19] http://www.irisa.fr/lagadic/actions-internationales-fra.html.

[20] http://www.ist-compare.org/.

[21] http://www.objectweb.org/.

[22] http://www.robotics.utexas.edu/rrg/research/oscarv.2/.

[23] F. Kanehiro, H. Hirukawa, and S. Kajita. OpenHRP: Open Architecture Humanoid Robotics Platform. International Journal of Robotics Research, 23(2):155–165, 2004.

[24] K. Kapellos. Environnement de programmation des applications robotiques réactives. PhD thesis, Ecole des Mines de Paris, Sophia Antipolis, 1994.

[25] K. Kapellos, D. Simon, S. Granier, and V. Rigaud. Distributed control of a free-floating underwater manipulation system. In 5th Int. Symp. on Experimental Robotics, Barcelona, 1997.

[26] K. Kapellos, D. Simon, M. Jourdan, and B. Espiau. Task-level specification and formal verification of robotics control systems: state of the art and case study. Int. Journal of Systems Science, 30(11):1227–1245, 1999.

[27] F. Loiret and D. Servat. Insights on real-time systems architecture modelling for a software engineering viewpoint. In 17th Euromicro Conference on Real-Time Systems (ECRTS05), work-in-progress session, Palma de Mallorca, June 2005.

[28] E. Marchand, E. Rutten, H. Marchand, and F. Chaumette. Specifying and verifying active vision-based robotic systems with the SIGNAL environment. Int. Journal of Robotics Research, 17(4):418–432, 1998.

[29] L. Matthies, E. Gat, R. Harrison, B. Wilcox, R. Volpe, and T. Litwin. Mars microrover navigation: performance evaluation and enhancement. Autonomous Robots, 2(4):291–311, 1995.

[30] A. Nesnas, R. Simmons, D. Gaines, C. Kunz, A. Diaz-Calderon, T. Estlin, R. Madison, J. Guineau, M. McHenry, I. Shu, and D. Apfelbaum. CLARAty: challenges and steps toward reusable robotic software. Submitted to International Journal of Advanced Robotic Systems, 2006. http://keuka.jpl.nasa.gov/main/.

[31] A. Pascoal. The AUV MARIUS: mission scenarios, vehicle design, construction and testing. In 2nd IARP Workshop on Underwater Robotics, Monterey, CA, May 1994.

[32] R. Passama and D. Andreu. CoSARC: component-based software architecture of robot controllers. In 1st National Workshop on Control Architecture of Robots: Software Approaches and Issues, Montpellier, 2006.

[33] C. Samson, M. Le Borgne, and B. Espiau. Robot Control: the Task-Function Approach. Oxford Science Publications. Clarendon Press, 1991.

[34] A. Santos, B. Espiau, P. Rives, D. Simon, and V. Rigaud. Sensor-based control of holonomic autonomous underwater vehicles. Research Report 2609, INRIA, July 1995.

[35] S. Schneider, V. Chen, G. Pardo-Castellote, and H. Wang. ControlShell: a software architecture for complex electromechanical systems. Int. Journal of Robotics Research, 17(4):360–380, 1998.

[36] D. Simon and F. Benattar. Design of real-time periodic control systems through synchronisation and fixed priorities. Int. Journal of Systems Science, 36(2):57–76, 2005.

[37] D. Simon, E. Castillo, and P. Freedman. Design and analysis of synchronization for real-time closed-loop control in robotics. IEEE Trans. on Control Systems Technology, 6(4):445–461, July 1998.

[38] D. Simon, K. Kapellos, and B. Espiau. Control laws, tasks and procedures with ORCCAD: application to the control of an underwater arm. Int. Journal of Systems Science, 17(10):1081–1098, 1998.

[39] D. Simon, M. Personnaz, and R. Horaud. Teledimos telepresence simulation platform for civil work machines: real-time simulation and 3D vision reconstruction. In IARP Workshop on Advances in Robotics for Mining and Underground Applications, Brisbane, Australia, October 2000.

[40] D. Simon, D. Robert, and O. Sename. Robust control/scheduling co-design: application to robot control. In RTAS'05 IEEE Real-Time and Embedded Technology and Applications Symposium, pages 118–127, San Francisco, March 2005.


Overview of a new Robot Controller Development Methodology

R. Passama 1,2, D. Andreu 1, C. Dony 2, T. Libourel 2

1 Robotics Department
2 Computer Sciences Department
LIRMM, 161 rue Ada, 34392 Montpellier, France
E-mail: {passama, andreu, dony, libourel}@lirmm.fr

Abstract - This paper presents a methodology for the development of robot software controllers, based on current software component approaches and on robot control architectures. The methodology defines a process that guides developers from the analysis of a robot controller to its execution. A proposed control architecture pattern and a dedicated component-based language, focusing on the modularity, reusability, scalability and upgradeability of controller architecture parts during the design and implementation steps, are presented. Finally, language implementation issues are discussed.

Keywords: Software Components, Control Architecture, Integration, Reuse, Object Petri Nets.

I. INTRODUCTION

Robots are complex systems whose complexity is continuously increasing as more and more intelligence (decisional and operational autonomy, human-machine interaction, robot cooperation, etc.) is embedded into their controllers. This complexity also depends, of course, on the mechanical portion of the robot that the controller has to deal with, ranging from simple vehicles to complex humanoid robots. Robot controller development platforms and their underlying methodologies are of great importance for laboratories and IT companies, because there is an increasing interest in future service robotics. Such platforms help developers in many of their activities (modelling, programming, model analysis, test and simulation) and should take into account concerns like the reuse of software pieces and the modularity of control architectures, as these correspond to two major issues.

The goal of our team is to provide a robot controller development methodology and its dedicated tools, in order to help developers overcome problems during all steps of the design process. We therefore investigate the creation of a software paradigm that specifically addresses controller development concerns. From the study of the robot control architectures presented in the literature, we identified four main practices in control architecture design that must be considered.

The first practice is the structuring of the control activities, for which there are different approaches. One approach consists in decomposing the control architecture into hierarchical layers, as in the LAAS architecture [ALA, 98], 4D/RCS [ALB, 02], ORCCAD [BOR, 98] and some others. Each layer within the robot controller has a "decision-making system", as each layer only ensures part of the control (from low-level control to planning). Such a decomposition impacts the reactivity of the robot controller: the lower the layer, the tighter the time constraints on its execution; the higher the layer, the higher the priority of its reaction. This hierarchical approach has been extended to hybrid architectures. For instance, AURA [ARK, 97] proposes to mix it with a behavioural approach in order to improve reactivity. In doing so, the interaction scheme is not limited to interactions between adjacent layers (for reactivity purposes): some data can be simultaneously available to several layers, an event can be directly notified to the upper layers (without passing through intermediary ones), etc. In behavioural approaches, like the subsumption architecture [BRO, 86] or


similar ones (AURA for example), interactions between basic behaviours are complex, even if those interactions are not explicit and are taken into account by means of an "external" entity (for instance, the entity that computes the weighting of the commands generated by the individual low-level behaviours).

The second practice is the decomposition of the control architecture into sub-systems that incorporate the control of specific parts of a robotic system. This practice is reified in the IDEA agent architectures [MUS, 02] and the Chimera development methodology [STE, 96]. This organizational view is orthogonal to the hierarchical one: each sub-system can incorporate both reactive and 'long-term' decision-making activities, and so can itself be "layered".

The third practice is to separate, in the architecture description, the description of the "robot operative portion" from that of "control and decision-making". This practice is often adopted at the implementation phase, except in specific architectures like CLARATY [VOL, 01], in which the "real world" description is made by means of object hierarchies. These two portions of a robot, its mechanical portion (including its sensors and actuators) and its control portion, are intrinsically interdependent. Nevertheless, for reasons of reusability and upgradeability, the controller design should separate, as far as possible, two aspects: the functionalities that are expected from the robot on the one hand and, on the other, the representation of both the mechanical part that implements them and the environment with which it interacts. One current limitation in the development of robot software controllers is the difficulty of integrating different functionalities, potentially originating from different teams (laboratories), into the same controller, as they are often designed and developed specifically for a given robot (i.e. for a given mechanical part). Hence, upgradeability and reusability are aims that are currently almost impossible to achieve, since both aspects of the robot (control and mechanical descriptions) are tightly merged. The reuse of parts of decision-making/control systems is also a big challenge, because of the different approaches (behavioural or hierarchical) that can be used to design them.

Finally, the fourth practice is to use notations to describe the controller's parts and to formalize their interactions. Model-based specifications are coupled with formal analysis techniques in order to follow a "quality-oriented" design process. The verification of properties like invariants, or the search for deadlock-free interactions, are examples of the benefits of such a process.

A robotic development methodology and its platform should propose a way to develop a control architecture using all of these practices, as they correspond to complementary concerns. We identified five different concerns: description of the real world, description of the control (in the following, this term will include decision-making, action, perception, etc.), description of interactions, description of the layers of the hierarchy, and description of sub-systems. The software component paradigm [SZY, 99] helps dealing with these concerns in many ways (separation of protocol descriptions from computation descriptions, deployment management, etc.). Component-based approaches propose techniques to support easy reuse and integration, and they sometimes rely on formal languages to describe complex behaviours and interactions (aiming at improving the quality of the design), like architecture description languages [MED, 97] for example.

In the following sections, we present the CoSARC (Component-based Software Architecture of Robot Controllers) development methodology, based on current component models. It defines a process that guides developers during the analysis, design, implementation and deployment phases. It is based on two concepts: a control architecture pattern for analysis, presented in section II, and a component-based language, presented in section III. It integrates the management of robot controller concerns and takes current practices into account by promoting the use of Object Petri Nets. The


component execution and deployment model is shown in section IV. The paper concludes by describing ongoing work on, and perspectives of, the CoSARC methodology.

II. CONTROL ARCHITECTURE PATTERN

The CoSARC methodology provides a generic view of robot control architecture design by means of an architecture pattern. The proposed pattern is adaptable to a large set of hybrid architectures. It provides the developers with a conceptual framework useful for controller analysis. The analysis phase is an important stage because it allows all the entities involved in the actions/reactions of the controller (i.e. the robot behaviour), and the interactions between them, to be outlined. It is carried out by following the concepts and organization described in the pattern. It takes into account a description of the robot controller that depends on the robot's physical portion (operative portion), to make the analysis more intuitive. The pattern also addresses design matters, by defining the properties of the layers of the hierarchy and the matching between layers and entities.

The central abstraction in the architecture pattern is the Resource. A resource is the part of the robot's intelligence that is responsible for the control of a given set of independently controllable physical elements. For instance, consider a mobile manipulator robot consisting of a mechanical arm (manipulator) and a vehicle. It is possible to abstract at least two resources: the ManipulatorResource, which controls the mechanical arm, and the MobileResource, which controls the vehicle. Depending on the developer's choices or needs, a third resource coupling all the different physical elements of the robot can also be considered, the Mobile-ManipulatorResource. This resource is then in charge of the control of all the degrees of freedom of the vehicle and the mechanical arm (the robot is thus considered as a whole). The breaking down of the robot's intelligence into resources mainly depends on three factors: the robot's physical elements, the functionalities that the robot must provide, and the means developers have to implement those functionalities with this operative part.

A resource (cf. Fig. 1) corresponds to a sub-architecture decomposed into a set of hierarchically organised interacting entities. Presented from bottom to top, they are:

• A set of Commands. A command is in charge of the periodic generation of command data for actuators, according to given higher-level instructions (often set-points) and sensor data. Commands encapsulate control laws. The actuators concerned belong to the set of physical elements controlled by the resource. An example of a command of the ManipulatorResource is the JointSpacePositionCommand (based on a joint-space position control law that is not sensitive to singularities, i.e. singular positions linked to the alignment of some axes of the arm).

• A set of Perceptions. A perception is responsible for the periodic transformation of sensor data into, potentially, more abstract data. An example of a perception of the ManipulatorResource is the ArmConfigurationPerception, which generates the data representing the configuration of the mechanical arm in the task space from joint-space data (by means of the direct geometric model of the arm).

• A set of Event Generators. An event generator ensures the detection of predefined events (exteroceptive or proprioceptive phenomena) and their notification to higher-level entities. An example of an event generator of the ManipulatorResource is the SingularityGenerator; it is able to detect, for instance, the vicinity of a singularity (by means of a 'singularity model', i.e. a set of equations describing the singular configurations).

• A set of Actions. An action represents an (atomic) activity that the resource can carry out. An action is in charge of the commutation and reconfiguration of commands. An example of an action of the ManipulatorResource is the ManipulatorContactSearchAction, which uses a set of commands including the ManipulatorImpedanceCommand. This command is based on an impedance control law (allowing a spring-damper-like behaviour). In a more "behaviour-oriented" design, an action could activate and deactivate sets of commands and manage the summing and weighting of the command data they send to I/O controllers.

Figure 1: Control architecture pattern (UML-like syntax), and properties of the layers (a Robot Controller comprises a Mission Manager, a Global Supervisor, Event Generators and one or more Resources; each Resource contains a Resource Supervisor, Modes, Actions, Commands, Perceptions and Event Generators above the Input/Output Controllers; reaction priority increases from the bottom to the top of the hierarchy, while the time constraints and the reactivity of the control loops increase from top to bottom)

• A set of Modes. Each mode describes a behaviour of the resource and defines the set of orders the resource is able to perform in that mode. For example, the MobileResource has two modes: the MobileTeleoperationMode, in which the human operator can directly control the vehicle (low-level teleoperation, for which obstacle avoidance is ensured), and the MobileAutonomousMode, in which the resource is able to accomplish high-level orders (e.g. 'go to position'). A mode is responsible for breaking orders down into a sequence of actions, as well as for the scheduling and synchronization of these actions.

• A Resource Supervisor is the entity in charge of the mode commutation strategy, which depends on the current context of execution, the context being defined by the state of the corresponding operative portion, the state of the environment and the orders to be performed. A robot control architecture consists of a set of resources (Fig. 1). The Global Supervisor of a robot controller is responsible for the management of the resources according to the orders sent by the operator and to the events and data respectively produced by event generators and perceptions. Event generators and perceptions not belonging to a resource refer to physical elements not contained in any resource. In the given example, we use such resource-independent event generators to notify, for instance, 'low battery level' and 'loss of WiFi connection' events to some resources as well as to the global supervisor. The lowest level of the hierarchical decomposition of a robot controller is composed of a set of Input/Output Controllers. These I/O controllers are in charge of the periodic updating of sensor and actuator data. Commands, event generators and perceptions interact with I/O controllers in order to obtain sensor data, and commands use them to set actuator values. Other upper-layer entities, like actions for instance, can directly interact with I/O controllers to configure their activities (if necessary).


The organization inside resources and the robot controller follows a hierarchical approach. Each layer represents a "level of control and decision" in the controller's activities. The upper layer incorporates entities embedding complex decision-making mechanisms, like modes, supervisors and mission managers. The intermediate layer incorporates entities like control schemes (commands), observer modules (event generators, perceptions) and reflex adaptation activities (inside actions). The lowest layer (I/O controllers) interfaces the upper layers with sensors, actuators and external communication peripherals, and helps standardize data exchanges. The semantics of the layer hierarchy is based on the "control" relationship: a given layer controls the activities of the lower layers. Two design properties emerge from this hierarchical organization. The first is that upper layers must have a higher priority of reaction than lower layers, because their decisions are more important for the system at a global scope. The second is that lower layers have tighter temporal constraints to respect, because they contain reflex and periodic activities. Managing these properties together is very important for the "real-time" aspect of the control architecture and has to be considered in our proposal.

III. COMPONENT-BASED LANGUAGE

A. General concepts

The CoSARC language is devoted to the design and implementation of robot controller architectures. This language draws on existing software component technologies such as Fractal [BRU, 02] or CCM [OMG, 01] and on architecture description languages such as Meta-H [BIN, 96] or ArchJava [ALD, 03]. It proposes a set of structures to describe the architecture in terms of a composition of cooperating software components. A software component is a reusable entity subject to "late composition": the assembly of components is defined not at 'component development time' but at 'architecture description time'.

The main features of components in the CoSARC language are internal properties, ports, interfaces and connections. A component encapsulates internal properties (such as operations and data) that define the component's implementation. A component's port is a point of connection with other components. A port is typed by an interface, which is a contract containing the declaration of a set of services. If a port is 'required', the component uses one or more services declared in the interface typing the port. If a port is 'provided', the component offers the services declared in the interface typing the port. All required ports must always be connected, whereas this is unnecessary for provided ones. The internal properties of a component implement the services and service calls defined in the interfaces typing each of its ports. Connections are explicit architecture description entities, used to connect 'required' ports with 'provided' ones. When a connection is established, the compatibility of the interfaces is checked, to ensure the consistency of the port connection.
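As a rough illustration of this vocabulary, the C++ sketch below renders ports, interfaces and connections as plain data structures; the types are invented for the example (CoSARC is a dedicated language, not a C++ library), and only the interface name is borrowed from Fig. 2.

    #include <cassert>
    #include <string>

    struct Interface { std::string name; };   // a contract declaring services

    struct Port {
        const Interface* type;                 // the interface typing this port
        bool provided;                         // provided vs. required
        Port* peer = nullptr;                  // set when a connection is established
    };

    // A connection links a required port to a provided one and checks
    // interface compatibility, ensuring the consistency of the assembly.
    bool connect(Port& required, Port& provided) {
        if (required.provided || !provided.provided) return false;
        if (required.type->name != provided.type->name) return false;  // contract check
        required.peer = &provided;
        provided.peer = &required;
        return true;
    }

    int main() {
        Interface dynProps{"VehicleDynamicPropertiesAccess"};
        Port lawNeedsVehicle{&dynProps, false};   // required port of a control law
        Port vehicleOffers{&dynProps, true};      // provided port of the vehicle model
        assert(connect(lawNeedsVehicle, vehicleOffers));
    }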

The component composition mechanism (by means of connections between ports) supports the "late composition" paradigm. The first step when using a component-based language is to separate the definition of components from the description of the software architecture (i.e. their composition). Components are independently defined/programmed and made available on a 'shelf of components'. According to the software architecture to be described, components are selected and composed (i.e. their ports are connected by means of connections). The advantages of such a composition paradigm are improved reusability of components (because they are more independent from each other than objects) and improved modularity of architectures (the possibility of changing components and/or connections). Obviously, the reuse of components is influenced by the standardization of the interfaces typing their ports (which defines the compatibility, and thus the composability, of components), but this is out of the scope of this paper.


In the CoSARC language, there are four types of components: Representation Components, Control Components, Connectors and Configurations. Each of them addresses a specific concern of controller architecture design and implementation. We present the specificities of these types in the following sub-sections.

B. Representation Components

This type of component is used to describe a robot's "knowledge" regarding its operative part, its mission and its environment. Representation components are used to address the "real-world modelling" concern, but their use can be extended to whatever the developer considers to be the knowledge of the robot. They can represent concrete entities, such as those relating to the robot's physical elements (e.g. the chassis and wheels of a vehicle) or elements of its environment. They can also represent abstract entities, such as events, sensor/actuator data, mission orders, and control or perception computational models (in this context, those models are mathematical models that describe how to compute a set of outputs from a given set of inputs, such as control laws and observers). When a developer wants to express that a specific model is applied to a specific (operative) part of the robot, he just has to connect the two corresponding representation components: the one describing the computational model with the one describing the operative part. For example, Fig. 2 illustrates how to apply a control law to a given vehicle.

Representation components are 'passive' entities that only act when one of their provided services is called. They interact only according to a synchronous communication model. Internally, representation components consist of object-like attributes and operations. Operations implement the services declared in provided ports, and they use services declared in the interfaces of required ports. Representation components are incorporated and/or exchanged by components of other types, such as control components and connectors. Representation components can also be composed with each other when they require each other's services. Indeed, a representation component offers a set of provided ports that allows other representation components to get the value of its "static" physical properties (wheel diameter, frame width, etc.) and/or to set/get the current value of its "dynamic" properties (velocity and orientation of wheels, etc.).

Figure 2: Example of two connected representation components (the VehiclePositionControlLaw component connected to the Vehicle component through ports typed by the VehiclePhysicalPropertiesConsultation, VehicleDynamicPropertiesAccess and VehicleActuatorsValueComputation interfaces)

Fig. 2 shows a simple example of composition. The representation component called VehiclePositionControlLaw consists of:

• a provided port, typed by the VehicleActuatorsValueComputation interface, through which another component (a representation or a control component) can ask for the computation of the value to be applied to the actuators;

• two required ports, the first typed by the VehiclePhysicalPropertiesConsultation interface and the second by the VehicleDynamicPropertiesAccess interface. These interfaces are


necessary for the computation, as some parameters of the model depend on the vehicle to which the corresponding law is applied. The corresponding ports are provided by the representation component Vehicle. VehiclePositionControlLaw and Vehicle are thus composed by connecting the two required ports of VehiclePositionControlLaw with the two corresponding provided ports of Vehicle.
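As an illustration, the composition of Fig. 2 could be rendered in Java along the following lines (hypothetical names and signatures, not the CoSARC implementation):

    // Interfaces typing the ports of Fig. 2 (signatures are assumptions).
    interface VehiclePhysicalPropertiesConsultation { double getWheelDiameter(); }
    interface VehicleDynamicPropertiesAccess { double getWheelVelocity(); double getWheelOrientation(); }
    interface VehicleActuatorsValueComputation { double[] computeActuatorValues(double[] setpoint); }

    // The Vehicle representation component provides the two consulted interfaces.
    class Vehicle implements VehiclePhysicalPropertiesConsultation, VehicleDynamicPropertiesAccess {
        public double getWheelDiameter()    { return 0.30; } // "static" physical property
        public double getWheelVelocity()    { return 0.0; }  // "dynamic" properties
        public double getWheelOrientation() { return 0.0; }
    }

    // The control law requires the Vehicle's services and provides the computation.
    class VehiclePositionControlLaw implements VehicleActuatorsValueComputation {
        private final VehiclePhysicalPropertiesConsultation physical; // required port
        private final VehicleDynamicPropertiesAccess dynamic;         // required port
        VehiclePositionControlLaw(Vehicle v) { this.physical = v; this.dynamic = v; }
        public double[] computeActuatorValues(double[] setpoint) {
            // a trivial proportional law, purely illustrative
            double error = setpoint[0] - dynamic.getWheelVelocity();
            return new double[] { error / physical.getWheelDiameter() };
        }
    }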

C. Control Components

A Control Component describes a part of the control activities of a robot controller. It can represent various entities of the controller, since we decompose the controller into a set of interconnected entities (all being components), for example: Command (i.e. an entity that executes a control law or a control sequence), Perception (i.e. an entity in charge of sensor signal analysis, estimation, etc.), Event Generator (i.e. an entity that monitors event occurrences), Mode Supervisor (i.e. an entity that pilots the use of a physical resource in a given mode, such as teleoperation, autonomous or cooperation mode), Mission Manager (i.e. an entity that manages the execution of a given mission), etc. A control component incorporates and manages a set of representation components, which define the knowledge it uses to determine the contextual state and to make its decisions.

Control components are 'active' entities. They can have one or more (potentially parallel) activities, and they can send messages to other control components (communication is detailed further on). The internal properties of a control component are attributes, operations and an asynchronous behaviour. Representation components are incorporated as attributes (representing the knowledge used by the component) and as formal parameters of its operations. Each operation of a control component represents a context change during its execution. The asynchronous behaviour of the control component is described by an Object Petri Net (OPN) [SIB, 85] that models its 'control logic' (i.e. the event-based control flow). Tokens inside the OPN refer to representation components used by the control component. The OPN structure describes the logical and temporal way the operations of a control component are managed (synchronizations, parallelism, concurrent access to its attributes, etc.). Operations of the control component are executed when OPN transitions are fired. This OPN-based behaviour also describes the exchanges (message reception and emission) performed by the control component, as well as the way it synchronizes its internal activities according to these messages. The OPN thus corresponds to the reaction of the control component to the evolution of the context (received messages, occurring events, etc.).

We chose OPN both for modelling and implementation purposes. The use of Petri nets with objects is justified by the need for a formalism to describe precisely synchronizations, concurrent access to data and parallelism (unlike finite state machines) within control components, but also the interactions between them. The use of Petri nets is common, for specification and analysis purposes, in the automation and robotics communities. The formal analysis of Petri nets has been widely studied and provides algorithms [DAV, 04] for verifying the event-based model of the controller (its logical part). Moreover, Petri nets with objects can be executed by means of a token player, which extends their use to programming purposes (cf. section 4).

Fig. 3 shows a simplified example of the behaviour of a control component corresponding to a command entity, named VehiclePositionCommand. It has three attributes: its periodicity, the Vehicle being controlled and the applied VehiclePositionControlLaw. The Vehicle and the VehiclePositionControlLaw are connected in the same way as described in Fig. 2, meaning that VehiclePositionCommand will apply the VehiclePositionControlLaw to the Vehicle at a given periodicity. Such a decomposition allows the adaptation of the control component VehiclePositionCommand to the Vehicle and VehiclePositionControlLaw used (i.e. the representation components it incorporates). It is thus possible to reuse this control component in different control architectures (for vehicles of the same type).


This control component's provided port (Fig. 3) is typed by the interface named VehiclePositionControl, which declares the services offered (to other control components) in order to be activated/deactivated/configured. Its required ports are each typed by one interface: VehicleMotorsAccess, which declares the services used to fix the values of the vehicle's motors, and VehicleWheelsVelocityAndOrientationAccess, which declares the services used to obtain the orientation and velocity of the vehicle's wheels. These two interfaces are provided by ports of one or more other control components (depending on the decomposition of the control architecture).

The (simplified) OPN representing the asynchronous behaviour of VehiclePositionCommand, shown in Fig. 3, describes the periodic control loop it performs. This loop is composed of three steps:

- the first one (firing of transition T1) consists in requesting sensor data,

- the second one (firing of transition T2) consists in computing the reaction by executing the MotorData computeVehicleMotorControl(Velocity, Orientation) operation (cf. Fig. 3) and then fixing the values of the vehicle motors (token put in the FixMotorsValues black place),

- and the third one (firing of transition T3) consists in waiting for the next period before a new iteration (loop).

Grey and black Petri net places represent, respectively, the reception and transmission of messages corresponding to service calls. For example, the grey places startExecution and stopExecution correspond to services declared in the VehiclePositionControl interface, whereas the black place RequestVelAndOrient and the grey place ReceiveVelAndOrient correspond to a service declared in the VehicleWheelsVelocityAndOrientationAccess interface.

Figure 3: Simple example of a control component (VehiclePositionCommand: provided port typed by VehiclePositionControl (Executor), declaring In StartExecution() and In StopExecution(); required ports typed by VehicleMotorsAccess (Sender), declaring Out FixMotorsValues(MotorCommandData), and VehicleWheelsVelocityAndOrientationAccess (Requester), declaring Out RequestVelAndOrient() and In ReceiveVelAndOrient(Velocity, Orientation); attributes int period, VehiclePositionControlLaw law, Vehicle v; operation MotorData computeVehicleMotorControl(Velocity, Orientation); asynchronous behaviour: an OPN loop through transitions T1, T2 and T3 over the places RequestVelAndOrient, ReceiveVelAndOrient and FixMotorsValues, with timed annotation [period, ∞])
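The periodic behaviour encoded by this OPN can be paraphrased by the following Java sketch (an approximation for readability only: in CoSARC the loop is executed by the token player, not hand-coded; the control-law class is the one sketched above):

    // Hypothetical rendering of the T1/T2/T3 loop of VehiclePositionCommand.
    class VehiclePositionCommandLoop implements Runnable {
        private final long periodMs;                  // the 'period' attribute
        private final VehiclePositionControlLaw law;  // incorporated representation component
        private volatile boolean running = true;      // startExecution / stopExecution services

        VehiclePositionCommandLoop(long periodMs, VehiclePositionControlLaw law) {
            this.periodMs = periodMs; this.law = law;
        }

        public void run() {
            while (running) {
                double[] velAndOrient = requestVelAndOrient();              // T1: request sensor data
                double[] motors = law.computeActuatorValues(velAndOrient);  // T2: compute the reaction
                fixMotorsValues(motors);                                    //     and fix the motor values
                try { Thread.sleep(periodMs); }                             // T3: wait for the next period
                catch (InterruptedException e) { running = false; }
            }
        }

        // Stand-ins for the messages exchanged through the required ports.
        private double[] requestVelAndOrient() { return new double[] { 0.0, 0.0 }; }
        private void fixMotorsValues(double[] values) { /* send via VehicleMotorsAccess */ }
    }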


D. Connectors

Connections between control components are reified into components named connectors (which realize the assembly). Connectors contain the protocol according to which the connected control components interact. Being a component, a connector is an entity definable and reusable by a user. It implements a protocol that potentially involves a large number of message exchanges, synchronizations and constraints. Once defined, connectors can be reused for different connections in the control architecture. This separation of the interaction aspect from the control aspect appears to be very important in order to create generic protocols adapted to domain-specific architectures. One practical benefit of this separation is that it distinguishes the description of interactions from the description of control activities, whereas describing both aspects inside the same entity type would reduce reusability.

A connector incorporates sub-components named roles (as attributes). Each role defines the part of the behaviour that a control component adopts when interacting through the protocol defined by the connector. We then say that a control component "plays" or "assumes" a role. For example, the connector of Fig. 4 describes a simple interaction between a RequesterRole and a ReplierRole. The control component assuming the Requester role sends a request message to the control component assuming the Replier role, which then sends the reply message back to the Requester (once the reply has been computed). With each role it incorporates, a connector associates one of its required or provided ports. A connector's port is typed by an interface that defines the message exchanges allowed between the connector on one side and the control component to be connected on the other side. Fig. 4 shows that the connector has one provided port (left) typed by the Requester interface and one required port (right) typed by the Replier interface. The Replier interface defines the message exchanges between the connector and the VehicleIOController control component: VehicleIOController receives a request from the connector, computes it internally, and then sends the reply. The connection between the control components and the connector is possible because of the compatibility of ports: an interface typing a connector's port (provided or required) must be referenced by the interface of the control component's port to which it is connected. Fig. 3 shows that the VehicleWheelsVelocityAndOrientationAccess interface references the Requester interface, which allows the connection of VehiclePositionCommand's port; the VelocityAndOrientationAccess interface references the Replier interface, which allows the connection of VehicleIOController's port (cf. Fig. 4). Finally, the compatibility of control component ports is verified according to interface names. Fig. 4 shows that the connection is possible because the VehicleWheelsVelocityAndOrientationAccess service is required and provided by the two connected control component ports (i.e. each interface has the same name).

Figure 4: Simple connector example, connecting two control components (the RequestReplyConnection connector, with its RequesterRole and ReplierRole, links the VehiclePositionCommand component to the Vehicle I/O controller component; the Requester interface declares In sendRequest(any) / Out receiveReply(any), the Replier interface declares Out receiveRequest(any) / In sendReply(any))


A connector can be a very adaptive entity. First, the number of roles played by components can be parameterized. The connector's initialisation operation is used to manage the number of roles played, according to the number of control component ports to be connected by the connector and according to their interfaces. A cardinality is associated with each role to define constraints on the number of role instances. For example, the ReplierRole has to be instantiated exactly once, whereas the RequesterRole can be instantiated one or more times. The second adaptive capacity of a connector is the ability to define generic (template-like) parameters that allow the connector to be parameterized with types. This is particularly important to abstract, as far as possible, the description of the connector from the data types used in the message exchanges. In Fig. 5, the connector has two generic parameters: anyReq, representing the list of the types of the parameters transmitted with the request, and anyRep, representing the list of the types of the parameters transmitted with the reply. RequestReplyConnection is parameterized as follows: anyReq is valued to void, because no data is transmitted with the request message; anyRep is valued with the Velocity and Orientation type pair, because these are the two pieces of information returned by the reply. Protocols being described as a composition of roles, roles are parameterized entities too.

Figure 5: RequesterRole and ReplierRole (each role has a port exported by the connector — typed by Requester<anyReq, anyRep>, with In sendRequest(anyReq) / Out receiveReply(anyRep), or by Replier<anyReq, anyRep>, with Out receiveRequest(anyReq) / In sendReply(anyRep) — and a port internal to the connector, typed by Transmitter<anyReq, anyRep>, with transmitRequest(Id, anyReq) and transmitReply(Id, anyRep); the OPN of each role relates these message places)
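The generic parameterization of roles could be approximated with Java generics as follows (an illustrative sketch only; Req and Rep stand for anyReq and anyRep, and all names are assumptions):

    // Routing information identifying a requester ("informing"/"routing" states omitted).
    final class RoleId { }

    // The internal Transmitter contract between the two roles.
    interface Transmitter<Req, Rep> {
        void transmitRequest(RoleId id, Req request);
        void transmitReply(RoleId id, Rep reply);
    }

    class RequesterRole<Req, Rep> {
        private final RoleId id = new RoleId();    // initialized by the connector
        private final Transmitter<Req, Rep> peer;  // internal port towards the ReplierRole
        RequesterRole(Transmitter<Req, Rep> peer) { this.peer = peer; }
        void sendRequest(Req request) { peer.transmitRequest(id, request); }
        void receiveReply(Rep reply)  { /* hand the reply back to the control component */ }
    }

For the RequestReplyConnection described above, Req would be instantiated to Void (no request data) and Rep to a (Velocity, Orientation) pair.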

A role is a sub-component, part of a connector, that itself has ports, attributes, operations and an asynchronous behaviour, like control components (Fig. 5). But unlike control components, the description of roles is completely bound to that of connectors. A role has a provided or a required port exported by the connector, to make it "visible" outside the connector (and thus connectable with control component ports). The other ports of roles are internal to the connector (Fig. 4) and are connected by the connector's initialization operation. A role implements the message exchange between the port of the connected control component and its (own) associated port, as well as the message exchange with the other role(s) of the connector (i.e. exchanges inside the connector). Constraints described in the OPNs of the roles (Fig. 5) ensure that only one request is sent by the Requester until it receives a reply, and that the Replier processes only one request until it sends the reply to the Requester. The OPN of ReplierRole ensures that only one request is processed at a time by the component assuming this role. It also describes the way the role identifies and memorizes the requester in order to send it the reply. A specific object of type Id, containing all the configuration information necessary to this end, can be transmitted during message exchanges. RequesterRole sends its own identifier object to the ReplierRole with the transmitRequest message (the state of the Id is "informing"). The ReplierRole uses this Id to identify its clients and then to send them the reply computed by the control component behaviour. In this case, the Id is used to configure communications (its state is "routing"), and not as registering data. When more than one RequesterRole exists, each has a port typed by the Transmitter interface that is connected to the corresponding provided port of the unique ReplierRole. Their Ids are then used by the replier to select the receiver of the computed reply. The initialization of role Ids is done by the initialization operation of the connector.



The RequestReplyConnection connector can be used to establish connections between different control components, provided that the interaction to be described corresponds to this protocol and that the ports are compatible. To design a mobile robot architecture, we defined (and used several times) different types of connectors supporting protocols such as EventNotification or DataPublishing.

Since connectors are also modelled by Petri nets, the OPN resulting from the composition of control components (i.e. the model resulting from the composition of all their asynchronous behaviours) can be built. Thanks to this property, developers can analyze inter-component synchronizations and check, for example, that these interconnections do not introduce any deadlock.

E. Configurations

Once the control architecture (or a part of it) has been completely modelled, the result is a graph of the composition of control components (the composition being done by means of connectors). The CoSARC language provides another type of component, named Configuration, that contains this graph. It allows developers to encapsulate a software (sub-)architecture into a reusable entity. Configurations can be used to separate the descriptions of sub-systems, each one corresponding to a resource of the robot. The global control architecture can itself be represented by a configuration. At the design phase, a configuration can be considered as a control component, because it has ports that are connectable via connectors. The ports of a configuration export ports of the control components it contains (dotted lines, Fig. 6). At runtime, any connection to those ports is replaced by a connection to the initial port, i.e. that of the concerned control component. Fig. 6 shows an example of a configuration: the MobileSubSystem, corresponding to the sub-architecture controlling the vehicle part of a mobile robot. It exports the provided port of the MobileSupervisor and the required ports of VehiclePositionCommand and VehicleObstacleEventGenerator. Since a configuration can contain other configurations, it allows developers to describe the controller architecture at different levels of granularity.

Figure 6: Managing architecture organization: description of the MobileResource by means of a configuration and description of its deployment (control components such as MobileSupervisor, VehiclePositionCommand and VehicleObstacleEventGenerator are grouped in a configuration, placed into containers on a processing node, and assigned container priorities)

The CoSARC language provides structures to describe the deployment of a configuration. This description is made in two phases:


- the hardware description phase (graphs of nodes, communication networks) defines the operating system (OS) resources available to execute components;

- the component placement phase defines the different OS processes (named containers) executing one or more control components, and the scheduling of these processes on each node. At the deployment stage, configurations incorporate the description used to install and configure components.

This mechanism allows the deployment of an architecture to be treated independently of the control behaviour it defines. We chose to treat the organization of a control architecture into layers (hierarchy) during the deployment phase. A container is the unit used to describe (part of) a layer. The relationships between layers are translated into the execution configuration of containers: the execution priorities of container processes are set according to the layer relationships (the upper the layer represented by the container, the higher its reaction priority). A future research direction is to find a multi-criteria scheduling algorithm (dealing with temporal constraints in addition to container priorities) better adapted to the management of the layer hierarchy, in order to ensure maximal reactivity of the lower layers without sacrificing pre-emption by the upper layers.
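A minimal sketch of this layer-to-priority mapping (hypothetical, assuming integer OS process priorities where larger means higher):

    import java.util.List;

    // A container hosts (part of) one layer; upper layers get higher priorities.
    final class Container {
        final String name;
        final int layer;     // 0 = lowest (I/O) layer
        int priority;        // OS scheduling priority, computed at deployment
        Container(String name, int layer) { this.name = name; this.layer = layer; }
    }

    final class Deployment {
        // basePriority is the priority of layer 0; each layer above gains one level.
        static void assignPriorities(List<Container> containers, int basePriority) {
            for (Container c : containers)
                c.priority = basePriority + c.layer;
        }
    }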

IV. DEPLOYMENT & EXECUTION MODEL

The CoSARC language is not only a design language but also a programming language. It therefore needs a framework to execute components. This framework is a middleware application that runs on top of an OS. It provides standardized access to OS functionalities, and a set of functionalities specific to the execution of CoSARC components. It is also in charge of configuration deployment (i.e. the deployment of the control components and connectors corresponding to the description made), by creating containers and by managing their scheduling on each processing node.

Figure 7: Container's internal structure (two software processes, the Token Player and the Interaction Engine, exchange token emissions and receptions; the Token Player programs threaded operations for parallel operation execution and timers for time events, and receives operation-ending and time events back)

A container is in charge of the execution of a set of control components and of all the roles played by these components. Any number of control components and roles can be placed into one container, and many containers can be deployed on the same processing node. A container supports OPN execution and role communications by means of two software processes that interact through an asynchronous communication model (Fig. 7): the Token Player (TP) and the Interaction

Engine (IE). The TP is a kind of OPN inference engine that executes the byte code into which is compiled the OPN resulting from the composition of all the asynchronous behaviours of the roles and control components contained in the container. During its execution, it can program threads for parallel operation execution and timers for timed-transition management. During execution, the TP communicates with the Interaction Engine (IE) for the emission and reception of external messages. The IE manages the run-time communications of the control components contained in the corresponding container. These communications between control components are configured depending on the connection of the roles they play. At run-time, the IE of a given container exchanges messages with the IEs of other containers according to role connection information. Given a container, its IE unpacks received messages to give the corresponding tokens to the Token Player; vice-versa, it packs up tokens arriving from its TP to send the corresponding messages (Fig. 8).

Figure 8: Container communications (each container's Interaction Engine packs tokens coming from its Token Player into transmitted messages, and unpacks received messages into tokens given to its Token Player)

In the next subsections, we first present the component deployment model, and then focus on the OPN execution mechanism.

A. Deployment Model Overview


The execution of CoSARC components is completely configured by their deployment. The description of a configuration's deployment precisely specifies component execution issues. It is used to configure container priorities, but it also helps determine where and how OPNs are executed. The deployment is realized in the following steps:

• The component placement step consists in creating each container on the nodes and placing component code inside the containers, according to the deployment description (Fig. 6).

• The role assignment step consists in defining where roles are executed. This is automatically deduced from the preceding step by applying the following rule: when a control component plays a role, this role is executed in the same container as the control component. Fig. 9 shows that VehiclePositionCommand and RequesterRole are placed inside the same container. A connector can thus be distributed among different containers.

• The behaviour execution model definition step consists in producing the global OPN that will be executed by the TP. A container's OPN model is made of the disjoint union of complete control component behaviours. A complete behaviour is the OPN resulting from the fusion of a control component's and its roles' asynchronous behaviours. This fusion is deduced from each connection between a control component and a role it plays: each place concerned with message exchanges, of a


control component's OPN, is merged with the corresponding place of a role's OPN, according to the port connection and interface matching. The resulting OPN, executed by a container, is thus made of as many "complete behaviours" as there are control components to execute. These behaviours can communicate with each other (if connections between their roles exist), and they can communicate with behaviours contained in other containers. Fig. 9 gives an example of this step: it shows, for example, that places P1 and P3 are merged, because they are bound to the same sendRequest service of the Requester interface.

Figure 9: Deployment of containers — OPN byte code compilation (the OPNs of VehiclePositionCommand and its RequesterRole are merged in Container 1, and those of the Vehicle I/O controller and the ReplierRole in Container 2; places concerned with the same message exchanges, e.g. P1–P3, P2–P4, P10–P11 and P9–P12, are merged)

• The container communication configuration step consists in defining the communications between (and inside) the Interaction Engines of the containers, according to the connections between role ports (Fig. 4). For example, the RequesterRole r1 and the ReplierRole r2 are connected by their ports typed by the Transmitter interface. The interaction described in the two Transmitter interfaces (Figs. 4, 5) implies that r1 sends the transmitRequest message to r2 and that r2 sends the transmitReply message to r1 (once their ports are connected).

Configuring the communications supported by the Interaction Engines is done in several steps. First, the relations between Interaction Engines must be determined. This is deduced by applying the following operation:

    foreach port p of a role r executed by a container c
        foreach port p' of a role r' executed in a container c'
            if p is connected with p'
                configure message reception and emission between c and c'
                with the information of the p and p' interfaces
            end
        end
    end
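In Java, this pairing operation might look as follows (a sketch under assumed types; the paper does not detail the actual middleware data structures):

    import java.util.List;

    // Minimal stand-ins for the middleware entities (names are assumptions).
    interface ContainerRef { void configureLink(RolePort from, RolePort to); }
    interface RolePort {
        boolean isConnectedWith(RolePort other);
        ContainerRef container();
    }

    final class CommunicationConfigurator {
        // Pair every connected (p, p') and configure emission/reception
        // between their containers, mirroring the nested foreach above.
        static void configure(List<RolePort> ports) {
            for (RolePort p : ports)
                for (RolePort q : ports)
                    if (p != q && p.isConnectedWith(q))
                        p.container().configureLink(p, q);
        }
    }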

Second, the system communication supports (pipes, TCP/IP, etc.) used to make IE communications effective must be defined. This is done in accordance with the connector deployment. If a connector is deployed on a single container, the communication is local to the IE, so the IE directly routes tokens without using any OS communication support. If it is deployed on two or more containers placed on the same processing node, the communication relies on OS inter-process communication procedures


(e.g. mailboxes). If the connector is deployed on different nodes, then a network communication protocol has to be used (e.g. TCP/IP). For the moment, we do not consider network distribution problems.
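The choice of communication support could be sketched as follows (illustrative Java over an assumed deployment model):

    // Choose the transport for a connector according to its deployment:
    // local token routing, same-node IPC, or a network protocol.
    enum Transport { LOCAL_ROUTING, OS_IPC_MAILBOX, TCP_IP }

    final class TransportSelector {
        static Transport select(int containerCount, boolean sameNode) {
            if (containerCount == 1) return Transport.LOCAL_ROUTING;  // one container: the IE routes tokens directly
            if (sameNode)            return Transport.OS_IPC_MAILBOX; // several containers on the same node
            return Transport.TCP_IP;                                  // distributed over the network
        }
    }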

Finally, for each IE, the message reception and emission points must be matched with, respectively, the input and output places of the OPN played by the TP. This information is directly extracted from the port descriptions (ports reference the input and output places associated with message transmission). The IE can then pack tokens arriving from the token player into emitted messages, and unpack tokens from arriving messages.

Figure 10: Configuring container communications with connector and deployment information (the RequestReplyConnection is deployed across two containers on one processing node; the IE of Container 1 packs tokens from place P5 of the VehiclePositionCommand behaviour into transmitRequest(Id, void) messages, and the IE of Container 2, hosting the Vehicle I/O controller, answers with transmitReply(Id, (Velocity, Orientation)) messages)

Fig. 10 shows an example of the configuration of the Interaction Engines of containers 1 and 2, according to the connector used and its deployment. We can see that the IE of container 1 packs tokens coming from the place P5 into a transmitRequest message and sends the message to container 2. The IE of container 2 unpacks the token from the message and puts it into the place P7.

B. OPN Execution Model Overview


Basically, there are two main approaches for implementing a discrete-event controller specified by means of a Petri net [VAL, 95]. In the first one, using a procedural language, a collection of tasks is coded in such a way that their overall behaviour emulates the dynamics of the Petri net. In the second one, the Petri net is considered as a set of declarative rules, and an inference engine, which does not depend on the particular Petri net to be implemented, operates on the data structure representing the net. This inference engine is the token player, which propagates tokens with respect to the semantics of the OPN formalism. The TP also executes the functional calls contained in the condition and action parts of OPN transitions. Operations can be threaded when their execution is too long and might block the inference for too long a time. The TP also programs timers when it needs time events to be monitored (time-out, periodic and delay events), in order to deal with the timing annotations on transitions. When a timer or a thread execution finishes, it sends a corresponding internal event to the TP. The argument for using a TP instead of a direct compilation into a programming language is that the state of the OPN during execution is reified, which makes it possible to put in place introspection (dynamic study of the OPN state) and reflexive (dynamic change of the OPN state) mechanisms. Introspection is useful to reason, at run-time, about sequences of events, in order to detect a problem and elaborate a diagnosis. Reflexivity is useful to correct (or modify) the OPN state, once a diagnosis is done.


The token player inference mechanism [PAS, 02] is event-based: OPN tokens are propagated in the Petri net control structure as far as possible according to the OPN propagation rules; the mechanism then waits for new events to reactivate the inference. In order to optimise the OPN inference, its structure is compiled into an equivalent executable structure (Fig. 11). The principle is to decompose transitions into an optimised graph of transition nodes (test, joint, time and action nodes are only used if required); the propagation mechanism is applied to this resulting graph.

Figure 11: Compilation of a transition into an executable structure (a transition with conditions, actions and a time interval [t1, t2] is decomposed into a graph of test, joint, time and action nodes)

The propagation mechanism propagates tokens as far as possible within this resulting graph. The TP starts from the newly marked places of the OPN and propagates the tokens of each newly marked place through the transitions, i.e. as deep as possible within the equivalent graph. For example, in Fig. 11, the token is first propagated to the first node ("test"). In the case of a successful test, the token is blocked before the "joint" node until a token arrives in the adjacent "test" node and satisfies that test. When done, the two tokens are used to verify the test associated with the "joint" node. If this step is passed, the tokens continue their propagation to the "time" node and wait for the time event to occur (if specified). Once the time event has occurred, the propagation continues to the "action" node, where operations are executed (possibly creating new tokens). When new tokens have been created and put into one or more post-places, these places are considered to be newly marked ones.
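A rough Java sketch of such a node graph (purely illustrative; the joint and time nodes, which synchronize several tokens and wait on timers, are omitted for brevity):

    import java.util.function.Consumer;
    import java.util.function.Predicate;

    final class Token { /* refers to a representation component */ }

    // A transition node propagates a token as far as possible; false means blocked.
    interface TransitionNode { boolean propagate(Token t); }

    final class TestNode implements TransitionNode {
        private final Predicate<Token> condition;
        private final TransitionNode next;
        TestNode(Predicate<Token> condition, TransitionNode next) {
            this.condition = condition; this.next = next;
        }
        public boolean propagate(Token t) {
            return condition.test(t) && next.propagate(t); // blocked if the test fails
        }
    }

    final class ActionNode implements TransitionNode {
        private final Consumer<Token> action;
        ActionNode(Consumer<Token> action) { this.action = action; }
        public boolean propagate(Token t) {
            action.accept(t); // execute the transition's action; may create new tokens
            return true;
        }
    }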

When propagation is no longer possible (i.e. there is no newly marked place), the OPN inference mechanism is in a "stable state". A "stable state" is a state in which token propagation is waiting for event occurrences in order to be pursued (Fig. 12). In a stable state, the token player thus waits for internal events (time events or parallel operation endings) and/or external events (message arrivals) that will reactivate token propagation. For instance, when an external event occurs, a new token is created and put into the corresponding input place; such a newly marked place causes the propagation to start again.

The right part of Fig. 12 depicts the propagation of tokens from a newly marked place. When all tokens have been propagated from such a place to its post-transitions, this place is no longer considered as newly marked. If a transition is fired (its action is executed), then new tokens are created and put into the post-places, which then become newly marked places. When all the post-transitions of a given place have been treated, the token player checks whether new internal events have occurred. If none, the propagation pursues with another newly marked place. On the other hand, if an internal event has occurred, it is treated before pursuing with the newly marked places. When all the newly marked


places have been considered, and in the absence of internal event occurrences, the token player reaches a "stable state".

In this inference mechanism, we distinguish internal and external events: internal events are monitored more frequently than external ones (Fig. 12). This distinction is made because of the nature (meaning) of the events. For reactivity purposes, internal events, and particularly time events, must be handled as quickly as possible; indeed, such events can correspond to watchdogs, for example. External events result from message arrivals and represent communications between component instances. Internal reactivity (reaction and propagation) is considered as having priority over external requests. When tokens are put into output places, these tokens are given to the Interaction Engine, which is in charge of sending them as parameters of messages.
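The priority of internal over external events could be rendered by the following simplified event loop (queue names and types are assumptions; synchronization with the threads and timers feeding the queues is omitted):

    import java.util.ArrayDeque;
    import java.util.Deque;

    // Simplified token player loop: drain internal events first, then take
    // one external event; with both queues empty, the TP is in a stable state.
    final class TokenPlayerLoop {
        private final Deque<Runnable> internalEvents = new ArrayDeque<>(); // timers, operation endings
        private final Deque<Runnable> externalEvents = new ArrayDeque<>(); // message arrivals via the IE

        void step() {
            while (!internalEvents.isEmpty())     // internal reactivity has priority
                internalEvents.poll().run();      // e.g. fire a timed transition
            if (!externalEvents.isEmpty())
                externalEvents.poll().run();      // e.g. put a token into an input place
            // otherwise: stable state, wait for the next event occurrence
        }
    }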

Figure 12: Simplified Token Player inference mechanism (flow chart: in the stable state, the TP waits for an internal or external event; it then propagates all new tokens from each newly marked place P to its post-transitions T, checking for new internal events between post-transitions, until no newly marked place remains)

For determinism and performance purposes, the token player relies on:

- a real-time operating system, which allows time events to be managed thanks to real-time clocks, and component-instance process execution priorities to be defined precisely;

- the determinism of the executed structure, made possible by OPN transition firing priorities;

- the efficiency of the propagation mechanism, which avoids polling of the OPN (cyclic execution) and consequently optimises its execution (and processor use);

- the robustness of the inference mechanism, which guarantees that no evolution will take place if an incoherent or non-pertinent event occurs.

So far, the TP has been developed and tested in order to validate the inference mechanism. A complete, real-time version of the container execution mechanism is under development.

Representation components are translated into objects, and their types into classes. Each of these classes implements the object interfaces corresponding to the interfaces typing the provided ports. Each required port is translated into a specific attribute that references the object interface corresponding to the interface typing that required port. The connections of representation component ports are also manageable by means of a specific connection object.
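For instance, the components of Fig. 2 might be translated along these lines, reusing the hypothetical interfaces sketched earlier (an assumed rendering of the translation scheme just described, not the actual generated code):

    // The type of a representation component becomes a class implementing
    // the object interfaces corresponding to its provided ports...
    class VehicleObject implements VehiclePhysicalPropertiesConsultation,
                                   VehicleDynamicPropertiesAccess {
        public double getWheelDiameter()    { return 0.30; }
        public double getWheelVelocity()    { return 0.0; }
        public double getWheelOrientation() { return 0.0; }
    }

    // ...while each required port becomes an attribute referencing the
    // interface typing it, set through a specific connection object.
    class ControlLawObject {
        VehiclePhysicalPropertiesConsultation physicalPort; // required port
        VehicleDynamicPropertiesAccess dynamicPort;         // required port
    }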


V. CONCLUSION

We have presented the CoSARC methodology, which is devoted to improving the quality, modularity, reusability and upgradeability of robot control architectures along the whole architecture life cycle, from analysis to execution. To this end, the methodology relies on two aspects: an architecture pattern and a component-based language. The proposed architecture pattern helps in many ways with the organization of control activities, by synthesizing the main organization principles. It also gives a way of identifying control activities and their interactions with respect to the material elements of the robot. It is also specifically dedicated to the reification and the integration of human expertise (control laws, physical descriptions, mode management, observers, action scheduling, etc.). The CoSARC language deals with the design, implementation and deployment of control software architectures. It supports four categories of components, each one dealing with a specific aspect of the control architecture description. Moreover, it has the added benefit of relying on a formal approach based on the Object Petri Net formalism. This allows analyses to be performed at the design stage, which is a great advantage when designing the control of complex systems.

Current work concerns the development of the CoSARC execution environment and of the CoSARC language development toolkit.

REFERENCES

[ALA, 98] Alami, R. & Chatila, R. & Fleury, S. & Ghallab, M. & Ingrand, F. An architecture for autonomy. International Journal of Robotics Research, vol. 17, no. 4 (April 1998), pp. 315-337.

[ALB, 02] Albus, J.S. et al. 4D/RCS: A reference model architecture for unmanned vehicle systems. Technical report, NISTIR 6910, 2002.

[ALD, 03] Aldrich, J. & Sazawal, V. & Chambers, C. & Notkin, D. Language support for connector abstraction. In Proceedings of ECOOP 2003, pp. 74-102, Darmstadt, Germany, July 2003.

[ARK, 97] Arkin, R.C. & Balch, T. AuRA: principles and practice in review. Technical report, College of Computing, Georgia Institute of Technology, 1997.

[BIN, 96] Binns, P. & Engelhart, M. & Jackson, M. & Vestal, S. Domain-Specific Architectures for Guidance, Navigation and Control. International Journal of Software Engineering and Knowledge Engineering, vol. 6, no. 2 (June 1996), pp. 201-227, World Scientific Publishing Company.

[BOR, 98] Borrelly, J.J. et al. The ORCCAD Architecture. International Journal of Robotics Research, Special issue on Integrated Architectures for Robot Control and Programming, vol. 17, no. 4 (April 1998), pp. 338-359.

[BRO, 98] Brooks, R. et al. Alternative Essences of Intelligence. In Proceedings of the American Association for Artificial Intelligence (AAAI), pp. 89-97, July 1998, Madison, Wisconsin, USA.

[BRO, 86] Brooks, R.A. A robust layered control system for a mobile robot. IEEE Journal of Robotics and Automation, vol. 2, no. 1, pp. 14-23, 1986.

[BRU, 02] Bruneton, E. & Coupaye, T. & Stefani, J.B. Recursive and Dynamic Software Composition with Sharing. In Proceedings of the 7th International Workshop on Component-Oriented Programming (WCOP02) at ECOOP 2002, June 2002, Malaga, Spain.

[DAV, 04] David, R. & Alla, H. Discrete, Continuous and Hybrid Petri Nets. Springer, ISBN 3-540-22480-7, 2004.

[MED, 97] Medvidovic, N. & Taylor, R.N. A Framework for Classifying and Comparing Software Architecture Description Languages. In Proceedings of the 6th European Software Engineering Conference together with the 5th ACM SIGSOFT Symposium on the Foundations of Software Engineering (ESEC/FSE), Springer-Verlag, pp. 60-76, 1997, Zurich, Switzerland.

[MUS, 02] Muscettola, N. et al. IDEA: Planning at the core of autonomous reactive agents. In Proceedings of the 3rd International NASA Workshop on Planning and Scheduling for Space, 2002.



[OMG, 01] OMG. CORBA 3.0 new components chapters. OMG TC Document formal/2001-11-03, Object Management Group, December 2001.

[PAS, 02] Passama, R. & Andreu, D. & Raclot, F. & Libourel, T. J-NetObject : un noyau d'exécution de réseaux de Petri à objets temporels (in French). Research report LIRMM n°02182, version 1.0, LIRMM, France, December 2002.

[SIB, 85] Sibertin-Blanc, C. High-level Petri Nets with Data Structure. In Proceedings of the 6th European Workshop on Application and Theory of Petri Nets, pp. 141-170, Espoo, Finland, June 1985.

[STE, 96] Stewart, D.B. The Chimera Methodology: Designing Dynamically Reconfigurable and Reusable Real-Time Software Using Port-Based Objects. International Journal of Software Engineering and Knowledge Engineering, vol. 6, no. 2, pp. 249-277, June 1996.

[SZY, 99] Szyperski, C. Component Software: Beyond Object-Oriented Programming. Addison-Wesley, 1999.

[VAL, 95] Valette, R. Petri nets for control and monitoring: specification, verification, implementation. In Workshop "Analysis and Design of Event-Driven Operations in Process Systems", Imperial College, Centre for Process Systems Engineering, London, 10-11 April 1995.

[VOL, 01] Volpe, R. et al. The CLARAty Architecture for Robotic Autonomy. In Proceedings of the IEEE Aerospace Conference (IAC-2001), vol. 1, pp. 121-132, Big Sky, Montana, USA, March 2001.



An Asynchronous Reflection Model for Object-oriented Distributed Reactive Systems

Jacques Malenfant

Université Pierre et Marie Curie-Paris6, UMR 7606, 8 rue du Capitaine Scott, F-75015 Paris, France
CNRS, UMR 7606, 8 rue du Capitaine Scott, F-75015 Paris, France
Jacques.Malenfant@lip6.fr

Abstract. Today's distributed and embedded systems challenge the traditional procedural approach to reflection. Central to this approach is the use of an "implements" relationship to realize the connection between the meta and the base level. This restricted view of reflection is inappropriate in distributed or embedded computing, where part of the system to reflect upon cannot be captured in an "implements" relationship, either because we lack a centralized state or because an essential ingredient lies outside the system. We introduce a novel asynchronous reflective model, ARM, where the connection between levels uses an asynchronous publish/subscribe communication model. We show not only that this model is better suited to distributed and reactive systems, but also that it generalizes the possible forms of reflection by adopting, and adapting to, B. Smith's "right combination of connection and detachment" between the base and the metalevel. ARM is applied to the reflective control of modular robots, whose dynamic physical reconfigurability must be paralleled by a software reconfigurability offered by reflection. ARM then uses reactive objects founded on the GALS approach (globally asynchronous, locally synchronous), which implements synchronization by future values. A hybrid deliberative/reactive framework inspired by intelligent robotic control systems is implemented using the ARM[GALS] for Java platform.

Keywords: reflection, object-oriented systems, distributed systems, embedded & reactive systems, event-based computing, AI robotics.

1 Introduction

Today's distributed and embedded systems operate for long periods of time, during which large variations in the level of available resources are observed. Sustaining an acceptable level of performance in such contexts requires the dynamic adaptation of applications. Reflection [39] has been proposed both as a conceptual framework and as an architectural blueprint to achieve the dynamic adaptation of applications, yet these new systems challenge the traditional procedural approach to reflection.

In this paper, we consider the problem of controlling modular robots. A modular robot is one that is made of a large number of homogeneous and simple robotic entities, which can be physically assembled and reconfigured during the mission. Examples of such robots are CONRO [38] and M-TRAN [48]. The key concept behind modular robotics is that the shape provides for the function. Modular robots are morphologically reconfigurable to adapt to their mission: open-field motion, going over obstacles, motion within pipes or between close walls, etc. We currently participate in a modular robot project called MAAM (Molecule := Atom | Atom + Molecule), where modules called atoms are built from a spherical kernel to which six orthogonal legs are attached. Legs can move within a cone, and they can bind to each other to form molecules (see Figure 1). Because the morphology of the robot defines its function, we claim that a parallel reconfigurability of the software controlling the robot is necessary to dynamically implement the control of the new function.

Modular robots are examples of the distributed embedded systems for which we introduce a novel reflection model called the Asynchronous Reflection Model, or ARM. We apply this new model to the software reconfiguration and dynamic adaptation of MAAM atoms. ARM aims at breaking the limits of procedural reflection in order to apply to distributed or embedded systems. It seeks generality and genericity, being parameterized both by the kind of entities supporting the base level

Fig. 1. The MAAM atom and molecule.

(components, active objects, reactive objects, and so on) and by the form of reified representation chosen to match the application's need for adaptation. Hence, ARM should rather be seen as a generator ARM[·](·) of reflection models.

We also present a Java implementation of ARM, which is based on a hybrid active and reactive object model. In this implementation, reactive objects use a synchronous approach to real-time [18], which meshes well with the event-based computation model of ARM. However, we preserve active objects, which are better suited to programming the metalevel entities. We have therefore adopted the globally asynchronous but locally synchronous (GALS) approach. The control of an individual atom is seen as a synchronous program, which can communicate with other atoms and with the metalevel using asynchronous events. Our active and reactive objects implement synchronization using future values, a first in this context to our knowledge. For the MAAM project, we have developed a hybrid deliberative/reactive framework inspired by work in AI robotics [2], within the ARM[GALS] for Java platform. A first deliberative metamodel for the dynamic adaptation of atoms has also been implemented.

The rest of the paper is organized as follows. In the next section, we introduce procedural reflection and then discuss its limitations, in order to argue in favor of a novel asynchronous approach to reflection. In Section 3, we introduce the ARM model and its implementation in Java. Next, we address the problem of MAAM distributed real-time control by introducing our GALS model, its integration with ARM to give the ARM[GALS] platform for Java, and its use to develop the deliberative/reactive framework used to program MAAM atoms. We then compare our approach to related work, and finally give conclusions and some perspectives on this work.

2 Motivations

2.1 Procedural reflection

Dating back to the seminal work of Smith, reflection is "an entity's integral ability to represent, operate on, and otherwise deal with its self in the same way that it represents, operates on and deals with its primary subject matter" [40]. Reflective behavior is implemented by a metalevel, "the most identifiable feature of reflective systems", placed in a meta relationship with a base level. Although Smith did not impose any particular way of realizing this relationship, his 3-Lisp language [39] and most of its descendants have adopted an "implements" relationship, where the metalevel interprets or otherwise processes the base level using the traditional data structures of language processors, reified into the language of the base level, thus enabling reflective computations.

The restriction of reflection to systems where the metalevel is in an “implements” relationship<br />

with the base level has been called procedural reflection by Jim des Rivières [14] and it has been<br />

defined as follows by Bobrow, Gabriel and White [5] in the context of reflective programming<br />

languages:<br />

“Reflection is the ability of a program to manipulate as data something representing the state of the program during its own execution. There are two aspects of such manipulation: introspection and intercession. Introspection is the ability for a program to observe and therefore reason about its own state. Intercession is the ability for a program to modify its own execution state or alter its own interpretation or meaning. Both aspects require a mechanism for encoding execution state as data; providing such an encoding is called reification.”

To be more precise, reification encompasses both defining a representation (e.g. a class hierarchy) and obtaining objects at run-time that actually represent the current state of the computation. To be effective, introspection and intercession must operate on reified data that are continuously updated. Causal connection is the property of the link between the base and the metalevel requiring that any change to one level has a causal effect on the other.

2.2 Limits of procedural reflection

The “implements” relationship exhibits many desirable properties, such as a full causal connection. When a metacircular interpreter is used as the metalevel, reification becomes easy, since everything is already represented by the metacircular interpreter as data structures in the base level language. However, this relationship has the major drawback of introducing a full coupling between the two levels. Indeed, when adopting this kind of relationship, the metalevel in some sense is the base level, an observation that Danvy and Malmkjaer have formalized as the single-threadedness property of 3-Lisp-like reflective languages [13].

The single-threadedness property says that, at any time, only one of the base and the metalevel is actually running. A usual corollary assumption permeating all reflective code is that nothing happens at the base level during reflective computations, and therefore modifications to the base level through its metalevel representation take effect before any computation step can be carried out at the base level. In other words, metalevel and base level computation steps are synchronous with each other, in the sense that they are totally ordered and happen in “mutual exclusion”.

Except for some attempts in AI and agent-oriented programming, this view of reflection has permeated the vast majority of the reflective languages, middleware and systems proposed to date. Only a few recent reflective languages and middleware have timidly begun to introduce alternatives (see §6). Unfortunately, this vision does not scale outside the traditional sequential programming field where it was first realized. Smith has often argued against such a restriction of reflection from an AI perspective, where his theory would apply to the relation between an intelligent entity and the world in which it operates [41]. Today, the challenge to this choice of the “implements” relationship to connect the base and metalevels comes from attempts to apply reflection to the distributed and embedded computing paradigms, where this relation fails to cope with the very nature of actual systems.

In distributed computing, the “implements” relationship goes against the absence of a global state and the inherent characteristics of a system made of independent computing nodes. As a result, most attempts to introduce reflection in distributed systems either restrict themselves to reflecting upon individual (sequential) entities independently [45,46,30,22,28,35,47,11,31,29], where a procedural approach can be applied, or reify only very particular aspects (e.g. message sending, stubs, ...) upon which local reflective computations can be introduced. Little work has been done to introduce reflection in embedded systems (however, see [21]); the difficulties are similar to those of the AI applications foreseen by Smith, since embedded systems need reflection upon the “outside world”, which indeed cannot be put in an “implements” relationship with the metalevel.

2.3 Advantages of an asynchronous non-implementing approach

To achieve its full generality, reflection must go beyond the traditional procedural approach and propose new ways to view reification and to implement the connection between the base level and its metalevel. In this paper, we propose that the publish/subscribe event-based computation model is better suited to implement the connection between the meta and the base level. We therefore introduce an asynchronous reflective model where the communication between levels uses asynchronous events, and we claim that this model provides the means to adopt and adapt to the “right combination of connection and detachment” [39] necessary to reflect in general.

Of course, developing a reflective kernel upon a publish/subscribe middleware is not a real challenge in itself. The crucial point is the revolution in the resulting form of reflection, and therefore what this new form allows us to do that was not possible in the traditional procedural approach. Generally speaking, using asynchronous events and distinguishing the metalevel representation from the data structures of the language processor sidesteps the old debate about concurrency versus non-concurrency between the base and the metalevels. Being relieved of its execution role, the metalevel can execute concurrently with the base level.

More substantially, rejecting the strict synchrony imposed by procedural reflection in favor of a more finely shaded semantics, more or less synchronized, more or less fault-tolerant, or respecting different notions of causality between events, allows us to introduce a correspondingly finely shaded notion of causal connection. This corresponds exactly to what Smith was arguing for when requiring an equilibrium between connection and detachment among levels. The importance of this aspect appears clearly in real-time systems, where strict deadlines must be met even when calling for reflective computations. These reflective computations must be sufficiently detached from the real-time base level to maintain the timeliness of the system.

Asynchronous reflection opens a wide spectrum of possible reified representations to be explored according to the form of introspection and intercession to be implemented. When relieved from the constraints of acting as the execution state of the language processor, the reified representation can be defined as a model in the very sense of the term, i.e. an abstraction chosen for a specific purpose. In the complex world of distributed embedded systems, it is illusory to hope for complete models taking into account all aspects of the base level. In asynchronous reflection, incompleteness, fuzziness, or even randomness in representation can be smoothly integrated into models. Given the needs for adaptation, the model is defined by necessity, both for the reified representation and for the means to construct the model, to introspect, and to intercede with the base level. A model is constructed by the metalevel by aggregation and processing of events received from the base level. The model need not be unique, and models can easily be composite, thanks to the publish/subscribe technology.

Using its autonomous execution, the metalevel can also perceive events coming from other base level entities and even events coming from sensors collecting information from the non-computerized “outside” world, thus enabling truly distributed and embedded reflection. Autonomous execution also allows the metalevel to probe its environment to collect the necessary information. Unlike reflection à la 3-Lisp, the metalevel can take the lead in adaptation instead of waiting for requests from the base level.

The flexibility of publish/subscribe communication can also be recruited to provide a wide spectrum of connection/detachment possibilities. Events coming from the base level entity associated with the metalevel can be followed at a finer granularity than the ones coming from other entities or from sensors. This can be done using the content-based filtering capabilities of message-oriented middleware. For example, to reflect upon the tactics and strategies of a robot football player, the metalevel need not be informed of the precise state of all the mechanisms controlling the movement of the robot. Moreover, in asynchronous reflection, the grain of observable computational steps can be organized hierarchically (bigger steps being an aggregation of finer steps), so that the choice of notification granularity can be made on a per-entity basis. This provides exactly the kind of tradeoffs Smith was looking for when he was talking about “the right combination of connection and detachment”.

Furthermore, as filters can be modified dynamically, the connection/detachment tradeoffs need not be decided once and for all, but can rather be adapted to the current situation. For example, at some point in time, the metalevel may want to ignore the level of remaining energy in the robot batteries, and inhibit the related events. But when the energy level crosses some threshold, the metalevel can switch into a mode where such events are solicited and taken into account to adapt the behavior of the base level (robot). More generally, some sort of meta-events can mark threshold crossings leading to a modification in the granularity of observation from the metalevel.

Finally, the very nature of reified representations will drastically change in this new setting. The experience we gained when applying reflection to adapt systems dynamically [27], as well as work in multi-agent systems, shows the necessity of prevision and planning for intercession. Adapting an application to the level of available physical resources is a control problem in the sense of classical control theory. Poor control policies can severely deteriorate performance. Repetitively adapting to rapidly varying physical parameters can lead to a form of thrashing where the system does nothing but adapt itself. This well-known problem in control theory can be tackled by an appropriate choice of metalevel representation upon which viable, if not optimal, policies can be computed and then applied when interceding with the base level. Our asynchronous reflection model therefore goes towards a marriage of reflection and control to succeed in the dynamic adaptation of applications.

Fig.2. Kernel entities of ARM (Entity, StructuralMeta, BehavioralMeta and basicBehavioralMeta, linked by the inherits-from, instance-of and meta-of relationships).

Of course, the major disadvantage of our asynchronous approach to reflection is the loss of reactiveness of the metalevel. Very fine-grained adaptations, down to the level of individual instructions of the base level program, will not be efficiently implementable in the asynchronous approach. There are two counter-arguments to this. First, the kind of adaptability needed in distributed and embedded systems usually has to do with variations of resources or context that happen infrequently compared to the rhythm of instruction scheduling and execution, but frequently compared to the duration of the whole program execution (sometimes years of continuous execution for some embedded systems). Second, nothing prevents a dual model, where a more traditional procedural reflection, tightly integrated with the base level for language-oriented adaptation, combines with an asynchronous reflection metalevel catering for environmental adaptation.

As a matter of fact, asynchronous reflection does not abolish procedural reflection, because intercession with a base level program still needs a reification of the program to be effective. Full procedural reification is not always necessary for all kinds of adaptation, however. Reflective middleware such as reflective virtual machines can provide the necessary APIs to adapt the base level. In Java, for example, the possibility to modify code by hotswapping, as provided by the Java Platform Debugger Architecture (JPDA) [23], or the code manipulation capabilities provided by reflective JIT compilers [34], can give enough flexibility to attack a wide spectrum of adaptation problems.

3 The Asynchronous Reflection Model

3.1 Kernel entities

The kernel of ARM is largely inspired by the ObjVlisp model of Cointe and Briot [7,12], to which behavioral meta-objects in the line of Ferber [15] are added. The kernel is therefore built around three core structural meta-entities, Entity, StructuralMeta and BehavioralMeta, and a first behavioral meta-entity, hereafter called basicBehavioralMeta, whose role is to be the behavioral meta-entity of all kernel entities, i.e. the three structural meta-entities (“classes”) and itself. We have avoided the classical names Object, Class and MetaObject in order to emphasize the fact that ARM extends the procedural reflection of ObjVlisp with a more generic model for the reified representation of a computational entity, from which many different specific representations can be developed.

Accordingly, we will use the word entity instead of object. ARM can be applied to several different kinds of base level entities: sequential objects, active objects, reactive objects, components, etc. To emphasize this genericity, we will use the notation ARM[·] to convey the fact that ARM is rather a generator of reflective kernels for given choices of the kind of base level entities. ARM is meant to be not directive but liberal in the way it can be extended to apply to specific contexts and to implement specific applications. It therefore focuses more on APIs and relationships between entities than on their actual content.

Figure 2 shows the main “inheritance”, “instantiation” and “meta-of” relationships of the ARM kernel. These relationships must be understood with a specific semantics in the context of ARM, which may have little to do with their traditional meaning. Being an instance here must be understood as “having as structural meta-entity”, while inheritance must be understood as extending a structural meta-entity, and the “meta-of” relationship as “having as behavioral meta-entity”. Specific kernels generated from ARM with a given reified representation are responsible for giving a precise semantics to these relationships. In a procedural kernel, for example, they would take back their traditional meanings.

The graph of Figure 2 subsumes the one of ObjVlisp. Entity describes the common structure and behaviors of entities, while StructuralMeta describes the structure and behaviors common to all structural meta-entities. Being itself an entity, StructuralMeta inherits from Entity. Being itself the first structural meta-entity, it is constructed in such a way that it is its own instance, i.e. it possesses the structure and behaviors of a structural meta-entity.

In addition to the relationships isomorphic to those of ObjVlisp, we have everything that has to do with BehavioralMeta. BehavioralMeta is the structural meta-entity of all behavioral meta-entities, and therefore is an instance of StructuralMeta. Being also an entity, it inherits from Entity. The first behavioral meta-entity, basicBehavioralMeta, is an instance of BehavioralMeta, and it is the behavioral meta-entity of all kernel entities, including itself.

The main flows of events that occur between the three kinds of entities in order to implement the causal connection between the two levels are the following:

– notifications of state changes or of other semantically important modifications flow from the base level entity to its behavioral meta-entity, allowing the latter to construct or refine its model of what is currently going on at the base level;
– requests for reflective computations can flow from the base level entity to its behavioral meta-entity, often to initiate an adaptation phase;
– requests from the behavioral meta-entity to the base level occur when adaptations are made;
– adaptation can also lead to modifying the structural description of a base level entity, and therefore events flow from the behavioral meta-entity to the structural one to implement these modifications;
– accordingly, structural modifications can lead to events flowing from the structural meta-entity to the base level object to harmonize the structure of the latter with the description held by the former.

Apart from the requests from the behavioral to the structural meta-entities, all these communication flows cross level boundaries, and are therefore comparable to the classical procedural reflection operations “reify” and “reflect”. Hence arises the problem of designating reified entities versus base level entities. This aspect needs a more thorough study of the foundations of ARM, to which we plan to return in future work.

3.2 Generalization of the reified representation concept

The reified representation is central to the metalevel. It is generally decomposed into a structural part and a behavioral part. The interest of this decomposition is that the structural part can often be shared among several base level entities (as classes are for objects), while the behavioral part, which accounts for the run-time state of base level entities, is not sharable by its very nature.

To get a point of reference: in sequential procedural reflection, the reified representation is implemented by classes (well-known) and by behavioral meta-objects. This representation comprises a description of the structure of base level entities, as a list of instance variables (with their types and other modifiers), and a description of their behavior, as a list of methods that can be applied by the base level entities. The behavioral representation comprises the execution state of the base level entities, typically the current continuation and the current environment (given that the continuation embodies the environments of all subcomputations still waiting for the result of a method call). In distributed procedural reflection, the local behavioral representation also comprises elements used to manage concurrency, such as the queue of incoming messages and threads.

Being open to several choices of representation, which can even live together in one application, ARM is conceptually a generator of specific reflective models, noted ARM[·](RR), given a choice RR of reified representation. The kernel defines a set of abstract “classes” for the reified representation that imposes its constraints. ARM can therefore be concretely viewed as a model parameterized by a set of concrete classes derived from the representation abstract classes.

The design of this set of abstract classes results from an induction process aiming at bringing out what transcends the different representations. In this process, we have so far looked at three different forms of representation: one based on a classical procedural approach, ARM[·](P); one based on a deterministic finite-state automaton approach, ARM[·](DFA); and finally one based on a statechart approach, ARM[·](SC), where it is possible to have several levels of granularity in a clear semantic framework. The analysis of these three approaches has led to the following minimal concepts of a reified representation:

– the set of possible states of the base level entity,
– a behavior that a base level object can apply to go from one state to another,
– the set of behaviors that a base level entity can apply,
– the activation of a base level object, and
– a possible state for a base level object.

The first three concepts belong to the structural part of the reified representation, while the activation concept is the heart of the behavioral part, as it gathers all the information about the run-time properties of an entity. The state concept can account for the current state of an entity, thus belonging to the behavioral part. On the other hand, the set of possible states can sometimes be defined in extension, as the set of all possible individual states, which leads to seeing the state concept as part of the structural representation too.

ARM represents these concepts as the five entities State, StateSpace, Behavior, Behaviors and Activation. For the sake of homogeneity, these can be considered as entities of the same kind as base level entities, but we do not impose that, as we will see in the ARM for Java platform, where they are abstract classes (partially) describing plain Java objects. Figure 4 provides the UML model of the kernel entities and reified representation for the ARM for Java platform.

These concepts map easily to different choices of reified representation. For example, the state of an entity can be a vector of instance variable values (for traditional objects), a state in an automaton (for the DFA approach), or a (possibly composite) state in the statechart approach. The set of possible states can be the Cartesian product of the sets of admissible values (a product type) for traditional objects, or sets of states defined in extension for DFAs or statecharts. A behavior can be seen as a method in traditional objects, but also as a transition in a DFA or a statechart. Accordingly, the set of behaviors can be either a method dictionary (procedural reflection) or the set of all transitions in the automaton (DFA or statechart).
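To make this mapping concrete, the following minimal Java sketch instantiates the five concepts for the DFA choice ARM[·](DFA); all class and member names in the sketch are our own illustrative assumptions, not the actual kernel classes.

    import java.util.HashMap;
    import java.util.HashSet;
    import java.util.Map;
    import java.util.Set;

    // Illustrative sketch (not the actual ARM classes): the five reified
    // representation concepts instantiated for the DFA approach.
    final class DfaState {                       // "State": one automaton state
        final String name;
        DfaState(String name) { this.name = name; }
    }

    final class DfaBehavior {                    // "Behavior": one transition
        final String label;
        final DfaState source, target;
        DfaBehavior(String label, DfaState source, DfaState target) {
            this.label = label; this.source = source; this.target = target;
        }
    }

    class DfaStateSpace {                        // "StateSpace": defined in extension
        final Set<DfaState> states = new HashSet<>();
        DfaState initial;
        DfaActivation activate() { return new DfaActivation(initial); }
    }

    class DfaBehaviors {                         // "Behaviors": all transitions
        final Map<String, DfaBehavior> byLabel = new HashMap<>();
    }

    class DfaActivation {                        // "Activation": the run-time part
        private DfaState current;
        DfaActivation(DfaState initial) { current = initial; }
        DfaState getCurrentState() { return current; }
        void apply(DfaBehavior b) {              // fire a transition if enabled
            if (current == b.source) current = b.target;
        }
    }

A procedural instantiation would replace DfaState by a vector of instance variable values and DfaBehavior by a method, following the correspondence described above.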

3.3 Events of the communication protocol

The communication protocol between levels relies on asynchronous events. Besides events that implement more traditional method invocations, ARM also needs events to notify the metalevel of changes at the base level. One can imagine two general, well-known approaches to notifying the metalevel. First, the base level can notify its state changes. Even as a differential description of the new state with respect to the current one, this can lead to quite large events if the state of the entity comprises a large amount of data. A second approach notifies the actions taken at the base level, from which the metalevel model can reconstruct the new state given the current one.

These two notification modes are equally interesting in our context. The notification of actions can use a small amount of data if the parameters are of primitive data types only. On the other hand, reconstructing the new state can be computationally intensive in some cases. The notification of state changes can be data intensive if it entails the communication of a large amount of data, but has a low computational cost. From another point of view, in distributed settings, notification of state changes is much more robust to the loss of events than action notification. To let users choose the most appropriate form of notification for individual entities, ARM provides both a generic state notification event, StateEvent, and a generic action notification event, ActionEvent.

Fig.3. Active objects (UML class diagram of the active package: ActiveObject, Future, FutureValue, Event, SynchronizedEvent, MethodRequest, SynchronousMethodRequest, Callable, BoundedBuffer and javax.jms.ObjectMessage).
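As a rough illustration of the two notification modes, here is a minimal sketch; the fields shown are our own assumptions about what such events would carry, not the actual ARM classes.

    import java.io.Serializable;
    import java.util.Map;

    // Minimal sketch of the two generic notification events (assumed fields).
    abstract class NotificationEventSketch implements Serializable {
        protected String senderOID;              // base level entity that published it
        protected String destinationOID;         // behavioral meta-entity (or topic)
    }

    // State notification: ships (part of) the new base level state.
    // Robust to event loss, but potentially data intensive.
    class StateEventSketch extends NotificationEventSketch {
        Map<String, Serializable> stateSnapshot; // instance variables, automaton state, ...
    }

    // Action notification: ships the behavior that was applied; compact, but
    // the metalevel must replay it against its model of the current state.
    class ActionEventSketch extends NotificationEventSketch {
        String behaviorName;                     // method name or transition identifier
        Serializable[] arguments;                // small if of primitive types only
    }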

4 A Java implementation of ARM for active objects

4.1 Asynchronous active objects as base level entities

A concurrent and distributed variant of ARM is implemented on the Java J2EE platform, on the basis of active objects communicating with asynchronous events and synchronizing using future values. In fact, the base level computational model upon which ARM for Java is founded is borrowed from Nierstrasz's Hybrid language [33]. Asynchronous active objects (AAO) represent the unit of concurrent and distributed computation, around which islands of unshared passive objects aggregate. Passive objects cannot communicate directly with objects (passive or active) that are not part of their island; they have to use their owning active object to do so. The implementation is inspired by the Active Object design pattern formalized by Schmidt [37].

Figure 3 gives a UML class diagram of our package active. The core functionality is implemented by the class ActiveObject, which is a thread (it inherits from Thread) and which implements distributed message passing and notification using the Java Message Service (JMS) asynchronous communication API. All events used by ARM inherit from the class Event. An event has a sender and a destination; while it is queued by the receiver, a boolean method isProcessable tells whether the event is processable given the current state of the servant object.

Events can either carry data or activate methods in receivers. Method requests are executable events that implement the interface Callable. A callable event has a guard method which tests the processability of the event given the state of the servant. The servant is set by setServant when the event is enqueued. When processable, the event becomes a candidate for dequeuing and is applied using the method apply.


A method request can be unsynchronized (MethodRequest) or synchronized (SynchronizedMethodRequest). Synchronization is implemented using futures or promises [20,25], a well-known synchronization mechanism in asynchronous communication. When a synchronized event is published, an instance of FutureValue is created in the sender to represent the return value (resp. an instance of Future representing the return signal, when there is no value but just a synchronization signal to be sent back). When the event has been processed, the receiver returns the result (resp. the signal) to the sender. When the sender tries to access the result (resp. the signal) using the getValue (resp. touch) method, there are two possibilities. If the value (resp. the signal) has already been received, the sender gets that value (resp. signal) and continues its execution. If it has not been received yet, the sender waits and resumes its execution only when the value (resp. signal) arrives.
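A minimal sketch of this blocking semantics using plain Java monitors follows; the real Future and FutureValue classes of the active package carry more bookkeeping (identifiers, JMS delivery), which we omit here.

    import java.io.Serializable;

    // Sketch of the blocking future semantics (a simplification of the
    // active package classes; identifiers and JMS delivery are omitted).
    class FutureSketch {
        private boolean resolved = false;

        public synchronized void touch() throws InterruptedException {
            while (!resolved) wait();    // the sender blocks until the signal arrives
        }

        public synchronized void set() { // receiver side: deliver the signal
            resolved = true;
            notifyAll();
        }
    }

    class FutureValueSketch extends FutureSketch {
        private Serializable value;

        public synchronized Serializable getValue() throws InterruptedException {
            touch();                     // wait until the value has been received
            return value;
        }

        public synchronized void setValue(Serializable v) {
            value = v;
            set();
        }
    }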

Events sent to active objects are queued in the object's bounded buffer (class BoundedBuffer). The basic behavior of an active object is to repeatedly remove a processable event from its bounded buffer and process it. When none of the queued events is processable, or when there is no waiting event at all, the active object becomes dormant until a processable event comes in.
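This serve loop can be sketched as follows; the sketch is our own simplification (events are reduced to Runnables, and the processability guards of Callable are omitted), not the actual ActiveObject code.

    import java.util.ArrayDeque;
    import java.util.Queue;

    // Sketch of the active object serve loop (simplified: events become
    // Runnables; the real version dequeues JMS messages from BoundedBuffer).
    class ActiveObjectSketch extends Thread {
        private final Queue<Runnable> buffer = new ArrayDeque<>(); // stands in for BoundedBuffer

        public synchronized void enqueue(Runnable event) {
            buffer.add(event);
            notifyAll();                 // wake the servant if it was dormant
        }

        @Override public void run() {
            while (!isInterrupted()) {
                Runnable event;
                synchronized (this) {
                    while ((event = buffer.poll()) == null) {   // dormant while empty
                        try { wait(); } catch (InterruptedException ie) { return; }
                    }
                }
                event.run();             // process the event outside the lock
            }
        }
    }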

To send events through JMS, active objects first put them into an instance of javax.jms.ObjectMessage and then send them using the JMS API. Communication in JMS is organized in topics. The package active uses the topic called "active/Future" to deliver future values (or signals) between active objects. Three other JMS topics are introduced to organize the communication in ARM: "arm/Entity" for the communication between entities, "arm/SMeta" for the communication between base level entities and their structural meta-entity, and "arm/BMeta" for the communication between base level entities and their behavioral meta-entity.
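For reference, publishing an event on one of these topics reduces to standard JMS calls, roughly as in the sketch below; the JNDI lookup names are our assumptions, and only the topic name "arm/BMeta" comes from the design above.

    import java.io.Serializable;
    import javax.jms.*;
    import javax.naming.InitialContext;

    // Sketch of publishing an ARM event to the behavioral metalevel topic.
    public class ArmPublisherSketch {
        public static void publish(Serializable event) throws Exception {
            InitialContext ctx = new InitialContext();
            TopicConnectionFactory factory =
                (TopicConnectionFactory) ctx.lookup("TopicConnectionFactory");
            Topic topic = (Topic) ctx.lookup("arm/BMeta");
            TopicConnection connection = factory.createTopicConnection();
            try {
                TopicSession session =
                    connection.createTopicSession(false, Session.AUTO_ACKNOWLEDGE);
                TopicPublisher publisher = session.createPublisher(topic);
                ObjectMessage message = session.createObjectMessage(event); // wrap the event
                publisher.publish(message);
            } finally {
                connection.close();
            }
        }
    }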

4.2 ARM for Java kernel

Figure 4 gives a UML class diagram of the kernel entities and of the representation abstract classes of ARM[AAO](·) for Java. Besides the fact that Entity inherits from the class ActiveObject of the active package, we can identify relationships already defined in the model. Entities are represented by asynchronous active objects. Their main behavior lies in how they process incoming events, which is defined by the method process. A structural meta-entity provides methods to add (addBehavior), delete (deleteBehavior) or look up (lookup) behaviors. It also provides methods to get the initial state (getInitialState) for an instance of the structural meta, as well as a mapping from state events to states (mapToState) of its instances. A behavioral meta-entity provides a way to add a new base level entity to be the meta of (addBaseLevelEntity), and to delete an existing one (deleteBaseLevelEntity).

Because structural (and behavioral) meta-entities are also entities, they are also represented by asynchronous active objects. A bootstrap, implemented by the static method bootstrap of StructuralMeta, creates the three corresponding entities for the kernel, as well as the basic behavioral meta-entity.

We have also defined a package representation containing the five classes corresponding to the representation entities of ARM. These classes are abstract and minimal. Only the essential relationships between them are defined; further refinements are deferred to specifically generated kernels. Notice that an activation for an entity is obtained by calling the method activate defined on the StateSpace of the entity.

Finally, ARM defines the two abstract classes ActionEvent and StateEvent. An action event holds at least the name (the method name or the transition identifier) of the behavior whose activation led to the notification of the event.

5 Application to AI robotics

5.1 Control architecture for AI robotics

Fig.4. Kernel and representation entities of ARM (UML class diagram of the arm, representation and events packages: Entity, StructuralMeta, BehavioralMeta, StateSpace, State, Behavior, Behaviors, Activation, StateEvent and ActionEvent).

AI robotic control systems are founded on the organization of the three basic primitive functions: sense, plan and act [32]. After a long domination by the hierarchical paradigm, where the emphasis is put on the creation of an exhaustive model of the environment used for planning, AI robotics was confronted with the difficulty of creating such a complete model and of planning actions from such complex data structures within the strict deadlines imposed by the real-time nature of robot operations.

Using lessons from ethology, Brooks [9] has proposed the reactive paradigm, which abandons planning in favor of very simple reflexes, directly associating a reaction to each possible perception of the robot, without any memory of past perceptions and reactions. In this paradigm, the intelligence of the robot emerges from the combination of a possibly large number of elementary reflexes. The absence of memory is grounded in Gibson's argument that “the world is its own best representation” (quoted in [32]), which should be accessed only through perception.

Fig.5. Behaviors and subsumption: given perceptions, B can Inhibit the input 1 of A; similarly C can Suppress (replace) the output 2 of A.

The reactive paradigm leads to the design of robots capable of reacting very rapidly to stimuli coming from their ecological niche. Reflex behaviors are implemented using three types of behavior modules:

1. perceptors, whose role consists in reading sensors and producing the stimuli (or the absence thereof) looked for by the robot (e.g. a spot of light in an image),
2. reactors, whose role consists in computing the parameters of reactions given the actual stimuli and their intensity, and
3. actuators, whose role is to transform the reaction parameters into orders to the physical actuators of the robot.

When composed, higher-level behaviors can inhibit lower-level behaviors, in much the same way our intelligent behaviors most of the time inhibit our animal ones. That is what is called subsumption in the reactive paradigm. To do so, modules are added to connect perceptors, reactors and actuators, which allow us to inhibit stimuli from perceptors or reaction parameters computed by reactors. Figure 5 illustrates these possibilities.
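A tiny sketch of the two connector kinds of Figure 5 follows; the SignalStub type and the pass() protocol are our own stand-ins, not the framework classes.

    // Sketch of the two subsumption connectors of Figure 5.
    class SignalStub {
        final Object data;
        SignalStub(Object data) { this.data = data; }
    }

    // An inhibitor drops a lower-level signal while the higher-level
    // behavior is active (input 1 of A in Figure 5).
    class InhibitorSketch {
        boolean active;                  // driven by the higher-level behavior
        SignalStub pass(SignalStub in) { return active ? null : in; }
    }

    // A suppressor replaces the lower-level output by the higher-level
    // one (output 2 of A in Figure 5).
    class SuppressorSketch {
        boolean active;
        SignalStub pass(SignalStub lowerOut, SignalStub higherOut) {
            return active ? higherOut : lowerOut;
        }
    }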

The successes of the reactive paradigm in the beginning of the nineties gave the first hope for operational situated intelligent robots. Unfortunately, the lack of a model of the robot environment, and even more the lack of planning, soon appeared to be far too extreme a position. Reactive control can become quite complex when trying to cope with some abnormal situations (looping, for instance), where planning could provide an answer. The emergent behavior of a large reactive robot program can be very difficult to predict or to alter. Hybrid deliberative/reactive architectures have been introduced to get the best of both worlds. A reactive level takes care of robot reflexes in order to react in real-time to events from the environment, while a deliberative level can run in parallel to construct a model of the environment and to plan future actions or to repair current faulty actions (such as looping). One possibility is to have the deliberative level produce new reactive programs, each used for a while at the reactive level until some goal has been reached or some abnormal situation is detected, and then replaced by another reactive program.

5.2 A globally asynchronous but locally synchronous model

In our MAAM project, we aim at using reflection and ARM to implement the reactive part at the base level and the deliberative part at the metalevel. To do so, we have to provide a real-time computational model for the base level entities of ARM. Generally speaking, robotic systems are examples of the family of reactive systems. Reactive systems are those whose main function is to react continuously to events occurring in their environment in order to produce reactions, often in the form of orders executed by actuators to act upon that environment. This distinguishes them from more traditional transformational systems, which compute outputs from inputs. Furthermore, reactive systems must cope with an environment which cannot wait, and they are generally intended to be deterministic. This distinguishes them from more general interactive systems. Typical reactive systems are plant control systems.

One of the most interesting approaches to programming reactive systems is the synchronous approach [18], which is based on a few simple hypotheses:


– events from the environment occur at discrete time instants,
– at each instant, the system computes reactions from all of the events perceived at that instant,
– the time to compute reactions is small compared to the time between two successive discrete instants, and
– reactions can lead to the emission of events, which are perceived instantaneously by all reactive processes at the next discrete time instant, along with environmental events.

The major advantages of the synchronous approach to reactive programming are the expressive power of synchronous parallel composition, the potential for efficient implementation, and the availability of formal verification methods [19]. The appropriateness of the synchronous programming approach to robot control justifies its choice for MAAM atoms. However, if synchronous programming is well adapted to the control of individual atoms, Halbwachs and Baghdadi [19] note that it is not so well adapted to the case of distributed embedded systems, where the intrinsic asynchronism must necessarily be taken into account. Currently, researchers are looking at globally asynchronous but locally synchronous (GALS) architectures to cope with distributed embedded systems. In such architectures, local synchronous processes are composed with each other using asynchronous communication [19].

We have chosen the GALS approach for our MAAM atoms. Unfortunately, if the synchronous approach matches the reactive part of MAAM atoms very well, it is not really appropriate to implement the more AI-oriented deliberative functions. Hence, the GALS model we have chosen for MAAM is a composition of both asynchronous active objects and synchronous reactive ones, composed using a publish/subscribe communication model. Schmidt and O'Ryan [36] have shown that publish/subscribe communication can have a level of performance that is compatible with distributed embedded programming, which justifies our choice.

The mix of active and reactive objects within one system raises the issue of the synchronization mechanisms between both kinds of entities. In the ARM for Java platform, we have used futures to implement synchronization between the asynchronous active objects. This mode of synchronization is not readily adoptable for reactive objects. Obviously, the waiting entailed by the traditional semantics of the touch and getValue operations is not appropriate for reactive objects, given their real-time constraints. To solve the problem, we have simply noticed that active wait, which is usually considered bad practice in concurrent programming, is in fact the usual practice in real-time systems. Hence, we have adopted an active-wait semantics for reactive objects when synchronizing using futures. Futures are seen as any other stimuli upon which behavior modules can be fired. But if, at some synchronous reactive cycle, an awaited future is not available, the other behaviors can continue their execution to keep up with the real-time deadlines while that synchronization remains pending.
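A minimal sketch of this active-wait semantics follows; for clarity we rename the test into a non-blocking availability check, so the names and API below are our assumptions rather than the actual SynchronousFutureValue class.

    import java.io.Serializable;

    // Sketch of the active-wait future semantics for reactive objects:
    // resolving and testing never block, so a cycle always meets its deadline.
    class SynchronousFutureValueSketch {
        private boolean available = false;
        private Serializable value;

        public synchronized void setValue(Serializable v) { // receiver delivers the value
            value = v;
            available = true;
        }

        public synchronized boolean isAvailable() {          // non-blocking "touch"
            return available;
        }

        public synchronized Serializable getValue() {
            if (!available) throw new IllegalStateException("future not resolved yet");
            return value;
        }
    }

    // Within one reactive cycle, a behavior module would typically do:
    //   if (future.isAvailable()) useResult(future.getValue());
    //   else ; // skip this behavior for now; the other behaviors keep running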

5.3 A Java implementation of ARM[GALS]

Most implementations of the synchronous approach are tightly integrated with synchronous languages, which are not appropriate to program robot deliberative functions. Few synchronous systems exist to date in Java¹, the language we have chosen for MAAM as a compromise between the real-time nature of the reactive level and the AI-bound nature of the deliberative level. We have therefore chosen to implement a minimal extension of our active package to introduce synchronous reactive objects with their semantics of synchronization on futures.

Figure 6 shows the new class diagram of the active package. In such an object-oriented setting, it would have been interesting to derive a class ReactiveObject from ActiveObject. Unfortunately, this would have caused difficulties with inheritance. Decoupling active and reactive behaviors would naturally lead to two classes in the ARM kernel: Entity and ReactiveEntity. Because of Java's single inheritance, it is hard to implement the ReactiveEntity class satisfactorily, because it would have to inherit both from Entity and from ReactiveObject. We have therefore chosen to keep just one class ActiveObject with two modes of operation, chosen upon instantiation: synchronous and asynchronous.

¹ SugarCubes [43] is a notable exception, but its implementation is too resource consuming for the MAAM atoms' lightweight electronics.

Fig.6. Reactive objects (UML class diagram of the extended active package: ActiveObject with an operatingMode, Clock, Future, SynchronousFuture, FutureValue, SynchronousFutureValue, FutureEvent, BoundedBuffer, Event, SynchronizedEvent, MethodRequest, SynchronousMethodRequest and Callable).

The asynchronous mode of operation is the genuine ARM for Java behavior described in the preceding section. The synchronous mode is the one of reactive objects. In this mode, a clock object (class Clock) paces the execution of the reactive object threads. At each period, the clock releases all the waiting threads. When released, each thread executes one reactive cycle, which consists in taking all the events waiting in its bounded buffer and processing them according to its reactive behavior. When the processing is done, the thread waits for the next release signal from the clock. Futures are now considered as any other events for reactive objects, thus instances of the class FutureEvent. The classes SynchronousFuture and SynchronousFutureValue implement the active-wait semantics of futures for the synchronous mode of operation of reactive objects. Active objects keep the traditional semantics with Future and FutureValue.
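The clock mechanism can be sketched with a barrier, as below; the CyclicBarrier stand-in for the Clock class and the 50 ms period are our own simplifications.

    import java.util.concurrent.BrokenBarrierException;
    import java.util.concurrent.CyclicBarrier;

    // Sketch of the clock-driven execution of reactive objects: a
    // CyclicBarrier stands in for Clock; the period is an assumption.
    class ReactiveCycleSketch {
        public static void main(String[] args) {
            final int nReactive = 2;
            final CyclicBarrier clock = new CyclicBarrier(nReactive + 1); // + the clock thread

            for (int i = 0; i < nReactive; i++) {
                final int id = i;
                new Thread(() -> {
                    while (true) {
                        try { clock.await(); }           // wait for the release signal
                        catch (InterruptedException | BrokenBarrierException e) { return; }
                        // One reactive cycle: drain the bounded buffer and
                        // process the whole batch of events (processSynchronous).
                        System.out.println("reactive object " + id + ": cycle done");
                    }
                }).start();
            }

            while (true) {                               // the clock releases all threads each period
                try { Thread.sleep(50); clock.await(); }
                catch (InterruptedException | BrokenBarrierException e) { return; }
            }
        }
    }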

5.4 Hybrid reactive/deliberative model

In our MAAM atoms, robot control is defined as a combination of reactive schemas, themselves being sets of perceptor, reactor and actuator modules. From a programmer's point of view, we have implemented a complete reactive framework under the ARM[GALS] for Java platform, whose class diagram appears in Figure 7. The robot programmer only has to:

1. design his/her reactive schemas, taking possible subsumptions into account,
2. create the corresponding modules with classes inheriting from our framework classes,
3. create the classes for the signals exchanged among modules by inheriting from our class Signal, and
4. create a reactive object class, say Robot, inheriting from our class AbstractRobot, with which the reactive behavior instances are then registered (using addBehaviorModule, addConnector, ...); a minimal sketch of these steps follows the list.
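The sketch below illustrates the four steps with self-contained stand-in stubs; the module classes (LightPerceptor, PhototaxisReactor, WheelActuator) are our own inventions, and the stubs deliberately collapse the Schema/ReactiveModule layering described next.

    // Self-contained sketch of steps 1-4 with stand-in stubs; the real
    // framework classes (Signal, Schema, AbstractRobot, ...) have richer APIs.
    abstract class BehaviorModuleStub {
        abstract void react();                       // called once per reactive cycle
    }

    class LightPerceptor extends BehaviorModuleStub {    // step 2: reads sensors
        void react() { /* read the camera, produce a "light" stimulus */ }
    }
    class PhototaxisReactor extends BehaviorModuleStub { // step 2: computes the reaction
        void react() { /* compute wheel parameters from the stimulus */ }
    }
    class WheelActuator extends BehaviorModuleStub {     // step 2: drives the actuators
        void react() { /* turn the reaction parameters into motor orders */ }
    }

    class RobotSketch {                                  // steps 1, 3 and 4, collapsed
        private final java.util.List<BehaviorModuleStub> schedule = java.util.List.of(
            new LightPerceptor(), new PhototaxisReactor(), new WheelActuator());

        void reactiveCycle() {                           // executed at each clock tick
            for (BehaviorModuleStub m : schedule) m.react();
        }
    }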

To organize the reactive behaviors, we propose to have reactive schemas (Schema) composed of reactive modules (ReactiveModule), which themselves comprise behavior modules (BehaviorModule). A reactive schema represents a logical function in the robot; it can be the basis for adaptation by dynamic addition or subtraction of schemas in the robot's reactive program. Behavior modules can be perceptors (Perceptor), reactors (Reactor) and actuators (Actuator).

Fig.7. Reactive framework under ARM[GALS] for Java (UML class diagram of the reactiveframe package: Schema, ReactiveModule, BehaviorModule with Perceptor, Reactor and Actuator, Connector with Inhibitor, Suppressor and Derivator, Signal, AbstractRobot, MetaRobot, Scheduler, Handler and Condition).

Schemas and behavior modules can be connected to each other using connectors (Connector), which comprise inhibitors (Inhibitor), suppressors (Suppressor) and derivators (Derivator). A schema must form a directed acyclic graph. The scheduling of behavior modules is implemented by a topological sort of that graph. Behavior modules in schemas are partially scheduled, up to the connections to other schemas. The complete schedule of a robot instance is computed when all schemas are known. When actually running, the robot simply executes all of its behavior modules, in turn, as scheduled, within one cycle of synchronous reaction. A complete rescheduling is done when one or more schemas are added or subtracted by the robot metalevel. The deliberative part of the framework is currently implemented by one abstract class, MetaRobot, whose main purpose is to do the scheduling of its base level robot. The class Scheduler implements the above scheduling algorithm.
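The scheduling step amounts to a standard topological sort (Kahn's algorithm) of the module graph; the Map-based graph encoding in the sketch below is our assumption, not the actual Scheduler class.

    import java.util.ArrayDeque;
    import java.util.ArrayList;
    import java.util.Deque;
    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;

    // Sketch of the scheduling step: Kahn's topological sort of the DAG
    // of behavior modules, yielding their execution order within one cycle.
    class SchedulerSketch {
        static <M> List<M> buildScheduling(Map<M, List<M>> successors) {
            Map<M, Integer> indegree = new HashMap<>();
            for (M m : successors.keySet()) indegree.putIfAbsent(m, 0);
            for (List<M> outs : successors.values())
                for (M n : outs) indegree.merge(n, 1, Integer::sum);

            Deque<M> ready = new ArrayDeque<>();        // modules with no pending inputs
            indegree.forEach((m, d) -> { if (d == 0) ready.add(m); });

            List<M> schedule = new ArrayList<>();
            while (!ready.isEmpty()) {
                M m = ready.remove();
                schedule.add(m);
                for (M n : successors.getOrDefault(m, List.of()))
                    if (indegree.merge(n, -1, Integer::sum) == 0) ready.add(n);
            }
            if (schedule.size() != indegree.size())
                throw new IllegalStateException("cycle detected: schemas must form a DAG");
            return schedule;
        }
    }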

Figure 8 shows how our deliberative/reactive framework integrates with ARM[GALS]. The class AbstractRobot inherits from Entity, using the synchronous operating mode; base level robot entities are created from it (e.g. robot1 and robot2). A class BMetaRobot defines behavioral meta-entities for robots (e.g. bmrobot1 and bmrobot2).

Fig.8. MAAM robots under ARM[GALS] for Java (the kernel entities at the metalevel; robot1 and robot2, instances of AbstractRobot, at the base level, with bmrobot1 and bmrobot2, instances of BMetaRobot, as their behavioral meta-entities).

The adaptation protocol put in place by AbstractRobot and BMetaRobot proceeds as follows:

1. at each synchronous cycle, notifications from the base level robots are sent to the behavioral meta-entity,
2. the behavioral meta-entity integrates these notifications into its reified model of the base level; this possibly fires an adaptation request, sent to the base level as an event representing a setScheduling or changeModule method request (the time needed to execute this adaptation must stay within the duration of a cycle to match the synchronous hypothesis),
3. the adaptation request is processed by the base level.

The adaptation request cannot be processed like the other events in the synchronous processing cycle. Modifications are better handled when the base level is in some kind of renewal state, which requires the lowest possible amount of work to perform the adaptation. For synchronous programs, this can be either at the beginning or at the end of a cycle. Depending on the priority to be given to adaptation compared to meeting the deadlines, the programmer can choose either policy.

6 Related work

In parallel with the industrial adoption of publish/subscribe communication for distributed programming, as the JMS API testifies, some works have introduced event-based communication in reflective operating systems and middleware [4,3,24,44], and more generally in systems trying to implement forms of dynamic adaptability [27]. We should also mention work in human-computer interaction, like the MVC pattern, as well as the Observer and State design patterns, which have popularized the concept and the use of notification in general. These works, as well as their counterparts in reflection [6,16], have inspired our work on ARM.

Among reflective languages, LEAD++ [1] proposes an approach with tends towards ARM<br />

ideas, without breaking with procedural reflection though. The use of events has also inspired<br />

Dynascope [42], a supervision system, which Sosi˘c read out as reflective. One can see in this<br />

system a precursor of the Java Plat<strong>for</strong>m Debugger Architecture (JPDA) [23]. MetaXa [17] uses<br />

events to reflect upon a Java-like virtual machine, but stays close to procedural reflection.<br />

In object-oriented concurrent and distributed programming, most of the reflective languages<br />

restrict themselves to local reflection where the metaobject can be a processor <strong>for</strong> the base level<br />

(see [8] <strong>for</strong> a review of this wide area). Other systems have retricted themselves to the reification<br />

of very specific aspects of their implementation, such as stubs and proxies, upon which reflective<br />

computations can be locally implemented. All of these sticks to procedural reflection.<br />

The actor and agent research community has identified and begins to explore ideas similar to<br />

the ones developed in ARM. Without sharing the inspiration <strong>for</strong> agents, ARM can clearly share<br />

much of the implementation ideas with that area. Very few papers build bridges between reflection<br />

and control theory, Pii Lunau offering a notable exception [26], yet taking a procedural point of<br />

view.<br />


The synchronous approach to reactive systems is generally associated with synchronous languages, like Esterel. The objective of a synchronous language is to offer a way to describe how events must be processed during a cycle of the logical clock. Several of these languages use the logical concurrent composition between computational activities and event emission. Robotic control architectures, like the deliberative/reactive ones, pursue essentially the same goals. This is why we have not chosen to implement our base-level entities using a synchronous language or system such as Rejo [43].

Arkin’s book [2] is the reference in the area of reactive architectures in AI robotics. We did not find a reference implementation of these ideas in Java. In AI robotics, there is a tendency, which we unfortunately had to follow, for everyone to reimplement their own framework. However, most of the works we had access to use a manual scheduling of behavior modules.

Halstead proposed futures as a synchronization abstraction in Multilisp [20], later refined by Liskov and Shrira into promises [25]. Halbwachs and Baghdadi [19] propose to emulate different synchronization schemes in the synchronous approach, something we actually do with our implementation of futures for synchronous reactive systems. Caromel and Roudier [10] have proposed a reactive extension to the Eiffel// language, but they use an asynchronous approach to reactive systems borrowed from the Electre asynchronous reactive language.
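As a rough illustration of the future abstraction discussed here (our sketch; the platform's own future implementation for synchronous reactive systems is not shown), the caller obtains a placeholder immediately and synchronizes only when the value is first used. Java's standard CompletableFuture captures the idea:

    import java.util.concurrent.CompletableFuture;

    // Illustration of future-based synchronization: the asynchronous request
    // returns immediately; the caller blocks only when the value is needed.
    public class FutureDemo {
        public static void main(String[] args) {
            CompletableFuture<Integer> sensorValue = CompletableFuture.supplyAsync(() -> {
                // stand-in for work done by another active object
                return 42;
            });
            // ... the caller keeps reacting in the meantime ...
            int v = sensorValue.join(); // synchronize here, at first use
            System.out.println("value = " + v);
        }
    }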

7 Conclusion

ARM[·](·) is a new generic reflection model that, as we have argued in this paper, is much better suited to address the challenges of today’s distributed and embedded systems. ARM is inspired by the ObjVlisp model, to which behavioral meta-entities are added. However, the traditional language-processor role of the metalevel is abandoned in favor of a much more general role of model construction and controlled adaptation of the base level. The traditional procedural reified representation is also traded for the much more general concept of a reification model, where incompleteness, fuzziness and a probabilistic account of the base level can be used to capture the properties of the base level necessary to enable the wide range of reflective capabilities needed in distributed and embedded systems.

ARM has been developed and applied to a modular robotics project called MAAM, where the physical reconfigurability of the robots has to be paralleled by an equivalent software reconfigurability. To that end, we have designed and implemented the ARM[GALS] for Java platform. This platform uses a globally asynchronous but locally synchronous approach to the design of distributed embedded systems. In our platform, asynchronous active objects mesh with synchronous reactive objects to implement a hybrid deliberative/reactive framework typical of AI robotics. This implementation proposes to use futures for synchronization in GALS systems, a first to our knowledge.

Numerous perspectives are opened by this work. Asynchronous reflection poses a large number of profound questions, such as the possibilities offered by new forms of reified representations, and the relationship between the kinds of adaptation and the required levels of precision, synchronization, fault tolerance and causality that must be imposed on events notifying the metalevel of state changes in the base level. Another important issue is the marriage of control theory and reflection that must be achieved to keep away from undesirable adaptation policies, which would do nothing but repeatedly adapt the system, for example, to rapidly varying levels of available physical resources. IBM has launched the autonomic computing initiative, where such issues have a deep impact.

8 Acknowledgements

Simon Denier, a student at the INSA of Rennes, contributed to this work during his summer stay in 2002 and then during a semester research period in 2003. The MAAM project, led by Dominique Duhaut of the Université de Bretagne Sud, is funded by the French CNRS under the ROBEA program.


References

1. N. Amano and T. Watanabe. An Approach for Constructing Dynamically Adaptable Component-Based Software Systems using LEAD++. In Cazzola, Stroud, and Tisato, editors, Electronic Proceedings of the OOPSLA’99 Workshop on Object-Oriented Reflection and Software Engineering, OORaSE’99, pages 1–16, 1999.
2. R. Arkin. Behavior-Based Robotics. MIT Press, 1998.
3. G.S. Blair, A. Andersen, L. Blair, G. Coulson, and D. Sánchez. Supporting Dynamic QoS Management Functions in a Reflective Middleware Platform. IEE Proceedings — Software, 147(1):13–21, February 2000.
4. G.S. Blair, G. Coulson, P. Robin, and M. Papathomas. An Architecture for Next Generation Middleware. In Proceedings of the IFIP International Conference on Distributed Systems Platforms and Open Distributed Processing, Middleware’98. IFIP, Springer-Verlag, 1998.
5. D.G. Bobrow, R.P. Gabriel, and J.L. White. CLOS in Context — The Shape of the Design Space. In A. Paepcke, editor, Object-Oriented Programming: the CLOS Perspective, chapter 2, pages 29–61. MIT Press, 1993.
6. M. Braux and J. Noyé. Changement dynamique de comportement par composition de schémas de conception. In J. Malenfant and R. Rousseau, editors, Proceedings of “Langages et Modèles à Objets, LMO’99”, page ??. Hermès, 1999.
7. J.-P. Briot and P. Cointe. A Uniform Model for Object-Oriented Languages Using the Class Abstraction. In Proceedings of the International Joint Conference on Artificial Intelligence, IJCAI’87, pages 40–43, 1987.
8. J.-P. Briot, R. Guerraoui, and K.P. Lohr. Concurrency and Distribution in Object-Oriented Programming. ACM Computing Surveys, 30(3):291–329, September 1998.
9. R.A. Brooks. A Robust Layered Control System for a Mobile Robot. IEEE Journal of Robotics and Automation, 2(1):14–23, March 1986.
10. D. Caromel and Y. Roudier. Reactive Programming in Eiffel//. In J.-P. Briot, J.M. Geib, and A. Yonezawa, editors, Proceedings of the Conference on Object-Based Parallel and Distributed Computation, pages 125–147. Springer-Verlag, 1996.
11. S. Chiba and T. Masuda. Designing an Extensible Distributed Language with a Meta-Level Architecture. In Proceedings of ECOOP’93, number 707 in Lecture Notes in Computer Science, pages 482–501. Springer-Verlag, July 1993.
12. P. Cointe. Metaclasses are First Class: the ObjVlisp Model. Proceedings of OOPSLA’87, ACM Sigplan Notices, 22(12):156–167, December 1987.
13. O. Danvy and K. Malmkjaer. Intensions and Extensions in a Reflective Tower. In Proceedings of the ACM Symposium on Lisp and Functional Programming, LFP’88, pages 327–341. ACM, 1988.
14. J. des Rivières and B.C. Smith. The Implementation of Procedurally Reflective Languages. In Proceedings of the ACM Symposium on Lisp and Functional Programming, LFP’84, pages 331–347. ACM, August 1984.
15. J. Ferber. Computational Reflection in Class Based Object-Oriented Languages. In Proceedings of OOPSLA’89, ACM Sigplan Notices, volume 24, pages 317–326, October 1989.
16. L.L. Ferreira and C.M.F. Rubira. Reflective Design Patterns to Implement Fault Tolerance. In J.-C. Fabre and S. Chiba, editors, Proceedings of the OOPSLA’98 Workshop on Reflective Programming in C++ and Java, pages 81–85. UTCCP Report 98-4, University of Tsukuba, Center for Computational Physics, October 1998.
17. M. Golm and J. Kleinöder. MetaXa and the Future of Reflection. In J.-C. Fabre and S. Chiba, editors, Proceedings of the OOPSLA’98 Workshop on Reflective Programming in C++ and Java, pages 1–5. UTCCP Report 98-4, University of Tsukuba, Center for Computational Physics, October 1998.
18. N. Halbwachs. Synchronous Programming of Reactive Systems – A Tutorial and Commented Bibliography. In A.J. Hu and M.Y. Vardi, editors, Proceedings of Computer Aided Verification, 10th International Conference, CAV’98, number 1427 in LNCS, pages 1–16. Springer, 1998.
19. N. Halbwachs and S. Baghdadi. Synchronous Modelling of Asynchronous Systems. In A.L. Sangiovanni-Vincentelli and J. Sifakis, editors, Proceedings of Embedded Software, Second International Conference, EMSOFT 2002, number 2491 in LNCS, pages 240–251. Springer, October 2002.
20. R.H. Halstead, Jr. Multilisp: A Language for Concurrent Symbolic Computation. ACM Transactions on Programming Languages and Systems, 7(4):501–538, October 1985.
21. Y. Honda and M. Tokoro. Soft Real-Time Programming through Reflection. In A. Yonezawa and B. Smith, editors, Proceedings of the International Workshop on New Models for Software Architecture ’92, Reflection and Meta-Level Architectures, pages 12–23. RISE (Japan), ACM Sigplan, JSSST, IPSJ, November 1992.
22. Y. Ichisugi, S. Matsuoka, and A. Yonezawa. RbCl: A Reflective Object-Oriented Concurrent Language without a Run-Time Kernel. In A. Yonezawa and B. Smith, editors, Proceedings of the International Workshop on New Models for Software Architecture ’92, Reflection and Meta-Level Architectures, pages 24–35. RISE (Japan), ACM Sigplan, JSSST, IPSJ, November 1992.
23. Sun Microsystems. Java Platform Debugger Architecture Home Page. http://java.sun.com/products/jpda, 2002. SDK release 1.4.
24. F. Kon, M. Román, P. Liu, J. Mao, T. Yamane, L.C. Magalhães, and R.H. Campbell. Monitoring, Security, and Dynamic Configuration with the dynamicTAO Reflective ORB. In J. Sventek and G. Coulson, editors, Proceedings of the IFIP International Conference on Distributed Systems Platforms and Open Distributed Processing, Middleware 2000, number 1795 in LNCS, pages 121–143. Springer-Verlag, April 2000.
25. B. Liskov and L. Shrira. Promises: Linguistic Support for Efficient Asynchronous Procedure Calls in Distributed Systems. In Proceedings of the ACM SIGPLAN 1988 Conference on Programming Language Design and Implementation, PLDI’88, pages 260–267. ACM Press, 1988.
26. C.P. Lunau. A Reflective Architecture for Process Control Applications. In Proceedings of ECOOP’97, number 1241 in LNCS, pages 170–189. Springer-Verlag, June 1997.
27. J. Malenfant, M.-T. Segarra, and F. André. Dynamic Adaptability: the MolèNE Experiment. In A. Yonezawa, editor, Proceedings of the Third International Conference on Metalevel Architectures and Separation of Crosscutting Concerns, Reflection 2001, volume 2192 of LNCS, pages 110–117. Springer-Verlag, September 2001.
28. H. Masuhara, S. Matsuoka, T. Watanabe, and A. Yonezawa. Object-Oriented Concurrent Reflective Languages can be Implemented Efficiently. Proceedings of OOPSLA’92, ACM SIGPLAN Notices, 27(10):127–144, October 1992.
29. H. Masuhara, S. Matsuoka, and A. Yonezawa. Implementing Parallel Language Constructs using a Reflective Object-Oriented Language. In G. Kiczales, editor, Proceedings of the First International Conference on Reflection, Reflection’96, pages 79–92. Xerox PARC, 1996.
30. S. Matsuoka, T. Watanabe, and A. Yonezawa. Hybrid Group Reflective Architecture for Object-Oriented Reflective Programming. In Proceedings of ECOOP’91, number 512 in LNCS, pages 231–250. Springer-Verlag, July 1991.
31. J. McAffer. Meta-Level Programming with CodA. In Proceedings of ECOOP’95, number 952 in LNCS, pages 190–214. Springer-Verlag, August 1995.
32. R.R. Murphy. Introduction to AI Robotics. MIT Press, 2000.
33. O. Nierstrasz. Active Objects in Hybrid. Proceedings of OOPSLA’87, ACM Sigplan Notices, 22(12):243–253, December 1987.
34. H. Ogawa, K. Shimura, S. Matsuoka, F. Maruyama, Y. Sohda, and Y. Kimura. OpenJIT: An Open-Ended, Reflective JIT Compiler Framework for Java. In E. Bertino, editor, Proceedings of ECOOP 2000, number 1850 in LNCS, pages 362–387. AITO, Springer-Verlag, 2000.
35. H. Okamura, Y. Ishikawa, and M. Tokoro. AL-1/D: A Distributed Programming System with Multi-Model Reflection Framework. In A. Yonezawa and B. Smith, editors, Proceedings of the International Workshop on New Models for Software Architecture ’92, Reflection and Meta-Level Architectures, pages 36–47. RISE (Japan), ACM Sigplan, JSSST, IPSJ, November 1992.
36. D.C. Schmidt and C. O’Ryan. Patterns and Performance of Distributed Real-time and Embedded Publisher/Subscriber Architectures. Journal of Systems and Software, 66(3):213–223, June 2003.
37. D.C. Schmidt, M. Stal, H. Rohnert, and F. Buschmann. Pattern-Oriented Software Architecture: Patterns for Concurrent and Networked Objects. Wiley & Sons, 2000.
38. W.-M. Shen, B. Salemi, and P. Will. Hormone-Inspired Adaptive Communication and Distributed Control for CONRO Self-Reconfigurable Robots. IEEE Transactions on Robotics and Automation, October 2002.
39. B.C. Smith. Reflection and Semantics in Lisp. In Proceedings of the 11th Annual ACM Symposium on Principles of Programming Languages, pages 23–35. ACM, January 1984.
40. B.C. Smith. Discussion session. First Workshop on Reflection and Metalevel Architectures in Object-Oriented Programming, OOPSLA/ECOOP’90, October 1990.
41. B.C. Smith. What do you mean, meta? In J.-P. Briot, B. Foote, G. Kiczales, M.H. Ibrahim, S. Matsuoka, and T. Watanabe, editors, Informal Proceedings of the First Workshop on Reflection and Metalevel Architectures in Object-Oriented Programming, OOPSLA/ECOOP’90, October 1990.
42. R. Sosić. Introspective Computer Systems. Electrotechnical Review, 59(5):292–298, December 1992.
43. J.-F. Susini. Implémentation de l’approche réactive en Java : les SugarCubes v2. In Actes de Modélisation des Systèmes Réactifs, MSR’99. Hermès, 1999.
44. N. Wang, M. Kircher, and D.C. Schmidt. Towards a Reflective Middleware Framework for QoS-Enabled CORBA Component Model Applications. In Proceedings of the Reflective Middleware Workshop, RM 2000, 2000. Electronic proceedings.
45. T. Watanabe and A. Yonezawa. Reflection in an Object-Oriented Concurrent Language. Proceedings of OOPSLA’88, ACM Sigplan Notices, 23(11):306–315, November 1988.
46. T. Watanabe and A. Yonezawa. An Actor-Based Metalevel Architecture for Group-Wide Reflection. In J.-P. Briot, B. Foote, G. Kiczales, M.H. Ibrahim, S. Matsuoka, and T. Watanabe, editors, Informal Proceedings of the First Workshop on Reflection and Metalevel Architectures in Object-Oriented Programming, OOPSLA/ECOOP’90, October 1990.
47. Y. Yokote. The Apertos Reflective Operating System: The Concept and its Implementation. Proceedings of OOPSLA’92, ACM SIGPLAN Notices, 27(10):414–434, October 1992.
48. E. Yoshida, S. Murata, S. Kokaji, A. Kamimura, K. Tomita, and H. Kurokawa. Get Back In Shape! A Hardware Prototype Self-Reconfigurable Modular Microrobot that Uses Shape Memory Alloy. IEEE Robotics & Automation Magazine, 9(4):54–60, 2002.

[Figure: agent architectures plotted as complexity of agent architecture (from reactive to deliberative) against number of agents, contrasting individual intelligence (direct communication) with collective intelligence (swarm intelligence, indirect communication).]


[Figure: a paralyzed robot on a carriage, flanked by a left attractive signal and a right attractive signal.]



[Figure: state machine of the signal emitter; Stopped while the destination is reached, Emission of two attractive signals (left and right) while it is not, and, on a wrong direction (deviation), Update values to emit (left and right side) before resuming emission once updated.]


[Figure: behavior selection of a pushing robot; Random walk (with obstacle avoidance) until signal perception, then Move towards signal origin with obstacle avoidance; on perception of the carriage arm or mobile robot, Move towards arm / mobile robot with obstacle avoidance; on contact, Push forward with a force in proportion to signal intensity; the robot falls back on no signal, no perceived arm or robot, or no contact.]

[Figure: closed-loop transport of a Paralyzed Robot (or a person); vision measures the path deviation d, left and right force signals f(d) drive two groups of reactive mobile robots, and their forces are combined mechanically to move the reactive MAS.]


[Figure: (a) Actions: an agent among neighbor agents a, b and c exchanging signals (I); (b) agent architecture: perception of neighbors, perception (sat. P), neighbors evaluation, and action selection & emissions, situated between Agent and Environment.]

������� ��� ������� � ���������������������������������������������������������������<br />

���������������¨�������������������������������������������������������<br />

�������������������������������������������������������������������������������<br />

�������������<br />

���������������������������������������������������������������������������<br />

�����������������������������������������������������������������������<br />

�������������������¨����������������������� ���������������<br />

�������������������������<br />

���¥��� �����������������������������������������������<br />

���������������������<br />

��� � �������������������<br />

�����������������������������������������¨�����������������<br />

���������������������������������������������������������������������������������<br />

�����������������������������������������������������������������������������������<br />

�������������������������������������¢�������������������¢���������������������������<br />

��� � ���������������������������������������������������¨�����������������������<br />

�����<br />

�������������������������������������������������������������������������������������<br />

� �����������������<br />

�¨�������������������������������������������������������<br />

� ����� ��� � �������������<br />

���������¦���������������������������������������������¦�������������������������<br />

���������������������������������������������������������������������������������¨���<br />

�������������������������������������������������������������������������������������������<br />

���������������������������������������������������������������������������������������<br />

�������������������������������������������������������������������������������������<br />

��� ������� ���������������������������������������������¤����������� ��� � �<br />

�<br />

���������������������¨���������¨�����������������������������������¨�����������������<br />

���������������������������������������������������������������������������������<br />

�������������������������������������������������<br />

�������������������������������������<br />

�������������������������������¨���������¢�����������������¦�����������������<br />

�������������������������������������������������������������������������������<br />

�����������������������������������������������������������������������������<br />

�����������������������������������������������������������¦���������������<br />

� ��� ������� � ��� ����� ��� � ��� ������� � ��� � ��� �<br />

��� �����������������������������������������������������������������������<br />

���������<br />

���������������������������������������������������������������������������������������<br />

�����������������������������������������������������������������������������������<br />

�������������������������������������������������������������������������������������������<br />

���������������������������������������������������������������������������������<br />

�������������¨�������������������������¦�����������������������������������������<br />

� ���������������������������������������¤���������������<br />

�������������������������<br />

�������������������������¦���������������������������������������������������������<br />

�������������������������������������������<br />

���������������������������������������������������������������������¤�������������������<br />

���������������������¦���������������������������������������������������������������<br />

���������������������������������������¨�����������������������������������������������<br />

� �����������<br />

�������������������������������������������������������������������<br />

� ����� ��� �<br />

�������������������


First National Workshop on Control Architectures of Robots - April 6,7 2006 - Montpellier<br />

�����������������������������������������������������������������<br />

�������������������������������¥���������������������������������������������<br />

��������������������������� � �������������������������������������������������<br />

�<br />

� ���������������������������<br />

���������������������������������������������������������<br />

������������������������������������������������������� � �����������������<br />

���������<br />

������������������������������������� � ������������� ��������� ���������������<br />

�<br />

� ������������������������� �������¨���������������������������������������������<br />

� ��������� � �������������������������<br />

���������������������������������������������<br />

� ����������������������������������������������� � �����<br />

���������������������<br />

� ���������������������������������������������������<br />

�����������������������������<br />

� ��������������������� � ���������������������������<br />

�����������������������������<br />

� ���������������������<br />

�������������������������������������������������������<br />

� ����������������� � ��������������������������������������������������� � �������<br />

�����<br />

� �����������¥�����������������������������������<br />

�����������������������<br />

� ������������� ���������<br />

��������� �������������������������������������������������<br />

���������������<br />

� �������������������������������������������������<br />

�������������������������������������<br />

�������������������������������������������������������������������������������������<br />

�����������������������������������������������������������������������������������<br />

������������������������������������������������������������� � �������������������<br />

�<br />

� �����������������������<br />

�����������������������������������������������������������<br />

� ���������������������������������<br />

���������������������������������������������������<br />

� �����������������������������������<br />

���������������������������������������������<br />

� ����������������������������������������������� � ���������������������<br />

���������<br />

���������������������������������������������<br />

� ��������������������������������������� � ���<br />

���������������������������������������<br />

��������� �<br />

�������������������������������������������������������¨�������������<br />

� ����� � �������������������������������������<br />

���������������������������������<br />

�����������������������������������������������������������������������<br />

�����������������������������������������������������������������������������<br />

� ���������������<br />

�������������������������������������������������������������<br />

�����������������������������������<br />

� �����������������������������¥������������� � ����������� � �<br />

���������������<br />

� ���������������������������������������������������������������<br />

�����������<br />

� �����<br />

���������������������������������������������������������������������������������<br />

�������������������������������������������������������������������������������������<br />

� �������������������������������¢���������������<br />

���������������������<br />

� ���������<br />

�������������<br />

� ����������������������������������������� � �����������������������<br />

�����<br />

�������������������������������������������������������������������������������<br />

���������������������������������������������������������������¥���������������������<br />

� ������� � � � �������������������������������������������������������������<br />

�����<br />

� � � ������������������������� � �������������������<br />

�������������������������������<br />

189<br />

������� � �����������������������������������������������������������������������<br />

���������������������������������������������������������������������������������������<br />

���������������������������������������������������������������������������������������<br />

�����������������������������������������������������������������������������������<br />

�������������������������������������������������������������������������������������������<br />

�������������������������������������������������������������������������������������<br />

�������������������������������������������������������������������¢���¢���������<br />

�����������������������������������������������������������¤�����������������������<br />

�����������������������������������������������������������������������������������<br />

�������������������������������������������������������¤�������������������������������<br />

�������¦�������������<br />

�����������������������������������������������������������<br />

�������������������������������������������������������������������������������<br />

�����������������������������������������������������������������¦�����������������<br />

� � � � ����� ����� � � � ���<br />

���������¤�����������������������������������<br />

� � � ���������������������������������������������������<br />

���������������<br />

�����������������������������������������������������¦�������������������������������������<br />

�����������������������������������������������������������<br />

� ����������������� ������������������������� ����� �<br />

���������<br />

���������������������������������������������������������������������������¤�����������<br />

� ���������������������<br />

���¦���������������������������������������������������������<br />

� �¤�����������������<br />

�������������������������������������������������������<br />

� � ��� � � ������� � ���������������������������¦�������������<br />

���������<br />

���������������¨�����������������¤�����������������������������������������������������<br />

���������¤�������������������������������������������������������������������<br />

�������������������������������������������������������������������������������<br />

�������������������<br />

� ��� ������� � ������������� � ��� ��������� ��� � ������������� � �<br />

�������������<br />

� ���������


First National Workshop on Control Architectures of Robots - April 6,7 2006 - Montpellier<br />

� �������������������<br />

�������������¨�����¢�������������������������¦���������¤�<br />

�������������������������¨���������������¢�������������������������������������<br />

���������������������������������������������������������������������������������<br />

���������������¨���������������������������������������������������������<br />

�����������������������������������������������������������������������������������<br />

���������������������������������������������������������������������������������<br />

�������������������������������������������������������������������������������<br />

��������� � � ��� � ��� �<br />

�����������������������������������������<br />

� ���������������������������������������������������<br />

�������������¦�����������<br />

�����������������������������¨�����������������¨�������������������������������<br />

���������������������������������������������������������������������������������������<br />

�����������������������������������������������������������������������������<br />

���������������������������������������������������������������������������������������<br />

�������������¨�������������������������������������������������¨�������������������<br />

�������������������������������������������������������������������������������������������<br />

� ������������� � �����<br />

�����������������¦�����������������������������������������������������������������<br />

�����������������������������������������������������������<br />

�������¦���������<br />

�¨�������������������������������������������������������������������¢�����������������<br />

�������������������������������¨�������������������������������������������������<br />

�����������������������������������������������������������������������������<br />

�����������¢�������������������������������������������������������������<br />

�������������������¦�������������¨���������������������������������������������������<br />

�����¢�������������¤���������������������������¦���������������������������������<br />

�����������������������������������������������������¨�������������������������<br />

���������������������������������������������������������������������������������<br />

� ���������������������������������<br />

�����������¢�����������������������������������<br />

�������������������¨�����������������¢�������������¢���������������������������������<br />

�������¤�����������������������������������������������������������������������<br />

���������������������������������������������������������������������������������<br />

���������������������������������������<br />

�����������������������������������������<br />

� ����������������� ������������������������������������� � �����<br />

���������<br />

���������������������������������������������������������������������������������<br />

�����������������������<br />

�������������������������������������������������������������������������<br />

�������¨�¨�����¨�����������������������������������������¦���������������������<br />

�������������������������������¦���������������������������������������������<br />

�����������������������������������������������������������������������������������<br />

�������������������������������������������¦���������������¤���������������<br />

�������������������������������������������������������������������������������<br />

�����������������������������������������������������������������������������<br />

�����¢�������������������������������������������������������������������������<br />

�����������������������������������������������������������������������������<br />

���������������������������������������������������¦�����������������������<br />

� ��� � � �����������<br />

�����������������������������������������������������������������������������<br />

�������������������������������������������������������������������������������<br />

�������������������������������������������������������������������������<br />

���������������������¦���������������������<br />

�������������������������������������������������������������������������<br />

���������������������������������������¢�������������<br />

� �����������������<br />

� ��� � � �����������������������������������������������������<br />

�����������������<br />

������������������������������������������������������� �����������������<br />

190<br />

�¨�������������������������������������������������������������������<br />

��������� ������������� ������������������������� � � ����������� ���������������<br />

�<br />

� ������������������� ������������� ��������� �<br />

�����������������������<br />

��� � � � �������������������������������������������<br />

�����������������¤���<br />

�������������������������������������������������������������������<br />

¤ �����������������������¤�����<br />

�������������������������������������������<br />

�����������������������������������������������¤�����������<br />

�������<br />

����� � � ������� � �������<br />

������� � ���������������������������������������������<br />

�����������������¤���<br />

�¨���������������������������������������¢���������������������������������<br />

���������������������������������������������������������<br />

���������������������<br />

��� ����� ��� � �<br />

���������������������<br />

����� � � � �¤�����������������������¤��������������� �������<br />

�����������������¤���<br />

�����������������<br />

������� � �������������������������<br />

�����������������¤�����������������������<br />

�����������������������������<br />

�����������������������������������������������<br />

§��������������������������������<br />

�������������������¢�����������������������¤�����<br />

� ��������� � � � �<br />

�¤������������� ������� � �������������������������<br />

�������������������������<br />

���������¦���������������������������������������¨���������������������������<br />

¤ ������������������� ����� � �������������������������<br />

�����������������<br />

� � �������������<br />

�������������������¤�������������������������������������<br />

� � ��� ���<br />

���������������¨�������<br />

�¤�¤���������¨���<br />

���������������������������¤�¤�¤�������������������������<br />

������� � �����������������������������������������������������������������<br />

�<br />

������� ���������������<br />

�����������������������������������������<br />

� � � � �������<br />

�����������������������������������������������������<br />

� �����<br />

��������������������������� �������������������������<br />

�������������������������<br />

������� � �¤������� ���������¨�����������������������������������������<br />

�<br />

�¤����� §��<br />

�������������������������������������������¦�����������������������<br />

��������� ¤ �����������������������������������������������<br />

�����������<br />

���������������������������������������������������������������������������<br />

¤ ������� �<br />

�����������¨�������������¢�����������������¦���<br />

� ������� ��� � � � � � �������������������������������<br />

�����������������������<br />

¤ �������������������<br />

�������������������������������������������������¤�����<br />

���¨���������������¤���������������������������������������������<br />

�����¨�����������������������������<br />

���������������������������������������<br />

�����������<br />

������� � � ���������������<br />

���������������������������������������������������<br />

�������������������������������<br />

���������¦�������������������<br />

�¨�¦� ��� ��� � �������������������������������������������<br />

�����������������<br />

�������������������������<br />

�����������������¨�������������������������<br />

¦£���������� � � � ��� ���<br />

�����������������������¤������������������<br />

� ����������������������������������� � ��� � � �<br />

���������������������������<br />

�����������������������������������������������������������������������<br />

�������������������������������������������������������������������������������<br />

� � ���������������������������������������������¤����� §��������<br />

�����<br />

����������������������������������� ����� �<br />

�����������������������<br />

������� � �<br />

���������������������������¤�������������������������������������<br />

�¤�����������������������������������������������������������������������������<br />

��� ����������������� �������<br />

�����������������������������������¤�<br />

��� ������� §������������ ����� ¤ ������������� ����������� �<br />

�������������<br />

� � ��������� � �����<br />

�¨��� ������� � � � �������������������������������������������<br />

�������������������<br />

¤ ��������������� �<br />

�������������¢�����������������������������������������<br />

����������������� � �����<br />

� ���¤���������������������������������������<br />

���������������������<br />

� ����������������� � �������¢����������������������� � ��� � � �<br />

���<br />

��� � � ��������������������������������������� §��������������������<br />


First National Workshop on Control Architectures of Robots - April 6,7 2006 - Montpellier<br />

���������������������������������������������������������������<br />

�����<br />

��� � ����� ��� � ��� �����<br />

�<br />

����� � �����������������������������������¨��� � �����<br />

�¤�����������������������<br />

������� � �����������������������������������������������������������<br />

���������<br />

�������������������©����������������������������¤�������������������<br />

���������<br />

� ���������������������<br />

������� � �����������������������<br />

�¤�����������������������������������������<br />

�����¨�����������������¢���������¤�����������������������������������������<br />

���������������������������������������������������������������������������<br />

��� ¤ �������������������������¢�¤���¢�������� �<br />

�����������������������<br />

������� ��������������������� ������������������������©����<br />

���������������<br />

����������� �����������¤�������������¨������������� �<br />

�¨�<br />

����� � � �<br />

�¤���������������¨�������������¨���������������������������<br />

� ���������������������������������������<br />

�������������������������������<br />

� � � ������� ���¤���<br />

���������������������������������������������������������<br />

� ����� � ��������������� �<br />

������� � � �����������������������������������������������������<br />

�������������������<br />

§�������������������������������� �<br />

�������������������¨�������������������¢�¤�����<br />

� ����������� �¤�������������<br />

����� � � ���¤���������������������������������������������������<br />

�������������<br />

�¤����� §��������������������������������¢����������������������������� �<br />

�����<br />

�¤�����������������<br />

��� � � � � � ���������������������������������������������������<br />

�����������������<br />

���������������������������������<br />

���������������������������������<br />

��������������������������� � � � ��������� �<br />

¤<br />

� ����������������������������������� ���������������<br />

�����������������¨�������<br />

� ��� � � �����������������������������������������������������������������<br />

�<br />

�����������������������������������<br />

�����¨�����������������������������������<br />

������� �<br />

����������������������������������������������������<br />

� ����������������� � �������������������������������<br />

�������������������������<br />

� ��� � � �������������������������������������������������������<br />

�<br />

�������������<br />

�������������������¦�������¨�����������������¦�������������<br />

� ����� �������<br />

���������������������������������������������������<br />

� �����<br />

���������¤�������������¨������������� ������� � ���������������������<br />

�<br />

� �����������������������������������<br />

���������������������������������������������<br />

�����������������������������������������������¦�����������������¤���<br />

��� � �¢����� �����������������¨�������������<br />

�������������������<br />

� ��� � �����<br />

�����������������������������������<br />

���������¤� ����� �¤� �¨� ����������������� � ����� �����<br />

�<br />

� ��� � � �����������������������������������������¦�������������<br />

�¨�������������<br />

������������� � � �������������������������������<br />

�������¨�����������������<br />

�<br />

�����������������������������������������������������������������������<br />

� �����������¢�����¦�����������������¤� � ��� � � �<br />

�����������������������������<br />

�����������������������������������������������������������¦�����������������<br />

�����������������������������������������¨���������<br />

�������������������������<br />

�������������������������¨�������������������<br />

�������������������������������<br />

� � �������������������<br />

���������������������������¤�����������������<br />

� ��� ����� � ��� � �����<br />

������� � ��� �����������������������������������������<br />

���������������<br />

�����������������������������������������������������������������������������<br />

�����������������������������������������������������������������<br />

�����<br />

��� � �<br />

�����������������������������������������¤���������������������<br />

� ��� ����� ���<br />

�����������������<br />

������� � �������������������������������������������<br />

�������������������������<br />

���������������������������������������������������������������������������<br />

�����������������������¤�����������������������������¨�����������<br />

��� ����� ��� � � �<br />

����� ������� � ���������������������������������������������������<br />

�������������������<br />

���������������������������������������������������������������������������<br />

191<br />

���������������������������������������������������������<br />

�����������������<br />

§������������������������������������������ ��� � � � �������<br />

���������������¤�����<br />

� � ����� � � ������������� ����������� �����<br />

�������������<br />

�����������§���������������������������������������������������������������������� �<br />

���������<br />

� ��� � � ���������������������������¤���������������������������������<br />

���������������<br />

��� ¤ ����������������������� � �����������������<br />

�������������������������<br />

�<br />

�¨�����������������������������������������������������������������<br />

� � ������� ��� �����<br />

� ����� � �¨���������������������������¢���������������<br />

�����������������¦�<br />

�����������������������������������������������������¢�����������������������<br />

�<br />

�¨���������������������¤�������������������������������������������¨���<br />

���������������������������������������������������������������������<br />

����� � �������<br />

��� �¨��� ������� � �����������������������������������������<br />

�����������������¨�<br />

���������������������������������������<br />

���������������������������������<br />

��������������������������������������������� ��� ����� � � �<br />

�¤�������<br />

�¨� �¨������� � ��������������������� � ��� � � �������<br />

�����������������¨�<br />

������������������������������������� �¨�������������������<br />

���������������<br />

��� ���������������������������������������������������<br />

���������������������<br />

������� ��������� �����<br />

�����������������������������������<br />

������������������� � ��� � � � ��� ��������� �������������������������<br />

�<br />

�����������������������������������������������������������������������������<br />

����������������������������������������������� ��������������������������� ���<br />

�<br />

�������������������������������������������������������������<br />

�����������������<br />

�����<br />

������������������� � ��� � � �����������������������������������������������������<br />

�<br />

�����<br />

�������������������������������������������������������������������<br />

�����������¤�������������������������������������������������������������¤���<br />

�<br />

�����������������������������������������������������¢�����������<br />

� � ������� � ��� � �����<br />

����������������� ����� ����������������� � � ����� � �����<br />

�<br />

�����������������������������������������������������������������������������<br />

���������¨���������������������������������������������������������������������<br />

� ���������������¨�������������¤����� §��������������������<br />

�������������<br />

������� ������� � �������������<br />

���������������������������������������¢� � ����� � �������������������������<br />

�<br />

�������������������������������������������������������������������������<br />

���������<br />

�����������������������������������������������������������������<br />

�������������������������<br />

���������������������������������������������<br />

������� ����������� � �<br />

�������������������������������������������<br />

����� � � ��� �������<br />

������������������������������������������� � ����� � �������������������<br />

�<br />

�������������������������������������������������������������������������������<br />

��� ¤ �������������������������<br />

���������������������������¨���������������<br />

������� � ���


Horocol language and Hardware modules for robots

Dominique Duhaut, Claude Gueganno, Yann Le Guyadec, Michel Dubois
Valoria
Université de Bretagne Sud
Lorient Vannes, Morbihan, France
dominique.duhaut@univ-ubs.fr

Abstract

This work falls within the general field of collective robotics. In this paper, we present results on the design of (1) our robotic component, called Atom, and (2) the informal semantics of the HoRoCoL language. The expressiveness of the language is illustrated on a simple example.

At the hardware level, we propose a versatile architecture easily adaptable to most mechatronic systems. The hardware is based on a processing unit developed around a CPU + FPGA computing system communicating through Bluetooth. On this hardware we build a software architecture in which each robot embeds its own description in an XML file. Control interfaces and programming tools are self-reconfiguring, depending on the XML description of the robot. This enables quick technology transfer to many mechatronics applications.

At the software level, we present the Horocol language for programming a society or teams of robots. An example illustrates its principal features. The language has been developed to express the behaviours of a set of teams of robots or agents. We focus on its originality, which lies in the instructions for programming team coordination.

Introduction

This project takes place in the more general field of reconfigurable modular robotics, where several experiments can be mentioned. The M-TRAN (Modular Transformer, AIST), described in [1], is a distributed self-reconfigurable system composed of homogeneous robotic modules. CONRO (Configurable Robot, USC) is a robot made of a set of connectible, autonomous and self-sufficient modules [2]. ATRON is a lattice-based self-reconfigurable robot [3]; see also PolyPod (Xerox) [4], I-Cube (CMU) [5] and Hydra. These robots generally consist of modules working together, where each module is permanently linked to at least one other.

Programming such reconfigurable systems is a difficult task [6]. This field covers very different concepts: methods or algorithms (planning, trajectory generation, ...) or, classically, architectures for robot control, usually hierarchical: centralised [7], reactive [8], hybrid [9, 10, 11]. Some languages have been developed to implement these high-level concepts [12, 13]. Different paradigms have also been proposed: functional [14, 15, 16], deliberative or declarative [17, 11, 18] and synchronous [12]. In any case, we can schematically summarise the difficulties of robot programming in two main characteristics:

• programming an elementary action (primitive) on a robot often amounts to a program with many processes running in parallel, with real-time constraints and local synchronisation
• interactions with the environment are driven via traditional features: interrupt on event or exception, and synchronisation with another element.

The recent introduction of teams of robots, where cooperation and coordination are needed, brings an additional difficulty: programming the behaviour of a group of robots or even a society of robots [19, 20, 21, 22]. In this case (except under centralised control), programming implies loading a specific program onto each robot, because of the robots' different characteristics: different hardware, different behaviours and different programming languages. These distinct programs must in general be synchronised to carry out group missions (foraging, displacement in patrol, ...) and have reconfiguration capabilities according to a map of cooperation communication.


From the human point of view it is then difficult to have simultaneously an overall vision of the group on three levels: the social level, where we look for the global behaviour of any robot; the team level, where we focus on a specific group of robots; and the individual robot level.

The definition of our general language HoRoCoL is driven by these three levels of team programming: Social, Group, Agent. Social and Agent programming are very classical; the original part of this work is Group programming, where we introduce two original instructions: ParOfSeq/SeqOfPar and the where instruction.

This paper presents the design of our modular robotic component, called Atom, together with preliminary results on the prototype, and introduces the HoRoCoL language.

Hardware level

This section gives a quick presentation of some mechanical aspects of the basic module (atom) and then a description of the hardware and embedded software. Some information on the progress of this project can be found in . Priority was given to the high-level tools and the communication middleware.

Mechanical design

One atom is composed of six legs directed towards the six orthogonal directions of space. They allow the atom to move itself and/or dock to another one. The carcass of the atom consists of six plates molded out of polyurethane. A carcass weighs approximately 180 g. The first walking prototype of the atom is shown in Fig. 1.

Fig. 1 - The first prototype of the Maam robot - The CPU board - The CPU organisation

The CPU has to:
- control 12 axes (2 DOF per leg): each leg is driven by two servo-motors, and a servo-motor is controlled by a PWM (Pulse Width Modulation) signal. The servo includes a motor, an angle reducer and a P.I.D. regulator.
- control the docking of two legs: the mechanical system under consideration provides a flip-flop control. The same control must alternately couple and uncouple the two atoms.
- identify the legs touching the ground: an atom may have 3 or 4 legs touching the ground at the same time. The presence of pincers at the tip of the leg makes the installation of a sensor difficult. We extract this information from inside the servo by processing some control signals of the PID regulator.
- line up 2 legs: the mechanical connection between two atoms requires the lining up of two legs. We propose an infrared transmitter/receiver system. The search for an optimal position requires the use of 6 analog-to-digital converters for each atom. It may be useful to activate or deactivate the transmitter if necessary, which leads to adding 6 digital outputs to our system.
- communicate with another atom or with a host computer: this aspect is discussed in the next section.

We also have the following general constraints for robotic and embedded systems:
- mechanical: the electronics are embedded in a robotic atom; they must fit in a cube whose edges are < 50 mm.
- adaptation: the emergence of new requirements, due to unforeseen problems during the development of the robotic atom, must not call the general architecture into question.

Embedded electronics

The architecture represented by the diagram in Fig. 1 takes the previous enumeration of functions and constraints into account. The embedded electronics are built around a Triscend TE505 CSoC. The TE505 integrates an 8051 CPU, an FPGA with 512 cells and an internal 16 KB RAM. It is completed by an A/D converter card and an external Bluetooth module for radio communication.

This solution gives a suitable answer to the previous constraints. The micro-controller provides the usual functions of a computing architecture: central unit, serial line, timers, internal memory. With the FPGA we can realise the equivalent of an input/output card with low-level functionalities; it provides most classical combinatorial and sequential circuits (latches, counters, look-up tables, comparators). With the 512 cells built into the TE505 we could implement the twelve PWM commands, as well as the command of the A/D converter (MAX117) in pipeline mode, and other inputs/outputs. We can thus command each axis by just writing to one register, and monitor the level of the IR receptor simply by reading a register that is refreshed in real time.

Low-level software

Because managing a team of robots effectively supposes discovering the robots actually reachable in the area, learning the capabilities of each one, being able to distribute an application among them and, possibly, remote-controlling any of them, we developed the following architecture.

The host computer searches for the accessible Bluetooth robots and builds the first map of communication between them. The host computer receives all the XML files coming from the robots; these files describe the commands and sensors available on each robot. A specific interface allows direct use of the commands available on each robot. It is also possible to use the generic interface to program the whole set of robots using the Horocol language (described further).


After compilation, the program can be loaded into each robot connected to the system. In each robot, a local language interpreter runs its own part of the code. This general procedure permits programming a whole set of robots with a unified system, which makes Horocol a multi-agent programming language.

Fig. 2: Ambient robotics presentation

Horocol Language

In Horocol we distinguish three levels of programming: social programming, coordination programming and agent programming.

Social programming

Social programming is used to express the general behaviour of the sets of agents. It gives a general description of "which set of agents is doing what". It is a meta-language of behaviour. It offers a high-level point of view to the programmer, who describes the computation in terms of composition of events and directed parallel programs synchronised by areas. An area stands for a set of agents, virtually or physically distributed over a network of computers or a set of mechatronic agents, pursuing the same goal.

Main structure

Horocol ::= *import file.xml ;*
            programHorocol program_name {
              agents_set_declaration      // Declaration section
              * global_instruction ; *    // Programming section
            }

import is used to express which real robots are used in this program.
We assume that an agent (robot) is defined by a set of 3 files:
• primitive.xml, which describes the elementary actions that the agent can perform (for instance, in the example below, myRobot will be able to: "findTheBall() or searchNeighbourg() ...");

• langage.xml, which describes the kind of program that the agent (robot) can execute. This language can be a standard programming language (Java, C++, ...) or a specific language for an industrial robot. In our case the target language is the interpreter described in Fig. 2;

• horocolSystemBasics.xml, which describes the list of system features available for this agent (robot), like communication, synchronisation... This file allows a Horocol engine to know whether it is possible to generate, from a Horocol program, a specific program for this agent (robot).
These 3 files are supposed to be merged into a single file, file.xml. In the case of a set of heterogeneous robots there will be several different file.xml files: one per kind of robot, as sketched below.
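As a minimal sketch (the file names wheeler.xml and flyer.xml, the type names and the event are all hypothetical, not taken from this paper), the declarations for a heterogeneous team could look like:

import wheeler.xml;   // description of the wheeled robots
import flyer.xml;     // description of the flying robots
programHorocol surveyMission
type wheeler use wheeler.xml;          // one agent type per kind of robot
type flyer use flyer.xml;
wheeler w1,w2 = newAgent( wheeler );   // the agents participating in the program
flyer f1 = newAgent( flyer );
event areaCovered;                     // a social event, visible to all agents
{
// ... global instructions ...
}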

Declaration section

[A] agents_set_declaration ::= *agents_type_declaration*   [A1]
                               *agent_list*                [A2]
                               *[social_variable]*         [A3]
                               *[social_event]*            [A4]

Agents, variables and events are declared at the global scope. The agent list is the list of all agents participating in the program. Variables and events declared at global scope are visible to any agent. The public variables of the agents (defined in file.xml) are also visible to all agents.

[A1] agents_type_declaration ::= type agents_type_identifier use file.xml;
This construction defines a type of agents and makes the link with the real external agents/robots.

[A2] agent_list ::= agent_type_identifier identifier = newAgent([agent_type_identifier]);
This is the declaration of all the agents participating in this code. The newAgent order expresses the beginning of the robot's life for this application. It is not necessarily implementable; it can be reduced to "powering on" a robot.

[A3] social_variable ::= type_indication identifier_list [limited( agents_list, agents_type)] [= expression];
[A4] social_event ::= event identifier_list;
Classical variables are allowed (int, float, boolean ...). [A3] defines public variables, visible to all agents of the system, named social variables. The keyword limited expresses that, for the agents in agents_list or agents_type, the variable is "read only". It is also possible to declare public events [A4]. Social events are visible from all agents declared in this program.

Programming section

[B] global_instruction ::= global_noninterrupt_action    [B1]
                         | global_interrupt_action       [B2]
                         | global_parallel               [B3]
                         | global_variable_assignment    [B4]
                         | global_if                     [B5]
                         | global_loop                   [B6]

[B1] global_noninterrupt_action ::= [ local_program ]

This defines a program that is executed from the first to the last instruction, without the possibility of interrupting its execution. This construction is only useful inside the [B3] construction.

[B2] global_interrupt_action ::= ° local_program °
This defines a program that can be ended during its execution. The way of ending a program is not defined by Horocol, because it depends on the type of real agents used. Basically we can consider two kinds of ending: either the system kills the program, or a message is sent to the program asking it to finish; the latter technique is preferred when safety is needed (shared variables, robots ...).

[B3] global_parallel ::= || ( *global_instruction,* global_instruction )
This construction allows starting at least two different programs at the same time over the set of all agents. At this point an agent executes the first code possible for it. This means that in the case of ||(P1,P2,P3,P4) there are 4 programs running in parallel; an agent executes the first program it is able to execute in the list, starting with P1 and ending with P4. The ||(P1,P2,P3,P4) instruction terminates when all the programs P1, P2, P3, P4 have terminated.
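As an illustration, here is a minimal sketch; the types scout and carrier and the primitives isReady(), explore() and followScout() are hypothetical, assumed declared in the imported file.xml:

||( [ seqofpar(scout) { where (scout.isReady()) { explore(); } } ],
    [ parofseq(carrier) { where (carrier.isReady()) { followScout(); } } ] );
// each agent joins the first branch it can execute;
// the || terminates only when both branches have terminated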

[B4] global_variable_assignment ::= identifier = expression
[B5] global_if ::= if (test) { local_program } else { local_program }
[B6] global_loop ::= while (test) { local_program }
These are very classical assignments to social variables, and classical if and while instructions.

Coordination or Group Programming

Coordination programming describes how a specific set of agents executes the code and how the code is distributed over this set of agents. This is the most original part of the language. It contains two original constructions.
First original part: the pair seqofpar and parofseq expresses the way in which the code is executed: synchronously (seqofpar) or independently in parallel (parofseq).
Second original part: the where instruction expresses the precondition to be satisfied for an agent to execute a piece of code.

[C] local_program ::= *[global_variable_assignment]*   [B4]
                    | *[agent programming]*            [C1]

[C1] agent programming ::= .method()                    [D1]
                         | seqofpar(agent_type_list) {  [LP1]
                             [protected_declaration]    [D2]
                             *where_without_event*      [D3]
                           }
                         | parofseq(agent_type_list) {  [LP2]
                             [protected_declaration]    [D2]
                             *where_with_event*         [D4]
                           }

Coordination programming in the Horocol language takes three different forms: a classical specific method call to a specific agent [D1], or one of the two constructions seqofpar [LP1] and parofseq [LP2], detailed below.

[D2] protected_declaration ::= type_indication identifier_list [limited( agents_list, agents_class)] [= expression] ;
                             | event identifier_list ;
It is possible to declare protected-scope variables or events (inside a seqofpar or parofseq). In this case they are visible only to the subset of all agents executing this part of the code.

[LP1] Instruction seqofpar: sequence of parallel
seqofpar(agent_type_list) can be understood as: "apply seqofpar to all agents having the type agent_type in the following". In this construction, agent_type is defined in [A1]. The seqofpar is a control structure in which each line of the internal program (where_without_event) is executed synchronously over all agents concerned by this branch. Synchronous execution means that the agents execute one instruction at the same time as the others.

[D3] where_without_event ::= where (test) {
                               [private_declaration]    [D5]
                               * local_instruction ; *  [E]
                             }
where indicates who is concerned by the local_instruction. This construction can be understood as: "for all agents satisfying the condition expressed by the test, execute the following local_instruction". Note also that it is possible to define private variables or events [D5], which are visible only to the agent executing this branch and duplicated in each of them (i.e. duplicated in each agent satisfying the condition test). Because this kind of instruction is in a seqofpar, each instruction [F0] to [F10] of the local instructions [E] is executed locally at the same time on each agent satisfying the test. In this case we speak of synchronous multi-agent programming.
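A minimal sketch of this synchronous execution (the type rover and the primitives onGround() and stepForward() are hypothetical, assumed provided by the robot's primitive.xml):

seqofpar(rover) {
  where (rover.onGround()) {  // only the rovers satisfying the test execute this branch
    int steps = 0;            // private variable, duplicated in each selected rover
    while (steps < 10) {
      stepForward();          // all selected rovers execute each instruction at the same time
      steps = steps + 1;
    };
  };
}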

[LP2] Instruction parofseq: concurrency of sequences
Here all agents concerned by the code (where_with_event) execute their code in parallel, and no instruction synchronisation between them is made. This means that each agent executes its own code independently from the others. The only synchronisation point is at the end of the parofseq, because this instruction is considered terminated when all the agents concerned by the internal code have finished their execution.
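The asynchronous counterpart, as a sketch under the same hypothetical declarations (hasTarget(), moveTowardTarget() and signalArrival() are assumed primitives):

parofseq(rover) {
  where (rover.hasTarget()) {  // each selected rover runs this sequence at its own pace
    moveTowardTarget();        // no instruction-level synchronisation with the other rovers
    signalArrival();
  };                           // the parofseq terminates when every selected rover has finished
}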

[D4] where_with_event ::= where (test) {
                            [private_declaration]    [D5]
                            * local_instruction ; *  [E]
                            [react
                              * when_event ; *]      [D6]
                          }

[D5] private_declaration
Declaration of private variables or events. These elements are only visible to the agent executing this code and are duplicated in each agent.

[D6] when_event ::= when test => * local_instruction ; *
The react part is used to express reactive multi-agent programming. It works like exceptions in standard languages. Each time an event (visible to the agent executing this code) is emitted (by the emit instruction [F7]), the react part is activated and checks whether a specific program is linked to this event. If so, this program is executed; otherwise the normal program continues at the point where the event arrived. If another event is raised during the execution of the handler's local_instruction, this second event is queued until the end of the code currently running in the react part, after which it is treated by the react part. Nevertheless, it is possible to use Horocol to simulate preemption and priority.
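A sketch of this mechanism (all names hypothetical, as before; lowBattery is a protected event of the branch, and goToDock() is assumed to recharge the rover):

parofseq(rover) {
  event lowBattery;            // protected event, visible only inside this parofseq
  where (rover.isActive()) {
    loop
      explore();               // normal behaviour
      if (batteryLow()) { emit lowBattery; } else { collectSample(); };
    end loop;
    react
      when lowBattery => goToDock(); resume;  // handle the event, then continue just after the emit
  };
}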

Agent programming

Agent programming describes the code executed at low level by the agents. To the set of instructions defined in [E] we have to add the local agent primitives, which are defined in the file.xml imported at the beginning of the Horocol program.

[E] local_instruction ::= basic_primitive()                                          [F0]
                        | . basic_primitive()                                        [F1]
                        | if (test) { local_instruction } else { local_instruction } [F2]
                        | while (test) { local_instruction }                         [F3]
                        | loop local_instruction end loop                            [F4]
                        | exit                                                       [F5]
                        | variable_assignement                                       [F6]
                        | emit event                                                 [F7]
                        | resume                                                     [F8]
                        | restart                                                    [F9]
                        | reevaluate                                                 [F10]

[F0] basic_primitive()
A classical specific method applied to the concerned agent.
[F1] . basic_primitive()
A classical specific method call to a specific agent, identical to [D1].

[F2] if (test) { local_instruction } else { local_instruction }
Standard.
[F3] while (test) { local_instruction }
Standard.
[F4] loop local_instruction end loop
Standard.
[F5] exit
Used to exit from a loop ... end loop [F4] or while [F3] instruction.

[F6] variable_assignement
Identical to [B4].
[F7] emit event | emit event (*type var,* type var)
This instruction emits an event, which can be declared at the social level [A4], protected if declared in [D2], or private if declared in [D5]. When an event is emitted, the react part [D6] of the program is executed.

[F8] resume
This instruction can only be present in the when_event part [D6]. Its execution resumes the corresponding local_instruction program [E] at the instruction where the event that stopped its execution (to enter the react part) was emitted.
[F9] restart
This instruction can only be present in the when_event part [D6]. Its execution restarts the corresponding local_instruction program [E] at its first instruction.
[F10] reevaluate
This instruction can only be present in the when_event part [D6]. Its execution restarts the execution of the seqofpar or parofseq instruction. The idea is to check whether the agent still has the properties expressed in the where test of [D3] or [D4].
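To contrast the three, a sketch (names hypothetical as before; the events are assumed emitted elsewhere in the program):

parofseq(rover) {
  event obstacle, abort, moved;
  where (rover.isActive()) {
    planPath();
    followPath();
    react
      when obstacle => avoid(); resume;       // continue followPath() where it was stopped
      when abort    => resetPose(); restart;  // run planPath(); followPath() again from the start
      when moved    => reevaluate;            // re-run the where test: the rover may no longer qualify
  };
}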

Example of Horocol programming

Let's consider an example in which we simulate the behaviour of a robot team playing football. Let's say there are 4 players, one goalkeeper and one coach in the team. A general clock determines the end of the game.

import myRobot.xml;
import clock.xml;
programHorocol footballVersion1
type football use myRobot.xml; // the football type is built from the description of my robots
type clock use clock.xml; // a general clock type
football a1,a2,a3,a4,a5,coach = newAgent( football ); // define 6 agent variables, members of the team
clock watch = newAgent( clock ); // and one agent for the clock
event coachGivesOrder, timeOut; // two global events, one for the coach, one for the clock
int coachStrategy; // the global variable defining the strategy of the team
int time limited (football); // variable impossible to modify for agents having type football
{
// init part: here we suppose that each football agent will receive its assignment (player, goalkeeper, coach)
parofseq(football, clock){
  int playersOrganisation ; // this variable is visible to all members of this section
  where (football.isPlayer()){ // only football agents satisfying the isPlayer() primitive execute this section
    football x; // this local variable is in each player of this section
    loop
      findTheBall(); // call to a primitive of the agent defined in myRobot.xml
      if (foundBall ) { moveToBall(); shootBall(); }
      else { localMove(playersOrganisation);
        x = searchNeighbourg(); // search for another agent to speak with
        playersOrganisation = x.exchangeInformation(); // discuss with this agent
      } // this makes changes in the team organisation possible
    end loop;
    react // if an event is raised the previous section stops and this part is executed
      when timeOut => resetBehaviour() ; restart; // end of the game: start again
      when coachGivesOrder => setMyself( coachStrategy); reevaluate; // change my behaviour
  };
  where (football.isGoalKeeper()){ // executed by the football agent satisfying isGoalKeeper()
    loop
      findTheBall(); ….
      coach.exchangeInformation(); // direct talk between the goalkeeper and the coach: local synchronisation
    end loop;
    react
      when timeOut => resetBehaviour() ; restart;
      when coachGivesOrder => setMyself( coachStrategy); reevaluate;
  }; // according to the coach's decision the goalkeeper could change his behaviour and become a player
  where (football.isCoach()){
    int coachConclusion ; // local variable used by the coach to analyse the situation
    loop
      if (time>0) { waitOneSecond(); time = time-1; };
      emit timeOut; // raise the timeout: all the agents will react to this event and end the game
  };
}
}

Mapping Horocol on a set of real robots

Through this example we see how Horocol programs are linked to the real robots by the use of the import and use constructions.

The idea of Horocol is to assume that some primitive actions are available for each type of agent. When we write a Horocol program, we manipulate these primitives under the parallel constructions ||, seqofpar or parofseq. In fact, depending on the hardware structure of the robots, we have no guarantee that these parallel constructions are really implementable. For instance, if the robots are very simple (contact sensor, light sensor, no communication; think of a Lego Mindstorms robot), then constructions like seqofpar or a direct call to a specific robot [F1] are not possible.

To know whether it is possible to compile the Horocol program into equivalent code running on the real robot, the Horocol compiler uses the information included in the XML file. This file includes three levels of information:
- the robot primitives,
- the syntax of the language used to program this robot,
- the Horocol system primitives available on this physical target.

The compiler checks first with the in<strong>for</strong>mation stored in the horocol system if all the basics features exist to<br />

implement : social or protected variable, parallel constructions, direct in<strong>for</strong>mation exchange.<br />

The second phase is to check if all the primitive used in the Horocol <strong>for</strong> the associated type are present in the robot<br />

primitive.<br />

Finnaly a purely syntactic rewriting trans<strong>for</strong>m the Horocol source code in the specific robot language. Of course this<br />

last pass is specific to each robot language so it needs to be rewrited <strong>for</strong> each kind of target. In our case we tested this<br />

trans<strong>for</strong>mation <strong>for</strong> the local interpreted language mentioned in fig 2.<br />
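To make these two checking phases concrete, here is a minimal sketch, in Python for brevity. It is only an illustration under assumptions: the XML element names (primitive, feature) and the helper check_target are our own inventions, not part of the actual Horocol toolchain.

    import xml.etree.ElementTree as ET

    def check_target(xml_path, needed_features, used_primitives):
        """Return what is missing on one physical target, or an empty list."""
        root = ET.parse(xml_path).getroot()
        # Level 1: the robot primitives; level 3: the Horocol system primitives.
        primitives = {p.get("name") for p in root.iter("primitive")}
        features = {f.get("name") for f in root.iter("feature")}
        missing = [f for f in needed_features if f not in features]
        missing += [p for p in used_primitives if p not in primitives]
        return missing

    # A parofseq player section needs parallel constructions and these primitives:
    errors = check_target("myRobot.xml",
                          needed_features={"parofseq", "social-variable"},
                          used_primitives={"findTheBall", "moveToBall", "shootBall"})
    if errors:
        print("cannot compile for this target, missing:", errors)

When the returned list is empty, the purely syntactic rewriting pass can proceed; otherwise compilation for that target is rejected.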

CONCLUSION

The Horocol language proposed here allows the description of multi-agent, multi-robot behaviour at three different levels: social, coordination and agent. The originality of Horocol lies in its instructions: parofseq/seqofpar for synchronous programming, coupled with the where instruction for precondition evaluation and with reevaluate to check for dynamical changes of those preconditions. Coupled with the distributed hardware, it offers a solution for distributed mechatronic systems.



A Real-Time, Multi-Sensor Architecture for fusion of delayed observations: Application to Vehicle Localisation

C. Tessier, C. Cariou, C. Debain
CEMAGREF
24, avenue des Landais
BP 50085, 63172 Aubière, France
e-mail: cedric.tessier@cemagref.fr

F. Chausse, R. Chapuis
LASMEA - UMR 6602
24, avenue des Landais
63177 Aubière, France
e-mail: chausse@lasmea.univ-bpclermont.fr

C. Rousset
ECA
Rue des Frères Lumière BP 24
83078 Toulon Cedex 09, France
e-mail: cro@eca.fr

Abstract— This paper presents a software framework called AROCCAM that was developed to design and implement data fusion applications. This architecture makes it possible to build applications in a very short time, unburdening the user of sensor communication. Moreover, it manages unsynchronized sensors and delayed observations in an elegant manner that lets the user fuse this information easily, taking the date of the environment perception into account.

In this paper, a fusion methodology for delayed observations is first presented in order to point out the problem of latency periods in a fusion system. These latency periods are then taken into account within our embedded architecture with only a little effort from the user. Finally, the benefits of the AROCCAM architecture are demonstrated via a real-time vehicle localisation experiment carried out with an outdoor robot.

I. INTRODUCTION

Nowadays, more and more vehicles are equipped with intelligent systems. These systems have to carry out particular tasks such as vehicle localisation, automatic guidance, obstacle avoidance or pedestrian detection. To accomplish their tasks, they process sensor data. However, it is first necessary to write pieces of software to communicate with the sensors. This task is tedious and, above all, time-consuming. A solution to this problem is to use an embedded architecture. Such architectures facilitate the development of algorithms by managing the communication between those algorithms and the sensors. For instance, they can collect sensor data and control sensors without blocking the algorithms.

The architecture proposed here has to meet several requirements:

• management of unsynchronized data and sensor latency,
• recording and replaying of sensor data in real time,
• engineering requirements: re-usability, integration, maintenance, processing efficiency,
• user requirements: ease of use, programming error detection.

When a system uses a single sensor, it can only sense a partial and incomplete part of the environment. By contrast, a multi-sensor approach is a way to improve environment perception. Note that a sensor provides an observation of the environment at a particular time, while another sensor provides similar information at another time. The difficulty of a multi-sensor system is to fuse sensor data with different dates.

• In some works [1], [2], [3], sensor data are fused without taking into account the fact that the information is unsynchronized.

• In other works [4], the data sampling frequency is increased in order to consider all information as synchronized with each other. Unfortunately, this assumption is false, since each sensor has a latency time which cannot be taken into account this way.

Data fusion systems are used more and more, which motivated us to design an architecture that answers the problem of unsynchronized sensor data while also taking sensor latency into account.

This paper is organized as follows. Section II depicts the works related to embedded architectures, emphasising the assets and drawbacks of each approach. In section III, our architecture is described by presenting each module and its objectives. The functionalities that ease the development of embedded software are also enumerated. Section IV details the latency periods of observations: where they arise from and the consequences of fusing delayed observations. A solution is suggested to deal with such observations in a data fusion system. Finally, a typical application involving our solution is given in the last section: real-time vehicle localisation.

II. RELATED WORKS

Over the last decades, different embedded architectures have been developed, each answering the requirements of a particular field of application. However, all these architectures agree on the necessity of being divided into several components.

For instance, the LAAS architecture [5] has three hierarchical levels, having different temporal constraints and manipulating different data representations. The main asset of such a decomposition is the realisation of prototype software applications in a very short time. In the same manner, SCOOT-R [6], [7], the acronym for "Server and Client Object Oriented for the Real-Time", offers a framework for distributing tasks over a multi-processing-unit architecture. This software is in charge of the communication between the processing units, which makes it possible to realise time-consuming applications by distributing tasks over several computers.

Another feature pointed up in the SCOOT-R architecture is the real-time aspect. It consists in giving the sensor data to the algorithm as soon as it is available. In the case of this architecture, communication between the components must be implemented, since it is a distributed application. In order to solve the problem of communication time in real-time software, a real-time network is used. Moreover, the real-time mechanism of the SCOOT-R components obliges each component to finish its work in a very short time so as not to be declared defective. Unfortunately, the exact processing time of sensor data cannot always be determined; only its upper limit can always be evaluated. In general, vision algorithms like road detection [8] can take more than 100 ms per image. This restricts the use of this architecture to powerful processing units and forces time-consuming algorithms to be avoided. In the same manner as SCOOT-R, RT-MAPS [9], [10] is divided into several components and also dates sensor data with an accurate clock. In that particular case, the aim of the dating is to use RT-MAPS like a digital videotape recorder; note that this date is the reception date of the sensor data. First, all data can be recorded in a synchronized, dated database; all these data can then be replayed later. The asset of this function is that it permits improving an algorithm by testing it on the same databank, or comparing several algorithms. Now, having compared several architectures for embedded applications, let us analyse the main functions of our architecture before discussing the latency problem.

III. AROCCAM

In our architecture, called AROCCAM, the acronym for "Architecture d'ordonnancement de capteurs pour la création d'algorithmes modulaires", we emphasised modularity and simplicity. As we want an easy-to-use architecture, AROCCAM is only divided into three kinds of components (Figure 1); a minimal sketch of this decomposition is given after the list.

• A driver module is responsible for the communication with external entities such as sensors, software, computers, etc. There is a particular driver module for each communication bus (IEEE 1394, CAN network, Ethernet network, serial ports, ...).

• A brik module is an application algorithm. Thanks to a subscription to driver modules, a brik module receives sensor data directly, without having to know the communication protocol. In general, the user only has to write software in this area.

• The heart module is the final component. This component does not have to be modified or adapted by the user. It is responsible for the communication between driver modules and brik modules, thread creation, memory management, etc.
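Here is the promised sketch of the subscription mechanism, in Python for brevity. It is only an illustration: the names (Heart, subscribe, publish) are hypothetical and do not reproduce AROCCAM's actual API, which is not detailed in this paper.

    import queue, threading, time

    class Heart:
        """Routes sensor data from driver modules to subscribed brik modules."""
        def __init__(self):
            self.subscribers = {}               # driver name -> list of brik queues

        def subscribe(self, driver_name, brik_queue):
            self.subscribers.setdefault(driver_name, []).append(brik_queue)

        def publish(self, driver_name, data):
            stamped = (time.monotonic(), data)  # date the data on reception
            for q in self.subscribers.get(driver_name, []):
                q.put(stamped)

    heart = Heart()
    gps_queue = queue.Queue()
    heart.subscribe("gps", gps_queue)           # a localisation brik subscribes

    def localisation_brik():                    # the only code a user would write
        t_reception, frame = gps_queue.get()
        print("GPS frame received at t =", t_reception, ":", frame)

    threading.Thread(target=localisation_brik, daemon=True).start()
    heart.publish("gps", {"lat": 45.76, "lon": 3.11})   # done by a driver module
    time.sleep(0.1)                             # let the brik thread print

The point of the design is that the brik never touches the bus protocol: it only consumes timestamped data handed over by the heart.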

Moreover, like RT-MAPS, our architecture accurately dates each sensor datum gathered. This permits the user to replay all the recorded sensor data at the desired speed. In order to replay the sensor data exactly as they were recorded (order of sensor data reception and time interval between two sensor data), the date of each sensor datum corresponds to the reception date of the data by AROCCAM during recording.

Fig. 1. AROCCAM software architecture: brik modules (localisation, control, obstacle avoidance) on top of the heart module (thread creation, communication between drivers and briks, memory management), itself on top of the driver modules (CAN, IEEE 1394, Ethernet, RS232).

After having described the components involved in our architecture and their objectives, let us now analyse in detail the problem of delayed observation fusion.

IV. A FUSION METHODOLOGY FOR DELAYED OBSERVATIONS

A. Presentation

The real-time aspect of embedded architectures is an often emphasised feature. This feature consists in giving the sensor data to the algorithm as soon as it is available. However, as suggested by [11], most sensors have a latency time. Such architectures remain real-time sensor data collecting software, but are not real-time environment perception software. As sensor latency cannot be suppressed, a strictly real-time perception architecture is impossible to realise; the user must therefore take these latencies into account. In [12], a solution is suggested to the user. In this paper, we propose an elegant manner of managing this problem without making the development of an application algorithm more complex.

The next section describes in detail the kinds of latency periods that can appear.

B. Observation and latency period

The aim of the software implemented in our architecture is to realize a particular task. In this section, we take the example of the accurate real-time positioning of a car on a digital map [13]. In this case, the vehicle is equipped with a video camera, oriented in the direction of the road and aiming at detecting it. This detection then permits locating the vehicle thanks to a digital map listing road configurations.

A piece of information that contributes to the achievement of the software task is called here an observation. It can be a sensor datum or the processing result of this datum. These observations are fused in the software. Let us analyse in detail the process of obtaining an observation (Figure 2) in our example.

Fig. 2. The latency periods for obtaining an observation: the environment is perceived by the sensor at time ta (sensor latency), the result is sent at time tc (communication time) and received for processing at time tp (processing time), yielding the observation at time tobs.

In general, the process of obtaining an observation can be divided into three steps, each of them requiring some time:

• perception. The sensor captures a perception of the environment at time ta. Modern systems use smart sensors, i.e. an analog sensor linked to a local controller. The local controller gets the analog information, performs the analog-to-digital conversion, and sends the result through a digital bus such as RS232, CAN or Firewire. In the rest of this paper, the word sensor will refer to a smart sensor. The time required by the local controller to prepare the result corresponds to the sensor latency.

• communication. As explained just above, it consists in sending the result through a digital bus at time tc.

• processing. At time tp, the embedded architecture, such as AROCCAM, receives the sensor data. As our architecture works in real time, we consider that the sensor data is dated as soon as it is available to the computer. However, this information is not directly usable for the algorithm's task: the data still has to be processed to extract a worthwhile observation. The time required to treat the sensor data is the processing time.

In all embedded architectures, dating sensor data accurately consists in timestamping those data with the date tp, as we do. However, the application algorithm can only fuse observations sharing the same kind of date: the perception date. Sometimes the sensor latency is provided by the data sheet of the sensor, or directly by the sensor during the experiment. When there is no such information, a temporal calibration of the sensors has to be done. Unfortunately, it is not always possible to estimate all these parameters accurately. We therefore suggest timestamping the observation with the date ta and attaching the dating precision to it.
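A minimal sketch of this timestamping scheme follows, in Python; the type and function names (Observation, stamp) and the numerical values are our own assumptions for illustration.

    from dataclasses import dataclass

    @dataclass
    class Observation:
        value: object
        t_perception: float   # estimated perception date ta, in seconds
        precision: float      # +/- uncertainty on t_perception, in seconds

    def stamp(raw, t_reception, latency, latency_precision):
        """t_reception is tp, dated by the architecture on arrival; ta = tp - latency."""
        return Observation(raw, t_reception - latency, latency_precision)

    # A GPS frame received at tp = 12.400 s whose latency lies in 6-100 ms (Table I):
    obs = stamp({"lat": 45.76}, t_reception=12.400,
                latency=0.053, latency_precision=0.047)
    print(obs)   # t_perception ~ 12.347 s, known to +/- 47 ms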

C. Consequences of delayed observations fusion

To illustrate the fusion of delayed observations, let us keep our example of vehicle localisation. For the vehicle localisation task, the estimated position is valid only at a specific time. The robotics community calls this characteristic the spatio-temporal localisation: the estimated vehicle position is a function of time. For each result produced by such a system, a timing imprecision induces a spatial imprecision. A spatial error of 10 cm results from a timing imprecision of 1.5 ms at 250 km/h, or 15 ms at 25 km/h. To build an accurate localisation system, it is necessary to keep the observation latency times in mind.
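These figures follow directly from Δx = v·Δt; written out in LaTeX notation:

    \Delta t = \frac{\Delta x}{v}:\qquad
    \frac{0.10\ \mathrm{m}}{250/3.6\ \mathrm{m/s}} \approx 1.4\ \mathrm{ms},\qquad
    \frac{0.10\ \mathrm{m}}{25/3.6\ \mathrm{m/s}} \approx 14\ \mathrm{ms}.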

D. Fusion of delayed observations

The aim of the system is to compute a result, the state vector, and this vector is a function of time. In the previous section, the latency periods for obtaining an observation were detailed. The problem here is the fusion of a delayed observation with a particular state vector: an observation received by the system has to be used to update the state vector that describes the system at the observation's date. AROCCAM then offers an optimal estimation of the state vector at any time.
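The paper does not detail this update mechanism. One common way to realise it, sketched below under our own assumptions (the DelayedFusion class and its correct/propagate callbacks are hypothetical; the paper actually uses a particle filter), is to keep a short state history, correct the state valid at the observation's perception date, and replay the newer proprioceptive inputs.

    class DelayedFusion:
        """Fuse delayed observations by correcting a past state and replaying inputs."""
        def __init__(self):
            self.history = []          # chronological list of (time, state, input)

        def predict(self, t, state, u):
            # record the state estimated at time t and the input applied from t on
            self.history.append((t, state, u))

        def fuse(self, obs_time, correct, propagate):
            # index of the last recorded state at or before the observation date
            i = max(k for k, (t, _, _) in enumerate(self.history) if t <= obs_time)
            t, state, u = self.history[i]
            state = propagate(state, u, obs_time - t)  # advance to the perception date
            state = correct(state)                     # apply the delayed observation
            t = obs_time
            for t_next, _, u_next in self.history[i + 1:]:
                state = propagate(state, u, t_next - t)  # replay the stored inputs
                t, u = t_next, u_next
            return state

    # 1D example: constant speed 1 m/s; a fix perceived at t = 0.05 s arrives late.
    f = DelayedFusion()
    f.predict(0.0, {"x": 0.0}, 1.0)
    f.predict(0.1, {"x": 0.1}, 1.0)
    state = f.fuse(0.05,
                   correct=lambda s: {"x": 0.04},
                   propagate=lambda s, u, dt: {"x": s["x"] + u * dt})
    print(state)   # ~ {'x': 0.09}: the corrected past state, replayed up to t = 0.1 s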

V. EXPERIMENTATION AND RESULTS

The following section describes the experimentation that was carried out to validate our approach. An outdoor localisation system was chosen.

A. Description

A terrestrial robot was used for the experiments. This research platform of the Research Federation TIMS (Technologie de l'information, de la mobilité et de la sûreté) (Figure 3) was initially equipped with an on-board PC running Linux RTAI. Several sensors were added to it for the realisation of the system.

The objective of this system is to locate the vehicle with 2 m accuracy using only low-cost sensors.

Fig. 3. The "RobucarTT" with its sensors: magnetometer, low-cost GPS, Doppler radar and gyrometer.

1) Sensors: The sensors used in this system are presented in Table I. The GPS system and the magnetometer permit initializing the system (vehicle position and orientation). Then two proprioceptive sensors, a Doppler radar and a gyrometer, together with the GPS, are used to locate the vehicle. A particle filter is employed to estimate the vehicle position.

Table I lists not only the sensors but also their latency periods and acquisition frequencies. These latency periods take into account the latency of the sensor itself and also the communication time on the bus that links the sensor to the computer.

TABLE I
SENSOR LATENCY PERIODS AND ACQUISITION FREQUENCIES

sensor          latency period    acquisition frequency
magnetometer    5 ms              16 Hz
low-cost GPS    6-100 ms (*)      10 Hz
Doppler radar   -                 10-77 Hz
gyrometer       -                 20 Hz

(*) supplied directly by the sensor.

We can also notice that these sensors are not synchronized, since they have different latency periods and different acquisition frequencies. The use of AROCCAM thus appears necessary.

2) Vehicle progress model: In this part we focus on small displacements. Assuming the environment where the robot runs is flat, the position and attitude of the vehicle reduce to its position in the 2D plane (Oxy), defined by x(t) and y(t), and the orientation of the car with respect to the (Ox) axis, given by θ(t). The measurements available from the vehicle are the average speed V(t), given by the Doppler radar, and the angular speed ω(t), given by the gyrometer.

Figure 4 shows the vehicle state (position and orientation) at two particular moments, tk and tk+1.

Fig. 4. Small displacement between two successive positions Mk and Mk+1 (coordinates xk, yk, heading θk, displacement ∆d, rotation ∆θ, turn radius R around the instantaneous centre of curvature ICC).

The relation between the vehicle displacement ∆d and the average speed V is given by ∆d ≈ d = Te · V, where Te is the sampling period. In the same way, the relation between the vehicle rotation ∆θ and the angular speed ω is given by ∆θ = Te · ω. Thus, the vehicle progress model is:

    xk+1 = xk + ∆d · cos(θk + ∆θ/2)
    yk+1 = yk + ∆d · sin(θk + ∆θ/2)        (1)
    θk+1 = θk + ∆θ

3) Why use AROCCAM: It is necessary to use the AROCCAM architecture in this system since:

• the sensors are unsynchronized, whereas equation (1) supposes that the two proprioceptive sensors supply synchronized observations;

• the sensors have a latency period.

An elegant solution to these difficulties is to use the architecture suggested in this paper.
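As a direct transcription of equation (1), here is the progress step in Python; the function name and the example values are ours, chosen for illustration.

    from math import cos, sin

    def progress(x, y, theta, v, omega, te):
        """One step of model (1) from radar speed v and gyrometer rate omega over te."""
        dd, dtheta = te * v, te * omega          # ∆d = Te·V, ∆θ = Te·ω
        x += dd * cos(theta + dtheta / 2)
        y += dd * sin(theta + dtheta / 2)
        return x, y, theta + dtheta

    # Example: 2 m/s during one 100 ms sampling period while turning at 0.1 rad/s.
    print(progress(0.0, 0.0, 0.0, v=2.0, omega=0.1, te=0.1))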

B. Experimentation results

The scenario used to validate our architecture is presented in figure 5. As can be seen, the trajectory is complex and contains several curves. This path was obtained by a manual run. During the experiment, the successive absolute positions given by an RTK GPS (THALES Navigation) with 2 cm accuracy were recorded. The position estimated by our method is compared at each instant with the GPS-RTK reference trajectory by computing the difference (error) between them.

Fig. 5. Trajectory realised in the experiment (from "Begin" to "End").

Fig. 6. Localisation error (m) over time (s) for the two architectures, with and without AROCCAM.

In figure 6, the localisation error is depicted for two experimentations:

• in red: the localisation error with the use of AROCCAM, taking the latency periods into account;

• in blue: the localisation error with a classical embedded architecture.

We can notice that, as expected, the latency periods affect the localisation results. In our example, a maximal error of 20 cm is added to the system when a classical architecture is used. These results show the benefit of our approach.


VI. CONCLUSION

This paper has proposed an embedded architecture for the fusion of delayed observations. A fusion methodology was first designed: a description of the latency periods, from the sensor to the data fused in the algorithm, was presented; the consequences of delayed observation fusion were explained through the example of vehicle localisation; and a method based on AROCCAM was suggested. Finally, a vehicle localisation application was presented to illustrate our architecture.

Even though the AROCCAM architecture reduces the errors in the fusion of unsynchronized information, our aim here is to draw conclusions not from the demonstration but from the process of building embedded applications. We think that the AROCCAM architecture offers an elegant and easy way to build fusion algorithms. The benefit was really substantial: during the whole programming and debugging process, we never had problems with thread communication, real-time multithread management, or transmission of data between different systems; all these aspects were dealt with by our software architecture.

Moreover, several other applications have been developed under AROCCAM without difficulty and with good results.

REFERENCES

[1] Gianluca Ippoliti, Leopoldo Jetto, Alessia La Manna, and Sauro Longhi. Improving the robustness properties of robot localization procedures with respect to environment features uncertainties. In International Conference on Robotics and Automation (ICRA), Barcelona, Spain, April 2005.
[2] C. Kwok, D. Fox, and M. Meila. Real-time particle filters. Proceedings of the IEEE, Sequential State Estimation, 92(2), 2004.
[3] David Filliat. Cartographie et estimation globale de la position pour un robot mobile autonome. PhD thesis, LIP6/AnimatLab, Université Pierre et Marie Curie, Paris, France, December 2001.
[4] Marc-Michael Meinecke and Marian-Andrzej Obojski. Potentials and limitations of pre-crash systems for pedestrian protection. In International Workshop on Intelligent Transportation, Hamburg, Germany, March 15-16 2005.
[5] Rachid Alami, Raja Chatila, Sara Fleury, Matthieu Herrb, Felix Ingrand, Maher Khatib, Benoit Morisset, Philippe Moutarlier, and Thierry Siméon. Around the lab in 40 days... In IEEE International Conference on Robotics and Automation, San Francisco, USA, 2000.
[6] Khaled Chaaban, Paul Crubillé, and Mohamed Shawky. Computer Science, chapter Real-Time Framework for Distributed Embedded Systems, pages 96-107. Springer-Verlag GmbH, 2004.
[7] Khaled Chaaban, Paul Crubillé, and Mohamed Shawky. Real-time embedded architecture for intelligent vehicles. In Proceedings of the 5th Real-Time Linux Workshop, Valencia, Spain, November 2003.
[8] P. Jeong and S. Nedevschi. Efficient and robust classification method using combined feature vector for lane detection. IEEE Transactions on Circuits and Systems for Video Technology, 15(4):528-537, 2005.
[9] Iyad Abuhadrous, Fawzi Nashashibi, and Claude Laurgeau. Multi-sensor fusion (GPS, IMU, odometers) for precise land vehicle localisation using RT-MAPS. In 11th International Conference on Advanced Robotics (ICAR), 2003.
[10] Fawzi Nashashibi. RTM@PS: a framework for prototyping automatic multisensor applications. In IEEE Intelligent Vehicles Symposium, October 3-5 2000.
[11] Mikael Kais, Laurent Bouraoui, Steeve Morin, Arnaud Porterie, and Michel Parent. A collaborative perception framework for intelligent transportation system applications. In Intelligent Transportation Systems Conference (ITSC), Vienna, Austria, September 13-16 2005.
[12] Iyad Abuhadrous. Système embarqué temps réel de localisation et de modélisation 3D par fusion multi-capteur. PhD thesis, École des Mines de Paris, January 2005.
[13] Jean Laneurit, Roland Chapuis, and Frédéric Chausse. Accurate vehicle positioning on a numerical map. International Journal of Control, Automation, and Systems, 3(1):15-31, March 2005.


List of speakers

Name                    Organism            Title
H. Ayreault             DGA-GESMA           Goal driven planning and adaptivity for AUVs
M. Barbier              ONERA-CERT          ProCoSA: a software package for autonomous system supervision
N. Dulac                INTEMPORA           RT-MAPS: a modular software for rapid prototyping of real-time multisensor applications
D. Dufourd              DGA-SPART           Integrating human/robot interaction into robot control architectures for defence applications
D. Duhaut               VALORIA             Horocol language and Hardware modules for robots
A. Godin                DGA-ETAS            Pleading for open modular architectures in robotics
F. Ingrand              LAAS-CNRS           LAAS architecture: Open Robots
J. Malenfant            LIP6                An Asynchronous Reflection Model for Object-oriented Distributed Reactive Systems
L. Nanatchamda          LISYC               Software architectures for robotics and dependability
C. Novales              LVR                 A multi-level architecture controlling robots from autonomy to teleoperation
R. Passama / D. Andreu  LIRMM-CNRS          Overview of a new Robot Controller Development Methodology
M. Perrier              IFREMER             Advanced Control for Autonomous Underwater Vehicles
P. Pomiers              ROBOSOFT            Modular distributed architecture for robotics embedded systems
J.P. Quin               THALES              Compared Architectures of Vehicle Control System (Vetronics) and application to an UXV
N. Ricard / C. Rousset  ECA                 DES (Data Exchange System), a publish/subscribe architecture for robotics
D. Simon                INRIA               Orccad, a framework for safe control design and implementation
O. Simonin              UTBM                Reactive Multi-Agent approaches for the Control of Mobile Robots
C. Tessier              CEMAGREF            A Real-Time, Multi-Sensor Architecture for fusion of delayed observations: Application to Vehicle Localisation
L. Walle                ECA (CYBERNETIX)    Remote operation kit with modular conception and open architecture: the SUMMER concept


List of participants

Name              Surname        Organism          E-mail
Afilal            Lissan         URCA-CReSTIC      lissan.afilal@univ-reims.fr
Andreu            David          LIRMM-CNRS        andreu@lirmm.fr
Arias             Soraya         INRIA             soraya.arias@inrialpes.fr
Ayreault          Hervé          DGA-GESMA         herve.ayreault@dga.defense.gouv.fr
Barbier           Magali         ONERA-CERT        barbier@cert.fr
Benlazreg         Ibrahim        LIRMM-CNRS        benlazre@lirmm.fr
Boisgerault       Sébastien      EMP               sebastien.boisgerault@ensmp.fr
Briand            William        LIRMM-CNRS        briandwilliam@hotmail.com
Chausse           Frédéric       LASMEA            chausse@lasmea.univ-bpclermont.fr
Christ            Guillaume      DCN               guillaume.christ@dcn.fr
Delaunay          Claire                           clairedelaunay@gmail.com
Dony              Christophe     LIRMM-CNRS        dony@lirmm.fr
Dufourd           Delphine       DGA-SPART         delphine.dufourd@dga.defense.gouv.fr
Duhaut            Dominique      VALORIA           dominique.duhaut@univ-ubs.fr
Dulac             Nicolas        INTEMPORA         nicolas.dulac@intempora.com
El Jalaoui        Abdellah       LIRMM-CNRS        eljalaou@lirmm.fr
Fabiani           Patrick        ONERA-CERT        patrick.fabiani@onera.fr
Fraisse           Philippe       LIRMM-CNRS        fraisse@lirmm.fr
Franchi           Eric           ECA               ef@eca.fr
Godary            Karen          LIRMM-CNRS        godary@lirmm.fr
Godin             Aurélien       DGA-ETAS          aurelien.godin@dga.defense.gouv.fr
Hygounenc         Emmanuel       DCN               emmanuel.hygounenc@dcn.fr
Ingrand           Felix          LAAS-CNRS         felix@laas.fr
Joyeux            Sylvain        LAAS-CNRS         sylvain.joyeux@m4x.org
Kapellos          Konstantinos   TRASYS            konstantinos.kapellos@trasys.be
Kheddar           Abderrahmane   AIST/CNRS         kheddar@ieee.org
Lacroix           Simon          LAAS-CNRS         Simon.Lacroix@laas.fr
Lapierre          Lionel         LIRMM-CNRS        lapierre@lirmm.fr
Libourel          Thérèse        LIRMM-CNRS        libourel@lirmm.fr
Malenfant         Jacques        LIP6              Jacques.Malenfant@lip6.fr
Moline            Eric           DGA-CEP           eric.moline@dga.defense.gouv.fr
Morillon          Joel           THALES            joel-g.morillon@fr.thalesgroup.com
Mourioux          Gilles         LVR               Gilles.Mourioux@bourges.univ-orleans.fr
Nanatchamda       Laurent        LISYC             nana@univ-brest.fr
Novales           Cyril          LVR               Cyril.Novales@bourges.univ-orleans.fr
Parodi            Olivier        LIRMM-CNRS        parodi@lirmm.fr
Passama           Robin          LIRMM-CNRS        passama@lirmm.fr
Perrier           Michel         IFREMER           Michel.Perrier@ifremer.fr
Pissard-Gibollet  Roger          INRIA             Roger.Pissard@inrialpes.fr
Pomiers           Pierre         ROBOSOFT          pierre@robosoft.fr
Quin              Jean-Philippe  THALES            jphquin@aol.com
Ricard            Nicolas        ECA               nr@eca.fr
Rousset           Christophe     ECA               cro@eca.fr
Simon             Daniel         INRIA             Daniel.Simon@inrialpes.fr
Simond            Nicolas        EMP               nicolas.simond@ensmp.fr
Simonin           Olivier        UTBM              olivier.simonin@utbm.fr
Tessier           Cédric         CEMAGREF          cedric.tessier@cemagref.fr
Trapier           Manoel         LIRMM-CNRS        trapier@lirmm.fr
Villenave         Dominique      ROBOSOFT          dominique.villenave@robosoft.fr
Walle             Laurent        ECA (CYBERNETIX)  laurent.walle@cybernetix.fr
Zapata            René           LIRMM-CNRS        zapata@lirmm.fr
