
Selected papers from the proceedings of ERIMA07’:

International Symposium on

Innovative Management Practices


Towards new challenges for

innovative management practices

Volume I

ERIMA Publication

Jérémy Legardeur & Juantxu Martin

ESTIA - France / MIK - Spain

ERIMA07’ Proceedings



Table of Contents

Table of Contents ..... 2
Preface ..... 4
ERIMA07’ Organizing Team ..... 5
Supporting the Systemization of the Early-Stage Innovation by Means of Collaborative Working Environments ..... 6
Towards a multi-input model, method and ILM (Ideas Lifecycle Management) tool for innovation ..... 13
Using information technologies to improve the management of the French healthcare system ..... 26
Needs for methods and models rationalizing work of the actors of an organization implied in an innovation process: basic principles and examples ..... 33
Managing collaboration for improving design co-ordination ..... 43
Using Organizational Intangible Assets for Better Levels of Operational Efficiency ..... 52
The TRIZ-CBR synergy: a knowledge based innovation process ..... 61
Intellectual Property Management: a TRIZ-based approach to manage innovation within SMEs ..... 71
How Innovation in the Organisation of Management Systems in SMEs could contribute to the Economic Growth of Developing Countries? ..... 81
An SDSS for the Space Process Control, an Hybrid Approach: Fuzzy Measurement, Linear Programming, and Multicriteria Decision aid. Application to Regional Planning ..... 90
Symatop – a web based platform giving a flexible and innovating tool for decision making and for human and process development ..... 101
A Knowledge Management Approach to Support Learning and Education of Newcomers in Wide Organizations ..... 110
About the transferability of behavioural skills ..... 119
A Framework for the Potential Role of Information Specialists as Change Agents in Performance Management ..... 126
Creating Cultural Change with Employees Transferring Through TUPE ..... 132
Organizational routines and dynamics of organizational cognition ..... 139
A Decision Support System for Complex Products Development Scheduling ..... 145
Fostering SMEs networking through Business Ecosystem and ICT ..... 154
Meta-modelling “object”: expression of semantic constraints in complex data structure ..... 163
A general framework for new product development projects ..... 172
An Organizational Memory-based Environment as Support for Organizational Learning ..... 181
Managing collaborative E-business and E-manufacturing through distributed environments ..... 190
Use for a creative and innovative approach in product design. Case of students-enterprises linked projects ..... 200
Innovation encounters: ecosystems and entrepreneurship in cross-national alliances ..... 208
Innovation Management: the Karlsruhe Model of Product Development ..... 213
Innovation in practice: The case of a medium-size multinational manufacturing holding ..... 219
Avoiding emergency innovation: change prediction in innovative products ..... 227
Economic and environmental performance of the food industry ..... 236
Continuous Improvement Processes in manufacturing enterprises as an enabler of process innovation ..... 245
The relevance of product related services in industry ..... 255
University Technology Business Incubators in China and in France ..... 260
Service orientation for manufacturing firms: challenges and innovation management ..... 271
Social Psychology and the Challenge for Interdisciplinary European Research ..... 281
Evaluating organisational readiness for change implementation through a potential change maturity model ..... 295
Decision Points in the Innovation Process ..... 305
A practical framework to apply innovation concepts in the exploitation phase of collaborative R&D projects ..... 314
Innovation models and processes: a recipe to be competitive now and in the future ..... 323
A Case Study of Organizational Innovation in Taiwan’s Puppet Show Industry ..... 332
Towards a dialogic management of cognitive competences ..... 341
A second look at the complex innovation phenomenon through a “dialogical” principle ..... 349
Managing design system evolution to control design process: methodology and tools ..... 357
An empirical model of resource-based view on entrepreneurship and innovation ..... 366
Index of authors ..... 376

Preface


The global objective of ERIMA (European Research on Innovation and Management Alliance) is to constitute a “Network of European Excellence” in the field of Innovation and Industrial Management (I&IM). ERIMA is currently formed by 13 highly qualified European universities and research centres from 10 countries in Europe. The aim of this network is to promote new theories, methods, and techniques in I&IM.

This book, titled “Towards new challenges for innovative management practices - Volume I”, results from the scientific and industrial contributions to the First ERIMA Symposium. This conference was held in March 2007 at the ESTIA engineering institute in Biarritz, France. The ERIMA07 conference gathered researchers, business leaders of both SMEs and large companies, public sector representatives, and practitioners focused on innovation management. The objective of the conference was to provide an inspiring background and stimulus for a focused, target-oriented discussion of new concepts in collaborative working environments, systematic innovation, and their respective management and support ICT tools.


The topics of ERIMA07 were:

- Models, Tools and Methods for Innovation Management

- Fieldwork, Case studies and Storytelling of Innovative Management Practices

- Intra & Entrepreneurship initiatives

- Innovative services

- Creative routines, cultures and behaviours

- Education, learning and knowledge flows in practice

- Professional virtual and informal communities

- Collaborative environment

- Enterprise interoperability

- Combining economic, social and environmental objectives

- Innovative sustainable public policies

- Innovative welfare development

Reference to the papers in this book should be made as follows: Initial(s), Name(s), “Title of the paper”, in the book “Towards new challenges for innovative management practices”, Vol. 1, pp. xx-xx, Editors: J. Legardeur, J. Martin, ERIMA Publication, 2007.

Example: A. Hesmer, K.A. Hribernik, J.B. Hauge, K.D. Thoben, “Supporting the Systemization of the Early-Stage Innovation by Means of Collaborative Working Environments”, in the book “Towards new challenges for innovative management practices”, Vol. 1, pp. 6-12, Editors: J. Legardeur, J. Martin, ERIMA Publication, 2007.

Jérémy Legardeur

Juantxu Martin


General Co-chairs of the symposium

Legardeur Jérémy (ESTIA) France

Martin Juantxu (MIK) Spain

Scientific Advisory Board

Allen P. (Cranfield University) UK

Corallo A. (ISUFI) Italy

De Looze M. P (TNO) Netherlands

Dorronsoro I. (MCC) Spain

Kirner E. (Fraunhofer - ISI) Germany

Kongsvold K. (SINTEF) Norway

Larrasquet JM. (ESTIA) France

Legardeur J. (ESTIA) France

Lucas S. A. (INESC) Portugal

Martin J. (MIK) Spain

Merlo C. (ESTIA) France

Mitleton-Kelly E. (LSE) UK

North K. (Fachhochschule Wiesbaden) Germany

Pinho de Sousa J. (INESC) Portugal

Salkari I. (VTT) Finland

Thoben K. (BIBA) Germany

Wagner F. (IAO) Germany

Organisation Committee

Marty H. (ESTIA) France

Pehau N. (ESTIA) France

Savoie E. (ESTIA) France

Unamuno A. (MIK) Spain


ERIMA07’ Organizing Team


Supporting the Systemization of the Early-Stage Innovation by Means of

Collaborative Working Environments


A. Hesmer, K.A. Hribernik, J.B. Hauge, K.D. Thoben

BIBA – Bremen Institute of Industrial Technology and Applied Work Science at the University of

Bremen, Hochschulring 20, 28359 Bremen, Germany,{hes, hri, baa, tho}

Abstract: Successful innovations depend on the input given to the process. This input – ideas – is developed in the early stage of innovation, where no well-defined problems or goals are given (cf. Simon 1973 / Bayazit 2004). The research presented in this paper focuses on innovators’ needs in today’s and future working environments, in order to provide a highly flexible software solution supportive of early-stage innovation. Allan describes early-stage innovation as a social process in which individuals work among individuals and groups in a collaborative way (Allan 2007). To encourage this collaborative work, game dynamics will be used to support early-stage innovation. The integrated software solution will support and guide innovators in getting connected to the right people, producing ideas based on explored knowledge, and evaluating them, so as to achieve the goal of developing successful innovations through the use of game approaches. The approach presented in this paper is based on the work carried out by the European-funded research project Laboranova.

Keywords: Collaborative Working Environments, Innovation, Ideation, Early-stage innovation, Routines

I. Introduction

As innovation is seen as Europe’s key to economic success, in an economy where competitors from today’s successful economies and the emerging economies of Asia are moving into markets, the subject has attracted the attention of companies and policymakers. The importance of innovation has reached the academic field as much as the economic field, but the first steps of the innovation process are still given too little consideration. The early stage of innovation – the phase where ideas are generated and developed – is mostly seen as a black box in common stage-gate processes. The ability to generate ideas of high quantity and quality will receive much higher recognition in the future, as it is the starting point for economically successful innovations.

Figure 1: The Innovation Process (Rothwell 1992) – a stage-gate process in which the early stage of innovation offers the potential for support

II. Theoretical Background of Early-Stage Innovation

Innovation in Theory

Schumpeter defines innovation as the new combination of resources (Schumpeter 1952). Innovation is not only about having a new idea but about deriving economic benefit from it. Innovation relates to the whole process from idea generation to market penetration, or the successful implementation of the innovation.

Recent trends in the literature present a sociological perspective on innovation and the change from a linear innovation process to a user-centric approach, in which both the technological research and the sociological aspects of innovation are addressed equally. Additionally, strategic management and innovation are no longer perceived as a linear but as a parallel development. The innovation process can be seen as a learning/knowledge process within a community.


Ideation in Theory

Rhea describes the ideation process as the process of discovering what to make and for whom, understanding why to make it, and defining the success criteria, including the development of insights for answering these strategic questions (cf. Rhea 2005). Ideation, as part of the overall innovation process, is the “ability one has to conceive, or recognize through the act of insight, useful ideas” (Vaghefi 1998). Research in the field of ideation considers both the externalisation of ideas from divine inspiration (cf. Weisberg 1993) and the understanding of the work processes that generate ideas.

Design as the Science related to Ideation

The scientific discipline related to the ideation process is design methodology. Charles Eames described design as “a plan for arranging elements in such a way as to best accomplish a particular purpose” (Design Notes 2006). Design is seen as a discipline dealing with the early stage of innovation.

Jonas claims that today’s efforts head towards the development of planning practices and methodological approaches without the pretence of planning everything completely (cf. Bauer 2006). This is consistent with Akin’s theory, which states that “no quantifiable model is complex enough to represent the real-life complexities of the design process” (Cross 1984).

One reason for this is that a specific aspect of designers’ working process is the constant generation of new task goals and the redefinition of task constraints (Cross 1984). With regard to information technology (IT) support, Rahe states that the problem with most planning instruments is their inattention to the fact that new knowledge achieved during a development process changes the project (Klünder 2006). This underlines the thesis of non-linearity in early-stage innovation. In this context a proposal by Schön becomes relevant, who suggests searching for an epistemology of practice implicit in workers’ intuitive proceedings (Cross 2001). The user-centric approach is thus becoming more and more important.


Current Support of Ideation

Because of its fuzzy nature, where details and even goals are not defined exactly, early-stage innovation cannot take place in a linear process. Iterations are in the nature of the workflows related to ideation processes, because of the generation of new knowledge during the process. In the early stages of innovation there is no well-defined problem, and so iterations between problem, solution and possibilities are needed (cf. Simon 1973).

Existing Collaborative Working Environments (CWEs) (Hribernik 2007a/b) mainly focus on supporting traditional working paradigms of linear workflows by providing IT-based platforms for planning, scheduling and executing tasks (cf. NovaNet 2006). These tools implement single methods related to idea generation, or support innovation processes on a management level. The usage of such proprietary tools in business practice is rare (NovaNet 2006).

Requirements for successful Ideation Support Tools

In order to achieve continuous strategic innovation and thus create persistent competitive

advantage, organisations need to increase their capacity for carrying out open-ended and

nonlinear problem solving involving a wide participation of people in knowledge-rich

environments. This must be supported by the next generation of CWEs, which in turn requires new

paradigms for managing the knowledge transfer, the social dynamics, and the decision processes

involved in the front-end of innovation.

With respect to this, current research in the field of early-stage innovation focuses on the real requirements of innovators in distributed working environments and on solving the problems that occur there.


III. Research Approach

Design theory provides research approaches in the field of early-stage innovation. Looking at companies’ workflows related to ideation, one recognizes that these workflows operate on an individual level, or at most on a group-dynamic level (cf. also Cross 2001). The usage of methods is mostly limited to brainstorming, possibly supported by a proprietary IT tool. The ideation process is not conceived as part of everyday work in European companies. To build a successful software solution that will be adopted and used in companies and networks of innovators, one needs to build upon the everyday requirements and workflows people are already used to and are not willing to change. In relation to design theory, the research focuses on the everyday routines of innovation workers (ideators), their workflows and their data organisation.

To gather the information needed, ideators and groups working together are observed, and interviews are conducted. During the observation, these individuals and groups are accompanied through their daily business. All activities are monitored and captured, and put into the context of the actual task and workflow. In relation to this, their organisation of data and information – in digital and physical representations – is observed, and the usage of physical elements and IT tools is investigated. Further data is gathered by interviewing innovative workers.


To define the CWE supporting ideation, today’s solutions and studies in the field are considered and evaluated. Based on this knowledge, the definition of the innovation environment will be derived.


The Outcome

Results of the observation and interviews clearly show that creating and developing ideas is based on iterative routines of representing an idea, sharing it with others, getting feedback and communicating about the object (the representation). Starting from these everyday routines, the research will identify tasks for team-based ideation work, together with a technological infrastructure that allows for communication about, and experimentation with, more or less finished ideas, early-stage innovations and concepts not yet realised.

Representations of ideas can be, for example, sketches, renderings or maps. Work routines show that individual ideators represent their ideas in an “easy to access” way: CAD or rendering software is used only in a basic way, and ideas are more often sketched or presented in PowerPoint. The interviewees stated that the rationale for using Microsoft (MS) PowerPoint is based, on the one hand, on its generic usage and, on the other hand, on the ease of exchange with others due to its status as a de facto standard. The representation is distributed to stakeholders by e-mail in order to get general feedback, comments, further ideas, and developments of the original idea.

Example of an Idea Development Routine

The initial moment is the occurrence of an idea; this is not further specified. Within one to eight hours the idea is represented as a sketch, rendering or text. There might be variations of the idea, but not an entirely different concept. Pictures are pasted into common media programs such as MS PowerPoint or MS Word.

The document is sent out by e-mail to recipients who have an interest in the idea; usually they are well known. Replies by e-mail arrive within two days; after that there is usually no reply at all. Alternatively, feedback can be gathered by phone. Feedback is usually given in an unstructured way.

The feedback is extracted from the individual sources (text, comments on the pictures/text, phone calls) and then gathered. It is then used to transform the original idea.

With this developed idea as the object, the routine starts again. The overall time frame for the described routine is about three to four days in total.

The core of this routine is the representation of the idea and its exchange with others. The interviewed person stated that he stops thinking about how to develop the idea when not interacting with others.
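The loop just described – represent, distribute, gather and extract feedback, transform – can be sketched as a simple iterative routine. This is an illustrative model only; all class and function names below are our own assumptions, not part of any observed tooling.

```python
from dataclasses import dataclass, field

@dataclass
class Idea:
    description: str
    version: int = 0
    history: list = field(default_factory=list)  # (version, feedback) pairs

def represent(idea):
    # sketch, rendering or text, produced within one to eight hours
    return f"v{idea.version}: {idea.description}"

def gather_feedback(representation, stakeholders):
    # the representation is e-mailed out; replies typically arrive within two days
    return [f"{name} comments on '{representation}'" for name in stakeholders]

def transform(idea, feedback):
    # feedback from all sources is extracted, gathered and applied to the idea
    idea.history.append((idea.version, feedback))
    idea.version += 1
    return idea

def development_routine(idea, stakeholders, iterations):
    # one pass of this loop took about three to four days in the observations
    for _ in range(iterations):
        rep = represent(idea)
        feedback = gather_feedback(rep, stakeholders)
        idea = transform(idea, feedback)
    return idea

idea = development_routine(Idea("new product concept"), ["Alice", "Bob"], iterations=2)
print(idea.version)  # 2
```

The point of the sketch is the loop structure itself: the idea object only changes version when a representation has been exchanged and feedback folded back in.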

Figure 2: Example of an Idea Development Routine (1. idea occurs; the idea is represented; the representation is e-mailed out; feedback is gathered, including active phone feedback; feedback is extracted; 2. the idea is transformed)




Example of an Idea Generation Routine: Group Perspective

Within the observed group, the first step is to present the discussion topic. It is visualised on a whiteboard or flipchart (a large representation plane). The topic is then discussed verbally within the group to achieve a common understanding.

To generate ideas, brainstorming takes place, supported by Post-Its which are placed at random on the representation plane. Ideas are affected by the former thoughts and experience of the participants.

The next step is structuring the ideas into higher aggregation levels. This is done by discussing the ideas and finding groupings during that discussion, in which the ideas are usually evaluated on a best-guess basis. The ideas are clustered on the wall; for this step, a lot of space is needed to develop clear clusters. When a whiteboard is used, it is possible to edit the visualisation, e.g. by drawing connecting lines with a marker.

The representation is finally captured by taking a photograph.

Figure 3: Example of an Idea Generation Routine by a Group (the topic is represented on a whiteboard/flipchart; the topic is discussed verbally; brainstorming with Post-Its; the ideas are structured in discussion; the result is captured as a photo)

IV. An Approach for the Support of Early-Stage Innovation

A common aspect among the interviewed individuals and groups is the need to externalise ideas through representations. Based upon a first representation (e.g. a sketch or text diagram), feedback from stakeholders is gathered and the idea is developed further based on the new information. For representation, common generic IT tools are used (e.g. MS PowerPoint).

The research has identified loops in the workflows between externalising ideas, communicating about them and developing them further. The next generation of innovation CWEs needs to provide a convenient solution for dealing with representations of ideas and possess an interface to generate or upload these objects of knowledge.

One of the major challenges in CWEs is motivating the people involved to participate in generating, evaluating and developing ideas. One approach to getting stakeholders involved is to use game dynamics for the tasks identified within the research. The notion of “game” is ambiguous – for some it signals energy, entertainment and creativity, while for others it signals a lack of seriousness and value. This implies that the diffusion and implementation of innovation games should focus on the productive side of the process. The message should be clear that, while being a game, the process is still work and should be taken seriously.


For generating ideas, games will be used for shorter, specific work routines. Game approaches for idea generation will be designed from the assumption that (good) ideas do not just come into existence but involve analytical and explorative work. The objective of these ideation games is to promote and support innovative work. Most games for companies are simulations focused on learning or team-building. For games to be used in ideation itself, and not just in training people for ideation work, the game should provide insights as well as enable the participants to act on these insights by coming up with ideas for new products, services and strategies.

The follow-up process should be an integral part of the design of a game. Knowledge developed during the game should be documented and presented to the participants. Competences developed should be followed up with action plans for further development, implementation and integration into ordinary practices. If the game is supposed to create input to decision processes in the organisation, feedback to the participants about how their input is used needs to be part of the game’s results. The evaluation of ideas is based on the “intelligence of many”, which will be harnessed by implementing a prediction market in the innovation environment.
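The paper does not specify the market mechanism, so as a hedged illustration only: a logarithmic market scoring rule (LMSR), a common prediction-market design, could aggregate the “intelligence of many” into a per-idea price. The function names and the liquidity parameter `b` below are assumptions for the sketch.

```python
import math

def lmsr_prices(quantities, b=10.0):
    """Current price (collective belief) of each idea, given shares bought so far."""
    exps = [math.exp(q / b) for q in quantities]
    total = sum(exps)
    return [e / total for e in exps]

def lmsr_cost(quantities, b=10.0):
    """Market maker's cost function C(q) = b * ln(sum_i exp(q_i / b))."""
    return b * math.log(sum(math.exp(q / b) for q in quantities))

# Three competing ideas; participants have bought 5, 20 and 2 shares respectively.
prices = lmsr_prices([5.0, 20.0, 2.0])
assert abs(sum(prices) - 1.0) < 1e-9  # prices form a probability distribution
assert prices[1] == max(prices)       # the most-backed idea ranks highest
```

A trader pays the difference in `lmsr_cost` before and after buying shares, so backing an idea moves its price; the resulting price vector is a collective ranking of the ideas.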


As ideation draws on the intrinsic knowledge of individuals, new connection mechanisms need to be developed and implemented to bring together the right people, who share a specific interest. As with participation in the idea generation process, the motivation of individuals is key to success; this, too, can be achieved by game mechanisms. By providing simple games for finding people of the same interests or knowledge background within the user group of the innovation environment, an initial first contact can be supported.
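A minimal sketch of such a connection mechanism, assuming a simple set-overlap (Jaccard) measure over declared interests; the measure, the threshold and all names are illustrative, not part of the Laboranova design.

```python
def jaccard(a, b):
    """Overlap of two interest sets: |A intersect B| / |A union B|."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 0.0

def suggest_contacts(user, profiles, threshold=0.25):
    """Rank other users by shared interests to support an initial first contact."""
    scores = [(other, jaccard(profiles[user], interests))
              for other, interests in profiles.items() if other != user]
    return sorted((s for s in scores if s[1] >= threshold),
                  key=lambda s: s[1], reverse=True)

profiles = {
    "ana":  {"TRIZ", "ideation", "design"},
    "ben":  {"ideation", "design", "games"},
    "carl": {"logistics"},
}
print(suggest_contacts("ana", profiles))  # [('ben', 0.5)]
```

In practice such a score could seed the matchmaking games mentioned above, with the game layer supplying the motivation to actually make contact.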

This will be integrated in a browser application that fulfils the users’ needs for convenient access, easy usability and non-local storage (cf. NovaNet 2006). The innovation environment will contain a database of rated ideas and concepts, and will be the platform for experts in an open innovation environment as much as in the enclosed system of a company. Within this environment, the communication related to an object will be attached directly to the object, in order to trace the interchange of information and understand the development phases of the idea. The integration of communication and object representation enhances data consistency without changing the users’ behaviours.
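The object-attached communication described above can be sketched as a minimal data model; the class and field names are hypothetical, but the point – comments live on the idea object itself rather than in separate mail threads – follows the text.

```python
from dataclasses import dataclass, field

@dataclass
class Comment:
    author: str
    text: str

@dataclass
class IdeaObject:
    title: str
    representations: list = field(default_factory=list)  # e.g. sketch files, slides
    comments: list = field(default_factory=list)
    rating: float = 0.0  # e.g. set by an evaluation mechanism

    def attach(self, author, text):
        # communication stays with the object instead of in separate mail threads
        self.comments.append(Comment(author, text))

idea = IdeaObject("modular packaging")
idea.representations.append("sketch-v1.png")
idea.attach("reviewer", "consider recyclable materials")
print(len(idea.comments))  # 1
```

Because every exchange is stored against the object, the development phases of an idea can later be reconstructed from its comment history.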

V. Conclusion

To support early-stage innovation in distributed teams, CWEs need to be developed which support non-linear work processes. Design theory – the discipline dealing with the early stage of innovation – suggests that research in this field of the innovation process needs to focus on the real work requirements of ideators. Work in early-stage innovation consists of going back and forth between generating knowledge and applying new knowledge to the idea.

These iterative processes will be supported by the innovation environment in a way that does not change the habits and routines of people working in the field of innovation, but provides them with tools and methods that augment the efficiency of their way of working. The support of object-related communication is important: the routine examples show that idea development is based on representing the idea, exchanging its representations, and gathering feedback and input to develop the idea further. This will be assisted by the usage of game dynamics in the fields of knowledge sharing, idea generation and evaluation, and connecting the right people.


An IT-based innovation environment with rated ideas at several development levels will support innovation workers in presenting and communicating their ideas to stakeholders, developing their ideas further, and finding related ideas and people, and will be the backbone for enhancing companies’ ability to generate successful innovations. The objective of the game dynamics is to make the work routine of generating ideas more effective through the use of games. The outcome of a game is intended to be initial ideas, but could also be broader and imply “options”, e.g. ideas for solutions to specific problems. The focus, however, is on the fuzzy front end of innovation: the very early part of a project, when the idea has not yet been found, the criteria for selecting a good idea are unclear, and it is not certain that the idea will lead to a new product. The challenge of introducing and developing a game is that it should be possible to use it in a productive way, i.e. it should be included in the general workflow.

The overall goal is to provide an innovation environment which can be used easily, where innovators see the advantage of using it and, by using it, enhance its quality.


References

Allan, T. (2007): Keeping Pace

Bauer, B. (2006): Design & Methoden, in: Design Report 11/06, Blue C. Verlag GmbH, Leinfelden-Echterdingen 2006

Bayazit, N. (2004): Investigating Design: A Review of Forty Years of Design Research, in: Design Issues, Volume 20, Number 1

Bürdek, B. (2005): Design. History, Theory and Practice of Product Design, Birkhäuser, Basel

Cross, N. (2001): Designerly Ways of Knowing: Design Discipline Versus Design Science, paper prepared for the Design+Research Symposium, Politecnico di Milano, Italy, May 2000

Cross, N. (ed.) (1984): Developments in Design Methodology, John Wiley & Sons, Chichester

Design Notes (2006)

Frey, C. (2006): Mind Mapping Software Survey, Innovation Tools, accessed: 13.02.2007

Hribernik, K., Thoben, K.-D., Nilsson, M. (2007a): Technological Challenges to the Research and Development of Collaborative Working Environments, in: Encyclopedia of E-Collaboration, Idea Group Reference, 2007

Hribernik, K., Thoben, K.-D., Nilsson, M. (2007b): Collaborative Working Environments, in: Encyclopedia of E-Collaboration, Idea Group Reference, 2007

Klünder, P. (2006): Planbarer Brückenschlag, in: Design Report 11/06, Blue C. Verlag GmbH, Leinfelden-Echterdingen 2006

Nova-Net Konsortium (2006): Nutzung von Internet und Intranet für die Entwicklung neuer Produkte und Dienstleistungen, Fraunhofer IRB Verlag, Stuttgart 2006

Rhea, D. (2005): Bringing Clarity to the “Fuzzy Front End”: A Predictable Process for Innovation, in: Design Research, The MIT Press, Cambridge 2005

Schumpeter, J.A. (1952): Theorie der wirtschaftlichen Entwicklung, Duncker & Humblot, Berlin 1952

Simon, H. (1973): The Structure of Ill-structured Problems, originally published in Artificial Intelligence

Vaghefi, M.R., Huellmantel, A.B. (1998): Strategic Management for the XXIst Century, Boca Raton 1998

Watson, T.J. (1995): Rhetoric, Discourse and Argument in Organizational Sense Making: A Reflexive Tale, Organization Studies, 16(5):805–821

Weisberg, R.W. (1993): Creativity: Beyond the Myth of Genius, Freeman, New York 1993



Towards a multi-input model, method and ILM (Ideas Lifecycle

Management) tool for innovation


O. Pialot 1,2* , J. Legardeur 1,3 , JF. Boujut 2

1 ESTIA - LIPSI, Bidart, France

2 INPG Grenoble University - G-SCOP, Grenoble, France

3 IMS - Bordeaux 1 University, Bordeaux, France

* Corresponding author: +33 559 438 471

Abstract: This paper focuses on the early design phases of innovative projects. More precisely, the question of the development and management of innovation opportunities is addressed here, ranging from a theoretical model and methodology to concrete tool perspectives. The key elements of our approach are the PTC multi-input model and the C-K theory, on which we provide a detailed background. Our model is based on three dimensions (concept, technology and potential) and highlights the need for interactions between them at both strategic and operational levels. Starting from the analysis of the three dimensions of the PTC model, different opportunities for innovation are identified. In order to develop each identified opportunity, the three dimensions have to be explored in specific workshops using C-K theory, resulting in a tree diagram. The paper also presents tool perspectives dedicated to structuring the preliminary exchanges among all stakeholders using criteria. This tool is mainly oriented towards the consolidation and diffusion of new ideas. Two case studies are finally proposed.

Keywords: Early design phases, Innovation process, PTC model, C-K theory.

I. Introduction

This paper focuses on the early design phases of innovative projects, which are one of the important challenges for industrial companies. Indeed, innovation involves complex socio-technical phenomena and processes, especially when new ideas for innovative concepts (such as products or services) are proposed. These innovation processes are complex because the first operations of innovative product development are not well-defined phases of the design activity: they are not well understood and combine different aspects, such as creativity, but also socio-technical negotiation among different stakeholders (i.e. design, marketing, suppliers, R&D, and others). In this paper, we propose a model to support innovation in the early design phases, combined with a methodological approach. The aim is to have control over the elements that contribute to the definition of the future innovation. New tool fragments are also proposed to support collaboration and foster innovation opportunities.

This paper is organized as follows. In Section 2, we review the existing models for innovation and early design phases, and we provide a detailed background on the PTC multi-input model and the C-K theory, since they are the key elements of our approach. In Section 3, we show how to exploit the PTC model for innovation by dividing its three dimensions into workshops and by connecting the results with C-K reasoning. Section 4 presents perspectives on innovation development and evaluation with respect to the criteria mobilized in an innovative design process. In Section 5, we show how to use our approach on the example of a heated surfing wetsuit, completed by a Web 2.0 tool application, before we conclude in Section 6.

II. Existing innovation models and approaches

In the field of economics, a variety of innovation theories have been proposed in the literature. In general, one can distinguish between two principal innovation models: the "science push" model (innovation pushed by science) and the "demand pull" model (innovation pulled by demand). These two models are mainly based on the two classical concepts of economics: supply and demand. However, they cannot be regarded separately, since both supply and demand have to be taken into account in order to understand and manage the innovation process (Mowery and Rosenberg 1979, Rothwell et al. 1988).

The innovation process is a complex phenomenon that is difficult to model. In the hierarchy model (Gomory 1989) (sometimes also called the "step by step" model), the innovation process is considered as a linear progression towards increasingly practical solutions. The Roozenburg and Eekels model (Roozenburg and Eekels, 1995) follows the same idea, but integrates many parallel components (production, product, and marketing). Kline and Rosenberg consider innovation as a central chain of design with iterative feedback loops that is interconnected with the knowledge sphere (Kline and Rosenberg, 1986). Figure 1 shows the Wheelwright and Clark model (Wheelwright and Clark, 1992): a new concept development process driven through a selective funnel that takes account of two dimensions, the offer (Technology) and the demand (Market).

Figure 1: Wheelwright and Clark model.

From the process point of view, Perrin shows that design is "the heart of the innovation process" (Perrin 2001). Innovation is thus a process that transforms something abstract into something concrete (Rodenacker 1970). Hatchuel distinguishes Innovative Design from Rule-based Design (Hatchuel 1996; Le Masson and Magnusson, 2002). In Innovative Design, the product identity is mobile and is progressively established, and the performance of the design process is based on mechanisms for exploring innovation spaces. This progressive development of innovation concepts agrees with the "actor-network" theory proposed by the sociologists Callon and Latour (Callon and Latour, 1986): innovation is the result of confrontations and compromises between various actors. During the innovation process, both the product and the actors' representations keep changing; this is the "translation model". To adopt an innovation, it is necessary to adapt and transform it.

The context of the early design phases has a high impact on the efficiency of the innovation process. The difficulties and weaknesses of the cooperation processes involved have been extensively studied (Merlo and Legardeur, 2004), especially when a new concept or a new idea is taken into consideration. During these early phases, exploring new alternatives, such as new technical concepts or technologies, is very difficult and off-putting, as the actors find themselves devoid of knowledge in certain areas and tend to remain faithful to traditional solutions that have already proven stable and reliable. Yet the unofficial negotiations based on immature and private information, which take place in ad-hoc and sometimes unpredictable settings, favour the divergence of concepts, freedom of exploration and mobility of identity. Moreover, the early design phases require the capacity to anticipate certain activities, such as knowledge acquisition. The difficulty is to find an efficient method for innovating.

In the following paragraphs, we provide a detailed background on the PTC multi-input model and the C-K theory, since they are the key elements of our approach.

The PTC multi-input model that supports innovation in early design phases

In 2006, we proposed the PTC multi-input model (Potential-Technology-Concept multi-input model) for the early phases of innovation processes (Pialot et al. 2006). Our model integrates both the technological dimension and the market dimension via the potential. The PTC model is illustrated in Figure 2.

The particular characteristic of the PTC model is the association of a concept with the potential of added value of one or more technologies. Its main objective is the synthesis and confrontation of the data coming from the technological survey, the market survey, and the different concepts of solution coming from the company's ideas portfolio. Furthermore, the PTC model aims (i) to provide a framework in the very early phases for an evaluation of the innovative opportunities and their associated risks, and (ii) to propose a flexible methodology for the exploration of innovation opportunities based on multiple inputs: the identification of the potential of added value, the emergence of technological opportunities, and the generation or collection of innovative concepts.

Figure 2. “Potential-Technology-Concept” model.

In the following, we define the three dimensions of the PTC model.

The potential of added value dimension models the existing gap between the product and the current or future customer expectations. The potential should take into account not only the approaches concerning the analysis of the customer's needs, but also their evolution. Therefore, the clear identification of the product added value induced by the potential is integrated not only in the analysis of the current need, but also in the analysis of changes (e.g. in usage or way of life). This dimension relates to the questions "Why?", "For whom?", "When?" and "Where?".

The technology dimension encompasses the technologies (e.g. materials, physical principles) and the production techniques for the new product development. The aim is to identify the opportunities offered by technology (e.g. mechanical, electronic, magnetic) that can open up the domain of "the possible". This dimension relates to the question "How?".

The concept dimension is related to the different ideas for the new concept of solution, issued from any creativity method, from an ideas box or toolbox, or from the company's portfolio. This dimension relates to the question "What?".



We propose to build a "PTC trihedron" relating to a product or a system. It is an association of "potential", "technology" and "concept" elements with a system, with the result that we define an innovation as follows:

Innovation = PTC Trihedron = 1 system + ∑ ( P elements + T elements + C elements )

It is very difficult to characterize and structure the phases of the innovation process in order to represent the complex dynamics of the informal exchanges that the different actors encounter. Moreover, it is quite hard to structure the richness (but also randomness) of the existing creativity methods. The contribution of the PTC model is to highlight this complex character and the need for combinations and confrontations of "multi-input" opportunities for innovation. The multi-input aspect of innovation regroups the potential, technology and concept dimensions. Their exploration provides many innovation opportunities to the company, and the architecture of the PTC model corresponds to the different origins of opportunities that exist in reality. For example, any stakeholder in the company can, for a given system, identify a problem or a change (potential dimension), identify the use of another material or a different process (technology dimension), or have a new idea of solution (concept dimension). This new intention is manifested in the proposition of a system with one or more new elements on one or more PTC dimensions. We obtain a new PTC trihedron that we then develop along the three dimensions.

In the PTC model, the three dimensions are linked, and the model aims to foster the networking between them: every new input proposition is analysed with respect to the three dimensions, adding elements from the Potential, Technology or Concept dimension to complete the definition of the PTC trihedron. This approach along the three PTC dimensions provides a framework for a first evaluation of innovative opportunities and limits the risks related to a future innovation (on the risks related to innovation, see (Halman et al. 2001)). The main objective is to select and analyse the different ideas during the early design phases (Figure 2).

During the early design phases, none of the dimensions is stabilized, and changes occur at any time. These changes must be quickly propagated to the other dimensions during the early development to foster decision-making with the most appropriate information and knowledge. The main goal is to propose a multi-dimensional analysis in order to foster the confrontation of points of view in the very early design phases, in line with Callon and Latour's theory. This model can also be used as a mapping tool in order to manage the innovation strategy of the company.

Figure 3. Combination of the "Potential-Technology-Concept" and Wheelwright and Clark models.



Figure 3 shows the Wheelwright and Clark model completed by the PTC model approach. The modifications appear in yellow: the multi-input beginning and the multi-dimensional development of a new innovation idea. We now present a methodology for developing the innovation identity.

The C-K theory for a conceptual exploration and development of the solution space

The C-K theory, initially proposed by Hatchuel in 1996 (Hatchuel 1996), is so named because its central proposition is a formal distinction between concepts (C) and knowledge (K). The starting point is an interpretable concept without any logical status, or, in other words, a comprehensible idea that cannot be directly materialized. Note that this starting concept, noted C0, is different from the concepts of the Concept dimension. In fact, C0 encompasses the two notions seen previously: "the system" and "the new intention". For a better understanding, consider "keys that cannot get lost" as an example: "keys" is the system and "that cannot get lost" is the new intention.


The principle of the C-K theory is to progressively add properties to the concept by switching between the concept space and the knowledge space. Adding properties eventually yields an interpretable "object" that can be materialized by a stakeholder. In C-K theory, we can therefore define an innovation as:

Innovation = C0 + ∑ Pi

On the one hand, if the property we add to a concept is already known in the knowledge space, we have a restricting partition. On the other hand, if the property we add is unknown in the knowledge space involved in the concept definition, we have an expansive partition. Creativity and innovation are due to expansive partitions of concepts.
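This distinction can be paraphrased in a minimal sketch (our own illustration, not part of C-K theory's formal apparatus): a property already present in the knowledge space K restricts the concept, while a property outside K expands it:

```python
def partition_type(prop: str, knowledge: set) -> str:
    """Classify the partition created by adding `prop` to a concept.

    A property already available in the knowledge space K yields a
    restricting partition; a property outside K yields an expansive
    partition, which is where creativity and innovation come from.
    """
    return "restricting" if prop in knowledge else "expansive"

# Hypothetical knowledge space for the wetsuit example
K = {"neoprene insulates", "batteries store energy"}
print(partition_type("neoprene insulates", K))         # restricting
print(partition_type("textile that heats itself", K))  # expansive
```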

Figure 4 illustrates the exploration of the expansive partition "safe hammering: hammer in the right hand while the left hand does not hold the nail", which is involved in the innovative design of the Avanti nail holder (Hatchuel et al. 2004).

Figure 4. An application of the C-K theory to the example of the Avanti nail holder.

The resulting tree diagram of the development of the initial concept highlights the exploratory character of the C-K theory. Some branches of the tree are cancelled, others are further developed. This formalism supports exploration through a conscious and progressive development of the different solution concepts starting from the initial concept. The stakeholders keep control over the development of the innovation concepts. On this point, the C-K theory differs from the "classical" creativity methods, where several concepts are first generated arbitrarily and only then evaluated. Moreover, the C-K theory keeps in memory not only the paths that have been followed, but also the mobilized knowledge and the concept expansions. For further details on the C-K theory, we refer the reader to (Le Masson et al. 2006).

III. From the PTC Model to a methodological approach to innovate

Starting from the analysis of the three dimensions of the PTC model (potential, technology, concept), different opportunities for innovation are identified. In order to develop each identified opportunity, every dimension has to be explored so as to confront the points of view, and thus to innovate. To realize the exploration in the concept, potential and technology dimensions, we propose to open workshops relating to the questions asked by each dimension: "Why?", "For whom?", "When?", "Where?", "How?" and "What?". Within the C-K reasoning, Hatchuel et al. propose to use design spaces. A design space is a limited working context that allows learning within the design process. This restriction of the reasoning, or, in other words, localized workshop, is created for a particular issue, and its conclusions are then reintegrated into the principal reasoning. The workshops that we propose are close to design spaces in this "zoom" spirit, but they are proposed from the outset and relate to the three dimensions. So, by starting from the initial proposition and exploring these different workshops, we obtain many properties or elements on the three dimensions relating to the innovation opportunity.

In C-K theory, the properties added to the initial concept C0 have no specific origin. In the PTC model, the elements from the three dimensions P, T and C that must be combined are not defined. So we propose the following new definition of an innovation:

Innovation = C0 + ∑ ( P Pi + T Pi + C Pi )

C0 encompasses the two notions "the system" and "the new intention". The progressively added properties are relative to the questions of the different workshops: "Why?", "For whom?", "When?", "Where?", "How?" and "What?". They are therefore relative to the three dimensions: Potential, Technology and Concept.
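The definition above can be paraphrased in code (a hypothetical sketch; the tags, names and the "keys" example values are ours): an innovation is the initial concept C0 plus the properties accumulated from the three dimensions, each tagged by the workshop question it came from:

```python
from collections import defaultdict

# C0 = the system + the new intention
c0 = {"system": "keys", "intention": "that cannot get lost"}

# Properties tagged by the dimension (P, T or C) and workshop question they came from
properties = [
    ("P", "For whom?", "absent-minded home owners"),
    ("T", "How?",      "Bluetooth low-energy tag"),
    ("C", "What?",     "key ring that beeps when called"),
]

def innovation(c0, properties):
    """Innovation = C0 + sum of the P-, T- and C-properties."""
    by_dim = defaultdict(list)
    for dim, _question, prop in properties:
        by_dim[dim].append(prop)
    return {"C0": c0, **by_dim}

result = innovation(c0, properties)
print(sorted(result.keys()))  # ['C', 'C0', 'P', 'T']
```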

The workshops related to the three dimensions are used throughout the entire design process and continuously "feed" a C-K reasoning. We recall that the C-K formalism supports the exploration reasoning through a conscious and progressive addition of properties to develop different solution concepts starting from the initial concept C0: the identity of the future innovation is progressively established, again in line with Callon and Latour.

Figure 5 illustrates how the properties issued from the three dimensions feed a C-K reasoning. The "tree diagram" form of the properties is not mandatory.



Figure 5. C-K reasoning resulted from the exploration in the three dimensions.

The limitation of the C-K theory is twofold: the origin of the initial concept C0 is not specified, and the exploration of the properties that feed the reasoning is not guided. The PTC model precisely fills these two gaps. It gives a framework for the exploration of innovation opportunities based on multiple inputs: the identification of the potential of added value, the emergence of technological opportunities, and the generation or collection of innovative concepts. Moreover, the PTC model induces the opening of different workshops based on these inputs, so as to obtain various properties from the three dimensions. The result of this process is better leverage on the elements that contribute to the identity of the future innovation. Each of the workshops allows all the stakeholders to work in the way they are used to, while being most inspired. For example, ergonomists and marketing people are used to working in the potential design space: they are concerned with the demand and the usage of the clients, and they are especially interested in the added values.

The existence of the three workshops throughout the entire process enforces the continuous exploitation of all three dimensions, which we are convinced is a prerequisite for innovation. Consequently, the knowledge and information obtained are rich and accurate enough to better orientate the choices in the early design phases of the innovation process.

IV. Towards a tool for innovative concepts management

Up to this point, the presented process progressively establishes the identity of the future innovation: adding properties yields an interpretable "object" that can finally be materialized by a stakeholder.

To refine the stabilized identity and to develop and evaluate each innovation opportunity, we propose to use the ID² software tool proposed by Legardeur (Legardeur et al. 2005). ID² is mainly oriented towards the synthesis and sharing of information about different concepts of solution. The principle is to compare the new proposition with other existing products or projects in order to emphasize and develop the added value and to define the specifications of the future innovation. For this, the ID² tool provides a collaborative platform for negotiation around a concept-criteria table: the different concepts to be compared are spread along the columns, and the criteria along the rows of the table. The multidisciplinary team enriches each concept with its knowledge and criteria (Garro et al. 1998).

The preceding explorations supply several criteria for the choice of the innovation. The idea is to exploit these identified criteria in the ID² tool. Figure 6 illustrates how the multiple identified criteria can be organized in the ID² software in order to contribute to the different evolutions and developments of the concepts.

Figure 6. Mobilization of the criteria in the ID² software tool.

At this step of the process, the identification of criteria is driven by ID². The stakeholders progressively supply criteria and thereby influence the refinement of the C-K reasoning and the development of the innovative solution. In this way, they control the development of the innovation. We can imagine a certain interactivity between the criteria mobilized in ID² and the phases of the C-K reasoning. The final aim would be to trace back the mobilized criteria that led to the definition and development of the chosen concept.

For every criterion, we propose to define a results objective in order to have (i) a related indicator of concept development or performance and (ii) an estimation of its reliability and maturity. The indicator thus tells us, through its value, whether the result comes from a formal test or from a vague estimation by a stakeholder. Indeed, reliability considerations take on a non-negligible importance in the early phases of innovation processes, where the information is less mature and the input is often unofficial, private or fuzzy. As a consequence, once the concept-criteria table in the ID² software tool is filled, it becomes an interactive tool for managing the definition and development of innovative concepts and provides a solid basis for choosing the innovation specifications and the right strategies.
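As an illustration only (the real ID² data model is not described in the paper; table contents and field names are hypothetical), such a concept-criteria table can be sketched as a matrix whose cells carry both a value and its reliability:

```python
# Columns: candidate concepts; rows: criteria.
# Each cell records a value and how it was obtained ("test" vs "estimate"),
# mirroring the reliability/maturity indicator discussed above.
table = {
    "weight (kg)": {
        "heated neoprene suit": {"value": 2.1, "source": "test"},
        "battery-vest add-on":  {"value": 1.4, "source": "estimate"},
    },
    "autonomy (h)": {
        "heated neoprene suit": {"value": 2.0, "source": "estimate"},
        "battery-vest add-on":  {"value": 3.5, "source": "estimate"},
    },
}

def reliable_cells(table):
    """List the (criterion, concept) pairs backed by a formal test."""
    return [(crit, concept)
            for crit, row in table.items()
            for concept, cell in row.items()
            if cell["source"] == "test"]

print(reliable_cells(table))  # [('weight (kg)', 'heated neoprene suit')]
```

Filtering on the `source` flag is one simple way a team could separate tested facts from stakeholders' early, fuzzy estimates.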



V. Two application case studies

So far, we have presented a new exploration method that integrates the three dimensions that have to be considered for innovation. The theoretical results have been tested on different examples, and we present here a student case study on the design of a "heated surfing wetsuit". After that, we present the tool prospects that we are developing through the Crowdspirit Web 2.0 application.

The “Heated surfing wetsuit” case study

Three workshops were opened. In the added-value potential workshop, the questions "Why?", "For whom?", "When?" and "Where?" were studied. We started with the identification of the various categories of clients that could potentially be interested in a heated surfing wetsuit. For each potential client, the usage value was analyzed by exploring the different situations involved in the given sport. The case study was restricted to a design space on diving wetsuits, which have been studied extensively, in order to understand their thermal behaviour. This study supplied design criteria for the wetsuits.

For the technology dimension, a flowchart of the potential technologies was first created. The aim was to analyze whether an existing technology could be used, and to discuss the advantages and drawbacks of each technology. Then, an expert group familiar with the textile industry was consulted in order to gain the most precise insight into the future of these materials. Finally, more locally, a design space on the physical contradiction between the thickness of a material and its thermal insulation was studied using some principles of the TRIZ method (Altshuller 1999). The technology catalogue associated with this method in CATIA's "Invention Machine Problem Manager" module was consulted as well, as shown in Figure 7. This figure also shows an extract of our reflections during the exploration of the technology dimension. Note that, from now on, the design spaces are indicated by a dotted frame in Figures 7 and 8.

Figure 7. Screenshot of CATIA's "Invention Machine Problem Manager" and an extract of the design space of the technology dimension.

In the concept dimension (Figure 8), different elements were modelled: surfing, the role of a wetsuit, and the notion of heat. These models brought up several questions and various problems, and several design spaces were created. As a consequence, we acquired a lot of knowledge and identified many criteria.



Figure 8. An extract of the design space of the concept dimension.

All these different and rich explorations of the three dimensions supply many different properties. These properties are combined to advance the reasoning about an innovative design, and a synthesis in the form of a C-K tree structure can be seen in Figure 9. We see that the stakeholders have control over the elements that contribute to the definition of the future innovation.

Figure 9. Reasoning in the form of a C-K structure.

We have now seen how to practically apply our proposition to manage the exploration of the three dimensions on the example of a heated surfing wetsuit. The students' work stopped at the C-K reasoning tree. The continuation would be the use of the ID² platform in order to manage the development of the innovation concept via criteria.

The “Crowdspirit” application

The aim of this case study is to develop tools from the defined methodology for a specific application: the Crowdspirit Web 2.0 site. Web 2.0 refers to community sites where web users exchange content (and to technological elements such as Ajax). The Crowdspirit business model is particularly based on crowdsourcing: a web community of users contributes to a result with added value, and the real contributors are rewarded with royalties. The Crowdspirit goal is the design of electronic products. The web users take part in a common work neither at the same time nor in the same place, so it is necessary to adapt our methodological tools to asynchronous collaboration.

We propose to work with mind maps. The mind map structure can be used both for creative, divergent exploration in the different workshops and for the convergent process of the C-K reasoning tree. Figure 10 shows the different mind maps.

Figure 10. Crowdspirit workshops and C-K reasoning in the form of a Mind map structure.

The father tree represents the first C-K reasoning built with the properties found in the different workshops at the level above. Some branches of the father tree are cancelled, others are further developed. We thus obtain the father tree, and a son tree whenever another branch is explored: this is the "Darwin" principle. For each tree, when the product identity is sufficiently established, anteriority and patent searches and a feasibility study are carried out by an external team. After that, the best contributors progressively supply criteria in the ID² platform tool in order to manage the development of the innovation concept. The goal is to obtain sufficiently definite specifications to manufacture and market the product through the specific electronics supply chain. Figure 11 shows the Crowdspirit screenshot for the final innovation concept development with ID² from a son tree.
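One way to picture this "Darwin" principle (a simplified sketch of ours, not Crowdspirit's actual implementation) is tree versioning: a son tree is a copy of the father in which a previously cancelled branch is reopened and explored:

```python
import copy

# Hypothetical father tree: a C-K reasoning with developed and cancelled branches
father = {
    "C0": "portable music player",
    "branches": [
        {"idea": "flash storage",  "status": "developed"},
        {"idea": "solar charging", "status": "cancelled"},
    ],
}

def spawn_son(tree, branch_idea):
    """Derive a son tree by reopening one cancelled branch for exploration."""
    son = copy.deepcopy(tree)  # the father is kept intact
    for branch in son["branches"]:
        if branch["idea"] == branch_idea:
            branch["status"] = "explored"
    return son

son = spawn_son(father, "solar charging")
print(father["branches"][1]["status"], son["branches"][1]["status"])
# cancelled explored
```

Keeping the father unchanged preserves the memory of all explored paths, in the spirit of the C-K formalism described earlier.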



Figure 11: Crowdspirit screen-shot of ID² linked to a Son tree.

VI. Conclusion

New product/process ideas are thus developed during periods of negotiation and search for solutions, which are often informal and unpredictable. At this level, the goal of these phases is first of all to bring together a certain amount of data and information in order to justify and consolidate the idea, while creating a configuration in which it is possible to launch an innovative project. The PTC (Potential-Technology-Concept) approach is one way to structure this complex process of emergence of a new innovative solution.

To have more control over the development of innovations during this process, the efficiency of the method and tools requires a clear strategic vision of the product and of the internal company politics, i.e. a "guide". Results from the field (Lauche 2003) reveal the importance of such internal politics during the early phases.


References

Altshuller G. (1999) TRIZ, The Innovation Algorithm: Systematic Innovation and Technical Creativity, translated by Lev Shulyak and Steven Rodman, Technical Innovation Center Inc., Worcester, MA.

Callon M., Latour B. (1986) « Les paradoxes de la modernité. Comment concevoir les innovations ? »,

Prospective et santé, 36.

Garro O., Brissaud D., Blanco E. (1998) Design Criteria. In the proceedings of the 9th Symposium on Information Control in Manufacturing INCOM'98, Advances in Industrial Engineering, Nancy-Metz, 24-26.

Gomory R. (1989) From the Ladder of Science to the Product Development Cycle. Harvard Business

Review, Nov.-Dec.

Halman J., Keizer J., Song M. (2001) Risk Factors in Product Innovation Projects. In Conference The Future

of Innovation Studies, Eindhoven University of Technology, the Netherlands, 20-23 September.

Hatchuel A. (1996) Les théories de la conception, Paris.



Hatchuel A., Le Masson P., Weil B. (2004) C-K Theory in Practice : Lessons from Industrial Applications. In

the proceedings of the 8th International Design Conference, Design 2004, Editor D. Marjanovic, Dubrovnik,

18-21 May, pp. 245-257.

Kline S., Rosenberg N. (1986) An Overview of Innovation. In Landau, R., Rosenberg, N. The Positive Sum,

National Academy Press, Washington.

Lauche K. (2003) Sketching a Strategy: Early Design in Different Industrial Sectors. In International

Conference on Engineering Design, ICED 03 Stockholm, Folkeson Gralen Norell Sellgren (Ed.), Design

Society, pp. 125-126.

Legardeur J. Fischer X., Vernat Y., Pialot O. (2005) Supporting Early Design Phases by structuring

innovative ideas: an integrated approach proposal. In the CD-Rom proceedings of the 15th International

Conference on Engineering Design, Melbourne, 15-18 august.

Le Masson P., Magnusson P. (2002) Towards an Understanding of User Involvement Contribution to the

Design of Mobile Telecommunications Services, 9th International Product Development Management

Conference, European Institute for Advanced Studies in Management and Ecole des Mines de Paris, (Ed.),

Sophia Antipolis, France: 497-511.

Le Masson P., Weil B., Hatchuel A. (2006) Les processus d’innovation : conception innovante et croissance

des entreprises, Lavoisier Paris, ISBN 2-7462-1366-4.

Merlo C., Legardeur J. (2004) Collaborative tools for innovation support in product design. In the

proceedings of the 8th International Design Conference, Design 2004, Editor D. Marjanovic, Dubrovnik, 18-

21 May, pp. 787-792.

Mowery D., Rosenberg N. (1979) The Influence of Market Demand Upon Innovation: A Critical Review of

Some Recent Empirical Studies, Research Policy, vol.8, pp. 102-153.

Perrin J. (2001) "Concevoir l'innovation industrielle, méthodologie de conception de l'innovation", CNRS

éditions, Paris, ISBN : 2-271-05822-8.

Pialot O., Legardeur J., Boujut J.F., Serna L. (2006) Proposition of a new model for early phases of

innovation processes. In the proceedings of the 9th International Design Conference, Design 2006, pp. 603-

610, Editor D. Marjanovic, Dubrovnik, 15-18 May.

Rodenacker W.G. (1970) Methodisches Konstruieren, Heidelberg, Berlin (second edition, Springer, New York, 1976); cited in Engineering Design, Springer-Verlag, London, 1988.

Roozenburg N.F.M., Eekels J. (1995) Product Design: Fundamentals and Methods, John Wiley & Sons.

Rothwell R., Schott K., Gardinier J.P. (1988) Design and the Economy: The Role of Design and Innovation in

the Prosperity of Industrial Companies, Design Council, London.

Wheelwright S.C., Clark K.B. (1992) Revolutionizing Product Development, The Free Press.



Using information technologies to improve the management of the French

healthcare system


C. Bourret 1,* , JP. Caliste 2

1 University of Paris Est (Marne-la-Vallée), France

2 University of Technology of Compiègne, France

* Corresponding author: (33) 01 49 32 90 02

Abstract: No developed country today seems satisfied with its health system. Most countries face overwhelming challenges, with increasing costs and deficits to master. In most of these countries, improving the management of the health system is linked to new information possibilities. Indeed, new uses of information may well provide a solution when coupled with fast-evolving information technology tools. France faces an added challenge due to its individualistic mindset and its highly compartmentalized health system. In what follows, we shall analyze the specificity of health data, and then address the different levels of data needs: macro (states or regions), meso (hospitals) and micro (physicians or patients). We will then point out the main challenge: how to facilitate the exchange and sharing of information in the short run, and how to collectively produce information in the long run, so as to improve management and build innovative and cooperative practices. We highlight the importance of building new innovative organizations across frontiers (interface organizations) and of developing cooperative practices. We shall analyze how the "réseaux de santé", or health networks, create collaborative work, with the central challenge of building trust from new uses of information and new communication approaches. These innovative approaches parallel the implementation of the Electronic Health Record (EHR) in the United Kingdom and of the "Dossier Médical Personnel" in France. Such approaches invigorate the relationships between organizations, physicians and patients, leading to added empowerment and responsibility amongst the actors involved.

I. Introduction

Today’s challenge in developed countries is “managing to do better” (Moore, 2000). England’s

Information for Health Programme of 1998 inspired two subsequent laws in France. The law of

March 2002 concerned the Rights of sick people and healthcare quality, while the law of August

2004 addressed Health Insurance reform. Both laws have highlighted a growing need to use

information more efficiently in order to improve the management of France’s healthcare system

(Villac, 2004). According to the US Government Reform Committee (2005), bringing the IT

revolution to Healthcare is the “Last Frontier” in the United States.

In what follows, we will begin by pointing out the specificity of the French healthcare system and of health data. We will then analyze the uses of information at three different levels: the micro level (doctors or patients), the meso level (hospitals) and the macro level (national or regional). Then we will present the main tools at stake: firstly the Electronic Health Record (EHR), or Dossier Médical Personnel (DMP), with comparisons with the English National Health Service (NHS), and secondly tariff aspects. Finally, we will analyze a French experiment - the Health Networks or "réseaux de

santé” - an important attempt to redress compartmentalization by building interfaces.

Our paper is based on years of firsthand experience through the founding of a professional joint

Master’s degree between the Universities of Marne-la-Vallée and Compiègne. Our approach is

interdisciplinary, linking complex systems to a constructivist vision centred on information-communication and quality management. We mainly use qualitative methods. These include

interviews with the main actors (doctors, hospital staff, ministerial and healthcare officers), observation of activities, as well as analysis of documents. In this way, we attempt to outline the different perceptions of the actors, which need to be reconciled. We mix theory and practice, using both field analysis and conceptual tools to build our case studies.


II. The challenge in France: How to cross frontiers in an overly compartmentalized Healthcare system?

According to V. Fuchs, the main challenge faced by American medicine is “to devise a system of

medical care that provides ready access at a reasonable cost” (Fuchs, 1998). In France,

however, the challenge points to an old and deep tension between the concerns of liberty and

equality. The French health system struggles to choose between individual and collective interests, while also addressing a vital need for data for decision-making (Fuchs, 1998).

As in many other developed countries, an over-compartmentalization of activities is often seen as

a primary cause of high costs and weak quality in France's health system. Indeed, France's

health system suffers a particularly worrisome case of compartmentalization, between the Health

Ministry (“Ministère de la Santé”) and Health Insurance (“Assurance Maladie”), between large,

"hospitalo-centric" hospitals and primary care, and between specialists and general

practitioners. Within the hospital structures themselves, there is added compartmentalization

between doctors and other professionals (e.g. nurses), and among treatment, care and social

goals. The patient is said to be torn between rival services of public or private hospitals, between

different health providers and between professions that are often in conflict. Each profession competes to defend its own specificity and power.

In the hospital, Glouberman and Mintzberg (2001) distinguish four separately-working worlds,

symbolized by four Cs: Cure, Care, Control, and Community. Cure relies on physicians, Care

depends on nurses, Control and administration are entrusted to managers, and Community

concerns boards and trustees. However, France is not alone. In the USA, Shortell et al. (1996)

highlight an excessive fragmentation, claiming that an integration of these various components is

imperative to each of them. The goal is to improve the quality of care and to contain (or better yet,

to cut) costs. Better information management is the key to reinforcing co-ordination among all

actors involved in the care delivery process.

According to the Fieschi report (2003) the individualistic French mindset makes it difficult to build

a culture of information and evaluation or assessment. The key challenge is first to facilitate information exchange, then information sharing, and in the long run the collective production of information, in order to improve management and to build innovative cooperative practices centred on patients, within an approach of management by processes (quality and traceability).

III. The specificity of health data

Patients' health and medical data are uniquely personal and demand rigorous measures of

confidentiality (privacy). It is subject to restrictive legislation, such as the “Health Insurance

Portability and Accountability Act” (HIPAA) in the United States, the “Commission Nationale de

l’Informatique et des Libertés” (CNIL) in France (in conformity with European Union directives), or

the "Commission d'Accès à l'Information" in Québec (Canada).

The aim of such legislation is to assure ownership, access, storage and responsibility in the use of patients' data. Solutions vary according to different national contexts. Sometimes conflicts arise

between different legislation measures, such as that between individual States in the US and the

American Federal Government (Bourret, 2004).

IV. Innovating information tools in a networked health system

According to Grimson (2000): “The present inability to share information across systems and

between care organizations… represents one of the major impediments to progress toward shared care and cost containment". Improving the healthcare system depends on health information at three different levels: macro (State or regional), meso (hospitals) and micro (doctors and patients).


At the State or regional level, efficiency is the main challenge. The difficulty here lies in mastering

costs and allocating resources to the meso and micro levels. Hospitals need to adopt a new activity-based pricing system (T2A or Tarification à l'Activité). This new system should result in a better allocation of resources. Another helpful move

would be the strengthening of certification policies and contracts in hospitals. Data is also

required for results analysis, goal assessment, and for the evaluation of health professionals’

practices, both at the individual and collective levels.

At the micro level, the French Dossier Médical Personnel (DMP) or Personal Medical Record will

introduce great changes in doctor-patient relations. The DMP is expected to ensure traceability,

co-ordination, transparency and quality of care. It is presented as “personal”, meaning it belongs

to the patients themselves; they decide who has access. The DMP is also presented as a

valuable opportunity for cooperation between the private and public sectors. It must be

coordinated with one or more other specific health records, such as cancer records (part of a Plan

Cancer), chemist records or health network records.

In May 2004, a new office was created within the US Department of Health and Human Services: the Office of the National Coordinator for Health Information Technology. The main US Health Maintenance Organization (HMO), Kaiser Permanente (20,000 physicians, 8 million patients), has highlighted the electronic medical record as the central element of a policy to improve the quality and efficiency of care. A standardized health record, planned for 2010, is intended to concern all US citizens.

The implementation of a national Electronic Health Record (EHR) is a priority for the British

National Health Service (NHS). It aims to establish a record for life (e.g. the ERDIP project:

Electronic Record Development and Implementation Programme). The Electronic Health Record

should be operational in 2008. In Spain, the Autonomous Community of Andalusia has tried out

the historia sanitaria or Diraya project for all the people of Andalusia.

In these and other countries, such experiments generally face common problems: data management and storage, financing (who pays?), project management, and delays in voting essential legal provisions. The road will be longer than expected. We must also consider linking the EHR with national health insurance cards, such as the Vitale card in France or the European card (whose implementation was decided in 2002). Here too, we face the problem of working across frontiers, since health policy remains a largely national affair (France, Spain, Germany, United Kingdom ...) and not a European one.

V. Common challenges: interoperability, coordination, coherence and a global approach to complex systems

Information Systems are the structuring element of Health Organizations and the essential

support for their evaluation. As such, Information Systems constitute an essential tool for building

new cooperative practices connected to information sharing. The Patient Electronic Health

Record (EHR) is a key enabler for eHealth (Villac, 2004) and a major component of Health

Information Systems (Fieschi, 2003). All the components will be interconnected through networks

such as the NHIN (National Health Information Network) project in the USA.

In the United Kingdom, the NHS Electronic Health Record (EHR) is part of a specific programme called "Connecting for Health". It comprises a strong national project management task force, spread across five regional clusters (North East; North West and West Midlands; Eastern; Southern; London).

What is the best level at which to build Information Systems in France? The Fieschi Report (2003) suggests that the regional level would be most suitable. In France, we must also mention the Réseau Santé Social, a technological network for transferring reimbursement data. This network carries electronic care sheets, or "feuilles de soins électroniques", specific to the French system of upfront "payment by act" with subsequent reimbursement of patients by Health Insurance (a system dating back to 1927). Care quality can also be improved through the SNIIR-AM, or "Système National d'Information Inter-Régimes de l'Assurance Maladie", one of the most important data warehouses in the world. The Web médecin service, used to check doctors' prescriptions, can also strengthen care quality by recording exactly what drugs the patient takes.

While different responses exist at national levels, the main goal is to overcome the obstacle of

excessive compartmentalization. In the end, most of the challenges tackled are roughly the same.

These challenges include data ownership, data access and the management of access authorizations (the issues of identification and of shared medical secrecy/privacy), doctors' collective and individual responsibility, and the appropriate level of data storage and management. The main

imperative is to achieve interoperability of both tools and data. As Moore and Fuchs have pointed

out, there is an overarching conflict between individual and collective goals.

J. Van der Lei (2002) has stressed that "applying information and communication technology

(ICT) to a medical domain is not merely adding a new technique, it radically changes processes

in that domain". He highlights the necessity of analyzing "feedback mechanisms".

Information and Communication Technology (ICT) has an impact on the attitudes of patients, in

large part because it alters traditional information asymmetries between patients and doctors.

Better informed, patients have now become far more demanding than before, more insistent on

flawless procedures, and more exacting with regard to their rights (leading to what one might call

the legalization of health). L. Sfez (2001) has even spoken of “the utopia of perfect health” – a

“utopia” in which these exacting patients demand increasingly formalized and contractual results.

ICT has also altered medical practice, which has in turn influenced the evolution of technologies

themselves. Didier Sicard speaks of “medicine without the body” (2002). All these changes lie at

the heart of our constructivist approach to understanding complex systems: analyzing the

makeup of social practices in the long run.

According to Anthony Giddens (1994) the success of these developments—so important to the

evolution of our societies—will depend greatly on our capacity to build a sense of trust, both in

technical tools (or “artifacts”) and in human interfaces. In creating interface organizations such as

Health Networks (réseaux de santé), we take an important step towards cross-compartmentalization, and thus towards trust-building amongst all involved actors. While this is particularly true for France, the French case bears much in common with many other countries.

VI. "Dialogic" Health Networks as innovative, integrated and "holographic" organizations
Networked organisations are significant examples of complex systems. According to Edgar Morin

and Jean-Louis Le Moigne (2003), this complexity can be analyzed through "dialogic" (double-logic) principles. "Dialogic" refers to "logics" that were long considered

opposites, but that are now being managed as complementary pairs. Some examples of

dialogical pairs include order - disorder, individual - collective, local - global, autonomy -

centralisation, or public - private.



France is unique in its Healthcare Networks system (réseaux de santé). These Networks

developed in the 1980s following the emergence of two different innovative approaches. Facing

the AIDS epidemic, primary care professionals were led to invent new practices of co-ordination

with hospitals and chemists. These practices became the first approach to Healthcare Networks.

The second approach concerns managers wishing to adopt the methods of American Managed

Care HMOs (Health Maintenance Organizations) in order to control costs better. In this case, we

could speak of treatment co-ordinated networks (réseaux de soins coordonnés). The April 1996

ruling encouraged experimentation with managed care networks, allowing for tariff innovations.

Since the end of 2002, a global financing of Healthcare Networks has become an alternative to

separately financing activities (i.e. primary care and hospital activities). We now speak of global

“réseaux de santé” (Health Networks). Indeed, the World Health Organization (WHO) has

declared that health is broader than mere treatment, since it also includes quality of life as well

as the social dimension of wellbeing.

Health Networks constitute a "holographic organization" (Shortell et al., 1996) or "organisation

hologrammatique” (Morin and Le Moigne, 2003). For Shortell, such an organization is not merely

the sum of its parts, but exists within each individual part. Shortell claims that “holography is the

antidote to runaway fragmentation and specialization”. The essence of the holographic

organization lies in its ability to embed the “whole” in each “part”. Thus, the goal consists in

working as “holistically” as possible in terms of knowledge, expertise, and information transfer.

The concept of the “holographic organization” is closely tied to that of “mass customization”.

“Mass customization” is achieved by developing services to meet the unique needs of each

patient. However, it does so in an efficient way, using relatively standardized support functions.

Training organizations are also available to both professionals and patients.

Health "holographic organizations" in networks may help master the whole set of changes in

health systems. Developed countries all share a common challenge: building ties for a community

of services by using a global approach to health. This global approach must be achieved in a

perspective of management of complexity, based on the co-ordination of public and private

actors. The essential aim is the management of complexity to build trust between different actors.

This trust should generate a group-oriented culture (collective identity) and develop more

efficient, integrated, and quality-driven organizations or what Shortell calls “building community”.

In France, we must cope with twin challenges: developing both an information culture and an evaluation culture. That is why building réseaux de santé is a crucial step for us.

Firstly, health professionals must get to know each other. Then, they must learn to exchange

information. Finally, they must learn to produce information together. The most difficult step here

is accepting the judgement of others with regard to highly individual practices, since doing so

demands a great change of mentalities. Constructing collectively shared practices will take time.

Patience is required, since everything can be called into question in an instant. Such is the nature of trust, which takes a long time to build but a short time to destroy.

In his keynote lecture at the ERIMA 2007 Symposium, Peter Allen remarked that all

change is “dialogic”. Change is made of both ruptures and continuities, of organizational

innovations and the overcoming of resistance to change itself. Technology is not everything. The

success of any such change depends on how well both human factors and technical factors are

taken into account, as well as on the synergy between these two types of factors. In the same lecture, Allen explained that "there is no best strategy, but there are good and bad

ecologies of agents”. What is essential is shaping bundles of new, cooperative practices which

will contribute to the emergence of new learning organizations. Such organisations will train

professionals, but will also be capable of integrating their experiences, like the specific

understanding of particular patients.



VII. Conclusion

New uses of information lie at the heart of important changes in managing the health systems of

developed countries. The challenge is particularly strong in France where individualistic mindsets

and compartmentalization weigh heavily. Building réseaux de santé may thus be a very important means of developing trust, both in technological tools (e.g. information systems, the DMP) and in human interfaces.


But we must not forget that the main goal is to improve health systems. We need to improve

services for patients, who become "empowered" actors responsible for their own health, particularly in France where the DMP is their own property. According to Shortell (1996), a patient in

a waiting room once aptly described the challenge, saying “I want to know what’s done to me is

really needed and is done as efficiently as possible”.


References

Bourret C. (2004) Data Concerns and Challenges in Health: Networks, Information & Communication Systems and Electronic Records, Data Science Journal, volume 3, pp. 96-113.

Fieschi M. dir. (2003) Les données du patient partagées : la culture du partage et de la qualité des

informations pour améliorer la qualité des soins, Rapport au ministre de la santé, 55 p.

Fuchs V.R. (1998) Who Shall Live ? Health, Economics, and Social Choice, World Scientific, 278 p.

Giddens A. (1994) The consequences of modernity, Polity Press, Cambridge, 1990; L'Harmattan, Paris.


Glouberman S., Mintzberg H. (2001) Managing the Care of Health and the Cure of Disease, Health Care

Management Review, pp. 56 – 84.

Grimson J., Grimson W., Hasselbring W. (2000) The SI challenge in Health Care, Communications of the ACM, vol. 43, n° 6, pp. 49-55.

Information for Health. An Information Strategy for the Modern NHS 1998 – 2005, 1998, 123 p.

Moore G.T. (2000) Managing to do better: general practice in the 21st century, London, Office of Health

Economics, 62 p.

Morin E., Le Moigne J.-L. (2003) L’intelligence de la complexité, L’Harmattan.

Sfez, L. dir. (2001) L’utopie de la santé parfaite, PUF, 517 p.

Shortell S.M. et al. (1996) Remaking Health Care in America. Building Organized Delivery Systems,

Jossey Bass, San Francisco.

Sicard, D. (2002) La médecine sans le corps, Plon.

Van der Lei J. (2002) Information and communication technology in health care: do we need feedback?,

International Journal of Medical Informatics, vol. 66, Issues 1-3, 75-83.

Villac M. (2004) La “e-santé”: Internet et les TIC au service de la santé, in La société de l’information,

rapport Curien N. et Muet P.-A., La Documentation française, pp. 277-299.


Web sites

Andalusia (Spain):

Collectif Interassociatif Sur la Santé (CISS):

Commission d'Accès à l'Information du Québec:

Commission Nationale de l'Informatique et des Libertés (CNIL):

Conseil National de l'Ordre des Médecins:

Coordination Nationale des Réseaux:

Groupement d'Intérêt Public (GIP) Dossier Médical Personnel (DMP):

Haute Autorité de Santé:

Healthcare Information and Management Systems Society:

Health Insurance Portability and Accountability Act (HIPAA):



Kaiser Foundation:

Ministère des Solidarités, de la Santé et de la Famille:

National Institutes of Health (US Department of Health and Human Services):

National Health Service:

Organisation pour la Coopération et le Développement Economique (OCDE):

Résumé du rapport Vers des systèmes de santé performants (2004):

Santé Canada – Health Canada:

Union Régionale des Médecins Libéraux d'Ile de France:



Needs for methods and models rationalizing work of the actors of an

organization implied in an innovation process:

basic principles and examples


C. Kolski 1 , E. Adam 1 , S. Bernonville 1,2 , R. Mandiau 1

1 LAMIH-UMR CNRS 8530, University of Valenciennes and Hainaut-Cambrésis, France

2 EVALAB, CHR Lille, France

Abstract: Innovation processes can be considered first of all as business processes; these generally relate to more or less complex organizations. It is possible to understand and improve these processes by exploiting methods and models coming from various fields. Basic principles and examples are provided in this paper.


Keywords: innovation process, innovative services and processes, models, innovative organization

modelling, tools.

I. Introduction

Innovation is a wide, rich and multi-disciplinary field which has been the object of many studies and works, examining it from various points of view: institutional, scientific, organisational, and so on (Hage and Meeus, 2006; Allen and Henn, 2006). We approach it from the angle of organization modelling, with the aim of improving the effectiveness of the organization's actors and of integrating assistance systems into the organization (relating, for example, to economic intelligence). Let us note that economic intelligence and innovation are closely

dependent (MEDEF, 2006). Indeed, if one refers to the key stages of a successful innovation approach 1: (1) finding and conceiving the innovation (analyzing the innovation potential, defining a strategy, gathering information on technologies, markets and competitors, seeking financing, finding solutions, reinforcing the innovation potential, innovating in partnership); (2) evaluating the innovation (checking the freedom of exploitation, integrating standards and laws, respecting the environment, studying technical feasibility, validating the market assumption); (3) developing the innovation (ensuring the viability and the financing of the innovation, using technology transfers, establishing a development plan, creating an innovative company, setting up a technological watch), it appears that tools coming from economic intelligence can make a major contribution. Let us also note that innovation processes include the study and proposal of innovative processes, i.e. processes benefiting from an innovation arising from new solutions or new services. For example, in the hospital field, new innovative processes are appearing that take advantage of new information technologies: the actors of the organization can be seen as nomadic users of new devices likely to help them in their activities; however, new problems also appear, and it is important to identify and solve them (Beuscart-Zéphir et al., 2005). Indeed, when the global approach used is participative, innovation processes and innovative processes are, or must be, closely dependent.

The paper first focuses on the analysis, modelling and simulation of organizations implied in such processes, these processes being seen as business processes (from the perspective of the quality standard ISO 9000, generation 2000; cf. Mathieu, 2002). Then, we outline a set of ideas related to models issued from multi-agent systems. Models from Software Engineering (SE) and

Human-Computer Interaction (HCI) are also currently studied with a view to organization modelling; a concrete illustration is provided. A conclusion and research perspectives are given in the last part of the paper.

1 According to:

II. Analysis, modelling and simulation of organizations implied in innovation processes


The human factors play an essential part in the effectiveness of innovation processes in companies. The majority of the human organizations committed to such processes are based on the creation, handling and exchange of information, knowledge and documents (digital, paper…) between the actors constituting the organization. The emergence of new technologies and powerful, simple computerized tools often contrasts with the lack of methods to implement and integrate them in the organization concerned. In addition, according to Adam et al. (1998), while solutions are sometimes found to solve local problems of a generally technical nature, there are no systematic ways to solve complex organisational problems where the human factors are critical.

The analysis, modelling and simulation of an organization (in both its static and dynamic aspects; figure 1) often make it possible to identify blocking factors, errors, wasted time, and so on. Initially, after identification of the principal objectives of the organization, the analysis must make it possible to specify the roles of the various actors and to describe the tasks assigned to each role. The analysis of the external interfaces must also be carried out in order to identify the inputs and outputs of the organization, for example the interaction links with suppliers, customers or

Engineering (UML models, SADT actigrams …) or derived from existing models, can be very

efficient (Bernonville et al., 2005). The selected models must for example reflect the importance

of the data in the human organization, to be able to underline the points/places of communication

and co-operation. They will have to be presented at the actors, in order to propose in a

participative way one or several computerized solutions as well as solutions related to the

organization itself. Simulation must then play a major role, by exploiting the potentialities of the

selected models. For example, the simulation based on the use of Petri Nets can make it possible

to model the interactions between actors finely and to highlight possible improvements

concerning them: during a study concerning organizations engaged in innovation processes in a

large company, simulations have led to decrease several weeks of the durations of certain

processes related to industrial patents (Adam, 2000). In the healthcare field, and particularly in

projects concerning integration of new interactive systems in the process of therapeutic

prescription, the use of UML activity enabled us early in the project to highlight important

differences in the role of the nurses, and thus to identify potential problems of co-operation

between various actors of this process (Beuscart-Zéphir et al., 2005 ; Kolski and Bernonville,

2006) ; an illustration will be given in the fourth part. This global approach goes in the same

direction as the quality standard ISO 9000, generation 2000, recommending the improvement of

the whole of the processes in companies.
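To give a concrete feel for how such a Petri-net simulation can expose waiting and parallelism between actors, the short Python sketch below implements a minimal net and fires it over a hypothetical fragment of a patent process. The place and transition names are invented for the example; they are not taken from the cited study.

```python
# Minimal Petri-net simulator: places hold tokens, transitions consume
# tokens from input places and produce tokens in output places.

class PetriNet:
    def __init__(self, marking):
        self.marking = dict(marking)   # place -> token count
        self.transitions = {}          # name -> (inputs, outputs)

    def add_transition(self, name, inputs, outputs):
        self.transitions[name] = (inputs, outputs)

    def enabled(self, name):
        inputs, _ = self.transitions[name]
        return all(self.marking.get(p, 0) >= n for p, n in inputs.items())

    def fire(self, name):
        if not self.enabled(name):
            raise RuntimeError(f"transition {name} is not enabled")
        inputs, outputs = self.transitions[name]
        for p, n in inputs.items():
            self.marking[p] -= n
        for p, n in outputs.items():
            self.marking[p] = self.marking.get(p, 0) + n

# Hypothetical patent-process fragment: an engineer drafts a disclosure,
# a legal review and a management review then run in parallel, and
# filing requires both reviews to be finished.
net = PetriNet({"idea": 1})
net.add_transition("draft", {"idea": 1}, {"draft_done": 2})  # one token per reviewer
net.add_transition("legal_review", {"draft_done": 1}, {"legal_ok": 1})
net.add_transition("mgmt_review", {"draft_done": 1}, {"mgmt_ok": 1})
net.add_transition("file", {"legal_ok": 1, "mgmt_ok": 1}, {"filed": 1})

net.fire("draft")
assert net.enabled("legal_review") and net.enabled("mgmt_review")  # reviews may run in parallel
net.fire("legal_review")
assert not net.enabled("file")  # filing is blocked until both reviews are done
net.fire("mgmt_review")
net.fire("file")
print(net.marking["filed"])  # -> 1
```

Playing such firing sequences (or exploring all reachable markings) is what makes it possible to detect deadlocks and needless waiting between actors before reorganizing the real process.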


[Figure: actors implied in the innovation processes feed a pipeline of Analysis, Modelling, Simulation and Assistance tools design]

Figure 1. Global principle of organization analysis, modelling and simulation

III. Modelling of organizations for innovation: the potential contribution of agent-oriented models

The comprehension and improvement of innovation processes in companies depend, in our opinion, on the description and study of the organizations involved. We think that the multi-agent systems (MAS) research field can propose promising solutions for such studies (Mandiau et al., 2002; Boissier et al., 2005). In particular, organizations can be defined as a set of roles concerning the various actors and their interactions.

Multi-agent oriented models for human organizations

Based on the concept of agent, and generalizing an organization as one able to integrate human agents as well as software agents, it seems that two possible axes can bring methodological elements to the study and global comprehension of such systems.

The first axis consists in setting up role plays in which the various actors try to explain their decision-making steps, in order to improve reciprocal understanding of each other's objectives for a better convergence of decisions and working methods. From these role plays, conflict situations can be highlighted and then solved; new ideas can appear with respect to the functions, tasks or constraints associated with each role (one's own and others' in the organization). For example, the work of Guyot and Shinichi (2006) proposes situations applied to company processes or to collective behaviours. In the same way, the CIRAD (cf. ) carries out similar work for human organizations in developing countries; the objective is to improve their effectiveness by (1) identifying and discussing problems without inhibition due to hierarchy, thanks to being immersed in a game, and (2) giving an outline of the global vision (non-existent in reality because, for example, of ignorance of the overall functioning of their environment). We think that such approaches can shed new light on the improvement of innovation processes, but our research does not relate to this axis.

The second axis consists in the analysis and improvement of existing organizational structures. The multi-agent systems research field addresses this particularly from the angle of multi-agent organizations. Within this framework, work on the modelling of holonic systems

(which can be seen as particular multi-agent systems) can bring interesting contributions and

points of view. Holonic systems were proposed by Arthur Koestler around 40 years ago (Koestler,

1969). The underlying principle is the fact that, in real life, an entity must be considered both as a

whole made up of other entities and as being part of a set. Koestler’s ideas have already been

applied in various fields, notably in Intelligent Manufacturing Systems (in order to form one of the


models on which the factory of the future could be built), robotics, transport planning, cognitive

psychology, and so on. A Holon is defined by Koestler as being a part of a whole or of a larger

organization, rigorously meeting three conditions: to be stable, to have a capacity for autonomy

and to be capable of cooperating. Several conferences are now dedicated to such new multiagent

and holonic systems and organizations, see for instance (Marik et al., 2005). We have

exploited the holonic concepts for holonic modeling of organizations implied in innovation

processes; the application field concerned patent rights in the chemical domain (Adam, 2000 ;

Adam et al., 2001) ; the objective was to provide the actors of the organization with computerized

tools adapted to their tasks (Adam and Lecomte, 2003 ; Adam and Mandiau, 2005).

Design of a MAS within a human organization: application to an Information Multi-Agent System

Our research aims to set up an information management assistance system in watch cells or in laboratories. In order to take human factors into account, such as the notion of the group or human-machine cooperation, we have developed a method, AMOMCASYS (Adaptable Modelling Method for Complex Administrative Systems), to design and set up multi-agent systems within human organizations, more precisely within the cooperative processes of these organizations (Adam, 2000). We have reused this method to develop multi-agent systems that help cooperative information management within a technological watch team. The main advantage of our method, and of our system, is that it takes into account the cooperation between the actors of workflow processes. Indeed, we have noticed that most human organizations are based on a holonic model (each part of the organization is stable, autonomous and cooperative, and is composed of sub-holonic organizations for which it is responsible), and we have built our method by integrating these notions.
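This holonic principle can be sketched in a few lines of Python; the class and method names below are purely illustrative (they are not part of AMOMCASYS or of our system):

```python
class Holon:
    """A holon is simultaneously a whole (it contains sub-holons)
    and a part (it belongs to a larger holon)."""

    def __init__(self, name, parent=None):
        self.name = name
        self.parent = parent       # the larger organization this holon belongs to
        self.sub_holons = []       # the sub-organization it is responsible for
        if parent is not None:
            parent.sub_holons.append(self)

    def cooperate(self, other, task):
        # Cooperation: share a task with a peer holon (here, just report it).
        return f"{self.name} cooperates with {other.name} on {task}"


# A department holon composed of two team holons: each team is autonomous,
# yet part of (and stable within) the department.
dept = Holon("quality-dept")
team_a = Holon("team-a", parent=dept)
team_b = Holon("team-b", parent=dept)
assert team_a.parent is dept and len(dept.sub_holons) == 2
print(team_a.cooperate(team_b, "document review"))
```

The recursion stops wherever convenient: any holon can itself be opened up into sub-holons without changing the interface seen from above.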

Regarding system modelling, the criteria that guided the proposition of AMOMCASYS are: clarity (essential when confronting the models with the actors of the system), and the representation of the communicated objects (the documents), the data flows, the cooperation between actors, and the responsibility levels. AMOMCASYS therefore uses four models: a data model (the UML data model), a data flow model (adapted actigrams from the SADT method, which allow responsibility levels to be represented), a data processing model (the processing model of the CISAD method), and a dynamic model (parameterized Petri nets). Each of these models brings a complementary view of the modelled system. They are not all necessary: in some cases, it is possible to use only a data flow model or a data processing model. It is in this sense that AMOMCASYS is an adaptable method; in all cases, however, it is indispensable to build the data model, representing all the data, or the most important types of data, used during groupware processes. Before setting up groupware software, AMOMCASYS proposes that the actors of complex administrative processes simplify them (that is to say, reduce the number of communications, database duplications, data controls, and so on). It was then necessary to provide process managers (department managers and/or quality managers) with an easy-to-use CASE tool (a Visual Basic layer on top of the commercial software VISIO) allowing them to model the processes.

Three steps are necessary to set up a MAS with AMOMCASYS (cf. figure 2): (1) an analysis has to be done in the department; (2) the processes where the multi-agent system has to be set up are then modelled using the data model and the dataflow model (and sometimes the data processing model). Actors are confronted with the models in order to involve them in the project and to validate the models. Some organisational optimisations of the processes are sometimes made at this stage (for example, the time for dealing with one procedure involving about 15 actors was halved by improving cooperation and increasing the responsibilities of the actors). (3) Finally, the data exchanges and the working mechanism of the multi-agent system are modelled with the processing model, and the data model is used to represent the classes that compose the MAS. A simulation of the human activities can be used to determine whether to use agents at particular points of the process.

ERIMA07’ Proceedings
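The three-step setup can be summarised as a pipeline; the following sketch is only an illustration of the sequence (the function names and data shapes are hypothetical, not AMOMCASYS artefacts):

```python
def analyse_department(department):
    """Step 1: field analysis - identify the actors and the cooperative processes."""
    return {"actors": department["actors"], "processes": department["processes"]}

def model_processes(analysis):
    """Step 2: build the data and dataflow models, confront them with the actors,
    and simplify the processes (here, drop steps the actors agreed to remove)."""
    return {"data_model": analysis["actors"],
            "dataflow": [p for p in analysis["processes"] if p["keep"]]}

def design_mas(models):
    """Step 3: derive the agent classes and their exchanges from the models
    (here, one assistant agent per actor)."""
    return [f"agent-for-{a}" for a in models["data_model"]]

# Hypothetical department with one process eliminated during simplification.
department = {"actors": ["marketing", "design"],
              "processes": [{"name": "quotation", "keep": True},
                            {"name": "duplicate-entry", "keep": False}]}
agents = design_mas(model_processes(analyse_department(department)))
print(agents)
```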

[Figure: the three steps of the method, involving the actors of the department — analysis (analysts), activity and processing modelling (modeller), then design and simulation (designer), based on the data model and processing model.]

Figure 2. Steps of the AMOMCASYS method

Although the definition of our MAS structure has been facilitated by the use of holonic principles, the modelling of the system organization and the characterization of the functionality of the agents remain problematic. We propose a specification in two stages: the first stage concerns the individual functioning of each role played by the holonic agent; the second concerns the functioning of the group, describing communications between agents and between actors and agents. Regarding the design of the MAS to be integrated into the human process, the AMOMCASYS data model allows us to represent the principal Holon class (which describes the general agent structure) as well as the classes associated with knowledge of the environment (representation of the process, the actor, the workstation, the superior, the subordinates). Each holonic agent plays several roles, which implies cooperation between holonic agents and sometimes between the agents and the actors (the users). The models of the AMOMCASYS method can represent this cooperation (cf. figure 3).


[Figure: users B and C, each assisted by a software agent; the agents query a database (Oracle, ...), filter the data, and display first results; depending on the results, the request may be modified, points added to or removed from the search engine, and the results sorted.]

Figure 3. Example of the integration of software agents into a cooperative process

Figure 3 presents the integration of software agents into an information retrieval process of a technological watch team. Each agent is linked to an actor. The agents search for information, filter it, compare it and transmit it to the actors, who check it before recording it in a database. The integration of the agents into the process has been done in cooperation with the actors, using the dataflow model, and corresponds to the second step of our method. Thus each agent helps the user to whom it is dedicated to search for relevant information and to communicate it to other actors. In order to maintain or create feelings of community or group among the actors, which are often lost with the use of new technologies (individuals become isolated at their workstations), we have proposed to develop self-organizing capacities in order to generate communities of CIASTEWAs that answer the same kinds of requests. This reorganization is indicated to the users in order to encourage them to cooperate, if they wish, with other users having the same centres of interest.
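The filtering and self-organization described above can be illustrated with a small sketch; the functions and data shapes below are hypothetical simplifications of the agents' behaviour, not the actual system:

```python
from collections import defaultdict

def filter_documents(documents, interests):
    """Each assistant agent filters the retrieved documents against its
    user's centres of interest before displaying the results."""
    return [d for d in documents if interests & set(d["keywords"])]

def form_communities(agents):
    """Self-organization sketch: agents whose users share a centre of
    interest are grouped; the grouping can then be suggested to the users."""
    groups = defaultdict(list)
    for name, interests in agents.items():
        for topic in interests:
            groups[topic].append(name)
    # Only keep topics actually shared by at least two agents.
    return {t: members for t, members in groups.items() if len(members) > 1}

docs = [{"title": "MAS survey", "keywords": ["agents", "holons"]},
        {"title": "Steel patents", "keywords": ["chemistry"]}]
print(filter_documents(docs, {"agents"}))

# agent-A and agent-B share the topic "holons" and form a community.
agents = {"agent-A": {"agents", "holons"}, "agent-B": {"holons"},
          "agent-C": {"chemistry"}}
print(form_communities(agents))
```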

So, to specify a cooperative information agent system within a human organization, we follow three steps: a step of analysis and modelling of the human organization; a step of modelling the insertion of agent systems into the human organization; and a step of design of the MAS. The first prototype of the MAS that we proposed for the technological watch department was set up within a few months and was particularly well accepted by the actors, thanks to the participative design that we propose. We are currently developing a more flexible multi-agent system, integrating most of the holonic concepts, in order to let the agents choose among a set of strategies according to the system goal, their roles and their personal goals. Agents are able to add, remove and change their roles; delegate tasks to assistants or to agents of their teams; and ask for new assistants. To manage the roles and agents of the multi-agent organization, we use the notion of Holonic Resource Agents (which act like human resource managers). The flexibility that we give to our holonic agents allows a role to have variations of its definition in sub-parts of a large system.
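A minimal sketch of this role management, with illustrative class names (not the actual system's API):

```python
class HolonicAgent:
    """An agent that can add, remove and change roles, and delegate tasks."""

    def __init__(self, name):
        self.name = name
        self.roles = set()
        self.assistants = []

    def add_role(self, role):
        self.roles.add(role)

    def remove_role(self, role):
        self.roles.discard(role)

    def delegate(self, task, assistant):
        # Delegation: the assistant takes on the task as one of its roles.
        self.assistants.append(assistant)
        assistant.add_role(task)
        return f"{self.name} delegated {task} to {assistant.name}"


class ResourceAgent:
    """Acts like a human-resources manager: provides new assistant
    agents on request."""

    def __init__(self):
        self.count = 0

    def provide_assistant(self):
        self.count += 1
        return HolonicAgent(f"assistant-{self.count}")


hr = ResourceAgent()
manager = HolonicAgent("manager")
helper = hr.provide_assistant()
print(manager.delegate("filter-news", helper))
```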

Figure 4. Screenshot of a CIASTEWA



IV. Modelling of organizations for innovation: potential contribution of SE and HCI methods and models to create common work supports helping the integration of innovative systems in complex organizations

Our first approach to organization modelling, described in the preceding part, explicitly exploited agent-oriented models. In a second approach, we study in a global way the contribution of methods and models from Software Engineering and Human-Computer Interaction, with the introduction of new interactive systems in hospitals as application framework. Various types of innovative systems are currently planned to support the organization concerned with the ordering - dispensation - administration activity; this activity is complex and involves risks. For the sake of concrete illustration, we are primarily interested here in the nurse's management of the sampling requests (blood tests, biological analyses) ordered by the physician for several patients.

Following observations carried out in the field, the ergonomists collected information on the current situation in a hospital. In addition, they worked in collaboration with an IT company specialized in CPOE (Computerized Physician Order Entry) applications and studied the possible integration of future software (in its design phase) within the current organization. They analyzed mock-ups and functional specifications in textual form provided by the company. The ergonomists then wished to describe both the current situation observed and the possible situation with the software suggested by the company, in order to highlight the possible differences. This information could be transmitted to other actors for analysis. However, the methods and models used by the ergonomists are too limited to describe this type of situation in depth. We therefore propose the use of a model from Software Engineering that meets the needs of the ergonomists.

Figures 5 and 6 present two common work supports built using the UML activity diagram. The first diagram (figure 5) illustrates the task chaining carried out by each actor or system in the situation observed in the field, while the second diagram (figure 6) illustrates the task chaining carried out by each actor or system in a possible situation with the future software. The comparison of the two supports makes it possible to highlight organisational changes to take into account (sets circled and numbered in figures 5 and 6). The first set (numbered 1 in figures 5 and 6) shows that there would no longer be paper orders and that the software would manage the orders keyed in by the physician (grouping them, creating lists for the nurse). The second set (numbered 2 in figures 5 and 6) shows that the nurse would no longer key in the sampling requests; the nurse would only have to consult, on screen, the list of samplings to be carried out. The third set (numbered 3 in figures 5 and 6) shows that the nurse would have an additional stage: he or she must first validate the samplings carried out and then validate their dispatch.
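Such a comparison of two task chains can be automated in a simple way; the sketch below, with hypothetical task names paraphrasing the figures, computes per-actor differences between the current and future situations:

```python
def diff_task_chains(current, future):
    """Highlight, per actor, the tasks removed from the current organization
    and the tasks added by the future software."""
    changes = {}
    for actor in set(current) | set(future):
        before = set(current.get(actor, []))
        after = set(future.get(actor, []))
        changes[actor] = {"removed": sorted(before - after),
                          "added": sorted(after - before)}
    return changes

# Hypothetical paraphrase of figures 5 and 6.
current = {"physician": ["write paper order"],
           "nurse": ["key in sampling request", "carry out sampling"]}
future = {"physician": ["key order into CPOE"],
          "nurse": ["consult sampling list", "carry out sampling",
                    "validate samplings", "validate sending"]}
print(diff_task_chains(current, future)["nurse"])
```

The output directly lists the organisational changes a project manager would have to take into account (extra validation stages for the nurse, suppression of the paper order for the physician).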

In conclusion, the comparison of these two models brings out important changes in the organization which could have consequences on the actors' workload (e.g. more or fewer tasks to perform), on the supports used (for instance the removal of the paper media), and so on. This is why these supports can help the various actors to understand the work situations, to take organisational decisions, and to design interactive applications adapted to the work situations.



Figure 5. Example of common work support completed using the UML activity diagram

(observed current situation)



Figure 6. Example of common work support completed using the UML activity diagram

(possible future situation)

We have shown a concrete example of a common work support. This support contributes to the communication between the various actors of a project concerning the design of new CPOE-type systems. Other supports were produced for other specific cases concerning the ordering - dispensation - administration activity for drugs in hospitals (cf. Bernonville et al., 2006).

V. Conclusion

For the understanding and improvement of innovation processes, the use of adapted methods and models proves to be necessary. The Software Engineering and Distributed Artificial Intelligence (multi-agent systems) fields offer a wealth of possibilities on this subject; some of them can be exploited directly, others adapted. We have used and/or adapted several models in various fields (involving complex organizations) in connection with innovation or innovative processes. Our research perspectives relate to the study of new models and methods and their application in various domains. We also wish to gather as many models as possible into a single tool; this tool would allow the creation of common work supports dedicated to the various actors of a project (whatever the domain) and meeting their needs.


The present research work has been partially supported by the SOLVAY Company, the Ministère de l'Education Nationale, de la Recherche et de la Technologie, the Région Nord-Pas de Calais and the FEDER (Fonds Européen de Développement Régional), during different projects (such as SART, MIAOU or EUCUE). The authors gratefully acknowledge the support of these institutions. They also thank Emmanuel Vergison for his numerous remarks concerning different aspects presented in the paper.


Adam E. (2000). Modèle d’organisation multi-agent pour l’aide au travail coopératif dans les processus d’entreprise : application aux systèmes administratifs complexes. PhD Thesis, University of Valenciennes and Hainaut-Cambrésis, France, January.

Adam E., Kolski C., Vergison E. (1998). Méthode adaptable basée sur la modélisation de processus pour

l'analyse et l'optimisation de systèmes coopératifs dans l'entreprise. In M.F. Barthet (Ed.), Actes du 6ème

Colloque ERGO IA'98 Ergonomie et Informatique Avancée, ESTIA/ILS, Biarritz, pp. 270-279, janvier.

Adam E., Lecomte M. (2003). Web interface between users and a centralized MAS for the technological

watch. In D. Harris, V. Duffy, M. Smith, C. Stephanidis (Ed.), Human-Centred Computing: cognitive, social

and ergonomic aspects, Mahwah, New Jersey, pp. 629-633.

Adam E., Mandiau R., Kolski C. (2001). Application of a holonic multi-agent system for cooperative work to

administrative processes. Journal of Applied Systems Studies, 2, pp. 100-115.

Adam E., Mandiau R. (2005). A Hierarchical and by Role Multi-agent Organization: Application to the

Information Retrieval. In F.F. Ramos, V. Larios Rosillo, H. Unger (Ed.), Advanced Distributed Systems: 5th

International School and Symposium, ISSADS 2005, Guadalajara, Mexico, January 24-28, 2005, Revised

Selected Papers. LNCS, Springer, pp. 291-300.

Allen T.J., Henn G.W. (2006). The Organization and Architecture of Innovation, Managing the Flow of

Technology. Butterworth-Heinemann.

Bernonville S., Kolski C., Beuscart-Zéphir M. (2005). Contribution and limits of UML models for task

modelling in a complex organizational context: case study in the healthcare domain. In K.S. Soliman (Ed.),

Internet and Information Technology in Modern Organizations: Challenges & Answers, Proceedings of The

5th International Business Information Management Association Conference (December 13 - 15, 2005,

Cairo, Egypt), IBIMA, pp. 119-127, ISBN 0-9753393-4-6.

Bernonville S., Leroy N., Kolski C., Beuscart-Zéphir M.C. (2006). Explicit combination between Petri Nets

and ergonomic criteria: basic principles of the ErgoPNets method, Proceedings to the 25th European Annual

Conference on Human Decision-Making and Manual Control (EAM) 2006, Valenciennes, France.

Beuscart-Zéphir M., Pelayo S., Anceaux F., Meaux J-J., Degroisse M., Degoulet P. (2005). Impact of CPOE

on doctor-nurse cooperation for the medication ordering and administration process. International Journal of

Medical Informatics, 74, pp. 629-641.

Boissier O., Padget J., Dignum V. (2005). Coordination, Organizations, Institutions, And Norms in Multiagent

Systems: Aamas 2005 International Workshops on Agents, Norms, And Institutions for Regulated

Multiagent Systems. Springer-Verlag.

Guyot P., Shinichi H. (2006). Agent-Based Participatory Simulations: Merging Multi-Agent Systems and

Role-Playing Games. Journal of Artificial Societies and Social Simulation, 9 (4).

Hage J., Meeus M. (2006). Innovation, Science, and Institutional Change, A Research Handbook. Oxford

University Press.

Hamel A. (2006). Conception coopérative et participative de simulation multiagents: Application à la filière avicole. PhD Thesis, Université Paris Dauphine, March.

Koestler, A. (1969). The Ghost in the Machine. Arkana Books, London.

Kolski C., Bernonville S. (2006). Integration of software engineering models in the human factors

engineering cycle for clinical applications: using UML modelling and Petri Nets to support the re-engineering

of a medication computerized physician order entry system. First Common IMIA & EFMI Workshop

"Usability and Human Factors Engineering for Healthcare Information Technology Application" (22-24 May),


Mandiau R., Grislin-Le Strugeon E., Péninou A. (Ed.) (2002). Organisation et applications des SMA.

Hermès, Paris.

Marik V., Brennan R.W., Pechoucek M. (2005). Multi-agent Systems for Manufacturing: Second International

Conference on Industrial Applications of Holonic And Multi-agent Systems, Holomas 2005, Copenhagen,

Denmark Proceedings. Springer-Verlag New York Inc.

Mathieu S. (2002). Comprendre les normes ISO 9000 version 2000. AFNOR, Saint-Denis.

MEDEF (2006). L’intelligence économique, guide pratique pour les PME. Livre blanc, Cercle d’Intelligence

Economique du MEDEF, Paris. Available at:



Managing collaboration for improving design co-ordination


C.Merlo 1,2,* , J.Legardeur 1,2 , G.Pol 1,3 , G.Jared 3

1 ESTIA/LIPSI, Bidart, France

2 IMS – LAPS/GRAI, Bordeaux, France

3 SIMS/Cranfield University, Cranfield, UK

* Corresponding author: (33) 5 59 43 84 33

Abstract: This paper focuses on the co-ordination of engineering design from the collaboration point of view, in order to help managers control design processes. We focus on the organisation and process aspects to facilitate collaboration between designers and to foster the co-ordination of design projects by integrating the product, process and organisation points of view (PPO model). Indeed, the results of collaborative design activities directly depend on the relationships between design actors. Our aim is to allow project managers to enhance and control different types of collaborative processes, in order to prescribe emerging collaboration between different experts. We then propose a method for storing and analysing collaborative processes in situ, so that such processes can be formalised and re-used by project managers. This method is supported by a dedicated, implemented tool called CoCa. A case study in an SME designing and manufacturing innovative mechanical products has been carried out to evaluate the possible feedback to design co-ordination, especially by managing collaborative processes through the workflow technology of PLM systems.

Keywords: Design co-ordination, design process management, PLM systems, workflows, collaboration

I. Introduction

Many studies have tried to identify the best practices and strategies developed by enterprises (Balbontin et al. 2000) to improve the development of new products, taking into account environmental challenges, market and customer characteristics, the marketing process, product characteristics, the new product development process, organizational characteristics and corporate culture, learning practices, and performance. On the one hand, (Coates et al. 2000) suggest that task management, scheduling, planning and resource management are the most important issues when it comes to operational co-ordination. On the other hand, the performance of the collaboration between co-design partners (Martinez et al. 2001), (Giannini et al. 2002) and also with suppliers offers the possibility of gaining fast access to specialist knowledge and capabilities, of spreading and sharing costs and risks, and of better exploiting the expertise of the partners (Wognum et al. 2002).

The coordination and control of engineering design refer to a global approach to the development of new products. This implies the need to identify the different situations occurring during the design process, and adequate resources to satisfy the initial objectives. The progress control of the design process can be defined as the understanding of existing design situations (in the real world) in order to evaluate them and take decisions that will modify and improve the future process.








[Figure: a design activity with its inputs and outputs — working tools, knowledge, and product descriptions: manufacturing and usage instructions…]

Figure 1: Control of collaborative design model

The control problem here is a problem of decision-making to support designers in their activities (Girard and Doumeingts 2004), so that they can achieve an objective in a specific context (figure 1). From the operational point of view of the project manager, such aspects are difficult to take into account in the everyday life of a project. The main problem is that of proposing the best possible context (e.g. objectives, information, resources, tools, methods) to design actors, in order to foster collaboration and reach the project objectives.

This paper focuses on the analysis of collaborative practices in order to prescribe improved collaborative processes. In section 2 the global methodology of design co-ordination for project managers is introduced, before section 3 focuses on the generic approach to analysing collaborative practices. Section 4 then describes the case study and shows how collaborative practices can be used to improve design process characterisation. An example of such a process is given and its management through a PLM system is detailed.

II. Design coordination methodology

In this section we introduce the design coordination method developed during the IPPOP project, an RNTL project funded by the French Ministry of Economy, Finance and Industry between December 2001 and June 2005. This project provides an integrated data model relating Product, design Process and Organisation (Roucoules et al. 2006). The main goals of the IPPOP project were:

− To propose a generic model to embed Product-Process-Organisation (PPO) information.

− To identify conceptual links among the three PPO domains. Those links are indeed required to track knowledge (who, what, when and why) related to the whole design process.



[Figure: strategic objectives driving a breakdown of tasks that exchange product data.]

Figure 2. Generic approach for design co-ordination.

Figure 2 presents the resulting methodology, which allows the use of PPO concepts as well as the coordination and control of design projects:

− Product concepts (represented as ‘Product Data’) have been defined: design experts can therefore create several product breakdowns according to multiple levels of detail, multiple experts’ points of view, and multiple states of the product (Noel et al. 2004).

− Design Process concepts represent ‘tasks’ that have to be achieved. Those tasks can be defined dynamically in order to go further than an ‘a priori’ definition of the design process (Nowak et al. 2004).

− Industrial Organisation concepts are based on the modelling of the enterprise design system as presented in (Girard and Doumeingts 2004), (Merlo and Girard 2004) and define a reference breakdown in terms of project, sub-project and tasks. Each level is characterised by a specific context (decision centre, project and design framework) composed of objectives (‘design framework’), resources and performance indicators.

The PPO integration is then done via adequate concepts. Product and process modelling are linked via the ‘product data’ considered as the inputs/outputs of each task, which handle part of the product breakdown. A project is broken down into sub-projects and tasks, which naturally links the process and organisation dimensions. Performance indicators based on PPO characteristics give another means of integration by ensuring the return of information from sub-levels to upper levels. They are defined using the three dimensions, depending on the design objectives to be reached. For example, a project manager may control the project schedule progress by defining a time indicator, or control the quality of the product definition by specifying an indicator based on the maturity of the product data.
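As an illustration of such a time indicator, the following sketch (our own simplification, not part of the IPPOP model) compares the tasks actually finished with the tasks that were due:

```python
def schedule_progress(tasks, today):
    """Time indicator: finished tasks over tasks already due;
    a value below 1.0 means the project is behind schedule."""
    done = sum(1 for t in tasks if t["end"] is not None)
    planned = sum(1 for t in tasks if t["due"] <= today)
    return done / planned if planned else 1.0

# Hypothetical project: days as integers, "end" is None while unfinished.
tasks = [{"due": 10, "end": 9},
         {"due": 20, "end": None},   # overdue
         {"due": 40, "end": None}]   # not yet due
print(schedule_progress(tasks, today=25))  # 1 finished of 2 planned -> 0.5
```

Such an indicator returns information from a sub-level (task completion) to the upper level (project control), which is exactly the integration role described above.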

For design co-ordination, project managers need to identify effective action levers which will influence collaboration, thus increasing design performance. Those levers concern the designers themselves, not just the product or the activities. Nevertheless, project managers generally find it difficult to know how to influence collaboration. To do so, we study a method for analysing existing collaboration and identifying “good practices”.

III. Analysing collaborative practices

In (Merlo et al. 2006) we proposed a model inspired by our literature review and by empirical studies of design situations with our industrial partner. Our goal is to go deeper into the understanding and characterisation of complex collaborative processes. It deals with the identification of the main elements relevant to the characterisation of collaborative situations in design. Collaborative situations can be defined from a co-ordination point of view, with scheduling, planning, formalisation, and the definition of milestones and activities. Alternatively, they can be defined from a human relationship point of view, with the persons who are involved in the collaborative event, their skills, their motivation, and their form of communication. In fact, both points of view must be taken into account when defining the collaborative factors used to categorise collaborative events, such as: do actors work in the same place? in synchronous or asynchronous mode? do they use predefined tasks? and so on.

The model of collaboration is built to characterise the collaborative situations which occur during design projects in small companies. This theoretical approach is based on the capture of information describing collaborative events occurring between designers. To support this model, we have implemented an analysis tool, CoCa. It does not help with co-ordination (decision making) but helps to understand the design activities and collaborative practices of the company. CoCa allows the capture of design project events from the point of view of collaboration and can be used to identify best practices, analyse encountered problems and improve managers’ practices.

For example, both the collaboration model and the CoCa tool integrate different kinds of parameters by capturing quantitative data, such as time, activity type or problems solved, as well as qualitative data, such as the quality of communication or the interests of the actors. The characterisation of a collaborative event (figure 3, right) includes the identification of the event through the “context of event” and “type subject” frames (date, actor, expectations of the event, outcomes or decisions taken), and the description of the collaboration through the “criteria” frame, which represents the quantitative elements. All these events are associated with the design context of the project (figure 3, left) in order to be able to understand and analyse the collaboration.
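The structure of such an event record can be sketched as follows; the field names are illustrative, not the actual CoCa schema:

```python
from dataclasses import dataclass, field

@dataclass
class CollaborativeEvent:
    """One event record: context, quantitative criteria and a qualitative
    evaluation (all field names are hypothetical)."""
    date: str
    actors: list
    subject: str
    duration_min: int        # quantitative criterion
    same_place: bool         # collaboration factor
    synchronous: bool        # collaboration factor
    outcome: str = ""
    quality: int = 0         # qualitative rating, e.g. 1..5
    linked_events: list = field(default_factory=list)

event = CollaborativeEvent(
    date="2006-03-14",
    actors=["marketing", "designer"],
    subject="customer need clarification",
    duration_min=45, same_place=True, synchronous=True,
    outcome="CND document drafted", quality=4)
print(event.subject, event.quality)
```

Storing events in this shape is what makes the later analyses possible: the quantitative fields can be aggregated, while the qualitative fields and links support the evaluation of the collaboration.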



Figure 3. Project context then Collaborative criteria of an event.

Figure 4 illustrates the characterisation of the qualitative elements of collaboration through the “links” frame, which establishes problems or relations between events, and through the “evaluation/analysis” frame, which evaluates the results of the collaboration.

Figure 4. Identification of event links then Analysis of event collaboration quality

These different categories of information characterise the collaborative events of a design project.

This information may be used by project managers to improve the way they co-ordinate design

processes and actors.



IV. Case study: from collaboration analyses to co-ordination improvement

Analysing a collaborative situation

An industrial case study has been carried out in an SME which, some years ago, developed new means of manufacturing structures using honeycomb sub-assemblies. The company has captured several markets with products manufactured using its technology, and consequently the number of employees grew from 4 to 40 over 10 years. Our method of experimentation was based on a socio-technical approach (Boujut and Tiger 2002). We focused our work on the study of collaboration and relationships between actors, and on design project co-ordination (Pol et al. 2005). This work is a way to experiment with and validate the CoCa tool for the analysis of collaborative practices.

After four months, four different projects had been analysed in depth and more than one hundred collaborative events had been stored. This information and the resulting analyses show that a great deal of information about the collaboration occurring during design projects can be captured. The following example illustrates the consequences of such analyses for project management: the introduction of flexibility and a detailed implementation of design processes.

The example is based on the CND (Customer’s Need Definition) process, which corresponds to the initial financial quotation phase of the design for the customer. This process allows the study of the relationships between the marketing department, which gathers the first customer information, and the technical department which, in turn, has to estimate the cost of manufacturing the product. This estimate is based on the information given by marketing. This activity is formalised by the project manager, as shown in figure 5, with three sequential tasks: characterise the CND document (marketing department), then validate it and evaluate the financial aspects (technical department).

Figure 5. The CND process.

However, the actors may use various forms of collaboration to achieve these tasks. The analysis of this initial collaborative situation across several projects shows that the CND process description incorporates neither details on how the tasks are to be achieved, nor flexibility. Moreover, the marketing person does not always have the necessary technical skills for all customers, and furthermore does not have enough time to carry out all the CND processes. As a result, problems of customer data management appear between the marketing and technical departments.

By analysing this CND process through several projects stored in the CoCa tool, several scenarios were observed which represent different forms of collaboration in carrying out this collaborative activity: actors can collaborate in a synchronous way or not, in the same place or not, with guidelines or with autonomy in achieving their work, with scheduled tasks or not, and so on. The range of alternatives seems to be correlated with the type of project, product and customer. The same objective can be achieved through several types of collaboration. Thus scheduling alone is not enough for the project manager to describe the conditions for the achievement of a design situation; he can also use several forms of collaboration to define the inter-actor exchanges.

ERIMA07’ Proceedings


Improving design process management

We observe that in this example of an innovative project and non-routine activity, the project

managers maintain flexibility by using “encouraged collaboration” in order to let actors be

reactive. The collaborative dimension must be studied to help project managers to define not only

scheduling but also prescribed interactions, methods and tools between actors, depending on

each design situation. In this way, the CND process is updated with an increased level of

granularity based on the guidelines from the collaboration analysis.

Consequently a new process is proposed based on the stored collaborative events: figure 6 details the previous task A11. The marketing person first evaluates the needs of the customer (task A111); then he can:

− directly reject the customer request, if the customer's needs are not appropriate for the company (not formalized),

− visit the customer, either alone (task A112) before sending the detailed needs to the designer (task A114), or with a designer (task A113),

− or directly send the needs to the designer if they are sufficiently detailed (task A114).

Figure 6. Detailed but flexible process for A11 task.

Afterwards, when the designer evaluates the design (A114), he can meet the customer alone (A115) or with the marketing person (A113), or directly characterise the CND document (A116). At each task, the marketing person or the designer has the possibility to end the process. As a result, the project manager has the possibility to automate the design process by implementing this process in a PLM system. The first node of flexibility is task A11, because the detailed sub-level may not be scheduled for a specific reason. The next nodes of flexibility are associated with tasks A111 and A114, as choices exist for the owner of the task.
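As an aside, the flexible sub-process of figure 6 can be sketched as a simple transition table. The task names follow the paper, but the encoding is purely illustrative and simplified (for instance, the option to end the process at every task is omitted):

```python
# Illustrative sketch (not from the paper): the flexible CND sub-process of
# figure 6 as a transition table, with one entry per task.

TRANSITIONS = {
    "A111_evaluate_needs": ["reject", "A112_visit_alone",
                            "A113_visit_with_designer", "A114_evaluate_design"],
    "A112_visit_alone": ["A114_evaluate_design"],
    "A113_visit_with_designer": ["A114_evaluate_design"],
    "A114_evaluate_design": ["A115_designer_visit", "A113_visit_with_designer",
                             "A116_characterise_CND", "end"],
    "A115_designer_visit": ["A116_characterise_CND"],
    "A116_characterise_CND": ["end"],
}

def is_flexibility_node(task):
    """A node is flexible when its owner can choose among several next tasks."""
    return len(TRANSITIONS.get(task, [])) > 1

# Recovers the paper's two flexibility nodes inside A11.
print([t for t in TRANSITIONS if is_flexibility_node(t)])
# → ['A111_evaluate_needs', 'A114_evaluate_design']
```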

As a consequence of the results obtained through the collaboration analysis, we are able to specify at least one sub-level more accurately: some macro-level tasks are decomposed into detailed task sequences through the identification of collaborative practices that are linked by flexible nodes. Having done so, we propose to manage this detailed process through a PLM system, using its workflow functionalities. The experimentation is based on the Windchill™ (PTC) PLM system. Currently, both the macro-level and the sub-process level have been implemented.

As an example, figure 7 illustrates the workflow defined for managing the CND phase as explained in section 3.2, associated with the CND document shared between the marketing and technical departments. This workflow corresponds to the previous process defined in figure 6, but some more information is added:

− 'State' tasks define the different state modifications of the CND document.

− All possible 'ends' of the process are defined, as well as the required 'notifications'.

− The 'Ad hoc' task corresponds to the possibility given to a user to dynamically create new required tasks. This allows introducing more flexibility into the design process.

− Most tasks have several outputs and represent nodes of flexibility that allow the actors to choose the following tasks of the design process.

Figure 7. CND phase workflow.

Such an experiment demonstrates that it is possible to manage more detailed processes that are both predefined and flexible. Nevertheless, the technical aspects of its implementation depend strongly on the openness of the PLM system used and its possibilities of customization: can document-independent workflows be managed within this PLM system? Can independent workflows be synchronized through their tasks? When these requirements are validated, the co-ordination of design projects is possible using this framework. Nevertheless some considerations still remain. The main one concerns the acceptability of such automated management in SMEs: our industrial partner has a size that requires more formalisation while maintaining a high level of flexibility, but implementing such an IT system can generate other types of difficulties.

In conclusion, when a problem of collaboration between actors appears in a design event, the project manager is interested in analysing this event in order to understand what went wrong and what could be improved. This orients the decision to improve or reject a collaborative practice that has occurred during the project. Combining different pieces of information can lead to a detailed analysis of problems or good practices, in order to define guidelines for project managers. Such guidelines improve design co-ordination by helping project managers in their decision making.



V. Conclusion

Our main objective is to help project managers in SMEs when co-ordinating design, by taking into account the collaboration between actors and the ways to influence it to improve the design process. Collaboration inputs obtained through the use of the CoCa tool allow the understanding of factors influencing collaboration and the proposal of improvements on various aspects of design co-ordination. In particular, the analysis of several collaborative events from different projects leads to the identification of "good practices" that are translated by project managers into more detailed and flexible design processes. We have proposed to implement such processes by using the workflow functionalities of PLM systems. Nevertheless, PLM systems must be adapted to the SME context. Moreover, more experimentation with the CoCa tool must be carried out to identify other types of improvements to design co-ordination by project managers.


Balbontin A., Yazdani B.B., Cooper R., Souder W.E. (2000) New product development practices in American

and British firms. Technovation. Vol. 20, pp. 257-274.

Boujut J.F., Tiger H. (2002) A socio-technical research method for analyzing and instrumenting the design

activity. Journal of Design Research. Vol. 2, Issue 2.

Coates G., Whitfield R.I., Duffy A.H.B., Hills B. (2000) Co-ordination approaches and systems. Part II. An

operational perspective. Research in Engineering Design, Vol. 12, pp. 73–89.

Giannini F., Monti M., Biondi D., Bonfatti F., Moanari P.D. (2002) A modelling tool for the management of

product data in a co-design environment. Computer Aided Design, Vol. 34, pp. 1063-1073.

Girard Ph., Doumeingts G. (2004) Modelling of the engineering design system to improve performance.

Computers & Industrial Engineering, Vol 46/1, pp. 43-67.

Martinez M.T., Fouletier P., Park K.H., Favrel J. (2001) Virtual enterprise – organisation, evolution and

control. Int. J. Production Economics, Vol. 74, pp. 225-238.

Merlo C., Pol G., Legardeur J., Jared G. (2006) A tool for analysing collaborative practices in project design.

Proceedings of the 12th IFAC Symposium on Information Control Problems in Manufacturing INCOM’2006,

Saint-Etienne, France, pp. 709-714.

Merlo C., Girard P. (2004) Information System Modelling for Engineering Design Co-ordination. Computers

in Industry. Vol. 55, No. 3, pp. 317-334.

Noël F., Roucoules L., Teissandier D. (2004) Specification of product modelling concepts dedicated to

information sharing in a collaborative design context. Proceedings of the 5th International Conference on

Integrated Design and Manufacturing in Mechanical Engineering, IDMME 2004, Bath, UK.

Nowak P., Rose B., Saint-Marc L., Callot M., Eynard B., Gzara-Yesilbas L., Lombard M. (2004) Towards a

design process model enabling the integration of product, process and organization. Proceedings of the 5th

International Conference on Integrated Design and Manufacturing in Mechanical Engineering - IDMME’2004,

Bath, UK

Pol G., Jared G., Merlo C., Legardeur J. (2005) Prerequisites for the implementation of a product data and

process management tool in SME. Proceedings of the 15th International Conference on Engineering Design,

ICED05, Melbourne, Australia.

Roucoules L., Noël F., Teissandier D., Lombard M., Débarbouillé G., Girard Ph., Merlo C., Eynard B. (2006)

IPPOP: an opensource collaborative design platform to link Product, design Process and industrial

organisation information. Proceedings of the International conference on Integrated Design and

Manufacturing and Engineering Design IDMME’06, Grenoble, France.

Wognum N., Fischer O., Weenink S. (2002) Balanced relationships: management of client-supplier

relationships in product development.



Using Organizational Intangible Assets for Better Levels of Operational Efficiency



C.M. Dias Jr. 1,* , O. Possamai 2 and R. Gonçalves 3

1 Universidade Federal de Santa Catarina, Florianópolis, Brazil and

Universidade Nova de Lisboa, Monte Caparica, Portugal

2 Universidade Federal de Santa Catarina, Florianópolis, Brazil

3 Universidade Nova de Lisboa, Monte Caparica, Portugal

*Corresponding author: +351212948365


Abstract: This paper presents how to use the so-called intangible assets as a competitive advantage to achieve a better level of efficiency within an organisation. We consider that effectively identifying the contribution of the intangible assets can help estimate the real level of operational efficiency, which in turn is used to rationalise the performance of the activities that transform goods (products) and services. Additionally, we propose a classification of intangible assets and identify both attributes and variables that can be used as a basis to manage such assets. Moreover, the paper presents alternative ways to use intangible assets to define better levels of operational efficiency in manufacturing activities.

Keywords: intangible assets, efficiency and organizations.

I. Introduction

In order to structure the set of activities necessary for pursuing a specific aim, every organization

must define what its core competencies are. For Cavalcanti (2001), these competencies would be

defined from the set of abilities and technologies developed that allow the organization to offer

benefits to its clients.

Angeloni (2002) proposes a model for research and organization development that strives to mobilise knowledge as an essential production factor, treating it as a repertory of individual and group knowledge. The repertory is seen as a valuable asset for understanding and overcoming environmental contingencies, and is described as having three interactive and interdependent dimensions: the infrastructure dimension, the people dimension and the technology dimension.

Angeloni's infrastructure dimension would be equal to Cavalcanti's (2001), who defines it as structural capital (knowledge flows that are systematized: systems, methods, culture and values), this being the only asset effectively owned by the organization; the ownership of this asset is viewed as a decisive factor in the most efficient management of the production activity. What Lev (2002) calls organizational infrastructure would be the (intangible) asset that counts the most and is least known: "the motor that creates the largest value out of all the assets".

Individual and collective knowledge can be augmented by intangible assets, the definition of which is neither concise nor uniform. In order to define the effect of internal intangible assets, we shall initially use the taxonomy advanced by Dias Jr. (2003), which defines internal intangibles as organizational resources that the company utilizes. Their correct application generates results in the form of products (tangible and/or intangible) derived from a specific organizational structure (internal concepts targeting increased value), applied to the production of goods and services that aim to generate perceived benefits.


The identification of intangible assets for the growth of organizational performance is born out of the need to provide supply differentiation. This supply embodies the perspective of superior value attributed to products and services, derived necessarily from the organizational capacity to address distinct market demands. Thus, it becomes necessary to analyse the way internal intangible assets energize organizational performance, with a focus on the efficiency of production operations, thus demonstrating the relevance of these assets in maintaining better levels of economic performance of organizational activities. At the same time, research suggests a need for management methods adjusted to the production of goods (tangible and/or intangible), with the goal of providing constant revision of the means to stimulate operational efficiency by identifying potential intangible assets.

II. Intangible assets as criteria for organizational performance

During the development of a business proposal, the valuation of the assets to be used in order to reach the goal of economically profitable production obeys a logic of "subjective" rationality, according to criteria defined by the owner of the assets.

According to Martins (1972), a product can have different economic values depending upon its owner's perspective of income, as determined by a structure of calculations pertaining to parallel situations. However, the sum of the individual values of the assets used for pursuing the enterprise mission hardly represents the total value of the organization. Thus, the failure to determine a total value of the organizational assets leads to the appearance of goodwill, which for Reis (2002) represents an obstacle to managers' information and is called "a repository of unexplained values". It is therefore necessary to demonstrate how intangible assets can work as elements that let operational efficiency emerge.

III. Organizational intangible assets and operational efficiency

In their methodology for managing production system restrictions, Antunes Júnior and Lippel (1998) propose the adoption of the IGOP (Index of Global Operational Profit) as an instrument for measuring efficiency in manufacturing work stations. This index (initiated by Nakajima, 1998) is obtained by multiplying three production indexes (Geremia, 2001): the availability index (operational time), the performance index (operational performance) and the quality index (approved products), described in equations 1, 2 and 3 respectively:

Availability index = (load time – losses due to breakdowns and setup) / load time (1)

Performance index = quantity produced / (work time . (capacity/time working)) (2)

Quality index = (quantity produced – quantity recycled) / quantity produced (3)

The IGOP calculation is described in equation 4.

IGOP = availability index × performance index × quality index (4)

The IGOP can also be calculated by the rate between the sum of production time of a certain

asset (product, piece) multiplied by its quantity, divided by the total time available for

transformation, as described in equation 5.

IGOP = ( Σ tpi . qi ) / T (5)

tpi = time of piece/product i;
qi = quantity produced of piece/product i;
T = total time.

For the adoption of IGOP as an instrument for measuring operational efficiency, it is assumed that the action of the manufacturing unit (production, maintenance, logistics), and of everything involved with quality, processes and improvement groups, among other functions, is integrated (Antunes Júnior and Lippel, 1998).
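As an illustration, equations 1 to 5 can be computed directly. The following sketch and its numbers are ours, not from the paper; only the formulas come from equations (1) to (5):

```python
# Hypothetical figures; only the formulas come from equations (1)-(5).

def igop_from_indexes(load_time, losses, produced, work_time,
                      capacity_per_time, recycled):
    """IGOP as the product of the three indexes (equations 1-4)."""
    availability = (load_time - losses) / load_time            # equation (1)
    performance = produced / (work_time * capacity_per_time)   # equation (2)
    quality = (produced - recycled) / produced                 # equation (3)
    return availability * performance * quality                # equation (4)

def igop_from_times(pieces, total_time):
    """IGOP as sum(tp_i * q_i) / T (equation 5); pieces = [(tp_i, q_i), ...]."""
    return sum(tp * q for tp, q in pieces) / total_time

# 480 min of load time, 80 min lost to breakdowns and setup, 350 pieces
# produced in 400 min at a capacity of 1 piece/min, 10 pieces recycled.
print(round(igop_from_indexes(480, 80, 350, 400, 1.0, 10), 3))  # → 0.708
```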

In an attempt to measure and evaluate the individual performances of different organizational units, Pandolfi (2005) defends a concept of efficiency assessed through DEA (Data Envelopment Analysis), adapted and represented in equation 6.

Efficiency = tangible outputs / tangible inputs (6)

Pandolfi (2005) suggests that DEA is a way to measure the relative efficiency of a production system compared to other similar systems: those that produce quantities, even if different, of determined products and services from variable quantities of similar kinds of raw material. The maximum production that can be obtained from a system is therefore less than or equal to its input; the most efficient system presents no losses, thereby reaching the maximum efficiency level of 100% (maximum efficiency = 1).

The efficiency measurement of each unit (organization, department and/or section) can be defined as the weighted sum of the outputs divided by the weighted sum of the inputs of each of the n units to be evaluated (see equation 7).



h j0 = ( Σ r ur . Yrj0 ) / ( Σ i vi . Xij0 ) (7)

hj0 = efficiency of unit j0;
ur = value attributed to the output of product or service r;
Yrj0 = quantity of product or service r in unit j0;
vi = cost attributed to resource i;
Xij0 = quantity of resource i consumed in unit j0.
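A minimal sketch of equation 7 follows; the weights are assumed by us for illustration (in full DEA the weights are chosen by optimisation for each unit, which is beyond this fragment):

```python
# Illustrative sketch (not from the paper): the efficiency ratio of
# equation (7) for one unit j0, given assumed weight vectors u and v.

def dea_efficiency(u, y, v, x):
    """Weighted sum of outputs over weighted sum of inputs (equation 7)."""
    numerator = sum(ur * yr for ur, yr in zip(u, y))    # Σ ur.Yrj0
    denominator = sum(vi * xi for vi, xi in zip(v, x))  # Σ vi.Xij0
    return numerator / denominator

# Hypothetical unit j0 with two outputs and two inputs.
print(dea_efficiency(u=[3, 5], y=[10, 4], v=[2, 4], x=[15, 5]))  # → 1.0
```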






Thus, the first question focuses on the need for a comparable measure of efficiency, attributing an adequate set of weights to the coefficients of costs and values for the resources used. This leads to the issue of how to obtain such a set of weights in order to accurately assess the performance of the units.

However, it is difficult to attribute weights without knowing the production function of the system as a whole, considering its operational characteristics (inputs and outputs) and the environment to which they belong (Pandolfi, 2005).

It is proposed that, to calculate the index of operational efficiency, one should consider more than the value of the tangible assets as raw material of production. Intangible assets are also directly responsible for the creation of organizational value in the operational environment. The inputs of a production system can be represented as in equation 8.

Tangible inputs = f(tangible assets + intangible assets) (8)

Tangible output is a result of the combination of tangible and intangible assets as described in

equation 9.

Tangible outputs = f(tangible assets; intangible assets) (9)

The resulting concept of operational efficiency is given in equation 10.

Efficiency = tangible outputs / inputs (tangible assets + intangible assets) (10)

Considering that the outputs are represented by tangible elements or an inseparable combination

of tangible and intangible elements, it is possible to calculate the levels of operational efficiency of

the production units from the allocation of resources in the intangible assets, crucial to the

manufacture of products considered strategic to business.
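With assumed numbers (ours, not the paper's), equation 10 can be contrasted with a tangible-only ratio to show how counting intangible inputs changes the measured efficiency:

```python
# Illustrative sketch of equation (10); all figures are hypothetical.

def operational_efficiency(tangible_outputs, tangible_inputs,
                           intangible_inputs=0.0):
    """Outputs over the sum of tangible and intangible inputs (equation 10)."""
    return tangible_outputs / (tangible_inputs + intangible_inputs)

print(operational_efficiency(90, 100))      # → 0.9 (intangibles ignored)
print(operational_efficiency(90, 100, 20))  # → 0.75 (intangible inputs counted)
```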

The academic world and practitioners have tried to improve the quantity and quality of the contribution of the elements that bring value to organizations. Yet, the need to determine "how" the intangible assets would be assessed is recognized.

IV. Determination of Indicators for Intangible Assets Management

A fundamental point to consider is the determination of indicators that represent more than the financial results of the production activity, such as receipts, costs, earnings per share and investment income.

According to Nunes and Haigh (2003), most performance indicators were developed at the beginning of organized production activity, together with the appearance of joint stock companies, when tangible capital represented 100% of the value of a company and was crucial to the development of the measures that could guide the performance of companies.

For many decades the financial indicators constituted a valuable and unique instrument, monitoring the performance of machines and equipment used in the production and commercialization of products, and even of some services, consonant with the financial income obtained from those activities.

With the exponential growth of the participation of intangible assets in the conception, fabrication, commercialization and even final consumption of products and services, new abilities, knowledge and know-how constitute elements that cannot be tracked by financial indicators (Nunes and Haigh, 2003). For França (2004), the evaluation of intangible asset indicators becomes relevant if it makes possible a continuous process of self-knowledge of the organizational values. Attention should be turned to understanding the aspects that generate value but are not recognized by the traditional systems of measurement, in order to discover non-formalized processes in the organization.

It is worth noting that, even with the proposed mapping of the processes responsible for value creation and the establishment of performance indicators for these same processes, the elevated level of complexity of the activities developed inside organizations requires external support to determine and redefine the aim of the organizational action.

Most methods for establishing indicators, based on the logic of scorecards, depend on the strategic choice most accepted inside the organization. Cataloguing the indicators for intangible assets used in companies with different market focuses, or even with distinct production activities, can make it a less painful task to establish indicators that reflect a recognition of which intangible assets can bring an effective contribution to the organization's performance (Almeida, 2003).

However, not all organizations have a sufficient and appropriate number of indicators that can be used to orient actions related to the determination of internal potential intangible assets. Thus, there are alternative ways to meet this need. There is a need to limit the number of organizational indicators, to remove aspects that are irrelevant in the determination of the intangible assets conceived from an internal vision. Almeida (2003) defends an approach based on organizations oriented precisely to the production of knowledge that are in the initial stages of activity development. He proposes the possibility of intangible asset identification using two parameters (see Figure 1):

• probability of manifestation (PMA): the probability of an organization holding intangible assets of a determined kind and, if it does, the probability of their being identified;

• measurement level (GM): for the kinds of intangible assets that manifest themselves in the organization (existence and identification), the ease of measuring them according to generic scorecard rules.

Parameters and their levels:
Probability of Manifestation: Elevated, Average, Low
Measurement Level: Elevated, Average, Low

Figure 1. Simplification for the Adapting Rating of Evaluation Methods of the Intangible Assets.

(Almeida, 2003)

Although it looks very simple at first sight, the identification of intangible assets proposed by Almeida (2003) appears as a possibility for the determination of internal intangible assets, which can be identified according to who chooses them; after this analysis, one can proceed to the choice of the indicators that look best adjusted to the intangible assets profile. Goodwill consists precisely of the difference between the total value of the company and the valuation of its tangible net assets and individual intangibles: whatever value cannot be allocated is considered goodwill. However, the more assets are identified, the smaller the residue of goodwill will be (see Figure 2), tending to disappear when every kind of tangible and intangible asset is identified (Congrès International de Coûts, 2001).


Figure 2. Identification of Potential Intangible Assets. (Williams, Stanga and Holder, 1989)

From the analysis proposed by Williams, Stanga and Holder (1989), the undetermined "intangible repository" named goodwill could be classified separately from the intangible assets that represent a more significant value source to the organization.

Yet, it is observed that subsequent expenses on the intangible assets can also be recognized, on the condition that they are able to generate future economic benefits and that the expenses are measured and attributed to the respective intangible asset in a reliable way (see Figure 3).



[Flow chart: potential intangible assets are capitalized as goodwill or as a specific asset, or considered as an expense referring to the period, depending on the origin of the cost and on whether the asset has a life cycle.]

Traditional Intangible Assets / Treated as expenses:
Brands / Advertising and promotions
Copyrights / Advances to authors
Commercial franchises / Software development costs
Sport franchises / Costs of debt securities emissions
Software / Judicial costs
Goodwill / Marketing research
Licenses / Organization costs
Figure 3. Demonstration of the Intangible Assets Treated as an Expense. (Congrès International de

Coûts, 2001)







Having the possibility of a categorization of intangible assets, the organization can direct its efforts to achieving its internal goals, supposing that these same intangible assets represent a contribution only from the moment that they bring improvement to the levels of internal efficiency of the production operations.

Below we describe the implications of considering the internal intangible assets in the calculation of the efficiency levels in the manufacturing unit.

V. Identification of Internal Intangible Assets and Calculation of the Operational

Efficiency Levels

From the demonstration of the organizational intangible assets' relationship with the formation of the production process, a novel management model was designed that is capable of identifying the ownership of internal intangible assets and of demonstrating their contribution to the growth of operational efficiency in the manufacture of goods and services. For this, the concepts of Iudícibus (1997) and the classification of Peña and Ruiz (2002) were used for the identification of intangibles internal to the organization, distinguishing these assets by the procedure described below. These internal intangible assets are:

• those considered to be owned;

• those that provide expected generation of future benefits;

• those that are for the organization's exclusive use.

It is necessary to exclude the intangible assets developed from partnerships, in order to effectively define the intangible assets pertaining to the organization. Figure 4, in the form of a flow chart, helps to distinguish the internal intangible assets (identifiable, separable and controllable) from the other intangible assets.

Figure 4. Flow Chart of Determination of Internal Intangible Assets (IIAs).

From the contributions of Pandolfi (2005) to the calculation of the efficiency of a production unit (see equation 7), and from the considerations contained in equation 10 referring to the participation of intangible assets in operational efficiency, the proposed formula for the calculation of the efficiency levels is applied to each product considered strategic to the business context, considering the integration of internal intangible assets in each section that constitutes the manufacturing unit.


VI. Conclusion

This article aimed at demonstrating the relationship of the organizational intangible assets with the production processes, which can be used to define which assets are internal to the organization, considering Iudícibus' (1997) definition: "those generated in the context of the organization and originated from research and development that can effectively represent future intellectual or industrial property rights".

For the determination of the level of contribution that the intangible assets can generate, one can take into account their probabilities of manifestation and their levels of relative importance estimated for the generation of value to the organization, via the approach given by Almeida (2003). The classification of the intangible assets owned by the organization can be given by the methodology of Peña and Ruiz (2002), supported by the considerations of Williams, Stanga and Holder (1989).

It was observed that the concept of efficiency taken from the considerations of Pandolfi (2005) can be adjusted to take the weight of the intangible assets into account in the calculation of the manufacturing operational efficiency levels. It is concluded that, for the improvement of the operational efficiency of the manufacturing activity, the participation of organizational internal intangible assets, as effective elements that energize organizational performance, should be considered.


Angeloni, M. T. Organizações do conhecimento – infra-estrutura, pessoas e tecnologias. São Paulo: Editora

Saraiva, 1ª edição, 2002, 214p.

Antunes Júnior, J. A. V. and Lippel, M. Uma abordagem metodológica para o gerenciamento das restrições

dos sistemas produtivos: a gestão sistêmica, unificada (integrada) e voltada a resultados do posto de

trabalho. Available at: .

Cavalcanti, M.; Gomes, E. and Pereira, A. Gestão de empresas na sociedade do conhecimento - um roteiro

para a ação. São Paulo: Editora Campus, 2001.

Congrès International de Coûts, Léon, France, 2001. Goodwill – De la rouque. Available at:


Dias Jr., C. M. Proposta de Detecção de Intangíveis do Consumidor como forma de Priorizar os

Investimentos em Ativos Intangíveis da Organização. Master's Thesis in Engenharia de Produção –

Programa de Pós-graduação em Engenharia de Produção e Sistemas, Universidade Federal de Santa

Catarina. Florianópolis, 2003.

França, R. B. Avaliação de indicadores de ativos intangíveis. Doctoral Dissertation of Programa de Pósgraduação

em Engenharia de Produção e Sistemas, Universidade Federal de Santa Catarina. Florianópolis,


Geremia, C. F. Desenvolvimento de um programa de gestão voltado à manutenção das máquinas e

equipamentos e ao melhoramento dos processos de manufatura fundamentado nos princípios básicos do

Total Productive Maintenance (TPM). Master's Thesis in Engenharia, Escola de Engenharia, Universidade Federal do Rio Grande do Sul, Porto Alegre, 2001.

Iudícibus, Sérgio de. Teoria da contabilidade. 5. ed. São Paulo: Atlas, 1997. 330 p.

Lev, B. Ativos intangíveis: O que vem agora? 2002. Available at:

Pandolfi, M. Sistemas de medição e avaliação de desempenho organizacional: contribuição para gestão de

metas globais de performances individuais. Doctoral Dissertation of Escola Politécnica de São Paulo –

Departamento de Engenharia de Produção, São Paulo, 2005.

Peña, D. N. and Ruiz, V. R.L. El capital intelectual: valoración y medición. Espanha: Financial Times-

Prentice Hall, 2002, 246p.

Reis, E. A. Valor da empresa e resultado econômico em ambientes de múltiplos ativos intangíveis: uma

abordagem de gestão econômica. Doctoral Dissertation of Universidade de São Paulo, Faculdade de

Economia, Administração e Contabilidade – Departamento de Contabilidade e Atuária. São Paulo, 2002.

Silva, C. E. S. Método para avaliação do desempenho do processo de desenvolvimento de produtos.

Doctoral Dissertation of Programa de Pós-graduação em Engenharia de Produção e Sistemas,

Universidade Federal de Santa Catarina. Florianópolis, 2001.

Williams, J. R; Stanga, K. G. and Holder, W. W. Intermediate accounting. Flórida Hartcourt Brace

Jovanovich Publishers, 1989.

ERIMA07’ Proceedings


The TRIZ-CBR synergy: a knowledge based innovation process

G. Cortes Robles 1,*, A. Machorro Rodríguez 1, S. Negny 2, J.M. Le Lann 2

1 DEPI-MIA Instituto Tecnológico de Orizaba, México

2 Laboratory GI-ENSIACET-INPT, Institut National Polytechnique de Toulouse, France

* Corresponding author: +33 5 62 88 56 56

Abstract: This paper presents a synergy between the Theory of Inventive Problem Solving (TRIZ) and the Case-Based Reasoning (CBR) approach to problem solving, to support creative engineering design. This synergy is based on the strong link between knowledge and action: TRIZ offers several concepts and tools to facilitate concept creation and problem solving, while the CBR process provides a framework for storing and reusing knowledge, with the aim of accelerating the innovation process.

Keywords: TRIZ, Case-Based Reasoning, Innovation, Contradiction matrix.

I. Introduction

According to Smith (Smith 2005), the outcome of innovation depends on only two factors: (1) the creativity and knowledge of talented employees and (2) the effectiveness of the methods and processes that support their work. This paper presents a synergy that aims to support both dimensions, creativity and knowledge. The first element of the synergy is the Case-Based Reasoning process, which is useful for storing and sharing knowledge. With regard to creativity, an approach capable of supporting idea generation for systematically solving problems is needed. Recently, an approach that conceives innovation as the result of systematic patterns in the evolution of systems has emerged in the industrial world: the TRIZ theory, the second element of the synergy.

Sections 2 and 3 briefly introduce the two main elements of the synergy, the Theory of Inventive Problem Solving and the Case-Based Reasoning approach; section 4 presents the synergy itself, and section 5 analyzes an example of its application.

II. TRIZ: The Theory of Inventive Problem Solving

This theory was proposed by the Russian scientist G. Altshuller in the 1940s; today it is a well accepted approach for solving problems. TRIZ has several advantages over traditional methods, particularly when it is applied in the early design stages. The main advantages are:

• TRIZ offers an important collection of knowledge extracted from several domains. This produces an environment where knowledge can be used in a transversal way; as a consequence, the application of TRIZ is not restricted to a single technical domain.

• TRIZ is a more balanced approach, combining in the same environment the psychological and technical points of view on creativity. This capacity lies in its structure, which combines four essential sources: (1) a statistical analysis of patents (more than three million) to derive general solving strategies, (2) a synthesis of the main advantages extracted from numerous problem-solving techniques, (3) an analysis of inventors' creative thinking patterns, aimed at producing a set of strategies to model and solve problems, and (4) a process of knowledge capitalization from the scientific literature. The analysis of these sources led to the creation of tools that make a tangible link between knowledge and action (Cavallucci, 1999).

• This process leads to the TRIZ cornerstones: (1) all engineering systems evolve according to well defined regularities; (2) the concepts of inventive problem and contradiction provide an effective way to solve problems, which also means that any problem can be stated as a contradiction; and (3) the innovative process can be systematically structured (Terninko et al., 1998).


Like any other approach, TRIZ has several limits. The most important ones in the present context are:

(1) TRIZ does not have a memory. Consequently, TRIZ cannot recall specific past solutions while solving problems, and this procedural knowledge is not available to other persons facing similar problems.

(2) TRIZ uses general knowledge and, according to Kolodner (Kolodner 1993), the application of this kind of knowledge in a particular situation can be extremely difficult. These limits call for a tool or methodology capable of storing and reusing knowledge, which are the central capacities of the Case-Based Reasoning process. The next section offers a succinct description of this AI tool.

III. The Case-Based Reasoning process (CBR)

In the CBR process, problems are solved by reusing earlier experiences. A target problem is compared with a set of specific solved problems encountered in the past (called cases) to establish whether one of these earlier experiences can provide a solution. If a similar case or set of cases exists, their associated solutions must be evaluated and adapted to find a new one. This approach has proved its utility in supporting design activities, equipment selection and knowledge management activities, among others (Avramenko et al. 2004).

CBR as a problem-solving methodology encompasses four essential activities: retrieve, reuse, revise and retain. The process starts with an input problem description, or target problem. This description is used to Retrieve a previously solved problem, or set of problems (cases), stored and indexed in the memory. If one or more stored cases match the target problem, the most similar case is selected in order to Reuse its solution. Subsequently, the derived solution must be Revised, i.e., tested and repaired if necessary in order to obtain a satisfactory result. Finally, the new experiences, which comprise failure or success, but also the strategies to repair and implement the final solutions (among other particular features), are Retained for further use and the case memory is updated. A CBR system has several advantages:

• Learning is a very important product of the CBR process, perhaps the most important. López (López et al., 1997) emphasizes: “Learning is in fact inherent to any case-based reasoner not only because it induces generalizations based on the detected similarities between cases but mostly because it accumulates and indexes cases in a case memory for later use”.

• According to (Leake, 1996), the retain phase, or memorization stage, of the CBR process is an excellent support for sharing and acquiring knowledge. This capacity is a consequence of the strong connection between reasoning and learning.

• The CBR process is based on analogical thinking, which is the human process most used for problem solving (Terninko et al., 1998). Consequently, the solutions available in a CBR system are easier for users to understand and apply than in a rule- or model-based approach (Limam et al. 2003). This brings benefits such as: larger volumes of information can be managed (more than in a rule-based approach), and the knowledge held as case memory can be maintained and updated automatically through use of the system. Another determining factor is the impact of time: users of this kind of computational tool become more competent over time (Leake, 1996).
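The retrieve-reuse-revise-retain cycle described above can be sketched in a few lines of Python. This is only a minimal illustration: the feature-set case representation and the Jaccard similarity measure are simplifying assumptions, not part of the CBR literature cited here.

```python
from dataclasses import dataclass, field

@dataclass
class Case:
    problem: frozenset   # features describing the solved problem
    solution: str

@dataclass
class CaseMemory:
    cases: list = field(default_factory=list)

    def retrieve(self, target: frozenset):
        """Return the stored case most similar to the target problem
        (here, a simple Jaccard overlap on feature sets)."""
        if not self.cases:
            return None
        return max(self.cases,
                   key=lambda c: len(c.problem & target) / len(c.problem | target))

    def retain(self, case: Case):
        """Store the new experience, updating the case memory."""
        self.cases.append(case)

def cbr_solve(memory: CaseMemory, target: frozenset, revise) -> str:
    case = memory.retrieve(target)                # Retrieve
    candidate = case.solution if case else None   # Reuse the most similar case
    solution = revise(candidate)                  # Revise: test and repair
    memory.retain(Case(target, solution))         # Retain the new experience
    return solution
```

Note how learning is built into the loop: every call to `cbr_solve` ends by retaining the new case, so the memory grows with use, which is exactly the property the advantages above emphasize.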

The CBR process also has limitations:

• The case memory stores cases from a single domain. This is in fact one of the most important advantages of CBR, but applying such a specific knowledge base to innovation projects can be an obstacle to creativity. This phenomenon occurs because single-domain solutions carry a well defined reflection vector that is difficult to surmount; this vector is called psychological inertia in TRIZ (Altshuller, 1999).

• As a consequence of the limit above, creative solutions available in other domains cannot be considered while solving problems. Yet the number of sources and domains used when solving problems has a positive impact on the solutions obtained (Sifonis et al., 2003).

Given the intrinsic limitations and advantages of both approaches, it is possible to state that TRIZ needs an element capable of storing and reusing knowledge, while the CBR process needs a structure that facilitates access to solutions obtained in other domains, as well as a general knowledge structure to index cases in the case memory. The next section briefly describes the combined TRIZ-CBR approach, which satisfies both requirements.

IV. The TRIZ-CBR synergy

While analyzing the world patent databases, Altshuller and his research team realized that identical problems had been solved in different domains. They also observed that even the most creative solutions described in a patent could be derived from some general principles. This observation led Altshuller to deploy a knowledge capitalization process to extract and synthesize those original strategies or methods for problem solving (Altshuller, 1999).

Altshuller thus proved that knowledge from patent databases could be extracted, transformed and arranged in such a way that its reuse was accessible to any person in any domain. TRIZ can therefore be considered the first innovation knowledge base (Zlotin et al., 1999), which offers an organization “the ability to strip away all barriers between different industry sectors” and access to “the best practices of the world’s best inventive minds” (Mann 2003). Nowadays, TRIZ users “continually demonstrate that applying common solutions for the resolution of contradictions, identified as effective when applied to parallel problems in the world patent base, radically improves the design of systems and products” (Terninko et al., 1998). This capacity of TRIZ has been exploited by the most important companies on today’s industrial horizon.

This reflection and knowledge capitalization process also laid the foundation of several TRIZ tools. Among those tools, one has a crucial role in the synergy: the contradiction matrix. The analysis of the patent database also revealed that an inventive problem (a problem that can be formulated as a contradiction) can be formalized with a reduced number of parameters. This observation led to the formalization of 39 Generic Parameters and 40 Inventive Principles (Altshuller, 1999). Both elements were organized in a 39×39 matrix named the Contradiction Matrix (Figure 1). This matrix has since been updated to a 48×48 matrix (Mann et al., 2003).




Figure 1. Fragment of the Contradiction Matrix

Hence, the contradiction matrix is useful for solving inventive problems. An inventive problem is defined as follows:

• It is a problem that contains at least one contradiction.

• A contradiction exists when any attempt to improve one useful system parameter or characteristic has an unacceptable impact on another useful parameter. This is called a technical contradiction.

• An inventive solution is one that totally or partially overcomes a contradiction.

This kind of problem is usually solved with trade-off solutions. The TRIZ philosophy is to solve contradictions under one premise: avoid compromise. Altshuller found that several methods for resolving contradictions were available and easily exploitable, and these strategies were arranged inside the contradiction matrix to accomplish this objective.

The contradiction matrix plays the role of memory in the TRIZ-CBR synergy because it can easily be adapted to different contexts and domains. The matrix condenses the statistical analysis of over 3 million patents. One of the most important conclusions of this work is the following statement: “if two problems share the same contradiction, then their nature is similar and, consequently, the solution associated with the first one can be applied to the second” (Altshuller 1999), (Mann 2003). This initial similarity between two problems can thus be exploited in the TRIZ-CBR synergy. In order to explain the problem solving process of the synergy, the application methodology of the contradiction matrix must first be briefly presented. The next section describes this logical sequence.

V. Deploying contradiction matrix

Its simplicity of application has made the contradiction matrix one of the most used TRIZ tools. The process involves the following five stages:

I. State the initial problem as a conflict between two characteristics or useful parameters of the system (sub-system or component) where the problem has been identified.

II. Correlate both parameters with two of the 48 generic parameters.

III. Use the contradiction matrix: in the first column, identify the parameter that needs to be improved and, in the first line, the parameter that is damaged. The intersection between line and column isolates the inventive principles successfully used to remove or minimize similar contradictions across domains.

IV. Analyze the proposed principles.

V. Derive an operational solution from those principles. If, during stage IV, none of the proposed principles offers a potential concept solution, it is recommended to re-formulate the initial contradiction or to explore the full set of principles.
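The lookup in stage III can be sketched as a simple table access. The matrix fragment and principle names below are a hypothetical one-cell excerpt for illustration only, not a transcription of the full 39×39 or 48×48 matrix:

```python
# Hypothetical fragment of the contradiction matrix, keyed by
# (improved parameter, worsened parameter) -> inventive principle numbers.
CONTRADICTION_MATRIX = {
    (33, 19): [1, 13, 24],  # Ease of operation vs. use of energy by moving object
}

# Names of the inventive principles referenced in the fragment above.
PRINCIPLES = {1: "Segmentation", 13: "Inverse", 24: "Intermediary"}

def suggest_principles(improved: int, worsened: int) -> list:
    """Stage III: intersect the row of the improved parameter with the
    column of the worsened parameter to obtain candidate principles."""
    return [PRINCIPLES[p]
            for p in CONTRADICTION_MATRIX.get((improved, worsened), [])]
```

For instance, `suggest_principles(33, 19)` returns the three candidate principles used in the example of the next section; stages IV and V, the analysis of the principles and the derivation of an operational solution, remain human tasks.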

VI. Example

Chromatographic separations are unit operation techniques for continuously separating a multi-component mixture. One of the possible technological starting points of this unit separation is the True Moving Bed (TMB), a simplified version of which is illustrated in figure 2. In the TMB separation technique, the component mixture is sent into a column where the liquid and solid phases flow in counter-current directions. The liquid outlet of zone 4 is recycled to the zone 1 inlet, and conversely for the solid: the zone 1 outlet is recycled to the inlet of zone 4. Moreover, this apparatus has one feed (the mixture to separate) and two outlets to withdraw products: the extract (rich in the more retained component, preferentially in the solid phase) and the raffinate (rich in the less retained component, preferentially in the liquid phase). The principal disadvantage of this technique is the flow of the solid phase, which is a complex task. Applying the five steps mentioned above:

Step 1: formulate the problem as a conflict. Reduce the solid phase flow without reducing the global efficiency of the separation process.

Step 2: correlate with two generic parameters. In this case, the contradiction can be formulated in the following way:

• Improved parameter: the flow of the solid phase implies a difficulty of use; consequently, parameter 33, “Convenience of use” or “Ease of operation”, is chosen.

• Damaged parameter: parameter 19, “Use of energy by moving object”.

Step 3: use the contradiction matrix1. The crossing of line 33 and column 19 of the matrix gives the following principles: 1 Segmentation, 13 Inverse, 24 Intermediary.

Step 4: analyze the proposed principles. The first principle specifies that the object or process can be fragmented into independent zones; the first idea is therefore to divide the system into independent zones. One of the sub-principles of principle 13 is “Make movable parts fixed and fixed parts movable”. Bearing in mind that the circulation of the solid must be reduced, the solid can be fixed. Consequently, if the solid becomes static, the inlets and outlets must be operated in a rotating way (“fixed parts movable”) in order to simulate the fluid flows. The combination of principles 1 and 13 gives the solution (SMB).

Step 5: derive an operational solution from those principles. As clearly explained by (Pais, 1998), the counter-current flow of fluid and solid is simulated: the adsorbent bed is divided into a number of fixed beds, and the inlet and outlet lines move simultaneously by one fixed bed at fixed time intervals in the direction of the liquid flow (figure 3). This is the Simulated Moving Bed technique (Cortes et al., 2006).

1 For convenience, the typical 39×39 matrix was used in this example. An electronic version of this matrix is available at



Figure 2. The True Moving Bed

The 40 inventive principles have been adapted to several technical and non-technical domains, among them education, industrial engineering, process engineering, microelectronics and architecture. The knowledge transfer capacity of the contradiction matrix is therefore essential for designing the case memory, because the CBR process needs an abstract generalization to store and index problems in the memory; consequently, an extremely flexible structure is desirable.

VII. The Ideal Final Result (IFR)

According to TRIZ, all systems evolve towards an increasing degree of Ideality. This concept is used in the synergy through the TRIZ tool based on it: the Ideal Final Result (IFR). This tool helps solvers explore the solution space and supports concept generation. The IFR is a solution that: (1) eliminates the deficiencies of the original system; (2) preserves the advantages of the original system; (3) does not make the system more complicated (it uses free or available resources); and (4) does not introduce new disadvantages. The IFR defines a perfect system, opening a solution space that is rarely considered in a problem solving process.

With those TRIZ key concepts defined, it is possible to present the process at the core of the synergy (figure 4):

Figure 4. The TRIZ-CBR synergy


Figure 3. The Simulated Moving Bed


In the process schematized in figure 4, the target problem is described and modeled as a contradiction. This contradiction, together with other elements derived from the problem description (available resources, objective, sub-systems, among others), is used to retrieve a similar case from the memory. This search may or may not return a similar case, which generates two different sub-processes:

I. A similar case is retrieved. Its associated solution is evaluated to decide whether this initial solution will be reused.

II. No similar case is stored in the memory. The system then proposes at least one inventive principle (and no more than six of the 40 that exist) that has been successfully used in the past to solve this specific contradiction in other domains. The inventive principles, which are in reality standard solutions or strategies for solving problems, must then be interpreted to propose a potential solution.

Subsequently, both sub-processes converge and the proposed solution is verified and repaired if necessary in order to obtain a satisfactory result. Finally, the new experiences, which comprise failure or success, as well as the strategies to repair and implement the final solutions, among other particular features, are retained for future reuse and the case memory is updated. The TRIZ-CBR synergy was tested on 100 cases (derived from patents) with excellent results (Cortes 2006).
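The two sub-processes above can be summarized as a short control loop. The following Python sketch is only an illustration of the figure-4 logic under simplified assumptions: the memory is a plain dictionary, the matrix fragment covers a single hypothetical cell, and the revision hook stands in for human verification; it is not the actual implementation evaluated in (Cortes 2006).

```python
# Hypothetical one-cell matrix fragment: contradiction -> principle numbers.
MATRIX = {(33, 19): [1, 13, 24]}

def triz_cbr_solve(memory: dict, contradiction: tuple, revise) -> str:
    """Figure-4 loop: reuse a stored solution if this contradiction was
    solved before (sub-process I), otherwise fall back on the inventive
    principles indexed by the contradiction (sub-process II)."""
    if contradiction in memory:                 # sub-process I: retrieve & reuse
        candidate = memory[contradiction]
    else:                                       # sub-process II: TRIZ fallback
        principles = MATRIX.get(contradiction, [])
        candidate = f"interpret principles {principles}"
    solution = revise(candidate)                # verify and repair if necessary
    memory[contradiction] = solution            # retain: update the case memory
    return solution
```

A first call with an empty memory follows sub-process II; a second call with the same contradiction reuses the retained solution, which is how the synergy "remembers how a solution was obtained".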

VIII. Example

This example shows how a problem was stated and solved. The problem is to maximize the available space in vehicles when transporting purified water in 19-litre containers (see the schema below). This problem generates numerous inconveniences: (1) containers cannot be stacked vertically (one upon the other), which reduces the batch size in a vehicle and also the delivery rate; (2) if they are placed vertically, one next to the other, it is difficult for operators to pick up the containers, which also represents a risk of injury; (3) it is necessary to adapt a structure to load and transport the containers, so enterprises spend money adapting their vehicles; (4) clients have reported that it is difficult to move the containers. Moreover, enterprises and clients have pointed out that the most frequent accident occurs when moving or transporting a container. As a result, a new container is needed, one that maximizes space, reduces the difficulty associated with transport and minimizes the risk of injury. In addition, an excellent level of transparency is a priority. The following schema shows how this problem was approached.





[Schema: two problem-solving paths for the container problem, both supported by the TRIZ-CBR knowledge base (solved contradictions, i.e., knowledge acquired while solving inventive problems, plus general and specific domain knowledge). Path 1: the technical contradiction volume vs. shape yields principles 7, 35, 2, 30 and 31; principle 7 (make one part pass through a cavity in the other) and principle 35 (change the degree of flexibility) lead to the proposed concept of a retractable water inlet, like an extending radio antenna, enabling piled containers. Path 2: creating a new system by searching for a similar system available in another domain leads to the proposed concept of a water bag. Concepts and knowledge are modified iteratively.]

This schema shows two different solutions. The first suggests transforming the container in such a way that one can be placed upon another, by retracting and extracting the container's water inlet; this solution partially solves the transport problem. The second proposes completely changing the current system: the IFR recommends describing the ideal system, which in this case is a container that has no physical dimensions (weight, volume, etc.) but still accomplishes its useful function. The most similar existing system is a water bag developed in the aerospace industry. This option is currently under analysis as a new way to distribute water. Clients initially showed a natural opposition to this project, but they now see it from a different perspective. The project also involves other industries, such as recycling, services and communication, among others.

IX. Conclusion

The TRIZ-CBR approach combines the TRIZ ability to propose creative solving strategies applicable across domains with the CBR memory, creating a framework that closely relates knowledge and action. Moreover, the process schematized in figure 4 applies knowledge in a very dynamic way: the problem being faced modifies the available knowledge, and knowledge impacts the design process.

The synergy has another key advantage: it can offer solutions even when a problem has never been faced before, and it can also remember how a solution was obtained. The contradiction-based memory makes it possible to anticipate where a solution has failed, or to increase the chances of success when a successful solution has been found. This capacity reduces the effort spent in problem solving activities, accelerating the innovation process.

As shown, the tools and concepts developed in TRIZ are valuable for knowledge creation. Ideality has the power to polarize individuals' mental models in the same direction, and the IFR-contradiction synergy has the capacity to guide creative effort towards developing solutions close to ideality. Furthermore, contradictions generate a creative chaos in which new concepts are created (Nonaka et al., 1995). All those stages are ultimately supported by a system capable of capturing, storing and making available the experiences produced while solving contradictions.

The main disadvantages associated with this model are the following.

Human factors: emotions like fear and insecurity (the conviction that innovation requires some innate qualities, or the fear of losing a position within the organization), among others, make knowledge sharing very hard.

The efficacy of the TRIZ-CBR memory is intrinsically related to its content; thus, populating an initially empty memory with cases is time consuming. The TRIZ-CBR synergy was verified with 100 patents (Cortes, 2006). Nevertheless, this kind of system becomes more efficient over time, and the contradiction memory can be maintained and updated automatically through use of the system.

The process of storing solved contradictions in the memory is generally performed a posteriori; therefore, users cannot remember all the stages developed while solving problems, leading to the loss of valuable information.


References

Altshuller, G. (1999) The Innovation Algorithm. Technical Innovation Center.

Avramenko, Y., Nyström, L. and Kraslawski, A. (2004) Selection of internals for reactive distillation column: case-based reasoning approach. Computers & Chemical Engineering, Volume 28, Issues 1-2, 15 January 2004, Pages 37-44.

Cavallucci, D. (1999) Contribution à la conception de nouveaux systèmes mécaniques par intégration méthodologique. Thèse doctorale, Université Strasbourg 1.

Cortes Robles, G., Negny, S. and Le Lann, J.M. (2006) Innovation and Knowledge Management: using the combined approach TRIZ-CBR in Process System Engineering. ESCAPE-16 + PSE 2006, 16th European Symposium on Computer Aided Process Engineering and 9th International Symposium on Process Systems Engineering, Garmisch-Partenkirchen, Germany.

Cortes Robles, G. (2006) Management de l'innovation technologique et des connaissances : synergie entre la théorie TRIZ et le Raisonnement à Partir de Cas. PhD Thesis, INPT-ENSIACET (in French).

Kolodner, J. (1993) Case-Based Reasoning. Morgan Kaufmann Publishers, Inc.

Leake, D. (1996) Case-Based Reasoning: Experiences, Lessons, and Future Directions. Chapter 1, pages 1-35. Menlo Park: AAAI Press/MIT Press.

Limam Mansar, S., Marir, F. and Reijers, H.A. (2003) Case-Based Reasoning as a Technique for Knowledge Management in Business Process Redesign. Academic Conferences Limited.

López de Mántaras, R. and Plaza, E. (1997) Case-Based Reasoning: An Overview. AI Communications, vol. 10, no. 1, p. 21-29.

Mann, D. (2003) Better technology forecasting using systematic innovation methods. Technological Forecasting & Social Change 70, p. 779-795. Elsevier Science Inc.

Mann, D., Dewulf, S., Zlotin, B. and Zusman, A. (2003) Matrix 2003: Updating the Contradiction Matrix. Library of

Nonaka, I. and Takeuchi, H. (1995) The Knowledge-Creating Company: How Japanese Companies Create the Dynamics of Innovation. Oxford University Press.

Pais, L.S., Loureiro, J.M. and Rodrigues, A.E. (1998) Modeling strategies for enantiomers separation by SMB chromatography. AIChE Journal, 44 (3), 561-569.

Sifonis, C., Chen, F. and Bommarito, D. (2003) Analogy as a Tool to Enhance Innovative Problem Solving. Proceedings of the 25th Annual Meeting of the Cognitive Science Society, Oakland University.

Smith, H. (2005) The Innovator is a Problem Solver. Computer Science Corporation, CSC World, June 2005.

Terninko, J., Zusman, A. and Zlotin, B. (1998) Systematic Innovation: An Introduction to TRIZ. St. Lucie Press.

Zlotin, B. and Zusman, A. (1999) Managing Innovation Knowledge: The Ideation Approach to the Search, Development, and Utilization of Innovation Knowledge. Ideation International.



Intellectual Property Management: a TRIZ-based approach to manage innovation within SMEs

D. Regazzoni 1, C. Rizzi* 1 and R. Nani 2

1 Dipartimento di Ingegneria Industriale, Università di Bergamo, Dalmine (BG), Italy

2 Scinte Consultant, Ranica (BG), Italy

* Corresponding author: +39.035.2052075

Abstract: In this paper, we present a methodology and a working paradigm, based on the TRIZ theory, specifically conceived for SMEs that are not able to face the problem of Intellectual Property Management (IPM) in an autonomous way. As a first step, we introduce the competences and possible company structures for managing and protecting Intellectual Property by means of patents and trademarks; we then describe the methodologies and tools that can be used for IPM, such as those derived from TRIZ. Finally, we describe a case study that refers to a typical example of know-how transfer from a leading technological sector to the large-scale production of consumer products, showing the use of the considered TRIZ tools.

Keywords: TRIZ, Patents, Intellectual Property Management, SMEs

I. Introduction

The capability to manage Intellectual Property is becoming essential for SMEs that actively try to face the competition from emerging countries and the Far East (China, India, etc.). Nevertheless, some aspects are often neglected, such as the importance of patents not only as a legal protection against the unauthorised copying of inventions, but also as a tool for the systematic innovation of both product and process; in fact, patents can represent the starting point for systematic innovation. In this paper, we first introduce three possible organisational paradigms for SMEs that aim at managing Intellectual Property internally and playing an active role. We then present a methodology and its related tools, based on the TRIZ theory, that can be applied by SMEs to face the problem of Intellectual Property in an autonomous way. The methodology, based on systematic innovation tools, addresses the following key aspects: valorisation and formalisation of the company's technical knowledge, monitoring and acceleration of the innovation process, management and defence of intellectual property through patents and trademarks, and increase of SMEs' autonomy in the management of IP. The last part of the paper shows a real application of TRIZ tools to perform a technology transfer study regarding wireless communication technology in the aircraft industry.

II. Organisational paradigms for IPM within SMEs

Traditionally, SMEs are reluctant to establish an internal department for Intellectual Property Management (IPM). However, creating a technical-legal department is not so arduous as far as operations and investments in infrastructure and human resources are concerned. In fact, the synergies such a department generates in the innovation area create the conditions for its self-maintenance. To describe possible organisational paradigms, we first introduce the concept of a 'Standard Structure' for an SME. A Standard Structure is characterised by an R&D department and a Patent Attorney whose role is to act as a legal representative for the company with regard to IP. Generally speaking, such a structure is not used to manage those IP issues related to the company's know-how and industrial secrets, i.e., its technological wealth that is not protected by patents. The R&D department develops products mainly using information/data available from the other departments of the company (marketing, procurement, sales, etc.), while the Patent Attorney writes new applications on the basis of information provided by the R&D department and follows the procedure for the granting of the patent. S/he is likely not aware of the company's know-how and industrial secrets; therefore s/he cannot lead the company toward a consolidation of its IP (Fig. 1).

With reference to this standard configuration, we envisage three organisational structures characterised by different levels of IP management: basic innovation management (minimal solution), intermediate management of IPR (intermediate solution) and active management of IPR (optimal solution).

Figure 1. Standard structure

The first level implies that the R&D department plays a more active role: it manages product development and performs state-of-the-art analyses using one or more patent search engines (e.g., Espacenet and USPTO), freeing itself from the traditional information channels internal to the company (Fig. 2). The interaction with the Patent Attorney becomes more active: patent analyses complete the technical information necessary to write new applications, and the R&D department makes an important effort to establish a communication channel between the Patent Attorney and the Examination Board. The Patent Attorney is still in charge of writing patent applications.

ERIMA07’ Proceedings


Figure 2. Basic Innovation Management

The second solution envisages the establishment of an IP department that manages and transfers the company's knowledge, creating communication channels between the R&D department and the Patent Attorney. At this level, IP management consists mainly of analysing and monitoring the state of the art. The Patent Attorney remains, and the IP department supports him/her technically in the case of technical analyses and patent litigation.

Figure 3. Intermediate management of IPR

The last structure implies an IPR department working in coordination with R&D; the Patent Attorney is no longer needed because his/her role and technical-legal tasks are completely assumed by the IP department (see Figure 4). The IP department directly manages the legal aspects of patents, including grant procedures, hearings and litigation, according to national and/or international patent conventions. The management of IP rights mainly concerns: monitoring the state of the art; promoting studies and consultancy services in the field of IP; and evaluating the patent portfolio in relation to the company's know-how and industrial secrets. The use of specific methodologies (e.g., TRIZ and GTI), tools for systematic innovation (e.g., CAI tools) and semantic-based search tools enables the synergy between the IP and R&D departments, both involved in product innovation processes. These tools permit the company to consolidate and enhance its product innovation processes and to drive the development of new inventions with respect to the state of the art. Thus, the IP department acquires an added value/benefit not available within the other company departments.

Figure 4. Structure with active management of IPR

III. TRIZ methodology and Patents Management

In the following, we introduce some tools that can be adopted by both the IP and R&D departments of SMEs to enhance IPM and innovation. As previously mentioned, the suggested methodology is based on tools which are an integral part of TRIZ theory (Altshuller, 1984) (Ikovenko, 2000) (The TRIZ Journal). TRIZ was developed by Genrich Altshuller (1926-1998) and his research staff from 1946 onwards. Their goal was to capture the creative process in the scientific and technological domain, codify it, and make it repeatable and applicable (Savransky, 2000). Altshuller started his work by screening patents (over 1,500,000 patents have now been analysed), looking for inventive problems and how they were solved. Among the several TRIZ tools, Functional Analysis, Technical Contradictions and Inventive Principles can be used to manage technical knowledge and patents.

Functional analysis and knowledge valorisation

The Functional Analysis provided by TRIZ combines subject-action-object (SAO) logic with value engineering thinking (Miles, 1972). It formalises technical knowledge through two types of model based on the functional decomposition of a system: the Function Tree Diagram and the Functional Model (FM). In particular, the FM takes advantage of a simple graphic language which permits the identification of every component of a system/product, its role and its functions (both useful and harmful). In the field of patent management, functional modelling is particularly useful. First, it allows a technical system to be modelled in a synthetic and objective way, whether one has to describe a new invention in a patent (Figure 5a) or to analyse a patent to understand how the invention works (Figure 5b).



Figure 5. Functional analysis and Patents
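To make the SAO idea concrete, the following is a minimal, hypothetical sketch (not from the paper) of how a functional model could be represented in code, with components, SAO triples and a useful/harmful flag. The example system fragment (an antenna in a wireless flight-data link) is invented for illustration:

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Function:
    subject: str          # component performing the action
    action: str           # verb describing the interaction
    obj: str              # component acted upon
    useful: bool = True   # useful vs. harmful function

@dataclass
class FunctionalModel:
    components: set = field(default_factory=set)
    functions: list = field(default_factory=list)

    def add(self, subject, action, obj, useful=True):
        # Register both endpoints as components and record the SAO triple
        self.components.update({subject, obj})
        self.functions.append(Function(subject, action, obj, useful))

    def harmful(self):
        # Harmful functions are a natural starting point for contradiction analysis
        return [f for f in self.functions if not f.useful]

fm = FunctionalModel()
fm.add("antenna", "transmits", "telemetry signal")
fm.add("vibration", "degrades", "antenna", useful=False)
print(fm.harmful())
```

A model built this way can be queried for its harmful functions, which is precisely the information a patent analyst would use to spot the weaknesses discussed below.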

For both flows represented in Figure 5, functional models represent, in different contexts, an impartial coding of the knowledge. This coding is usable for the analysis of patent extensions, patent breaking, and the representation of technical knowledge. The functional description of a technological apparatus, or of a company department, permits the building of a balanced linguistic structure, preserving the action-reaction principle between subject and object while eliminating the descriptive redundancies of everyday language. If a patent is being analysed and modelled, good modelling work highlights the strengths and weaknesses of the device described, creating a robust base for eventual patent breaking or circumvention. Bad models may lead to wrong evaluations of what is claimed in the patent, with potentially severe and expensive legal consequences. Functional analysis therefore constitutes a valid tool for sharing and spreading technical knowledge inside and outside the company.

For instance, the description in a user handbook or in a patent, when written according to functional analysis, has proved to allow a univocal translation into different languages. This can be of particular importance in litigation, as misunderstandings or wrong translations can be avoided.




Contradictions-Inventive Principles and Innovation Monitoring

The concept of "Contradiction" is one of the most important in the philosophy underlying TRIZ. According to TRIZ terminology, a contradiction occurs when improving one parameter or feature of a technical system negatively affects the same or another parameter or feature. TRIZ states that a solution which overcomes a contradiction is likely to be the most effective inventive solution. From a large body of patents, Altshuller and his collaborators extracted 40 inventive strategies (named Inventive Principles) to help an engineer find a highly inventive (potentially patentable) solution to a problem. On this basis, he developed a matrix (named the Contradiction Matrix or Altshuller Matrix) whose cells contain the principles which should be considered for any specific situation.
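As an illustration, a contradiction-matrix lookup can be sketched as a table mapping an (improving, worsening) parameter pair to principle numbers. The two parameter names below come from Altshuller's 39-parameter list, but the cell contents shown here are invented for illustration and do not reproduce the actual matrix:

```python
# Subset of the 40 Inventive Principles (names are standard TRIZ terminology)
INVENTIVE_PRINCIPLES = {
    1: "Segmentation",
    4: "Asymmetry",
    14: "Curvature",
    35: "Parameter changes",
}

# Matrix cell: (improving parameter, worsening parameter) -> principle numbers.
# Cell contents here are illustrative placeholders, not Altshuller's real cells.
CONTRADICTION_MATRIX = {
    ("weight of moving object", "strength"): [1, 14, 35],
    ("reliability", "device complexity"): [1, 4],
}

def suggest_principles(improving, worsening):
    """Return the principle names suggested for a given contradiction."""
    cell = CONTRADICTION_MATRIX.get((improving, worsening), [])
    return [INVENTIVE_PRINCIPLES[i] for i in cell]

print(suggest_principles("reliability", "device complexity"))
```

The same lookup structure works in reverse for IP monitoring: given the principles identified in a patent, one can ask which contradictions the invention is likely to have resolved.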

Within the framework of IP management, contradictions can be used to monitor innovation by classifying patents on the basis of the contradictions they face and/or the solutions they provide, thus concentrating inventive efforts on specific objectives. On the other hand, identifying the technical contradictions still present in a product (either proprietary or owned by a competitor) improves the understanding of where and how to innovate the product. Moreover, the early identification of contradictions speeds up the innovation process. Functional analysis provides valid support for this activity, i.e., the identification of contradictions.

Traditionally, Inventive Principles are used as a problem-solving tool. However, by analysing patents one can trace the inventive principles used to find a solution and outline a trend in the evolution of new ideas. Similarly to contradictions, Inventive Principles can be used for different purposes: to classify patents by their most relevant inventive principles instead of by branch (automotive, electronics, aeronautics, etc.), or to monitor the innovation process inside a company or a specific industrial sector by identifying the most used principles (Nani 2005, Nani 2006) (see Figure 6).

Figure 6. Inventive Principles and Patents
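The monitoring idea sketched in Figure 6 amounts to counting principle occurrences over a tagged patent set. In the following sketch the patent numbers are taken from Table 2 of this paper, but the tagging step itself (normally the result of manual or semantic analysis) is hard-coded for illustration:

```python
from collections import Counter

# patent id -> inventive principles identified in it
# (principle 1 = Segmentation, 4 = Asymmetry, 14 = Curvature; tagging per Table 2)
tagged_patents = {
    "US4147056": [1],
    "US4521060": [4],
    "US6929222": [4],
    "US7004428": [14],
}

# Count how often each principle recurs across the patent set
usage = Counter(p for principles in tagged_patents.values() for p in principles)
most_used = usage.most_common(1)[0]
print(most_used)  # -> (4, 2): Asymmetry is the most recurrent principle here
```

Run over a whole sector's patents, such counts reveal which inventive strategies dominate, which is the monitoring signal the paper describes.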

Thus, the classification of patents according to contradictions and/or inventive principles allows a company to better define its intellectual property strategy, also taking into account patent analysis and patent breaking, whether favourable or unfavourable to the company. Figure 7 shows the roles in which TRIZ tools can be adopted.

Figure 7. Application of TRIZ tools to perform IPR functions



IV. Application

The case study described in this section refers to a typical example of know-how transfer from a leading technological sector to the large-scale production of consumer products. Such a technology transfer process must take into account severe constraints concerning logistics, economics, maintenance, ease of use and reliability. By monitoring the patents referring to the chosen technology, it is possible to depict the state of the art of the leading sector and to highlight the connections with the target sector.

The adopted method is based on the evaluation of intellectual property features such as:

• The technical value of patents and patent applications of the leading technology;

• The relationships of the leading technology with large-scale, wide-range products, whose existence may not be clear to technology transfer office staff;

• The economic potential value of the product, assessed in order to optimise costs relating to R&D, production and management.

The present case study can be divided into the following steps:

1. Identification of the main patent class of the considered technology;

2. Identification of a set of potential target patent classes;

3. Classification and description of the referring technology on the basis of the TRIZ tools previously introduced.


Step 1 - Identification of the main patent class

The case study concerns flight control systems used to manage flight parameters such as aircraft position, direction and speed. In particular, the patent search focused on wireless technologies for the radio transmission of data developed in the aerospace and aeronautics industry. The parameters characterising the leading technology were identified by analysing technical data sheets provided by the main actors of the airplane industry and by companies working specifically on electronics and flight control systems. The resulting parameters were crossed with the wireless technology feature to perform a patent search. The result is a list of roughly 500 US, European and international patents describing the state of the art of the referring technology. According to the International Patent Classification (IPC), the most recurrent class to which the patents belong is:

(IPC) G01S – Radio Direction Finding; Radio Navigation; Determining Distance or Velocity by Use of Radio Waves → radio-supported navigation

Step 2 – Identification of a set of potential target patent classes

After defining the main IPC class characterising the state of the art of wireless technology in aerospace and aeronautics, the search effort was directed at determining potential target classes that are not directly or clearly connected to the main one. These classes may be far from the G01S class and are probably unknown to technicians expert in wireless communications. At first, the search involved only the wireless technology itself, i.e. the remote measurement and transmission of physical data, without any constraint. Querying the patent database in this way returns about 48,000 documents from the last decade meeting the search criterion. The most recurrent IPC classes are shown in Table 1.



Table 1. Most relevant IPC classes related to the wireless technology patent search
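The ranking performed in Step 2 can be sketched as a frequency count of IPC codes over the search results. The records below are invented placeholders; a real implementation would retrieve them from a patent search engine such as Espacenet:

```python
from collections import Counter

# Placeholder search results: each record carries the IPC codes assigned
# to one patent (ids and code assignments are illustrative, not real data)
search_results = [
    {"id": "patent-A", "ipc": ["G01S", "H04B"]},
    {"id": "patent-B", "ipc": ["G01S"]},
    {"id": "patent-C", "ipc": ["H04Q", "G08C"]},
]

# Count IPC occurrences across all results and rank them
class_counts = Counter(code for rec in search_results for code in rec["ipc"])
for ipc, n in class_counts.most_common():
    print(ipc, n)
```

With the full 48,000-document result set, the head of this ranking is exactly the list of candidate target classes reported in Table 1.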

Step 3 - Classification and description of the referring technology on the base of TRIZ tools

The technology transfer from aerospace to consumer electronic devices must take into account the specific constraints of the target industrial field. Ease of use and maintenance, transmission reliability and cost are the main general parameters to be considered in order to achieve a successful technology transfer.

The classification of potential target fields represents concrete decisional support when defining the strategy to adopt. At the same time, defining the most important parameters of the target field and ranking them in terms of IPC classes allows the technicians to start studying the features of the target field(s). This can be done by exploiting the tools provided by the TRIZ methodology.

Some of the most important patents, extracted from the first list of 48,000 by classifying them according to the TRIZ Inventive Principles, are shown in Table 2.

Inventive Principle No. 1 - Segmentation → Portable instruments

US20050228549A1  Method and apparatus for isolating aircraft equipment
US20040255572A1  Aircraft engine in which there is a small clearance separating the fan cowls and the thrust inverter cowls
US6328265  Slot forming segments and slot changing spoilers
US6069654  System and method for far-field determination of store position and attitude for separation and ballistics
EP0637541B1  Decomposable wing and manufacturing system of a connecting element for thin walled shaped piece, in particular for the segments of such decomposable wing
EP0637541A1  Decomposable wing and manufacturing system of a connecting element for thin walled shaped piece, in particular for the segments of such decomposable wing
US4147056  Multi-segment head-up display for aircraft
.....

Inventive Principle No. 4 - Asymmetry → Ability to measure and model anisotropic items

US6929222  Non-jamming, fail safe flight control system with non-symmetric load alleviation capability
US6796532  Surface plasma discharge for controlling forebody vortex asymmetry
US6314361  Optimization engine for flight assignment, scheduling and routing of aircraft in response to irregular operations
US6255964  Universal aircraft panel with a dynamically symmetrical series of displays for the directional and rate flight instruments
US5060889  Apparatus and methods for maintaining aircraft track angle during an asymmetric flight condition
US4521060  Hydraulic asymmetry detector
.....

Inventive Principle No. 14 - Curvature → Capability of discriminating single/multiple direction(s) waves by means of a single instrument

US7004428  Lift and twist control using trailing edge control surfaces on supersonic laminar flow wings
US20050242243A1  Process and device for the optimization of the deflection of the spoiler flaps of an aircraft in flight
US6736353  Grooved profile for diverting liquid
US20030009268A1  Wind turbulence prediction system
US6626024  Redundant altimeter system with self-generating dynamic correction
US6600991  Neighbouring optimal aircraft guidance in a general wind environment
US6571155  Assembly, computer program product and method for displaying navigation performance based flight path deviation information
US20010025900A1  System and method for wind-powered flight
US6161801  Method of reducing wind gust loads acting on an aircraft
US6044311  Method for protecting an aircraft against vertical gusts of wind and pitch-attitude control device employing this method
.....

Table 2. Example of patent classification according to Inventive Principles.

V. Conclusions

In this paper we have introduced possible organisational structures for SMEs that intend to face the problem of intellectual property in an autonomous way. To this end, we have presented some tools typical of the TRIZ methodology that can help deal with specific issues of IPM. We have mainly stressed the opportunity they offer to upgrade the quality of IPM by means of innovative methods and qualified personnel with a scientific-technical background. Both patent monitoring and functional analysis permit the building of a precious synergy and accelerate product innovation inside the company. Thus, the TRIZ methodology can represent a valid tool for interlacing IPM and new product demand with a systematic and rational approach.


References

Altshuller G.S. (1984) Creativity As an Exact Science. CRC, ISBN-10: 0677212305.



Ikovenko S. (2000) Knowledge-based Innovation – a Technology of the Future. In: From Knowledge Intensive CAD to Knowledge Intensive Engineering, Eds. U. Cugini & M. Wozny, Kluwer Academic Publishers, pp. 3-10.

Miles L.D. (1972) Techniques of Value Analysis and Engineering. McGraw-Hill, ISBN-13: 978-0070419261.

Nani R. (2005) Boolean Combination and TRIZ criteria. A practical application of a patent-commercial database. Proceedings of the TRIZ Future Conference 2005, Graz, Austria, November 16-18, 2005.

Nani R., Regazzoni D. (2006) Practice-based methodology for effectively modelling and documenting search, protection and innovation. Proceedings of the TRIZ Future Conference 2006, Kortrijk, Belgium, October 9-11, 2006.


Savransky S.D. (2000) Engineering of Creativity. Introduction to TRIZ Methodology of Inventive Problem

Solving, CRC Press, ISBN 0-8493-2255-3.



How Innovation in the Organisation of Management Systems in SMEs could

contribute to the Economic Growth of Developing Countries?


D.A. Coelho 1,* , J.C.O. Matias 1

1 Centre for Research in Engineering and Industrial Management, DEM,

University of Beira Interior, Covilhã, Portugal

* Corresponding author: +351.275.329.943

Abstract: In the past, the economic success of most countries depended on the performance of their largest companies. Nowadays, given globalisation and the wide implantation of multinational companies, the economic success of developing countries and the internationalisation of their economies depend on the performance of SMEs. However, these companies, in some countries mostly family-owned, usually have insufficient know-how about new forms of management. Recognising that management systems are nowadays considered market qualifiers, this paper presents two ways in which smaller companies can direct their efforts to production while implementing the various management systems. One way is outsourcing the implementation and management of the several management systems to companies specialising in that activity. The alternative is forming new companies with the same goal, but in a co-operative way, amongst several small or medium-sized companies operating in the same industrial sector or in close geographical proximity. The choice between these alternatives for an industrial SME depends on its sector of activity, its financial health, and the applicable market qualifiers and order winners. Examples of both forms of outsourcing are presented, and the selection criteria inherent to this decision process are discussed on the basis of the results of a survey of Portuguese industrial companies.

Keywords: Outsourcing, Entrepreneurship, Intrapreneurship, Innovation, Co-competition

I. Introduction

Innovation is not only one of the development phases of a product or technology (preceding the diffusion phase and succeeding the invention phase); the act of innovating is also concerned with the manner in which innovation is understood. Innovation should be looked upon from the broadest of perspectives. From such a stance, innovation includes the manner in which people, organisations, companies, entrepreneurs, and even society itself, create value by exploiting change. Change springs from a number of settings and events, including not only technological advances but also changes of a distinct nature and level of importance. Innovation is as much an individual as a collective process (Lam 2005). Thus, support mechanisms should be devised in order to improve the competitive placement of companies (OECD and Eurostat 2005).

A fair amount of the economic sustainability of many underdeveloped and developing countries has been closely tied to big companies. Lower labour rates attracted many multinational companies to establish labour-intensive production operations in underdeveloped and developing countries in the 1980s and 1990s, and still do today in many cases. Part of the economic success of such countries depends strongly on their ability to attract these kinds of production operations. In recent years there has been a change in this state of affairs, driven by the growing impact of globalisation and by actual success in attaining development in some of the countries into which delocalised operations moved. Countries that were successful in attracting foreign investment but have since seen a rise in living standards, such as Portugal, need to seek out and reinforce other mechanisms for sustaining economic growth. Increasing the performance of Small and Medium Enterprises, through their internationalisation, can have a big impact on the success of the country's economy. According to a European Commission (2004) report on SMEs in Europe in 2003, SMEs and entrepreneurship have emerged as the engine of economic and social development throughout the world.


The relative weight of Small and Medium Enterprises (SMEs) is very large across Europe, adding up to 99% of the total number of European companies, according to Eurostat. In Portugal, SMEs are very important in the country's entrepreneurial structure: 99.6% of the total number of Portuguese companies are SMEs, providing 75% of employment and representing 58% of economic transactions. Between 2000 and 2003, the number of Portuguese SMEs grew at an annual rate of 9%, with employment rising by 5.6% per year and sales volume by 4.3% per year. This contrasts with roughly unchanged sales volumes and employment figures for the bigger companies operating in the country as a whole. With an economy that is undergoing development, SMEs play a decisive role in the economic development of Portugal. However, these companies, in many cases owned and managed within a family, frequently have insufficient know-how about the new forms of management (Conway 2006). SMEs have adopted certification of their management systems, namely their quality management systems, for survival reasons rather than any other. Given increasingly demanding supply chains (many of which have an international dimension), SMEs will have to follow in the steps of competitors.

II. Enterprise culture and management systems

In recent decades, the implementation of management systems has been massive, and it has taken place through certification based on internationally accepted normative documents, such as those of the International Organization for Standardization (ISO). Highlighted are the Quality Management Systems – QMS – (ISO 9000), since 1987 (revised in 1994 and 2000), and, more recently, the Environmental Management Systems – EMS – (ISO 14000), since 1996 (revised once, in 2004). Although there is as yet no ISO standard in the area of Occupational Health and Safety (OHS), the OHSAS 18001 standard/specification (Occupational Health and Safety Management Systems – OHSMS), created in 1999 by an international group of organisations including the renowned British Standards Institution (BSI), is starting to show universal acceptance.

ISO 9001 - QMS

As regards the standards for implementation of a QMS, at the end of December 2005 the total number of certificates amounted to 776,608, pertaining to 161 countries/economies (see Table 1). Of this certificate total, more than 500,125 were ISO 9001:2000 certifications, distributed among 149 countries/economies. With respect to 2001, the first year that followed the revised edition of 2000, the 2003 figure entails an increase of 455,737, a value more than ten times larger than that of 2001, when the total number of certifications amounted to 44,388 in only 98 countries or economies. As regards its global diffusion, it can be seen that the largest economies (China, Italy, United Kingdom, Japan, USA, Germany, Australia, France and South Korea) are among the top ten for ISO 9001:2000 certifications. In recent years, interest in ISO 9001 certification has grown in all regions of the globe. In relative terms, however, the growing percentage quota of the Far East should be emphasised: it rose from 24% in 2001 to 34% in 2003. Comparatively, the quota of ISO 9001:2000 certification in Africa and the Middle East for the same period rose from 3% to 4%. In the remaining regions of the globe, the respective quotas have been either stable or diminishing; such is the case for Europe, which showed a drop from 52% to 47%. The numbers presented are not, however, entirely reliable, and do not include the implementation of non-certified quality systems.

ISO 14001 - EMS

As regards the ISO 14001 standards, in December 2005 there were as many as 111,162 certified organisations from 138 countries and economies, while in December 2003 there were only 64,996, from 113 countries/economies (see Table 2). The latter value entails an increase of roughly one third in comparison with 2002, when the total amounted to 49,440, and is more than seven times the 1998 value (7,887). As regards the top ten countries with ISO 14001 certifications, the group is composed of the same countries/economies which form the ISO 9001 top ten, with the exception of Sweden, which takes the place of Australia. Additionally, just as was observed for ISO 9000, in recent years interest in ISO 14001 has grown in every region of the world. In relative terms, however, there is apparently a growing quota in the Eastern region of the globe, as well as in the remaining regions, with the exception of Europe and Oceania, the former having dropped from 54% to 47%, reflecting higher certification growth rates in other regions.





Period     World Total (of which ISO 9001:2000)   World Growth (of which)   Countries/economies (of which)
Dec 2001   510616 (44388)                         101195 (-)                161 (97)
Dec 2002   561747 (167124)                        51131 (122736)            159 (133)
Dec 2003   567985 (497919)                        6238 (330795)             152 (149)
Dec 2004   660132                                 162213                    154
Dec 2005   776608                                 116476                    161

Table 1. Evolution of ISO 9001 certifications
















Period     World Total   World Growth   Countries/economies
Dec 1998   7887          3454           72
Dec 1999   14106         6219           84
Dec 2000   22897         8791           98
Dec 2001   36464         13567          112
Dec 2002   49440         12976          116
Dec 2003   64996         15556          113
Dec 2004   89937         24941          127
Dec 2005   111162        21225          138
Dec 2005, of which: 56593 certificates in 107 countries/economies

Table 2. Evolution of certifications based on the ISO 14001 standard

Data                                                                     2003    2004
Number of countries where Occupational Health and Safety
Management Systems certification took place                                70      82
Total number of certificates issued                                      8399   14019
Total number of OHSAS 18001 (or directly equivalent document)
certificates                                                             3898   11091

Table 3. Evolution of the certification of OHS Management Systems



Industrial companies have adhered universally to the ISO 9000 standards (QMS). Judging by the data just analysed, adherence to the ISO 14000 standards (EMS) should quickly follow a similar path, probably due to a similar motivation. Furthermore, the OHSAS 18001 standard/specification introduced above is also gaining ground: according to a survey carried out by the "OHSAS Project Group", based in the United Kingdom, concerning OHSAS 18001 certification world-wide, the number of OHSAS 18001 certifications almost tripled between 2003 and 2004, increasing from 3,898 to 11,091 (see Table 3). Moreover, many countries have already adhered to OHSMS certification.

In the larger companies, many of which are part of international groups, the culture of the various management systems is clearly rooted: on the one hand because they wish to position themselves at the forefront of management excellence, and on the other because of market competition. In the smaller companies, by contrast, in some countries mostly consisting of family companies, there is insufficient know-how concerning the new management approaches in areas such as quality, environment, occupational health and safety, energy, maintenance, or innovation. Since the implementation of management systems, namely in the quality area, now takes the form of a market qualifier (Hill 1993), industrial companies should act dynamically, adapting to the new challenges. Although resources may not be as scarce in larger companies, in the smaller companies the lack of resources limits their strategic options. In the vast majority, these SMEs do not possess enough resources, or competencies, to implement and support a management system, not even the most widespread one, the Quality Management System. According to data supplied by consultancy companies operating in Portugal, most SMEs using consulting services want to implement QMSs, EMSs and OHSMSs. Another fact is that the SMEs that seek the most consulting services originate in the industrial sector; noteworthy among them are the metallurgy and metal-mechanics sector, the cork sector, and the footwear and textile sectors.

III. Outsourcing and co-operation between SMEs

The new global economy, with increasingly fragmented production processes and with new threats and opportunities, has forced companies to look beyond their individual strategies, placing inter-company collaboration on the agenda of the business world. This does not apply merely to bigger companies but also to SMEs, since, especially for survival reasons, they have to respond dynamically to constant challenges. Collaboration between companies may take the form of virtual collaboration (Somora et al. 2005), production process collaboration (Lin et al. 2005), or collaboration at the level of distribution channels, among other forms. Additionally, competition through market positioning is increasing, side by side with the growing frequency of collaboration among competitors. The dynamics of network organisation, partnerships and collaborative enterprises are fundamental principles of organisation in the New Economy. This kind of co-competition often takes place at a regional level.

It is certain that resources, both human and financial, are not abundant in any company, but the situation in industrial SMEs is typically troublesome. It is preferable that these smaller industrial companies dedicate their efforts (resources) to the activities for which they were designed (producing), dealing with management systems issues together with another entity, namely one that supports the management of their system(s). We consider two alternative ways for the smaller companies to dedicate their efforts to the activities for which they were designed (producing) while simultaneously implementing the various management systems, separately or in an integrated way. One alternative is calling in specialised companies dedicated to assisting with the management of the various systems (Outsourcing); this could well lead to the creation of new enterprises (Entrepreneurship). The second alternative concerns the creation of new companies (co-operatives) within a group of companies ("Co-competition" – Intrapreneurship) in the same industrial sector or in close geographical proximity, with the goal of assisting their management systems. For a small company, the choice between the two options depends, among other things, on factors such as its economic and financial situation, or the applicable market qualifiers and order winners.

IV. Results of industrial SME survey in Portugal

A study based on a survey was carried out with the purpose of gauging the decision criteria concerning the solution chosen for the problem, common in industrial SMEs, of the implementation (and certification) of management systems, as well as other matters related to industrial strategy, such as personnel training needs. A questionnaire (in Portuguese) was sent to randomly selected industrial SMEs in Portugal, resulting in 31 completed questionnaires. The companies were selected from a commercial database. The questionnaires were sent out both by email and by post to around 500 SMEs from industrial sectors of activity located in mainland Portugal. The assumption advanced throughout this paper, that the choice for a smaller industrial company between the alternatives presented depends on its sector of activity, its financial health, and the applicable market qualifiers and order winners, was verified. The main questions included were: "What are your market qualifiers?"; "What are your order winners?"; "How would you select between the two alternatives for establishing Management Systems?". Questions were presented to respondents with an open-ended answer option as well as multiple-choice alternatives, which resulted from a previous smaller pilot study carried out within a sample of industrial SMEs in the Beira Interior region of Portugal.

Market qualifiers were defined in the questionnaire, according to Hill (1993), as market entry requirements. Hence, a company whose product does not fulfil the applicable market qualifiers is not considered a potential supplier of the product to that market. Concerning the market qualifiers of the 31 industrial SMEs participating in the survey, the following results were compiled:

• Product certification: 42%

• ISO 9001 certification: 35%

• E-procurement: 13%

• ISO 14001 certification: 10%

• Responsiveness and adaptability to customer demands: 9%

Order winners were also succinctly defined in the questionnaire, according to Hill (1993), as the actual product, service or company features or properties that win the sale for the company. Order winners are usually demoted, over time, to market qualifiers, as competition catches up (a notion parallel to the “red ocean” concept of Kim and Mauborgne, 2005). The respondent companies identified their applicable order winners as follows:


• Product quality: 81%

• Product pricing: 77%

• Delivery time: 65%



• ISO 9001 certification: 23%

• Product certification: 23%

• Quality of After-Sales service: 23%

• ISO 14001 certification: 6%

There is evidence that Management Systems are relevant and important to SMEs. Hence, if a choice had to be made between an external company and a co-operative to assist in putting in place and running the Management Systems, it would also be very relevant to glimpse how this decision is informed. Therefore, one of the questions in the questionnaire, reflecting the two alternatives discussed throughout the paper, probed what criteria would inform the decision process of a company that, for the day-to-day running of a management system (e.g. Quality, Environment, or OHS), had to choose between resorting to the support of an external company (a) and an inter-company co-operation (b). The following results were obtained for this question:

• Economic and Financial considerations: 71%

• Know-How (competencies): 45%

• Enough Human Resources: 32%

• Focusing on Production Activity: 26%

• Public Incentives: 16%

Following the previous question, respondents were asked what they saw as the advantages and disadvantages of each of the alternatives considered, i.e., an external company (a) or an inter-company co-operation (b). The results for this question are shown in aggregate form in Tables 4 and 5.

Advantages:
• More economical (cost reduction)
• Independent service
• New ideas and new processes
• Acquiring new knowledge in the area (intermediate levels of personnel)
• External human resources
• Collaborators pay more attention to external trainers
• Management of productive timings and search for alternative means

Disadvantages:
• Increased cost
• Less control over the situation
• Periodical service (lack of continuity)
• Ignorance and adaptation of company management to the new reality
• Less internal intervening and consequent inapplicability of the system
• Inadequacy to company culture

Table 4. Advantages and disadvantages of resorting to an external company (a).

Advantages:
• Sharing of knowledge
• Teamwork strength (synergies)
• Internal efforts in system applicability
• More economical
• Partner fidelity

Disadvantages:
• Periodical service (lack of continuity)
• Increased cost
• Waste of internal resources
• Superimposition of individual interests (of the co-operating companies)
• All the disadvantages common to an inter-company service which is not adequate to the company’s culture

Table 5. Advantages and disadvantages of the inter-company co-operation (b).

The results are consistent with what was hypothesised throughout this paper, especially the importance of factors related to human resources (including competencies) and of the financial questions that primarily affect SMEs. Regarding the criteria that influence the choice at hand, most answers (71%) point to economic and financial considerations, and about 45% concern the set of human resources and competencies. This confirms that, particularly in industrial SMEs, besides the lack of financial resources, it is the shortage of human resources and of competencies in these areas of management systems that limits adherence to them. The set of advantages and disadvantages unveiled by the survey for the two alternatives presented to deal with resource scarcity within industrial SMEs shows how this decision process is informed.

V. Conclusion

In this paper we have presented the rationale leading to a survey, and its results, concerning how one form of innovation in the organisation of management systems in SMEs is perceived by a sample of industrial SMEs in Portugal. A brief historical perspective showed how many countries may currently find in the internationalization of SMEs an important mechanism of economic growth. In the European Union, SMEs represent 99% of all companies, and in Portugal the contribution of SMEs to economic growth is very significant. The tendency of SMEs to adopt management systems certification, especially the Quality Management System, was emphasized, despite the fact that many do not possess the resources and competencies to implement and support these systems on a day-to-day basis (EMSs and OHSMSs are also involved). The paper suggests that smaller sized industrial companies ought to dedicate their efforts, and scarce resources, to the activities for which they were designed (producing), while leaving the implementation of their management systems to another entity. In this respect, the traditional option has been to outsource part of this work to external entities (consulting companies). The paper sheds light on an innovative way to solve this problem: the creation of a company, in the form of a co-operative, within a group of companies (pertaining to the same industrial sector, or in close geographical proximity), in a ‘co-competition’ manner and as a form of ‘intrapreneurship’.

A questionnaire study was the means devised to probe the decision criteria behind the kind of solution chosen to the problem, common in SMEs, of the implementation (and certification) of management systems. The aims of the questionnaire, among industrial SMEs in Portugal, were twofold: 1 - to verify the relevance of the several management systems for industrial SMEs; 2 - to uncover the factors and criteria underlying the choice between outsourcing and co-operation in implementing and running (certified) management systems. Resource scarcity seems to be the main driver for the selection criteria elicited by the study (economic and financial considerations, competencies and know-how, quantity of human resources, focus on production, and public incentives). The survey also sought to unveil the advantages and disadvantages of each of the two alternatives presented in this paper. The small number of respondents (31) means the results lack statistical significance, although their exploratory value should not be discarded. While some of the advantages and disadvantages unveiled by the survey are common to both alternatives, and hence have no distinguishing character, others do. The disadvantages exclusive to resorting to an external company are: ignorance and adaptation of company management to the new reality, less internal intervening and the consequent inapplicability of the system, and less control over the situation. The disadvantages exclusive to the other alternative, an inter-company co-operation, were: superimposition of the individual interests of the co-operating companies and waste of internal resources. The advantages exclusive to the former alternative were: independent service, acquiring new knowledge in the area (intermediate levels of personnel), insights brought by external human resources, the fact that collaborators pay more attention to external trainers, and improved management of productive timings and of the search for alternative means. Finally, the advantages exclusive to an inter-company co-operation were: sharing of knowledge, synergies due to teamwork strength, internal efforts towards system applicability, and enhanced fidelity among partner companies in business.

In summary, the results of the survey corroborate the importance of management systems certification as market qualifiers, and show that certifications swiftly shifted from being order winners to becoming compulsory market qualifiers. Moreover, certified management systems are more often considered market qualifiers than order winners, which emphasizes their increased importance. The urgency for industrial SMEs to implement and certify management systems is demonstrated, despite the common problem of resource scarcity. This also demonstrates the relevance to industrial SMEs of the two alternatives discussed throughout the paper (resorting to an external company, e.g. a consultancy, or forming a specialised company in a co-operative manner and in a co-competition form). Underlying these alternatives is the premise that, despite the admitted scarcity of human resources, economic resources and competencies, acquiring the applicable market qualifiers, and hence keeping up with evolving markets, is paramount to the very survival of these companies.

The results of the survey study have direct implications for the strategic issues facing industrial SMEs, namely management system certification in terms of Quality, Environment and Occupational Health and Safety, as well as other management systems that are still emerging, such as Innovation, Energy or Ergonomics. The results support the growing importance of these systems to industrial SMEs, and such issues should therefore warrant ongoing attention as a theme of focused research. A problem underlying this added attention to a growing number of management systems is providing affirmative answers to the question “can more be done with so little, in order to remain competitive?”. This question is also applicable to developing countries, where the path that remains to be walked is longer for many SMEs, and where the words “more” and “little” in the previous question may reverberate with added intensity. The data from Portuguese industrial SMEs may prove interesting and relevant as an example illustrating a positive answer to this question, with the aim of supporting economic growth based on a healthy industrial SME base.


Acknowledgements

We would like to thank the industrial companies participating in the survey for their support of this research. A subset of the results of the survey (pertaining exclusively to companies of the Beira Interior region of Portugal) was presented at the 16th Flexible Automation and Intelligent Manufacturing Conference, which took place in Limerick, Ireland, in June 2006.




References

Conway C. (2006) Supporting family businesses: issues and challenges, The International Journal of Entrepreneurship and Innovation, 7(2), pp. 127-128.

European Commission (2004) SMEs in Europe 2003, Observatory of European SMEs 2003/7, Luxembourg:

Office for Official Publications of the European Communities.

Hill T. (1993) Manufacturing Strategy – The strategic Management of the Manufacturing Function, 2nd

edition, Houndmills and London: The Macmillan Press, Ltd.

Kim W. C., Mauborgne R. (2005) Blue Ocean Strategy – How to create uncontested market space and make

the competition irrelevant, Harvard Business School Publishing Corporation.

Lam A. (2005) Organizational Innovation, in J. Fagerberg, D. Mowery and R. R. Nelson (eds.), The Oxford Handbook of Innovation, Oxford: Oxford University Press.

Lin H. W., Nagalingam S. V., Chiu M. (2005) Development of a Collaborative Decision-Making Model For a

Network of Manufacturing SMEs, in Proceedings of the 18th International Conference on Production

Research – ICPR 18, Salerno, Italy, July 31-August 4, 2005.

OECD and Eurostat (2005) “Oslo Manual - Guidelines for collecting and interpreting innovation data”, third

edition, Paris: OECD Publishing.

Somora D., Sislak J., Valcunha S. (2005) Model of Cooperation Among Small and Medium Enterprises

within Virtual Enterprise, in Proceedings of the 18th International Conference on Production Research –

ICPR 18, Salerno, Italy, July 31-August 4, 2005.



An SDSS for the Space Process Control, a Hybrid Approach: Fuzzy Measurement, Linear Programming, and Multicriteria Decision Aid. Application to Regional Planning

D. Hamdadou*, K. Labed, B. Beldjilali
Department of Data Processing, Faculty of Sciences,
University of Oran Es-Senia, BP 1524, El-M'Naouer, Oran, 31000, Algeria
* Corresponding author: 00 213 41 51 47 69, 00 213 50 72 74 48

Abstract: Our study falls within the perspective of optimizing the quality of the decisions produced by the spatial-temporal decision-making process. The aim is to provide an extensible, generic, deterministic and multicriterion model based on the axiomatics of models that represent decision strategies and allow interaction between criteria. The suggested approach is constructive, interactive, and based on uncertainty theories (fuzzy logic, possibility theory, fuzzy integrals) and linear programming. We define a new approach both for the description of the available information and for its use, and suggest replacing the additivity property in the performance aggregation phase with a more reliable property, monotonicity, by using nonadditive aggregation operators resulting from capacity theory and widely known as fuzzy measurements. The latter allow evaluating the spatial compatibility of the available data by defining a weight on each subset of criteria; the fuzzy integral, and more specifically Choquet's integral, is an aggregation operator able to take the interaction among these criteria into account. After schematizing the complex decisional situation concerned, we elaborate in this paper a decision-making process based on the combined use of Geographical Information Systems (GIS) and Multicriterion Analysis Methods, namely the (ordinal and nominal) sorting approaches, to support the analysis of the territorial (spatial) context. This study allows professionals to carry out a diagnostic and proposes adapted actions for the resolution of two Regional Planning problems: the first relates to the search for a surface that best satisfies certain criteria, and the second consists in realizing the land use plan.

Keywords: Spatial decision support system (SDSS), Regional planning (RP), Multicriterion analysis (MCA),

Geographical information system (GIS), Fuzzy Measurement.

I. Introduction

Decision-making methods are still scarcely used. Demand is, however, increasing in the environment and urban development sectors, since price objectives are no longer the only ones that justify a decision. Several authors have already shown the adequacy of innovatively associating GIS with multicriterion analysis methods in the service of RP decision-making. In (Eastman and Toledaro 1994), the authors addressed the choice of the best-adapted site for a carpet manufacturing factory. In (Scharling 1997), many applications of multicriterion methods concerning environmental management, and especially localization with a relatively restricted number of variants, are described. In (Joerin 1997), MEDUSAT is proposed for the localization of a waste treatment site.

In this context, the multicriterion classification methods traditionally employed leave aside the complex interaction phenomena among the criteria. Indeed, the most classical procedure in multicriterion evaluation consists in considering a simple weighted arithmetical average to incorporate the information characterizing the decision maker's preferences on the set of criteria. This supposes that the criteria are preferentially independent. However, in reality, and especially in a field as complex as RP, the criteria interact (correlation, interchangeability, complementarity, etc.) and the preferential independence hypothesis is seldom satisfied (Marichal 2003).

The expression of human subjectivity in territorial problems, as well as the modeling of interaction phenomena between environmental criteria, constitutes a significant aspect of aggregation problems. This requirement leads to increasingly complex models, able to represent subtle decision phenomena. The essence of this paper is to propose a spatial decision support system (SDSS) devoted to helping deciders better analyze the territorial context. The main benefit of this strategy is to optimize the aggregation phase by considering the interactive aspect of the identified criteria. This paper is organized as follows: once the context and the contribution of our study are specified (who decides what, and how?), section 2 briefly reviews RP models using the weighted arithmetical average. The third section is devoted to the background of the proposed model (nonadditive models, fuzzy measurement). Section 4 clarifies the opportunity of exploiting Choquet's integral in multicriterion sorting aggregation. In Section 5, we present the fuzzy measurement identification model as well as two sorting approaches using Choquet's integral (ordinal and nominal sorting), and we outline the algorithms and the linear programs developed over this model. In section 6, we describe in detail the main steps of the proposed decision-making strategy. The suggested decision-making process is accompanied by a case study, described in section 7, focusing on the various phases of the process. Finally, Section 8 concludes the paper by summarizing our work and providing a preview of future research.

1. Setting in context, Scope and study hypotheses

Because the social aspiration to transparency of administrative decisions and to information in the environmental field has become a stake, the decision-making process is undergoing deep changes, from a traditional top-down approach towards a new logic in which decisional power is redistributed. In parallel, the increased place given to the environment has generated a significant increase in the production of quantitative and qualitative information on project impacts. In order to interpret and integrate these new data in his procedure, the environmentalist needs decision-making tools. This implies that many parameters are considered during each decision or intervention, and that enormous quantities of information are handled. Data-processing tools provide an appropriate support in such a context. Putting these tools in place is, however, not easy to achieve, and the treated problem then comes down to the divergence between optimization and decision-making in urban engineering.

The statement of the principal question

How to face the complexity of urban problems, and how to use the land in a measured and rational way, in consideration of all the territory's characteristics? In other words: which decision-making procedure should be adapted to the environment?

The study hypotheses

The decision-making aid approach in RP will operate in the context previously described and, in particular, must be voluntarist but not interventionist, decentralized, flexible, open and participative. Also, the presence of an optimal solution is conditioned by three constraints: globality, stability and transitivity.

2. Contribution

Our work deals primarily with decision-making systems for territorial (spatial-temporal) process control. In the decisional activity developed for this purpose, problems of a Spatial, Multi-Scale, Multi-Actor, Multi-Objective and Multi-Criterion nature are raised. The present research aims to propose procedures which allow deploying effective software tools to support two RP problems:



• The punctual management problem, which consists in searching for a surface that best satisfies certain criteria, such as the localization of an infrastructure of the type: housing construction, administrative building, purification station, etc.

• The problem which consists in segmenting the geographical chart into areas: designing a polygon network where each polygon determines the land use type, such as the design of zoning plans by considering the vicinity of these zones and the overall organization of the suggested plan.
Thus, we envisage means of assisting the decisional step in RP, relative to the stakes of the various decision-making process phases. A first step consists in thematically identifying the environmental criteria considered. In the second step, we deal with the complex phenomena of interactive criteria (correlation, interchangeability, complementarity and preferential dependence) and introduce fuzzy measurements as solutions to the problem of compensation between the criteria in the aggregation phase, and primarily in the weight determination process (Hamdadou et al. 2007). This observation has led us to use the discrete Choquet’s integral as an aggregation operator in both sorting methods. This operator aims to improve the power of multicriterion analysis by generalizing the weighted arithmetic average (Hamdadou and Labed, 2006).

II. The weighted arithmetical average: Criticism

In the multicriterion decision-making procedure, when preferential independence between the criteria is assumed, it is frequent to consider the classical additive model within the performance aggregation phase. The most used aggregation operator is the weighted arithmetical average, an additive operator of the form:

M_w(x) = Σ_{i=1..n} w_i · x_i, for x ∈ [0,1]^n, with Σ_i w_i = 1 and w_i ≥ 0 ∀ i ∈ N

where N = {1,…,n} indicates the set of the n indices relative to the criteria and w_i the weight (or importance coefficient) of criterion i. To lighten the notation, we write criterion i instead of criterion of index i.

It is accepted that additivity of the set function is not always a desirable property in real situations, particularly in the presence of human reasoning, where the preferential independence hypothesis is seldom satisfied. Indeed, the weighted arithmetical average is unable to model any interaction, and leads to mutual preferential independence among the criteria. Moreover, this operator:

• Erases the possibly conflicting character of the criteria;
• Eliminates Pareto-optimal 1 alternatives which can be interesting;
• Can favor the extreme alternatives;
• Lets a small variation in the weights have great consequences on the overall preference.
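The additive form above, and the first criticism, can be sketched in a few lines of Python (the scores and weights are illustrative, not from the paper): with equal weights, a balanced profile and a strongly conflicting one receive the same overall score.

```python
# Sketch: the weighted arithmetical average M_w(x) = sum_i w_i * x_i
# and one of its limits (illustrative values).

def weighted_mean(x, w):
    """Additive aggregation: assumes preferential independence of the criteria."""
    assert abs(sum(w) - 1.0) < 1e-9 and all(wi >= 0 for wi in w)
    return sum(wi * xi for wi, xi in zip(x, w))

w = [0.5, 0.5]            # two equally important criteria
balanced = [0.6, 0.6]     # consistent profile
conflicting = [1.0, 0.2]  # strong on one criterion, weak on the other

# Both alternatives receive the same overall score (0.6): the operator
# erases the conflicting character of the criteria.
print(weighted_mean(balanced, w), weighted_mean(conflicting, w))
```
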

III. Nonadditive Models

In multicriterion aggregation, we resort to nonadditive models when the separability property does not hold. The latter were proposed by (Sugeno 1974) to generalize additive measurements, and seek to express synergies between criteria. Among the most used nonadditive aggregation functions, we cite the ordered weighted averages (OWA) and the nonadditive integrals with respect to a capacity, the best known of which are Choquet's integral and Sugeno's integral.

1. Fuzzy Measurements

1 An alternative is Pareto-optimal, or efficient, if it is not dominated by any other one: it cannot be improved with regard to one criterion without deteriorating it for another.



A fuzzy measurement on N is a monotonous set function v : 2^N → [0,1] satisfying the boundary conditions v(∅) = 0 and v(N) = 1, and such that v(S) ≤ v(T) whenever S ⊆ T (S, T ⊆ N). For any S ⊆ N, v(S) can be interpreted as the weight of the combination of criteria S; better still, as its capacity to make the decision alone (without the intervention of the other criteria). The monotonicity (growth) expressed by this operator then means that the importance of a combination cannot decrease when we add an element to it.
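The definition above can be checked mechanically. The following sketch (with hypothetical capacity values) stores a fuzzy measurement as a table of subset weights and verifies the boundary and monotonicity conditions; note that v({1,2}) > v({1}) + v({2}) expresses a positive synergy that no additive weighting could encode.

```python
# Sketch: a fuzzy measurement on N = {1, 2, 3} as a weight v(S) for every
# subset S, with the conditions of the definition checked explicitly.

def is_fuzzy_measure(v, n):
    criteria = frozenset(range(1, n + 1))
    if v[frozenset()] != 0 or v[criteria] != 1:
        return False                    # boundary conditions v(∅)=0, v(N)=1
    for S in v:
        for T in v:
            if S <= T and v[S] > v[T]:  # monotonicity: S ⊆ T ⇒ v(S) ≤ v(T)
                return False
    return True

# Hypothetical capacity; v({1,2}) > v({1}) + v({2}) is a positive synergy.
v = {frozenset(): 0.0, frozenset({1}): 0.2, frozenset({2}): 0.2,
     frozenset({3}): 0.3, frozenset({1, 2}): 0.7, frozenset({1, 3}): 0.5,
     frozenset({2, 3}): 0.5, frozenset({1, 2, 3}): 1.0}
print(is_fuzzy_measure(v, 3))   # True
```
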

2. Choquet's Integral, Definition and Intuitive Approach

Choquet's integral can be seen as the simplest means to extend, to arbitrary alternatives, a decision maker's reasoning on binary alternatives. This concept was initially introduced in capacity theory (Choquet 1953). Its use as a fuzzy integral with respect to a fuzzy measurement was then proposed by Murofushi (Sugeno 1974). Choquet's integral of the function x : N → IR with respect to v is defined by:

C_v(x) = Σ_{i=1..n} x_(i) · [ v(A_(i)) − v(A_(i+1)) ]

where (.) indicates a permutation of N such that x_(1) ≤ … ≤ x_(n), A_(i) = {(i),…,(n)} and A_(n+1) = ∅.

As an aggregation operator, Choquet's integral is a monotonous increasing function from [0,1]^n to [0,1], bounded by two values (C_v(0,…,0) = 0 and C_v(1,…,1) = 1) and satisfying particularly remarkable properties of continuity, idempotence and decomposability (Marichal 2003).
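A minimal implementation of the discrete Choquet integral, following the definition above (the capacity values are hypothetical): scores are sorted increasingly, and each increment is weighted by the capacity of the coalition of criteria whose scores are at least as high. Unlike the weighted average, a synergetic capacity rewards the balanced profile over the conflicting one.

```python
# Sketch: discrete Choquet integral of x with respect to a fuzzy measurement v.

def choquet(x, v):
    """x maps criterion -> score; v maps frozenset of criteria -> capacity."""
    order = sorted(x, key=x.get)      # permutation (.) with x(1) <= ... <= x(n)
    total, prev = 0.0, 0.0
    for k, i in enumerate(order):
        A = frozenset(order[k:])      # A(k) = {(k), ..., (n)}
        total += (x[i] - prev) * v[A] # telescoped form of the definition
        prev = x[i]
    return total

# Hypothetical capacity on two criteria with positive synergy:
v = {frozenset(): 0.0, frozenset({1}): 0.3, frozenset({2}): 0.3,
     frozenset({1, 2}): 1.0}
print(choquet({1: 0.6, 2: 0.6}, v))   # 0.6
print(choquet({1: 1.0, 2: 0.2}, v))   # 0.2*1.0 + 0.8*0.3 ≈ 0.44 < 0.6
```
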

3. Fuzzy measurement Additivity of order K

In decisional problems including n criteria, in order to consider the interaction among the criteria when modeling the decision maker's preferences, we need to define 2^n coefficients representing the fuzzy measurement v, where each coefficient corresponds to the weight of a subset of N. However, the decision maker cannot provide the totality of the information allowing these coefficients to be identified: in the best cases, he can estimate the importance of each criterion or of each pair of criteria. To avoid this problem, (Grabisch 1997) proposed the concept of fuzzy measurement additivity of order k. In the suggested sorting approaches, we will use a model of order 2 of Choquet's integral, allowing the interaction among the criteria to be modeled using only n + C(n,2) = n(n+1)/2 coefficients to define the fuzzy measurement.
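For a 2-additive measurement, Choquet's integral can equivalently be written in Möbius form, C(x) = Σ_i m_i·x_i + Σ_{i<j} m_ij·min(x_i, x_j), which makes the n(n+1)/2 coefficient count explicit. A sketch with hypothetical coefficients:

```python
from math import comb

# Sketch: 2-additive Choquet integral in Möbius form; only the singleton
# coefficients m_i and the pair coefficients m_ij are needed.

def choquet_2additive(x, m_single, m_pair):
    single = sum(m_single[i] * x[i] for i in x)
    pair = sum(m_pair[(i, j)] * min(x[i], x[j]) for (i, j) in m_pair)
    return single + pair

n = 4
assert n + comb(n, 2) == n * (n + 1) // 2   # 10 coefficients instead of 2**n = 16

# Hypothetical Möbius coefficients on 2 criteria (they sum to 1):
m1 = {1: 0.3, 2: 0.3}
m2 = {(1, 2): 0.4}      # positive interaction between criteria 1 and 2
print(choquet_2additive({1: 1.0, 2: 0.2}, m1, m2))   # 0.3 + 0.06 + 0.08 ≈ 0.44
```
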

IV. Multicriterion Sorting Problems and Choquet's integral

Let F be a coherent family of criteria and A a set of actions. A multicriterion sorting problem consists in partitioning A according to F. It consists in posing the problem in terms of sorting actions into categories, in consideration of the revisable (and/or transitory) character of A. This problem either recommends acceptance or rejection for certain actions, or proposes a methodology based on a procedure of assignment to categories of all the appropriate actions, for a possibly repetitive and/or automated use.

According to the problem structure, we distinguish two types of sorting. When the categories are ordered and characterized by a sequence of limiting reference actions, each category being represented by two families of reference actions, one lower (constituting the lower limit) and one higher (constituting the upper limit), this class of problems is known as "ordinal sorting problems" or "multicriterion segmentation". If the categories are not ordered and are characterized by one or more standard actions (actions of central reference), this class of problems is known as "nominal sorting problems" or "multicriterion discrimination". In the literature, the reference multicriterion decision-making problems (Choice, Sorting, Description and Ranking) are approached by methods which do not consider the concept of interactive criteria, and rather suppose that the criteria are preferentially independent. However, in a field as complex as RP, the criteria interact and the preferential independence


hypothesis is seldom satisfied. In what follows, we will consider interactive criteria in the sorting approaches we develop.

V. Proposition of a Fuzzy Measurement Identification Model

Marichal and Roubens's model (Marichal 1999), founded on Choquet's integral, is based on a partial quasi-order on the set of actions A and on certain semantic considerations around the criteria, namely the importance of the criteria and their interactions. This model represents the information concerning criterion importance by a partial preorder on F. This information is "poor", because defining a partial ranking on F according to the criterion importance coefficients ωj, j ∈ F, does not precisely identify the coefficients ωj. Consequently, to make this fuzzy measurement identification model more deterministic with respect to the calculation of the importance coefficients and of the interaction indices among the criteria, we additionally bound each ωj by an interval of the form [ωj−, ωj+], j ∈ F (Hamdadou et al. 2006). Formally, the most important input data of the proposed model are:

A = {a1,…,am}: the set of actions; B = {b0,…,bp}: the set of limiting reference actions; C = {C1,…,Ck}: the ordered categories considered; F = {g1,…,gn}: the family of criteria; B^h = {b_p^h | h = 1,…,k and p = 1,…,Lh}: the set of central reference actions of the h-th category; B = ∪_{h=1..k} B^h: the set of all central reference actions; a partial quasi-order ≥A on A (a partial ranking of the actions according to their overall performances); a partial preorder ≥F on F (a partial ranking of the criteria according to their importance coefficients); a partial preorder ≥P on the set P of pairs of criteria (a partial ranking of the pairs of criteria according to their interaction indices); and the sign of certain interaction indices, ωij > 0, = 0 or < 0, representing respectively a positive synergy, independence, or redundancy between criteria i and j. All these data are modeled in terms of linear equations or inequations according to the Möbius representation of a fuzzy measurement v (Marichal 1999).

Ordinal sorting: This sorting strategy implies an outranking synthesis approach, which rests on a preference model accepting situations of incomparability between the actions and not imposing any transitivity property. This method assigns an action ai ∈ A to a category Ch of the ordered set of categories C = {C1,…,Ck}. The multicriterion evaluation is carried out in two phases: the category modeling procedure and the (conjunctive or disjunctive) assignment procedure.

Nominal sorting: We deal with the nominal sorting procedure by considering the interactive aspect of the criteria; it aims at helping the decision maker choose the most suitable categories for the assignment of an action ai ∈ A. This procedure belongs to the supervised classification methods. It determines fuzzy resemblance relations by generalizing the indices (agreement and disagreement indices) used in the ELECTRE III method (Belacel 2000). It assigns an action to the category for which its membership degree is highest.

According to the sorting strategies described above, we define two kinds of sorting algorithms. The determination of the importance coefficients and interaction indices among the criteria (i.e., of the corresponding fuzzy measure) is ensured by solving the corresponding linear constraint-satisfaction program.
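As a minimal, hypothetical illustration of such constraints (three criteria; all numbers are ours, not the paper's), one can check a candidate 2-additive Möbius representation against normalisation, monotonicity and the decision maker's ordinal statements; a real tool would instead solve the full linear constraint-satisfaction program:

```python
# Hypothetical sketch: checking a candidate 2-additive Möbius representation
# (three criteria) against ordinal preferential information of the kind
# described above. All names and numbers are illustrative.

def shapley(m, mm, i, n=3):
    """Importance coefficient of criterion i: m_i + (1/2) * sum_j m_ij."""
    return m[i] + 0.5 * sum(mm[frozenset((i, j))] for j in range(n) if j != i)

def is_feasible(m, mm, n=3, eps=1e-9):
    # Normalisation: all Möbius masses sum to one.
    if abs(sum(m) + sum(mm.values()) - 1.0) > eps:
        return False
    # 2-additive monotonicity: m_i plus the interactions of any subset must be
    # non-negative; it suffices to add all *negative* interactions of i.
    for i in range(n):
        neg = sum(v for k, v in mm.items() if i in k and v < 0)
        if m[i] + neg < -eps:
            return False
    return True

# Candidate: positive synergy between criteria 0 and 1 (omega_01 > 0),
# redundancy between criteria 0 and 2 (omega_02 < 0).
m = [0.3, 0.2, 0.3]
mm = {frozenset((0, 1)): 0.15, frozenset((0, 2)): -0.05, frozenset((1, 2)): 0.10}

assert is_feasible(m, mm)
# Ordinal statement "criterion 0 is at least as important as criterion 2":
assert shapley(m, mm, 0) >= shapley(m, mm, 2)
```

With these masses the importance coefficients come out as 0.35 for criterion 0 and 0.325 for criterion 2, so the ordinal statement holds.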

ERIMA07’ Proceedings



qj(bh), pj(bh), νj(bh) are respectively the indifference, the preference and the veto thresholds; λ : the cut value.


For i = 1 to m Do
For h = 0 to p Do
For j = 1 to n Do
Calculate the partial agreement index cj(ai, bh);
Calculate the partial disagreement index dj(ai, bh);
Enddo;
Calculate the total agreement index C(ai, bh);
Calculate the credibility index σ(ai, bh);
Enddo; Enddo;


1. Conjunctive Assignment Procedure

For i = 1 to m Do
For h = p downto 0 Do
If σ(ai, bh) ≥ λ then break; Enddo;
Assign ai to the category Ch+1; Enddo;


2. Disjunctive Assignment Procedure

For i = 1 to m Do
For h = 0 to p Do
If σ(bh, ai) ≥ λ and σ(ai, bh) < λ then break; Enddo;
Assign ai to the category Ch; Enddo;
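Under the assumption of a precomputed credibility index σ, the two assignment rules can be sketched as follows (a minimal illustration; the names and the handling of boundary cases are ours, not the paper's):

```python
# Hedged sketch of the two assignment rules above. sigma[(x, y)] is a
# precomputed credibility index in [0, 1]; profiles b_0..b_p are ordered
# increasingly and delimit the categories.

def conjunctive_assign(a, profiles, sigma, lam):
    """Pessimistic rule: scan profiles from best to worst; assign a to the
    category just above the first profile that a outranks."""
    for h in range(len(profiles) - 1, -1, -1):
        if sigma[(a, profiles[h])] >= lam:
            return h + 1              # category C_{h+1}
    return 0                          # a outranks no profile: worst category

def disjunctive_assign(a, profiles, sigma, lam):
    """Optimistic rule: scan profiles from worst to best; assign a to the
    category below the first profile strictly preferred to a."""
    for h, b in enumerate(profiles):
        if sigma[(b, a)] >= lam and sigma[(a, b)] < lam:
            return h                  # category C_h
    return len(profiles)              # no profile beats a: best category

profiles = ["b0", "b1"]
sigma = {("a1", "b0"): 0.9, ("a1", "b1"): 0.4,
         ("b0", "a1"): 0.3, ("b1", "a1"): 0.8}
assert conjunctive_assign("a1", profiles, sigma, lam=0.75) == 1
assert disjunctive_assign("a1", profiles, sigma, lam=0.75) == 1
```

Here both rules agree on the middle category; in general the two assignments bracket the plausible categories for an action.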




dj−(bp h), dj+(bp h), νj−(bp h), νj+(bp h) : respectively, the discrimination and veto thresholds of each profile and each criterion;
ωi h : the importance coefficient of criterion i in the category Ch;
ωij h : the interaction index between the criteria i and j in the category Ch;
λ : the cut value


For i = 1 to m Do
For h = 1 to k Do
For p = 1 to Lh Do
For j = 1 to n Do
Calculate the partial similarity index Cj(ai, bp h);
Calculate the partial discordance index Dj(ai, bp h);
Enddo;
Calculate the total similarity index I(ai, bp h);
Calculate the membership degree of the action to each category d(ai, bp h);
Enddo; Enddo;
Calculate the credibility index d(ai, Ch) = Min{ d(ai, C1), …, d(ai, Ck) }; Enddo;

Assignment Procedure


d(ai , Ch ) = Max{ d(ai , C1 ),…, d(ai , Ck ) };

If d(ai , Ch ) ≥ λ then assign ai to the

category Ch

Else assign ai to the basket class;
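The assignment rule above (maximum membership degree, subject to the cut value λ, with the basket class as fallback) can be sketched as:

```python
# Sketch of the nominal assignment rule: an action receives the category
# with the highest membership degree, provided it reaches the cut value
# lam; otherwise it falls into the basket class. Data are illustrative.

def nominal_assign(membership, lam):
    """membership: dict mapping category name -> membership degree."""
    best = max(membership, key=membership.get)
    return best if membership[best] >= lam else "basket"

degrees = {"C1": 0.35, "C2": 0.80, "C3": 0.55}
assert nominal_assign(degrees, lam=0.75) == "C2"      # clear winner
assert nominal_assign(degrees, lam=0.90) == "basket"  # nothing reaches lam
```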


2 This parameter ensures that an action compared with the profiles of a category satisfies the principle of


VI. The Model Description

In this work, we develop a multicriterion decision-making process (Figure 1) which integrates a territory model and a multicriterion model. The suggested procedure relies on the discrete Choquet integral as the aggregation operator in the two sorting approaches. The procedure can thus be regarded as an extension of the ELECTRE Tri sorting method (Bouyssou and Dubois, 2003).
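As a hedged, minimal sketch (the capacity values are illustrative, not taken from the paper), the discrete Choquet integral aggregates an action's performances by sorting them increasingly and weighting each increment by the capacity of the criteria still at or above that level:

```python
# Hedged sketch of the discrete Choquet integral used as an aggregation
# operator: sort performances increasingly and weight each increment by
# the capacity (fuzzy measure) of the remaining coalition of criteria.

def choquet(x, mu):
    """x: dict criterion -> performance; mu: dict frozenset -> capacity."""
    order = sorted(x, key=x.get)              # criteria by increasing score
    total, prev = 0.0, 0.0
    for k, crit in enumerate(order):
        coalition = frozenset(order[k:])      # criteria at or above this level
        total += (x[crit] - prev) * mu[coalition]
        prev = x[crit]
    return total

mu = {frozenset("ab"): 1.0, frozenset("a"): 0.4, frozenset("b"): 0.5}
# 0.2 * mu({a,b}) + (0.6 - 0.2) * mu({b}) = 0.2 + 0.2 = 0.4
assert abs(choquet({"a": 0.2, "b": 0.6}, mu) - 0.4) < 1e-9
```

Because the coalition's capacity need not be additive, such an aggregation can model the synergy and redundancy between criteria discussed in Section 5.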

Figure 1. The suggested Decisional Model

The Territory Model
Spatialized information is a privileged vector for decision-making. Through this model, we try to show how GIS, long oriented towards description, can be integrated as an effective communication support in the dialogue and justification phases of multicriterion decision-making. Indeed, the couple (GIS, simulation model) constitutes a model that allows the territory to be described, and it is the support of the spatial analysis procedures. Once the decision makers have identified the actions and the criteria, these procedures (which can concern, for example, the evaluation of sunshine duration or pollution risks) attribute to each action a value (performance) according to each criterion. The set of actions and their performances on each criterion constitutes the "evaluation matrix" (or performance table). The actions are attached to places, so the evaluation matrix can be represented in the form of a map. The link between the actions and the territory is maintained throughout the procedure. This feature is advantageous, since it constantly allows locating the alternatives (actions) in their environment (Hamdadou et al. 2006).

The Multicriterion Model
The various actions are then analysed using a multicriterion method (ordinal or nominal sorting), generating one or more propositions. Because of the conflicts and transformations which intervene during the course of the decision procedure, these two procedures do not seek an optimal decision; rather, they provide a suitable decision resulting from a compromise. Moreover, they involve the decision maker in the model construction phase, so that he can integrate his preferences (elaboration of a concerted territory diagnosis) (Joerin 1997). The multicriterion classification methods use only comparisons between the action to be assigned and the reference objects of each class. This comparison is made by means of a relational preference model. These methods thus avoid recourse to distances and allow the use of quantitative and/or qualitative criteria; moreover, they avoid the problems encountered when data are expressed in different units.

Problem 1: To treat the problems which consist of searching for a surface best satisfying certain criteria, it suffices to apply an ordinal sorting (the procedure is presented in Section 5) to the set of actions belonging to an area on the map, with the number of categories equal to three: the low category A1 gathering the actions judged too bad, category A3 gathering the actions judged sufficiently good (the actions which define the required site), and category A2 containing the actions which can be classified neither in A1 nor in A3. This allows the decision maker, if he meets boundary constraints, to modify the limits of the searched site within the zone constituted by these average actions.

Problem 2: The decision maker can choose the various types of land use and then define, for each type, a set of prototypes. It then suffices to apply a nominal sorting (the procedure is presented in Section 5), which assigns each action to a type of land use. The elaborated model proposes that the actors involved are related to each other by negotiation relations. These negotiations can relate either directly to the proposals resulting from the multicriterion sorting, or to the subjective parameters stated during the action analysis. We can, for example, ask each actor to fix his own subjective parameters so as to obtain one proposition per actor. Given the spatial character of the problems concerned by this model, these propositions will generally take the shape of a map. Superposing the different maps established for each actor can thus contribute to the emergence of a consensus.

Suggested Model Use Procedure
A decision-making procedure consists in using a model to "reproduce" the decision maker's problems and preferences, while stressing the distance which separates the real problems from the simplified representation used for decision-making. Among the best-known decision models we cite Simon's model (Simon 1977), Pictet's model (Pictet 1996) and that of Tsoukias (Tsoukias 2004).

The multicriterion and complex nature of spatial problems makes the linear model of Simon and its extensions insufficient to answer the decisional complexity of these problems: they neglect three key elements of decision-making in a spatial context: participation, negotiation and consultation.



Territorial and urban decision-making processes, such as those of R. Laouar (Laouar 2005), F. Joerin (Joerin 1997) and S. Chakhar (Chakhar et al. 2005), produce conceptual frameworks integrating these elements. The suggested decision-making model (Figure 1) includes three principal phases covering spatial decisional problems as a whole:
• The model structuring: identifying the actors, identifying the criteria and identifying the actions;
• The model exploitation;
• The concretization of the results: the analytical part of the process, which is also the validation part.

VII. Case study

As an application example among the many treated, we propose the treatment of the problem concerning the selection of propitious sites for constructing buildings (Problem 1). The treatment of this problem, which consists in searching for a surface best satisfying certain criteria, requires applying an ordinal sorting with interacting criteria (a fuzzy measure) to the set of actions.

Delimitation of the area of study, geographical context

The study area is located in the canton of Vaud, approximately 15 km north of Lausanne. Its geographical limits in the Swiss coordinate system are 532.750–532.500 m and 158.000–164.000 m. The surface of the study area is 52.5 km². The choice of this area primarily results from the great number of spatial data available (Joerin 1997).

Identification and evaluation of the criteria

We have chosen the following criteria according to the availability of data and the particular characteristics of the zone under study:

Criteria | Type | Associated factors (sub-criteria) | Evaluation method
Harm | Natural | Air pollution, odours | Attribution of a note
Noise | Social | Motorways, railway | Attribution of a note
Constraints and natural risks | Natural | Underground water, natural constraints; sectoral plan: sites and landscapes to be protected; landslides, flood, seism, fire | Procedures of spatial analysis, consultation of the experts
Equipment | Economic | Distance to equipment (gas, electricity, water, access road); balanced distances for connections with the various … | Attribution of a note
Accessibility | Social | Distance to localities | Attribution of a note
Climate | Natural | Sun, fog, temperature, dampness | Attribution of a note

Table 1. Identification and evaluation of the territory adequacy criteria for the habitat

The user can choose among the treatments of various RP (regional planning) problems. Once the choice is made, a window displays the performance matrix (in this case, 650 actions × 7 criteria) as well as the various associated parameters (criteria weights, indifference threshold, preference threshold and veto threshold).


[Legend: actions (sites) — Category A1 (good), Category A2, Category A3 (bad)]
Figure 2. Window displaying the criteria-parameters table and the results

The results of the multicriterion analysis can be displayed in textual form or in graphic mode on the geographical map, as in Figure 2. In order to confirm the obtained results, a sensitivity analysis indicates and testifies to the global concordance of the chosen subjective parameters. In our application, the sensitivity analysis of the results is realized by varying the criteria weights and the indifference, preference and veto thresholds. This kind of proceeding is very expedient when there is discord between the concerned parties: it permits calculating the marginal effect on the final decision associated with a compromise on several criteria or on their weights.
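The sensitivity analysis just described can be sketched as follows; this is a simplified, hypothetical illustration (a plain concordance index and a pessimistic assignment, with invented data), not the system's actual implementation:

```python
# Hypothetical sketch of the sensitivity analysis: perturb the criteria
# weights and count how many actions change category under a simple
# concordance-based pessimistic assignment. All data are invented.

def concordance(a, b, weights):
    """Share of total weight supporting 'a is at least as good as b'."""
    return sum(w for ai, bi, w in zip(a, b, weights) if ai >= bi) / sum(weights)

def assign(a, profiles, weights, lam=0.6):
    """Assign a above the best profile it outranks with credibility lam."""
    for h in range(len(profiles) - 1, -1, -1):
        if concordance(a, profiles[h], weights) >= lam:
            return h + 1
    return 0

actions = [(3, 5, 2), (6, 6, 6), (8, 4, 7)]
profiles = [(4, 4, 4), (7, 7, 7)]        # two profiles delimit three categories
base = [assign(a, profiles, [1, 1, 1]) for a in actions]
perturbed = [assign(a, profiles, [1, 3, 1]) for a in actions]
changed = sum(b != p for b, p in zip(base, perturbed))
print(f"{changed} of {len(actions)} assignments changed")
```

Repeating this over a grid of weight and threshold variations gives the global picture of how robust the final assignments are to the subjective parameters.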

VIII. Conclusion

In this article, we have established a decisional model using a rigorous method that involved continuous challenges and modifications. This approach implies the development of a multidisciplinary strategy integrating a territory model and tools for multicriterion analysis. Our major contribution is the treatment of interacting criteria in the decisional process for regional planning (RP).
In order to test the effectiveness of our approach, a case study has been briefly introduced. Future work is devoted to verifying the applicability of the proposed methodology in other complex contexts. Our study also aims at improving the quality of the decisions brought to the process by integrating Multi-Objective Genetic Algorithms (MOGA).


Belacel N. (2000) Méthodes de classification multicritère: méthodologie et applications à l’aide au diagnostic

médical, Th. Doct. Univ. Libre de Bruxelles, 148 p.

Bouyssou D., Dubois D. (2003) Concepts et Méthodes pour l’Aide à Décision, Hermès.

Chakhar S., Mousseau V., Pusceddu C. et Roy B. (2005) Decision map for spatial decision making in urban

planning,CUPUM'05, London, UK.

Choquet G. (1953) Theory of capacities, Annales de l'Institut Fourier, 5, 131-295.

Eastman J.R., Toledaro J. (1994) Exploration in Geographic Information Systems Technology, Volume 4, GIS and Decision Making, Switzerland.

Grabisch M. (1997) k-order additive discrete fuzzy measures and their representation, Fuzzy Sets and Systems, 92, 167-189.



Hamdadou D., Ghalem R., Bouamrane K., Beldjilali B. (2006) Experimentation and optimization of sorting methods for design and implementation of a decisional model in regional planning, CSIT 2006, Amman, Jordan.

Hamdadou D., Labed K. (2006) Un Processus Décisionnel Par Utilisation Des SIG Et Des Méthodes

Multicritères Pour l’Aménagement Du Territoire : PRODUSMAT, MCSEAI’06, Agadir, pp. 671-676.

Hamdadou D., Labed K., Benyettou A. (2007) Un Système Interactif Multicritère D’Aide à la Décision en

Aménagement du Territoire: Approche du Tri, Intégrale de Choquet et SIG, SETIT 2007, March 25-29, 2007,


Joerin F. (1997) Décider sur le territoire: Proposition d’une approche par l’utilisation de SIG et de MMC, Th.

Doct, Ecol. Polytec. Feder. De Lausanne, no 1755, 268p.

Laouar R. (2005) Contribution pour l’aide à l’évaluation des projets de déplacements urbains, Th .Doct,

LAMIH, Valenciennes.

Marichal J.L. (2003) Determination of weights of interacting criteria from a reference set, European Journal of Operational Research, 124(3), 641-650.

Marichal J.L. (2003) Fuzzy measures and integrals in the MCDA sorting problematic, Th. Doct. Univ. Libre

de Bruxelles. 202p.

Pictet J. (1996) Dépasser l’évaluation environnementale, procédure d’étude d’insertion dans la région

globale, Presses Polytechniques et universitaires Romandes.

Roy B. (1981) The optimisation problem formulation: criticism and overstepping, The Journal of the

Operational Research Society, 32, No.6, pp.427-436.

Scharling A. (1997) Pratiquer ELECTRE et Prométhée, Lausanne, Presses polytechniques.

Simon H.A. (1977) The new science of management decision, Prentice-Hall, New Jersey.

Sugeno M. (1974) Theory of fuzzy integrals and its applications, Th.Doct. Institut de technologie de Tokyo,


Tsoukias A. (2004) From Decision theory to a Decision Aiding Methodology, Annales du LAMSADE,

CNRS,Université Paris Dauphine.




Symatop – a web-based platform providing a flexible and innovative tool for decision making and for human and process development


M. Rousselle 1,* , E. Charbonnel 2 …

1 AILE/ laboratoire 3IL (LR2I) Ester Technopole 87069 Limoges cedex

* Corresponding author: +33555358860

Abstract: This paper presents a web-based tool developed to face the complexity of human behaviour in situations of change, recruitment and market development. "Dominant Factors Analysis®" (DFA) is an exercise in the simulation of attitudes, intended to understand professional behaviour and to make it possible to act on company strategies, values and policies. DFA aims to simulate the preferences of clients, partners, employees and/or candidates in order to understand and anticipate deviations in a strategy or in professional behaviour.

Keywords: Flexibility, creativity, innovation, usefulness

I. Introduction

“In a few hundred years, when the history of our time is written from a long-term perspective, it is likely that the most important event those historians will see is not technology, not the internet, not e-commerce. It is an unprecedented change in the human condition. For the first time, they will have to manage themselves. And society is totally unprepared for it.” 1

In today's knowledge world, the purpose of the tool presented in this paper is to shed light on the behavioural aspect of a person or a group by focusing on the person's or the group's motivators, in order to accompany people and companies in the challenge "to manage themselves".
The methods used for this tool are based on a long tradition of research in behavioural and neural science, combined with the latest web techniques.

II. Who is Symatop?
Symatop is a French company that joined the AILE incubator (French Ministry of Research) in May 2005.
The company was founded in Limoges, in the centre of France, in July 2005 by Maggie Rousselle, Marc-Antoine de Sèze, Yves de Tonquedec, Serge Rébeillard and Cécile Kreweras (the latter two being the inventors of this methodology), Lionel Fleury and Thierry Charbonneau.

III. What does Symatop do?

Symatop has developed a multilanguage web-based platform dedicated to accelerating the decision-making process. It is a managerial tool that focuses on the evolution of people and processes, developed to face the complexity of human behaviour in situations of change, recruitment and market development.

1 Peter Drucker


The DFA is an exercise in the simulation of attitudes, aimed at understanding professional behaviour and making it possible to act on company strategies, values and policies.

If we try to give a definition, we might say:

DFA is a tool that enables in-depth diagnosis and evaluation of the preferences, choices and

opinions of a group of people who are concerned with a problem common to the group.

The tool gives the user total freedom to use it according to his or her specific needs. Examples of areas of usage are:

• Recruitment

• Personal development plans

• Integration processes

• Acquisitions, identifying know-how

• Detecting potential

• Team building (or “Team Binding”)

• Identifying company values

• Identifying gaps in strategy

• Coaching, individual or group

• Market analyses

• Customer satisfaction

• Personal satisfaction, etc.

The fundamentals of the tool come from the Kernel®, which was elaborated about 20 years ago. The Kernel is based on the work done over the last 50 years by scientists in behavioural and neural science 1 . The effects of the interactions between the three brains (cortex, limbic and reptilian) have been elucidated. The DFA is based on those works and draws numerous insights from them.

Symatop holds the rights to the Kernel 2 in Europe and North America and has developed a web platform in order to be able to use the tools remotely. Symatop uses this tool to innovate from a solid mathematical base.

New concepts are created depending on the needs, values and strategies of the clients and

integrated in the Kernel concept.

Certified consultants, coaches and companies use and adapt the tool according to their clients. The creativity of this tool makes it unique on the market.
Symatop works with a number of universities around the world in order to spread new aspects and uses of the tool, based on semantic and mathematical approaches that are not yet completely worked out.

Paris Dauphine, laboratory LAMSADE

Laboratory 3IL


1 Henri LABORIT, Antonio DAMASIO, Lucien ISRAEL, etc.

2 “Kernel ®” , “ DFA” have been created and developed by Serge Rébeillard and Cécile Kreweras,

associates in Symatop.



University of Limoges

University of Luxembourg

University of Belgique

University of Colombia

IV. How does the “Dominant Factors Analysis®” work?

The DFA is based on 30–60 cards with adapted phrases (statements). The phrases can be based on company strategy (e.g. "In order to become the largest company in our sector we need to invest in direct marketing") or on professional behaviour ("Check information by going back to its source whenever possible"). These cards are then placed, according to the indicators, on a chart of 100 …

The indicators can be adapted according to the needs. Below are some examples:

I consider it very important | It is important | It is less important | It is optional
It gives me energy to do this | It takes some energy to do this | It takes quite a lot of energy to do this | It takes very much energy to do this
This is very urgent | It is urgent | It can wait | It doesn't have to be …

Figure 1. Some examples of indicators

The exercise is made in four steps:
I. A general, spontaneous selection into the four main preferences.
II. A regrouped selection within each preference into 5 new selections. Here the purpose is to search for professional experiences and mental images.
III. A last selection, fine-tuning the final priority within the 20 potential groups in order to create a total prioritisation.
IV. A choice of action points among the phrases. This last step puts the person into action, and choices are made that will later be part of the personal action plan.
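In a purely illustrative way (the ranking scheme below is our assumption, not the patented DFA algorithm; the card texts are taken from the examples above), steps I–III can be seen as giving each card a triple of ranks whose ordering yields the total prioritisation:

```python
# Illustrative only: each card gets a triple of ranks from the three
# selection steps (main preference, sub-group, fine-tuning); sorting the
# triples lexicographically yields the total prioritisation.

cards = {
    "set clear targets": (1, 1, 1),
    "invest in direct marketing": (1, 1, 2),
    "check information at its source": (1, 2, 1),
    "promote personal initiatives": (2, 1, 1),
}

prioritisation = sorted(cards, key=cards.get)
print(prioritisation)
```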

Each card is thus selected into a hierarchy, and this prioritisation gives a unique overview of each person's preferences. The answers are processed in a statistical and mathematical program. The Kernel results are then presented along four main areas:
I. Hemispheric dominance – right and left brain
II. Universes – Sensitive and Intelligible
III. Territories – Vision (synthesis, openness, change), Affect (sensitivity, openness to others, group), Reason (analysis, objectivity, evaluation), Control (achievement, reliability, conformity)
IV. Main types – conception, proactivity, mastering of the environment, management of …


Figure 2. The four main graphs giving a unique picture of the professional behaviour of the person

The results are given through a personal coaching session and a 30-page document with personal action points. The feedback can be given remotely, over the phone and by internet, which makes the tool fully usable at a distance.
When the DFA is used to validate a company strategy or analyse customer satisfaction, the results are presented in a general graph with axes defined by the client.
Action plans are then elaborated through team coaching or by the executive team, depending on the situation and the wishes of the client.



10 – set clear targets in order for everyone to know the objectives.
5 – communicate company strategy in a clear and precise way.
7 – promote personal initiatives in order to drive the company forward.
30 – promote internal promotion for all.

Figure 3. Some examples of customised graphs for the DFA (statements positioned along two axes: "Importance" and "Is in place / To be put in place")

V. Additional facts about the DFA

1- Let us recall that in most cases opinions or reactions differ less in themselves than in the importance given to them. It is on the basis of this observation that the DFA has been developed.
The DFA can therefore be used in the numerous cases where the individual (and often differing) opinions of the members of a group must be taken into account before a final decision is made.
This makes it a tool particularly useful in situations where change is being implemented, or where companies face difficult times during a merger or the integration with another company or division. And, of course, all sorts of opinion studies, whether internal or external to the company, can be carried out with the DFA, and social phenomena occurring in small or large groups can be analysed.

2- The DFA enables the people concerned by a particular problem or situation to classify all the relating factors (often more than 40) by order of importance: since, within the framework of any given situation, reactions or opinions can differ widely, it is important that they all be taken into account.
The ability to carry this out constitutes a major innovation.

3- All analyses are made with a specific computer program. The resulting analyses, based on proven statistical methods, are presented and read in an original fashion.
Because of this originality (an innovative system for processing the basic information), the DFA has been awarded an international patent (USA and Europe).

4- Among the many results which can be obtained, the following should be borne in mind:
• the preferences, attitudes, values, etc. which are predominant in a given situation are clearly identified and their relative importance measured;
• converging and diverging factors within the framework studied are precisely identified;
• the themes analysed and brought to light can clarify and give in-depth explanations of the phenomena observed;
• with this approach it is possible to determine behavioural typologies, when they exist (useful in a merger process);
• the level of involvement of the management team can be measured.

VI. Case Study of Top Profile and "Dominant Factors Analysis®" (DFA)
In 2006 a North American production company made an acquisition on the European market. The merger was urgent, and all the actors therefore had 3 months to complete the fusion of the two companies.
The prime focus was on finding synergies as quickly as possible while taking into consideration each individual and their potential as a group.

The plan was based on 6 steps:



Figure 4. The six-step plan for the integration process (individual phase, sharing phase, building new values, putting values into coherence, analysis of the results, action plan).
This process is defined in cooperation with the CEO of the new company.

As an introduction, the executive teams from the two companies meet and get the plan and the

purpose of the process.

1- The individual phase.
In order to establish the strength of the group, we start by looking at each individual. This is done with a Top Profile Manager. The exercise is done via the web, with personal assistance by phone. The results are shared with each individual during a one-hour feedback and coaching session; at the end of this session, adherence to the results is obtained and a personal action plan is established. Each individual decides whether they wish to share the results with the others in the group. The objective of this phase was to create an awareness of the strengths and working points of each individual.

2- The sharing phase.
During a group coaching session, the results of the team members are presented theme by theme. The strengths of the team members are highlighted so that everyone knows where each specific competence can be found. The weaknesses of the group profile were discussed in order to be able to anticipate future problems in professional behaviours.
The purpose of this phase was to create an awareness of the strengths, the working points and the synergies of the group.

Figure 5. Presentation of the vision of the group

3- Building new values and searching for synergies.
The next step was to create new values as a foundation for the new company. This was done during a brainstorming session where each member was asked: "What is important in order for this fusion to succeed?". Once all the ideas and points of view were written down, a final selection of the 40 most important statements was made. The purpose of this phase was to determine the basis of the "Dominant Factors Analysis®".


4- Putting values into coherence.
The 40 statements are integrated into the "Dominant Factors Analysis®", and the members of the executive team and 15 employees prioritise the new values via the web-based tool.

5- Analysis of the results from the Dominant Factors Analysis and the Top Profile.
The results of this exercise gave the actual situation and showed which action items were viewed as easy to put in place and which as more difficult. These results were then analysed against the Top Profile results in order to set up an action plan based on the actors most suitable to make it a success. In this stage the analysis is made only by the top management and the consultant, in order to then roll the results out to the executive team.

6- Action plan.
A two-day group coaching session was held with the executive team, where the action plan was defined, the timelines set and the communication plan established. In this process the roles within the executive team were redefined and a five-year vision plan elaborated.

Follow-up. During the first month a weekly conference call was organised to make sure that the immediate action plan was followed. After that, a monthly conference call was organised, with the possibility for each executive member to have a personal coach accompanying the changes.
A follow-up is planned on the anniversary of the acquisition.

VII. Conclusion

Symatop is positioned at the crossroads of several fields, such as behavioural science, statistics and semantics. These three topics are integrated in a very subtle way, which guarantees the richness and reliability of the analyses. The DFA eliminates most of the subjectivity which exists in classic qualitative methods: it acts as a "mirror" in complex situations and objectifies the results.
This explains why the involvement of the participants during the study, and their adherence to the results presented, are very high.
On the whole, the DFA gives a realistic description of the available choices: the establishment of a hierarchical structure that faithfully illustrates the complexity of individual preferences.


Kernel strict sensu:

Roger SPERRY : Brain Evolution - The Origins of Social and Cognitive Behaviors, Journal of Children in

Contemporary Society, Vol. 16: 1-2, 1983.)

Paul MacLEAN : « Les trois cerveaux de l’homme » (1990 – Robert Laffont)

Robert ORNSTEIN : « Evolution of Consciousness », (1991 - Prentice Hall Press)

Henry MINTZBERG : Harvard Business Review, juin 1976, « Organiser à gauche, diriger à droite »,

« Structure et dynamique des organisations (1982 – Editions d’organisation), « le pouvoir dans les

organisations » (1986 – Editions d’organisation)

Ned HERMANN : « Les Dominances Cérébrales et la Créativité » (1988 Brain Books pour la version

américaine ; 1992 – RETZ pour la traduction française).



Antonio R.DAMASIO : « L’erreur de Descartes – la raison des émotions » (1995 - Odile Jacob)

Henri LABORIT : « Eloge de la fuite » (1985 – Gallimard), « La nouvelle grille » (1979 – Robert Laffont),

« Biologie et structure » (1980 – Gallimard) "Du Soleil à l'Homme – l’Organisation énergétique des

Structures vivantes" (1963 - Masson) Lucien ISRAEL « Cerveau droit – Cerveau gauche » – Collection

Culture et Civilisation (PLON 1995).

Semantic and behavioural references:

Mainly used to establish contents of Kernel and “Analyse of Dominance” ® (DFA)

Alfred KORZYBSKI « Science and Sanity, an introduction to non-aristotelian systems and general

semantics » (1933 – The International Non-Aristotelian Library publishing Company) « Manhood of

Humanity» (1950 –Country Life Press corporation).

Gregory BATESON «vers une Ecologie de l’esprit » (1977 – 2 tomes –Le seuil)

Paul WATZLAWICK « Changements. Paradoxes et Psychothérapie » (1992 - Editions du Seuil)

Carl ROGERS « le développement de la personne » (1968 - Bordas)Berne Eric. Des jeux et des hommes,

Éditions Stock, 1988 (Games People Play - The Psychology of Human Relationships, 1964).

(What Do You Say After You Say Hello?, 1971). Que dites-vous après avoir dit bonjour?, Éditions Tchou,


Raymond Boudon, La place du désordre, PUF 1984

Mathematical and statistical references:

mainly used for the “Analyse of Dominance” ® (DFA)

Kreweras G., Les décisions collectives. Paru dans « Mathématiques et Sciences Humaines », n° 2, Printemps


Defays, D. Relations floues et analyse hiérarchique des questionnaires. Mathématiques et Sciences

Humaines, 55 (1976), p. 45-60, Hierarchical analysis of preferences and generalizations of transitivity,

« Analyse hiérarchique des préférences et généralisations de la transitivité », Mathématiques et sciences

humaines, n° 61, Printemps 1978

Casin, Ph.; Turlot, J. C. Une présentation de l'analyse canonique généralisée dans l'espace des individus.

Revue de Statistique Appliquée, 34 no. 3 (1986), p. 65-75

Sidney Siegel & N.John Castellan Jr. Nonparametric statistics for the behavioral sciences McGraw-Hill 1989

G.Langouet & J.C. Porlier, Pratiques statistiques en Sciences Humaines et Sociales, ESF éditeur 1985.



A Knowledge Management Approach to Support Learning and Education of

Newcomers in Wide Organizations


F. Sartori 1,* , S. Bandini 1 , F. Petraglia 1 ,

P. Mereghetti 1 , L. Wickell 2 , J. Svensson 3

1 Complex Systems and Artificial Intelligence Research Center (CSAI), University of Milano-

Bicocca, Milan, Italy

2 Department of Product Support Systems, Volvo Parts, Gothenburg, Sweden

3 Volvo Technology France – Renault Trucks SAS, St. Priest, France

* Corresponding author: +39 02 64487857

Abstract: Knowledge Management has traditionally been considered a problem of acquiring, representing and using information and knowledge about problem solving methods. However, the complexity reached by organizations over recent years has deeply changed the role of Knowledge Management. Today, it is not possible to deal with the knowledge involved in decision making processes without taking into account the social context where it is produced. This point has direct implications for learning processes and the education of newcomers: a decision making process to solve a problem is composed not only of a sequence of actions (i.e. the know-how aspect of knowledge), but also of a number of social interconnections between the people involved in their implementation (i.e. the social nature of knowledge). Thus, Knowledge Management should provide organizations with new tools to consider both these aspects in the development of systems that support newcomers in learning their new jobs. This paper investigates how this is possible through the integration of storytelling and case-based reasoning methodologies. The result is a conceptual and computational framework that can be profitably exploited to build effective computational systems for the training of newcomers in wide organizations, according to a learning-by-doing strategy.

Keywords: Learning by Doing, Storytelling, Case Based Reasoning

I. Introduction

Storytelling is a short narration through which an individual describes an experience on a specific theme. In this way, the narrator is motivated to focus attention on his/her own knowledge about the specific theme that is the subject of the narration (Bruner, 1991). Within organizations, storytelling can be considered an effective way to treasure the knowledge produced by daily working activities. For example, Kleiner and Roth (1997) have analyzed how the adoption of storytelling allows an organization to be more conscious of its overall knowledge, to share knowledge among all the people involved in its generation, and to treasure and disseminate new knowledge originated by the sharing of different stories.

The adoption of storytelling can promote the development of new professional contexts where

different professionals collaborate to solve common problems, share experiences, explicit and

implicit assumptions and understandings in order to improve the global capability of the

organization to transform, create and distribute knowledge.

In this sense, Knowledge Management can profitably exploit storytelling as a way to make individual experiences, skills and competencies explicit, promote negotiation processes through dialogues among the people involved, support the reification of new knowledge in order to make it available for the future, and help newcomers learn about their jobs through the analysis of the problem-solving strategies and social context represented by the stories.


In this paper, we present a conceptual and computational framework for supporting continuous training within wide organizations in the learning by doing (Wenger, 1998) context. This approach is based on the integration of the storytelling and case-based reasoning (Kolodner, 1993) methodologies: the former allows a decision making process to be managed as a story that describes the problem characteristics, the communications among the people involved, and the problem solution strategies that can be applied; the latter is a very useful and efficient means of comparing stories (i.e. cases), finding solutions to new problems by reusing past experiences.

The next section is devoted to making clear how learning by doing, storytelling and case based reasoning can be put together. First, a brief introduction to learning by doing and the historical/methodological motivations for adopting it as a good paradigm for supporting continuous learning in organizations is given. Then, its relationship with storytelling and case based reasoning is explored in detail, to show how storytelling is the theoretical bridge between the need to support learning by doing through computer-based tools and one of the computer science paradigms most suitable for this scope.

In section 3, an application of the framework to the SMMART (System for Mobile Maintenance Accessible in Real Time) project is briefly introduced, to show its effectiveness in representing the problem solving strategies of experts in the form of stories that can be archived as cases into a case base and used as pieces of experience to build newcomers' training systems, according to the learning by doing approach. In particular, the domain of the SMMART project is the troubleshooting of truck (thanks to the collaboration with Volvo Trucks) and helicopter (thanks to the collaboration with Turbomeca) engines; thus the stories involved concern the experience owned by expert mechanics, and the system is devoted to supporting newcomers to truck and helicopter manufacturers' after-sales departments.

Finally, conclusions and future work will be briefly pointed out.

II. Learning by Doing, Storytelling and Case Based Reasoning

The Report to UNESCO by the International Commission on Education (Delors, 1996) emphasises the role of learning in the new millennium. Learning in this sense means a resource, a possibility for every human being to realise himself or herself, and not only compulsory education, training, or the acquisition of competencies, expertise, abilities or skills. Today, learning throughout life is a necessity for individuals to participate in the knowledge society and economy; it is a fundamental strategy. Adult education asserts itself as a new human right: it finds its foundation and its aim in acknowledging the value of every person, who must be guaranteed the opportunity to express himself or herself properly throughout the life span.

This perspective of education as a lifelong process overturns the time and modality of learning: all people can be protagonists of their own lives, choices and paths. In this sense, the main purpose of educational processes is the promotion of the person's integral health in all the contexts in which he/she lives: in the family, at work, in the local community. A democratic development of every country is possible only if education is a right, not a privilege, during the whole life span (Alberici, 1998). All people should have the opportunity to know their own capabilities and to exploit them as well as possible. It is necessary to enable adults to become actors of their own development throughout life (Lengrand, 1970), resources for their own life project and for the community.

The contemporary socio-cultural context supports the idea of knowledge acquisition and management not only as the development of organisation, policy and methods of knowledge diffusion, but also as a community benefit. Starting from these considerations, we reflect on the concept of continuous learning within organizations and how to support it. In particular, we focus our attention on the learning by doing paradigm.



Learning by doing is based on well known psycho-pedagogical theories, like cognitivism and behaviourism, which point out the role of practice in humans' intellectual growth and knowledge improvement. In particular, this kind of learning methodology rejects the typical idea that concepts are more fundamental than experience and, consequently, that only a solid set of theoretical notions makes it possible to accomplish a given task in a complete and correct way. The learning by doing methodology states that the learning process is the result of a continuous interaction between theory and practice, between experimental periods and moments of theoretical elaboration.

Learning by doing can be articulated into four distinct steps (Figure 1), where practical phases (i.e. Concrete Experience and Experimentation) alternate with theoretical ones (i.e. Observation and Reflection, and Creation of Abstract Concepts): some kind of experience originates a mental activity that aims to understand the phenomenon; this step ends when a relation between the experience and its results (typically a cause-effect relation) is discovered that can be generalized to a category of experiences similar to the observed phenomenon. The result is a learned lesson that is applicable to new situations which may occur in the future.

Figure 1. The four steps in learning by doing

In our framework, a concrete experience can be represented by a story describing a decision making process about a problem to be solved. This story should give a newcomer an idea of how a critical situation could be tackled, according to the knowledge owned by experts. Moreover, it could give indications about who could help him/her in case of need.

Stories can be archived as cases according to the case-based reasoning (CBR) paradigm. Case based reasoning is an Artificial Intelligence method for designing knowledge management systems, based on the principle that “similar problems have similar solutions”. For this reason, a case based system does not require a complete and consistent knowledge model to work, since its effectiveness in finding a good problem solving strategy typically depends on how a problem is described. Thus, CBR is particularly suitable when the domains to tackle are characterized by episodic knowledge, and it has been widely used in the past to build decision support systems in domains like finance (Bonissone and Cheetham, 1997), weather forecasting (Hansen and Riordan, 2001), traffic control (Gomide and Nakamiti, 1996), chemical product design and manufacturing (Bandini et al., 2004), and so on.

A case, as shown in Figure 2, is a complete representation of a complex problem and is generally made of three components: description, solution and outcome (Kolodner, 1993). The main aim of CBR is to find solutions to new problems through their comparison with similar problems solved in the past, as shown in Figure 3, which represents the well known 4R's cycle by Aamodt and Plaza (1994): the comparison is made according to a retrieval algorithm working on the problem features specified in the description component. When an old problem similar to the current one is retrieved, its solution is reused as a solving method for the new problem. The solution can then be revised in order to fit the new problem description completely, and finally retained in the case base to become a sort of new lesson learned. In the retained case, the outcome component gives an evaluation of the effectiveness of the proposed solution in solving the problem. In this way, new cases (i.e. stories) can be continuously created and stored to be used in the future, building up a memory of all experiences that can be used as a newcomer training tool.

Figure 2. Case structure
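The three-component case structure shown in Figure 2 could be sketched as a small Python record. The field names and the example story below are illustrative assumptions, not the actual SMMART data model:

```python
from dataclasses import dataclass

@dataclass
class Case:
    """A case in the Kolodner (1993) sense: description, solution, outcome."""
    description: dict   # problem features, e.g. symptoms, fault codes, context
    solution: str       # proposed faulty component or troubleshooting method
    outcome: str = "untested"   # how well the proposed solution actually worked

# A hypothetical troubleshooting story stored as a case:
story = Case(
    description={"symptoms": ["cruise control fails uphill"],
                 "fault_codes": ["MID128-FMI3"]},
    solution="inspect engine speed sensor wiring",
    outcome="problem solved",
)
```

The outcome field defaults to "untested" so that a newly described problem can enter the case base before its solution has been evaluated.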

Starting from concrete experiences, newcomers can learn the decision making processes adopted within the organization they are entering more quickly than by studying manuals or attending courses. Moreover, the comparison between their own problem solving strategy and the organization's one, represented by the collection of stories, stimulates the generalization of problems and consequently reflection on general problem solving methods, possibly reducing the time needed to make newcomers able to find effective solutions.

Figure 3. The 4R's cycle for CBR system development
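The Retrieve-Reuse-Revise-Retain cycle of Figure 3 could be sketched as follows; the dict-based case representation and the `retrieve` and `revise` callables are assumptions for illustration, not the authors' implementation:

```python
def cbr_cycle(new_description, case_base, retrieve, revise):
    """One pass of the 4R's cycle: Retrieve, Reuse, Revise, Retain.
    Cases are plain dicts with 'description', 'solution' and 'outcome'
    keys; 'retrieve' and 'revise' are supplied by the application."""
    best = retrieve(new_description, case_base)             # Retrieve a similar past case
    candidate = best["solution"]                            # Reuse its solution as a start
    solution, outcome = revise(new_description, candidate)  # Revise it to fit the new problem
    case_base.append({"description": new_description,       # Retain the new lesson learned
                      "solution": solution,
                      "outcome": outcome})
    return solution
```

A trivial `retrieve` could return the first case; in a SMMART-like setting it would be a similarity ranking such as the k-NN retrieval described in section III.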

CBR is one of the Artificial Intelligence methods most suitable for dealing with learning by doing (Petraglia and Sartori, 2005), due to the close match between their life cycles. In particular:



• The description of a new case can be a way to represent experimentation in new situations, since the aim of CBR is to solve a new problem by exploiting old solutions to similar problems. Thus, a new case is an attempt to apply past experiences to a new concrete situation in order to validate a problem solving strategy, just as experimentation in new situations is, in the learning by doing context, a way to test the generation of abstract concepts starting from already validated concrete experiences;

• A retrieved case in the case base represents a concrete experience in the learning by doing context;

• Retrieval, reuse and revision are the CBR phases during which a solution to a new problem is found by comparison with similar past problems, reused, and then adapted to fit completely the critical situation defined by the problem description. Thus, they can be exploited to model the theoretical steps of the learning by doing methodology (i.e. Observation/Reflection and Creation of Abstract Concepts), through which a newcomer finds a general way to tackle a problem starting from a set of existing examples;

• Finally, the retained case in the CBR paradigm is the completion of the initial problem with the optimal solution obtained at the end of the CBR cycle; thus it represents a new instance of the initial experimentation in new situations.

Moreover, since the concept of story can be used to describe both a case in the CBR paradigm

and a concrete experience in the learning by doing methodology, in our opinion, storytelling is the

optimal connection between a case-based support to the development of training systems for

newcomers and the learning by doing context.

III. A Case Study: the SMMART Project

SMMART (System for Mobile Maintenance Accessible in Real Time) is a research project funded by the European Community 1 that aims to develop a decision support system for supporting the experts of Volvo Trucks 2 , a world leader in the manufacturing of truck engines, and Turbomeca 3 , a world leader in the production of helicopter engines, in troubleshooting engine problems. To this aim, a case-based reasoning module of the final system is being designed and implemented in order to detect the most probable faulty engine component on the basis of a given set of information, which can be archived as a story. In what follows, due to lack of space, we consider only the truck troubleshooting problem, but a similar approach could be used to describe the helicopter one too.

1 Project n° NMP2-CT-2005-016726





Figure 4. A typical story about a truck troubleshooting session.

The narration (see Figure 4) of the problem starts when a driver recognizes that a problem has arisen in his/her truck engine. For example, a light on the control panel turns on or some unpredictable event happens (e.g. smoke from the engine, oil loss, noises during braking, and so on). Thus, the driver contacts the truck after-sales assistance to obtain a solution. The mechanic who receives the truck is responsible for making a detailed analysis of it, taking into account the driver's impressions, testing it and collecting information coming from the on-board computers. Then, he/she has to find the fault, repair it and verify that the problem has been solved before the truck leaves the workshop.

The problem analysis made by the mechanic (see Figure 5) considers two main categories of information: symptoms and fault codes. Symptoms give qualitative descriptions of truck problems and their context. For example, the sentence “The truck cruise control fails to maintain set speed while driving uphill at -20°C under heavy charge” specifies that a possible fault of the cruise control (i.e. the symptom) is detected when the road is not level, the temperature is very low, and the truck is transporting a big load (i.e. the context). The same problem might not be detected under different conditions. Fault codes are quantitative information coming from the on-board computer: when some event happens that possibly causes malfunctions, a fault code is generated and memorized to be used during troubleshooting sessions. A fault code is characterized by many fields, the most important of which is the FMI (Failure Mode Identifier), which identifies the category of the fault (electrical, mechanical, and so on). The main activity of the mechanic during the truck analysis is the correlation between symptoms and fault codes: in this way, it is possible to identify the faulty component, repair it and verify whether the problem has been solved by checking whether the fault codes disappear when the truck is turned on.
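The correlation of symptoms with fault codes through engine components could be sketched as follows. Apart from MID 128 denoting the truck engine, which the text mentions, the second table entry, the dict shapes and the function name are hypothetical:

```python
# Sketch of linking fault codes to symptoms through engine components.
# Only "MID 128 -> engine" comes from the SMMART material; 136 is a
# made-up example entry.
MID_TO_COMPONENT = {128: "engine", 136: "brakes"}

def correlate(symptom_components, fault_codes):
    """Group each fault code under the symptom whose engine component
    it concerns, using the MID field as the bridge."""
    result = {}
    for symptom, component in symptom_components.items():
        result[symptom] = [fc for fc in fault_codes
                           if MID_TO_COMPONENT.get(fc["mid"]) == component]
    return result
```

Codes whose MID maps to no declared symptom component are simply left unassigned, mirroring the mechanic's focus on codes relevant to the reported symptoms.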



Figure 5. A sketch of the problem analysis made by a mechanic in the SMMART context

The CBR module of the SMMART project has been designed to give suggestions about the most probable faulty components of the truck engine according to a given combination of symptoms and fault codes. To this aim, a story about a troubleshooting session is represented as a case made of two parts:

• Case description, containing all the information necessary to characterize the truck problem, namely the symptoms, the fault codes, the symptoms' context and general information about the truck (e.g. the truck model, the type of on-board computer, and so on);

• Case solution, containing the most probable faulty components of the truck according to the given problem description. This component could be a root cause, that is, the real and atomic source of the fault that should be replaced or repaired (e.g. an electric cable), or something more general that should be further investigated in order to find the root cause (e.g. the cooling fan). In the second case, the solution also gives indications about the most useful troubleshooting method (typically, fault trees or MBR).

When a new story is generated that represents the current problem (i.e. a problem without a solution), it is represented as a case and properly described in terms of symptoms, fault codes and context. Then, it is compared with cases already solved in the past in order to find similar story descriptions: the solution of the most similar story is then reused as a starting point for deriving the solution to the current problem, suggesting in this way the most probable root cause or the best method to identify it. The comparison between stories is done according to a retrieval algorithm based on the K-Nearest Neighbour approach (Finnie and Sun, 2002).

The algorithm can be divided into the following steps:

• Receive the description of the new troubleshooting session, called Ccd, and that of a problem solved in the past, called Cpd. According to Figure 5, descriptions are composed of symptoms and fault codes;

• Build the set of symptoms shared between Ccd and Cpd: this set is made of the symptoms that appear both in the description of the current problem to be solved and in that of the past problem. For each symptom in the set, increase the similarity value.

  o For each symptom in the set previously defined, compare fault codes. In the SMMART context, it is possible to build a categorization of fault codes according to the engine component they are related to, exploiting the information they contain: for example, fault codes with MID value 128 are related to the truck engine. Thus, it has been possible to link fault codes to symptoms, building a sort of hierarchical structure that exploits truck engine components as a “bridge” between symptoms and fault codes:

    - Build the set of fault codes shared between Ccd and Cpd: this set is made of the fault codes that appear in the symptom descriptions of both the current problem and the past problem;

    - For each fault code in the set, increase the similarity value;

  o For each symptom not in the set, decrease the similarity value.

• Repeat steps 1 and 2 for each case Cpd in the case base.

• Rank all the past cases according to their similarity degree and reuse the solution of the most similar one for the new problem.
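The scoring sketched in these steps could be implemented along the following lines, under some assumptions not stated in the paper: set-based descriptions, unit increments/decrements, and a plain symptom-to-fault-code map standing in for the hierarchical structure:

```python
def similarity(current, past, symptom_fault_codes):
    """Score two case descriptions following the steps above:
    +1 per shared symptom, +1 per shared fault code linked to a shared
    symptom, -1 per symptom of the current case missing from the past
    one. Unit weights and set-based descriptions are assumptions."""
    score = 0
    shared = current["symptoms"] & past["symptoms"]
    for symptom in shared:
        score += 1
        linked = symptom_fault_codes.get(symptom, set())
        score += len(current["fault_codes"] & past["fault_codes"] & linked)
    score -= len(current["symptoms"] - shared)
    return score

def retrieve_most_similar(current, case_base, symptom_fault_codes):
    """Rank all past cases by similarity and return the best match."""
    return max(case_base,
               key=lambda c: similarity(current, c["description"],
                                        symptom_fault_codes))
```

As in the paper's algorithm, the solution of the returned case would then be reused as the starting point for solving the current problem.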

From the learning by doing point of view, the case base composed of all the stories about past troubleshooting sessions is a very important source of knowledge for newcomers: they could be asked to solve a problem specified by its symptoms and related fault codes. They could then try to identify the faulty components and compare their solution with the one proposed by the system, obtaining an immediate evaluation of their own capability to learn expert mechanics' decision making processes and an identification of the points they have to work on, perhaps by asking directly the people who solved the past problems. In this way, the experience and knowledge created by the organization over the years and captured by the CBR system could be used as a very important training method, alternative to the more traditional ones.

IV. Conclusions

This paper has presented a framework to support learning by doing within wide organizations; this framework is based on the integration of the storytelling and case based reasoning methodologies.


Storytelling has been chosen due to its capability of taking care of different kinds of knowledge in

the description of working experiences and presenting important pieces of expertise to

newcomers in wide organizations; according to Atkinson (1998): “Storytelling is a fundamental

form of human communication [...] We often think in story form, speak in story form, and bring

meaning to our lives through story. Storytelling, in most common everyday form, is giving a

narrative account of an event, an experience, or any other happening [...] It is this basic

knowledge of an event that allows and inspires us to tell about it. What generally happens when

we tell a story from our life is that we increase our working knowledge of ourselves because we

discover deeper meaning in our lives through the process of reflecting and putting the events,

experience, and feelings that we have lived into oral expression.”

On the other hand, case based reasoning is one of the most suitable Artificial Intelligence

paradigms to deal with episodic and heterogeneous knowledge and consequently, in our opinion,

it is probably the best approach to manage unstructured narrations about expertise and problem

solving strategies.

The proposed framework provides newcomers with a complete representation of the competencies developed by experts over the years. Thus, they can increase their experience of the problem solving strategies used inside the organization, as well as their understanding of who the people to contact in case of need are (i.e. the experts who solved similar problems in the past).

In order to test the effectiveness of our approach, its application in the context of the SMMART project has been briefly introduced. Future work is devoted to verifying the applicability of the proposed methodology in building supporting systems for learning by doing in other complex domains.

References



Aamodt, A. and Plaza, E. (1994) Case-Based Reasoning: Foundational Issues, Methodological Variations, and System Approaches. AI Communications, Vol. 7, No. 1, pp. 39-59.

Alberici, A. (1998) Towards the Learning Society: an Italian perspective. In: Holford, J., Jarvis, P., Colin, G. (eds.), International Perspectives On Lifelong Learning. London: Kogan Page Press.

Atkinson, R. (1998) The Life Story Interview. Sage University Papers Series on Qualitative Research Methods, Vol. 44, SAGE Publications, Thousand Oaks, CA.

Bandini, S., Colombo, E., Sartori, F., Vizzari, G. (2004) Case Based Reasoning and Production Process Design: the Case of P-Truck Curing. In: ECCBR Proceedings, Volume 3155 of Lecture Notes in Computer Science, Springer-Verlag, pp. 504-517.

Bonissone, P. P., Cheetham, W. (1997) Financial Application of Fuzzy Case-Based Reasoning to Residential Property Valuation. Proceedings of the 6th IEEE International Conference on Fuzzy Systems, Vol. 1, pp. 37-44.

Delors, J., et al. (1996) Learning: the Treasure Within. Report to UNESCO of the International Commission on Education for the Twenty-first Century. Paris: UNESCO Press.

Finnie, G., Sun, Z. (2002) Similarity and Metrics in Case-Based Reasoning. Intelligent Systems, 17(3), pp. 273-285.

Gomide, F., Nakamiti, G. (1996) Fuzzy Sets in Distributed Traffic Control. 5th IEEE International Conference on Fuzzy Systems (FUZZ-IEEE 96), New Orleans, LA, USA, pp. 1617-1623.

Hansen, B. K., Riordan, D. (2001) Weather Prediction Using Case-Based Reasoning and Fuzzy Set Theory. Workshop on Soft Computing in Case-Based Reasoning, 4th International Conference on Case-Based Reasoning (ICCBR01), Vancouver.

Kleiner, A. and Roth, G. (1997) How to Make Experience Your Company's Best Teacher. Harvard Business Review, Vol. 75, No. 5, p. 172.

Kolodner, J. (1993) Case-Based Reasoning. Morgan Kaufmann, San Mateo, CA.

Lengrand, P. (1970) Introduction à l'éducation permanente. Paris: UNESCO Press.

Petraglia, F. and Sartori, F. (2005) Exploiting Artificial Intelligence Methodologies to Support Learning by Doing Within Organisations. In: G. Hawke and P. Hager (eds.), Proceedings of RWL405 - The 4th Int. Conference on Researching Work and Learning, Sydney, December 2005, ISBN 1920754970.

Wenger, E. (1998) Communities of Practice: Learning, Meaning and Identity. Cambridge University Press, Cambridge.


About the transferability of behavioural skills

M. Saumonneau 1,* , I. Franchisteguy-Couloume 1 , V. Lartigue 1

1 Laboratoire Graphos, Estia, Technopole Izarbel, 64 210 Bidart, France

* Corresponding author: (33) 5 59 43 85 44

Abstract: The strong instability of the economic, social and political environments requires reconsidering the organization of work. Employees are expected to be able to move between several companies and functions, in brief to develop their own adaptability and flexibility. This phenomenon concerns the main socio-professional groups, such as young people, seniors, low-qualified people and executives. All of them have to think about their future in agreement with companies, trade unions... In our research study, we focus on people with “low levels of qualification”. In organizations, the “model of competence” makes it possible to understand a member of the organization not only through his or her workstation but also according to his or her own skills. We consider competence as it is generally defined in the management literature, that is to say, according to three components: knowledge, know-how and knowing how to be (also called interpersonal skills, relational skills, social skills and so on). In this article, we call these skills behavioural skills. Within the framework of this article, we would like to study the transferability of knowledge, focussing on the transferability of behavioural skills.

Keywords: behavioural skills, methods of learning, transfer of knowledge

I. Introduction

Over the last few years, the French labour market has gone through paradoxical evolutions. On the one hand, a lack of workforce is noted in many branches of industry (craft industry, public buildings, the hotel business, for example). On the other hand, some other branches are affected by the effects of internationalisation, leading to delocalization and thousands of redundancies. For these people, who often knew just one job in these factories, who were not very mobile because of their family circumstances and who were usually low-qualified, it becomes difficult to find employment in a new context.

At the same time, temporary employment agencies require large numbers of people of low and average qualification who are able to adapt quickly to multiple work situations.

How can we imagine that there is no convergence between resources and needs? How can we explain that temporary employment agencies cannot fill the offers they have, whereas more and more people in the labour market are without a job?

Some brief replies to this paradoxical phenomenon can probably be found in the concept of employability.


II. Link between employability and transmission of skills?

Employability (Le Grand Robert) is the "capacity to acquire and maintain the skills necessary to find and keep a job". This definition seems interesting as it indicates the individual's capacity both to remain in a position and to find another job. Each person acts for his or her own employability. However, employability has been widened to an organisational dimension (Finot, 2000). So employability is considered as the necessary skills, and the necessary human resources management, to allow the employee to find a new job at any time. As Le Boterf (2005) stresses, employees have to secure their jobs and to adapt themselves to the evolution of their own jobs. This underscores the necessary involvement of the company, while introducing the concept of competence.

So, developing employability among employees amounts to setting up an adequate management of skills in a company. This would make it possible to integrate the various components of competence, including technical know-how, behaviours, attitudes and knowing how to be 1 (also called interpersonal skills, relational skills, social skills…) (Bellier, 1999). Competence is the result of three components: the knowledge (head), know-how (hand) and knowing how to be (heart) of the person (quoted by Durand; Pestalozzi, 1797).

Let us take the example of a heavy-goods-vehicle driver. He will be able to practise his trade only on condition of mastering several skills, such as for example:

- being able to interpret and give sense to the traffic signs present on the road (knowledge);

- being able to operate a heavy-goods vehicle under difficult conditions such as bad weather (know-how);

- being able to react to unforeseen events on his delivery round without systematically referring to his superior, showing autonomy (knowing how to be).

However, the reference research works in the field focus on the management, transfer and capitalization of knowledge and know-how. On the other hand, few works have approached the issues of the management, transfer and capitalization of knowing how to be, also called behavioural skills. Therefore, the question of the identification and acquisition of behavioural skills remains open.

The following figure aims to illustrate that a whole area remains open to research in the field of the transfer of skills.

Figure 1. Knowledge management zones of polarization according to the skills of each person

We can wonder about the process of the identification and acquisition of knowing how to be. If we take again the example of the delivery driver given above, it can be noted that his capacity to react autonomously to unforeseen events is a paramount competence in the daily exercise of his trade. This importance is often stressed in the speech of the people in charge of recruitment. For all that, there is little reflection on the conditions and procedures of transfer of this type of competence, whether in academic work or in the daily practice of companies.

1 “Savoir-être” in French









III. About behavioural skills acquisition

The acquisition of behavioural skills automatically refers to the concept of “learning”. Enhancing the learning of behavioural skills implies identifying the components of behavioural skills and the learning situations. These two stages help in understanding the modalities of skills acquisition.

1. Cartography of behavioural skills

A research study carried out in partnership with a European temporary employment group made it possible to draw up a cartography of twenty-seven behavioural skills (Saumonneau, Lartigue, 2006).


The objective of this study was to propose a framework for identifying behavioural skills and their methods of acquisition. Given the complexity of the context, we chose a qualitative methodology. Semi-structured individual interviews were carried out using structured interview guides, allowing an analysis in three steps:

- a first phase of interviews with twenty people of low and average qualification, fifteen company managers and four temporary-work specialists, aimed at identifying behavioural skills and specifying the elements present in their discourse about the methods of transfer and acquisition;

- a second phase of semi-structured interviews with eight experts of the field, which made it possible to validate the first results by confirming or qualifying them;

- from all the interviews, which were recorded, transcribed and coded, a collection of “behavioural skills” cards was established, and cards describing the “methods of transfer and acquisition of key behavioural competences” were written.


The analysis of the various interviews made it possible to identify a selection of key behavioural skills, which we grouped in the form of a cartography. Seven methods of learning these skills were then retained. In figure 2, the skills, structured into eight groups, are presented.












Figure 2. Cartography of behavioural skills

As an example, we can define one group of skills. The “adaptability” group brings together behavioural skills that develop an individual’s aptitude to modify his cognitive structure or behaviour in order to respond harmoniously to new working conditions, a new environment or new situations. The competences versatility, flexibility, reactivity, availability, autonomy and capacity of initiative belong to this group.

We also noted proximity, and even strong interdependence, between some of these twenty-seven skills. Stronger links could be noted between certain skills, whether they belong to the same group or to different groups.

We then built, for each skill retained, a “skill card” following the same model, made up of four distinct parts:

• the definition of the skill,

• the types and situations of work associated with the skill,

• the methods of acquisition of the skill,

• the links between the profile of the person (built from five preset axes) and the skill.
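The four-part card lends itself to a simple record structure. As a minimal sketch (the class and field names are our own illustration, not taken from the study), a card could be modelled as:

```python
from dataclasses import dataclass, field

@dataclass
class SkillCard:
    """One record per retained behavioural skill, mirroring the four parts of the card."""
    name: str
    definition: str                                          # part 1: definition of the skill
    work_situations: list = field(default_factory=list)      # part 2: associated work situations
    acquisition_methods: list = field(default_factory=list)  # part 3: methods of acquisition
    profile_links: dict = field(default_factory=dict)        # part 4: links to the five preset profile axes

# Hypothetical card for the "autonomy" skill (illustrative values, not from the study)
autonomy = SkillCard(
    name="autonomy",
    definition="Capacity to act and decide without systematically referring to a superior",
    work_situations=["individual work"],
    acquisition_methods=["on-the-job training", "role play"],
)
```

A uniform record of this kind is what allows the cards to be compared and regrouped into a cartography.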

Seven principal methods of acquisition of behavioural skills were identified: companionship, tutoring, simulation, the case study, placement in situation, role play and on-the-job training. We can then ask whether it is possible to link behavioural skills to the learning of these skills.











(Among the other skills appearing in the cartography of figure 2: capacity of concentration, capacity to communicate, openness to criticism, sense of observation, capacity to cooperate, respect of hygiene and respect of discipline.)

2. Behavioural skills and learning

The possibilities of acquiring behavioural skills through relevant training situations must now be considered. Individual learning can be defined as the process by which a person acquires new skills. This learning is carried out in various ways: through experience, training or information. Understanding how this learning takes place then appears essential to supporting the transfer of behavioural skills.

According to Grundstein and Rosenthal-Sabroux (2001), skills are a mixture of knowledge, capacities to act and goal-oriented behaviours in any given situation. The definition therefore comprehends the capacity to gather knowledge and to put it into action in a context. Similarly, during the research with the European temporary employment group, we noted that behavioural skills and work situations are closely intertwined. Indeed, some skills are particularly essential in certain work situations (for example, empathy is more essential in customer-facing work).

In a learning situation, the three components knowledge, know-how and knowing how to be cannot be completely dissociated. They are in permanent interaction and cannot exist independently. Indeed, knowing how to be is useless if it is not mobilized within a background where knowledge permits comprehension of the stakes and strategies and authorizes a process of actions (know-how). Reciprocally, knowledge remains useless if it is not associated with know-how and knowing how to be.

Within the framework of the study evoked previously, eight generic work situations were defined: individual work, precision work, output work, repetitive work, customer-facing work, work in relation to third parties, work in a hard environment and seasonal work. Clearly marked links then appear between work situations and the behavioural skills most mobilized in them. As an example, the study showed that individual work mobilizes rather the capacity of initiative, reactivity and versatility, whereas precision work mobilizes rather the capacity of concentration and sense of observation. Each behavioural skill could thus be identified as particularly mobilized in quite precise work situations. It then appeared necessary to identify methods of acquisition of these behavioural competences.
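Such situation-to-skill links amount to a simple lookup. As an illustrative sketch, only the two pairings reported above come from the study; the mapping structure itself is our own:

```python
# Lookup of generic work situations to the behavioural skills they mobilize.
# Only the two pairings reported by the study are filled in; the remaining
# six generic situations would be completed the same way.
SITUATION_SKILLS = {
    "individual work": ["capacity of initiative", "reactivity", "versatility"],
    "precision work": ["capacity of concentration", "sense of observation"],
}

def skills_for(situation):
    """Return the behavioural skills mobilized by a given work situation."""
    return SITUATION_SKILLS.get(situation, [])
```

Querying an unmapped situation (e.g. `skills_for("seasonal work")`) simply returns an empty list until the corresponding findings are encoded.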

Several studies focus on the process of transfer of skills (Szulanski, 1996; Galbraith, 1990). It is defined by Argote et al. (2000) in the following way: “Knowledge transfer in organizations manifests itself through changes in the knowledge or performance of the recipient unit”.

Initially analyzed in the intra-organisational context in terms of knowledge creation (Nonaka, 1994) or productivity development (Epple et al., 1996), the transfer of skills was then studied within the framework of knowledge sharing between organizations (Powell et al., 1996; Simonin, 1997,…). This transfer of knowledge has not, however, been studied within the framework of the transfer of behavioural skills inside the organization.

Analyzed from an economic outlook, this work presents the process of transfer of competences as a linear process composed of five phases, allowing a transfer of knowledge between an identified source and a receiver (individuals, groups or organizations) (Berthon, 2005).





Figure 3. Transfer of skill in an economic context




These five phases (initialization, adaptation, deployment, acceptance, appropriation) can probably be adapted to the transfer of behavioural skills. Indeed, during the study, we perceived that the acquisition of the behavioural skills generally mobilized by people of low and average qualification is carried out through quite precise methods of acquisition: companionship, tutoring, simulation, the case study, placement in situation, role play and on-the-job training.
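The strictly linear character of Berthon's process can be made explicit with a small sketch; the function is our own illustration, not part of Berthon's model:

```python
# The five phases of Berthon's (2005) linear transfer process, in order.
TRANSFER_PHASES = ("initialization", "adaptation", "deployment", "acceptance", "appropriation")

def next_phase(current):
    """Return the phase that follows `current`, or None once appropriation is reached."""
    i = TRANSFER_PHASES.index(current)
    return TRANSFER_PHASES[i + 1] if i + 1 < len(TRANSFER_PHASES) else None
```

The point of the sketch is that the process admits exactly one successor per phase: a transfer cannot, in this model, skip from initialization to acceptance.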

A complementary research project, whose methodology still remains to be defined, could then be considered. Crossing the concepts of work situations and precise methods of acquisition, it would aim at modelling the context inherent in the transfer of behavioural skills. Once this identification work is carried out, it would then be interesting to clarify this training context in order to support its realization within enterprises interested in developing the transfer of behavioural skills by people of low and average qualifications.

To support the acquisition of behavioural skills, people should become aware of their importance. In the same way, each individual should be able to detect this importance and, finally, to establish links and build connections between situations.

IV. Conclusion

A new field of research opens: the process of learning behavioural skills is still little studied. Behavioural skills are the result of training, of professional and personal experience, of each individual’s personality and of the strong interdependence existing between these elements. Just as Nonaka and Takeuchi (1997) worked on the identification of knowledge movement by specifying the cycle of knowledge conversion (tacit-explicit), we wish, following this model, to go deeper into the detection of the situations and elements supporting the acquisition of behavioural skills.

It will nevertheless be advisable to remain careful concerning the stage of identifying behavioural skills. Indeed, these competences must be clearly distinguished from the personality characteristics of the person (Bellier, 1999). However, these two concepts are sometimes confused in management practices, in discourse and in the tools dedicated to the evaluation of skills.

The stake for companies is serious. Argyris and Schon (1978) underline that “individual learning builds up the learning of organisational skills, which in turn improve individual learning”. The person is thus at the core of organizational learning. The knowledge of the members of the organisation modifies and enriches the learning of the organisation. Comprehension of the learning of behavioural skills opens prospects in terms of knowledge creation and, ultimately, in terms of innovation.


References

Argote L., Ingram P., Levine J., Moreland R. (2000), Knowledge transfer in organizations: learning from the experience of others, Organizational Behaviour and Human Decision Processes, vol. 82, n°1, p. 1-8.

Argyris C., Schon D.A. (1978), Organizational Learning, Reading, Addison-Wesley.

Bellier S. (1999), Le savoir-être dans l’entreprise, Vuibert, Paris.

Berthon B. (2005), « Une vision intégrative du transfert de connaissance sous l’angle de la théorie de l’activité », XIVème Conférence Internationale de Management Stratégique, Angers.

Durand T. (2000), L’alchimie de la compétence, Revue française de gestion, n° 127.

Epple D., Argote L., Murphy K. (1996), An empirical investigation of the micro structure of knowledge acquisition and transfer through learning by doing, Operations Research, vol. 44, p. 77-86, cited by Prevot F. (2005).

Finot A. (2000), Développer l’employabilité, Insep Consulting Editions, Paris, p. 101.

Galbraith C. (1990), « Transferring core manufacturing technologies in high tech firms », California Management Review, 32(4), p. 56-70, cited by Berthon B. (2005).

Grundstein M., Rosenthal-Sabroux C. (2001), Vers un système d’information source de connaissances, chapitre 11, in Ingénierie des systèmes d’information, Cauvet & Rosenthal-Sabroux (coord.), Hermès, Paris.

Le Boterf G. (1994), De la compétence : essai sur un attracteur étrange, Les Editions d’Organisation.

Le Boterf G. (2000), Construire les compétences individuelles et collectives, Les Editions d’Organisation.

Le Boterf G. (2005), « Construire les compétences individuelles et collectives : les réponses à 90 questions », 3ème édition, col. Livres outils, Ed. Organisation.

Le Grand Robert de la langue française, Volume 2.

Nonaka I. (1994), A dynamic theory of organizational knowledge creation, Organization Science, vol. 5, n°1.

Nonaka I., Takeuchi H. (1997), La connaissance créatrice : la dynamique de l’entreprise apprenante, De Boeck Université.

Powell W., Koput K., Smith-Doerr L. (1996), « Interorganizational collaboration and the locus of innovation: networks of learning in biotechnology », Administrative Science Quarterly, vol. 41, p. 116-145, cited by Prevot F. (2005).

Prevot F. (2005), « Le transfert inter-organisationnel de connaissances par les multinationales vers leurs fournisseurs locaux : une typologie des pratiques des firmes américaines au Brésil », XIVème Conférence Internationale de Management Stratégique, Angers.

Simonin B. (1997), “The importance of collaborative know-how: an empirical test of the learning organization”, Academy of Management Journal, vol. 40, n°5, p. 1150-1174.

Szulanski G. (1996), « Exploring internal stickiness: impediments to the transfer of best practice within the firm », Strategic Management Journal, 17: 27-43, cited by Berthon B. (2005).



A Framework for the Potential Role of Information Specialists as Change

Agents in Performance Management


G. Roushan, G. Manville

Bournemouth University, UK

Abstract: This paper aims to explore the changing role of the Information Specialist (ISp) in the implementation of business performance improvement through business process re-engineering (BPR) initiatives. The paper begins by examining the evolution of BPR and then discusses the changing role of the ISp. Technology-enabled Performance Management (PM) and its strategic implications are key to measuring the effectiveness of BPR, and the role of the ISp is a vital part of this. Through a literature review and case-based empirical evidence, a conceptual framework has been developed to appraise the role of the ISp.

Keywords: Performance Management, Business Process Reengineering, Information Specialists,

Information Systems

I. Introduction

BPR can be defined as the “fundamental rethinking and radical redesign of business processes to achieve an improvement in critical, contemporary measures of performance, such as cost, quality, service or speed” (Hammer & Champy, 1993). Slack et al (2004) also refer to it as breakthrough or innovation-based improvement, which is invariably described as technology orientated. BPR was very popular in the early 1990s, during a climate of recession and downsizing, as an opportunity to streamline processes and cut cost. A study of over 100 reengineering projects by Hall et al (1993) found that the failure rate was about two thirds. Al-Mashari et al (2001) concede that BPR has lost favour, but their research concluded that most organisations, knowingly or not, are involved in BPR and that the success rate is more favourable, at around fifty-five percent. Perhaps the reason for many of the failures lies in the mechanistic interpretation of BPR by the key theorists (Irani et al 2000). As a consequence, many BPR proponents engaged in a period of soul searching and embraced the emerging technology of Enterprise Resource Planning (ERP) software as a vehicle for implementing BPR (Hayes et al 2005). This paper explores the evolving role of the ISp in performance improvement initiatives such as BPR. It considers ISps to be individuals employed to provide professional expertise in delivering solutions to corporate information needs and to help monitor organisational performance.

Literature Review

In such an environment the IT/IS department will be required to continually supply new deliverables, including wider information provision through the adoption of appropriate software and hardware, and systems maintenance and upgrades. In particular, the IT/IS department is usually expected to resolve how the problems of legacy systems will be overcome. These can restrict BPR projects because of a lack of connectivity between functionally designed systems and their data models, but as they usually represent years of development, legacy systems often cannot be replaced as easily as Hammer’s “Don’t Automate, Obliterate” rhetoric might suggest (Earl & Khan 1994). Similarly, Love (2004) insists on a need for improved IS evaluation due to the complex nature of IS/IT, together with the “uncertainty and unpredictability associated with its benefits”.

The role of IT in the successful performance improvement of BPR initiatives can be crucial to the organisation’s performance. Neely (1999) regards IT’s role as imperative in performance measurement development. Furthermore, Bititci et al (2002) conducted research on web-enabled performance measurement systems and concluded that, if properly implemented, such systems would promote a proactive management style and greater confidence in management decisions. This is supported by Beretta (2002), who advocates the adoption of performance measurement as a tool for effective decision making. However, he does provide a note of caution that ERP systems can be drastically limited by their functionally orientated implementation, i.e. the existing system is simply automated. This goes against the grain of Hammer and Champy (1993), who argue “don’t automate – obliterate”. The relative effectiveness of the ISp within organisations can follow a continuum similar to the four-stage operations model developed by Hayes and Wheelwright (1984). This model originally applied to the operations function, charting the function’s contribution to organisational effectiveness from an essentially reactionary role to redefining the industry’s expectations.

Innovative IT solutions coupled with the growth of the internet have resulted in the creation of new business models (Timmers, 1999). IT impacts on organizations in three ways: automating existing business processes; outsourcing and vertical integration opportunities; and the creation of new business models that engage the customer. Neely (1999) believes IT to be a key driver behind performance measurement development, which can facilitate data collection, analysis and presentation. Garengo et al (2005) add that new technologies help to reduce the costs of implementing a performance measurement system, making it accessible to small and medium enterprises (SMEs). Markovic and Vukovic (2006) put forward a five-step plan which inextricably links future strategy development and subsequent performance management with IT. The emerging opportunities from IT-based technology have led to organizations transforming their relationships with other organizations within the value network (Johnson et al 2005). Edwards et al (1995) state that a change in IS/IT management attitudes is needed if IT/IS is to truly integrate with the business. Also, Kaplan and Norton (2004) suggest that value is created through internal business processes, and stress “the availability of information systems and knowledge applications and infrastructure required to support the strategy”.

This has led to compelling arguments for a board-room presence for the IT specialist. The term Chief Information Officer (CIO) was coined by Gruber (1986, cited in Hayes et al 2005) to co-ordinate IT strategy across functions. Since that time, new opportunities emerging from ERP software and the internet provide further justification for a CIO to co-ordinate activities inside the organization and within the value network (Hayes et al 2005). This is echoed by Busi and Bititci (2006), who argued that with advances in information and communication technology (ICT) there is huge potential for managing the information flow from suppliers to customers. They refer to this as “collaborative performance management”, where partners can seamlessly collaborate on fully interoperable technologies. However, they do acknowledge that this is an aspiration and that there are gaps in the literature in this regard.

II. Research background

A review of existing literature in the area of BPR and Information Management reveals a lack of consensus amongst researchers concerning the appropriate role for ISps during and after BPR. Opinion is divided as to whether IS professionals should reactively support BPR or whether IT/IS developments should be driving these initiatives. A questionnaire-based descriptive survey with 60 respondents was used as the first stage of primary data gathering. This was followed by follow-up interviews with 20 of the participating organisations to gather further information on their experiences. The final stage of data collection consisted of further in-depth interviews with four case study companies to provide an even richer picture of their experiences.



III. Summary of Findings

The questionnaire responses indicated that the role of the ISp prior to performance improvement programmes was that of a support function managing the IT requirements of the organisation, whereas during BPR there was a need for ISps to gain a greater understanding of the information requirements of the organisation and its new processes. The ISp was to be involved at the start of the BPR programme, whilst not leading or owning it. The follow-up interviews pointed to the possibility of the ‘hybrid’ ISp, a professional who is business-aware and IT-literate, and in some cases acts as a catalyst for future change. This supports Gruber’s argument for the role of a CIO (1986, cited in Hayes et al 2005).

The case studies confirmed that, prior to performance improvement initiatives, the role of the ISp in the case study organisations was a technical support function. This matured during BPR into a role that was key in helping to identify processes for redesign, and in helping to redesign them with the capabilities of IT in mind. The case studies further indicated that, subsequent to BPR, organisations perceive the need for a much more business-driven role for the ISp, adding value to the organisation and increasing the benefits of process redesign. Results clearly indicate that the success of BPR initiatives depends on effective performance management and knowledge sharing aligned with the corporate strategy.

In some cases ISps have been very much involved in change teams, liaising with other business professionals to drive requirements and to set expectations. A model has been created to illustrate how organisations considering change programmes might adopt best practice and successfully develop the ISp role, on the basis of the experience of the organisations involved in this research.

Company A1
Nature of business: express delivery service.
Aim of BPR: integration of customer service, information management and integration.
Role of ISp before BPR: responsible for disparate back-office systems.
Role of ISp during BPR: becoming an element of the change initiative, where cultural change is given prominence over technological change.
Role of ISp after BPR: focused on business improvement through business systems.
Future role of ISp: due to the increasing number of workers requiring remote access to enable them to work in a flexible and effective manner, the future ISp will have to ensure support for such working.

Company B1
Aim of BPR: to streamline and integrate disparate business units; reduce cost by shedding headcount.
Role of ISp before BPR: no policy and standards; users expecting the ISp to support these disparate systems.
Role of ISp during BPR: helping to identify where processes could be simplified; introducing centralised systems.
Role of ISp after BPR: disparate business units enabled to access and process information using …
Future role of ISp: formation of an Information Systems (IS) steering committee involving key stakeholders from other …

Company C1
Nature of business: financial services.
Aim of BPR: to improve performance in terms of processing speed and throughput; desire to cut costs dramatically, as this type of business is very cost competitive.
Role of ISp before BPR: short-term technology …
Role of ISp during BPR: effective systems being a differentiator in respect of service quality and …
Role of ISp after BPR: sharing and helping in … improvement solutions using continuous improvement.
Future role of ISp: business support.

Company D1
Nature of business: global blue chip.
Aim of BPR: continue the … programmes already in place by undertaking business re-engineering as opposed to just process re-engineering.
Role of ISp before BPR: support existing functional strategies; support role for business in change …
Role of ISp during BPR: help with streamlining …; part of the highly “structured” change …
Role of ISp after BPR: role of IS increasingly focused on the …
Future role of ISp: must pay attention to the needs of the internal process ‘customers’; development of knowledge sharing …

IV. Conclusion

The specific aim of the paper was to investigate the role of the ISp as a change agent in business improvement initiatives such as BPR, and to test the proposition that the role of the ISp in BPR initiatives and the resultant process-oriented organisations differs from that of the traditional IT/IS technical specialist. IT-enabled performance management and its strategic implications are key to measuring the effectiveness of BPR, and the role of the ISp is a vital part of this. In particular, evidence has been sought to test the theory that in process-oriented organisations ISps play a wider, more proactive and more business-oriented role than previously.

The research has provided detailed empirical investigations into the actual experiences of organisations that have undertaken BPR as a performance improvement initiative. It also suggests a reference framework which companies might use in considering their future use of ISps. In addition, post-BPR, the ISp’s role as a business-aware and IT-literate ‘hybrid’ emerged as a strong theme in the research. Future ISps need to cater for the more complex information requirements of cross-functional and extra-organisational processes. The organisations surveyed postulated that the ISp role will develop still further, suggesting the ISp will become a catalyst for change, using IT to add more value to a more customer-focused business. It was also suggested that increasingly mobile workforces and dependency on outsourced operations or the services of ISPs would enable the organisation to focus on its core business. IT/IS and Performance Management initiatives should be aligned with the implementation of corporate strategy and appropriate IT-enabled performance metrics.

Whatever the future ISp is called, the role will be the same: to facilitate performance improvement through IT, and hence an understanding of the key and fundamental needs of the business is increasingly paramount. Measuring this added value will be complex and will again place new demands on the ISp. These authors suggest that ISps will increasingly need to understand and communicate the added value to be gained from the deployment of IT.

Change programmes will be business-led, and increasingly supported by a ‘hybrid’ professional,

who is technology- and IS-aware, whilst also understanding the needs and expectations of the

business. In addition to supporting change programmes such as BPR, those organisations

surveyed suggested that the ISp will in fact become a ‘catalyst’ for change, using IT to add value

to the business. The role will be more ‘customer-driven’. In order to fulfil this role, performance

measurement needs to be at the heart of the ISp role. Some organisations believe this will also

include the needs of the new mobile work force, and involvement in outsourcing programmes.

The ISp has been found to be an essential participant in BPR projects. The organisation must be made aware of the capabilities of technology as an enabler of new process designs, and it is essential that an understanding of current IT is represented within the BPR team. In order to judge the effectiveness of the BPR implementation, appropriate IT-enabled performance metrics need to be developed which can facilitate effective data collection, analysis and presentation. This new role requires the ISp to be more aligned with the business and to become far more customer-focused. It encompasses not just the primary activities of an organisation, but the support activities as well. Information management across all functions has been shown by this research to be a key deliverable of the IS infrastructure during and after BPR, as the traditionally isolated and insular processes within organisations become cross-functional and open. Information sharing is essential. The role of the ISp has thus evolved to encapsulate the business needs of the organisation and to become that of a change agent, enabling this new way of working with the dual focus of information technology and the needs of the business.

It is now appropriate to consider the impact of these findings in two ways. Firstly, the extent to which they are consistent with, or contradict, previously published work is of interest, especially to business academics. Secondly, the relevance of the findings to practitioners in the future recruitment and deployment of ISps is a matter worthy of comment.

Figure 1. ISp Strategic Engagement Matrix (adapted from Hayes & Wheelwright, 1984). The matrix charts the ISp role against IS value, from implementer of strategy (internally neutral), through supporter of strategy (internally supportive), to driving the strategy (redefining industry expectations).

This evolution of the ISp follows the path from a reactive, internally neutral approach to a proactive role which underpins the organisation’s competitive advantage. Our model (Figure 1) highlights the ISp focus and the corresponding organisational value. The model shows that the traditional role of the ISp is shifting from a functionally based role focused on the implementation of strategy to a strategic role which is not only organisation-wide but can link outside the organisation to other organisations within the supply chain or value network. The relevance to practitioners is that it demonstrates the importance of the ISp in influencing and driving strategies which involve process reorientation. However, without effective performance management, the effectiveness of the change and the satisfaction of strategic goals will be difficult to appraise.

In the matrix, the departmental-level ISp focus is operational: back office, system support, functional focus, technology-led, support of disparate systems, separate from the main business. The organisational level covers information requirements, information sharing, integration, process redesign, streamlining, process definition from the customer viewpoint, workshops, education and project management. The extra-organisational level extends to a corporate view, strategic focus, an IS steering committee, consultancy, innovative use of IT, process innovation and information management.






References

Al-Mashari, M., Irani, Z., Zairi, M. (2001), "Business process reengineering: a survey of international experience", Business Process Management Journal, vol. 7, no. 5, pp. 437-455.

Beretta, S. (2002)

Bititci, U.S., Nudurupati, S.S., Turner, T.J., Creighton, S. (2002), "Web enabled performance measurement systems - Management implications", International Journal of Operations & Production Management, vol. 22, no. 11.

Busi, M. and Bititci, U.S. (2006), "Collaborative Performance Management: Present gaps and future research", International Journal of Productivity and Performance Management, vol. 55, no. 1, pp. 7-25.

Earl, M. and Khan, B. (1994), "How New is Business Process Re-design?", European Management Journal, vol. 12, no. 1, March.

Garengo, P., Biazzo, S. and Bititci, U.S. (2005), "Performance Measurement Systems in SMEs: A Review for a Research Agenda", International Journal of Management Reviews, vol. 7, no. 1, pp. 25-47.

Hall, G., Rosenthal, J., Wade, J. (1993), "How to make reengineering really work", Harvard Business Review, no. 6, pp. 119-131.

Hammer, M. and Champy, J. (1993), Reengineering the Corporation - A Manifesto for Business Revolution, Nicholas Brealey Publishing.

Hayes, R., Pisano, G., Upton, D., Wheelwright, S. (2005), Pursuing the Competitive Edge: Operations, Strategy, Technology, Wiley: New Jersey.

Hayes, R.H. and Wheelwright, S.C. (1984), Restoring our Competitive Edge, Wiley: New York.

Irani, Z., Hlupic, V., Baldwin, L.P. and Love, P.E.D. (2000), "Reengineering manufacturing processes through simulation modelling", Journal of Logistics and Information Management, vol. 13, no. 1, pp. 7-13.

Kaplan, R.S. and Norton, D.P. (2004), "The Strategy Map: Guide to Aligning Intangible Assets", Strategy & Leadership, vol. 32, no. 5, pp. 10-17.

Love, P., Ghoneim, G. and Irani, Z. (2004), "Information Technology Evaluation: Classifying indirect costs using the structured case method", Journal of Enterprise Information Management, vol. 17, no. 4, pp. 312-325.

Markovic, N. and Vukovic, M. (2006), Restoring Performance Measurement and IT - Do we need a new IS development paradigm, Centre for Business Performance, Cranfield School of Management.

Neely, A. (1999), "The performance measurement revolution: why now and what next?", International Journal of Operations and Production Management, vol. 20, no. 10, pp. 205-228.

Slack, N., Chambers, S., Johnston, R. (2004), Operations Management, 4th ed., FT Prentice Hall.

Timmers, P. (1999), Electronic Commerce: Strategies and Models for business-to-business trading, Wiley: New York.



Creating Cultural Change with Employees Transferring Through TUPE


J. Roddy 1,*

1 Answers Consulting, Middlesbrough, England

* Corresponding author: +44.1642.722151

Abstract: Frequently when employees transfer from one employer to another due to a business purchase,

sale or award of a contract, the new employer needs to change the behaviour of the employees to generate

the success and returns for the business envisaged. This change of behaviour can be difficult to create. In

this paper, the author describes some of the reasons for the difficulties experienced, and introduces an

eight-step process, which played a significant role in generating commitment in two different organisations

with measured changes in organisational behaviour.

Although analytical methods were used to identify issues and track changes, the changes themselves were

born of the trust generated through the process, and of the bonding that occurred within the management

team as difficulties were overcome and progress made. This paper shares the value of approaching such

change through a combination of traditional project management and non-traditional emotional engagement.
Keywords: Culture, change, behaviour, process, trust

I. Introduction

As the business climate has changed over the last 20 years, there has been a substantial shift

from companies and organisations remaining largely intact, to such organisations continually

changing their shape and structure to try to find the best ‘fit’ for the business going forward.

As this realignment occurs, employees are moved from one employer to another, and sometimes,

within a particularly difficult or fluid market, this process can take place several times before an

owner finally decides to stay with the business, or to close it down. Although in the UK, the

employees’ terms and conditions are protected under the Transfer of Undertakings (Protection of

Employment) Regulations (TUPE), most employees will be aware that part of the sale and

restructuring may require cost savings through for example integrating the business and creating

economies of scale, or downsizing. Whatever the reality, at the point of transfer, emotions are

high and the new owner needs to decide how and when to meet the challenges facing the

organisation. This paper discusses some of the key issues, which have been observed by the

author over many such projects, and describes a methodology that has been shown to be highly

effective in two very different environments.

II. Background issues in major organisational transfers

Before the sale

Often in an acquisition, the incoming management assume that there has been a common

approach to management within the organisation, and that the culture has been established over

many years of activity. However, where an organisation has been failing, or is being prepared for

sale, new managers, and sometimes a new management team, may have been brought in 12-18

months prior to the acquisition to make changes. These new managers bring with them a different

way of doing things, and often value different characteristics amongst staff. This, in turn, can

create an undercurrent of uncertainty among staff as they try to decide where they stand with the new

style of management compared to the old.


In some cases, the management team may not yet be a team. In one project the author was

involved with, the senior management team had been put together less than 6 months prior to the

sale, and were still getting to know one another at the time the business was sold. The new

leader found it strange that there were so many different and sometimes contradictory messages

coming back from direct reports. In addition, some managers did not transfer to the new

employer, and a new team was forming as gaps were filled with new team members.

Frequently, in these situations, up to the point of transfer, the senior team have been paying great

attention to the sale of the business, and have delegated a significant amount of their day-to-day

work. Often, one side effect of this delegation is that small micro-cultures have grown up, all

focussed on delivering the task in hand and with the freedom to decide how that delivery should

take place. In this climate, it is often the stronger leaders who have taken charge, and a coercive

or authoritative style of management has developed.

After the sale

With the arrival of the new leadership of the business, employees often wonder what the impact

on their day to day lives will be, often preferring to hear that it will be business as usual, but truly

hoping that there will be changes and more clarity on their roles and what they are to deliver. As

the new leadership team now starts to focus on the business again, some teams will be relieved

and hand back work quickly, whilst others will resist this as they have enjoyed the freedom and autonomy that came with it.

Often, the incoming management team feels that, having seen a lot of money spent in acquiring

the business, the employees will be grateful and receptive to the new direction and strategy

proposed, ready to get behind it and drive forward for success. Usually quite a few employees are

in this category. Finding those people early in the process, particularly those who are opinion

formers in the company, and working with them, is a key factor in building success.

However, there will be other employees with reservations. They will wait to see what happens, to

see who will be the winners and losers in restructuring they are sure is about to happen. Some

will be acutely aware that whilst they chose to join the previous company, they did not choose to

join this one. Some will just not agree with the new company’s stated objectives, or see nothing

wrong with the current way of doing things. So whilst there is usually a common interest in

continuity of employment, there will be many different perspectives on what that means.

It is important to note that whilst people are generally on edge at this point, the work will still be

done, as they want to show the new owners that things are working, and will also want to share

how things could be improved from their perspective. The key point is that in an organisation

having all of these different perspectives, much energy is being lost each day in the processing

and discussion of these differences, with many decisions being made on a case by case basis.

Longer term, both productivity and morale suffer.

In such an organisation, people notice differences throughout the organisation and may put it

down to differing leadership styles. There is little, if any, recognition that they have each

experienced, and responded to, the changes in the organisation in their own way. In fact, people

respond to the climate at work based not only on their feelings on that day, but also on messages

and feedback that have been presented to them over the course of their careers. With so many

perceived uncertainties and different perspectives, it is difficult to manage the transition and the

substantive change required whilst holding on to the recent and more distant past.



III. Determining the starting point

In the two successful change programmes referred to in this paper, the principle of creating a

shared vision of the future was applied. The principle here is simply that trying to change the

perspective of each individual is a difficult and time-consuming process. It is much easier to

create a new perspective that people can play a part in creating, and own. This ownership also

creates a positive vision of where both the organisation and the employees want to go, creating

momentum in this common direction.

In the two cases referred to, a questionnaire was used to determine how people perceived the

working environment around them and then to identify the environment that they would like to

work in. The questionnaire was based on the work by Blake and Mouton (1964), but used as a

tool to determine the employees’ experience and desire, rather than its more conventional use as

a tool for leadership assessment.

Everyone in the organisations participated in the survey. The data was maintained as completely

confidential, and the results were presented in anonymous graphical format, by workgroup,

department and company, and shared with everyone in the company. Individual results were fed

back on a one to one basis, via the survey administrator, who could also talk through the meaning

of these results with the individual.

The results of the survey were very similar in both cases, despite one organisation being

predominantly male, and the other predominantly female, one based in manufacturing, the other

service based. An example of the data derived from the survey is shown below in Figure 1.

Figure 1. Typical initial results from survey

The darker (blue) points in Figure 1 show that employees' perceptions of the environment they work in are very different indeed, even between those who had been colleagues for years. The

distance between the points, as well as the scatter within three different quadrants, indicates that

there is no common view of the organisational climate. In general, the closer the point lies to the

task focus axis, the more frustrated or unhappy the employee tends to be.

The lighter (orange) points in Figure 1 show that the employees had a much more common view

of the environment they wanted to work in, with the points clustered more closely together and in

a single quadrant. This outcome was common for both leaders and employees, and focused on

task whilst allowing people the space and headroom to take decisions.
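The pattern described above can be made concrete with a small numeric sketch. The code below is a hypothetical illustration, not the study's actual analysis: each respondent is scored on two Blake-and-Mouton-style axes (concern for people, concern for task) for both the experienced and the desired environment, and the mean distance of points from their centroid serves as a simple measure of how shared, or scattered, a set of views is. All names and values are invented.

```python
from statistics import mean

# Hypothetical survey records: each respondent rates the working environment
# on two axes (people focus, task focus), scored 1-9, once for the
# environment as experienced ("current") and once for the desired one.
responses = [
    {"current": (2, 8), "desired": (7, 7)},
    {"current": (8, 3), "desired": (6, 7)},
    {"current": (4, 4), "desired": (7, 6)},
]

def spread(points):
    """Mean distance of points from their centroid: a simple proxy for
    how shared (clustered) a set of views is."""
    cx = mean(p[0] for p in points)
    cy = mean(p[1] for p in points)
    return mean(((p[0] - cx) ** 2 + (p[1] - cy) ** 2) ** 0.5 for p in points)

current = [r["current"] for r in responses]
desired = [r["desired"] for r in responses]

# Mirroring the paper's observation: "current" views scatter widely across
# quadrants while "desired" views cluster, so spread(current) is much larger.
print(spread(current), spread(desired))
```

A large gap between the two spread values is the quantitative counterpart of the visual contrast between the scattered darker points and the clustered lighter points in Figure 1.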



The impact of this survey on the individuals was huge. Firstly, the recognition for some that

everyone saw the work environment so differently created genuine interest in discussing other

people’s points of view and what gave them that perspective. Secondly, the data presented on

the preferred environment was quite shocking to many employees, who had assumed that

managers would have wanted a different environment, and vice versa. This was the area where

most of the challenge took place. It is worth noting that the key factor in using this survey was not the

survey itself. Even if the results had turned out differently, the discussion could then have been

held to determine what the common culture was going to be. Whatever the outcome, the survey

provided an opportunity for the organisation to create its own ‘joint’ culture, based on an

organisational view rather than that of a few individuals. Where this approach has failed, it has tended to

be because the leadership have hoped to find evidence that their pre-determined solution is best,

or to prove a specific narrow point, rather than being open to a new solution.

This early stage in the new relationship with the organisation is a wonderful opportunity for

change. Often, when companies come in and try to impose a new culture without understanding

what is currently in place, great resistance is shown. New leaders can find it hard to define why

these changes are so much better than the tried and trusted ways.

Other organisations in the author’s experience have waited for a while before making change.

The effect of this can be that the employees assume that the existing culture is acceptable to the

new management. When the organisation then decides a change is necessary, it can be hard

work to create enough momentum for change to occur.

Thus, the time window for successful change in this environment can be relatively small, a few

months only, and needs to be handled proactively.

IV. Creating Cultural Change

Cultural change requires as much effort to develop and implement in a consistent and sustainable

way, as any process change. In looking back at the successes and failures of cultural change in

organisations, the author has developed an eight-step model, shown below in Figure 2.

Figure 2. Flowchart showing the model for creating cultural change



Each of the steps in the model is described below.

Clarity and Consistency

This defines the need for the organisation to be very clear about what changes they would like to

make and why, as these changes are intended to be long term, embedded changes. A key first

step was provided by the survey, as it identified clearly the direction the organisation was going to

take and also identified a number of behaviours, some of which were helpful going forward and

some of which had to change. Here the management team also had to decide how they were

going to change themselves, how they would approach their staff consistently, and also which

aspects were those that were most important to delivering the change, and therefore which ones

they were going to consistently reinforce with staff.


The sharing of the results with the staff was a key factor, both in terms of setting the scene for the

next period, and also having frank and open discussions about what that might mean for people

within the organisation. Dissent was welcomed, and was generally dealt with by the team rather

than the manager. This required a substantial amount of line management time, but it was time

well spent as employees could see the programme had been thought through. It also enabled

issues to be handled at the start, rather than later in the process. This sharing process continued

throughout the change process.

Individual Action

Based on the sharing of the information, individuals became more or less comfortable with the

new environment. Opportunities to discuss this on a one to one basis were offered, with

discussions being very open and honest. For example, where the organisation wanted to

increase delegation, training was given to increase people skills where required. Discussions

were held about what was expected within each job role, and leaders were encouraged to

discuss the changes with people in terms of their feelings as well as their tasks.

Honest Discussion

Where individuals felt that they could not perform the role in the way that role was being

described, an open and honest discussion was held about the impact and options. Where

challenges were brought to the new way of behaving due to work pressures, the management

team worked together to find solutions in alignment with the preferred direction, clarifying their

own views and finding consistency as a team. The managers were then honest about the conflict

and about the resolution. There were people who were not prepared to come on board initially,

but in the new environment of being clearer about what was expected of them, they found it

increasingly difficult to resist the change. For some, the solution was to move on to new roles

elsewhere, for others, changing their approach was easier than they had anticipated.

New Ways of Working

The organisation then began to change the way work was done, to accommodate the new values

and ways of behaving. As leaders and employees adjusted to the new ways of working, further

changes were introduced, such as the promotion of individuals naturally skilled at creating the

new working environment. Recruitment, assessments and recognition became more closely

aligned with the new outputs, and managers focussed on encouraging the new ways of working.



Part of the success of this approach was that people wanted to make the change, because they

believed that they would create the environment that they would prefer to work in.

Tracking and Measuring

In both cases, the movement towards this new environment was part of a larger, transformational

change programme, and was tracked and measured in the same way as other, more traditional

improvement projects, with key targets specified and met, for example ensuring everyone had a

staff assessment. A part of this process was to repeat the survey after 6 months and see how the

culture had changed.
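The repeat survey at six months can be reduced to a single progress metric. The sketch below, with invented numbers rather than the study's data, asks whether the centroid of the "current" views has moved towards the centroid of the co-created "desired" view between the two survey rounds.

```python
from statistics import mean

def centroid(points):
    """Centre of a set of (people-focus, task-focus) grid points."""
    return (mean(p[0] for p in points), mean(p[1] for p in points))

def distance(a, b):
    return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5

# Hypothetical grid points from the two survey rounds (all values invented):
desired = centroid([(7, 7), (6, 7), (7, 6)])   # the shared ideal view
month_0 = centroid([(2, 8), (8, 3), (4, 4)])   # "current" view at transfer
month_6 = centroid([(5, 6), (6, 7), (6, 5)])   # "current" view six months on

# Progress metric: has the organisation's view of "how we operate"
# moved towards the co-created ideal?
gap_before = distance(month_0, desired)
gap_after = distance(month_6, desired)
print(f"gap closed: {gap_before - gap_after:.2f} grid units")
```

A shrinking gap between the "current" centroid and the stable "desired" centroid is one way of expressing, in numbers, the movement towards a shared view reported in the conclusions.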

Accepting Loss

Both organisations recognised that some individuals would leave the organisation either by

choice or by circumstance. In both cases, the management teams treated the employees with

compassion, and were seen to do so, without compromising the direction or speed of the change programme.

Building Trust

A combination of clear leadership and direction, consistency, working towards solutions rather

than blame, compassion, a co-created new environment, and an acceptance of dealing with the

emotional needs of the staff all helped to create an environment of trust within the organisation.

As trust grew, the changes were implemented more smoothly than before, and with fewer difficulties.

V. Conclusions

At the end of six months, changes were apparent in both organisations, and the change teams

started to withdraw. Figure 3 gives an example of how the employees viewed their environment

after six months.

Figure 3. Typical results after 6 months

The darker (blue) points on Figure 3 show that the organisation was moving strongly towards a

shared view of how the organisation operated, and that progress towards the shared ideal vision

had been made.



The lighter (orange) points show that there had been small changes to the view of the ideal

environment, but that it was still largely in the area determined six months previously. This helped

the organisations to review and clarify their objectives, as they looked at how the changes made

had affected the business performance, as well as how the organisation now felt. This review

process is shown as part of the eight-step process in Figure 2.

Comments from both organisations at the end of twelve months were that performance of the

organisation was much stronger and that the organisation was still evolving. Both organisations

were now comfortable and confident in working through the change process themselves.

Interestingly, both organisations were also very clear that they did not want to move back to the

‘old ways’, indicating that significant and positive lasting change had occurred.


I would like to thank Celerant for allowing the use of their basic questionnaire in some of this

work, and also the many staff members within different organisations for participating and

providing feedback on the methodology.


Blake, R & Mouton, J. (1964) The Managerial Grid: The Key to Leadership Excellence. Houston: Gulf

Publishing Co.



Organizational routines and dynamics of organizational cognition


J. Aguilar 1 , M. Gardoni 2

1 INSA, LGeCo, Strasbourg, France

2 INSA, LGeCo, Strasbourg, France / Universidad Nacional de Colombia, Medellin.

Corresponding author: +33 (0)3 88 14 47 61

Abstract: R&D and innovation processes are activities rooted in social structures, cultural norms and values, in individual and collective perspectives related to each other. Thus, technological knowledge is linked to individual and collective actions and decisions. However, given the nature of the links between the actors involved in R&D and innovation activities, the decisions and actions oriented towards generating new technological products and processes transcend individual interpretations and assumptions about technology and its environment. In this sense, R&D and innovation processes arise from the interactions among the interpretative systems of the particular actors involved in those activities within firms. The organizational behaviour literature has called this interaction a "shared cognitive structure." From our point of view, shared cognitive structures can eventually become manifest in the form of new products. We propose that a focus on cognitive topics, combined with an approach based on predictable behavioural (or non-behavioural) patterns such as rules and routines, can help to identify explanations of shared cognitive structures for managing R&D and innovation activities. This paper examines concepts from the resource-based, technological-frames and knowledge-based views to suggest a framework for analyzing shared cognitive structures from a routines-and-rules point of view, oriented towards finding new ways of managing R&D and innovation activities.

Keywords: organizational routines, dynamic capabilities, social cognition

I. Introduction

Social cognition starts from the assumption that people act on the basis of their interpretations of the world, and that in doing so they construct an individual social reality and give it meaning (Orlikowski and Gash, 1994). Weick (1995) shows that through these interpretations, actors 'make sense' of the context and the activities in which they are involved before they can act. March and Simon (1958) assume that within an organization, everyone has a cognitive base that serves to organize and form their interpretations of reality and to give them meaning in order to be able to act: assumptions about the future, knowledge about different alternatives, and foresight of the outcomes produced by these alternatives. In this way, each organizational member has a reference frame. These individual frames have been coined by cognitive psychology as "schemas"; however, the literature in organizational behaviour has extended this individual idea to groups and organizations, calling it a "shared cognitive structure".

The literature on the social dynamics of cognition (Howells, 1995; Swan and Newell, 1998; Nicolini, 1999; Kaplan and Tripsas, 2004) has tried to explain these aspects by revealing the beliefs that are shared by the members of an organization, and how organizational decisions may differ from individual beliefs.

II. Technology frames and shared cognitive structures

For firms to achieve a purpose, people do not have to agree on personal goals, and given the division of labour in an organization they will have different knowledge (Cusmano, 2000). However, it is necessary to share certain basic values and perceptions about the environment in order to align their competencies and orientation towards objectives. When people are confronted with the environment, they use their cognitive base to "form simplified representations of the information environment" (Kaplan and Tripsas, 2004), which reduce the complexity of the environment so that they can interpret it and afterwards decide and act.

Processes oriented towards R&D and innovation activities are related to capabilities to create new technological knowledge and transfer it across the organisation. However, this transfer process can be carried out by the firm as a result of recurring organizational processes characterized as organizational routines 1 . In this way, both interpretative frames and organizational routines can be important: the former because they capture how actors (users, producers and managers) think about technology (Kaplan and Tripsas, 2004), and the latter because organizational routines define specific processes through which shared cognitive structures can be identified.

The concept of "technological frames" (coined by Orlikowski and Gash (1994)) can be defined as the process through which a producer or user approaches a set of interpretative processes for taking a certain action. In this sense, individuals focus on the particular interpretations made about technology and its role within the organization.

However, some researchers have expanded these ideas, emphasizing the social character of technological frames and their implications for a technology's trajectory, implementation and use (Orlikowski and Gash, 1994). This collective technological frame has been defined as the outcome of interactions between users and producers, and among themselves. The interpretations of different actors interact with each other and with the technology to produce outcomes. This process involves the sharing of personal experience and individual technological frames. It can create a common frame facilitating collective learning (Spender, 1998), and defining the rules and routines that constitute what the company does in terms of its actions and decisions around innovation activities.

As said before, technological frames are the lenses through which the company collects and interprets reality. But an interpretative process is necessary to connect technological frames to technological outcomes. Kaplan and Tripsas (2004) define four stages for this process: attention, interpretation, decision and action. In this process, an actor (user, producer, manager) collects and filters information from the environment, then gives meaning to this information, and then what has been interpreted is translated into actions or results. These authors affirm that this process is iterative, that technological frames can be modified and that, in the same way, collective technological frames emerge from the interactions between the interpretive processes of diverse actors within or outside the firm.

In the context of this article, the concepts of technological frames and shared cognitive structures will be used with the same meaning.

III. Organizational routines

An organizational routine is considered as a regular and predictable behavioural pattern of a firm that is part of the recursive process constituting an organisation, and that gains autonomy through its repeated application and in response to selective pressures (Reynaud, 1996; Cohen, 1995). Each routine relates to a given task within a specific activity, and provides the action according to the instructions defined by rules or depending on shared cognitive structures. In this context, organizational routines are not a single pattern but a set of possible patterns enabled and

1 In this context a routine is considered as a regular and predictable behavioural pattern of firms that gains autonomy through its repeated application (Cohen, 1995; Reynaud, 1996). Each routine relates to a given task, cognitive or physical, within a specific activity, and provides the action according to instructions, rules, norms or previously defined procedures.



contained by a variety of individual cognitive structures (Pentland and Rueter, 1994). In this sense, technological frames define how companies operate and how they define their rules and routines.

This notion of routine converges with shared cognitive structures in terms of selection, aptitude and learning, and the role of context. However, the emphasis in this definition is on the selection process, which implies that the routine may have an automatic character and that the outcome of the selection process is defined by specific collective cognitive frames. In this sense, shared cognitive structures are an important basis for creating rules and routines.

IV. R&D activities and shared cognitive structures

According to Murray (2001) (quoted in Frank, 2003), the technological knowledge bases of firms are generated by different combinations of knowledge-searching, assembly and appropriation processes within and outside the firms (Figure 1). These searching, assembling and appropriating activities are related to R&D activities, and constitute one of the main activities producing new knowledge within firms. The organization sets the direction of the search, asking itself whether the relevant knowledge is internal or external, and additionally whether it needs to integrate external knowledge into its research activities.

However, these searching, assembling and appropriation activities can be diffuse during periods of knowledge creation. The actors (users, producers, managers, institutions, etc.) identify, choose, invest in, support or adopt a technology. The actions defined by each of them have an effect on the path followed by a given technology, and the individual experiences of actors create a shared understanding of the technology and establish ways to follow. Individual cognitive frames act on other technological frames, and finally collective technological frames appear as a result of the actors' interactions. In the process of deciding which technologies to follow, companies incorporate their interpretations of the technology, the needs of the users and their own capabilities. Thus, the set of actions taken by several actors forms the technological knowledge path (Figure 1).

Figure 1. The knowledge production process



In terms of R&D management within firms, relationships between actors differ depending on what outcomes they are looking for, or on the processes they are carrying out. Nevertheless, in R&D the structure of some processes changes, because these processes are not, in general, unidirectional like operative processes. "In general", because R&D activities can be operational (routinized) processes defined by standard procedures, or non-routinized processes framed within individual cognitive mechanisms to produce knowledge. Both are important in R&D activities in terms of shared cognitive structures: the first because collective cognitive structures can be routinized to improve efficiency, and the second because, although it is not possible to routinize the activities associated with the creation of knowledge, it is possible to establish partially routinized mechanisms so that partially elaborated knowledge can be used and reused by other actors. Thus, for instance, information and the return of experience, prior experience, organizational history, and the accumulated knowledge of the interactions of the actors involved in these activities can be fundamental for defining results, or a collective orientation towards objectives, in line with the main purposes of the organization. In these shared cognitive structures, a main role is played by organizational routines and rules, which are used to support mechanisms of dependence or interdependence between individual and organizational thinking.

In R&D activities, routinized and non-routinized processes are thus both important: in some cases routines are necessary to facilitate productivity, simplify tasks and orient the work in a single direction; at the same time, non-routinized processes are important because they can generate new knowledge independent of established routines. The possibility of changing existing routines can be realised when new experiences are captured in existing routines. In the same way, non-routinized processes can be captured and reused as experience practices and mechanisms of learning, because the capture and reuse of experiences accelerates the sharing of knowledge and enhances the firm's specialized knowledge (Busch, Gardoni and Tollenaere, 2006; Gardoni, 2005). Additionally, these experiences can modify existing routines, formalize efficient activities, or feed experience back through all R&D activities.

In this sense, PIFA -Process Information and Functionality Analysis- helps to formalize process flows in which there is a dependency between the tasks and the flows of information necessary to carry them out (Busch, Gardoni, & Tollenaere, 2006). The interest of the PIFA tool lies in the information flow and in the combination of information with the functionalities of daily work activities, which is important for improving the workflows of specific functions (Busch, Gardoni, & Tollenaere, 2006). It is also important for the different tasks carried out in parallel. For example, scientific and technological information-search processes may follow routine processes; however, the experience of many actors can lead to a more efficient search process. The results of the search process can also be useful for achieving specific results of non-routine processes, and they can feed back into the results of parallel tasks associated with the same research processes.

In this sense, sharing knowledge can have an impact on the pursuit of the organization's shared objectives, or can increase the efficiency of existing processes independently of their results. PIFA thus helps to construct shared cognitive structures in the task flows across the organization; as noted above, it can also help in the construction of equally shared objectives, preventing the use and exploitation of creative knowledge from undermining efficient routinized processes.

In knowledge-intensive firms, the execution and re-execution of tasks depend on information and its changes (Busch, Gardoni, & Tollenaere, 2006). PIFA formalizes this knowledge flow and its parallel workflows. In other words, PIFA shows the places where information is flowing, and where it should be supported and improved. The objective of PIFA is therefore not only to model the shared cognition, but also to identify new ways of improving knowledge. In the same direction, the ANITA model (Frank, 2003) has a similar use, in the sense of learning through shared knowledge, although its approach is based on the use and handling of written information content. A retrieval and visualization module gives the user access to written information content and allows visualization according to several points of view corresponding to several users. In this sense, outputs are adjusted to the specific topic analyzed and to the general frame of the firm.
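The kind of task/information dependency structure that PIFA formalizes can be illustrated with a small directed graph. The sketch below is our own illustration, not the actual PIFA notation; the task names and information flows are invented:

```python
# Sketch of a PIFA-style process map: tasks linked by the information
# flows between them. Task and flow names are invented for illustration.
from collections import Counter

# information_flows[(producer, consumer)] = information exchanged
information_flows = {
    ("literature_search", "concept_study"): "state-of-the-art report",
    ("concept_study", "feasibility_test"): "candidate concepts",
    ("literature_search", "feasibility_test"): "prior test protocols",
}

def inputs_of(task):
    """Tasks whose output information a given task depends on."""
    return [src for (src, dst) in information_flows if dst == task]

def reuse_points():
    """Information produced once and consumed by several tasks --
    candidate spots for capturing and reusing experience."""
    producers = Counter(src for (src, _) in information_flows)
    return [t for t, n in producers.items() if n > 1]

print(inputs_of("feasibility_test"))  # ['concept_study', 'literature_search']
print(reuse_points())                 # ['literature_search']
```

Mapping where one task's output feeds several others is exactly the kind of place where, as argued above, partially elaborated knowledge is worth capturing for reuse.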

The effectiveness of R&D activities, in terms of process management, could be oriented towards identifying whether all knowledge production can be useful for creating a shared cognitive structure. This integration process may or may not be useful when knowledge-intensive firms have both routinized and non-routinized processes. As a first approximation (Cusmano, 2000), it can depend on the features of the R&D activities. Some technological topics require partners with cognitive proximity and reciprocal knowledge absorption; other technological fields require the integration of diversified competencies not necessarily related to the same technological fields. For instance, for reciprocal learning to take place, producers (or partners) should have sufficient cognitive distance 1 , since they possess different technological frames, in order to create "non-redundant" knowledge (Cusmano, 2000); on the other hand, they should be close in technological frames to enable fluent communication. Cusmano (2000), quoting Metcalfe (1995), shows that the main concern of R&D activities is ensuring an equilibrium between "creative destruction" and "order", the latter meaning "coordination of the system rather than convergence to a centre of gravity".

V. Conclusion

Shared cognitive structure attempts to reproduce routines within an organisation. This notion of routine converges with shared cognitive structures in terms of selection, aptitude and learning, and the role of context. These routines can become habituated in norms, refined into other routines, and/or changed into new behavioural actions oriented to specific technological fields.

This article has presented an approach to the factors associated with R&D and innovation activities, aimed at capturing and representing the organizational cognition oriented towards those activities. ANITA and PIFA are two examples of tools useful for this purpose.

R&D management could be oriented towards identifying collective routines that enhance the firm's specialized knowledge. Additionally, it is important to identify individual habits that could be put to better use without shared collective frames, and to capture and represent previous experiences so that they can be shared with other actors.


Busch, H.; Gardoni, M. & Tollenaere, M. (2006) PIFA: An analysing method to understand Knowledge Sharing Aspects in Dynamic Business Process Management - case study at STMicroelectronics. Submitted to CERA.

Cohen, M. et al. (1995) Routines and Other Recurring Action Patterns of Organizations: Contemporary Research Issues. Santa Fe Institute. Working Paper 95-11-101.

1 Cognitive distance refers to different life paths in different environments, where people interpret, understand and evaluate the world differently (Cusmano, 2000; Noteboom, et al 2006)



Cohen, W., & Levinthal, D. (1990) Absorptive Capacity: A New Perspective on Learning and Innovation,

ASQ, 35 (1990), 128-152.

Cusmano, L. (2000) Technology Policy and Co-operative R&D: the role of relational research capacity.

DRUID Working Paper No 00-3

Frank, Ch. (2003) Knowledge Management for an Industrial Research Center - case study EADS. Ecole Doctorale Organisation Industrielle et Systèmes de Production, Institut National Polytechnique de Grenoble. Doctoral thesis.

Gardoni, M. (2005) Concurrent Engineering in Research Projects to support information content management. Concurrent Engineering: Research and Applications (CERA), Vol. 13, No. 2, June, pp. 135-144.

Howells, J. (1995) A socio-cognitive approach to innovation. Research Policy 24 pp. 883 - 894.

Kaplan, S. & Tripsas, M. (2006) Thinking about Technology: Applying a Cognitive Lens to Technical

Change. Working paper September 04-039

March, J. G., & Simon, H. A. (1958) Organizations. Wiley, New York.

Metcalfe J.S. (1995), 'The Economic Foundations of Technology Policy: Equilibrium and Evolutionary

Perspectives', in Stoneman P. (ed), Handbook of the Economics of Innovation and Technological Change,

Basil Blackwell, Oxford

Murray, F (2001) Following Distinctive Paths of Knowledge: Strategies for Organizational Knowledge

Building within Science-based Firms. In Managing Industrial Knowledge – Creation, transfer, utilization-,

Nonaka, I. and Teece, D. (ed.), SAGE publications, Thousand Oaks, p. 182-201

Nicolini, D. (1999) Comparing Methods for mapping organizational cognition. Organization Studies 20 (5) pp.

833 - 860

Noteboom, et al. (2006). Optimal cognitive distance and absorptive Capacity. CentER Discussion Paper

Series No. 2006-33

Orlikowski, W. & Gash, D. (1994) Technological frames: making sense of information technology in

organizations, ACM transactions on Information Systems, 12 (2), 174-207.

Pentland, B.; Rueter, H. (1994). Organizational routines as grammars of action. Administrative Science Quarterly, Vol. 39, 484-510

Reynaud, B. (1996). The properties of routines: tools of decision making and modes of coordination.

Working Paper. Non edited.

Spender, J.; Eden, C. (1998) Introduction. In: Spender, J.; Eden, C. (eds.) Managerial and organizational

cognition: theory, methods and research. London, SAGE.

Swan, J. & Newell, S. (1998) Making sense of technological innovation: the political and social dynamics of cognition. In: Eden, C. (1998) Managerial and organizational cognition. SAGE, London.

Todorova, G.; Durisin, B. (2003) The concept and reconceptualization of absorptive capacity: recognizing

the value. Working Paper N. 95, SDA Bocconi.

Weick, K. (1995) Sensemaking in organizations. Thousand Oaks: Sage Publications.


A Decision Support System

for Complex Products Development Scheduling

I. Lizarralde 1, 2, * , P. Esquirol 2 , A. Rivière 1

1 EADS France

2 Université de Toulouse, LAAS-CNRS, Toulouse, France

*, 33 (0)5 61 16 88 43

Abstract: This paper investigates the problems of project scheduling at the design stage of the development

of a civil aircraft. Such a complex system development is characterised by a dynamic environment and

uncertainties concerning the duration of design activities. In order to deal with the characteristics of the

design process reality, we propose a Decision Support System based on a Constraint Satisfaction Problem

model that supports three main functions: plateau level scheduling, dependencies between design teams

and scenarios management. Our approach is aimed to be generic while remaining flexible enough to be

implemented within the aerospace industry. It should facilitate cooperation between design teams and

support the decision making process at different managerial levels.

Keywords: Design, Project Management, Scheduling, Scenarios, Dependencies.

I. Introduction

Product development complexity can be characterised on one hand by the large number of

physical items to be integrated with multiple connections that might be difficult to control

(structural complexity). On the other hand, it can also be characterised by process complexity,

which deals with product development activities, taking into account items such as design

procedures, skills organisation, work distribution, decision procedures, etc. and that is mainly

characterised by the numerous interactions between development teams. Consequently, the

development of a new civil aircraft can be considered as complex from a product and process

point of view. Development of complex products has been discussed by numerous papers and

influential publications (ULRICH and EPPINGER 2004). However, current development projects

prove that there are still major challenges to be addressed in controlling target dates and

resources allocated to a specific project (GAREL, GIARD et al. 2004). The risk of overrunning is

particularly high in the aircraft industry, where the resources and budget engaged are substantial (REPENNING 2001). Facing this situation, some corrective actions might be taken (e.g. late allocation of resources based on outsourcing or hiring new personnel, planning changes, etc.), but they might affect the company's operational performance.

II. Needs for complex systems development scheduling approach

In order to manage the functional and structural complexity of large systems development, design groups are located in the same environment and are generally derived from the product breakdown structure (PBS). These groups are called "plateaux". For a limited period of time, a plateau gathers different but complementary skills that strive to reach common objectives related to the development of a specific subsystem. Plateau level schedules are one of the key tools for program managers at system level. They allow managers to control the progress status of the plateau's activities and help them in the decision-making process concerning task definition and resource allocation. For design activities in plateau level scheduling, a majority of methods assume that the information needed to build schedules (e.g. activity durations) is available, stable and complete. However, facts show that design processes are exposed to a significant level of uncertainty.



Plateaux have been an efficient answer for reducing development costs and time to market. Nevertheless, the organisation structure based on plateaux has accentuated some of the problems characterising complex product developments. Indeed, organisation structures based on plateaux have highlighted the need to manage internal resources efficiently and to satisfy agreed time constraints. Therefore, the importance of dependencies with other plateaux has to be emphasized; otherwise, the risk of losing a systemic vision of the entire product becomes significant.


Dependencies between design teams can be identified whenever data exchange is required. Data is a generic term used to describe the deliverables exchanged between design teams. Models, drawings, mock-ups, requirements specification documents, calculation results, sketches, test results, etc. can all be part of a deliverable. The content, maturity level and delivery date of the exchanged deliverables are often subject to negotiations (SAINT-MARC, CALLOT et al. 2004), (GREBICI, BLANCO et al. 2005). Consequently, dependencies management often refers to interfaces management, deliverables management, contracts management or interdependencies management. Therefore, a solution that merges scheduling practices and dependencies management is needed in order to improve the plateau level scheduling process while taking into account the uncertainties of the design process reality.

III. A building-block based approach

Our approach is based on use cases provided by a major European aerospace company. A procedure was set up to analyse the company's internal procedures related to Project Management (PM) activities, complemented by semi-structured interviews with team leaders and program management functions. Our research project follows a building-block approach, described in Figure 1, which represents a group of functions to be provided to end-users through a Decision Support System (DSS). The underlying PM model enables a Constraint Satisfaction Problem (CSP) approach to address the dynamic aspects required by the DSS. This model is a prerequisite for the development of the different building-blocks.

Figure 1. The building-block approach



In order to address the identified needs, we have developed three main functions. Firstly, we focus on plateau level scheduling; then we tackle dependencies management; and finally we propose a scenario-based approach to deal with uncertainties. Before detailing these three features, we describe the underlying PM model.

IV. The underlying PM model: an energy-based constraint satisfaction problem

The approach relies on a CSP model. Resource intensities (per activity and per period) are the main variables of the problem. This scheduling problem can therefore be considered as a resource allocation problem.

First, we describe the energy-allocation-based approach, which is the basis for modelling most constraints. Then we list the set of constraints we model. The model is implemented in a Constraint Logic Programming (CLP) environment; CLP extends Logic Programming and provides a flexible and rigorous framework for solving CSP models.

The energy allocation problem based approach

Activities are mainly defined by their energy: ei denotes the energy required to perform activity i between its starting date si and its finishing date fi. Energy characterizes a quantity of work and is thus proportional to time and to the strength/intensity of the resource able to realize it.

Energy is particularly well suited to this scheduling problem, in which the work quantities that define the activities are well defined and can be considered as data, while durations and resource allocations are decision variables. The energy concept enables the definition of specific constraint propagation algorithms (see, for example, (BRUCKER 2002)), useful both to characterize the consistency of the problem and to improve the resolution process by dynamically reducing the domains of the remaining variables after each decision step.

The main idea of this so-called energy-based resolution approach is to deduce restrictions on the time location and resource allocation of one activity by taking into account the resource availability and the minimal resource consumption of the remaining concurrent activities. This kind of reasoning has been successful in many scheduling problems (see (ESQUIROL, LOPEZ et al. 2001)).


In our model we consider fully elastic preemptive activities (BAPTISTE, LE PAPE et al. 1999): the duration of an activity i is not known in advance and its intensity aiθ can vary during execution. The number of resource units allocated to i may then become null at some periods θ, except at si and fi. We also suppose this intensity to be an integer, considering that the elementary resource units are persons. Consequently, the intensities { aiθ } are the main variables of the problem, one per activity and per period. The scheduling problem is thus transformed into an allocation problem.

Concerning resource definition, as review dates are given by senior management, the maximal resource availability A is also supposed to be fixed at this decision level. A is an integer that represents the maximum number of persons in the team who may work concurrently at any period θ.
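Under these definitions, a candidate schedule is simply a matrix of intensities aiθ. As a toy illustration (the activity data and the checking code below are our own sketch, not part of the authors' CLP implementation), such a matrix can be checked against the basic constraints of the model:

```python
# Feasibility check for the energy-based model: each activity i must
# receive exactly e_i units of energy, be active at s_i and f_i, and the
# per-period total must not exceed the team capacity A.

def is_feasible(intensity, energy, start, finish, capacity, horizon):
    # intensity[i][t]: integer resource units given to activity i in period t
    for i, e in energy.items():
        if sum(intensity[i]) != e:                 # activity energy constraint
            return False
        if intensity[i][start[i]] == 0 or intensity[i][finish[i]] == 0:
            return False                           # elastic, but active at s_i and f_i
    for t in range(horizon):                       # cumulative resource constraint
        if sum(intensity[i][t] for i in energy) > capacity:
            return False
    return True

# Two toy activities over 4 periods, capacity A = 3 persons.
energy = {"i": 4, "j": 3}
start, finish = {"i": 0, "j": 1}, {"i": 2, "j": 3}
plan = {"i": [2, 1, 1, 0], "j": [0, 1, 1, 1]}
print(is_feasible(plan, energy, start, finish, 3, 4))  # True
```

The actual DSS does not enumerate and check candidate matrices like this; it lets the CLP solver propagate these constraints over the { aiθ } domains.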





A Constraint Satisfaction Problem model

Concerning constraints, the first three types of constraints of our model are easy to express and

have already been discussed by numerous papers: Activity energy constraint, Cumulative

resource constraint, and the time window constraints (DEMEULEMEESTER and HERROELEN

2002), (LABORIE 2003), (BAPTISTE, PAPE et al. 2001), (KUMAR 1992).

The next two constraints are related to interdependencies between two activities. On one hand,

we have proposed an interdependency constraint that deals with a pair of activities belonging to

the same design team schedule: the Energy-Precedence Constraint (EPC).

Classically, a scheduling precedence constraint between two activities {i, j} forces an activity i to be finished before an activity j begins. It is expressed as the potential inequality tj − ti ≥ pi, which is equivalent to tj ≥ ci.


In a concurrent engineering context, a fully parallel execution of design and development activities is desired but not always possible, since it could violate the resource availability constraint or

because there may be interdependencies between some pairs of activities. In the latter case an

activity i is forced to be in a state where it has already consumed a minimal energy eij (with

eij < ei) before activity j can start. This energy corresponds to the minimal work that has to be

done in activity i to produce reliable data that can be used to start activity j. For that reason we

call it an Energy-Precedence Constraint (EPC): EPC (i, j, eij).

EPCs are the most difficult constraints to express with the allocation variables { aiθ } in place of the time variables {ti, ci, pi}. We have proposed in other works (LIZARRALDE, ESQUIROL et al. 2007) some propagation routines dedicated to these constraints.
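One simple deduction of this kind can be illustrated as follows (a sketch of the general idea, not the authors' published propagation routines): if activity i can absorb at most a_max resource units per period, the EPC directly yields a lower bound on the start of j.

```python
# Energy-Precedence Constraint EPC(i, j, e_ij): activity j may start only
# once activity i has consumed at least e_ij energy units. If i can absorb
# at most a_max units per period from its start s_i, a lower bound on the
# start of j follows directly.
import math

def epc_earliest_start(s_i, e_ij, a_max):
    """Earliest period in which j may start: i needs at least
    ceil(e_ij / a_max) full periods of work beforehand."""
    return s_i + math.ceil(e_ij / a_max)

# i starts at period 1, must deliver 9 energy units, at most 3 persons:
print(epc_earliest_start(1, 9, 3))  # 4
```

A real propagation routine would tighten this bound further using the resource availability and the energy demands of the other concurrent activities.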

On the other hand, we have modelled the interdependencies between two design teams, which

are usually formalised by contracts, using a new type of constraint: Contract Dependencies

Constraints (CDC).

Consider a dependency that involves two design teams and activities i and j, one per team. Figure 2 illustrates the case where task i, assigned to design team x, must deliver a data d with a minimal maturity level uij to design team y in order for task j to start.



Figure 2. Dependency between two teams. Figure 3. Energy and maturity functions.

For the maturity level uij, the energy needed to reach this level can be calculated: eij = f(uij).

We are currently working on the different functions that relate both variables depending on the nature of the design activity. Indeed, a design team that re-uses concepts from a former project will be able to supply data with a high level of maturity in the early phases of its development (I, Figure 3), while a very innovative development will lead to a late freezing of concepts (II).
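The two situations of Figure 3 can be sketched as two monotone maturity profiles. The concrete function shapes below are illustrative assumptions only, since the text states that the actual functions relating maturity and energy are still under study:

```python
# Illustrative maturity functions u(e): fraction of maturity reached after
# expending a fraction of the total energy e_i. Both shapes are assumptions.

def maturity_reuse(work_fraction):
    # (I) concept re-use: maturity rises quickly in early phases
    return work_fraction ** 0.5

def maturity_innovative(work_fraction):
    # (II) innovative design: concepts freeze late, maturity rises late
    return work_fraction ** 3

def energy_for_maturity(u_target, maturity_fn, e_total, steps=10000):
    """Invert u(e) numerically: smallest energy reaching the target level,
    i.e. e_ij = f(u_ij) in the paper's notation."""
    for k in range(steps + 1):
        if maturity_fn(k / steps) >= u_target:
            return e_total * k / steps
    return e_total

# Energy needed to reach 50% maturity on a 20-unit task:
print(energy_for_maturity(0.5, maturity_reuse, 20))       # 5.0 (early)
print(energy_for_maturity(0.5, maturity_innovative, 20))  # ~15.9 (late)
```

Whatever the real function shapes turn out to be, only monotonicity is needed for the inversion eij = f(uij) used by the constraint below.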

Then, we can define a new temporal constraint between these activities, defined by a due date and the energy eij. It is a special temporal constraint since the due date is related not to the completion of the activity but to the carrying out of a certain amount of work; in other words, a constraint related to a dependency obliges the team to expend a certain amount of energy before a given date. Indeed, the

Contract Dependency Constraint (CDCij) is defined by two pieces of data: {tij, eij}.

For the activity i of the first design team we have ∑θ<tij aiθ = eij and ∑θ≥tij aiθ = ei − eij, while the earliest start time of the activity j of the second design team is fixed and equal to tij: ajθ = 0 ∀θ ∈ [1, tij[.

In contrast to classic scheduling problems, our model takes dependencies into account as constraints that are essential features of the description of a scheduling problem. Dependencies negotiated between design teams are treated as dynamic constraints in the design schedules.

V. Capabilities of the Decision Support System (DSS)

Description of three main capabilities

Based on this CSP model, we have developed a decision-making support tool that offers the user three main capabilities.

Firstly, our DSS deals with solving plateau level scheduling problems. In a standard resource-constrained project scheduling problem, the objective is usually to find the schedule that minimizes the makespan or the maximum lateness, with the help of a black-box one-step solving algorithm. In our case, two types of solving strategies are proposed, both based on the CSP model. On one hand, we can design a solving strategy that interacts with the user, who defines a hierarchy of constraints so that, if the problem is over-constrained, the weakest constraints are relaxed first (e.g. the user might consider that respecting a review date is a weaker constraint than respecting the resource allocation for that period). On the other hand, we can design a heuristic solving strategy mainly characterised by the order in which decision variables are instantiated and by the order in which values are enumerated for each variable instantiation (maximum values first, minimums, midpoints, etc.).

Secondly, the DSS proposes a frame to deal with different scenarios. The scenario management process includes scenario generation and scenario evaluation. A scenario is a description of the original schedule, the possible events that might affect it, and their impacts. For the scenario generation phase, three methods to generate scenarios are available, all based on the CSP model. The first one creates schedules as described in the paragraph on solving plateau level scheduling problems. The second method is based on the modification of one or several variables and the analysis of the impact on other variables and constraints; this method is known as sensitivity analysis. In the third method, the user inverts the process in order to create new scenarios. This method is called goal-seek analysis because the user can build a schedule respecting all of his or her goals without taking into account some external constraints. Once the scenarios are generated, an evaluation is performed based on a risk analysis process. For each scenario, a likelihood is defined as well as an impact factor. A combination of both factors allows the user to evaluate each scenario, to make comparisons between different scenarios, and to order a set of scenarios hierarchically. Therefore, different evaluated scenarios are available so project managers can decide which one to use as a planning baseline.
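The likelihood/impact evaluation can be sketched as a simple scoring step. The multiplicative combination rule, the scenario names, and the numbers below are illustrative assumptions, since the text does not give the exact formula:

```python
# Scenario evaluation sketch: each generated scenario carries a likelihood
# and an impact factor; combining them yields a ranking the project manager
# can use to pick a planning baseline. The product rule is an assumed
# combination, not the paper's formula.

scenarios = [
    {"name": "relax review date", "likelihood": 0.6, "impact": 0.3},
    {"name": "relax resource cap", "likelihood": 0.4, "impact": 0.7},
    {"name": "relax contract CDC", "likelihood": 0.5, "impact": 0.4},
]

for s in scenarios:
    s["risk"] = s["likelihood"] * s["impact"]

# Hierarchical ordering: lowest combined risk first.
ranking = sorted(scenarios, key=lambda s: s["risk"])
print([s["name"] for s in ranking])
```

Any monotone combination of the two factors would support the same kind of hierarchical ordering of scenarios.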

Thirdly, dependencies between design teams have a key function in our DSS. First, dependencies have been modelled as a new constraint included in our CSP model. Dependencies management is also the basis of the propagation procedures in our DSS. Indeed, given the dynamic nature of the design process, the propagation of changes through constraints and schedules becomes crucial. Each time a design team updates its own schedule, especially after an unforeseen event, the information is transferred to the other teams with which it has dependencies. Contracts related to these dependencies are included in the schedules, allowing the identification of the impacts and effects of decisions on the different design teams.

Example of a solving strategy interacting with the user

The following example illustrates how the different capabilities could be used to support the decision-making process of managers at different hierarchical levels.

Consider a part of the PBS composed of two design teams and a management team:


Management team Z

Design Team X Design Team Y

Figure 4. Hierarchical relation between the teams of the example

Focusing on the scheduling process of design team X, we define 8 tasks with the following amounts of work: eA=20, eB=20, eC=19, eD=14, eE=11, eF=8, eG=6 and eH=4. Other constraints include a contract constraint between design team X and design team Y. This contract establishes that design team X shall deliver a data called uij to design team Y before period 4 with a maturity level of 50% (for more details see (LIZARRALDE, ESQUIROL et al. 2006)). Design team X has identified task A as the task that will define this data (i=A) and has calculated that the energy to be expended in order to achieve the 50% maturity level is 9 units. Therefore, these 9 units shall be performed during periods 1, 2 and 3. Then we can establish: CDCAj = {tAj = 4, eAj = 9}. Moreover, an Energy-Precedence Constraint (EPC) exists between task B and tasks C/D: EPC (B, C, 20) and EPC (B, D, 20). Let us note that both constraints are equivalent to the traditional scheduling precedence constraint, since eij = ei in both cases.
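This contract can be checked numerically against any candidate intensity profile for task A. The helper below is an illustrative sketch; the profile used is the one later retained in Figure 5 (3 units in each of periods 1-3):

```python
# Check CDC_Aj = {t_Aj = 4, e_Aj = 9}: task A must have expended at least
# 9 of its 20 energy units before period 4.

def cdc_satisfied(intensities, t_ij, e_ij):
    """intensities[k] = units spent by task A in period k+1."""
    return sum(intensities[: t_ij - 1]) >= e_ij

a_profile = [3, 3, 3, 3, 3, 3, 2]  # periods 1-7, total e_A = 20
print(cdc_satisfied(a_profile, 4, 9))  # True: 3+3+3 = 9 before period 4
```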


For this problem, no solution satisfies all the constraints. Therefore the manager of design team X launches several simulations in order to find feasible schedules corresponding to scenarios built by relaxing one or more constraints. Once the different scenarios are identified, he or she can choose a new schedule and perform the actions needed to relax the constraints in reality. In this example, the identified solution is the following:



Period   1  2  3  4  5  6  7  8  9 10 11 12 13 14 15 16 17 18 19 20
A        3  3  3  3  3  3  2  -  -  -  -  -  -  -  -  -  -  -  -  -
B        2  2  2  2  2  2  3  3  2  -  -  -  -  -  -  -  -  -  -  -
C        -  -  -  -  -  -  -  -  -  3  3  3  3  3  3  1  -  -  -  -
D        -  -  -  -  -  -  -  -  -  2  2  2  2  2  2  2  -  -  -  -
E        -  -  -  -  -  -  -  -  -  -  -  -  -  2  2  3  3  1  -  -
F        -  -  -  -  -  -  -  -  -  -  -  -  -  -  -  1  3  3  1  -
G        -  -  -  -  -  -  -  -  -  -  -  -  -  -  -  -  1  1  3  1
H        -  -  -  -  -  -  -  -  -  -  -  -  -  -  -  -  -  -  1  3
Total    5  5  5  5  5  5  5  3  2  5  5  5  5  7  7  7  7  5  5  4

Figure 5. Scenario chosen by the user. Resource constraint relaxed.

In order to perform this schedule, the resource constraint has been relaxed during the periods {14, 15, 16, 17}:

∑i=A..H aiθ = 7 > A = 5, ∀θ ∈ {14, 15, 16, 17}

The actions linked to the implementation of this relaxation include the validation, by the head of "Management Team Z", of hiring two more designers for four periods. But this actor does not agree to allocate two more people to design team X and accepts only one additional resource. Taking this into account, the manager of design team X restarts the simulation process, since the new scheduling problem with one more resource allocated to periods {14, 15, 16, 17} has no solution.

This time, the solution found includes the relaxation of the CDC.

Period   1  2  3  4  5  6  7  8  9 10 11 12 13 14 15 16 17 18 19 20
A        2  2  1  2  3  3  3  3  1  -  -  -  -  -  -  -  -  -  -  -
B        3  3  3  3  2  2  2  2  -  -  -  -  -  -  -  -  -  -  -  -
C        -  -  -  -  -  -  -  -  3  3  3  3  3  3  1  -  -  -  -  -
D        -  -  -  -  -  -  -  -  1  2  2  2  2  2  3  -  -  -  -  -
E        -  -  -  -  -  -  -  -  -  -  -  -  -  1  2  3  3  2  -  -
F        -  -  -  -  -  -  -  -  -  -  -  -  -  -  -  3  3  2  -  -
G        -  -  -  -  -  -  -  -  -  -  -  -  -  -  -  -  -  1  3  2
H        -  -  -  -  -  -  -  -  -  -  -  -  -  -  -  -  -  -  2  2
Total    5  5  4  5  5  5  5  5  5  5  5  5  5  6  6  6  6  5  5  4

Figure 6. Scenario chosen by the user. Resource and contract constraints relaxed.


The manager of design team X announces to design team Y that, even though the date of the contract will be respected, the 50% maturity demanded cannot be attained: only 5 units of energy will be expended on task A before period 4. The maturity level expected by design team Y will be available during period 5. This is a common Concurrent Engineering practice, in which data is delivered in more than one step, giving the customer the possibility to follow the work with preliminary information (TERWIESCH, LOCH et al. 2002). The two deliveries correspond to the following new constraints:

CDCAj = {tAj = 4, eAj = 5}

CDCAj = {tAj = 5, eAj = 9}

If design team Y can rearrange its schedule to accept the data coming from design team X with a lower maturity level, a new contract will be signed between both teams and two feasible new schedules will be established for the project. Nevertheless, if design team Y is not able to find a solution with the new constraints imposed by design team X, it can launch simulations relaxing one or more constraints. Possible solutions include the allocation of a new resource or the modification of the contract with another team. On the one hand, we see how the problem can come back to the head of management team Z, who will need to decide between allocating one resource either to design team X or to design team Y, or relaxing other types of constraints (e.g. review dates) given to both design teams. On the other hand, we notice that Contract Dependencies Constraints can be propagated across different teams. Therefore, our approach allows measuring the impact of a project management decision on the global project, and not only within the scheduling scope of a single team.

VI. Conclusions

Based on empirical studies, we have investigated the plateau level scheduling process. In our

proposal, the problem is considered as a discrete Constraint Satisfaction Problem. The proposed

solution includes the energy allocation based approach and the mathematical definition of two

new types of constraints for resources allocation. These foundations contribute to the

identification of scenarios and to the management of dependencies between design teams. To

illustrate these proposals the capabilities of the developed prototype have been presented.

Further work includes new resolution strategies based on a heuristic solving strategy, mainly characterised by the order in which decision variables are instantiated and by the order in which values are enumerated for each variable instantiation (maximum values first, minimums, midpoints, etc.). Moreover, we are testing different ways of implementing propagation mechanisms in our CLP environment in order to improve the efficiency of the constraint propagation techniques. These techniques are checked using real use cases in order to evaluate and improve run-time performance.

Finally, we are also considering a probabilistic approach to deal with the relaxation of the Contract Dependencies Constraint. This approach consists of modelling the assumptions linked to the data of the contract and the rework estimations if an assumption fails.

Features related to the capabilities presented in this paper have been tested with two prototypes

of the tool. Based on prototypes’ evaluation and feedback, an advanced prototype will be

released in 2007.




BAPTISTE, P., C. LE PAPE, et al. (1999). Satisfiability Tests and Time-Bound Adjustments for Cumulative Scheduling Problems. Annals of Operations Research, Vol. 92, pp. 305-333.

BAPTISTE, P., C. LE PAPE, et al. (2001). Constraint-based Scheduling. Norwell, Kluwer Academic Publishers.
BRUCKER, P. (2002). Scheduling and constraint propagation. Discrete Applied Mathematics Vol.123, No.1-

3, pp. 227-256.

DEMEULEMEESTER, E. and W. HERROELEN (2002). Project Scheduling: A Research Handbook. Boston, Kluwer Academic Publishers.

ESQUIROL, P., P. LOPEZ, et al. (2001). Propagation de contraintes en ordonnancement. Ordonnancement

de la Production. P. L. F. Roubellat, Hermes. Chap. 5: 131-167.

GAREL, G., V. GIARD, et al. (2004). Faire de la recherche en management de projet. Paris, Vuibert.

GREBICI, K., E. BLANCO, et al. (2005). Framework for Managing Preliminary Information in Collaborative

Design Processes. Proceedings of the 2nd International Conference on Product Lifecycle Management

PLM05, Lyon, France.

KUMAR, V. (1992). Algorithms for constraint-satisfaction problems: A survey. AI Magazine Vol.13, No.1, pp.

LABORIE, P. (2003). Algorithms for Propagating Resource Constraints in AI Planning and Scheduling:

Existing Approaches and New Results. Artificial Intelligence Vol.143, No.2, pp. 151-188.

LIZARRALDE, I., P. ESQUIROL, et al. (2006). Adapting project management to complex systems

development reality: a maturity and energy constraints based approach. Proceedings of the 16th CIRP

International Design Seminar, Kananaskis, Alberta, Canada.

LIZARRALDE, I., P. ESQUIROL, et al. (2007). Scheduling the development of a civil aircraft. International

Conference on Industrial Engineering and Systems Management (IESM 2007), Beijing (China).

REPENNING, N. P. (2001). Understanding fire fighting in new product development. Journal of Product

Innovation Management Vol.18, No.5, pp. 285-300.

SAINT-MARC, L., M. CALLOT, et al. (2004). Toward a data maturity evaluation in collaborative design

processes. Proceedings of the 8th International Design Conference DESIGN04, Dubrovnik, ed: Marjanovic,


TERWIESCH, C., C. H. LOCH, et al. (2002). Exchanging Preliminary Information in Concurrent Engineering:

Alternative Coordination Strategies. Organization Science Vol.13, No.4, pp. 402-419.

ULRICH, K. and S. EPPINGER (2004). Product Design and Development, Irwin,McGraw-Hill.



Fostering SMEs networking through Business Ecosystem and ICT


G. Perrone 1, *, L. Scarpulla 1 , L. Cuccia 2

1 DTMPIG, Università degli Studi di Palermo, Italy

2 SEAF, Università degli Studi di Palermo, Italy

* Corresponding author: +39 091 665 70 35

Abstract: Globalisation is a new challenge for European Small and Medium Enterprises (SMEs): on the one hand it is a threat, since new emerging companies are entering their domestic markets; on the other hand, it is an opportunity to enter new emerging and growing markets. To meet this challenge, European SMEs need to play the networking card: it is well acknowledged that networking can improve SME effectiveness and efficiency, which is essential for competing in a globalised market. Many regional public institutions are considering how to improve the networking capacity of their SMEs through specific programmes and investments. This paper reports the results of an ongoing research project aimed at improving the networking capacity of SMEs through an innovative conception of the Business Ecosystem idea. The paper shows how the innovative Networking Business Ecosystem has been conceived and how it works to pursue this aim.

Keywords: SMEs networking, Business Ecosystems, Enterprise interoperability, Business Research for

SMEs, Distributed ICT platforms

I. Introduction

Since Powell's seminal work (Powell, 1990), networked organisations have emerged as a new enterprise pattern better able to match the requirements of the new competitive arena. Since then, many papers have addressed hybrids from an economic point of view (Menard, 2004), from an organisational point of view (Grandori and Soda, 1995), and from a performance point of view (Mazzarol, 1998). This last point is particularly interesting: an underlying assumption concerning networks is that hybrids are especially good for SMEs. Through networks, SMEs are able to overcome some of the limitations due to their size (achieving scale and scope economies from resource pooling) while maintaining the advantages of being small (reactiveness, proactiveness and so forth). Recently, several papers have addressed the performance issue for SME networks, indicating that networks are able to support long-term growth for SMEs (Havnes and Senneseth, 2001; Lin and Zhang, 2005). These results have become so important that public policy in Europe is pushing SMEs into networks through specific programmes. Firm networking is present in several work packages of the 7th Framework Programme of the European Commission: Activity 2.2 (Research for SME associations) aims at improving SME association (EU Commission, 2007a); objective NMP-4.3.3 Networked Production (EU Commission, 2007b) aims at developing research for networked production; and Objective ICT-2007-1.3: ICT in support of the networked enterprise (EU Commission, 2007c) aims at developing new ICT tools for SME networking. This last point is particularly interesting for our purposes: networking technologies promise new tools that make business collaboration and coordination easier. However, ICT can also improve the ability of SMEs to associate in networks. In that case, ICT needs to be coupled with a business paradigm that facilitates SME association in networks through complementarity matching and the discovery of business opportunities; this paper moves in that direction.

Indeed, it presents a novel methodology for improving SME networking not from a cooperation and operational point of view, for which several ICT platforms have already been developed, but from the point of view of constructing SME networks. For this purpose we have matched a business paradigm, the Business Ecosystem (Moore, 1993), with an ICT platform. The result is an evolving technological environment that allows SMEs to create networks by matching complementarities and finding new business opportunities. Since this is an ongoing research project, section II presents the main idea of the Networking Business Ecosystem (NBE), while section III provides an overview of the Intelligent System underlying the NBE environment. Conclusions are sketched in section IV.

II. A Business Ecosystem for Business Networking

The aim of the research presented here is to address a new approach for creating SME networks through a specific characterisation of the Business Ecosystem concept and the use of distributed ICT platforms. In particular, the NBE is the objective of a research project called Sicilian Digital Business Ecosystem (SDBE). The NBE takes inspiration from the European project Digital Business Ecosystem, whose main objective is to spread the use of open-source software technologies among SMEs by means of an ICT platform that allows SME users to adopt open-source software developed by SME providers according to their needs. Such an exchange should foster the development of new software applications, allowing the growth of both SME users and providers. The NBE takes inspiration from the idea of sharing a distributed ICT platform, but it focuses on developing networking opportunities for the SMEs registered in the NBE. In particular, as depicted in Figure 1, the NBE consists of a set of registered SMEs that interact with each other through an Intelligent System Engine (ISE), whose main aim is to discover new business opportunities through networking integration. SME networking opportunities are found by the ISE, which evaluates SMEs with the aim of finding sustainable cooperation solutions. The ISE works both by replying to a specific SME request (the Pull Approach) and by scanning the NBE for new business opportunities to suggest (the Push Approach). In the Pull Approach, an SME informs the system about a deficiency (either a strategic or an operational deficiency or shortage) in its activities; this triggers the ISE, whose Network Engine searches the NBE to find possible partners that can help the SME solve its deficiency. Possible partners are evaluated and ranked by the ISE Network Catalyser; potential network partners are ranked according to their potential ability to solve the SME's problems and their attitude, which is measured according to their behaviour in previous relationships.






Figure 1. The Networking Business Ecosystem: a network of SMEs in which the pull approach is triggered by an SME and the push approach is triggered by the ISE



SMEs whose rank exceeds a threshold level are selected and proposed for a networked solution. The Push Approach can work in different ways, as explained in Section III. After a relationship is suggested and established, the network is represented as a separate entity, because it has properties that cannot be gleaned from the information of the individual component firms. Features such as the governance system, the network structure, and the partners' commitment should be evaluated and stated by the partners to complete the network description. The network is then evaluated by calculating the performance indicators needed to trigger the evolutionary use of the NBE.
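The Pull Approach described above can be sketched in code. The following is an illustrative Python sketch, not part of the SDBE platform: the field names, weights and threshold are hypothetical assumptions used only to show the filter-and-rank mechanism.

```python
# Hypothetical sketch of the Pull Approach: an SME declares a deficiency,
# candidate partners are ranked by their ability on that need and by their
# past behaviour, and only those above a threshold are proposed.

def pull_search(deficiency, registered_smes, threshold=0.6,
                w_ability=0.7, w_behaviour=0.3):
    proposals = []
    for sme in registered_smes:
        ability = sme["abilities"].get(deficiency, 0.0)  # fit on the declared need
        if ability == 0.0:
            continue                                     # cannot help at all
        rank = w_ability * ability + w_behaviour * sme["behaviour_score"]
        if rank >= threshold:
            proposals.append((sme["name"], round(rank, 2)))
    return sorted(proposals, key=lambda p: p[1], reverse=True)

registered = [
    {"name": "Alpha", "abilities": {"logistics": 0.9}, "behaviour_score": 0.8},
    {"name": "Beta",  "abilities": {"logistics": 0.5}, "behaviour_score": 0.4},
    {"name": "Gamma", "abilities": {"design": 0.9},    "behaviour_score": 0.9},
]
result = pull_search("logistics", registered)
print(result)
```

Only the partner whose combined rank clears the threshold is proposed; the behaviour score is the hook through which the Evolutionary approach (Section III) feeds back into partner selection.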

Therefore, the NBE consists of several components. A Business Modelling Language (BML) Editor is used by each SME to describe its business characteristics; it is based on the Semantics of Business Vocabulary and Business Rules (SBVR) approach proposed by the OMG (OMG, 2006). A Knowledge Base (KB) system, that is, the set of models and ontologies used to represent SMEs and their requests, is obtained by introducing into Zachman's framework of enterprise representation a network-oriented vision of the Porter Value Chain concept. The ISE consists of a Network Engine that processes the business discovery rules supporting the pull and push modalities. Finally, the Evolutionary System is a set of algorithms that allows the NBE knowledge to evolve on the basis of network results. The whole NBE is implemented on a distributed peer-to-peer architecture whose main technical characteristics have been inherited from the EU project DBE.

III. The Intelligent System Engine (ISE)

The ISE works according to the three approaches depicted in Figure 2: the Operational, the Strategic and the Evolutionary.

Figure 2. The ISE: the strategic and operational rules sets

Strategic approach





As observed by Menard (Menard, 2004), resource pooling is one of the defining features of firm networks. Hybrids seem to exist because markets are perceived as unable to adequately bundle the relevant resources and capabilities, while integration in a hierarchy would reduce flexibility by creating irreversibility and weakening incentives. Pooling resources in a network organisation, on the other hand, makes it possible to keep strong incentives while maintaining organisational flexibility. The Strategic approach deals with resource pooling at the strategic level. It works according to the following steps:

• SME strategic asset assessment: each SME is asked to describe its strategic assets within Zachman's framework of enterprise representation; strategic assets are classified into main categories according to the framework. Each asset is first described linguistically by the firm; the firm then classifies it as critical or not critical for its development and, finally, evaluates its own strength on the asset on a Likert scale.

• SME strategic asset needs: using the same framework, each SME is asked to describe the strategic assets it needs. Again, each asset is described linguistically, classified as critical or not critical and, finally, its importance for the firm's development is evaluated on a Likert scale.

As the reader can notice, the two previous steps perform a kind of SWOT analysis of strategic assets. Note that the criticality evaluation refers to the necessity of developing or keeping the asset within the firm in order to reduce opportunism, hold-up or hold-out risks, or because of properties of other correlated assets. Once the SWOT phase has been completed, the ISE launches the partner search, which consists of two steps: the semantic search and the compatibility search. For each SME declaring an asset need whose importance is higher than a threshold level, the ISE searches for the SMEs that have a positive assessment on that asset; this is made possible by the semantic engine of the ISE. The result of the semantic search is a set of SMEs, i.e. the semantic set, which have developed the required asset. The compatibility search then runs over this set: the ISE searches for the SMEs having the highest strength on that asset, reducing the semantic set to a qualified set. Once the partner search is over, the ISE suggests possible solutions for the needing SME according to the following rule set:

R1: IF the required asset is critical AND the partner's asset is also critical THEN DEVELOP A PARTNERSHIP

R2: IF the required asset is critical AND the partner's asset is not critical THEN INSOURCE THE ASSET

R3: IF the required asset is not critical AND the partner's asset is critical THEN OUTSOURCE THE ASSET

R4: IF the required asset is not critical AND the partner's asset is not critical THEN DEVELOP A TRANSACTION


The rationale of the above rules is quite evident in light of organisational economics. If both assets are critical, the partners are invited to develop a partnership, that is, a long-term relationship, in order to reduce opportunism for both. If the asset is critical for the requiring firm but not for the partner, the requiring firm should try to insource the asset from the partner. If the asset is not critical for the demanding firm but critical for the partner, it should outsource the asset to the partner. Finally, if the asset is critical for neither of the involved firms, they should simply organise a transaction.
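The four rules amount to a simple decision table over the two criticality flags. A minimal encoding, with return strings following the rationale given in the text (the function name is illustrative):

```python
# Minimal encoding of rules R1-R4: asset criticality on both sides
# determines the suggested governance form.

def suggest_governance(required_asset_critical, partner_asset_critical):
    if required_asset_critical and partner_asset_critical:
        return "DEVELOP A PARTNERSHIP"   # R1: reduce opportunism for both
    if required_asset_critical:
        return "INSOURCE THE ASSET"      # R2: bring the critical asset in-house
    if partner_asset_critical:
        return "OUTSOURCE THE ASSET"     # R3: let the partner keep its critical asset
    return "DEVELOP A TRANSACTION"       # R4: a simple market exchange suffices

print(suggest_governance(True, True))    # both critical -> partnership
print(suggest_governance(False, False))  # neither critical -> transaction
```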

Operational approach

The Operational approach is more operations oriented. It allows both Pull and Push execution; it works by scanning firm data through a set of Operational Rules aimed at finding bottlenecks in the exploitation of resources. Operational Rules work in three phases: the first, solution discovery, aims at detecting the firm's situation and, where appropriate, suggesting solutions; the second, partner search, aims at selecting possible cooperation partners showing the required features; and the third, fitting evaluation, aims at ranking the selected SMEs. Activities are classified by firms as core or not core according to their strategic relevance (strategic evaluation); such strategic relevance depends on their visibility from the customers' point of view, the efficiency the firm is able to provide, and the risk associated with externalization. Furthermore, the firm states whether or not an activity is a bottleneck (capacity evaluation): if an activity is classified as a bottleneck, the difference between available and required capacity should be considered, while if the resources underlying the activity are underexploited, the difference between available and average capacity is the amount available for establishing relationships.

In the solution discovery phase, after considering these differences, which are evaluated as significant if they exceed the selected threshold level, their cause is detected through a comparison with the interface activities: a misuse in a certain activity may indeed be caused by problems found in other activities. For example, warehousing resources may seem insufficient because a bottleneck in transportation capacity hinders the correct delivery of goods. Through the application of the business discovery rules, the problem is traced back to a lack or an excess in a core or a not-core activity, so that four different kinds of solution can be suggested by the ISE, as shown in Table 1.


Strategic evaluation   Capacity evaluation   BDR solution
Core                   Lack                  MAKE
Not core               Lack                  BUY
Core                   Excess                FIND NEW CUSTOMER
Not core               Excess                SELL

Table 1. Business Discovery Rules (BDR) rationality

MAKE is the solution suggested for a lack in a core activity: the activity should be kept internal, so the system suggests an insourcing strategy that allows the firm to acquire the requested resources from the best-qualified firms in the ecosystem.

BUY is the solution suggested for a lack in a not-core activity: the lacking activity can be outsourced to a registered partner or assigned to a firm that itself has a trusted outsourcer. This solution can take different forms: "outsourced integration", i.e. shifting from making the activity to buying it, when the efficiency or better price provided by a partner suggests that the firm should no longer perform a not-core activity; "exploiting integration", i.e. the outsourcing of an activity not yet performed internally, when a firm wants to exploit a new business opportunity by starting a new activity and chooses to rely on cooperation in order to leverage other firms' experience, skills and resources and to reduce risks; and "virtual organizing integration", i.e. a collaborative outsourcing that allows firms to perform new activities by exploiting the scale and scope economies of resource pooling (Cuccia et al., 2006).

FIND NEW CUSTOMER is the solution suggested for an excess in a core activity accompanied by a general under-exploitation situation; the excess can be used to enlarge the customer base in an existing market or to find new markets.

SELL is the solution suggested for an excess in a not-core activity, or for an isolated excess: in this case the resources can be offered to other firms needing to outsource part of their production or other activities belonging to their value chain.
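The Business Discovery Rules of Table 1 reduce to a lookup over the two evaluations. A minimal illustrative sketch (the solution labels come from the text; the extra conditions attached to FIND NEW CUSTOMER, such as the general under-exploitation situation, are omitted here for brevity):

```python
# Sketch of the Business Discovery Rules (Table 1): the strategic
# evaluation (core / not core) and the capacity evaluation (lack /
# excess) jointly select a solution.

BDR = {
    ("core",     "lack"):   "MAKE",               # insource capacity
    ("not core", "lack"):   "BUY",                # outsource the activity
    ("core",     "excess"): "FIND NEW CUSTOMER",  # exploit spare core capacity
    ("not core", "excess"): "SELL",               # offer resources to others
}

def bdr_solution(strategic_eval, capacity_eval):
    return BDR[(strategic_eval, capacity_eval)]

print(bdr_solution("core", "lack"))        # a lack in a core activity
print(bdr_solution("not core", "excess"))  # an excess in a not-core activity
```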

After locating the most suitable solution for each detected problem, the system starts the partner search phase in order to find the right partner to solve the deficiency. This search consists of three steps: the semantic search, the compatibility search and the numerical fit. The semantic search is made easier by the common description used by the firms to fill the knowledge base: the ISE engine usually compares values belonging to the same activity field for all the partners involved in the cooperation, or it compares semantic descriptions of products and resources or of firms and customers, and selects firms whose descriptions are "near" enough. The compatibility search aims at matching firms experiencing complementary deficiencies: firms with a shortage should only receive cooperation proposals from firms with an excess in the same activity, in order to guarantee capacity stream compatibility. The numerical fit aims at finding, among all the pre-selected firms, the ones able to provide all the capacity requested, or to exploit all the resources provided by the selecting firm, thus optimizing the cooperation exchange.
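The three-step pipeline can be sketched as follows. This is an illustrative reduction, not the SDBE implementation: the semantic step is collapsed to exact activity matching (the real system uses the ontology-based semantic engine described above), and all field names are hypothetical.

```python
# Illustrative pipeline for the three-step partner search:
# semantic search -> compatibility search -> numerical fit.

def partner_search(requester, candidates):
    # 1. Semantic search: restrict to the same activity field.
    semantic_set = [c for c in candidates
                    if c["activity"] == requester["activity"]]
    # 2. Compatibility search: complementary deficiencies only
    #    (a shortage should meet an excess in the same activity).
    compatible = [c for c in semantic_set
                  if c["excess"] > 0 and requester["shortage"] > 0]
    # 3. Numerical fit: prefer partners able to cover the whole request.
    return sorted(compatible,
                  key=lambda c: c["excess"] >= requester["shortage"],
                  reverse=True)

requester = {"activity": "transport", "shortage": 10}
candidates = [
    {"name": "A", "activity": "transport", "excess": 15},
    {"name": "B", "activity": "transport", "excess": 4},
    {"name": "C", "activity": "warehousing", "excess": 20},
]
names = [c["name"] for c in partner_search(requester, candidates)]
print(names)
```

Firm C is filtered out in the semantic step, and A outranks B in the numerical fit because it can cover the full shortage on its own.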

The fitting evaluation phase ranks the partners selected in the previous step; this is done by analysing business affinity parameters, suggested by the firm itself, and previous cooperation and behaviour parameters, fed by the evolutionary approach and aimed at better rewarding well-performing or frequently selected firms.



Evolutionary approach

The Evolutionary Approach is primarily based on learning: it allows the system to evolve in an experience-based way, since it takes into account the behaviour of registered SMEs in previous relationships when establishing new cooperation solutions. Two different kinds of evolution compose the overall approach: population evolution and searching evolution.

Population evolution is based on recording the links created between the registered SMEs. After a cooperation solution has been established, the resulting structure should be described in terms of governance mode, network topology, cooperation frequency and partners' satisfaction. Partners experiencing satisfying relationships, and usually relying on cooperation to solve their deficiencies, are invited to join stable networks and to find business opportunities with the other nodes belonging to the network, even those not directly linked. In this way, more embedded ties are established between cooperative partners, improving trust, joint problem solving and information transfer. Furthermore, understanding the network a firm belongs to is useful for identifying all the potentially available business opportunities, while by looking at the network ties the firm can understand how to gain access to these opportunities. In order to allow the

population evolution, three different kinds of information should be recorded for each pair of registered firms: cooperation frequency, cooperation duration and cooperation satisfaction. The cooperation frequency is calculated as the ratio between the number of cooperations undertaken by a pair of firms and the maximum number of cooperations undertaken by any pair of firms registered in the ecosystem over a given time horizon T; the cooperation duration is calculated as the ratio between the overall cooperation duration for a pair of firms and T; finally, the cooperation satisfaction is an aggregated index taking into account all the evaluations provided by the firm for the cooperation itself (in terms of revenue increase, access to wider markets or customer base enlargement, image improvement and so forth) and for the partner's behaviour (contribution to network success, timeliness, flexibility, commitment, reliability). The cooperation frequency allows us to map each SME's network: denoting by f_kj the cooperation frequency between the k-th and the j-th firm, the k-th firm's network is composed of all the firms j* for which f_kj* is greater than a threshold (f_kj* ≥ th), all the firms i* for which f_j*i* is greater than a threshold (f_j*i* ≥ th), and so forth. Furthermore, by analysing the distribution of the network links it is possible to infer the network topology: setting a_ij = 1 if f_ij > 0 and a_ij = 0 otherwise, we can calculate the network connectivity as in (1)


C = \frac{2 \sum_{i} \sum_{j>i} a_{ij}}{n(n-1)} \qquad (1)

where n is the overall number of firms belonging to the network; the higher the connectivity index, the more distributed are the network relationships, and thus network communication and power. However, a better understanding of network roles is provided by computing the graph centrality based on betweenness (Freeman, 1979), that is, a measure of the degree of centralization of the most centralized node compared to the centralization shown by the other nodes belonging to the network; this measure is computed as in (2).



C_B = \frac{2 \sum_{i=1}^{n} \left[ C_B(p^*) - C_B(p_i) \right]}{n^3 - 4n^2 + 5n - 2} \qquad (2)

where C_B(p^*) is the betweenness centrality of the most central node, and the betweenness centrality of a node p_k is

C_B(p_k) = \sum_{i} \sum_{j>i} \frac{g_{ij}(p_k)}{g_{ij}}


where g_ij represents the number of geodesics linking nodes i and j, while g_ij(p_k) is the number of geodesics linking nodes i and j that contain node p_k. The more centralized a network is, the more clearly a node emerges that exercises, even if not formally, a crucial role for the network's survival. The network topology has implications for the governance structure: for example, a star structure, the most centralized network possible, requires leadership in the hub node, while a fully connected structure, the network whose connectivity equals one, is suitable for a relational network. After mapping the network, we can analyse firms' cooperation attributes in order to suggest how cooperation relationships should evolve. Using a two-level evaluation of each attribute, we face the situations shown in Tables 2a and 2b. Since the temporal horizon considered is limited, high duration-high frequency relationships are not possible.
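The two network measures of equations (1) and (2) can be sketched in a few lines of Python. This is an illustrative implementation (not part of the SDBE platform), assuming adjacency lists over comparable node labels and enumerating geodesics by brute force, which is adequate only for tiny example graphs:

```python
# Illustrative computation of the connectivity index of equation (1) and
# Freeman's betweenness-based graph centralization of equation (2).

def geodesics(adj, a, b):
    """All shortest simple paths between a and b (brute-force DFS)."""
    best, paths = [None], []
    def dfs(path):
        last = path[-1]
        if last == b:
            if best[0] is None or len(path) < best[0]:
                best[0], paths[:] = len(path), [list(path)]
            elif len(path) == best[0]:
                paths.append(list(path))
            return
        if best[0] is not None and len(path) >= best[0]:
            return                       # prune paths already too long
        for nxt in adj[last]:
            if nxt not in path:
                path.append(nxt)
                dfs(path)
                path.pop()
    dfs([a])
    return paths

def connectivity(adj):
    """Equation (1): 2 * number of links / (n * (n - 1))."""
    n = len(adj)
    links = sum(1 for i in adj for j in adj[i] if j > i)
    return 2 * links / (n * (n - 1))

def betweenness(adj, k):
    """C_B(p_k): over all pairs, the share of geodesics passing through k."""
    cb = 0.0
    nodes = sorted(adj)
    for i in nodes:
        for j in nodes:
            if i < j and k not in (i, j):
                paths = geodesics(adj, i, j)
                cb += sum(1 for p in paths if k in p) / len(paths)
    return cb

def centralization(adj):
    """Equation (2): Freeman's betweenness-based graph centralization."""
    n = len(adj)
    scores = [betweenness(adj, k) for k in adj]
    cmax = max(scores)
    return 2 * sum(cmax - c for c in scores) / (n**3 - 4 * n**2 + 5 * n - 2)

# Star network: node 0 is the hub, nodes 1-3 are leaves.
star = {0: [1, 2, 3], 1: [0], 2: [0], 3: [0]}
print(connectivity(star))    # half of all possible links exist
print(centralization(star))  # the star is maximally centralized
```

On the star example the connectivity is 0.5 and the centralization is 1.0, matching the remark in the text that the star is the most centralized network possible.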

Let us first examine "low duration-high frequency" relationships. These are typically market-like links, such as subcontracting; when they are also satisfactory (Table 2a), it is possible to argue that the transaction features require this kind of arrangement, characterised by short-term and repetitive contracts. In this kind of relationship a firm usually relies on the same trusted partners, but sometimes puts them in competition in order to maintain market pressure. On the other hand, if satisfaction is low (Table 2b), this may mean that, while the firms involved have correctly chosen their partners (otherwise the frequency would not be so high), they have not properly shaped the relationship: the governance does not fit the transaction features. This may suggest maintaining the governance structure while redesigning the transaction parameters (re-design subcontracting), or moving to a more structured relation (move to partnership).

Let us now consider "low frequency-high duration" relationships. If such relationships are satisfactory (Table 2a), they can be classified as partnerships, since they are characterised by a long-term relationship among the parties; on the other hand, if the relationship is not satisfactory (Table 2b), the firms may either redesign the partnership arrangement (to improve the coordination mechanism, for instance) or, if the transaction is not critical, consider shifting to a subcontracting arrangement.


Table 2a. High Satisfaction

                 Duration high   Duration low
Frequency high   Not possible    Subcontracting
Frequency low    Partnership     Occasional

Table 2b. Low Satisfaction

                 Duration high                Duration low
Frequency high   Not possible                 Re-design subcontracting / move to partnership
Frequency low    Re-design partnership /      Do not repeat
                 shift to subcontracting




Finally, let us consider "low duration-low frequency" relationships, the case of the "occasional relationship". If satisfaction is high (Table 2a), the partners should think about structuring their relationship: depending on the nature of the transaction, it should evolve into a subcontracting or a partnership arrangement. On the other hand, if satisfaction is low (Table 2b), the occasional relationship should not be repeated.
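The evolution suggestions of Tables 2a and 2b can be captured by a small decision function. This is an illustrative sketch whose suggestion strings paraphrase the text; the two-level attribute values are encoded as "high" and "low":

```python
# Sketch of the relationship-evolution logic of Tables 2a/2b, indexed
# by frequency, duration and satisfaction (all evaluated on two levels).

def evolve(frequency, duration, satisfied):
    if frequency == "high" and duration == "high":
        return "not possible within the limited horizon"
    if frequency == "high":                      # market-like, repetitive links
        return ("keep subcontracting" if satisfied
                else "re-design subcontracting or move to partnership")
    if duration == "high":                       # long-term links
        return ("keep partnership" if satisfied
                else "re-design partnership or shift to subcontracting")
    return ("structure the relationship" if satisfied  # occasional links
            else "do not repeat")

print(evolve("high", "low", True))   # a satisfactory subcontracting link
print(evolve("low", "low", False))   # an unsatisfactory occasional link
```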

Cooperation satisfaction and cooperation frequency also make it possible to run the second kind of evolution: searching evolution. The performance indicators used to evaluate cooperation satisfaction are filled in by the SMEs actually involved in networks suggested by the NBE according to the Operational and Strategic approaches, and by the system itself. SMEs are asked to state their satisfaction with joining the network and to evaluate the other partners' behaviour in terms of contribution to network success, reliability, timeliness, flexibility and trust; on the other hand, the system provides an objective evaluation of each SME's suitability for cooperation, in terms of completeness of the information required, collaboration frequency and use of interoperable IT tools. Firm satisfaction is used to update the rating parameters: if the firm evaluates the experienced solution negatively, this is symptomatic of a wrong choice of the attributes used to select partners; the parameters are therefore modified in order to invert the rating among the pre-selected partners belonging to the cluster.

The other partners' evaluations, weighted with the corresponding partner's score, are combined with the system evaluation and used to update the firm's score, which will afterwards be used by other potential partners to evaluate the firm itself. In this way the system keeps a memory of the behaviour of SMEs in all the cooperations experienced, thus discouraging opportunistic behaviour and strengthening the good reputation of well-performing firms. Indeed, in the partner selection process it is worth taking into account whether the firms have previously cooperated (possibly also considering the frequency and satisfaction of that cooperation) and whether they share a cooperation partner, since firms prefer to work with partners they already know or that are suggested by trusted partners.
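The score update described above can be sketched as a weighted blend. This is a hypothetical illustration, not the SDBE formula: the weights, the inertia term and the field layout are assumptions introduced only to show how partner evaluations weighted by the evaluators' own scores combine with the system evaluation.

```python
# Hypothetical sketch of the searching-evolution score update: partner
# evaluations are weighted by each evaluator's own score, blended with
# the system's objective evaluation, and folded into the firm's score.

def update_score(current, partner_evals, system_eval,
                 w_partners=0.5, w_system=0.3, inertia=0.2):
    """partner_evals: list of (evaluation, evaluator_score) pairs in [0, 1]."""
    total_weight = sum(w for _, w in partner_evals)
    peers = (sum(e * w for e, w in partner_evals) / total_weight
             if total_weight else current)          # no evaluations: keep score
    return inertia * current + w_partners * peers + w_system * system_eval

score = update_score(
    current=0.6,
    partner_evals=[(0.9, 0.8), (0.5, 0.4)],  # (evaluation, evaluator's score)
    system_eval=0.7,
)
print(round(score, 3))
```

Weighting each evaluation by the evaluator's own score is what makes well-reputed firms' opinions count more, which is the mechanism the text describes for discouraging opportunistic behaviour.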

IV. Conclusion

This paper presents ongoing research whose aim is to build an ICT ecosystem for SME business networking. The project is ambitious and of great relevance in terms of scientific interest and public policy. The Networking Business Ecosystem is aimed at discovering new networking opportunities for participating SMEs by combining Strategic, Operational and Evolutionary points of view. The research core of the project is the ISE, whose set of rules is designed to locate and discover new networking solutions. To do so at the Strategic level, the ISE performs an asset SWOT analysis among the SMEs in the ecosystem and searches for possible network solutions, suggesting governance structures according to the main results of organisational economics; at the Operational level it uses a formalisation of the Porter Value Chain approach, while at the Evolutionary level it uses learning and knowledge evolution. While this paper provides only a general description of the Networking Business Ecosystem, forthcoming papers will describe its components in further detail.


Acknowledgements

This research has been developed within the SDBE project funded by the Regional Government of Sicily under the POR Programme. The authors wish to thank the Engisud S.p.A. staff involved in the project for their valuable contribution to this research.




References

Cuccia, L., La Commare, U., Perrone, G., Scarpulla, L., and Alessi, M. (2006) "Digital Business Ecosystem: Business Discovery applications supporting SME networks and ICT adoption", Proceedings of the MITIP 2006 International Conference, 11-12 September, Budapest, Hungary, pp. 167-172.

EU Commission (2007a) FP7 Capacities Work Programme: Research for the Benefit of SMEs.

EU Commission (2007b) Cooperation Theme 4: Nanosciences, Nanotechnologies, Materials and New Production Technologies - NMP, Work Programme 2007.

EU Commission (2007c) ICT - Information and Communication Technologies, Work Programme 2007.

Grandori A., Soda G. (1995) Inter-Firm Networks: Antecedents, Mechanisms and Forms. Organisation

Studies, 16/2, pp.183-214.

Havnes P.-A., Senneseth K. (2001) A Panel Study of Firm Growth among SMEs in Networks. Small

Business Economics, Vol.16, pp. 293–302.

Freeman L.C. (1979) Centrality in Social Networks: Conceptual Clarification. Social Networks, Vol. 1, pp. 215-


Mazzarol T. (1998) Partnerships: A Key to Growth in Small Business. 43rd ICSB Conference, Singapore.

Menard C. (2004) The Economics of Hybrid Organizations. Journal of Institutional and Theoretical

Economics, Vol. 160, pp. 345–376.

Moore J. F. (1993) Predators and Prey: A New Ecology of Competition. Harvard Business Review.

OMG (2006) Semantics of Business Vocabulary and Business Rules (SBVR), dtc/06-03-02,

Powell W.W. (1990) Neither market nor hierarchy: network forms of organization. Research in

Organizational Behaviour, Vol. 12, pp. 295-336.

Yeh-Yun Lin C., Zhang J. (2005) Changing Structures of SME Networks: Lessons from the Publishing

Industry in Taiwan. Long Range Planning Vol. 38, pp. 145 – 162.



Meta-modelling “object”: expression of semantic constraints in complex

data structure


M. Lamolle 1 , L. Menet 1,*

1 LINC-THIM, University Paris 8, IUT of Montreuil, Paris, France

* Corresponding author: +33148703461

Abstract: In managing their parameters, most Information Systems face heterogeneity in both data and solutions. Consequently, the management of this data becomes complex, inefficient, insecure and expensive, and the need arises for a structured formalism to handle complex data. We suggest a data integration solution based on an XML architecture. This architecture embeds Master Data Management in the Information System. The unification of master data is primarily done by the definition of models: XML Schema documents describing complex data structures. We propose to enrich the structure and the semantics of these models by defining a metamodel. In the metamodel, we introduce semantic object relations for defining links between concepts. The resulting metamodel is used to define a UML profile and to optimize operations such as model validation, data factorization and tree representation. Moreover, the UML profile is exploited to ease the definition of models.

Keywords: interoperability, Master Data Management, Metamodel, Metaschema XML, XSD Language

I. Introduction

In the context of the interoperability of heterogeneous data sources, two main data integration approaches exist: the virtual approach (or mediator) (Lenzerini 2002) and the materialized approach (or data warehouse) (Widom 1995). We propose an implementation of the second approach through an XML architecture called EBX.Platform. This architecture allows companies to unify the management of their strategic data without any change to their databases or existing applications. This unification is conducted in three ways: (i) definition of the main data model in the XML Schema language, (ii) persistence in a common repository, specific to the product, in a remote or integrated database, and (iii) availability of a generic, user-friendly web interface for consulting and updating the data and for synchronizing the repository with the company's Information System.

One of the major benefits of EBX.Platform for companies is that the repository manages the inheritance of instances. The data factorization enabled by inheritance allows data duplication and its related problems (costs and risks) to be avoided. To implement the inheritance mechanism, a first conceptual model was developed. This paper presents our XML data integration solution and the improvement of this conceptual model through the definition of an object metamodel. The resulting metamodel is used to define a UML profile which enables us to describe a formal approach to the design of Master Data models.

II. EBX Platform

The company Orchestra Networks offers a Master Data Management product called EBX.Platform. Based on Java and XML Schema, EBX.Platform is a standard, non-intrusive solution that helps companies unify and manage their reference business data and parameters across their Information System.


Master Data Management (MDM)

Master Data Management is a way of unifying, managing and integrating reference data across the company's Information System. This data can be of several kinds:

• Products, services, offers, prices

• Customers, providers

• Legal data, financial data

• Organizations, structures, persons

Currently, most Information Systems suffer from heterogeneity in both data and solutions. Three kinds of heterogeneity can be distinguished:

• Diversity of storage systems (databases, directories, files…)

• Diversity of data formats (proprietary files, XML documents, tables…)

• Diversity of the solutions used to manage the different types of data

Consequently, the management of the data becomes complex, insecure, inefficient and expensive. Moreover, using different applications to manage this diversity introduces redundancy in both data and tools. An Information System without MDM then exhibits several problems:

• No unified vision of the reference data

• Data duplicated across several systems

• No coherence between a company and its subsidiaries

• No single tool for users

EBX.Platform is a Master Data Management solution, based on powerful concepts, designed to solve these problems.


EBX.Platform’s concepts

EBX.Platform is based on two concepts: (i) an adaptation model, which is a data model for a set of Master Data, defined as an XML Schema document, and (ii) an adaptation, which is an XML instance of the adaptation model containing the Master Data values. Using XML Schema makes it possible to specify, for each node of the data model, an existing data type according to the W3C standard (W3C, 2004). EBX.Platform supports the main XML Schema datatypes, as well as multi-occurrence complex types. Indeed, the XML Schema formalism allows constraints (enumeration, length, lower and upper bounds, etc.), information about an adaptation and its instances (access connector, Java factory class, access restrictions, etc.) and layout information (label, description, formatting…) to be specified for each node of the schema. Each node of the adaptation model declared as instantiable corresponds to a node in the adaptation. If an adaptation model has several adaptations, we consider that an adaptation tree is handled (cf. figure 1).



Figure 1. An adaptation model and its instances

In an adaptation, each node has the following properties: (i) an adaptation value: if this value is not defined in the current adaptation, it is inherited from its ancestor (parent adaptation), recursively; if no ancestor defines a value, the value is inherited, by default, from the data model; (ii) an access right for descendants: the adaptation node can be hidden from descendants, read-only for them, or readable and writable by them.
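The value-resolution rule above (look up the value locally, then in ancestor adaptations, then fall back to the data model) can be sketched as follows. This is an illustrative sketch only; the class and function names are ours, not the EBX.Platform API.

```python
# Illustrative sketch (not the EBX.Platform API): resolving an adaptation
# value by walking up the adaptation tree, falling back to the data model.

class Adaptation:
    def __init__(self, name, parent=None, values=None):
        self.name = name
        self.parent = parent          # parent adaptation, or None for the root
        self.values = values or {}    # locally defined values only

def resolve(adaptation, node_path, model_defaults):
    """Return the value of node_path for this adaptation.

    Walk up the ancestor chain; if no ancestor defines the value,
    fall back to the default declared in the data model.
    """
    current = adaptation
    while current is not None:
        if node_path in current.values:
            return current.values[node_path]
        current = current.parent
    return model_defaults[node_path]

# Usage: a child adaptation overrides only one value and inherits the rest.
defaults = {"/price/currency": "EUR", "/price/vatRate": 19.6}
europe = Adaptation("europe", values={"/price/vatRate": 20.0})
france = Adaptation("france", parent=europe)

print(resolve(france, "/price/vatRate", defaults))   # inherited from 'europe'
print(resolve(france, "/price/currency", defaults))  # falls back to the model
```

This is the factorization benefit mentioned earlier: only the values that differ from an ancestor need to be stored.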

Figure 2. Architecture of EBX.Platform

The architecture of EBX.Platform (see figure 2) is based on three important components:

• EBX.Platform, with EBX.Manager, provides both business and technical users with a Web-based tool for Master Data Management. EBX.Manager dynamically generates a rich user interface from Master Data models without any programming. Figure 3 shows the graphical interface generated from an adaptation model:



Figure 3. EBX.Manager web-based tool

• EBX.Platform Engine is based on a technology that allows multiple instances of Master Data to be managed in a core repository. Its main features are data validation, data configuration, lifecycle management and access rights management.

• EBX.Platform services allow the integration of Master Data with Information Systems. They provide import/export features and integration with third-party tools such as EAI, ETL and ESB products and directories. Custom MDM services can be developed using a standard Java API; using this API, it is possible to integrate new features in EBX.Platform, for example reporting, data historization, process management, etc. Figure 4 illustrates the use of services to define a workflow engine:



Figure 4. Definition of a workflow engine using services

In this custom service, it is possible to define tasks for users. The workflow mechanism makes it possible to define ordered tasks to be performed by users, where each task of the process has to be fully completed before the next one starts. This service can be used for the management of projects in which tasks are assigned to teams in a precise order.
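The strictly ordered behaviour of such a workflow can be sketched as follows. The Task and Workflow classes are hypothetical and only illustrate the "each task before the next" rule; they are not the actual EBX.Platform service API (which is Java-based).

```python
# Minimal sketch of an ordered-task workflow (hypothetical, not the
# EBX.Platform service API): a task must be completed before the next
# one becomes available.

from dataclasses import dataclass, field

@dataclass
class Task:
    name: str
    assignee: str
    done: bool = False

@dataclass
class Workflow:
    tasks: list = field(default_factory=list)
    current: int = 0

    def next_task(self):
        """Return the first uncompleted task, or None if all are done."""
        return self.tasks[self.current] if self.current < len(self.tasks) else None

    def complete(self, name):
        """Mark the current task done; refuse to complete tasks out of order."""
        task = self.next_task()
        if task is None or task.name != name:
            raise ValueError(f"task {name!r} is not the current task")
        task.done = True
        self.current += 1

wf = Workflow([Task("specify", "design team"), Task("validate", "manager")])
wf.complete("specify")
print(wf.next_task().name)  # tasks are unlocked strictly in order
```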

III. Meta-modelling “object”

We introduce object features that are added to the conceptual model, and we propose a metaschema of an adaptation model, in order to consolidate both the conceptual model and the existing data validation. Our first goal is to add object metadata to the adaptation model. We can use the following notions in terms of relations between objects: generalization, specialization and dependence (aggregation or composition). To illustrate these concepts, let us take an example frequently used in UML textbooks. This example defines five concepts: Person, Teacher, Student, University and Department. These concepts are semantically linked: the concept Person is the generalization of Student and Teacher, and the concept University is a composition of Departments. These semantic links have strong impacts on data factorization and optimization. Moreover, a composition between concepts creates a strong dependency between them. Consider, in our example, the dependence (more precisely, the composition) between the University and Department concepts. The composition implies that there cannot be instances of the Department concept without instances of the University concept. An optimization can therefore be made in the instance deletion process: the deletion of an instance of the University concept implies that all dependent instances (departments) are removed as well. In the case of an aggregation between these concepts, however, the aggregated instances (departments) are not deleted as long as they are used by other concepts.
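The deletion optimization just described can be sketched as follows: composition cascades deletes, while aggregation only releases a shared reference. The data structure and names are ours for illustration, not EBX.Platform code.

```python
# Sketch of the deletion rule discussed above (illustrative names, not
# EBX.Platform code): composition cascades deletes, aggregation does not.

class Node:
    def __init__(self, name):
        self.name = name
        self.compositions = []  # owned children: deleted with their owner
        self.aggregations = []  # shared children: kept while referenced elsewhere

def delete(node, ref_counts):
    """Delete node; cascade over compositions, decrement shared references."""
    removed = [node.name]
    for child in node.compositions:
        removed += delete(child, ref_counts)
    for child in node.aggregations:
        ref_counts[child.name] -= 1
        if ref_counts[child.name] == 0:   # no other concept uses it any more
            removed += delete(child, ref_counts)
    return removed

maths = Node("maths_dept")
university = Node("university")
university.compositions.append(maths)   # a department cannot outlive it
print(delete(university, {}))           # both nodes are removed together
```

With an aggregation instead, the department would survive as long as its reference count stayed above zero.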

Generalization and specialization relations are used to factorize data in our system. In the generalization case, common attributes are gathered in a general concept. For example, attributes such as first name and last name are common to the concepts Student and Teacher; to factorize data, these two attributes are migrated to the concept Person, avoiding the duplication of their definition in the concepts Student and Teacher. Following (Zerdazi and Lamolle, 2005), we propose metadata to be included in the XML schema to implement these notions. As the W3C recommends, the XML Schema extensions that we designed are defined in the « appInfo » element, as in the following example of the composition between University and Department:



[XML listing not reproduced: the extension declares the composition and gives the path of the Department concept in the schema.]
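Since the original listing is not reproduced above, the following sketch shows what such an appInfo extension might look like and how it can be read back with standard tooling. The osd namespace and the composition element name are hypothetical, not the actual EBX.Platform vocabulary.

```python
# Hypothetical sketch: a composition declared inside xs:annotation/xs:appInfo
# and extracted with the standard library. Element names are illustrative only.
import xml.etree.ElementTree as ET

SCHEMA_FRAGMENT = """
<xs:element name="university" xmlns:xs="http://www.w3.org/2001/XMLSchema"
            xmlns:osd="urn:example:osd">
  <xs:annotation>
    <xs:appInfo>
      <osd:composition target="/root/university/department"/>
    </xs:appInfo>
  </xs:annotation>
</xs:element>
"""

XS = "{http://www.w3.org/2001/XMLSchema}"
OSD = "{urn:example:osd}"

root = ET.fromstring(SCHEMA_FRAGMENT)
app_info = root.find(f"{XS}annotation/{XS}appInfo")
composition = app_info.find(f"{OSD}composition")
print(composition.get("target"))  # path of the dependent concept in the schema
```

Because appInfo content is opaque to an XML Schema validator, such extensions pass schema validation unchecked, which is precisely the issue addressed next.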

The facets and concepts thus defined can be parsed by EBX.Platform. Because the XML Schema extensions are defined inside the annotation and appInfo elements, it is not necessary to provide a schema describing the structure of these facets. One resulting issue is that the validation of these extensions is fully delegated to the validation engine of EBX.Platform rather than to the XML Schema engine. To avoid this issue, we have defined a metaschema of an adaptation model describing the structure of the concepts provided by EBX.Platform.

IV. Definition of an UML profile

The definition of an adaptation model is made with the XML Schema technology, whose use, as adapted to the needs of EBX.Platform, implies a wide knowledge of this language. Many XML Schema tools exist, but on the one hand the user can employ XML Schema features which are not implemented by EBX, and on the other hand he is not guided concerning the extensions of EBX. As a result, a formalism must be used to make this modelling easier. In addition to its modelling abilities, UML allows profiles to be defined (Mahmoud 2003) (Pilone and Pitman 2006). A profile specializes the UML formalism for an application field or a particular technology; many profiles have been developed, for example for CORBA and EJB. After studying the XML metaschema, we deduced a UML profile defining the relations between the different concepts introduced by EBX (see figure 5).


Figure 5. Extract of UML profile representing EBX.Platform metamodel

Using the UML extension mechanism enables us to extend the UML formalism with our semantics. This extension is performed using stereotypes and tagged values. Stereotypes are employed to define a new type of element from an existing one. For example, we define a Table element from the existing Class element with the syntax «Table». As a consequence, the Table element is instantiated from the metamodel constructor in the same way as the Class element. Tagged values specify keyword-value pairs that set a property of an existing model element or of a stereotype.

Figure 5 presents some important concepts of an adaptation model. The UML profile allows us to define an adaptation model composed of a root element, itself composed of sub-elements in the XML Schema sense. The semantic features of UML are used to define the relations between our concepts; in particular, we specify, through the inheritance mechanism, that an element can be a table or a simple element. Our profile can then be used to define an adaptation model with the UML formalism. Below, it is applied to an adaptation model based on the Mondial database defined by Peter McBrien:



Figure 6. Part of the definition of an adaptation model using an UML profile

Figure 6 presents the use of our profile to define an adaptation model. Different stereotypes (elements between guillemets) and tagged values can be seen. A stereotype indicates the type of an element; e.g. the «table» stereotype indicates that the given element is a table. The tagged values set specific values for the attributes defined in the given stereotype; in our profile the «table» stereotype has attributes such as its primary key, which can be set using tagged values.
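The mapping from a stereotyped model element to an XML Schema fragment, as performed by the code generator mentioned below, can be sketched as follows. The stereotype names, the osd attributes and the mapping itself are assumptions for illustration, not the actual EBX.Platform generator.

```python
# Illustrative sketch of an XML Schema generator driven by the UML profile:
# stereotyped model elements are mapped to schema constructs. The stereotype
# names and osd:* attributes are assumptions, not EBX.Platform output.

def to_xsd(element):
    """Render a stereotyped model element as an XML Schema fragment."""
    name = element["name"]
    if element["stereotype"] == "table":
        key = element["tagged"]["primaryKey"]   # tagged value from the profile
        return (f'<xs:element name="{name}" osd:table="true" '
                f'osd:primaryKey="{key}"/>')
    # a plain «element» falls back to a simple typed declaration
    return f'<xs:element name="{name}" type="xs:{element["type"]}"/>'

country = {"stereotype": "table", "name": "country",
           "tagged": {"primaryKey": "/code"}}
print(to_xsd(country))
```

Generating the schema from the profile, rather than writing it by hand, is what keeps the designer within the semantics the platform actually supports.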

By defining a metamodel, we improve the data validation process, which enables us to use the XML Schema validation engine in an automatic way. Moreover, the definition of a UML profile ensures that the designer uses the semantics strictly defined by EBX.Platform, avoiding the XML Schema specificities which are not managed. We associate an XML Schema code generator with this UML profile, which makes the definition of an adaptation model easier and more secure.

V. Conclusions and perspectives

We propose a generic data integration solution based on XML technology. Our solution is able to integrate data from several kinds of sources, such as databases or XML documents. We have seen that every adaptation model is a standard XML Schema document, yet some XML Schema features are difficult for the designer of models to handle. Graphical XML Schema tools exist, but they are not restricted to our semantics and cannot be aware of the extensions brought by EBX. We have therefore proposed a metamodel that validates adaptation models transparently using the XML Schema validation engine, and we have shown how UML profiles make the definition of an adaptation model easier.

The follow-up to our work will be the enrichment of our metamodel for the Master Data Management module, allowing semantic constraints to be expressed and validated according to profiles (for example a business language). We will develop two directions. The first is modelling methods and constraint expression (expression of facets): a design formalism such as UML will support EBX schema modelling, and the created schema will be validated by this rule-based modelling. The second is the integration of constraint expression according to profiles (constraints on types or between concepts); this notion attaches semantics to the represented concepts and expresses their dependencies.

We will also bring into the EBX formalism the features of the ODMG standard (ODMG, 1999) (notions of inheritance between models specified directly in the schema, such as specialization, generalization, dependence, etc.), the UML formalism, the advantages of the OWL language dedicated to ontology definition (Kalfoglou and Schorlemmer, 2003), and the advantages of conceptual graphs for the expression of relations and constraints between concepts. The features of OWL and of conceptual graphs will be used to perform inferences on the data, allowing further optimizations.


We would like to thank Orchestra Networks for supporting our research.


Kalfoglou Y., Schorlemmer M. (2003) Ontology mapping: the state of the art. The Knowledge Engineering Review, 18(1), pp. 1-31.

Lenzerini M. (2002) Data Integration: A Theoretical Perspective. 21st ACM SIGMOD International Conference on Management of Data / Principles of Database Systems (PODS'02), pp. 233-246.

Mahmoud N. (2003) VUML: a Viewpoint oriented UML Extension. 18th IEEE International Conference on Automated Software Engineering (ASE'03), pp. 373-376.

McBrien P. The Mondial database.

ODMG (1999) The Object Data Standard: ODMG 3.0, Morgan Kaufmann Publishers.

Pilone D., Pitman N. (2006) UML 2.0 in a Nutshell, O'Reilly.

W3C (2004) XML Schema Part 1: Structures, 2nd Ed.

Widom J. (1995) Research Problems in Data Warehousing. Proceedings of the 1995 International Conference on Information and Knowledge Management (CIKM), Baltimore, Maryland.

Zerdazi A., Lamolle M. (2005) Modélisation des schémas XML par adjonction de métaconnaissances sémantiques. 2ème rencontre des Sciences et Technologies de l'Information, ASTI, Clermont-Ferrand.



A general framework for new product development projects


M. Zolghadri 1,*, C. Baron 2, M. Aldanondo 3, Ph. Girard 1

1 IMS Laboratory, CNRS UMR 5218, Bordeaux, France

2 INSA / LESIA, Toulouse - 3 EMAC / CGI, Albi

* Corresponding author: +33 5 4000 2405

Abstract: Firms seek to procure a sustainable advantage over their competitors, and should therefore be both operationally and strategically efficient. However, as firms work with other companies, they are exposed to threats of misunderstanding, inefficiency, difficult synchronisation, etc. These threats may turn into real business traps. In this paper the authors propose a global framework focused on collaborative new product development projects, whose ultimate goal is to minimize the potentially harmful consequences of collaboration. The framework emphasizes the critical characteristics of any collaboration among partners and helps users gain a clearer understanding of the overall environment of such projects. This understanding not only leads managers towards a better definition of their strategy but also guides them towards greater operational efficiency.

Keywords: new product development framework, business efficiency, uncertainties

I. Introduction

According to M. Porter [1], firms follow two routes to achieve a sustainable advantage: strategic positioning and operational effectiveness. He summarizes strategic positioning as "doing different things than competitors" and operational effectiveness as "doing the same things better than competitors". The latter, Porter notes, was the objective of Japanese companies for many years and allowed their huge growth over several decades. However, a sustainable advantage cannot come from operational effectiveness alone, because competitors will reach the same level of efficiency more or less rapidly. He argues that companies should think about their strategic positioning, doing different things than their competitors, to remain at the top of the market.


Academics and practitioners took up these ideas and looked for methods to support their march towards this goal. For those focused on new product design in particular, approaches such as Concurrent Engineering or Dynamic Product Development were set up [2]. Since then, these methods and approaches have evolved thanks to the capitalisation of experience. However, factors such as the intensification of exchanges between countries, the use of electronic exchanges and, finally, the Internet make these methods easy for every company to use, so the gap between companies becomes continuously smaller. This means that newer paradigms are necessary to cover strategic positioning needs. In this paper, we define a paradigm for New Product Development (NPD) projects which allows a firm to find a sustainable position within the market and operational effectiveness in its business.

The paper is organised as follows. Section two gives an overview of related work in the field of product design projects, focusing on the works that guided us towards the definition of the global framework presented here. Section three defines the concepts necessary and sufficient for the definition and use of this framework. Some practical consequences of the framework are then gathered in section four. Finally, some conclusions and challenging perspectives complete the paper.


II. Related works

The ultimate goal of profitable firms is to gain a sustainable business advantage. To do so, they launch NPD projects and have to be able to manage them as surely as possible throughout their whole lifecycle. These projects are always complex and pose real problems to firms, especially to SMEs, due to their limited financial, technical and human resources. To understand the global situation of NPD projects, two related areas should be studied: product design (and its management) and the design of the network of partners (and its management). Vonderembse et al. [3] look at the network of partners as a consequence of product design. In [4], the authors claim that these processes run in parallel and depend on each other, which means that their management systems have to be connected as well. In the main, however, an NPD project is initiated by customers' needs identified in the very first phase of the lifecycle [5]. To our knowledge, the idea of simultaneous design of product, process and supply chain was first proposed by Charles H. Fine in his book [6]. Fine shows that the conjunction of these design activities forms a fundamental element which can ensure business success. He calls the global framework three-dimensional concurrent engineering (3D-CE) and proposes a method, the Double Helix, to define the firm's strategy based on its relationships with suppliers and customers and on its market position. A strategy cannot be determined realistically if the attributes of the market, first, and those of the product are not considered. That is why, for instance, Fisher [7] distinguishes two categories of product: primarily functional or innovative. A functional product answers customer needs that can be qualified as "basic", while innovative products answer future or potential needs of the market, or answer "basic" needs through new versions of the product or through new design or manufacturing processes. Fisher shows that the supply chain, and therefore an important part of the partners' network, depends directly on whether the product is functional or innovative.

NPD projects have been intensively studied from the technical standpoint of product structure and architecture. Jiao [8] provides a very complete survey of the major works on product family architecture. Among this huge body of results, we would underline the research performed by Fixson [9], who works within the 3D-CE framework and defines a method to design the product family architecture. He studies the product architecture, especially the coupling between components and their interfaces; this coupling and these interfaces allow the very first architecture of the partner network to be defined [10]. Nevertheless, Fixson did not consider the influence of the product architecture on the way the partners of the product development should be selected, nor the way the potential partners' specificities influence the product designers' job, even though many authors, such as Croom [11] and Vonderembse [3], show that the early involvement of partners in the design and realisation phases of an NPD project is an important success factor.

This means that corporate and business strategies ought to consider product design, partnership design and collaboration characteristics from the beginning of any NPD project; this is the main task of strategy. We develop some strategies that firms need in order to control their collaborations in a rational manner. So let us have a deeper look at the concept of strategy. Strategy has been studied in many fields, but it gained a significant role with the work of pioneers such as Porter [12] and Mintzberg [13]. Nonetheless, it still remains fuzzy for many actors, and according to Nollet et al. [14] some academics and practitioners no longer use strategic methods. Evered, cited by Nollet, defines strategy as a "continuous process by which goals are determined, resources are allocated, and a pattern of cohesive actions is promoted by the organisation in developing competitive advantages". In short, a strategy clarifies the actions to carry out, according to a specific roadmap, in order to reach an objective (Andrews, cited in [14]). Thinking about, deciding on and preparing long-term collaboration plans and actions with partners are thus constitutive elements of the definition of a firm's strategy. That is why the framework presented in this paper is used to identify the necessary strategies.



III. Towards a model for the NPD projects

We consider an NPD project as the global business project of a given company, called the focal company, which wants to put "new" products on the market. Our definition of "new" relies on the classification of product strategies proposed by Booz et al. [15], augmented by [16]. This classification distinguishes six major strategies related to products: New to the World (new products that create an entirely new market), New to the Company (new products that, for the first time, allow a company to enter an established market), Additions to Existing Product Lines (new products that supplement a company's established product lines), Improvements in/Revisions to Existing Products (new products that provide improved performance or greater perceived value and replace existing products), Repositionings (existing products targeted to new markets or market segments) and Cost Reductions (new products that provide similar performance at lower cost). These strategies are structured along newness to the company and newness to the market, and Griffin listed their major objectives. Intuitively, one easily understands that each of these strategies requires some kind of innovation. Roughly speaking, the innovation associated with any NPD strategy may concern product design and/or production processes, or management and organisation, and it can be incremental or radical [2], [17]. But the newness we focus on is the one applied exclusively to the product; necessary innovations in the organisation and in management methods that are not perceivable by customers are not considered. By "new" we mean "the design or re-design of a subset of the behaviour and/or components of the product that is perceivable by the customer". This definition of newness is in accordance with the framework defined by Gero [18] in the initial version of his FBS (Function - Behaviour - Structure) model of the design process. Gero distinguishes function (F), behaviour (B) and structure (S), connected by eight processes: formulation, synthesis, analysis, evaluation, documentation, and reformulation types 1, 2 and 3. The relevancy of the product is assessed by comparing an expected behaviour with the behaviour derived from the structure. Thus, newness is mainly related to the way the product structure, new or not, may behave in a new way.

NPD project phases

According to Suh, cited in Jiao [8], an NPD project is a framework encompassing five domains: customer, functional, physical, process and logistics. This multifaceted framework can be modelled sequentially. In figure 1, we define a model of the NPD project lifecycle. It is based on a traditional model according to Rak [19]. Nevertheless, in such a linear model one cannot see the details of the processes executed in parallel; that is why we add to it a more complete model of the design phases taken from Ulrich and Eppinger [5]. Rak sees the lifecycle as composed of ten successive phases: requirements analysis, feasibility study, design, definition, industrialisation, patents and intellectual property, manufacturing, sale, use and destruction (recycling and dismantlement). Focusing on design activities, Ulrich and Eppinger define six of them: planning, concept development, system-level design, detail design, testing and refinement, and production ramp-up, and they detail the various activities performed during each phase. For instance, during the concept development phase the design team identifies customer needs and product specifications, generates concepts, etc. The study of these two correlated representations of the lifecycle allows us to group some of these numerous phases together. We suggest grouping the lifecycle phases into three main phases: Analysis, Design and Do. The precise definition of these macro-phases is given hereafter.

Analysis phase. During this phase, the goal is to set up the business objectives by defining the products and the macro parameters of their design, production, sale, etc. Managers have to determine the global business conditions by answering questions such as "make or buy?". The analysis phase is composed of product planning, identification of customer needs and product specification [5]. This phase is traditionally executed inside the focal company. However, the scientific literature has already shown that the company's major partners are often already engaged in this phase, or at least should be involved from the very early phase of the NPD project, see [11, 20]. Completely answering all the questions raised during the analysis phase therefore calls for an analysis that considers the partners' constraints too.







Figure 1. NPD project life cycle (design phases after [Ulrich and Eppinger, 2004]: identification of customer needs, product specification, concept generation, concept selection, concept testing, product architecture, industrial design, design for manufacturing)












This question may be answered by considering the focal company's needs not only during the design phase but also during the manufacturing and production phase. For instance, managers have to know whether a potential supplier S will be able to produce 100 units of component C per week. If not, the designers may change the product architecture by eliminating this component, or may choose another supplier if possible. This means that the question of partnership is posed from the very early steps of any potential collaboration.
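The supplier capacity check just described can be sketched as a simple feasibility test. The data model and function names are hypothetical illustrations, not part of the framework itself.

```python
# Toy sketch of the supplier capacity check described above (hypothetical
# data model, not from the paper): if no supplier can deliver a component
# at the required rate, the component is flagged for redesign or removal.

def feasible_suppliers(component, required_per_week, suppliers):
    """Return the suppliers able to produce the component at the required rate."""
    return [s for s, capacity in suppliers.get(component, {}).items()
            if capacity >= required_per_week]

suppliers = {"C": {"S1": 60, "S2": 120}}   # weekly capacities per supplier
candidates = feasible_suppliers("C", 100, suppliers)
if candidates:
    print("keep component C, source from", candidates)
else:
    print("redesign: drop component C or relax the requirement")
```

Even this trivial test shows why partner constraints must enter the analysis phase: its outcome can change the product architecture itself.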

Moreover, collaboration means trade-offs. This is why the analysis phase is subdivided into two main overlapping steps: internal analysis and trade-off analysis (cf. figure 2). After the internal analysis carried out by the focal company, its managers should perform a trade-off analysis with their inescapable partners. A partner is inescapable if the project cannot be launched without its participation; this is the case of the motor vehicle equipment manufacturers (Delphi, Valeo or Bosch) for car makers. The problems that managers have to solve during these two overlapping steps are similar, even though the internal analysis seeks the focal company's benefit while the trade-off analysis targets win-win relationships.

Design phase. The design phase corresponds to four design processes which run in parallel (see [4]): Service design, Product design, Internal facilities and organisation design, and Network of partners design. We refer to these four processes as the SPIN model. The way these processes are presented in figure 2 underlines that they are closely coupled and that every modification of one may have consequences on the others. These processes design the elements necessary for the NPD project: resources, services, etc.

Do phase. Finally, when all the outputs of these design processes are ready (designed or redesigned), the "production" of the product and its associated services can begin; this corresponds to the Do phase, and there is an implicit temporal dependency, not shown in figure 2, between the execution of the SPIN processes and this phase. The functional decomposition of this phase is fairly easy, considering all the activities necessary to offer the product and services to the final customers. For the product production process, for instance, these activities correspond to manufacturing, logistics and delivery, and inventory.


Note that these macro phases are not performed one after the other and have complex connections with each other. For instance, the design team begins its job before the end of the analysis phase. We do not develop this point further in this paper.

Conceptually, the whole NPD project can be viewed as a controlled system conducted by a specific control and supervision sub-system. Each of the aforementioned phases of the NPD lifecycle must be controlled by a specific control sub-system. The control and supervision sub-system of the design and do phases can nevertheless be split up into four distinct but interconnected modules, each one specialised for one of the four SPIN processes.

Figure 2. The framework of co-working projects

Dependencies and mutual Constraints

Readers will have noticed the specific shapes of the elements within the phases and the various arrows which connect them together. They underline complex feedback and feed-forward relationships between phases on the one hand, and between the control/supervision systems and their controlled systems on the other. As the phases run dependently, any modification could provoke a cascade of modifications through the whole NPD project. We subdivide the constraints globally into two classes: inter-process dependencies and inter-phase dependencies. The inter-process constraints represent all existing constraints between the processes of a given phase. These dependencies are studied in several research fields. Concurrent engineering, for instance, studies the links between the design of the product and internal facilities/processes, while 3D-CE looks for connections between product, internal facilities/processes and network. [4] studies the coupling between product design and network design. The inter-phase constraints model the links between processes of two phases. Design-for-Assembly and Design-for-Manufacturing are fields which study a part of these inter-phase constraints. We will come back to these points in the next section.
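As an illustration, the cascade of modifications just described can be sketched as a reachability computation over a dependency graph. The process names and edges below are invented for illustration; they are not part of the framework itself:

```python
# Illustrative sketch (hypothetical names, not part of the original framework):
# propagating a modification through inter-process and inter-phase dependencies.
from collections import deque

# An edge A -> B means "a change in A may force a revision of B".
DEPENDENCIES = {
    "product_design": ["facilities_design", "network_design", "manufacturing"],
    "facilities_design": ["manufacturing"],
    "network_design": ["logistics"],
    "manufacturing": ["logistics"],
    "logistics": [],
}

def impacted(start):
    """Return every process/activity potentially affected by a change in `start`."""
    seen, queue = set(), deque([start])
    while queue:
        node = queue.popleft()
        for succ in DEPENDENCIES.get(node, []):
            if succ not in seen:
                seen.add(succ)
                queue.append(succ)
    return seen

# A change in the product design cascades through design and do activities.
print(sorted(impacted("product_design")))
```

Even this toy graph shows why a single design decision can end up touching every partner’s activity: reachability, not adjacency, defines the scope of a modification.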

IV. The use of the framework for selection of partners for Design and Do phases

The focal company should choose its partners. We focus on those partners engaged in design who will continue their collaboration during the do phase. This selection is critical, especially when the NPD project is “New to the world” or “New to the company”. Partners play an important role in these cases because the support techniques for the analysis, design and do



phases are not totally mastered by the firm; external expertise is necessary. Other strategies in the Booz classification are also concerned by this question, but it is reasonable to say that in those cases the focal company has a clearer idea of the products and of their design and realisation processes, so the partners’ influence is under control. The first two categories of strategy carry a higher level of uncertainty and are risky; we call them “stormy” strategies. The next four strategies carry fewer risks; they form the “calmer” strategies. But whatever the product strategy, the focal company has to think of “feed-forward” strategies in terms of partnership (see subsection 2 hereafter). These strategies, highlighted thanks to the framework, should answer the need for “partnering, supplier relationships and strategic alliances with suppliers” [21].

Mutual dependencies of phases

Collaboration with partners begins at the analysis phase and has consequences on design and do activities. Likewise, design decisions can modify activities of the do phase. Generally speaking, collaboration with partners imposes adaptations in terms of product specificities, design protocols and management, or manufacturing processes, procedures and management. Collaboration mainly concerns three levels: partial design of the product (engine design for a car maker), partial execution of specific tasks and activities (galvanizing), or supply of items and sub-assemblies. We analyse below the main uncertainties and possible consequences of partnership for stormy and calmer strategies.

a) Partial design of the product by partners.

Calmer product strategies. The focal company controls the output of the partners’ design. The uncertainties are relatively low. The partner can be chosen based on a benchmark of formal and informal performance measures of partners.

Stormy product strategies. The collaborative design of the product carries risks. The feasibility (or not) of the partial design could influence the whole product architecture and therefore other partners’ activities. In the worst case, it can influence all of the partners’ design activities, due to the mutual interfaces and coupling between the various components of the product.

b) Partial execution of activities by sub-contractors.

Calmer product strategies. The focal company controls the inputs and outputs of design and do activities. The level of knowledge about the product is high, and sub-contractors’ activities represent a smaller risk of modification. Attention is focused mostly on the quality of the data or items provided by partners.

Stormy product strategies. The focal company does not know the externalised activities very well; the execution techniques are mastered by the partners. The focal company provides its own items to the partners, who are responsible for their added value; once treated, the items are sent back to the focal company. The risk for the focal company is high: changes in the execution (received items) propagate through the product architecture and affect other partners.

c) Supplying items.

Calmer product strategies. The supplied items correspond to the specifications of the calls for tenders. The focal company’s risks are small.

Stormy product strategies. Suppliers answer the requirements of the calls for tenders. They can provide supplementary data (other possibilities or options) which can contribute to the modification of the product design, partially or completely (for example, a family of on-board computers with several



options and forms).

Feed-Forward strategies.

Based on these dependencies, we identify three sets of feed-forward strategies. They determine the way co-operation within NPD projects should be thought out, planned and executed in order to minimize the influence of disturbances. The feed-forward strategies take into account the potential long-term consequences of decisions made during the NPD project. They focus mainly on the relationships with partners. The main idea is to define collaboration conditions by identifying the most relevant negotiation elements with partners.

Analysis-for-designing. This strategy underlines that the analysis phase should take design constraints into account. The collaborative design conditions have to be clarified from the beginning in terms of boundaries and mutual constraints. Design activities are executed synchronously or asynchronously by the NPD actors. In both cases, the design team should be managed, and all the actors’ activities coordinated coherently, under the pre-defined conditions.

Analysis-for-doing. This strategy specifically considers do-phase constraints. Every partnership decision has direct consequences during the do phase. Therefore, the most important realisation parameters and constraints are to be identified and considered. These parameters are extracted from manufacturing, logistics, and the exchanges of materials and data.

Design-for-Doing. Part of this strategy covers design-for-assembly and design-for-manufacturing, well-known practical design activities. Operations management, production planning and inventory are some basic management activities of the do phase. The design phase directly influences all of them, making them “easy” or “hard” to perform. This is the case, for example, of production planning: by taking into account the way a product has to be manufactured (number of items, routings, …), the design team can make production planning an easy task by reducing as much as possible the number of items and of bills-of-material. In the same way, by choosing the “right” supplier or sub-contractor, the design team eases synchronisation and operations management. Therefore, the design team’s actions have to be based on real considerations of do-phase management.

Figure 3. Various strategies to consider using the global framework

By considering these connected problems, any company manager can think of opportunities and

threats a co-working project may generate.



V. Conclusions

By taking into account the various constraints identified through and within the NPD project framework, a company can build an action plan for collaborations and partnerships. It should allow managers to: a) prepare the necessary logistic infrastructure for product and data exchanges; b) negotiate with partners using estimated parameters of co-working procedures; c) construct win-win partnerships which develop partners' loyalty; d) organize the network of partners for effective networking; and e) eliminate as many useless tasks as possible, especially during the product design process, since they induce future time and money consumption. These activities, direct consequences of applying the framework, contribute not only to the strategic position of the firm but also to the efficiency of its actions.

In our opinion, the framework points to two major research fields. The first is related to the feed-forward strategies: clearly, the work on these strategies should be completed in order to offer a structured set of roadmaps and techniques to reach the business target of an NPD project. The study of a) the propagation of modifications throughout the global product and network architecture and b) the identification of modification inductors forms our second challenging research field.


References

[1] Porter M.E. What is strategy? Harvard Business Review, November-December (1996)

[2] Ottosson S., Dynamic product development – DPD, Technovation, 24 (2004), 207-217

[3] Vonderembse M.A., Uppal M., Huang H.H. and Dismukes J.P., Designing supply chains: Towards theory development. Int. J. Production Economics, 100, issue 2 (2006) 223-238

[4] Zolghadri M., Baron C., Girard Ph., Innovative product and network of partners co-design: context,

problematic and some exploratory results, CERA, To be published, 2007

[5] Ulrich K., Eppinger S., Product design and development, McGraw Hill, 2003

[6] Fine Ch.H. Clockspeed, Winning industry control in the age of temporary advantage, Sloan School of

Management, MIT, Basic Books, (1998)

[7] Fisher M.L. What is the right supply chain for your product? Harvard Business Review, March-April

(1997), 105 – 116

[8] Jiao J., Simpson T.W., Siddique Z., Product family design and platform-based product development: a state-of-the-art review, Journal of Intelligent Manufacturing, 2006, 1-36

[9] Fixson S., Product architecture assessment: a tool to link product, process, and supply chain design

decisions, Journal of Operations Management, 23, (2005), 345 – 369

[10] Zolghadri M., Baron C., Girard Ph., Aldanondo M, Vareilles E. How the architecture of a product can

help managers to define the network of partners?, accepted paper in PLM conference 2007, July 2007

[11] Croom S.R, The dyadic capabilities concept: examining the processes of key supplier involvement in

collaborative product development, European journal of purchasing and supply management, 7, (2001), 29-


[12] Porter, M., Competitive Strategy, Free Press, New York, 1980.

[13] Mintzberg H., Ahlstrand B., Lampel J., Strategy Safari, Prentice-Hall International, 1998

[14] Nollet J., Ponce S., Campbell M. J., About “strategy” and “strategies” in supply management, Journal of

purchasing and supply management, 11 (2005) 129-140

[15] Booz, Allen and Hamilton. New Product Management for the 1980’s. New York: Booz, Allen, and

Hamilton, Inc., 1982

[16] Griffin A., Page A.L., PDMA Success Measurement Project: Recommended Measures for Product

Development Success and Failure, J product innovation mgt, 13 (1996), 478-496

[17] Tomala F., Sénéchal O., Innovation management: a synthesis of academic and industrial points of view, 22 (2004), 281-287

[18] Gero J.S., Kannengiesser U., The situated function-behaviour-structure framework, Design Studies, 25 (2004), 373-391



[19] Rak I., Teixido C. La démarche de projet industriel : technologie et pédagogie, Les Editions Foucher,

Paris, 1992

[20] Arend, R. Implications for including shared strategic control in multi-party relationship models, European

Management Journal, 24, (1), 2006, 38-48

[21] Hadeler B.J, Evans J.R. Supply strategy: capturing the value, Industrial management, 36, (4), (1994) 3-




An Organizational Memory-based Environment as Support for

Organizational Learning


M.H. Abel*, D. Lenne, A. Leblanc

University of Technology of Compiègne, CNRS


Compiègne, France

* Corresponding author, +33 (0)3 44 23 49 50

Abstract: Information and Communication Technologies have transformed the way people work and have a growing impact on lifelong learning. Organizational Learning is an increasingly important area of research that concerns the way organizations learn and thus augment their competitive advantage, innovativeness, and effectiveness. Within the MEMORAe2.0 project, we are interested in the capitalization of knowledge and competencies in the context of an organization. We developed the E-MEMORAe2.0 environment, which is based on the concept of a learning organizational memory. This environment is meant to be used by a Semantic Learning Organization as support for Organizational Learning. In such an environment, actors of the organization use, produce and exchange documents and knowledge. To that end, they have to access the resources and to adapt them to their needs. In this paper, we present the organizational learning approach, we stress the role of an organizational memory in this approach, and we show how it enables knowledge transfer processes. Then we present the MEMORAe2.0 project and describe how we implemented the organizational learning approach in the E-MEMORAe2.0 environment.

Keywords: Organizational Learning, Organizational Memory, Community of practice, Knowledge

Representation, Ontologies.

I. Introduction

Globalization, information and communication technologies (ICT), and innovation are the new criteria of the economic environment. The company's knowledge capital is increasingly crucial. In this context, companies must take into consideration two new risks:

• Knowledge obsolescence with respect to the company's environment (technologies, competitors, markets, methods...). It is thus necessary to change from a stock logic to a flow logic that can be used to set up training and innovation mechanisms.

• Loss of know-how and competencies. This loss can take place over time (retirement, staff transfers...). It can also take place across space, when know-how and competencies are used at one site but not at the other sites of the company.

Communities of Practice (CoPs) and Organizational Learning (OL), both concerned with how organizations learn, are two ways of preventing these risks.

The term “Community of Practice” (CoP) is a relatively recent coinage, even though the phenomenon it refers to is age-old. The concept has turned out to provide a useful perspective on knowing and learning. A growing number of people and organizations in various sectors are now focusing on communities of practice as a key to improving their performance. CoPs are groups of people who share a concern or a passion for something they do and learn how to do it better as they interact regularly (Wenger 1998).


OL is an increasingly important area of research that concerns the way organizations learn and

thus augment their competitive advantage, innovativeness, and effectiveness. OL requires tools

facilitating knowledge acquisition, information distribution, interpretation, and organization, in

order to enhance learning at different levels: individual, group and organization.

In the Information Systems context, the “Semantic Learning Organization” (SLO) is an emerging concept that extends the notion of learning organization with a semantic dimension. A SLO must be considered as a learning organization in which learning activities are mediated and enhanced through a shared knowledge representation of the domain and context of the organization (Sicilia & Lytras 2005).

These concepts are at the core of the MEMORAe2.0 project¹. With this project, we are interested in the capitalization of knowledge and competencies in the context of organizations, and more precisely in the capitalization of the resources related to this knowledge and these competencies. We particularly focus on the way members of an organization can use this capitalization to acquire new knowledge and competencies. To that end, we developed an environment based on the concept of a learning organizational memory. This environment is intended to be used by a SLO and to facilitate the development of CoPs. In such a system, the learning content is indexed by knowledge and competencies organized by means of ontologies. Users can acquire knowledge and competencies by performing different tasks (solving problems or exercises, reading examples and definitions, asking questions…). In our memory, competencies are defined via the knowledge they put into practice.

In the following, we specify the role of communities of practice in the innovation process; we present the organizational learning approach in this context before stressing the need for an organizational memory as support. Then we present the MEMORAe2.0 project and describe how we implemented the organizational learning approach in the E-MEMORAe2.0 environment.

II. Community of Practice and Organizational Innovation

According to Wenger, Communities of Practice are everywhere - at school, at work, in our hobbies… Members of such a community are informally bound by what they do together and by what they have learned from each other through their exchanges about what they do (Wenger 1998). He defines a community of practice along three dimensions:

• What it is about: the subject of interest.

• How it functions: members are engaged together in a social entity.

• What capability it has produced: a set of shared resources (vocabulary, documents…).
A community of practice consists of volunteers who are concerned by a work-related or interest-related field (Brown and Duguid 1991). It thus differs from a team; the two constructs can be characterized as follows (Storck and Hill 2000):

• Team members are assigned by the organization; community of practice members join the group freely.

• Authority relationships within a team are organizationally determined; in a community of practice these relationships emerge through interaction about the subject of interest.

¹ In French, Mémoire organisationnelle appliquée au e-learning (organizational memory applied to e-learning).



• Teams have goals determined by the organization; communities are concerned only with the interactions between their members.

• Teams rely on organizationally defined work processes; communities develop their own processes.

Communities of practice exist in any organization, even when they are not bound by organizational affiliation. They are important to the functioning of the organization and become crucial to those that recognize knowledge as a key asset. They fulfill many functions with respect to the creation, accumulation, and diffusion of knowledge in an organization (Wenger 1998):

• They enable the exchange and interpretation of information. Their members have a shared understanding and vocabulary, so they can communicate more easily and present relevant information.

• They can capitalize knowledge in “living” ways. They preserve tacit knowledge that formal systems cannot capture, and their members' discussions are ideal for initiating newcomers into the practice.

• They can steward competencies to keep the organization at the cutting edge. Members discuss novel ideas and work together on problems. This collaborative work is important because members invest their professional identities in being part of a dynamic community.

• They provide a place for identifying individuals' competencies and identities. Identity is important because it allows knowing who people are, what they are interested in, and what their competencies are. If companies want to promote people's creativity, they have to consider communities as a means of helping them develop their identities.

In some organizations, the communities themselves are becoming recognized as valuable assets. Thus, they serve both each member of the organization (by specifying his/her identity) and the organization itself. According to the study presented in (Lesser and Storck 2001), four areas of organizational performance are impacted by communities of practice: 1) decreasing the learning curve of newcomers; 2) responding more rapidly to needs and inquiries; 3) reducing rework and preventing “reinvention of the wheel”; 4) spawning new ideas for products and services.

Acting as a community of practice thus seems to be a prerequisite for an organization to enable its members to share experiences, knowledge and competencies, i.e. to learn from each other.

III. Learning organization / Organizational Learning

A learning organization (LO) is an organization in which processes that allow and encourage learning at the individual, group and organizational levels are embedded in the organizational culture (Sunassee and Haumant 2004). Thus a LO must be skilled at creating, acquiring, and transferring knowledge, and at modifying its behaviour to reflect new knowledge and insights (Garvin 1994). According to (Dodgson 1993), a LO is a firm that purposefully constructs structures and strategies so as to enhance and maximize organizational learning (OL).

An organization cannot learn without continuous learning by its members. Individual learning is not organizational learning until it is converted into OL. The conversion process can take place through individual and organizational memory (Chen & al 2003). The results of individual learning are captured in individuals' memory, and individual learning becomes organizational learning only when individual memory becomes part of organizational memory.



Finally, OL seldom occurs without access to organizational knowledge. In contrast to individual knowledge, organizational knowledge must be communicable, consensual, and integrated (Duncan and Weiss 1979). According to (Chen & al 2003), being communicable means that the knowledge must be explicitly represented in an easily distributed and understandable form. The consensus requirement stipulates that organizational knowledge is considered valid and useful by all members. Integrated knowledge requires a consistent, accessible, well-maintained organizational memory.

IV. Organizational Memory

According to (Stein & Zwass 1995), an organizational memory is defined as “the means by which knowledge from the past is brought to bear on present activities and may result in higher or lower levels of organizational effectiveness”. It can be regarded as the explicit and persistent representation of knowledge and information in an organization, aimed at facilitating their access and re-use by the relevant members of the organization for their tasks (Dieng & al 1998). Thus, an organizational memory seems indispensable for organizational learning. An integrated organizational memory provides a mechanism for compatible knowledge representation, as well as a common interface for sharing knowledge, resources and competencies.

An organizational memory can contain both hard data, such as reports and articles, and soft information, such as tacit knowledge, experiences, critical incidents, and details about strategic decisions. We need ways to store and retrieve both kinds of information. Indeed, ideas generated by employees in the course of their tasks seldom get shared beyond a small group of people or team members. This informal knowledge, or non-canonical practice, is the key to organizational learning (Brown & Duguid 1991). New collaborative technologies should be designed around this informal knowledge and the communities of practice that carry it. Using information systems to manage organizational memory improves precision, recall, completeness, accuracy, feedback, and review far beyond what the human beings currently involved in organizational memory can achieve.

V. The project MEMORAe2.0

Communities of practice seem to be a key asset for facilitating organizational learning and innovation. Just because they arise naturally does not mean that organizations can't do anything to influence their development (Wenger 1998). One of the main reasons that communities are considered an important vehicle for innovating is their potential to create an environment where members feel comfortable sharing ideas.

“Provided with an ontology meeting needs of a particular community of practice, knowledge management tools can arrange knowledge assets into the predefined conceptual classes of the ontology, allowing more natural and intuitive access to knowledge” (Davies & al 2003, part 1.1). Our objective within the MEMORAe2.0 project is to develop a knowledge management environment that facilitates the community-of-practice attitude and thus contributes to organizational learning and innovation.

The MEMORAe2.0 project is an extension of the MEMORAe project (Abel & al 2006). Within the MEMORAe project, we were interested in knowledge capitalization in the context of organizations, and more precisely in the capitalization of the resources related to this knowledge by means of a learning organizational memory. We particularly focused on the way organization actors could use this capitalization to acquire new knowledge. To that end, we developed the E-MEMORAe environment as support for e-learning. In such a system, the learning content is indexed by knowledge organized by means of two ontologies: domain and application. The domain



ontology defines concepts shared by any organization; the application ontology defines concepts dedicated to a specific organization. Using these ontologies, actors can acquire knowledge by performing different tasks (solving problems or exercises, reading examples, definitions, reports…). We used Topic Maps (XTM, 2001) as a representation formalism facilitating navigation and access to the learning resources. The ontology structure is also used to navigate among the concepts, as with a roadmap, so that learners can reach the learning resources appropriate for them. E-MEMORAe was positively evaluated (Benayache & al 2006).
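As a rough illustration of this indexing principle, the sketch below approximates the idea with plain dictionaries; the concept and resource names are invented, and the real system uses the Topic Maps (XTM) formalism rather than this structure:

```python
# Illustrative sketch (invented data): learning resources indexed by ontology
# concepts, with narrower-concept links used as a navigation "roadmap".

# Each concept points to its narrower concepts.
ONTOLOGY = {
    "algorithmics": ["sorting", "recursion"],
    "sorting": [],
    "recursion": [],
}

# Learning resources are indexed by the concept they cover.
RESOURCES = {
    "sorting": [("course notes", "Introduction to sorting"),
                ("exercise", "Implement insertion sort")],
    "recursion": [("example", "Factorial, two ways")],
}

def browse(concept):
    """Return the sub-concepts and the resources reachable from `concept`."""
    subs = ONTOLOGY.get(concept, [])
    docs = [doc for c in [concept] + subs for doc in RESOURCES.get(c, [])]
    return subs, docs

subs, docs = browse("algorithmics")
print(subs)   # narrower concepts to continue navigation
print(docs)   # resources indexed under the concept and its sub-concepts
```

The point of the sketch is that navigation and resource access share one structure: following a concept link simultaneously refines the topic and narrows the set of indexed resources.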

Within the MEMORAe2.0 project, we are interested in using the MEMORAe approach in an organizational learning context. To that end, we take into account different levels of memory and different ways of facilitating exchanges between organizational actors. The E-MEMORAe2.0 environment has been designed to be used by a Semantic Learning Organization (SLO). In such an environment, a distinction is made between the knowledge and resources of: a) the whole organization; b) a community of practice in the organization – the organization is constituted of different communities of practice, even if it can be seen as a community of practice itself; and c) an individual.

For example, when actors need to know who works on a project, they have to access the information relating to the project itself. One way to do this is to navigate through a concept map based on an ontology defining the organization's knowledge. Depending on their access rights, they can visualize different resources. Through exchange resources, they can exchange ideas or information (externalization of tacit knowledge). Learning can thus occur by means of these different resources, for example by: asking a question of the right person (the one who is described as an expert because (s)he worked on a project linked to the searched knowledge…); asking a question of everyone concerned by a subject (forum); reading the right report or book (communication resources); or performing the right exercise, problem or multiple-choice quiz (action resources).

To that end, we designed the organizational learning memory around two types of sub-memory that constitute the final memory of the organization:

• Group memory: this kind of memory enables all the group members to access the knowledge and resources they share. A group is made of at least two members. We distinguish three types of group memory, corresponding to different communities of practice:

Team memory: capitalizes knowledge, resources and communication concerning any object of interest of the group members.

Project memory: capitalizes knowledge, resources and communication concerning a project. All the information stored is shared by the members who work on the project.

Organization memory: enables all the members of the organization to access knowledge and resources without access restrictions. These resources and knowledge are shared by all the organization members.

• Individual memory: this kind of memory is private. Each member of the organization has his own memory in which he can organize and capitalize his knowledge and resources.

These memories offer a way to facilitate and capitalize exchanges between organization actors.
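A minimal sketch of this two-level memory organisation is given below; the class, member and memory names are our own illustration, not the E-MEMORAe2.0 implementation:

```python
# Illustrative sketch (invented names): sub-memories with membership-based
# access, mirroring the individual/team/project/organization distinction.
from dataclasses import dataclass, field

@dataclass
class Memory:
    name: str
    kind: str                                    # "individual", "team", "project", "organization"
    members: set = field(default_factory=set)    # empty set = open to all members
    resources: list = field(default_factory=list)

    def readable_by(self, actor):
        # The organization memory has no access restriction; the other
        # memories are restricted to their members.
        return not self.members or actor in self.members

org = Memory("org", "organization")                      # shared by everyone
proj = Memory("proj-X", "project", members={"ana", "bob"})
mine = Memory("ana-private", "individual", members={"ana"})

def accessible(actor, memories):
    """List the memories a given actor is allowed to visualize."""
    return [m.name for m in memories if m.readable_by(actor)]

print(accessible("ana", [org, proj, mine]))
print(accessible("carl", [org, proj, mine]))
```

The design choice the sketch highlights is that all memories share one structure and differ only in membership and indexed resources, which is exactly what makes a single navigation interface possible.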


For this purpose, we extended the MEMORAe ontologies to represent these sub-memories and exchange resources (see Figure 1).



Figure 1. Part of the domain ontology

To validate our approach, we reused the two pilot applications developed in the framework of

MEMORAe. The first one concerns a course on algorithms and programming at the Compiègne

University of Technology (France) and the second one concerns a course on applied

mathematics at the University of Picardy (France).

VI. The E-MEMORAe2.0 environment

Our objective within E-MEMORAe2.0 (see Figure 2) is to help the users of the memory to access and exchange information about organization knowledge at any time. To that end, users can navigate through the application ontology related to the organization, visualize the resources indexed by this ontology, and ask questions or make remarks via this ontology. These actions are possible according to their memory access rights. It should be noted that all the memories are structured around the same ontology; they differ only by the indexed resources.

At each step, the general principle is to propose to the learners, either precise information,

resources on what they are searching for, or links allowing them to continue their navigation

through the memory. To be more precise, the user interface (see Figure 2) proposes:

• An access to different memories (top left), specifying the memory visualized and allowing

the access to authorized memories. By default, the user visualizes his private memory.

• Entry points (left of the screen) enabling to start the navigation with a given concept: an entry

point provides a direct access to a concept of the memory and consequently to the part of the

memory dedicated to notions.

• A short definition of the current notion: it gives the learner a preview of the notion and enables him to decide whether or not he needs to work on it.

• A part of the ontology describing the current resource, displayed at the center of the screen.
• A list of resources (bottom of the screen) whose contents are related to the current concept: they are ordered by type (books, course notes, sites, examples, comments, etc.). Starting from a notion, whether an entry point or a notion reached by means of the ontology, the user can directly access the associated resources. Descriptions of these resources help the user choose among them.


• History of navigation: it enables the learner to recall and remain aware of the path he has followed. Of course, he can go back to a previously studied notion if he wishes.
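The access model described above can be sketched in a few lines of code. This is an illustrative sketch only, not the actual E-MEMORAe2.0 implementation: resources are indexed by ontology concepts, and each memory enforces its own access rights. All names (`Memory`, `resources_for`, the users and concepts) are hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class Memory:
    """One memory (private, group, or organizational). All memories share the
    same ontology; they differ only in the resources they index."""
    name: str
    authorized_users: set = field(default_factory=set)
    # concept name -> list of (resource type, resource) pairs
    resources: dict = field(default_factory=dict)

    def resources_for(self, user: str, concept: str):
        """Return the resources indexed by `concept`, if `user` may access this memory."""
        if user not in self.authorized_users:
            raise PermissionError(f"{user!r} has no access to memory {self.name!r}")
        return self.resources.get(concept, [])

# Usage: a group memory whose resources are reached through a course concept.
group = Memory("group", authorized_users={"alice", "bob"})
group.resources["loop"] = [("course notes", "loops.pdf"), ("example", "for.py")]
print(group.resources_for("alice", "loop"))
```

The point of the sketch is the indexing direction: the ontology concept is the key, so navigating to a concept immediately yields its resources, and the rights check is attached to the memory, not to individual resources.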



Figure 2. Navigation in the memory (in French).

Ontologies enable the organization and capitalization of exchanges. In order to facilitate the externalization and capitalization of tacit knowledge, we decided to associate exchange resources with each ontology concept. An exchange resource concerns one concept and can be asynchronous (forum, wiki) or synchronous (chat). It gives group members the opportunity to exchange ideas and information about one subject; this subject is the concept that indexes the exchange resource. Currently, these informal exchanges take place in written form. We plan to record oral exchanges held via internet calls (for example, Skype).
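The association between concepts and exchange resources can be illustrated as follows. This is a hypothetical sketch (the class and attribute names are ours, not the system's): an exchange resource is attached to exactly one concept, which serves as its index, and is classified as asynchronous or synchronous.

```python
ASYNCHRONOUS = {"forum", "wiki"}
SYNCHRONOUS = {"chat"}

class ExchangeResource:
    """An exchange channel indexed by a single ontology concept."""
    def __init__(self, kind: str, concept: str):
        if kind not in ASYNCHRONOUS | SYNCHRONOUS:
            raise ValueError(f"unknown exchange kind: {kind}")
        self.kind = kind
        self.concept = concept      # the concept that indexes this exchange
        self.contributions = []     # capitalized (stored) written exchanges

    @property
    def synchronous(self) -> bool:
        return self.kind in SYNCHRONOUS

    def post(self, author: str, text: str) -> None:
        self.contributions.append((author, text))

wiki = ExchangeResource("wiki", "recursion")
wiki.post("alice", "A recursive definition needs a base case.")
print(wiki.synchronous)  # False: a wiki is asynchronous
```

Because every contribution is stored under the concept that indexes the resource, informal exchanges are capitalized in the memory rather than lost.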

In order to put our approach into practice, we used various tools in the E-MEMORAe2.0 environment. Thus, we associated with each training memory a forum whose fields correspond to the concepts of the application ontology. In this way, exchanges are capitalized and accessible to the training actors according to their rights. In the same way, we associated with each memory a blog whose posts correspond to the concepts of the application ontology. Unlike the forum, a blog supports reflection not around a question but around an idea, an argument or a talk.

In the following we present an example of forum use. Figure 3 shows how users can access a forum associated with an ontology concept in a group memory. They just have to select this resource type in the resources panel linked to the concept. When they select the forum type, the list of question subjects that have been posted is displayed. If users want to post their own question, they click on the bubble icon (to the right of the term Forum) and then specify the subject and the question itself. If they want to read the answers to a question subject, they click on the subject itself. A screen (see Figure 4) then appears, displaying (1) the subject and the question (top part) and (2) the different answers with their authors. The date of each contribution is recorded.
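The forum behaviour just described can be sketched with a minimal data model. Again this is illustrative only, with hypothetical names: each subject holds one question and its answers, and every contribution carries its author and date.

```python
from datetime import datetime, timezone

class Forum:
    """A forum attached to one ontology concept in a group memory."""
    def __init__(self, concept: str):
        self.concept = concept   # the concept this forum is associated with
        self.subjects = {}       # subject -> question entry with its answers

    def post_question(self, author: str, subject: str, question: str) -> None:
        self.subjects[subject] = {
            "author": author,
            "question": question,
            "date": datetime.now(timezone.utc),   # date of the contribution
            "answers": [],
        }

    def post_answer(self, author: str, subject: str, text: str) -> None:
        self.subjects[subject]["answers"].append(
            {"author": author, "text": text, "date": datetime.now(timezone.utc)}
        )

    def list_subjects(self):
        # What the user sees after selecting the forum resource type.
        return list(self.subjects)

forum = Forum("recursion")
forum.post_question("alice", "Base case", "When does a recursion stop?")
forum.post_answer("bob", "Base case", "When the base case is reached.")
print(forum.list_subjects())  # ['Base case']
```

Selecting a subject would then display its question and the list of dated answers, mirroring the screen of Figure 4.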



Figure 3. Forum Access (in French)

Figure 4. Forum Resource (in French)

VII. Conclusion

In this paper, we presented the organizational learning approach followed in the framework of the MEMORAe2.0 project. Our approach consists in offering the same environment to CoPs and teams. Team members are able to exchange ideas and information about a particular subject that interests them; they can thus learn from each other about this subject and then constitute a CoP within the team. We showed how we implemented our approach in the E-MEMORAe2.0 environment for academic organizations. The main component of this environment is an organizational memory that enables knowledge transfer at three levels: individual, group and organization. A first evaluation of this memory, restricted to the organization level, has given encouraging results: students appreciated being able to access documents through the course concepts. To complete this evaluation, we now plan to experiment with the two other levels through project-based activities. We also plan to examine to what extent industrial organizations and companies could benefit from this approach. However, it should be noted that software environments are not sufficient to promote organizational learning: it is also a question of culture, at university as in any other organization.


References

Abel, M.-H., Benayache, A., Lenne, D., & Moulin, C. (2006). E-MEMORAe: a content-oriented environment for e-learning. In S. Pierre (ed.): E-Learning Networked Environments and Architectures: A Knowledge Processing Perspective. Springer Book Series: Advanced Information and Knowledge Processing (AI & KP), pp. 186-205.

Benayache, A., Leblanc, A., & Abel, M.-H. (2006). "Learning memory, evaluation and return on experience." Proceedings of the Workshop on Knowledge Management and Organizational Memories, ECAI 2006, Riva del Garda, Italy, August 28 – September 1, 2006, pp. 14-18.

Brown, J. S., & Duguid, P. (1991). "Organizational Learning and Communities-of-Practice: Toward a Unified View of Working, Learning and Innovation." Organization Science, 2(1), pp. 40-57.

Chen, J., Ted, E., Zhang, R., & Zhang, Y. (2003). Systems requirements for organizational learning. Communications of the ACM, 46(12), December 2003, pp. 73-78.

Davis, J., Duke, A., & Sure, Y. (2003). "OntoShare – An Ontology-based Knowledge Sharing System for Virtual Communities of Practice"