The ONTO-LOGGING Consortium consists of (in alphabetical order):
ARCHETYPON, DELTATEC, FZI, INDRA, INSEAD, META4
Corporate Ontology Modelling and Management System

Project ID: IST-2000-28293
Deliverable ID: D8b, version 1.0
Workpackage No: WP8

Status: [ ] Draft  [ ] To be reviewed  [ ] Proposal  [X] Final / Released to CEC
Confidentiality: [X] Public  [ ] Confidential  [ ] Restricted
(Public: for public use. Confidential: for the Onto-logging consortium and Commission services. Restricted: restricted to a group specified by the Onto-logging consortium and including Commission services.)

Title: Evaluation report of the use of Onto-Logging platform in the user site
Version: 1.0
Date: 27 January 2004
Pages: 110
Responsible Author: INSEAD
Co-Author(s): INDRA
Summary / Contents:

The main goal of this document is to propose a framework for the evaluation, and to apply it to the evaluation of the Ontologging System.
Table of Contents
1 Introduction ...... 6
2 The Principles of the Evaluation ...... 7
2.1 Evaluation: Objective & Challenges ...... 7
2.2 Evaluating What? ...... 8
2.2.1 Levels of evaluation ...... 8
2.2.2 The phases of the evaluation: formative, summative and “substantive value” ...... 10
2.3 The Evaluation Methods & Tools ...... 12
2.3.1 Evaluation: an overview ...... 12
2.3.2 Observational methods ...... 13
2.3.3 User’s Opinions methods ...... 13
2.3.4 Interpretative methods ...... 14
2.3.5 Predictive methods ...... 14
2.3.6 Test, Experiment and Usability Engineering methods ...... 15
2.4 Defining the evaluation process ...... 16
2.4.1 The definition of the research questions ...... 16
2.4.2 Selecting the evaluation methods (defining the instrument) ...... 18
2.4.3 Executing the instrument and gathering the data ...... 19
2.4.4 Analysing the data, theory confirmation & building ...... 19
3 Evaluating the Ontologging System ...... 21
3.1 Overview ...... 21
3.2 Goals and scope of the evaluation of Ontologging ...... 22
3.2.1 Knowledge Management & Ontologies, some clarification ...... 22
3.2.2 The goals of Ontologging: the design of a Knowledge Management System that better support some knowledge processes ...... 24
3.3 Selecting the elements to be evaluated (evaluation criteria) ...... 25
3.3.1 Identifying the elements associated to each of the goals ...... 25
3.3.2 Levels of evaluation for Ontologging. What could be evaluated ...... 27
3.3.3 The different phases of the evaluation ...... 30
3.3.4 A pre-selection of the elements to be evaluated ...... 31
4 Gathering the Data ...... 36
4.1 Previous work ...... 36
4.1.1 The IR (Information Retrieval) approach ...... 36
4.1.2 The Ontology-based approaches ...... 37
4.2 The approach used for evaluating Ontologging ...... 39
4.3 The definition of the instruments and the gathering of the data ...... 40
4.3.1 The context of the evaluation ...... 40
4.3.2 The Questionnaires ...... 41
4.3.3 Focus groups ...... 44
4.3.4 Interviews ...... 45
4.3.5 Experiments, via Scenarios ...... 46
4.4 Some directions towards a more quantitative (and rigorous?) evaluation ...... 46
4.4.1 The reason of the qualitative evaluation ...... 46
4.4.2 Some direction for a quantitative evaluation ...... 46
4.4.3 Some prospective operationalization of “SMART” quantitative evaluation ...... 47
4.4.4 Concluding remarks on the quantitative evaluation ...... 49
5 Results & Analysis ...... 50
5.1 Phase 0: State of the situation (formative evaluation) ...... 50
5.1.1 Results ...... 50
5.1.2 Analysis ...... 53
5.2 Phase 1: Ontology Building (and some content population) ...... 54
5.2.1 Ontology building (at INDRA) ...... 54
5.2.2 The reengineering of the Ontology ...... 57
5.2.3 Some lessons learned ...... 60
5.3 Phase 3: Content population ...... 60
5.4 Phase 4: Evaluating Ontologging “knowledge retrieval” ...... 61
5.4.1 Evaluating of the basic knowledge retrieval ...... 62
5.4.2 Evaluating of the user-centred usages ...... 65
5.5 A Comparison with a more traditional knowledge management system (KnowNet) ...... 66
5.5.1 Description of the evaluation ...... 67
5.5.2 Comparing the two systems ...... 67
5.5.3 Lessons learned from this comparison ...... 67
5.6 Final words ...... 68
6 Discussion and Conclusions ...... 69
7 References ...... 72
8 Annex ...... 75
8.1 Annex 1: Ontologging Goals & focus. The consortium perspective ...... 75
8.2 Annex 2: Description of INDRA (the main user group) ...... 76
8.2.1 An overview ...... 76
8.2.2 Activities of Indra’s Competence Centres ...... 77
8.2.3 The tendering process at INDRA ...... 80
8.2.4 References ...... 81
8.3 Annex 3: Ten Usability Heuristics (Nielsen J. and Molich P.) ...... 81
8.4 Annex 4: Glossary of terms ...... 82
8.5 Annex 5: Pre-questionnaire for the participants in the Ontologging usability test ...... 83
8.6 Annex 6: Questionnaire Ontology-based approach for the structuring of knowledge ...... 86
8.7 Annex 7: Spanish questionnaires ...... 89
8.8 Annex 8: User modelling tools and knowledge distribution agents questionnaires ...... 95
8.9 Annex 9: Ontologging project questionnaire ...... 101
Change Log
Vers. | Date | Author | Description
 | 8-April-03 | Thierry Nabeth | Very first draft version of deliverable (methodology)
 | 12-Nov-03 | Liana Razmerita | Work on chapter 4; integration of preliminary evaluation results
 | 23-Dec-03 | Liana Razmerita | Integration of evaluation results and lessons learned
 | 27-Jan-04 | Thierry Nabeth | Finalisation and final conclusion

List of Acronyms and Abbreviations

Acronym/abbreviation | Resolution
Executive Summary
The main goal of this document is to propose a framework for the evaluation of the Ontologging System – a next-generation ontology-based knowledge management platform that has been designed as part of the Ontologging IST project – and to apply this framework to that evaluation.

This document is structured in several sections.

The first section presents the methodological aspects and principles of the evaluation, and in particular describes the objectives of the evaluation, the different phases of the evaluation (formative or summative), and the different categories of methods (observational, users’ opinion, interpretative, etc.) that can be used. This section also proposes to use an adapted version of Donald Kirkpatrick’s model in order to evaluate the different levels of efficiency and effectiveness achieved by the solution.

The second section presents how these principles are applied to the concrete evaluation of the different components of the Ontologging platform. In particular, this section identifies the activities to be evaluated (structuring knowledge, capitalizing knowledge, searching knowledge, and sharing knowledge), selects the methods to be used, and establishes an action plan.

The third section (chapter 4) is related to the execution of this action plan. It describes the setting of the evaluation (the context of the evaluation at Indra and the profile of the participants). It also presents the evaluation methods used, the elaboration of the evaluation instruments, and the collection of the data.

The fourth section then presents an analysis of the data and some preliminary results of the evaluation.

Finally, the last section draws the main conclusions of this evaluation and defines some future work to be conducted in this domain.
1 Introduction
The main goal of this document is to propose and to apply a framework for the evaluation of the Ontologging System – a next-generation ontology-based knowledge management platform – that has been designed as part of the Ontologging IST project.

This document aims to provide a complete view of the evaluation: the rationale for the evaluation, the methods available, the methods selected for evaluating the Ontologging system, how these methods are operationalized and executed, and how the results are analysed.

This document is structured in several sections.

The first section presents the methodological aspects and principles of the evaluation: What is the evaluation for? What can be evaluated? What should be evaluated? What methods are available to conduct an evaluation? Etc.

The second section presents how these principles are applied to the concrete evaluation of the different components of the Ontologging platform (a next-generation, ontology-based knowledge management platform). In particular, this section establishes an action plan for evaluating the system and defines the methods that have been chosen to capture the data.

The next section (chapter 4) is related to the execution of this action plan. It presents concretely the setting of the evaluation (the context of the evaluation and the participants), the elaboration of the instruments, and the collection of the data.

Then, the following section is concerned with the analysis of the data and the results of the evaluation (this section may be empty at an initial stage of this document). This section also presents how the different results are taken into account in order to improve the system.

Finally, the last section draws the main conclusions of this evaluation and defines some future work to be conducted in this domain.
2 The Principles of the Evaluation
2.1 Evaluation: Objective & Challenges

The evaluation of a research project always raises many questions and challenges.

A first set of questions is related to the objective of the evaluation. Is the evaluation conducted to guarantee that the resources have been properly used for what they were intended (in particular when it concerns the spending of public money), or is its objective to provide the participants with an assessment and some feedback that will help them better pilot the project and, in particular, maximize the value it generates?

A second set of questions is related to the scope of the evaluation. Are we interested in assessing the process of advancement of the project, or in evaluating the quality of the results generated by this project? Are we interested in evaluating the technical system (the demonstrator) that is being designed, or the approach that this system is expected to validate?

Another set of questions concerns the dynamics of the execution of the project. What are the consequences of the evaluation on the dynamics of advancement of a project? Indeed, evaluation is rarely neutral and brings several secondary effects, such as increased transparency and a magnification of the importance of the elements being monitored, or reduced flexibility and increased risk-avoidance by the people whose activity is being monitored.

An additional set of questions has to do with the operationalization of the evaluation: What amount of resources should be dedicated to the evaluation of the project? How can we evaluate the effort, and in particular decide how the evaluation resources are to be allocated? How should we direct the effort (prioritization)? How do we deal with the risks associated with the evaluation, and in particular the resistance of people and organizations to participating in an activity that consumes their time and may threaten their position?

Finally, a last set of questions is related to the analysis and the use of the results of the evaluation. How do we proceed to extract the maximum from this evaluation, identify the most significant results, and learn from them?

Answering all these questions is difficult, and well beyond the scope of this document. Indeed, while the main focus of a research project should be maximizing the effectiveness of the evaluation effort with respect to the value of the generated knowledge (value for the end user; innovativeness of the solution; capability to exploit this knowledge), a project very rarely provides the time to evaluate the full potential impact on society of the knowledge that has been created.

Besides, as indicated, many factors can make the operationalization of the evaluation delicate and difficult to achieve, such as cost and time factors (evaluation does not come for free, and usually takes time), or less tangible factors such as the willingness of people and organizations to participate in an evaluation that they can consider a threat, and the
effect of the evaluation on the dynamics of execution of a project (the bureaucratic syndrome).

The approach we have decided to adopt in this document is to concentrate as much as possible on the dimension of the evaluation related to the substantive value (effectiveness) of the knowledge that is generated (in order to assess the effectiveness of the approach), and to cover the other dimensions when this first dimension cannot be adequately addressed.
2.2 Evaluating What?

2.2.1 Levels of evaluation

Donald Kirkpatrick, who started his work on learning evaluation in 1959, proposes four levels of evaluation of training programmes (Kirkpatrick, 1996). Level 1 measures whether the learners liked the training (Did learners like it?); Level 2 measures whether the learning objectives have been met (Did learners learn?); Level 3 measures the transfer of skills back to the job (Are they using it?); Level 4 measures the impact on the business (Did it matter?).

We believe that this model of evaluation, which addresses both the efficiency and the effectiveness aspects of training programmes, can easily be transposed to the evaluation of systems in general, and in particular of systems that include a strong technical component (as is the case for the Ontologging system).

We will therefore rely, for the evaluation, on the four Kirkpatrick levels, plus two additional levels (technicalities & economics) that appear to be useful for evaluating technical systems:

• Level 0: Technicalities. Does the approach/system perform technically well?
• Level 1: Users’ acceptance. Do the users like the approach/system?
• Level 2: Assessment. Does the knowledge management approach/system function?
• Level 3: Transfer. Is the approach/system used?
• Level 4: Impact. What is the impact in supporting the organizational processes (does it deliver substantive value to the organization)?
• Level 5: Economics. Does the approach/system perform economically well?
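As an illustration only (the type and names below are ours, not part of the Ontologging platform), the six levels above can be encoded as a small ordered type, which makes it easy to attach questions, instruments, or results to each level:

```python
from enum import IntEnum

class EvaluationLevel(IntEnum):
    """Kirkpatrick's four levels plus the two extensions used in this document."""
    TECHNICALITIES = 0  # Does the approach/system perform technically well?
    ACCEPTANCE = 1      # Do the users like the approach/system?
    ASSESSMENT = 2      # Does the approach/system function?
    TRANSFER = 3        # Is the approach/system used?
    IMPACT = 4          # Does it deliver substantive value to the organization?
    ECONOMICS = 5       # Does the approach/system perform economically well?

# The question addressed at each level, as listed above.
QUESTIONS = {
    EvaluationLevel.TECHNICALITIES: "Does the approach/system perform technically well?",
    EvaluationLevel.ACCEPTANCE: "Do the users like the approach/system?",
    EvaluationLevel.ASSESSMENT: "Does the approach/system function?",
    EvaluationLevel.TRANSFER: "Is the approach/system used?",
    EvaluationLevel.IMPACT: "Does it deliver substantive value to the organization?",
    EvaluationLevel.ECONOMICS: "Does the approach/system perform economically well?",
}
```

Using an ordered enumeration reflects that the lower levels (technical performance, acceptance) are easier to measure than the higher ones (impact, economics), which can mainly be observed in the long run.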
Level 0: Does the system perform technically well? This level reflects the performance of the system from a technical perspective. It covers elements such as speed, scalability, reliability, (technical) flexibility, simplicity (capability to evolve) and openness (ability to interoperate with other systems).

Level 1: Users’ acceptance (do they like it)? This level reflects the perception of the system by the users and is sometimes called the “smile sheet”. It is important to note that a good perception by the users does not guarantee that the system is useful (for instance, very nice-looking graphics or gizmos will please the user, but will not contribute to his performance).
However, the importance of users’ perception should not be underestimated, since it contributes to the motivation of the users, which can be considered very important for the adoption of any radically new system.

The elements that can be evaluated at this level include the user-friendliness of the system, the perceived value and limitations that the users get from the system, and the level of interactivity. Table 1 summarizes the different levels of evaluation.

Level 2: Assessment: Does the approach/system function? This level reflects the richness and completeness of the functionalities of the system. In particular, it covers elements such as the ability to conduct a full session with the system, the number of functions available, and how deeply the system supports the users’ activities.

Level 3: Transfer: Is the approach/system used? This level reflects the adoption of the system in an operational context (beyond testing). In particular, it covers elements such as the adoption of the approach/system by an organization, and also clarifies the context of use. Indicators may include the number of installations effectively used in the organization, the number of activities supported for the organization, the features adopted by the users, and the rate of growth of new installations.

Level 4: Impact: What is the impact on the organization? This level reflects the substantive value of the approach/system for the organization and for the people that use it. Indicators of impact include improved work productivity, better work quality, faster results, and the cognitive/behavioural transformation of the users, leading to an organization better adapted to its environment.

Level 5: Does the approach/system perform economically well? This level is concerned with the evaluation of the system from a market perspective, and in particular its capability to be exploited in a competitive environment. Indicators include the cost of acquisition and the cost of ownership of the solution.

It is important to remember that user organizations are ultimately driven by results and should therefore mainly be interested in Level 4 (impact on the organization), and in particular in the answers to the following questions: (1) Does the proposed approach & system deliver substantive value to the organization, translated into improved flexibility and a better fit with the company’s needs? (2) What are the key elements of this approach & system that contribute the most to the impact, and what is the nature of their contribution? However, Level 4 cannot easily be measured, and the answers to these questions can mainly be observed in the long run.

The evaluation of Level 2 (assessment) provides a useful perspective: it reflects the level of functionality delivered by the approach & system. However, this level can definitely not be considered sufficient, since it only provides a partial picture that does not take into account the meaningfulness & usefulness of the approach and of the system. For instance, we can very well imagine systems (and actually, this applies to most approaches & systems whose design
is only technology driven) that work perfectly and provide very powerful functionalities, but that are considered totally useless by the final user!
Evaluation level | Question addressed (elements assessed) | Measures & indicators
Level 0: Technical performance | Does the system perform technically well? (technical performance) | Performance of the tools supporting this process.
Level 1: Users’ acceptance | Do the users like it? (perceived value) | Ergonomics of the tools supporting this process.
Level 2: Functionalities | Does the approach/system function? (functionalities) | Level of support of the users’ activities. Richness or limitations of the functions.
Level 3: Transfer | Is the approach/system used? (level of adoption) | Features adopted by the users.
Level 4: Impact | What is the impact on the organization? (substantive value) | Impact of the improvement of this process on the performance of the organization.
Level 5: Economic performance | Does the approach/system perform economically well? (economic value) | Cost/benefit of the proposed solution.

Table 1: Evaluating the support of domain processes
2.2.2 The phases of the evaluation: formative, summative and “substantive value”

The evaluation can be done before any system has been implemented (for instance using paper-based scenarios). In this case, the objective of the evaluation (referred to as the formative evaluation) is to provide an assessment that will be used to guide the design of the system.

A system can also be evaluated after its implementation. In this case, this category of evaluation (referred to as the summative evaluation) aims at testing the proper functioning of the system.

Finally, a system can be evaluated according to its more substantive value in a competitive perspective (taking into account the unique characteristics that this system brings when
compare to other systems). We will refer to this category of evaluation as the “substantive<br />
value” evaluation. Note: contrary to the formative and summative evaluations, which are<br />
terms well recognized and accepted by evaluation practitioners and experts, the term<br />
“substantive” evaluation has only been introduced in this document for emphasizing the<br />
importance of assessing the effectiveness dimension of systems.<br />
The elements that are evaluated at the formative phase:<br />
• The needs. This assessment determines who needs the system, how great the need is,<br />
and what might work to meet the need.<br />
• The evaluability. The evaluability assessment determines whether an evaluation is<br />
feasible and how stakeholders can help shape its usefulness.<br />
• The feasibility. The feasibility evaluation helps to determine and identity the risks<br />
associated to the implementation of the solution both from a technical and nontechnical<br />
(socio-psycho-eco) perspective.<br />
• The effectiveness. The evaluation of the effectiveness helps stakeholders to better<br />
understand the substantive value of the solution, and in particular, the impact it can<br />
have in the organization.<br />
The elements that are evaluated at the summative phase:<br />
• The outcome . The outcome evaluations investigate whether the system or technology<br />
caused demonstrable effects on specifically defined target outcomes<br />
The elements that are evaluated at the “substantive value” phase:<br />
• The impact. The impact evaluation is broader and assesses the overall or net effects,<br />
intended or unintended, of the system or technology as a whole.<br />
• The cost. The cost-effectiveness and cost-benefit analysis address questions of<br />
efficiency by standardizing outcomes in terms of their dollar costs and values.<br />
• The capability to develop. This analysis re-examines all the elements in order to<br />
understand the real potential of the solution, and its capability to evolve and deliver<br />
more value in the future.<br />
• The overall value. A meta-analysis is done to integrate the outcome estimates from<br />
multiple studies to arrive at an overall or summary judgement on an evaluation<br />
question.<br />
These evaluation phases (formative, summative and “substantive value”) can be considered<br />
as presenting another perspective on the evaluation levels presented in the previous chapter.
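The cost-effectiveness and cost-benefit element mentioned above can be sketched as follows. The figures and the discount rate in this example are purely illustrative assumptions, not values from the Ontologging evaluation; the sketch simply standardizes yearly benefits and costs in monetary terms and compares their present values:

```python
def cost_benefit(benefits_by_year, costs_by_year, discount_rate=0.10):
    """Toy cost-benefit sketch: discounts yearly benefits and costs
    to present value and returns their ratio. All figures and the
    discount rate are illustrative assumptions."""
    def pv(flows):
        # Present value: each year's flow divided by (1+r)^t.
        return sum(f / (1 + discount_rate) ** t
                   for t, f in enumerate(flows, start=1))
    return pv(benefits_by_year) / pv(costs_by_year)

# E.g. three years of benefits of 50 against an upfront-heavy cost
# profile: a ratio above 1.0 indicates net positive value.
ratio = cost_benefit([50, 50, 50], [100, 10, 10])
print(ratio > 1.0)  # True
```

A ratio above 1.0 would indicate that the discounted benefits exceed the discounted costs, which is the kind of efficiency question this category of evaluation addresses.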
2.3 The Evaluation Methods & Tools<br />
2.3.1 Evaluation: an overview<br />
Evaluation is based on observations and on a model of interpretation. In quantitative<br />
research, these observations usually correspond to quantitative data that can be captured, and<br />
the model of interpretation corresponds to a set of measures that provide a means of<br />
associating numerical values to these data on some scale of comparison. In<br />
qualitative research (McBride and Schostak, 1995), these observations may take the form of<br />
the capture of events (such as “the user has done a specific action”) or the result of an<br />
interview with a user (i.e. some more or less structured text, or a video that records the content<br />
of the interview). The model of interpretation corresponds to the method of analysis (the next<br />
chapter will present the main methods available).<br />
The quality of the sample used (the characteristics and the size of the data) is also an element to<br />
take into account in the evaluation, since it has important implications for the validity of<br />
the evaluation (for instance if the number of observations is too small, or if the sample used for<br />
the observation is biased). Many works on this subject exist in quantitative research (where it<br />
actually constitutes a major dimension of statistics) and, to a lesser extent, in qualitative research<br />
methodology. In quantitative research, probability theory is used to<br />
calculate the sample size that corresponds to a given error margin. In qualitative<br />
evaluation, the size of the sample may be chosen more empirically, in a way that is considered<br />
reasonable and significant enough to provide useful insight.<br />
Note: The size and the characteristics of the sample used for the observation are variables<br />
that should be manipulated with precaution when defining an evaluation, since they are directly<br />
correlated to the effort needed to run the evaluation. Besides, the choice of the size and of the<br />
composition of the sample may be limited by real-world constraints (for instance, finding a<br />
large sample of motivated test users can represent an important difficulty).<br />
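The sample-size calculation mentioned above can be sketched with the standard normal-approximation formula for estimating a proportion. The 95% confidence z-value (1.96) and the worst-case proportion (0.5) are conventional statistical assumptions, not figures from this deliverable:

```python
import math

def sample_size(margin_of_error, confidence_z=1.96, proportion=0.5):
    """Minimum sample size for estimating a proportion within a given
    margin of error, using the standard normal approximation:
    n = z^2 * p * (1 - p) / e^2 (worst case p = 0.5)."""
    n = (confidence_z ** 2) * proportion * (1 - proportion) / margin_of_error ** 2
    return math.ceil(n)

# A 5% error margin at 95% confidence requires 385 participants;
# relaxing the margin to 10% drops the requirement to 97.
print(sample_size(0.05))  # 385
print(sample_size(0.10))  # 97
```

This illustrates the trade-off noted in the paragraph above: tightening the error margin sharply increases the number of test users required, which may collide with the real-world difficulty of recruiting them.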
Many evaluation methods can be used to capture and analyse the data. Some methods<br />
(considered to be more objective) try to observe and evaluate the system in real operation,<br />
while other methods use more indirect proxies, such as the collection of users’ perceptions and<br />
opinions. Each method has its advantages and disadvantages (related to reliability, the<br />
level of effort required, time, complexity, etc.), and different methods should be used<br />
conjointly for the evaluation of a system.<br />
The next paragraph will present different categories of methods available.
2.3.2 Observational methods<br />
Observational methods involve real people using working systems. They are based on<br />
observing users interacting with the actual system and can range from being almost entirely<br />
informal to highly structured. Observational methods should try to be as invisible as possible<br />
to the user.<br />
Direct observation. Users are observed directly doing special or ordinary tasks in their work,<br />
while the observer makes notes about the aspects of their behaviour and performance that she considers<br />
important to record. The evaluators do not have direct contact with the user.<br />
Indirect observation and software logging. Videoing a user’s interaction with a system<br />
allows several aspects of the user’s activity to be recorded. Combined with some form of automatic<br />
keystroke or interaction logging, this enables the collection of vast amounts of data. In some cases,<br />
software-logging tools can be used to record time-stamped key presses and real-time<br />
recordings of the interaction between users and technology. Besides, in many cases,<br />
information systems are able to track, monitor and log the activities of the users<br />
(for instance, that a user has posted a document, or has launched a search).<br />
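A software-logging tool of the kind described above can be sketched minimally as follows. The event names (such as `post_document`) and user identifiers are hypothetical examples, not part of the Ontologging platform:

```python
import time

class InteractionLogger:
    """Minimal sketch of a software-logging tool: records
    time-stamped user events for later analysis."""

    def __init__(self):
        self.events = []  # list of (timestamp, user, action, detail)

    def log(self, user, action, detail=""):
        # Each event is stored with the time at which it occurred.
        self.events.append((time.time(), user, action, detail))

    def actions_by(self, user):
        # Retrieve all events recorded for one user.
        return [e for e in self.events if e[1] == user]

log = InteractionLogger()
log.log("u1", "post_document", "report.doc")
log.log("u1", "search", "ontology evolution")
log.log("u2", "search", "knowledge sharing")
print(len(log.actions_by("u1")))  # 2
```

In practice such a log would be persisted and mined offline, e.g. to count searches per user or reconstruct task sequences.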
Think-aloud protocol. The data collected by the think-aloud protocol contains users’ spoken<br />
observations, which address their cognitive activity and provide a subjective feeling about the<br />
activity they are performing. Users are asked to think aloud to describe what they are doing at<br />
each stage, and why. The evaluators record the users’ actions by using tape recordings, video,<br />
or pencil and paper.<br />
Note: Of course, the think-aloud protocol cannot be considered totally neutral with respect to the<br />
activity of the user, since it introduces a cognitive load that can distract this user.<br />
2.3.3 User’s Opinions methods<br />
The objective of the users’ opinion methods is to elicit the (subjective) perception of the<br />
system by the different categories of users.<br />
Structured interviews. Interviews are particularly useful in eliciting information about user<br />
attitudes, impressions, and preferences. Interviews can be used in conjunction with think-aloud<br />
protocols for retrospective analysis of user responses and events.<br />
Questionnaires. Questionnaires have been used as instruments to assess different aspects of<br />
usability of the human-computer interface. Collecting users' subjective opinions about a<br />
system can remove unpopular and unusable parts early in the design or after delivery.<br />
Focus groups. Focus groups are an effective research tool and are very commonly used in<br />
software development. They are similar to interviews, but because there is a group of<br />
participants, the group dynamics generate more data than one-to-one interviews. Focus groups<br />
involve gathering 8-12 members of a target audience for an open-ended discussion of
a particular topic of interest. Each group session typically lasts 1 to 1½ hours and should be<br />
moderated by a professionally trained facilitator. Groups should always be audiotaped or<br />
videotaped.<br />
2.3.4 Interpretative methods<br />
Interpretative evaluation consists in deriving, from the observation of users in natural<br />
settings (working on at least partly controlled tasks), the structure and thinking behind the<br />
visible behaviours.<br />
Cooperative evaluation. Cooperative evaluation uses the think-aloud protocol, allowing<br />
the user to ask questions of the evaluators, to comment, and to suggest appropriate alternatives,<br />
and the evaluators to prompt the user. Here, the user is encouraged to act as a collaborator in<br />
the evaluation to identify usability problems and their solutions.<br />
Ethnography. Ethnography is an approach developed by anthropologists, which is used in<br />
HCI to inform system design. This method is used "in order to look at aspects of social<br />
interaction and work practices in natural work environments. By doing so, they attempt to<br />
reveal the detailed and systematic interaction between members in an organization and<br />
therefore explicate the underlying work practices".<br />
Scenario-based design. The phrase “scenario-based design” represents a diverse field of<br />
design practice (Carroll, 1995). The various techniques are connected by the use of scenarios<br />
to ground the design process in the situated tasks of users. These scenarios describe<br />
sequences of actions taken by a user with a specific goal in mind. These methods are meant to<br />
make explicit those assumptions and activities of the design process which are usually<br />
implicit. Dialogue can, for instance, be used to conduct the inquiry (Erskine, Carter-Tod,<br />
and Burton, 1997). Practically, the user participants are asked in a dialogue to provide a<br />
context (who they are), a specific goal (for instance, they may want to capitalize on the<br />
knowledge of a just-finished project), and an action (a detailed description of how they would<br />
proceed to achieve this goal using the system that is being tested).<br />
Contextual inquiry. Contextual Inquiry (Beyer and Holtzblatt, 1998) is a field study method<br />
where usability specialists observe people performing their usual job tasks in the context of<br />
their actual work situations. Contextual inquiry is basically a structured interviewing method<br />
based on a few core principles (context, partner, focus): (1) the users are observed in their<br />
own working situation; (2) the researcher and the users are equal partners in investigating and<br />
understanding the usage of a system; (3) a focus is defined, which can be seen as a number of<br />
assumptions and beliefs concerning what needs to be accomplished and how to accomplish it.<br />
2.3.5 Predictive methods<br />
Experts separately review a system and categorize and justify problems. Predictive evaluation<br />
can be based on a specific theory (for instance in cognitive psychology) or on a short set of<br />
heuristics (rules of thumb).
Cognitive walkthrough. The cognitive walkthrough is an evaluation technique that brings<br />
psychological theory into the informal and subjective walkthrough technique, proposed by<br />
Polson et al. (1992). It aims to evaluate the design in terms of how well it supports the user as<br />
he learns how to perform the required task. The designer or an expert in cognitive psychology<br />
performs the walkthrough. The expert works through the design for a particular task, step by<br />
step, identifying potential problems against psychological criteria. The analysis is focused on<br />
the user’s goals and knowledge. The cognitive walkthrough must show if, and how, the<br />
interface will guide the user to generate the correct goal to perform the required task, and to<br />
select the necessary actions to fulfil that goal.<br />
Heuristic evaluation. Heuristic evaluation (proposed by Nielsen and Molich) involves<br />
experts assessing the design. In this approach a set of usability criteria or heuristics is<br />
identified to guide design decisions. Evaluators independently run through the performance of<br />
the task set with the design and assess its conformance to the criteria at each stage. Nielsen<br />
suggests that around five evaluators find about 75% of potential usability problems.<br />
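Nielsen's suggestion can be illustrated with his standard aggregation model, in which the share of problems found by several independent evaluators follows 1 - (1 - L)^i. The single-evaluator detection rate L used below is an assumed value chosen to match the 75% figure quoted above, not a number from this deliverable:

```python
def proportion_found(evaluators, single_rate):
    """Nielsen's aggregation model for heuristic evaluation: the share
    of usability problems found by i independent evaluators is
    1 - (1 - L)^i, where L is the detection rate of a single evaluator
    (an assumed illustrative value here)."""
    return 1 - (1 - single_rate) ** evaluators

# With a single-evaluator detection rate of about 24%, five evaluators
# jointly uncover roughly 75% of the problems.
print(round(proportion_found(5, 0.24), 2))  # 0.75
```

The model also shows the diminishing returns of adding evaluators, which is why a small panel of around five is usually considered a reasonable cost/coverage compromise.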
2.3.6 Test, Experiment and Usability Engineering methods<br />
Test, experiment and assessment evaluation methods consist in setting up a more formal<br />
setting for the evaluation, and in particular in the organization of evaluation sessions with clear<br />
evaluation objectives.<br />
Tests and assessments can be useful tools in evaluation to measure the impact of the system<br />
on the participants’ processes. The tests are conducted successively without and with the use<br />
of the system, and the results are compared.<br />
Self-reports. Participants write reports on how much the system has contributed to improving<br />
their work processes (more than an opinion, users are asked for an explanation).<br />
Usability engineering. Usability engineering is an approach to system design where the<br />
usability issues of a system are specified quantitatively in advance. The development of the<br />
system is based on these metrics, which can be used as criteria for testing the usability of a<br />
system or prototype. Examples of such metrics are 'percentage of errors' and 'number of<br />
commands used'.<br />
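Such metrics can be computed directly from a logged session. The sketch below derives the two example metrics from a hypothetical session log of (command, succeeded) pairs; the log format is an assumption for illustration:

```python
def usability_metrics(session):
    """Compute the two quantitative metrics mentioned above
    ('percentage of errors', 'number of commands used') from a
    hypothetical session log of (command, succeeded) pairs."""
    commands = len(session)
    errors = sum(1 for _, ok in session if not ok)
    return {"commands_used": commands,
            "error_rate_pct": round(100 * errors / commands, 1)}

session = [("open", True), ("search", False),
           ("search", True), ("annotate", True)]
print(usability_metrics(session))
# {'commands_used': 4, 'error_rate_pct': 25.0}
```

Specifying target values for such metrics in advance (e.g. "error rate below 10%") is what turns them into pass/fail usability criteria for a prototype.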
Experiments. Experiments are the most formal methods of usability testing; they require<br />
preparation and knowledge to evaluate the design of a system or prototype by quantifying<br />
user performance such as 'time on task', 'number of errors', etc.
2.4 Defining the evaluation process<br />
For the different reasons we have already mentioned (cost, time, the difficulty of finding<br />
participants, etc.), evaluating everything is not possible, and to a certain extent it is not<br />
really desirable (lack of focus, introduction of rigidity, etc.).<br />
An evaluation process aiming at delimiting the research and conducting it realistically can be<br />
used.<br />
The different stages of this research process in social sciences are the following:<br />
• Stage 1: The definition of a research question. This involves contributing to<br />
building a body of knowledge and developing theory.<br />
• Stage 2: The development of an instrument. Having defined the research question,<br />
the research investigator needs to develop measurement instruments to capture the<br />
data for future analysis and to select the context (the site, the users, etc.) in which the<br />
measurement will take place.<br />
• Stage 3: The data gathering. This stage is related to the execution of the chosen<br />
instruments in the selected context in order to collect the data. This stage may include<br />
the execution of some preliminary actions, such as for instance trust building in the<br />
case that the data has to be collected from people.<br />
• Stage 4: The analysis of the data. This stage consists in the analysis of the data in<br />
the perspective of the research questions previously identified, and in the<br />
determination of what has been learned from the research.<br />
• Stage 5: The dissemination. This stage consists in the selection and the<br />
dissemination of the most important (in originality and impact) research findings generated<br />
by this research.<br />
These different stages can be slightly relaxed in the Case Study Research method (Eisenhardt,<br />
1989; Yin, 1994; Meredith, 1998), a method favoured for conducting research in a field<br />
that is still largely unexplored, and which is characterised by a paucity of theories, and by<br />
the complexity and lack of well-supported definitions and metrics (Stuart et al., 2002). For<br />
example, Eisenhardt (1989), while acknowledging the role of good research questions and<br />
theoretical constructs, argues that propositions can be developed (and tested) during data<br />
collection, rather than prior to it. Because the aim is to obtain a rich understanding of the<br />
cases in all their complexity, insights gained during data collection can be used to inform the<br />
theory.<br />
2.4.1 The definition of the research questions<br />
An initial definition of the research question should be made in order not to become<br />
overwhelmed by the volume of the data. A priori specification of constructs can also help to<br />
shape the initial design of theory-building research, although this type of specification is not
common in such studies. It is important to recognize that the early definitions of the research<br />
question and the possible constructs are only tentative.<br />
2.4.1.1 Identifying the goals and the scope<br />
The identification of the most important points and objectives is very specific to the system &<br />
approach that is subject to the evaluation.<br />
Some of the critical points to identify are the main processes and dimensions that the system<br />
is expected to support. For instance, in the context of learning systems, the processes and<br />
dimensions may include “individual learning”, “experiential learning” or “collaborative<br />
learning” (via learning networks). In the context of knowledge management, the selected<br />
processes and dimensions are to be taken from the different knowledge processes used in<br />
organizations, such as “knowledge creation” (creativity), “knowledge acquisition”, “knowledge<br />
structuring”, “knowledge capitalization”, “knowledge searching”, “knowledge sharing” (and in<br />
particular the support of its social aspects), etc.<br />
These objectives and points should include both short-term objectives & long-term<br />
objectives, distinguish the points that are very general from those that are very narrow, and also be used to<br />
provide insights for the design process itself (formative evaluation), for the functioning<br />
of the result (summative evaluation), and for its effectiveness (the substantive value of the solution).<br />
More horizontal & global characteristics may also be evaluated such as:<br />
• “Usability”: A system is usable if desired task quality can be achieved with acceptable<br />
costs to users.<br />
• “Flexibility”: A system is flexible if it is able to adapt, with a reasonable level of<br />
effort, to changes that were not initially planned.<br />
• “Social acceptability”: A system is socially acceptable if its introduction does not<br />
generate too high a level of resistance from people due to too radical a<br />
change in their roles.<br />
• Etc.<br />
Again, the choice of these characteristics is very dependent on the target characteristics of the<br />
system to be evaluated. For instance, “flexibility” may be considered of very little relevance<br />
in the design of a satellite system, for which the possibilities of intervention<br />
are very limited once the satellite has been launched.<br />
2.4.1.2 Defining elements (criteria and indicators) for evaluating the goals<br />
The evaluation of each goal will rely on the definition of a set of elements (criteria and<br />
indicators) that best reflect how the different goals are fulfilled by the approach / system<br />
that is to be evaluated.<br />
The selection of the elements to be evaluated will take into account the relevance of these<br />
elements related to the importance of the goals and points that have been identified, as well as<br />
the level of detail desired or possible (affordable).
First, these elements should include the most important mechanisms that the system<br />
provides to support the domain addressed. For instance, “the search function” will be<br />
considered a critical function to be evaluated in relation to the goal of “providing support to<br />
knowledge identification”.<br />
Second (and perhaps even more importantly), these elements should include the features that<br />
are considered unique to this system. For instance, if automatic clustering represents a<br />
unique feature of this system, it will have to be evaluated in depth, since it will represent the<br />
competitive advantage of this particular system when addressing the goal of “supporting<br />
knowledge identification”.<br />
Another element not to underestimate is that the choice of the priorities also has consequences<br />
for the dynamics of the project. The evaluation is not neutral, and we can expect that the<br />
elements that are evaluated will receive more attention in the execution of the project.<br />
Besides, the evaluation may trigger some resistance, both from people participating in this<br />
evaluation (they may see it as an additional effort, and consider the associated monitoring<br />
a threat to their position) and from organizations (which may be reluctant to shed light on<br />
some of their current practices and the systems they are currently using).<br />
2.4.2 Selecting the evaluation methods (defining the instrument)<br />
The concept of population is crucial (what kind of data are we interested in collecting?). It defines<br />
the set of entities from which the research sample is to be drawn, controls variation and helps<br />
define the limits of generalizing the findings. Theory-building research relies on theoretical<br />
sampling. In crafting instruments and protocols, theory-building researchers typically combine<br />
multiple data collection methods.<br />
The choice of the evaluation methods to use and the elaboration of the action plan will<br />
depend on different factors such as:<br />
• The elements and indicators that have been previously identified.<br />
• The stage of system development<br />
• The resources available<br />
• The time available<br />
• The availability of potential users<br />
• The acceptability of intrusion on users<br />
• The type of output required<br />
• The precision / reliability desired<br />
• Etc.
The action plan will consist in different phases:<br />
• The formative phase. Conducted mainly using interviews, mock-ups and early<br />
prototypes, the objective of the formative evaluation is to provide an assessment that<br />
will be used to guide the design of the system.<br />
• The summative phase. This phase will focus more on assessing the functionalities of<br />
the system being designed (this phase will focus on the testing of the prototype, and<br />
will collect data via interviews, and observational methods).<br />
• The “substantive value” phase. This phase will focus on evaluating the value of the system.<br />
It will involve the users testing a consolidated version of the prototype, as well as<br />
experts and business development managers assessing the real value of the system<br />
(what are its unique features, how it compares with other systems) and its<br />
“economical” potential (ROI, business plan, etc.).<br />
2.4.3 Executing the instrument and gathering the data<br />
For each phase, the following steps will have to be defined:<br />
• Identification of the source of data for the evaluation (and in particular identifying the<br />
users’ groups participating in the evaluation and the experts).<br />
• Any preparation that would be necessary before the evaluation (for instance some<br />
evangelisation work towards the users, and their recruitment).<br />
• The setting of dates for the tests to be conducted negotiated with the users’ groups<br />
(date of sessions, starting date and duration in the case of continuous evaluation, etc.).<br />
• The setting of date for the analysis of the data (milestones).<br />
• The definition of how the results of the analysis are going to be fed-back to the<br />
system.<br />
Note: The evaluation is not necessarily as linear as it may appear, but consists of a series of<br />
iterations: test, analysis, and integration of the findings.<br />
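The steps listed above can be captured in a simple planning structure. The sketch below is purely illustrative: the field names and sample values are assumptions for exposition, not part of the Ontologging plan:

```python
from dataclasses import dataclass

@dataclass
class PhasePlan:
    """Illustrative structure for planning one evaluation phase,
    mirroring the steps listed above (data sources, preparation,
    dates for tests and analysis, and a feedback channel)."""
    name: str
    data_sources: list       # users' groups and experts providing data
    preparation: list        # e.g. evangelisation work, recruitment
    test_dates: list         # negotiated session dates
    analysis_milestones: list
    feedback_channel: str    # how results are fed back to the system

summative = PhasePlan(
    name="summative",
    data_sources=["user group A", "usability experts"],
    preparation=["user recruitment", "evangelisation session"],
    test_dates=["session 1", "session 2"],
    analysis_milestones=["milestone M1"],
    feedback_channel="design team review",
)
print(summative.name)  # summative
```

Making the plan explicit in this way also supports the iterative test / analysis / integration loop noted above, since each iteration can reuse and amend the same structure.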
2.4.4 Analysing the data, theory confirmation & building<br />
Analysing the data. This is the least codified part of the process. The overall idea is to<br />
become intimately familiar with the data collected. This process allows the unique patterns of<br />
each case to emerge before investigators push to generalize patterns across cases.<br />
Searching for Patterns. There are various tactics for doing that. The idea behind these<br />
tactics is to force investigators to go beyond initial impressions, especially through the use of<br />
structured and diverse lenses on the data.<br />
Shaping Hypotheses. Researchers constantly compare theory and data, iterating toward a<br />
theory that closely fits the data. This stage includes sharpening the constructs by refining
their definitions, as well as by building the evidence which measures them in each case. Another step<br />
is verifying that the emergent relationships between constructs fit the evidence in each<br />
case.<br />
Enfolding Literature. Linking results to the literature is crucial in theory building research<br />
because the findings often rest on a very limited number of cases.<br />
Reaching Closure. Two issues are important in reaching closure: when to stop adding cases<br />
and when to stop iterating between theory and data. In both cases, researchers should stop<br />
when theoretical saturation is reached, that is, when the incremental improvement to the<br />
theory is minimal.
3 Evaluating the Ontologging System<br />
This section of the document will now present how the principles presented<br />
previously are being applied for the evaluation of the Ontologging knowledge management<br />
system.<br />
3.1 Overview<br />
The evaluation of the Ontologging project will consist in the use of a combination of the different<br />
techniques described in the previous section, and in particular: (1) predictive methods, aiming<br />
to exploit the expertise and know-how on knowledge management (both from a technical and<br />
an organizational perspective) and on ontologies present in the consortium; (2) user<br />
opinion methods, aiming to integrate the users’ needs (and not the users’ wants) into the<br />
proposed system, as well as to validate with the users the substantive value of the approach<br />
proposed; (3) observational methods and test and experimental methods, aiming to assess the<br />
capability of the designed system to fulfil its objectives, as well as its adoption<br />
by the users, both from a technical and an organizational perspective (overcoming resistance<br />
to change).<br />
The result of the evaluation will be used for the following purpose:<br />
• To help the design process. Indeed, Ontologging uses an iterative process (spiral of<br />
analysis of needs, specification, testing of result, adjustment) in which the system is<br />
progressively detailed (versus the top-down V design process). The feedback from the<br />
evaluation will help the process of definition and adjustment of the system.<br />
• To assess the substantive value of the system that is being generated, and to guarantee<br />
that the focus of the project is not lost in favour of other sub-goals (such as technical<br />
goals) and remains consistent with the original high-level objectives.<br />
As indicated in the previous section of this document, different phases will be used for the<br />
evaluation of Ontologging:<br />
• The formative evaluation phase. The objective of this phase is to provide first<br />
feedback that will flow into the design of the system. This phase will mainly be done<br />
via the collection of users’ and experts’ opinions (via questionnaires and interviews), as well<br />
as the initial testing of the mock-up / early version of the prototype for validating the<br />
main ideas (versus testing the functionalities).<br />
• The summative evaluation phase. The objective of this phase is to assess the<br />
functioning of the system. This phase will start with the availability of the first<br />
version of the prototype (and not the mock-up), and will aim at assessing the different<br />
functions of the system, according to their performance, the richness of the<br />
functionality, and their ease of use. The methods used will mainly be based on users’<br />
opinions (related to ergonomics), as well as on the organization of specific tests for<br />
evaluating the different functions of the system (such as the authoring of ontologies,<br />
the population of content, the search for content, and the collaborative capabilities).<br />
• The “substantive value” phase. The objective of this phase is to assess and reflect the<br />
substantive value (from a usability and economic perspective) of the Ontologging<br />
system in addressing the high-level objective of the project (how the ontology concept<br />
can help to better support knowledge management processes). This phase will<br />
mainly make use of the consolidated version of the prototype (the testers will be able<br />
to focus their attention on the possibilities of using the system to support their<br />
working practices). Practically, the methods used will rely on users’ opinion methods<br />
(related to the perceived usefulness of the system to their work and their willingness<br />
to adopt it) and on observational methods (in order to identify the<br />
real usage of the system, and in particular which functions are effectively<br />
used).<br />
3.2 Goals and scope of the evaluation of Ontologging<br />
One of the main objectives of Ontologging is to investigate how ontology technologies and<br />
approaches can be used to design knowledge management systems that better support<br />
knowledge management processes in organizations.<br />
We can indeed expect that ontology-based knowledge management systems will be able to<br />
benefit from a number of advantages of ontology approaches, such as: powerful knowledge<br />
representation, easy evolution, interoperation, and minimal ambiguity.<br />
3.2.1 Knowledge Management & Ontologies, some clarification<br />
Before going into the exercise of defining the goals and the scope, let’s first provide some<br />
clarification of the different concepts involved: knowledge management, knowledge<br />
management systems and ontology.<br />
What is knowledge management and what is a knowledge management system?<br />
Knowledge management refers to the set of activities and processes which aim at explicitly<br />
supporting the way knowledge is managed, and in particular:<br />
• How the knowledge is acquired (business intelligence, training, etc.)?<br />
• How the knowledge is created (innovation)?<br />
• How the knowledge is assessed?<br />
• How the knowledge processes are structured?<br />
• How t he knowledge is structured and transformed?<br />
• How the knowledge, stored, and more generally capitalized?<br />
• How the knowledge retrieved and searched?<br />
• How the knowledge is shared amongst people (CoP, benchmarking)?<br />
• How the knowledge is applied (adapted, helping to the decision, etc.)?<br />
• Etc.
Knowledge management systems represent technical infrastructures and methods that can be<br />
used to support a set of these knowledge activities and processes.<br />
Note:<br />
Our definition of knowledge is very broad, and refers to any form of information which<br />
embeds some context and some level of actionability. Knowledge may refer to some form of<br />
intellectual asset that people can activate in order to generate new objects, but also to<br />
processes (knowledge processes, organizational knowledge) that are embedded in the<br />
functioning of the organization.<br />
What is an Ontology?<br />
Ontologies represent a proposed approach for representing knowledge.<br />
“An ontology is an explicit specification of a conceptualization.” (Gruber, 1993)<br />
“An ontology provides an explicit representation for structuring all the entities and<br />
relationships that are to be talked about in some domain” (Bateman, 2004).<br />
More concretely, ontologies propose a conceptual framework that provides the capability:<br />
• To model the knowledge deeply (as in the semantic web), making search more powerful<br />
(more precise requests can be defined) and less ambiguous (less noise).<br />
• To evolve relatively easily. Ontology technologies support changes in the structure<br />
of the knowledge without impacting the whole system (or at least only in a very<br />
localized way).<br />
• To interoperate: ontology models provide very clear and rigorous interfaces, which<br />
facilitate the interoperability between different systems. Besides, mapping<br />
mechanisms allowing translation from one ontology to another can help<br />
connect systems that implement similar knowledge and data models but that<br />
were defined separately.<br />
• To help structure the knowledge processes in the organization by contributing to the<br />
establishment of a common language. Ontologies provide the means to define<br />
unambiguously the different terms of a domain.<br />
Ontology technologies and systems (such as ontology editors) are simply tools that have<br />
been designed to support this approach.<br />
Practically, ontology systems and technologies propose mechanisms that help:<br />
• The definition of ontologies formalizing a target domain (the semantic of this domain<br />
is explicitly defined).<br />
• The use of these ontologies to formalize explicitly the different knowledge objects<br />
(documents, practices, people, etc.) of the domain addressed.<br />
• Semantic search of the different knowledge objects (the queries express the semantics,<br />
and not only the existence of keywords).<br />
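The mechanisms above can be illustrated with a minimal sketch (all names are hypothetical and invented for illustration, not taken from the Ontologging code base): a domain ontology as a concept hierarchy with properties, where a concept inherits the properties of its ancestors.<br />

```python
# Hypothetical sketch of a domain ontology: a concept hierarchy plus
# properties attached to concepts (names are illustrative only).

SUBCLASS_OF = {            # child concept -> parent concept
    "TenderDocument": "Document",
    "TechnicalReport": "Document",
    "Document": "KnowledgeObject",
    "Person": "KnowledgeObject",
}

PROPERTIES = {             # concept -> {property name: expected range}
    "Document": {"author": "Person", "creationDate": "date"},
    "TenderDocument": {"customer": "string"},
}

def ancestors(concept):
    """Concepts that subsume the given concept, nearest first."""
    chain = []
    while concept in SUBCLASS_OF:
        concept = SUBCLASS_OF[concept]
        chain.append(concept)
    return chain

def applicable_properties(concept):
    """Properties defined on the concept itself or inherited from ancestors."""
    props = {}
    for c in [concept] + ancestors(concept):
        for name, rng in PROPERTIES.get(c, {}).items():
            props.setdefault(name, rng)
    return props
```

An ontology editor would additionally let authors define arbitrary relationships between concepts and check consistency; this sketch only shows subsumption and property inheritance.<br />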
• People communication and organizational interoperation (ontologies provide non-ambiguous<br />
languages, facilitating the interoperation between people and<br />
organizations).<br />
3.2.2 The goals of Ontologging: the design of a knowledge management system that better<br />
supports some knowledge processes<br />
The goal of the Ontologging project is to design a system that takes advantage of the ontology<br />
approach to better support some of the knowledge management processes of the organization.<br />
This general goal can be refined in the following sub-goals:<br />
• Goal 1: The Ontologging system will help organizations to better structure their knowledge<br />
and elicit their knowledge processes.<br />
• Goal 2: The Ontologging system will help organizations to better capitalize their<br />
knowledge assets.<br />
• Goal 3: The Ontologging system will help organizations to better retrieve (locate / search)<br />
their knowledge.<br />
• Goal 4: The Ontologging system will help people in organizations to better share their<br />
knowledge (assets & processes).<br />
3.2.2.1 Goal 1: Ontologging will help the structuring of the organization<br />
Ontologging provides a framework (tools and theories) for representing unambiguously how<br />
the knowledge is structured. Practically, a domain is modelled via the definition of concepts,<br />
the properties associated with these concepts, and the relationships between these concepts.<br />
Ontologging will in particular help organizations to capture (elicit) the complexity of<br />
the knowledge and the processes used in the organization, via the definition of a common and<br />
semantically well-specified language (ontologies) for describing the domains in which the<br />
organization operates and the structure of the different processes.<br />
3.2.2.2 Goal 2: Ontologging will help the capitalization of the knowledge of the<br />
organization<br />
Ontologging provides the means (based on the previous structure definition) to represent the<br />
knowledge (knowledge assets) of the organization. Practically, this representation consists in<br />
categorizing the “chunks” of knowledge (documents, people descriptions, project descriptions,<br />
etc.) according to the shared ontologies of the organization, and in creating semantic<br />
networks that rely on these ontologies.<br />
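As an illustration of this categorization step (a hedged sketch with invented names and data, not the actual DUI or Office Connector API), capitalizing a chunk of knowledge amounts to storing it together with a concept taken from the shared ontology:<br />

```python
# Hypothetical sketch: entering knowledge "chunks" into a knowledge base,
# categorized against the organization's shared ontology.

SHARED_ONTOLOGY = {"Document", "TenderDocument", "Person", "Project"}

knowledge_base = []

def capitalize(chunk_id, concept, properties):
    """Accept a chunk only if it is categorized with a known concept."""
    if concept not in SHARED_ONTOLOGY:
        raise ValueError(f"unknown concept: {concept}")
    entry = {"id": chunk_id, "concept": concept, **properties}
    knowledge_base.append(entry)
    return entry

# A document annotated with a concept and its properties:
capitalize("doc-42", "TenderDocument",
           {"author": "alice", "customer": "ACME"})
```

The point of the check is that every stored chunk speaks the shared vocabulary, which is what later makes semantic retrieval possible.<br />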
3.2.2.3 Goal 3: Ontologging will facilitate knowledge retrieval<br />
By contributing to the explicit definition of the semantics of the knowledge, the Ontologging<br />
approach makes the operations of browsing and searching this knowledge more powerful and<br />
higher level. In particular, browsing and searching can be done not only at the lexical level,<br />
through keywords, but can also exploit the high-level concepts that have been defined in the<br />
ontology and navigate the semantic networks.<br />
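The difference with purely lexical retrieval can be sketched as follows (illustrative names and data only; the actual Ontologging search engine is of course richer): a semantic query for a concept also returns instances of its sub-concepts, which a keyword match would miss.<br />

```python
# Hypothetical sketch: semantic search via the concept hierarchy versus
# plain keyword search over titles.

SUBCLASS_OF = {"TenderDocument": "Document", "TechnicalReport": "Document"}

KB = [
    {"id": "d1", "concept": "TenderDocument", "title": "ACME tender"},
    {"id": "d2", "concept": "TechnicalReport", "title": "DUI design notes"},
    {"id": "p1", "concept": "Person", "title": "Alice"},
]

def is_a(concept, target):
    """True if `concept` equals `target` or is one of its sub-concepts."""
    while concept is not None:
        if concept == target:
            return True
        concept = SUBCLASS_OF.get(concept)
    return False

def semantic_search(target_concept):
    return [e["id"] for e in KB if is_a(e["concept"], target_concept)]

def keyword_search(word):
    return [e["id"] for e in KB if word.lower() in e["title"].lower()]
```

Here semantic_search("Document") finds both documents although neither title contains the word "document", while keyword_search("document") finds nothing.<br />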
3.2.2.4 Goal 4: Ontologging will facilitate knowledge sharing and exchange<br />
By providing a shared vocabulary (the ontologies) and making the knowledge<br />
more visible, Ontologging will contribute to improving the shared understanding of the values of the<br />
organization, and will facilitate the communication and inter-operation between the different<br />
parts of the organization.<br />
3.3 Selecting the elements to be evaluated (evaluation criteria)<br />
The support of all the knowledge processes / sub-goals that have been identified previously<br />
(structuring role, capitalizing knowledge, searching and browsing knowledge, sharing<br />
knowledge) will have to be evaluated along the formative, summative and substantive<br />
dimensions.<br />
As indicated, however, a complete and detailed evaluation of each goal according to the levels<br />
of evaluation introduced in the first section of this document would require too great an effort<br />
and would take too much time. Therefore, only a partial evaluation will be considered,<br />
consisting in the selection of a subset of the most important and representative elements that<br />
will be evaluated.<br />
This section will first try to identify, for each of the goals, the different elements that could be<br />
considered candidates for the evaluation. Then, based on their importance, it will select a few of<br />
these elements that will be subject to the effective evaluation of the system in the evaluation<br />
plan.<br />
3.3.1 Identifying the elements associated to each of the goals<br />
Ontologging proposes different means, tools and approaches for fulfilling the goals that have<br />
been previously identified.<br />
3.3.1.1 Elements associated to a structuring role (Goal 1)<br />
Ontologging proposes different components / elements to define a rich vocabulary of<br />
concepts, with well-defined (unambiguous) semantics, agreed across the whole<br />
organization:<br />
• The Ontology Editor, for the creation of the ontologies modelling a domain of<br />
application (domain ontologies, event ontologies, user ontology) in terms of concepts<br />
and properties.<br />
• The Ontology & Knowledge Evaluator (OKE), which provides the means to evaluate (and<br />
therefore to readjust) the authored ontologies.<br />
Other, less technical elements will also need to be considered, such as:<br />
• The effort and the expertise required to author complex ontologies.<br />
• The difficulty of making these ontologies evolve.<br />
• How to address the conflicts that may occur when defining ontologies at the<br />
company level (what happens if two departments have diverging definitions of a<br />
term?).<br />
3.3.1.2 Elements associated to knowledge capitalization (Goal 2)<br />
Ontologging provides several tools that can be used for representing the knowledge of the<br />
organization:<br />
• The DUI (Distributed User Interface) is a tool that allows users (authors) to describe<br />
(using the ontology “shared vocabularies”) and enter new knowledge into the<br />
Knowledge Base.<br />
• The Office Connector allows users (authors) to enter knowledge directly from inside<br />
Microsoft Office. The Office Connector is a Visual Basic extension of Microsoft<br />
Office, which can be used to annotate the document being edited with the domain<br />
ontology and enter the result in the Knowledge Base.<br />
• The user profile editor allows the users to define their profile (based on the user<br />
ontology that has been previously defined using the Ontology Editor). The user profile<br />
editor contributes to supporting the management of the tacit knowledge of the<br />
organization, and in particular access to the knowledge that is only present in<br />
people’s heads.<br />
• The OKE provides the means to evaluate the knowledge stored in the Ontologging<br />
server, and therefore provides some indications useful for its reorganization.<br />
Non-technical elements will also need to be considered, such as:<br />
• What is the overhead effort in the knowledge capitalization process?<br />
• Will formal roles (such as authors) have to be defined in the organization?<br />
• What are the incentive measures that will encourage people in the organization to<br />
capitalize knowledge, or to update their profile?<br />
• How to overcome resistance in adopting new knowledge capitalization practices?<br />
3.3.1.3 Elements associated to knowledge searching and browsing (Goal 3)
Ontologging provides several tools that can be used for searching and navigating the<br />
knowledge:<br />
• The Ontologging DUI provides the means for browsing and searching the knowledge<br />
that has been stored in the Ontologging server. The underlying search engine is aware of<br />
the semantics of the knowledge, and in particular goes beyond simple keyword<br />
search. Besides, the inference engine (embedded in the DUI) allows querying<br />
ontologies to discover information not explicitly present in the ontology.<br />
• The user profile editor provides a specialized tool to enter and maintain user<br />
information. This tool helps in particular to navigate the network of expertise (people)<br />
of the organization.<br />
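A minimal illustration of such inference (hypothetical topic and people names, not the DUI's actual inference engine): a query for experts on a broad topic also returns people whose declared competency is a narrower topic, a fact not explicitly stored in any profile.<br />

```python
# Hypothetical sketch: discovering experts not explicitly annotated with the
# queried topic, by following a narrower-than relation between topics.

NARROWER_THAN = {
    "OntologyEngineering": "KnowledgeManagement",
    "SemanticSearch": "KnowledgeManagement",
}

PROFILES = {"alice": ["OntologyEngineering"], "bob": ["Accounting"]}

def broader_topics(topic):
    """The topic itself plus every broader topic above it."""
    topics = {topic}
    while topic in NARROWER_THAN:
        topic = NARROWER_THAN[topic]
        topics.add(topic)
    return topics

def find_experts(topic):
    return sorted(person for person, skills in PROFILES.items()
                  if any(topic in broader_topics(s) for s in skills))
```

A query for "KnowledgeManagement" returns alice even though her profile only declares the narrower competency.<br />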
Non-technical elements will also need to be considered, such as:<br />
• What is the expertise necessary to properly exploit the technology?<br />
• Will training be necessary?<br />
• Will the value of Ontology techniques be visible enough (versus more traditional<br />
methods)?<br />
3.3.1.4 Elements associated to knowledge sharing and exchange (Goal 4)<br />
Different tools in Ontologging contribute to facilitating knowledge sharing in the<br />
organization, and knowledge exchange between the departments:<br />
• The Ontology Editor, for the creation of the shared ontologies. Besides, Ontologging<br />
even provides some mechanisms for the collaborative authoring of ontologies (using<br />
agents for the notification of conflicts).<br />
• The ontology mapping tool provides the means to reconcile ontologies that have<br />
been defined separately by different parts of the organization (different departments),<br />
allowing them to interoperate.<br />
• The user profile editor, by making people’s profiles more explicit, can help the<br />
circulation of the tacit knowledge (present in people’s heads) of the organization.<br />
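The reconciliation performed by the mapping tool can be sketched as a translation table between two department ontologies (the concept names below are invented for illustration and are not Ontologging's):<br />

```python
# Hypothetical sketch: reconciling two separately defined ontologies with a
# mapping table, so annotations from department A can be read under
# department B's vocabulary.

MAPPING_A_TO_B = {"Bid": "TenderDocument", "Client": "Customer"}

def translate(annotation, mapping):
    """Rewrite an annotation's concept; leave unmapped concepts unchanged."""
    concept = mapping.get(annotation["concept"], annotation["concept"])
    return {**annotation, "concept": concept}

# Department A's annotations, viewed through department B's ontology:
shared_view = [translate(a, MAPPING_A_TO_B)
               for a in [{"id": "a1", "concept": "Bid"},
                         {"id": "a2", "concept": "Report"}]]
```

A real mapping tool must also handle partial and conflicting correspondences; this sketch only shows the one-to-one case.<br />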
Non-technical elements will also need to be considered, such as:<br />
• Will the availability of tools be enough, or will a strong signal from top<br />
management also be important for better inter-department communication?<br />
• What will be the incentive for people to share their knowledge, in particular for the<br />
sales people (what is in it for me)?<br />
3.3.2 Levels of evaluation for Ontologging: what could be evaluated<br />
For information, let’s provide a perspective on the evaluation according to the levels<br />
described in the first part of the document.<br />
3.3.2.1 Level 0: Does the system perform technically well?<br />
This level addresses the evaluation of the technical characteristics of the Ontologging platform<br />
and components; in particular, whether the different tools (DUI, Ontology Editor, user profile editor,<br />
the reconciliatory agent, etc.) are easy to set up, reliable, fast, interoperable, etc.<br />
3.3.2.2 Level 1: Users’ acceptance (do they like it)?<br />
This level is related to the ergonomics of the Ontologging components, such as the perceived<br />
speed, the aesthetics, the readability of the fonts, the navigability, and the interactivity (drag<br />
& drop capabilities, etc.).<br />
3.3.2.3 Level 2: Assessment: does the approach/system function?<br />
This level addresses the functionality of the different components of Ontologging, for<br />
instance: can we create any ontology, or what are the limitations? What are the functions<br />
currently implemented related to knowledge searching? What are the different means for an<br />
agent to intervene to support the collaborative authoring process? What can be evaluated, etc.<br />
3.3.2.4 Level 3: Transfer: is the approach/system used?<br />
This level addresses the use of the Ontologging system in the tester companies (and in<br />
particular Indra) for supporting their knowledge management processes. This level also<br />
assesses the conditions of use of Ontologging, in particular in relation to other knowledge<br />
management approaches (such as the Meta4 product Knownet). For instance, it evaluates the<br />
use of the system in better supporting the work processes (for instance the tender process)<br />
and the collaborative dimension (for instance, are the agent mechanisms effectively used to<br />
support collaborative ontology authoring?). Other questions: are the advanced searching<br />
capabilities (inference) used? How often are the different ontologies updated? Do users use the<br />
knowledge evaluation features?<br />
3.3.2.5 Level 4: Impact: measures the impact on the organization<br />
This level addresses the effectiveness of the Ontologging system in helping employees to<br />
better manage the knowledge of the company and making it operate better (for instance<br />
improving the relationships with its customers, increasing its flexibility and its reactiveness).<br />
More concretely, it may provide some assessment of how ontologies contribute to a<br />
better management of the knowledge (because they enable flexibility and also support<br />
the structuring of the knowledge-oriented activities). Finally, this level will also try to assess<br />
the cognitive transformation of the users, and the changes in their work practices: have the beliefs<br />
and the behaviours of the users been radically transformed by the system (or have the<br />
users radically learned)?<br />
3.3.2.6 Level 5: Does the approach/system perform economically well?<br />
This level is concerned with the evaluation of Ontologging in the perspective of exploitation<br />
in a competitive environment. In particular, this level will analyse its unique features<br />
as well as its weaknesses.<br />
The following table presents some illustrations of the evaluation of knowledge processes along<br />
the different levels.<br />
Level 0: Technical performance<br />
• Structuring role: performance of the ontology authoring tools.<br />
• Capitalise knowledge: performance of the DUI (publisher mode) and of the MS Word extensions.<br />
• Retrieve knowledge: performance of the DUI (user mode).<br />
• Share knowledge: performance of the user profile editor; scalability of the multi-ontology mapping.<br />
Level 1: Users’ acceptance<br />
• Structuring role: ease of authoring an ontology (creating new concepts & properties).<br />
• Capitalise knowledge: ergonomics of entering and annotating new documents in the DUI and the MS Word extensions.<br />
• Retrieve knowledge: general ergonomics of the DUI.<br />
• Share knowledge: ergonomics of the user profile editor.<br />
Level 2: Functionalities<br />
• Structuring role: features of the ontology editor.<br />
• Capitalise knowledge: features & limitations of the DUI publishing function and of the MS Word extensions.<br />
• Retrieve knowledge: capability to display and manipulate complex information.<br />
• Share knowledge: scalability of the multi-ontology mapping; evolutivity of the User Ontology in the user profile editor.<br />
Level 3: Transfer<br />
• Structuring role: level of adoption of ontologies.<br />
• Capitalise knowledge: level of capitalization of knowledge.<br />
• Retrieve knowledge: level of reuse of existing knowledge.<br />
• Share knowledge: level of knowledge sharing amongst people.<br />
Level 4: Impact<br />
• Structuring role: does the structuring of the knowledge (via ontologies) bring better performance to the organization?<br />
• Capitalise knowledge: does the capitalisation of the knowledge assets provide benefits to the organization?<br />
• Retrieve knowledge: is knowledge in the organization less often reinvented, leading to a more efficient organization?<br />
• Share knowledge: is the acceleration of the diffusion of knowledge inside the organization translated into tangible benefits for the organization (reactivity, flexibility, etc.)?<br />
Level 5: Economic performance<br />
• Structuring role: what are the efforts / benefits of structuring the knowledge?<br />
• Capitalise knowledge: what is the cost of capitalizing the knowledge?<br />
• Retrieve knowledge: what are the efforts / benefits of searching the knowledge (training, support, etc.)?<br />
• Share knowledge: what are the efforts / benefits of facilitating the knowledge sharing (facilitators, incentives, etc.)?<br />
Table 1: Examples of level evaluation according to the knowledge processes<br />
3.3.3 The different phases of the evaluation<br />
As indicated in the first section, the evaluation will fulfil three objectives:<br />
• Provide some first feedback to the design process (formative evaluation).<br />
• Assess the functionalities of what has been designed (summative).<br />
• Assess the substantive value of what has been designed (substantive evaluation).<br />
In the first case, formative evaluation will help to readjust the plans and the priority of the<br />
Ontologging project, to better understand the users’ needs, and to refine the definition of the<br />
different Ontologging components, putting more emphasis on what is considered to be the<br />
most important.<br />
This phase will first be based on setting up an early version of the prototype at the user site,<br />
and asking the users for their feedback. This phase will also consist in evaluating how the<br />
different components interoperate with one another, and in identifying the functionalities that<br />
may be missing, or that would need more attention.<br />
In the second case, summative evaluation will help to assess what Ontologging has produced,<br />
and what functionalities are available.<br />
This phase will consist in the evaluation of the main Ontologging components at the users’<br />
site, and will be achieved with a functioning version of the prototype.<br />
In the last case, the substantive value evaluation will help to understand the areas of<br />
knowledge management in which the system can contribute the most, and where the real<br />
innovation (incremental or radical) in the Ontologging system lies.<br />
This phase will consist in the analysis of the different components of Ontologging from<br />
a usability perspective (what is really usable for the organization). This phase will be<br />
achieved using the results of a consolidated version of the prototype.<br />
3.3.4 A pre-selection of the elements to be evaluated<br />
Let’s now define, for each of the sub-goals that we want Ontologging to achieve, some<br />
elements and criteria to evaluate.<br />
3.3.4.1 Goal 1: Ontologging for helping to structure the work in the organization<br />
The structuring role of Ontologging originates from its capability to support the definition of<br />
the shared ontologies that specify the semantics of the different concepts and that are used<br />
across the whole organization to communicate.<br />
Several elements can be taken into account when evaluating the capability of Ontologging to<br />
support this structuring role.<br />
Technical view<br />
The first element is technical and instrumental, and is related to the capability of the<br />
provided ontology authoring tools to support the definition of the different<br />
ontologies that need to be represented. The evaluation of this element mainly consists in the<br />
evaluation of the technical components that are used to author the ontologies (these<br />
components include the Ontology Editor, and the Ontology Knowledge Evaluator for its<br />
diagnostic functions). The evaluation of technical tools represents a relatively well-known<br />
territory, which consists in assessing several dimensions such as: the reliability of the system<br />
and its scalability, the richness of the functionalities, the ergonomics, etc.<br />
Usability view<br />
A second element to be considered is related to the ability of the ontology approach to<br />
effectively model the concepts of a domain in a complete way, and to take<br />
evolution into account. In other words, are the theories of knowledge representation rich and powerful<br />
enough to represent the concepts that need to be represented? A subsidiary question is related<br />
to the difficulty of effectively modelling these concepts: what level of competency is required<br />
from an author to model and maintain an ontology (does he need to be a rocket scientist, or is<br />
common sense enough)? Answering these questions may consist in setting up an<br />
experiment in modelling the semantics of a particular domain (for instance the tendering<br />
process), and measuring the effort required to arrive at the definition of an ontology that satisfies<br />
the modelling needs of this domain.<br />
A final question is related to the amount of effort that is required to create and then to<br />
maintain these ontologies. The experience of using ontologies will help to answer this<br />
question, and in particular the key question: will the ontologies stabilize rapidly enough into<br />
shared ontologies that will require very minimal maintenance, or, on the contrary, will<br />
the ontologies have to enter into a process of constant evolution, in which the concepts of<br />
the ontologies will have to be modified, and new concepts introduced to take<br />
into account the needs of the organization?<br />
Organizational view<br />
The last element is organizational and relates to roles and responsibilities: which structure in the<br />
organization will be responsible and entitled (will have the resources) to author these ontologies?<br />
How to guarantee that these ontologies will effectively be adopted by all the parts of the<br />
organization? How should conflicts on the definition of the semantics be managed? The<br />
answers to these questions will have to be addressed as part of the deployment process (related<br />
to the non-technical dimensions of this deployment).<br />
3.3.4.2 Goal 2: Ontologging for supporting knowledge capitalisation<br />
The knowledge capitalization role of Ontologging originates from the formalization<br />
of the knowledge of the organization. It has to be remembered that this knowledge can include<br />
both very concrete knowledge elements such as documents, but also less tangible elements<br />
such as process knowledge (present in cases, or experiences shared on a bulletin board) and<br />
expertise (via the explicit representation of people’s competencies).<br />
Technical view
The technical elements to be considered here include all the software components that can be<br />
used to formalize the knowledge. These elements have already been identified and include:<br />
the DUI (Distributed User Interface) and the Office Connector, used for representing the<br />
most concrete knowledge elements; the user profile editor, used by people to publish<br />
their competencies (and other personal information); and the OKE, for providing a diagnostic of the<br />
knowledge entered, and helping the restructuring of this knowledge.<br />
As previously indicated, the evaluation of these software components consists in the usual<br />
assessment, with more importance given to productivity and ergonomics (since the<br />
capitalisation of knowledge should not become too heavy a burden for the contributors).<br />
Usability view<br />
The second element to consider is related to the effort necessary for capitalizing the knowledge<br />
(annotating knowledge takes time), and its difficulty (bad knowledge annotation may result in<br />
a system that generates inaccurate results and noise). Another question has to do with the<br />
breadth of this capitalization: will this capitalization concern only the most critical<br />
knowledge elements of the organization, or will it concern all the knowledge of the<br />
organization?<br />
The answer to this question will also depend on the productivity of the knowledge<br />
capitalization tools.<br />
Organizational view<br />
The last element is organizational and relates to roles and responsibilities: which structure in the<br />
organization will be responsible and entitled (will have the resources) to capitalize the knowledge?<br />
Will all the people using the system have an authoring role as part of their activity, or will<br />
this knowledge capitalisation role be dedicated to a limited number of people? In the<br />
former case, what incentive structure will have to be put in place to encourage this<br />
knowledge capitalization, or can we expect that people will spontaneously integrate this task<br />
as part of their activity?<br />
3.3.4.3 Goal 3: Ontologging for supporting knowledge retrieval (searching and browsing)<br />
The knowledge searching & browsing view relates to the effective retrieval of the knowledge<br />
by the people of the organization, in order to better do their work.<br />
Technical view<br />
The technical elements to be considered here include all the software components that can be<br />
used to browse, search and retrieve the knowledge. These elements have already been<br />
identified and include: the DUI (Distributed User Interface), for its searching and<br />
browsing functionalities, and the user profile editor, for viewing the different people profiles.<br />
Since these tools represent the main user interface of the system for the users (both experts<br />
and non-experts) and will finally determine the perceived and substantive value delivered by<br />
the system to the organization, a particular emphasis will be put on the ergonomics of the system<br />
and its power (expressiveness of the search engine, accuracy of the results).<br />
Usability view<br />
The second element to consider is related to the effective use of the system to support the<br />
work of the knowledge worker.<br />
Will the Ontologging system become a central tool supporting the work of the knowledge worker<br />
(used intensively in many of their activities), or will it only be peripheral, and be used in some<br />
limited and not very critical cases?<br />
The answer to this question will obviously depend a lot on the ergonomics of the tools and<br />
their power. However, it also depends on more profound considerations related to the<br />
effort put into knowledge capitalization, and knowledge obsolescence. For instance, it might<br />
be desirable not to try to capitalize all the knowledge of the organization (knowledge that can<br />
become too rapidly obsolete to justify the effort), but only the most critical elements (important<br />
documents, process knowledge, etc.), and to put more effort into the mechanisms (such as the<br />
description of people’s expertise) that facilitate the circulation of the tacit knowledge of the<br />
organization.<br />
Organizational view<br />
The organizational view in this case relates to the importance the INDRA organization will<br />
see in using the system across the organization. The main browsing and searching tools are<br />
intuitive enough to be used very easily by any user of the organization, once the questions of<br />
confidentiality are solved (these have to be decided at the organization level, and can be very<br />
critical for INDRA because of its involvement in the defence sector). Other uses could also<br />
be considered, such as the possibility of opening the system (searching and browsing only) to<br />
some close partners via an intranet.<br />
Technical limitations (the DUI is a special client that needs to be installed, and not a web<br />
application) may also play a role in giving the INDRA organization the flexibility to deploy<br />
the Ontologging application.<br />
3.3.4.4 Goal 4: Ontologging for facilitating knowledge sharing and exchange<br />
Technical view<br />
The technical elements to be considered here include all the software components that<br />
facilitate the exchange of knowledge in the organization as well as the sharing of knowledge<br />
among people. These elements include: the Ontology Editor (including its agent coordination<br />
component), for its contribution to the creation of a common language used across the<br />
organization; the ontology mapping tool, for its contribution to better interoperability of the<br />
communication between departments (connecting ontologies that have been designed<br />
separately); and the user profile editor, for its support of the management of tacit knowledge<br />
(facilitating people-to-people communication by making people’s expertise visible and<br />
building trust).
Usability view<br />
The second element to consider relates to the effectiveness of Ontologging in supporting<br />
knowledge exchange between departments, and knowledge sharing amongst people. To what<br />
extent will a more explicit specification of the semantics of the concepts manipulated by the<br />
organization really contribute to the communication process? Will it reduce the risk of<br />
ambiguity? Will it increase trust (because people understand each other better)? Will it<br />
reduce coordination time (because people share the same understanding of the concepts, and<br />
do not need to invest as much time in learning how to communicate with each other)? More<br />
concretely, what will be the issues related to the mapping of different ontologies in terms of<br />
cost and difficulty, and will it generate a really usable solution (for instance, if the mapping<br />
leads to inconsistencies)?<br />
Organizational view<br />
The organizational view relates to the importance the INDRA organization attaches to the<br />
Ontologging system’s contribution to the communication and interoperation between the<br />
different departments, and to the development of a shared culture. Are there any roles,<br />
resources, and responsibilities to be assigned or dedicated to supporting this role? On the<br />
knowledge sharing side, will the availability of tools guarantee the willingness of people to<br />
share their knowledge (what’s in it for me? in particular concerning the sales forces)? Will<br />
some incentive system need to be set up? Will the effective working of the approach require<br />
the commitment of the top management?<br />
This pre-selection of the elements to be evaluated will be refined in the next chapter in the<br />
elaboration of the evaluation plan, which will in particular take into account the effort that<br />
can effectively be devoted to the evaluation (evaluating all the pre-selected elements might<br />
not be possible).
4 Gathering the Data<br />
In this section, we present the orientation chosen for this evaluation (qualitative rather than<br />
quantitative), the selection and elaboration of the instruments for capturing the data, as well<br />
as a description of the data-gathering process that was conducted.<br />
4.1 Previous work<br />
We do not claim that the use of ontologies in the context of knowledge management is<br />
totally new, or that nothing has been done in the field of evaluating the performance of these<br />
systems. However, we believe that previous work does not address (or at best only partially<br />
covers) some of the critical dimensions of the contribution of ontologies to the design of<br />
next-generation knowledge management systems, in particular those related to new usages.<br />
In the next chapters, we examine different approaches that have been used to evaluate<br />
ontology-based knowledge management systems, and that we could consider using for our<br />
evaluation.<br />
4.1.1 The IR (Information Retrieval) approach<br />
Much of the previous research work on evaluation originates from a database conceptual<br />
background, and focuses on evaluating the performance of information retrieval (IR) from a<br />
search perspective. In this context, one of the principal methods used for the evaluation is the<br />
“precision and recall” method (Raghavan, Jung, and Bollmann, 1989), which consists in<br />
measuring the precision of a search (the percentage of relevant documents among the<br />
retrieved ones) and its recall (the percentage of retrieved relevant documents among all<br />
existing relevant documents).<br />
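As an illustrative sketch (not part of the deliverable itself; the document identifiers are hypothetical), the two measures can be computed over sets of document identifiers:

```python
# Hypothetical sketch of the "precision and recall" measures described
# above, computed over sets of document identifiers.

def precision_recall(retrieved, relevant):
    """Precision: share of retrieved documents that are relevant.
    Recall: share of all existing relevant documents that were retrieved."""
    retrieved, relevant = set(retrieved), set(relevant)
    hits = retrieved & relevant
    precision = len(hits) / len(retrieved) if retrieved else 0.0
    recall = len(hits) / len(relevant) if relevant else 0.0
    return precision, recall

# A search returns 4 documents, 3 of them relevant, while 6 relevant
# documents exist in the repository: precision = 0.75, recall = 0.5.
p, r = precision_recall({"d1", "d2", "d3", "d7"},
                        {"d1", "d2", "d3", "d4", "d5", "d6"})
```

A precise search with low recall returns mostly relevant documents but misses many; a high-recall search with low precision drowns the relevant documents in noise.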
This method has been applied in several cases to the evaluation of ontology-based systems<br />
(Ehrig, 2002), sometimes to demonstrate how ontologies can contribute to the improvement<br />
of the search process (Aitken and Reid, 2000; Clark et al., 2000).<br />
Investigating how an ontology-based knowledge management system compares to a<br />
traditional document-centred (IR-focused) one presents, however, little interest. Indeed,<br />
document-centred knowledge management systems have largely failed to get widely adopted<br />
by companies (with the exception of some niches such as competitive intelligence), probably<br />
because the functions they propose and the usages they support do not sufficiently fulfil<br />
companies’ and people’s needs. Besides, trying to make an innovation only to improve the<br />
support of an existing practice is doomed to fail. Innovations succeed because of their<br />
capacity to invent and to support new practices. The example of the object-oriented database<br />
technologies that tried in the past to mimic the approach of the older relational database<br />
technology (defining a schema that turned out not to be flexible, and inventing a similar,<br />
even if more powerful, query language) testifies that this is not a good strategy.<br />
4.1.2 The Ontology-based approaches<br />
Ontology-based approaches clearly follow another direction, grounded in the theory of<br />
semantic networks, which we believe is more in line with the “grand vision” of the semantic<br />
web (Berners-Lee, Hendler, and Lassila, 2001). According to this perspective, the underlying<br />
idea consists in the explicit definition of the concepts of a domain, the explicit structuring of<br />
the knowledge and information via semantic networks, and the navigation through these<br />
networks. In other words, knowledge is identified principally not by searching for terms that<br />
have been previously clearly defined (the user knows very well what he/she is looking for),<br />
but by the serendipitous discovery of this knowledge (the user does not have a precise idea of<br />
what he/she is looking for, and the navigation process helps him/her to concretise it).<br />
Note: we can consider these two modes of searching to be complementary, the former being<br />
more appropriate in a context of relatively well-specified knowledge processes, whereas<br />
serendipitous discovery is better adapted to supporting fuzzier and more creative knowledge<br />
processes (increasingly important in the learning organization).<br />
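To make the contrast concrete, here is a minimal, hypothetical sketch of navigation through a semantic network of connected knowledge items, as opposed to keyword search over documents; the data model and item names are purely illustrative, not the actual Ontologging design:

```python
# Hypothetical semantic network: each knowledge item (topic, project,
# person, document) points to the items it is connected to.
links = {
    "topic:ontologies": ["project:tender-2003", "person:expert-a"],
    "project:tender-2003": ["person:expert-a", "doc:proposal-12"],
    "person:expert-a": ["topic:semantic-web"],
    "topic:semantic-web": [],
    "doc:proposal-12": [],
}

def navigate(start, depth):
    """Serendipitous discovery: collect every item reachable from a
    starting concept within `depth` navigation steps."""
    seen, frontier = {start}, [start]
    for _ in range(depth):
        frontier = [n for item in frontier
                    for n in links.get(item, []) if n not in seen]
        seen.update(frontier)
    return seen

# Starting from a topic, two navigation steps already surface a related
# document and a neighbouring topic the user never explicitly searched for.
found = navigate("topic:ontologies", depth=2)
```

The user does not formulate a query at all: following links from one concept progressively concretises what he/she is looking for, which is exactly the navigation-driven mode described above.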
Actually, ontology-based systems include aspects that go much beyond knowledge retrieval,<br />
and in particular include elements such as ontology building and evolution, content<br />
population and maintenance, and new usages (peer-to-peer knowledge exchange in<br />
communities, knowledge evolution and usage monitoring, etc.).<br />
Two different perspectives can be distinguished for the evaluation of a semantic web<br />
approach:<br />
(1) An approach aiming at evaluating the performance according to an operational<br />
perspective (Angele and Sure, 2002) and the capacity of the tools to support the<br />
different phase of the design and the operation of a system;<br />
(2) Another approach aiming at evaluating the usage and business value of ontology-based<br />
systems (OntoWeb, 2002).<br />
4.1.2.1 Evaluation according to the performance perspective<br />
The first approach addresses the efficiency dimension (covering levels 0 to 2 of our adapted<br />
Kirkpatrick model) and consists in evaluating the usability of ontology-based tools for<br />
supporting the different phases of a knowledge management system mentioned previously<br />
(ontology building, content population, knowledge retrieval, etc.).
For instance, (Missikoff, Navigli and Velardi, 2002) reports some experiences in domain<br />
ontology building, and in particular the use of tools (OntoLearn) for speeding up the<br />
elaboration of such an ontology and its content population. These tools provide support for<br />
the automatic extraction of concepts, for the validation of the extracted concepts, and for<br />
ontology content population. The assessment of these tools used as criteria some proxies for<br />
the quality of the ontology (coverage, consensus and accessibility) and for the quality of the<br />
content (measured via precision and recall).<br />
In a similar way, (Guarino and Welty, 2002) proposes, with OntoClean, a methodology for<br />
designing and evaluating “good” ontologies in a more systematic manner than current<br />
practice.<br />
Whilst these approaches are useful, and important to really assess the quality of a technical<br />
system, they do not guarantee the value of this system from an organizational perspective. In<br />
particular, they fail to recognize (or do not justify) the profound factors that are important to<br />
the real adoption and success of a system in an operational environment. Besides, they can,<br />
in particular in an early stage of research (pursuing radical innovation), distract attention<br />
from the factors that really matter. We refer here to the metaphor of “the person looking for<br />
his lost keys under the street lamp” (although the keys were most likely lost far from the<br />
street light, the person laments that he has little chance of finding the keys in the dark, so<br />
why waste time looking there). Indeed, the availability of instruments introduces some<br />
cognitive bias, and in particular tends to reinforce research in areas that have already been<br />
explored, to the detriment of more peripheral areas that can potentially lead to more radical<br />
innovation, and for which the instruments have not been invented yet. For instance, in the<br />
case of an ontology-based system, such an evaluation may acknowledge the efficiency of a<br />
search mechanism but fail to recognise that the value of this mechanism is limited (people<br />
mainly use navigation). In a similar way, this evaluation may acknowledge the capability of<br />
the system to manage an important quantity of documents (given a sophisticated model to<br />
capitalize documents) but fail to recognise that in a real environment it may be undesirable<br />
to formalize documents extensively (because of problems of obsolescence, and because of<br />
problems of diminishing (or even negative) returns related to the generation of noise). For<br />
instance, (Buckingham Shum, 1997) advocates avoiding premature formalization, arguing<br />
that only stable and sanctioned knowledge should be formalised. This confirms the personal<br />
experience of the author of this Ontologging evaluation report that documents generally<br />
represent chunks of knowledge that become outdated very rapidly, and therefore should not<br />
be systematically formalized. By contrast, “knowledge items” such as projects, people and<br />
topics have a much longer lifespan (experts usually do not easily change their domain of<br />
expertise radically, projects generally have a stronger identity than documents and can be<br />
recalled a long time after their completion, topics do not really change but mainly get richer)<br />
and are better candidates for extensive capitalization.<br />
To conclude, we believe that this category of evaluation is particularly meaningful in a<br />
second stage, once the ideas underlying an approach have been clarified and validated. Used<br />
too early, these methods bring the risk of distracting attention from the investigation of the<br />
radically new knowledge management practices that ontology-based systems support. To be<br />
frank, these methods also usually require a considerable amount of effort and resources that<br />
may, in an initial stage, be better allocated to more qualitative modes of evaluation.<br />
4.1.2.2 Evaluating according to the usage, perception and business value perspective<br />
The second approach is more oriented towards the usage, cognitive perception and business<br />
value of the systems and the tools, and addresses the effectiveness of the approach (covering<br />
levels 3, 4 and 5 of our adapted Kirkpatrick model). The objective of the evaluation in this<br />
case consists in assessing to what extent the new approach supports new practices that are<br />
more effective at the individual level (for instance, by supporting higher-level cognitive<br />
processes) and at the organizational level (for instance, by enabling the organisation to be<br />
more flexible, to learn more continuously, to better exploit previous experiences, and to be<br />
more creative).<br />
Practically, the nature of the evaluation is more qualitative, and relies on instruments such as<br />
questionnaires or experiments via scenarios (Giboin et al., 2002), aiming to elicit needs,<br />
perceptions, and more effective usage patterns that could be better supported by a technical<br />
infrastructure and offered to companies.<br />
4.2 The approach used for evaluating Ontologging<br />
The Ontologging project possesses the following characteristics:<br />
• A relatively new domain, for which little previous work (theories, experience, etc.) is<br />
available.<br />
• The relative immaturity of the technologies (the system is still far from an industrial<br />
product).<br />
• The limited size of the “population” conducting the test (it is indeed difficult to get a large<br />
sample of users for a system that is only at a prototypical stage).<br />
• The desire to evaluate the high-level usage value of the knowledge that is generated (and<br />
in particular, radically new usage patterns).<br />
Besides, it was decided to orient the focus of the evaluation towards the effectiveness of the<br />
approach (what is the substantive value delivered to the users?), rather than the efficiency of<br />
the technical infrastructure that was being developed (are the tools “functioning” well?).<br />
As a consequence, it was decided to use a more qualitative form of evaluation.<br />
Questionnaires were therefore elaborated and their results analysed, and focus groups and<br />
experiments (cognitive walkthroughs) were conducted.<br />
Note:
The use of more quantitative and “rigorous” methods was actually considered at the<br />
beginning of the project, but they were finally discarded, because they would have distracted<br />
attention from evaluating the new usages (the core of the innovation), and would have<br />
required too many resources, which would have had to be taken from the qualitative<br />
evaluation. A later chapter of this document will, however, indicate some directions for<br />
conducting a more quantitative evaluation that could be envisaged in the future (as an initial<br />
stage of a productisation effort).<br />
4.3 The definition of the instruments and the gathering of the data<br />
The Ontologging system has been installed in the three competency centres, and has been<br />
tested by a number of employees belonging to these centres.<br />
Questionnaires were the instruments most extensively employed for capturing the data, since<br />
this method is relatively simple to set up and is particularly well adapted to a qualitative<br />
evaluation. Other methods were also employed to complement this capture: focus groups,<br />
interviews and experiments.<br />
4.3.1 The context of the evaluation<br />
Let us now go into more detail about this evaluation by presenting the context in which it<br />
took place.<br />
The evaluation of how the Ontologging system is able to support knowledge processes has<br />
mainly taken place at INDRA, a large (6,000 employees) Spanish information technology<br />
services company, and to a lesser extent at Meta4, a smaller IT company designing<br />
enterprise software products (such as accounting, people management, and knowledge<br />
management software).<br />
In the following part of this document, we will principally focus our attention on the<br />
evaluation that has been conducted at INDRA.<br />
The evaluation of the Ontologging system at INDRA has consisted in testing the system for<br />
supporting three activities:<br />
• Tendering.<br />
This activity consists in the elaboration of proposals answering calls for tenders for new<br />
IT contracts.<br />
• Development.<br />
Business development activities (looking for new projects, etc.).<br />
• Technology watch.<br />
This activity consists in the elaboration of reports on topics/disciplines that are considered<br />
important for the INDRA organization.
These activities are fulfilled by a horizontal structure of the INDRA organization (the<br />
competency centres), whose role is to provide longer-term services to the operational centres<br />
(the production centres and the sales lines) of the INDRA organization.<br />
The competence centres’ mission is to lead the innovation of INDRA’s services and<br />
solutions. These centres provide support to the operational centres (the production centres),<br />
and in particular play the role of interface with the business lines (the commercial forces in<br />
contact with the clients), lead innovation, and capitalize expertise for INDRA. See Annex 2<br />
for a detailed description of the role of the competence centres; Annex 2 also presents an<br />
overview of the tendering process at INDRA (INDRA, 2002).<br />
The profiles of the employees participating in the test are diverse (consultants, software<br />
analysts, system architects), and the employees originate from different domains (security,<br />
ERP, etc.).<br />
Besides, different roles can be distinguished, such as:<br />
• The end-users (focussed on content)<br />
o The occasional users. (Main activity: browse and search the content)<br />
o The traditional users. (Main activity: browse, search & edit)<br />
o The “power” users. (Main role & activity: browse, search, edit & organize)<br />
• The Knowledge Managers (focussed on structure)<br />
o The technical knowledge manager.<br />
Main role: set up and maintain the IT/knowledge infrastructure; activity: set up,<br />
clean, merge, restructure.<br />
o The knowledge management consultant / manager.<br />
Main role: help to structure the work processes in the organization; activity:<br />
define ontologies.<br />
The size of the group of users participating in the evaluation has been decided with two<br />
objectives: (1) collecting a reasonable amount of data, to make the results of the evaluation<br />
useful and significant; (2) making the evaluation feasible (realistic) within the time frame<br />
and with a good level of quality (running the sessions can be considered a relatively “heavy”<br />
operation).<br />
4.3.2 The Questionnaires<br />
A series of questionnaires has been elaborated to collect the data:<br />
• An early questionnaire (given in Spanish), whose aim was to make a very early<br />
assessment of the situation and of the users’ needs.<br />
• A pre-questionnaire, whose aim was to get a good understanding of the users<br />
participating in the test.<br />
• A questionnaire for evaluating the Ontologging knowledge structuring and content<br />
population process.
• A questionnaire for evaluating the usage of the document-centred tools of the<br />
Ontologging system by the final end-users.<br />
• A questionnaire for evaluating the more user-centred tools of the Ontologging system by<br />
the final end-users.<br />
These questionnaires have been distributed both to INDRA and to Meta4 (although not all of<br />
them were distributed to the latter).<br />
The process of elaborating the questionnaires was relatively straightforward; they included<br />
both closed questions (the user has to select a choice) and open questions (the user is asked<br />
to answer in free text). The main difficulty was to address the risks of misunderstanding of<br />
some of the terms used, and of blank answers (questions considered irrelevant by some<br />
users). Experience showed that these problems were not totally avoided, since a few<br />
misunderstandings were identified (such as the definition of the semantics of a knowledge<br />
management tool), and some of the questions requiring some effort from the users (such as<br />
the description of a knowledge management usage scenario) were left unanswered by some<br />
users.<br />
An example of an answer indicating that the definition of knowledge tools was not totally<br />
clear is given below:<br />
“Google may not be KM in pure sense as I understand it, but it definitely solves most of my<br />
problems most of the time”.<br />
4.3.2.1 The early questionnaire (in Spanish)<br />
The objective of this early (Spanish) questionnaire, “Cuestionario de Opinión sobre<br />
Características para Seleccionar Ontologías” (“Opinion questionnaire on characteristics for<br />
selecting ontologies”), was to capture the initial situation and needs of the INDRA<br />
organization. This questionnaire was given at an initial stage of the project, well before the<br />
prototype was designed, and can be considered as contributing to the formative evaluation of<br />
the Ontologging system. The results were used as user-needs input to help define and drive<br />
the design of the system.<br />
The content of this questionnaire was relatively high-level, and principally addressed aspects<br />
related to knowledge representation via ontologies. The content of this questionnaire is<br />
available in Annex 7.<br />
4.3.2.2 The pre-questionnaire<br />
In a first stage, when the prototype was still not fully functional, a pre-questionnaire was<br />
used to collect data related to the end-users’ knowledge and practices of knowledge<br />
management. The objective of this pre-questionnaire was to get a better understanding of the<br />
people participating in the test, and also to help adjust the design of the prototype before its<br />
initial release.
This pre-questionnaire surveyed the knowledge processes of the knowledge workers, the<br />
knowledge management tools used, the perceived needs of the users for knowledge-oriented<br />
processes, and the suggested improvements to current KM tools.<br />
Practically, this questionnaire consisted in a series of questions related to:<br />
• The profile of the participant: title, position and role in the organization (programming,<br />
managing, consulting, marketing, sales, etc.), knowledge activities in which he/she is<br />
involved (writing reports, writing tenders, programming, prospecting clients).<br />
• The participant’s perception of the discipline of knowledge management and of its<br />
current situation (opinion).<br />
• His/her experience and practices in this domain (what knowledge processes were used to<br />
perform the work).<br />
• His/her expectations in this domain (what support he/she would need or like to have).<br />
This questionnaire was distributed to the group of users shortly before the system was<br />
deployed, and 14 questionnaires have been collected.<br />
An extract of the answers to some of the open questions, related to KM definitions and<br />
perceived problems with KM, is given below:<br />
“For me a KM tool is the one that allows me to access the information I need to perform the<br />
tasks associated to my job in a fast and efficient manner”.<br />
“KM should help people benefit from experience not to have to redo things again”.<br />
“Part of the problem is to have a system in which people are willing to contribute.”<br />
KM tools don’t improve work. “That’s because currently it’s necessary to spend to much time<br />
looking for material.”<br />
The content of this questionnaire is available in Annex 5 (“Annex 5: Pre-questionnaire for<br />
the participants in the Ontologging usability test”).<br />
4.3.2.3 Questionnaire 1: Ontology-based approach for the structuring of knowledge<br />
The objective of this questionnaire was to evaluate the capacity of the Ontologging system<br />
(tools and approach) to help structure the knowledge (and in particular to support the<br />
elaboration of the ontology that was to be used to organize the knowledge) and to populate<br />
the content. Practically, this questionnaire tried to identify the main limitations and problems<br />
that occurred during the design of the main ontology, both from a conceptual point of view<br />
(methodology used) and from a technical point of view (how did the tools help to support<br />
this process?).<br />
The content of this questionnaire is available in Annex 6.
4.3.2.4 Questionnaire 2: Ontologging Project Questionnaire<br />
The objective of this questionnaire was to evaluate the main document-centred tools and<br />
approach of the Ontologging system, and in particular the DUI (Distributed User Interface).<br />
The final users were asked:<br />
• To indicate some usage scenarios describing how they had used the tools.<br />
• To provide feedback on the efficiency of the main Ontologging tool (the DUI). Were the<br />
functionalities complete enough, did it have good ergonomics, etc.?<br />
• To provide feedback on the effectiveness of this tool. Did the tool support knowledge<br />
management processes well?<br />
• To describe in more detail the knowledge management processes that were supported.<br />
• To provide a longer-term (3-year) perspective on possible and desired evolutions.<br />
The content of this questionnaire is available in Annex 9.<br />
4.3.2.5 Questionnaire 3: Evaluation of the user modelling processes and of the knowledge<br />
distribution agents<br />
The objective of this questionnaire was to evaluate Ontologging usage along its more<br />
people-centred dimension, and in particular to assess the aspects related to the modelling of<br />
the users, the sharing of knowledge amongst the users, and knowledge distribution via agent<br />
mechanisms. More specifically, this questionnaire collected data concerning the user profile<br />
editor tool and the knowledge distribution agent mechanism. It also addressed the questions<br />
of privacy and knowledge sharing, as well as personalisation issues.<br />
The content of this questionnaire is available in Annex 8.<br />
4.3.3 Focus groups<br />
A series of meetings was organized with the user participants at different stages of the<br />
project.<br />
First, several meetings were organised between INDRA and the development teams in order<br />
to introduce the concepts of the system to the users and raise their interest. These meetings<br />
were also an opportunity to assess the initial situation and to collect the user needs, and<br />
therefore contributed to the formative evaluation of the project. Some of the foci of these<br />
meetings were the identification of current practices, the nature and amount of knowledge<br />
manipulated by the different INDRA competency centres participating in the evaluation, and<br />
some investigation of the issues related to the complexity of the main ontology to be used<br />
for structuring the knowledge.<br />
For instance, for the tender use case (one of the activities used to test the Ontologging<br />
system), the original information system was organized in a way similar to a file system, and<br />
the documents could be searched by year and project/customer. Concerning the orders of<br />
magnitude, this use case could generate around 250 documents per year; person instances<br />
could add up to around 150, and project instances to around 50.<br />
Second, after the Ontologging system had been deployed, the consortium took the opportunity<br />
of different consortium meetings held at INDRA to organize meetings involving<br />
some (and sometimes all) of the partners and the different users. The objective and focus of these<br />
discussions was to collect user feedback, and in particular to identify and understand the<br />
difficulties and problems faced by the users, as well as the different practices that had been<br />
adopted by these users. For instance, these discussions helped to spot a number of<br />
elements that had not really been envisaged at the beginning of the project, such as: (1) a<br />
difficulty of the ontology design process that had been largely underestimated,<br />
resulting in an initial low-quality domain ontology that made content population difficult<br />
and the result not very convincing, and that finally led to the redesign of the ontology and the<br />
migration of the initial content; (2) a usage pattern that was clearly oriented towards<br />
navigation via a variety of objects (projects, people, topics, etc.) connected to one another,<br />
rather than towards document search using the (ontology-based) search engine provided in the system.<br />
These results, which should not come as a surprise today given the better<br />
understanding that we now have of semantic web concepts, originate from the fact that<br />
the perspective of several of the designers of the Ontologging system was biased towards<br />
document-centred systems (where most of their experience lay) or towards a very<br />
technical and tool-oriented view, rather than the knowledge engineering orientation that would<br />
be more appropriate when designing systems including an important knowledge (content)<br />
dimension.<br />
4.3.4 Interviews<br />
As for the focus groups, several interviews were conducted at different periods of the project,<br />
principally via phone calls.<br />
Interviews were particularly used during the phase of the reengineering of the INDRA<br />
domain ontology and of the content migration. This phase (and the lessons learned) was<br />
indeed particularly rich in insights related to some of the most critical issues of ontology-based<br />
systems, such as: (1) the quality of the ontology: the reengineering had to address<br />
problems of the initial ontology, such as a set of concepts that was incomplete and not<br />
consistent with the “implicit” ontology used by the INDRA organization; (2) the content<br />
population and migration: the value of some of the knowledge engineering tools such as ORA<br />
(Ontology Reconciliation Agents) could be acknowledged, or in some cases questioned, as
in the case of the agent mechanism for Collaborative Ontology Authoring or the Ontology<br />
Knowledge Evaluator (OKE), which still appear more than ever as potentially valuable, but for<br />
which easier ways of operationalising them would be desirable; (3) the usage patterns and<br />
expectations of an ontology system: some usage feedback was available about which kind of<br />
knowledge should be present in the system (users were expecting a rich variety of high-quality<br />
content, but limited in quantity).<br />
4.3.5 Experiments, via Scenarios<br />
Finally, a set of usage scenarios was identified (from current practices), further elaborated<br />
and tested to assess the capability of the Ontologging system to effectively support and<br />
improve some of the knowledge processes, and also to understand the usage patterns of a<br />
next-generation ontology-based knowledge management system.<br />
4.4 Some directions towards a more quantitative (and rigorous?) evaluation<br />
4.4.1 The reasons for the qualitative evaluation<br />
As indicated previously, the Ontologging project favoured the choice of a more<br />
qualitative evaluation over a more quantitative one. This decision was actually not<br />
taken initially (a quantitative evaluation was even considered), but emerged later, after<br />
observing the usage patterns of the end users. Indeed, contrary to what was foreseen at the<br />
beginning of the project, the users did not see the Ontologging system as a more<br />
powerful document management system that would allow a large number of documents to be<br />
searched more effectively, but rather as a system able to capture the complex<br />
relationships between a relatively small number of a variety of objects (projects, people,<br />
topics, and documents) intervening in knowledge management processes, and giving the<br />
user the possibility to navigate this network of semantic relations. In this perspective, a<br />
quantitative evaluation aiming at assessing the capability (quality) and the performance of the<br />
system at managing (manipulating, searching, retrieving, etc.) huge amounts of data presented<br />
little interest, since it only covered usages (information retrieval and document<br />
orientation) that were not central to Ontologging.<br />
Does this mean that quantitative evaluation is meaningless?<br />
The answer is definitely no: as indicated previously, a more quantitative evaluation should be<br />
useful at a later stage, once the different knowledge processes have been better understood<br />
and a more precise evaluation is needed to assess how well the system is able to support<br />
these processes.<br />
4.4.2 Some directions for a quantitative evaluation
Concerning the quantitative evaluation of aspects that would be more semantic web oriented<br />
(semantic domain definition, semantic knowledge capture, and semantic navigation), much<br />
less previous experience is available (than for evaluation according to an Information Retrieval<br />
perspective).<br />
We can first indicate some work in this direction related to semantic disambiguation that was<br />
done by (Missikoff, Navigli and Velardi, 2002), and which measured the performance of<br />
different disambiguating heuristics, and more generally all the work on Ontology building,<br />
and ontology learning and population that has developed over recent years. However, the<br />
perspective of these approaches remains very technical, and is quite far from the more semantic<br />
web and cognitive vision that was adopted by the Ontologging users and that we would like<br />
to promote in this project.<br />
A probably better perspective to explore is the domain of Organizational Memory research<br />
(Abecker et al. 2003), which is more in line with higher-level knowledge management<br />
concepts, but for which evaluation would still need more elaboration (see<br />
(Weinberger, Teeni and Frank 2003) for some work in this direction, for instance in<br />
evaluating the completeness of a manual knowledge population).<br />
Finally, an even more promising (though also more far-fetched) approach is that of knowledge<br />
management emphasizing cognitive and social factors (Thomas, Kellogg, and Erickson,<br />
2001). Along this line is all the work related to social translucence, which relies on tools<br />
quantitatively monitoring the knowledge activity of a whole community. Obviously, the<br />
monitoring of activity via the usage log represents a source of quantitative data that could<br />
easily be exploited by a quantitative evaluation (it is actually exploited by the Ontologging<br />
OKE), in particular concerning usage of the system beyond mere retrieval.<br />
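Such usage-log monitoring lends itself directly to simple quantitative aggregation. The following sketch is purely illustrative (the log format, user names and action names are hypothetical, not the actual Ontologging log schema):<br />

```python
from collections import Counter

# Hypothetical usage-log entries: (user, action, knowledge_item).
# The real Ontologging log schema may differ; this only illustrates
# how such a log yields quantitative indicators.
log = [
    ("jose", "navigate", "project_7"),
    ("jose", "search", "radar"),
    ("maria", "navigate", "proposal_42"),
    ("maria", "navigate", "project_7"),
]

def action_distribution(log):
    """Count each kind of action, e.g. to compare navigation vs. search."""
    return Counter(action for _, action, _ in log)

def item_access_counts(log):
    """How often each knowledge item is reached by navigation."""
    return Counter(item for _, action, item in log if action == "navigate")

print(action_distribution(log))   # navigation dominates search here
print(item_access_counts(log))
```

A distribution like this would, for instance, make the observed preference for navigation over search directly measurable.<br />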
4.4.3 Some prospective operationalization of “SMART” quantitative evaluation<br />
Let’s now indicate some directions for how we could build a quantitative evaluation for the<br />
Ontologging project. The illustrations given below try to follow the SMART<br />
principle and be Specific, Measurable, Attainable, Relevant and Time-bound.<br />
4.4.3.1 Ontology building<br />
The first quantitative evaluation could relate to Ontology building, and could assess the<br />
capacity of the Ontology approach and tools to support the elaboration of the domain<br />
Ontology.<br />
Different quantitative indicators could be used here:<br />
• The time to design or redesign the Ontology. How many hours, days, or weeks would be<br />
necessary to design the main Ontology, or some sub-ontology?<br />
• The complexity of the designed ontology. What is the level of complexity of the<br />
Ontology being elaborated (number of concepts, number of properties associated with each<br />
concept, and level of nesting, i.e. how deep the inheritance hierarchy is)?<br />
• Quality of the resulting Ontology (redundancy, ambiguity, etc.).
• Completeness. For instance, when trying to capitalize a given set of relevant documents,<br />
or other knowledge objects, what is the percentage of time that a concept or a relationship<br />
appears to be missing?<br />
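Most of these indicators could be computed automatically from the ontology itself. The sketch below assumes a hypothetical in-memory representation of an ontology (a parent link and a property list per concept), not the actual Ontologging data model:<br />

```python
# Hypothetical ontology: each concept maps to its parent concept
# (None for the root) and its list of directly attached properties.
ontology = {
    "Thing":    {"parent": None,     "properties": []},
    "People":   {"parent": "Thing",  "properties": ["name", "email"]},
    "Expert":   {"parent": "People", "properties": ["domain"]},
    "Project":  {"parent": "Thing",  "properties": ["title", "duration"]},
    "Document": {"parent": "Thing",  "properties": ["title", "year"]},
}

def concept_count(onto):
    return len(onto)

def avg_properties(onto):
    """Average number of properties directly attached to a concept."""
    return sum(len(c["properties"]) for c in onto.values()) / len(onto)

def depth(onto, concept):
    """Depth of a concept in the inheritance hierarchy (root = 0)."""
    d = 0
    while onto[concept]["parent"] is not None:
        concept = onto[concept]["parent"]
        d += 1
    return d

def max_depth(onto):
    """How deep the inheritance hierarchy is."""
    return max(depth(onto, c) for c in onto)

print(concept_count(ontology))  # 5 concepts
print(max_depth(ontology))      # depth 2: Thing -> People -> Expert
```

Tracking such figures across ontology versions would make the redesign effort described earlier measurable.<br />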
Example of a scenario of a quantitative evaluation:<br />
Evaluate (with the criteria that have been indicated) the design of the sub-Ontology for the<br />
military sector at Indra. This Ontology must be usable for elaborating proposals<br />
ranging from 500,000 Euros to 5 million Euros. The size of the existing data to be captured in<br />
this sector is in the order of magnitude of: 250 documents, 400 people, 80 projects, and 120<br />
topics. The first version of this Ontology will have to be available within a maximum of two<br />
and a half months; the definitive version will have to be available within 8 months.<br />
4.4.3.2 Ontology content population<br />
The second quantitative evaluation could relate to Ontology content population, and could<br />
assess the effort necessary to populate the content of an Ontology with the Ontologging tools.<br />
Different quantitative indicators could be used here:<br />
• The time to populate the content of the Ontology. How many hours, days, or weeks would<br />
be necessary to populate the main Ontology, or some sub-ontology?<br />
• The richness of this ontology population. Number of instances, average number of<br />
relationships between the instances and more generally complexity and nature of the<br />
resulting network (according to graph theory).<br />
• Quality of the resulting Ontology population (noise, navigability, etc.).<br />
• Completeness. Percentage of knowledge considered as relevant that has been captured.<br />
• Level of knowledge sharing in the organization. For instance, the average number and<br />
nature of the distribution and sharing of documents and other knowledge items across the<br />
population of knowledge workers.<br />
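The richness indicators above can be read off the instance graph. A minimal sketch, with hypothetical instance and relation names (the actual Ontologging instances obviously differ):<br />

```python
# Hypothetical populated ontology: instances and the semantic
# relations between them, viewed as a graph.
instances = ["proposal_42", "project_7", "jose", "radar_topic"]
relations = [
    ("proposal_42", "writtenBy", "jose"),
    ("proposal_42", "about", "radar_topic"),
    ("project_7", "derivedFrom", "proposal_42"),
    ("jose", "worksOn", "project_7"),
]

def avg_relations_per_instance(instances, relations):
    """Average degree: each relation touches two instances."""
    return 2 * len(relations) / len(instances)

def isolated_instances(instances, relations):
    """Instances with no relation at all: a sign of poor population."""
    connected = {x for s, _, o in relations for x in (s, o)}
    return [i for i in instances if i not in connected]

print(avg_relations_per_instance(instances, relations))  # 2.0
print(isolated_instances(instances, relations))          # []
```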
Example of a scenario of a quantitative evaluation:<br />
Evaluate (with the criteria that have been indicated) the population of the sub-Ontology for<br />
the military sector at Indra that has been described in the previous phase.<br />
Note: some shorter evaluation experience tests could also be considered here such as:<br />
Time necessary to capitalize a set of 10 documents. Time necessary to model 5 people<br />
instances. Time necessary to model 3 projects.<br />
4.4.3.3 Usage
The objective of the usage quantitative evaluation is to assess the capability of the Ontologging<br />
system to support knowledge-related activities, and in particular to improve the productivity<br />
of the knowledge worker (time to complete a task, quality of the work).<br />
Different quantitative indicators could be used here:<br />
• The time to identify the knowledge items (project, people, documents, etc.) in the initial<br />
phase of a knowledge management task (for instance, the elaboration of a new tender).<br />
• Level of quality of the retrieved information (precision and recall).<br />
• Level of cognitive load (could be connected to the precision).<br />
• Level of use of the different knowledge items (statistical distribution in terms of level of<br />
access to the different items).<br />
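Precision and recall, mentioned in the second indicator, are the standard information-retrieval measures. A minimal sketch over hypothetical document identifiers:<br />

```python
def precision_recall(retrieved, relevant):
    """precision = |retrieved & relevant| / |retrieved|;
    recall = |retrieved & relevant| / |relevant|."""
    retrieved, relevant = set(retrieved), set(relevant)
    hits = len(retrieved & relevant)
    precision = hits / len(retrieved) if retrieved else 0.0
    recall = hits / len(relevant) if relevant else 0.0
    return precision, recall

# Hypothetical search for documents relevant to a new tender:
retrieved = ["d1", "d2", "d3", "d4"]  # what the system returned
relevant  = ["d1", "d3", "d7"]        # what was actually relevant
p, r = precision_recall(retrieved, relevant)
print(p, r)  # precision 0.5, recall 2/3
```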
Example of a scenario of a quantitative evaluation:<br />
Evaluate (with the criteria that have been indicated) the elaboration of the answer to a set of<br />
tenders in the military sector. An idea could also consist in comparing the effort needed to<br />
answer a tender with and without the Ontologging tools. This category of experiment<br />
may however not be Attainable (or realistic) at Indra for human and deontological<br />
reasons.<br />
4.4.4 Concluding remarks on the quantitative evaluation<br />
As we have seen, some level of quantitative evaluation would have been possible in this<br />
project. However, we can observe that it is more difficult with this category of evaluation to<br />
capture aspects that are difficult to formalize, such as qualitative practices (for instance<br />
the use of navigation instead of search) or the level of knowledge sharing, which nevertheless<br />
appear to be key elements in this project. If we had done so, the consequence would probably<br />
have been an over-emphasis on, and some bias towards, knowledge retrieval methods, and the<br />
overlooking of the elements that we consider the most important in this project (in particular a<br />
semantic web orientation).<br />
In the longer term, however, and as a further step, we consider that a more quantitative<br />
evaluation would be very desirable (and actually almost mandatory in order to validate<br />
“scientifically” the knowledge generated), in particular if we manage to define some<br />
quantitative indicators able to measure realistically the most important elements of the<br />
semantic web “vision”.
5 Results & Analysis<br />
In this analysis, we are going to distinguish and evaluate the different aspects and phases that<br />
correspond to the full cycle of setting up and using an ontology-based knowledge<br />
management solution.<br />
These aspects and phases include:<br />
• The evaluation of the initial situation (to intervene in the formative evaluation)<br />
• The elaboration of the structure (ontology building)<br />
• The population of the content<br />
• The usage<br />
• The overall result<br />
5.1 Phase 0: State of the situation (formative evaluation)<br />
An analysis of the initial questionnaire and of the pre-questionnaire helped to establish a<br />
picture of the initial situation, which was used in the formative evaluation to provide<br />
some input for driving and adjusting the design (via a prioritisation of what appeared to be<br />
the most critical elements from a user perspective).<br />
5.1.1 Results<br />
Different elements were collected from these questionnaires:<br />
• The perceived needs of the end-users.<br />
• The practices and tools currently used.<br />
• The users’ expectations (longer term).<br />
5.1.1.1 Perceived needs of the end-users<br />
The answers to the pre-test questionnaires revealed the perceived needs and expectations of<br />
the end-users. In particular, users expect Knowledge Management systems to help them to<br />
access information, to reuse past experience, to save time, and to locate experts.<br />
The following paragraph quotes some answers of the pre-questionnaire:<br />
(Good Access to information)<br />
“To facilitate access to information/knowledge relevant to the current work tasks in order to<br />
optimize work processes and improve productivity”;
“The most important function is to facilitate access (on time) to useful information in order to<br />
make my work easier and best.“<br />
“Searching the information that I need”<br />
“Discover and consume the knowledge generated by the company that could help me in my<br />
everyday tasks.”<br />
“To get briefings of news” Julian<br />
(Better re-use of past experience)<br />
“Re-use past experience”.<br />
“Reuse Indra’s past experience: documents, experiences, etc.”<br />
“Not loosing time studying problems which have been solved before”<br />
(Saving time)<br />
“Saving time when I am searching for a solution”, Jose.<br />
“When I must do a tender, I need a lot, and diverse information about products, prices,<br />
references, news, etc. There is, normally, scarce time to do it and to have information on time<br />
if it is necessary.”<br />
“A major advantage is saving time when I’m searching for a solution”<br />
“The most important functionality of this system is that it save me time in searching any kind<br />
of information that I need”<br />
(To find the right person, and get help)<br />
“Look for experts in order to ask for a tacit knowledge”<br />
“Find the right people to solve concrete problems”<br />
“Also some KM tool should provide information to know what people knows about something<br />
or has a previous experience with some technologies and products”.<br />
“Direct chat with experts”.<br />
5.1.1.2 Current practices (and KM tools used)<br />
In order to accomplish various work tasks, the Competence Centres employ a variety of<br />
(structured and unstructured) knowledge tools supporting a large set of knowledge processes.<br />
These tools include:<br />
• An Enterprise portal (Indr@web): integrates and manages a wide variety of corporate<br />
information channels and services.<br />
• A Knowledge and information repository infrastructure: databases and electronic<br />
document management systems<br />
• Knowledge maps and guides to available knowledge resources: thesaurus, taxonomy and<br />
ontology generation<br />
• Search and delivery information services to access analysis and strategy external sources<br />
of information
• Collaboration services<br />
o Directory, calendar, agenda services<br />
o Threaded discussions (forums)<br />
o Asynchronous (e-mail)<br />
o Shared spaces (document sharing, white-boarding).<br />
• E-learning and human resource management portal (Employee Application Portal). This<br />
portal is, however, still at a prototype stage.<br />
These tools are not integrated under the common framework of a knowledge management<br />
system, but appear mainly as a set of tools independent of one another. Their levels of usage<br />
also differ and can be ranked in the following order: email, search tools (such<br />
as Google), databases, communication tools, and the portal.<br />
It has to be noted that since the core business of INDRA is centred on information<br />
technologies (they propose and integrate IT solutions to their clients), the company’s culture<br />
is very favourable to the use of technology, and people are very keen to adopt new<br />
technologies.<br />
5.1.1.3 Suggestions for improvements of the actual KM tools<br />
The end-users do not seem to be satisfied by the solution currently offered by the knowledge<br />
management tools, and ask (for instance in the pre-questionnaire) for many improvements<br />
and advanced features.<br />
Examples of desires include:<br />
• To better organize the content of the knowledge management tools; the “quality of<br />
content” is perceived as a very important issue. “Content is not correctly organized, not<br />
updated or duplicated.”<br />
• To make the experience of people more visible in the organization: “to know what people know<br />
and to make their experience with technology and products accessible”.<br />
• Personalization and adaptive features “to include mechanisms in order to acquire<br />
knowledge about user profile and filter information and noise”, “adapt the tools to each<br />
company or sector”<br />
• To integrate collaborative tools<br />
• To improve the interface and to integrate the functionality under a common framework;<br />
• KMSs are still “document management systems” enabling access to documents in an<br />
organized way. “A KM should be more powerful, should be able to relate in an active<br />
way people and knowledge, should be more dynamic and make know people profile<br />
evolve.”<br />
• To address Information overload. “Yes, this tools need major improvement to allow users<br />
use Knowledge tools in an easy way, spending few time and don’t lose among hundreds<br />
of document”<br />
• More intelligent search of documents (“not only by descriptors”)
• List of connected users to the system<br />
• Direct chat with experts<br />
5.1.2 Analysis<br />
The analysis of the initial questionnaire, of the pre-questionnaire, and of the<br />
different discussions has helped to assess the situation, to identify some (perceived) needs,<br />
and has revealed a set of important features that need to be enhanced in current KM tools.<br />
An organization already equipped but with disparate tools<br />
As can be observed, the INDRA competence centres are already relatively well equipped with<br />
information systems supporting the knowledge-intensive work of the knowledge<br />
workers. Besides, the level of acceptance of these technologies is good, thanks to a<br />
company culture very favourable to the use of technology.<br />
However, these tools are not integrated, and the support of the knowledge processes remains<br />
shallow: most of these tools appear to be closer to information tools than to knowledge tools.<br />
The need to better organize the content<br />
The organization of the content and the quality of content are some of the most important<br />
issues. Current knowledge management tools facilitate access to knowledge, but the<br />
knowledge workers complain that “content is not correctly organized, not updated or<br />
duplicated”.<br />
Enhanced support for searching and filtering knowledge<br />
The knowledge workers want to “save time when I am searching for a solution” and “not<br />
loosing time studying problems which have been solved before”. The enhanced support for<br />
filtering and searching knowledge is related to “saving time” in achieving the various<br />
work-related tasks.<br />
Adaptive features and personalization<br />
Personalization issues are related to the filtering of information, but also to a need for ease<br />
of use, expressed as: “Yes, these tools need major improvement to allow users use<br />
Knowledge tools in an easy way, spending few time and don’t lose among hundreds of<br />
document.”; and to a need “to include mechanisms in order to acquire knowledge about<br />
user profile and filter information and noise”. Adaptation features are required in order to<br />
“adapt the tools to each company or sector”.<br />
Support for expertise finding or a need “to know what people know”
Being allocated to different projects, writing project proposals for various clients and<br />
proposing IT solutions are everyday tasks for Indra’s knowledge workers. Access to the<br />
right information and collaboration with experts improve their efficiency and help them take<br />
better decisions. Therefore it is necessary to make the experience of people more visible in the<br />
organization. The need “to know what people know, to make their experience with<br />
technology and products accessible” and “to find the right people” is useful for solving<br />
everyday tasks.<br />
These needs of the users validate the main research challenges of the project, namely:<br />
• Research on mechanisms for better structuring and reuse of knowledge (e.g. ontologies),<br />
• Research on more powerful mechanisms for searching and retrieving knowledge (e.g.<br />
advanced query mechanisms and the use of semantic annotations for retrieving<br />
knowledge),<br />
• Personalized information spaces to avoid information overload and to ease the use of the<br />
system (e.g. taking into account the preferences and needs of the users, etc.),<br />
• Better management of the tacit knowledge expressed as: “to know what people know and<br />
to make their experience with technology and products accessible”.<br />
5.2 Phase 1: Ontology Building (and some content population)<br />
Ontologies aim to structure and represent domain knowledge in a generic way which may be<br />
reused and shared across applications and groups.<br />
“Disparate backgrounds, languages, tools and techniques are a major barrier to effective<br />
communication among people, organizations and/or software systems (…) the implementation<br />
of an explicit account of a shared understanding (i.e. an “ontology”) in a given subject area,<br />
can improve such communication, which in turn can give rise to greater reuse and sharing,<br />
interoperability and more reliable software.” (Uschold and Gruninger, 1996).<br />
5.2.1 Ontology building (at INDRA)<br />
5.2.1.1 Description of the building process<br />
The process of knowledge domain modelling at INDRA, namely the modelling of the<br />
tendering process, went through an iterative phase in order to reach a shared understanding of<br />
concepts, a complete ontology, and a consensus on the definition of<br />
concepts. The process of building the domain ontology was itself used in the<br />
evaluation of the ontology modelling processes and tools. The design and implementation of<br />
the domain ontology at INDRA brought some interesting insights, which were captured<br />
through a specialized questionnaire. The questionnaire is attached in Annex 6.
In a first phase, the ontology modelling process was assessed at INDRA during May-June. A<br />
Spanish-language questionnaire was designed for this purpose (see Annex 7). Some conclusions<br />
drawn by the ontology engineers are briefly summarized below:<br />
• Related to ontology’s concepts:<br />
It is very important that the concepts match the domain’s needs.<br />
Concept relevance is more important than completeness.<br />
The number of concepts defined is not important.<br />
• Related to ontology’s relations:<br />
It is very important that relations match the domain’s reality.<br />
• Related to the visual clarity of the ontology:<br />
It is very important that the ontology interface displays in a clear manner the information<br />
captured in the ontology.<br />
From this experience of building and refining the domain ontology, we have extracted some<br />
thoughts from the ontology engineers. The ontology engineers from Indra and Meta4 were asked<br />
to share their experience related to the ontology modelling process. The questionnaire was<br />
designed to capture their experience and problems, and was complemented<br />
by some telephone discussions. Indra and Meta4 faced different problems.<br />
The first domain ontology used at Indra faced a series of problems at the usage stage. For<br />
example, the terminology used in everyday tasks by the knowledge workers was not the<br />
same as the conceptualization of the domain ontology. A set of overly generic concepts was<br />
confusing for the end-users, so these concepts were misused or used for everything. Conversely,<br />
overly specific concepts were never used. A set of missing concepts was also identified during<br />
usage.<br />
5.2.1.2 Findings and analysis<br />
The ontology reengineering process emphasized the fact that getting to a shared<br />
conceptualization is not a straightforward process. The design of a good domain ontology<br />
is an iterative process. Amongst the main reasons that led to the ontology reengineering<br />
process are:<br />
1. The initial ontology wasn’t offering enough support for the given scenario (tender,<br />
development, technology)<br />
2. The initial ontology was not complete<br />
3. The terminology used was not consistent with the general usage<br />
4. It is difficult to reach a consensus on the definition of concepts
Some other reasons were suggested by the ontology engineer of Indra:<br />
One important problem of the ontology was located in certain concepts. These concepts<br />
were not clear enough to users, so they misused them, causing some chaos in the information. This<br />
problem was related to the terminology (proposed reason 3), because some of the<br />
problematic concepts were not used in daily work, and also to the consensus about<br />
concepts (proposed reason 4), because if a concept is not specific, each user will apply his or<br />
her own idea of the concept.<br />
For example:<br />
• Term and Topic were fuzzy concepts, so people tended to use them for everything they did not<br />
know where else to put. The instances of these two concepts were therefore totally<br />
heterogeneous: because the name of each concept was too general, each user had his or her own<br />
meaning for it.<br />
• Technology and its sub-concepts: some of these concepts referred to similar ideas (software,<br />
operating systems, commercial products), some were useless (requirements), and<br />
some were too general (services). The consequence of these problems was that nobody<br />
used these concepts and their instances homogeneously.<br />
In the case of other concepts, the problem was with the nomenclature or the formats: the format<br />
for dates, or for numbers. For example, in the duration attribute of a project, the unit was not<br />
specified, so each user had his or her own idea of the format: months, years, days, or the<br />
start and end dates of the project.<br />
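Such ambiguity could be reduced by validating a declared format at capture time. The sketch below is only illustrative: the rule that a project's duration must be entered as an integer number of months, like the function names, is an assumption for the example, not part of the Ontologging DUI:<br />

```python
import re

# Hypothetical format rule: duration must be a plain integer,
# interpreted as a number of months.
DURATION_MONTHS = re.compile(r"^\d+$")

def validate_duration(value):
    """Reject free-form entries like '2 years' or '01/2002-01/2004'."""
    if not DURATION_MONTHS.match(value):
        raise ValueError(
            f"duration must be an integer number of months, got {value!r}")
    return int(value)

print(validate_duration("24"))  # accepted: 24 months
```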
Therefore, a first conclusion may be that it is very important to use specific concepts, in order<br />
to avoid misunderstandings. Although the DUI includes the possibility of adding some kind<br />
of help for users (specifying the format and a brief tool-tip for each concept), it would be very<br />
useful to be able to attach a description to each concept of the ontology,<br />
making this description or help available in the DUI. This way, users could always review<br />
the help for a concept in order to clarify their doubts without needing prior training<br />
on the ontology. For example: “The organizations related with an operation using the<br />
relationship -implies- can be clients, partners, providers…”<br />
Regarding reason 1, this was clearly present in the definition of the Documents concepts.<br />
This definition was very scarce, so it was difficult to distinguish the different types of<br />
documents involved in a proposal or in a project. The original ontology also did not include<br />
the option to relate a Proposal to the corresponding project (if the proposal is awarded).<br />
The ontology was also incomplete in the definition of people, because there was only one concept<br />
for all the people. In the new ontology this has been addressed by including three sub-concepts<br />
of the “People” concept.
Therefore, the original ontology was too general, especially in some concepts which are key<br />
for some of the proposed scenarios (especially Tendering). On the contrary, it was too<br />
specific in some of its branches, so a more general approach has been used in the new<br />
ontology.<br />
The usage of the domain ontology (queries, submitted documents) helps the ontology<br />
engineer to decide which concepts are missing, which concepts have not been used<br />
appropriately and which ones cause problems. The users’ queries can also help to<br />
identify new relationships between concepts.<br />
In addition to the suggestions from the other users, some limitations of the domain ontology<br />
were identified when the ontology engineer used the tool as a normal user (e.g. missing<br />
concepts or relationships can be detected when entering or querying data). Therefore, a first<br />
step to take before refining the ontology is to review the users’ work and suggestions, and also<br />
to experiment with the DUI as a normal user, trying to reproduce the most frequent use cases.<br />
Once all these notes have been compiled, the ontology engineer begins to make the corresponding changes.<br />
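The analysis of user queries mentioned above can be sketched in a few lines. The concept labels and logged query terms below are invented for illustration, since the actual Ontologging log format is not described here:

```python
from collections import Counter

# Invented concept labels of the domain ontology.
concepts = {"project", "proposal", "customer", "document"}

# Invented log: the terms each user typed, one list per query.
query_log = [
    ["customer", "contract"],
    ["proposal", "deadline"],
    ["contract", "penalty"],
]

# Terms that match no concept label are candidate missing concepts
# (or synonyms worth attaching to an existing concept).
misses = Counter(t for q in query_log for t in q if t not in concepts)

for term, count in misses.most_common():
    print(term, count)   # 'contract' appears twice: a strong candidate
```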
5.2.2 The reengineering of the Ontology<br />
As the level of quality of the first version of the Ontology (used for the first stage of the<br />
evaluation) was found not to be adequate (incomplete, confusing, noisy, etc.), a process of major<br />
redesign was initiated.<br />
5.2.2.1 Description of the ontology re-engineering process<br />
This section describes the process carried out to update the original version of Indra’s<br />
domain ontology, according to the users’ comments, and the work done by the domain<br />
modellers in order to solve the detected problems and to adapt the ontology to the proposed<br />
use scenarios.<br />
A detailed description of the process of refining and migrating the domain ontology (by the<br />
ontology engineer of Indra) is presented below.<br />
Step 1 - ‘In paper’ redesign<br />
The first step carried out when redesigning the ontology was to analyse the original ontology<br />
to detect which ‘branches’ of the ontology store homogeneous and appropriate data and<br />
which ones do not. This allows the managers to distinguish those concepts that are well defined<br />
from those that are confusing and cause misunderstandings or problems.<br />
It is also important that the manager use the DUI as other users do, because the best way of<br />
seeing what the user may need is to be in the user’s place. Therefore, it is important to take<br />
into account the use scenarios, in order to detect which may be the most frequent queries that
the users would need, or which concepts may be interesting for them. This allows us to add<br />
those concepts missed in the original ontology.<br />
Once a first draft of the new ontology has been sketched, it is better to work directly with the<br />
Ontologging system tools, which provide a richer environment for redesigning ontologies, even if<br />
the new ontology is not totally defined.<br />
Step 2 – From paper to Ontologging: ‘rough’ migration<br />
The ‘sketched’ ontology was transferred to the system using KAON. The original ontology<br />
was used as a reference, since some of the concepts were the same.<br />
Once the first version was introduced in the system with KAON, the next task was defining<br />
the mappings or translations required to move the already existing data from the original<br />
ontology to the new one. This process was made using the ORA (Ontologging Reconciliation<br />
Agent), programming the required mappings in XML.<br />
This tool made it possible to avoid losing the huge amount of data already introduced in the<br />
original ontology. However, this migration process does not allow all the instances to be mapped to<br />
their corresponding concept: when a concept from the original ontology is mapped to<br />
different concepts in the new ontology, it may be difficult to establish a way to decide<br />
to which concept the instances must be assigned. Therefore, additional work will be<br />
necessary to separate these instances at a later stage.<br />
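The distinction between automatic and ambiguous mappings can be sketched as follows. This is not the real ORA format (which used XML), and the three sub-concept names are invented (the report only says that “People” was split into three sub-concepts); the intent is only to show why one-to-many mappings had to be deferred to manual work:

```python
# One old concept -> one or several new concepts. A single target can be
# migrated automatically; several targets are ambiguous and must be
# resolved by hand in a later stage, as described above.
mappings = {
    "Operation": ["Operation"],                           # unchanged
    "People":    ["Manager", "Developer", "Consultant"],  # split concept
}

def migrate(instances):
    migrated, pending = [], []
    for name, old_concept in instances:
        targets = mappings.get(old_concept, [])
        if len(targets) == 1:
            migrated.append((name, targets[0]))
        else:
            pending.append((name, old_concept))  # manual assignment needed
    return migrated, pending

done, todo = migrate([("SINEA", "Operation"), ("Someone", "People")])
print(done)   # [('SINEA', 'Operation')]
print(todo)   # [('Someone', 'People')]
```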
Step 3 – Polishing the ontology – ‘sharp’ migration<br />
Although ORA allowed moving an important amount of data, the result still required some<br />
additional work of adjustment and tuning. The DUI was used at this stage for completing the<br />
work. The DUI was used:<br />
• To check whether the design is correct from the point of view of the final users<br />
• To complete the data population with some instances that were not present in the original<br />
Ontology. Indeed, the new Ontology had also introduced new concepts and new relations,<br />
for which associated instances and relationships had to be created.<br />
At this final stage, the work was an iterative cycle between KAON (the Ontology editor), the<br />
DUI, and sometimes the ORA (which was used to move some data not included in the ‘rough’<br />
migration).<br />
The iteration cycle was as follows:
• The DUI was used to review a part of the ontology and to see which instances were<br />
required to complete the attributes or relationships that are important and had not been<br />
included in the new ontology.<br />
o We had to check whether those required instances or values were present in the original<br />
ontology and whether their links had been destroyed in the migration process. For this<br />
purpose, KAON was used to navigate the original ontology and check the old<br />
values of these properties or attributes. This procedure allowed us to locate those<br />
instances that were moved to a wrong concept in the migration process, so we could<br />
take note of them and change the parent concept later with KAON.<br />
• We created many instances to complete the empty attributes or relationships, using the<br />
DUI. These instances were new, and were not present in the original ontology.<br />
• We modified the instances already existing in the original ontology but moved to a wrong<br />
concept in the new one. Using KAON, we changed their parent concept to the correct<br />
one. Later, again with the DUI, we related these instances appropriately to others.<br />
• Sometimes it was easier to re-create the instance from scratch, just using the instance in<br />
the original ontology as a reference to copy its attributes and to re-link its relationships.<br />
This was because sometimes we also wanted to change the tag that identifies the instance,<br />
perhaps because the original was wrong according to the established nomenclature.<br />
Therefore, in the iteration we kept going over the instances of a concept, checking whether all the<br />
key relations were established and filling in the pending values. This process implied the creation of<br />
new instances, which again meant reviewing all their properties and perhaps creating new<br />
instances.<br />
This process was then repeated over the instances of another concept.<br />
Once all the data was complete, the ontology was again ready to be used for all the users.<br />
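The review loop above can be summarized in a small sketch. The concept, relation and instance names reuse examples appearing elsewhere in this report (Smartgov, SINEA, has-participant, refers-to), but the data structure itself is invented:

```python
# For each instance of a concept, check that the key relations are
# established; the missing ones are queued for completion in the DUI.
required_relations = {"Project": {"has-participant", "refers-to"}}

instances = {
    "Smartgov": ("Project", {"has-participant"}),
    "SINEA":    ("Project", {"has-participant", "refers-to"}),
}

def pending_work(instances):
    todo = {}
    for name, (concept, filled) in instances.items():
        missing = required_relations.get(concept, set()) - filled
        if missing:
            todo[name] = sorted(missing)
    return todo

print(pending_work(instances))   # {'Smartgov': ['refers-to']}
```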
5.2.2.2 Analysis of the redesign process<br />
The Ontology resulting from the re-engineering was considered to be of an adequate level of<br />
quality, and in particular made the final system usable by the end users.<br />
This re-engineering represented a major effort, and took several weeks to complete.<br />
This redesign was very useful in that it provided the setting for investigating deeply and<br />
concretely the issues related to Ontology design, one of the most critical elements in the<br />
design of an ontology-based system.<br />
Indeed, we can consider that reengineering represents a facet inherent to the Ontology design<br />
process, and not only an operation that we had to carry out in this project because we had<br />
been too hasty in the design of the initial ontology. Ontologies are in any case subject to<br />
evolution (this flexibility is actually one of the main reasons for using an ontology-based system)<br />
during the lifecycle of a knowledge management system. Ontology building is also inherently
iterative, since it is difficult to get it right the first time (actually, ontology building is<br />
sometimes considered today more as an art than as a science).<br />
This redesign presented the opportunity to evaluate the capability of the Ontologging system<br />
to support the work of more experienced designers (the designers had indeed acquired some<br />
experience since the beginning of the project), which provided a better outlook on how the<br />
Ontology design process would happen in the future.<br />
5.2.3 Some lessons learned.<br />
It is clear that the difficulty of the process of Ontology building was underestimated in this<br />
project, even if the more user-oriented participants of this project (the main user partner Indra<br />
and the partner in charge of the evaluation) had raised some concerns soon after the<br />
beginning of this project. However, addressing this issue was not an easy task, and would<br />
have required a lot of attention that was perhaps beyond the scope of this project.<br />
Yet this issue was actually not totally ignored, and was to some limited extent undertaken by<br />
the work on the OKE (Ontology Knowledge Evaluation), whose role was to provide some tools for<br />
diagnosing the quality of an Ontology and of the ontology population, and which<br />
should definitely help the ontology design and population processes. However, this<br />
category of tool relies heavily on the definition of heuristics that were not really available at<br />
the beginning of the project (the theoretical heuristics appeared to be very artificial and of<br />
little value), and that are also difficult to make easy to use. Another category of work in this<br />
project is related to the mechanisms (agent notifications) supporting the collaborative<br />
authoring of Ontologies. As indicated, this mechanism was however not really tested<br />
(except technically) for the design of the Indra Ontology, given the relatively limited size of<br />
the ontology design team. Finally, worth mentioning are the sophisticated and very<br />
powerful rollback mechanisms present in the KAON system, which allow the Ontology<br />
designer to test in advance and without risk the consequences of design decisions.<br />
The use of an Ontology building methodology, such as OntoClean (Guarino and Welty,<br />
2002), benchmarking against projects that had to confront the same difficulties (see for<br />
instance (Missikoff, Navigli and Velardi, 2002)), and a better identification of tools and<br />
environments supporting Ontology building would probably have helped to identify the<br />
problems earlier and provided some directions for overcoming them.<br />
5.3 Phase 3: Content population<br />
Two attempts at ontology content population can be distinguished: an initial content<br />
population that happened with the initial domain Ontology, and a final content population<br />
that happened with the new re-engineered Ontology.<br />
The initial attempt at ontology content population was made by different users with a<br />
moderate level of expertise, who tried to capitalize into the system, using the DUI,<br />
documents and information that they thought would be useful. As already indicated, they
found this capitalization process difficult and painful, first because the quality of the<br />
Ontology was bad (not the right concepts) and second because they did not have any<br />
guideline or style to follow. The result was, not surprisingly, of low quality: not homogeneous,<br />
noisy (the end-users complained that the knowledge contained “a lot of crap”), and almost<br />
unusable.<br />
The second (successful) attempt at ontology content population was accomplished only by<br />
the limited team (two to three people) that elaborated (or reengineered) the main Ontology.<br />
This manner of proceeding had the advantage of better controlling the quality of this<br />
population, since experts were used for this purpose. It also had the advantage of making the<br />
processes of ontology elaboration and population happen concurrently and in an interrelated<br />
way with the design of the main Ontology itself, thereby facilitating the elaboration of this<br />
Ontology (populating the content helps in identifying the “good” concepts to be present<br />
in the ontology).<br />
Whilst the quality of the resulting Ontology and Ontology population was considered good<br />
(the designers dedicated a lot of effort to this), this situation cannot be considered<br />
satisfactory. Indeed, every person in the organization should be able to share his/her<br />
knowledge with others and contribute to “feeding“ the electronic memory of the organization.<br />
We can however imagine (this could even become a research question to investigate in the<br />
future) that this process of collective content population would be easier once a minimum of<br />
Ontology population has been accomplished by the team of experts (users have good examples<br />
and styles to follow), in particular if it is followed by a set of accompanying measures<br />
(coaching, training, a knowledge population review process, etc.) guaranteeing that the quality<br />
is preserved.<br />
What are the conclusions and lessons learned from this population? That, as for Ontology building,<br />
Ontology content population has appeared to be more difficult than was expected.<br />
Besides, this problem does not seem to be related to the tools for Ontology population that have<br />
been designed in Ontologging, but more profoundly connected to Ontology theories and<br />
usage methodologies. Indeed, the quality of the content in an Ontology-based system is more<br />
critical than in the case of traditional information systems, and the capitalization process is<br />
inherently more difficult (because, for instance, of ambiguity (Garigue, 2003)).<br />
The answers to these questions would require some further investigation. The directions that<br />
have already been indicated for Ontology building also appear applicable, and consist in more<br />
methodological work, as well as some work on tools and environments helping to support<br />
these methodologies.<br />
5.4 Phase 4: Evaluating Ontologging “knowledge retrieval”<br />
The evaluation of the system from an end user perspective (knowledge retrieval) has<br />
distinguished two dimensions that will be presented in the next chapter: first, a basic
knowledge retrieval dimension aiming at evaluating the capability of the system to support<br />
the more basic knowledge retrieval processes (navigating and searching knowledge); second,<br />
a more advanced dimension centred on the users and groups of people in the<br />
organization, and the advanced means to support these users and groups (via personalization<br />
mechanisms, support for knowledge sharing, etc.).<br />
5.4.1 Evaluating the basic knowledge retrieval<br />
Questionnaire 2 (Ontologging Project Questionnaire) was used to collect the “user” feedback<br />
related to the main usage of the Ontologging system. In particular, this questionnaire helped<br />
to collect information related to the use of the central Ontologging tool: the DUI (Distributed<br />
User Interface). It should be remembered that this tool allows the final user to visualize<br />
knowledge, to navigate the knowledge, to search knowledge, and also to add new<br />
knowledge items (knowledge capitalization).<br />
Some additional feedback was collected from the different focus groups and interviews that<br />
were organized. Finally, experiments (via scenarios) were used to validate different usages,<br />
in particular to identify the difficulties, and to elicit (via cognitive walkthrough) the<br />
internal cognitive process followed by the end users.<br />
5.4.1.1 Evaluating the main tool (DUI)<br />
The main tool (the DUI) was well perceived and considered adequate, although some<br />
improvements would be well appreciated.<br />
Below are some opinions extracted from the questionnaire:<br />
The system is all right …<br />
“The word enjoy is not correct (but) the DUI is ok and the experience is satisfactory”, “(liked) the<br />
flexibility of the tool to navigate the taxonomy, enabling and disabling the desired relationships,<br />
choosing which concept to see and which one not”; “(liked) to have a global and comprehensive view<br />
of the elements, entities and items involved in the tendering process”, “(liked) the possibility to search<br />
information through link in natural language”, “(liked) using the interface for navigation”.<br />
… but not perfect.<br />
“The way of showing all the information related with one instance is not very useful, is much better<br />
viewing everything navigating the ontology”, “Could be more attractive”, “(problem with)<br />
Navigation when there is too much documents stored”, “Sometimes it’s difficult to know the best way<br />
to perform the searches”, “(disliked) The hierarchical representation of the information. I prefer a<br />
more graphical layout.”<br />
5.4.1.2 Usage scenarios<br />
More interesting are the usage scenarios of the tools that help to understand how the end user<br />
perceived and appropriated the system.
The scenarios below represent use cases that the users were asked to elaborate in order to<br />
later experiment with the system:<br />
Tendering scenario 1: Finding ‘experts’ in a specific knowledge area.<br />
Approach: Use of a query template called ‘Experts in Knowledge area’.<br />
Examples of selections to enter in the query are: OCR, ‘Automatización de procesos<br />
(Workflow)’, ‘Gestion de redes’<br />
Tendering scenario 2: Querying all the activity related with a specific customer<br />
Approach: Use of a browse template called ‘Customer’, with which you can browse all the<br />
operations related with a customer, and all the attributes and relations of these operations.<br />
Examples of selections:<br />
o AMENA: to see the status of the different operations (activating the two Current Status<br />
properties)<br />
o Comision Europea (IST): To see involved companies (activating the involves relation)<br />
o AENOR: To see that Indra has a strong position in EDM technology in this client<br />
(activating refers-to relations)<br />
Tendering scenario 3: Viewing the operations categorised by the market.<br />
Approach: Use a browsing template (Operations by Market)<br />
Examples of selections:<br />
o ‘Administración Pública y Sanidad’: many operations<br />
o Energía, Operadores y Media: medium number of operations<br />
o Espacio: No operations at all<br />
Project development scenario 1: Finding the people who worked on a proposal<br />
Approach: Just navigating the ontology, going to operations->proposals and selecting a<br />
specific one such as ‘Implantación de un Proyecto Piloto SIG’ (activating has-participant and refers-to)<br />
to see that Juan Antonio Alamillos is the person responsible; then we can navigate through the<br />
Knowledge Area to see other similar projects.<br />
Project development scenario 2: Searching documents and contact people related to a third<br />
party product.<br />
Approach: Just browsing the ontology, going to products, activating (~related with<br />
(products)) to see the related documentation, activating ‘manufactured-by’ to get the<br />
manufacturer, then produced-by, and then ‘~works-in’ to see the contact people for<br />
this product.<br />
Examples of selections: ‘Eyes & Hands FORMS’, ‘BEA Weblogic’
Project development scenario 3: Viewing the original proposal and the project management issues<br />
corresponding to a project.<br />
Approach: Use of a query template called ‘All management documents for a project’. Then<br />
you can view the type of each document by activating the Parent Concepts property to see the<br />
concept for each document.<br />
Examples of selections: Smartgov, SINEA<br />
Examples extracted from Questionnaire 2 are also given below:<br />
“I need to write an specific type of document, and I search documents of the same type in order to<br />
have an index, some examples, ideas about how to structure the information… Even I can find other<br />
documents of the same type and directed to the same customer, so that I can see if they use an specific<br />
format, and if usually they ask for some specific information”.<br />
“I have to add Indra’s references in a tender, showing in which similar project the company has<br />
worked. Using the DUI, I can search project or proposal related with the same knowledge area, or<br />
with customer in the same market, for example… or even previous experience with the same<br />
customer.”<br />
“I am working as developer in a project and the user is asking me for some requirements that are new<br />
for me. I can search my project, the related proposal, and locate if that requirements were originally<br />
included in the contract or in our proposal, or the customer is trying to get more developments for<br />
free. I also can locate the person who made the deal, so I can talk with him about the relation with the<br />
customer, how flexible we have to be…”<br />
“Tendering process<br />
Purpose: to get effective co-ordination of tendering process between parties involved”<br />
“Project Development. Designing of the system and programming (developing). I used the tool for<br />
searching documents related with my project in the designing and developing stages.”<br />
etc…<br />
5.4.1.3 Analysis<br />
One of the main findings of this study is that people have adopted a view of knowledge<br />
management systems radically different from the document-oriented and search-driven<br />
approaches of traditional knowledge management systems. In practice, the users<br />
reasoned in a way that conforms much more to the vision of the semantic web, which<br />
consists in visualizing and navigating a web of a variety of knowledge elements.<br />
This approach should not come as a surprise today, when we know about some of the recent<br />
orientations of the work conducted on Ontology-based systems, and in particular the<br />
Ontology-based portals that have flourished in different places (see for example the OntoWeb
project web site at http://www.ontoweb.org/). However, it was relatively unexpected to some<br />
of the designers of the Ontologging system, whose experience of knowledge management<br />
systems was very document-centred.<br />
5.4.2 Evaluating the user-centred usages<br />
Questionnaire 3 (Evaluation of user modelling processes and the evaluation of the knowledge<br />
distribution agents) was the main questionnaire used to evaluate the user-centred usage of the<br />
Ontologging system. Questionnaire 2 was also useful for collecting information related to<br />
knowledge sharing and motivational aspects.<br />
The problem of user modelling in KMSs relates to the last two issues mentioned above,<br />
namely the information overload issue and the need to better manage tacit knowledge.<br />
The need for enhanced user support for filtering and retrieving the knowledge available in the<br />
system, expressed as “to not get lost” amongst hundreds of documents and to filter<br />
“information and noise”, relates to research on personalization and adaptive hypermedia. A<br />
series of user modelling techniques for personalized interaction makes it possible to build systems that<br />
adapt to the user’s characteristics. The evaluation of the user modelling tools has been done by<br />
combining the questionnaire with other empirical evaluation methods such as focus group discussions<br />
and semi-structured interviews.<br />
The main issues addressed by the evaluation of the advanced user centred usages were:<br />
• Employees view on sharing personal information and user modelling processes<br />
• The perceived need of personalization of KM tools and the use of knowledge distribution<br />
agents<br />
• Knowledge sharing incentives<br />
What is the end-user’s view on sharing personal information and user modelling?<br />
Personalized systems require users to submit user data (personal information). The disclosure<br />
of user data opens up a series of problems, like privacy and security, but it also opens up new<br />
forms of personalization, communication, collaboration and social interaction. Some of the<br />
user data can be acquired explicitly, by filling in a form, or implicitly, by various user-modelling<br />
techniques.<br />
In our case the UPE (User Profile Editor) enables the users to enter and update personal<br />
information (instantiating the user ontology). The user is in control of his “user profile” data.<br />
The UPE also makes it possible to visualize the others’ profiles in order to support collaboration and<br />
communication between the employees.<br />
User modelling techniques enable the capture of certain characteristics of the users interacting<br />
with a KMS, the so-called behaviour of the users in the system. But the integration of user<br />
models and user modelling in KMSs is a sensitive issue. The questionnaires and semi-structured<br />
interviews with the end-users emphasized the fact that certain users are concerned<br />
with privacy and trust issues. This category of users seems to be reluctant about the use of<br />
their data by the organization. Therefore, according to the users’ opinion, the user profiles<br />
should be made partially available to the other end-users and fully available to human<br />
resources.<br />
What would be the main motivation for knowledge sharing and creation for Indra end-users<br />
(money, virtual rewards, recognition versus other incentives)?<br />
The behaviour of the users in the system can be associated with the incentives provided to the<br />
users to share their knowledge and be active in the system. Of course, the issue of sharing<br />
knowledge and contributing to the system is complex and should not be limited to simple<br />
incentives. It might imply changes to the current work practices, and it can be associated with<br />
other managerial interventions. We surveyed different types of incentives a company<br />
might use to stimulate knowledge sharing and knowledge creation. In the opinion of the users<br />
from Indra, reputation and promotion in the organization would be the right incentives to<br />
stimulate a knowledge-sharing culture in the organization. However, a bonus associated with<br />
the salary also seems to be a right incentive for experts to spend extra time sharing their<br />
knowledge. Some expert knowledge workers expressed their concern about being<br />
recognized as experts and having to do extra work.<br />
Conclusions on the evaluation of the advanced user-centred mechanisms<br />
o Personalization of KMSs is important due to two important factors: the “information<br />
overload” problem and the heterogeneity of the users. Ontologging uses very simple<br />
personalization mechanisms, such as the adaptation of content using templates and the<br />
notification of users through knowledge distribution agents. Both mechanisms are<br />
perceived as very useful by the knowledge workers.<br />
o Expertise modelling is important because it facilitates collaboration between peer<br />
knowledge workers and implicitly facilitates decision-making and work processes. At the<br />
same time, expertise modelling facilitates a better management of the tacit knowledge of the<br />
organization.<br />
o Some users are concerned with privacy and trust issues, and therefore the user’s profile should<br />
be only partially available to all the users of a KMS.<br />
o Recognition seems to be a key incentive for knowledge sharing.<br />
5.5 A Comparison with a more traditional knowledge management system (KnowNet)<br />
To conclude our work, we have also compared the Ontologging system with a more<br />
traditional knowledge management product (KnowNet) that is currently developed by Meta4,<br />
one of the main partners of this project. The aim of this comparison is to better evaluate the<br />
aspects of Ontologging that are directly connected to the use of an ontology approach, rather<br />
than the aspects related to knowledge management in general.<br />
5.5.1 Description of the evaluation<br />
Meta4 has considered two alternative scenarios for Onto-KnowNet. The evaluation focused<br />
on the approach (i.e. what are the benefits of introducing ontologies in the considered<br />
scenarios) and on some specific aspects (browse and search, knowledge edition), in<br />
particular to allow a comparison with their existing knowledge management product<br />
(KnowNet).<br />
Two different ontologies were modelled, oriented to different types of users:<br />
• The first modelled scenario described a knowledge base of customer support cases and<br />
product bugs. It was intended for developers who were currently using a proprietary<br />
application based on a relational database.<br />
• The second modelled scenario described competences, knowledge areas and knowledge units.<br />
This scenario was presented to some managers to compare with Meta4’s existing KM and HR<br />
solutions.<br />
5.5.2 Comparing the two systems<br />
Meta4 has compared and assessed the added value of an ontology approach against<br />
KnowNet. The comparison was done not only at product level but also taking into account<br />
usage experience, acceptance, etc. Product managers wanted to evaluate whether the<br />
approach of ontologies/semantic techniques could be a strategy for some future products.<br />
The main enhancements compared to KnowNet are:<br />
• Navigational capabilities: allowing users to browse from one object to other related<br />
objects and to find important knowledge by visual means. Ontology navigation complements<br />
directed search, similarly to browsing the Internet.<br />
• Ease of the knowledge creation process and integration with the Office environment.<br />
• Enhanced search capabilities: allowing searches for documents related to other entities<br />
(projects, etc.).<br />
• Easy extension/modification of the data model (domain ontology).<br />
• Integration of logical inferences.<br />
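The navigational enhancement listed above can be illustrated with a minimal sketch: if knowledge objects and their typed relations are stored as triples, browsing from one object to its related objects is a one-hop lookup rather than a keyword query. The data and relation names below are invented for illustration and do not reflect the actual Ontologging or KnowNet data model:

```python
# Minimal semantic-network sketch (illustrative triples, not the real model):
# each triple links a subject to an object through a named relation.
TRIPLES = [
    ("doc:TenderX", "aboutProject", "proj:Metro"),
    ("proj:Metro",  "hasExpert",    "person:Garcia"),
    ("doc:Report7", "aboutProject", "proj:Metro"),
]

def related(node, relation=None):
    """Return every object one hop away from `node`, following edges in both
    directions; optionally restrict the hop to a single relation type."""
    found = set()
    for subj, rel, obj in TRIPLES:
        if relation is not None and rel != relation:
            continue
        if subj == node:
            found.add(obj)
        elif obj == node:
            found.add(subj)
    return found
```

Starting from proj:Metro, a user would reach both documents about the project and its expert in a single browsing step, which is precisely the complement to directed search that the evaluation highlights.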
5.5.3 Lessons learned from this comparison<br />
• The use of ontologies seems to be an important enhancement to knowledge-oriented<br />
applications, and is critical to support a flexible evolution of the domain.<br />
• Ontology navigation complements the directed search, similarly to browsing the Internet.
• Technically, the approach was adequate to allow a possible integration with other<br />
products (Web services, import/export tools). However, introducing ontologies is not by<br />
itself the solution, nor is it “magic”.<br />
• Therefore, Ontologging must be considered as a platform, a set of tools that should aim to<br />
integrate with – and not replace – existing applications.<br />
5.6 Final words<br />
The ontology engineers from Indra and Meta4 were asked about the advantages and<br />
limitations of the use of ontologies, and it is useful to report this higher-level perspective.<br />
What are the difficulties and what are the advantages of using ontologies?<br />
The key advantage is the power of the relationships, which enables users to navigate very<br />
easily from one concept (and its instances) to another (and its instances). The main difficulty<br />
is that users are not used to working with this kind of map or representation. Some of the<br />
Indra knowledge workers faced problems “understanding what an ontology really is.” People<br />
are used to taxonomies and trees for structuring knowledge, and they found it difficult to think<br />
in terms of concepts, relationships, etc.<br />
However, a taxonomic view (a well-known interface similar to the folders in Windows<br />
Explorer) made it easier for end-users to understand the structure beneath the application.<br />
From the Meta4 perspective, the advantage is the flexibility: it is very easy to change the<br />
model while maintaining consistency. No major difficulties were found when modelling was<br />
performed as an individual process; collaborative modelling should be given more support.<br />
Is an ontology-based management information meta-model useful for better structuring the<br />
domain model? Why?<br />
Yes, because it enables users to see all the domain elements and their relationships, providing<br />
easy navigation. There is no similar system with the same power and possibilities.
6 Discussion and Conclusions<br />
Concluding on the work and outcome of the Ontologging project, which the evaluation has<br />
helped to determine, appears to be difficult:<br />
On the one hand, the technical objectives of this project have been fulfilled: a whole set of<br />
tools relying on ontology-based technologies, supporting many of the most important<br />
dimensions of the knowledge processes, has been elaborated. These tools provide the means<br />
to define the “knowledge schema” that will be used to structure the organization of<br />
knowledge in a KM system, to populate this system with content, to retrieve this knowledge,<br />
and finally to evaluate this knowledge as well as the different knowledge processes being<br />
conducted in the system. Even if some room for improvement remains, the design of<br />
ontology-based knowledge management systems no longer appears to be an unachievable<br />
undertaking. From this perspective, moving to a productisation phase would not raise any<br />
major difficulties, given in particular that most of the technological building blocks for<br />
designing these systems are now available.<br />
On the other hand, this project has in effect opened a real “Pandora’s box”: ontology-oriented<br />
knowledge management systems are radically different from the traditional document-centric<br />
knowledge management systems of today, and raise many more non-technical issues that are<br />
not particularly trivial to solve.<br />
For instance, the design of the structure (ontology building) is difficult and requires time. In<br />
the case of the Ontologging project, it took several months and a major redesign to obtain a<br />
correct domain ontology. Missikoff, Navigli and Velardi (2002) indicate that it took one year<br />
for the Harmonize project to release its first domain ontology (in the domain of tourism),<br />
comprising about 300 concepts. Even if the participants in the latter case were presumably<br />
not dedicating all their time to this design (they also mention some techniques that helped to<br />
reduce this time considerably), ontology design remains a complex operation that requires<br />
time and expertise.<br />
In a similar way, ontology content population also requires an important amount of effort<br />
and rigour, and cannot be improvised. Hence, in the Ontologging project, the first<br />
(unsupervised) population produced a result that was barely usable. Indeed, ontology<br />
systems also appear to be less tolerant of low quality and noise than more traditional<br />
information-centred systems. The solution that was finally adopted, having the content<br />
population accomplished by a small and specialised team, worked well enough, but<br />
seems to go against the general idea that all employees should participate in the<br />
knowledge capitalisation process and contribute to enriching the company’s repository of<br />
knowledge. Even if this solution is only temporary, some investigation is needed to<br />
make the collaborative capitalisation process possible.<br />
Finally, the retrieval of this knowledge is also more complex (but also richer) and prone to<br />
dispersion, since it goes well beyond the use of search mechanisms to also include<br />
navigation through a maze of interconnected information elements, and the more
direct support of highly cognitive knowledge management processes (in one case, the<br />
elaboration of a tender).<br />
One of the most interesting findings of this project is that a totally new vision has emerged<br />
of the usage of an ontology-based knowledge management system, different from what was<br />
originally envisaged. Ontology-based knowledge management systems do not represent<br />
only an (incremental) evolution of the traditional document-centred knowledge<br />
management system, but a radically different concept: the document is no longer the unique<br />
element capturing all the knowledge of the organization (other elements also intervene, such<br />
as knowledge objects representing people or projects, but also all the relations connecting<br />
these different objects), and knowledge retrieval does not happen via a global search for<br />
information fulfilling some criteria (typically a set of keywords), but via navigation<br />
through a semantic network and the progressive discovery of this knowledge. Ontology-based<br />
systems also appear to provide much better support for the management of many of the more<br />
complex and difficult-to-formalize knowledge processes. First, as indicated, they provide a<br />
much higher, more cognitive level of abstraction, closer to the ones used by the<br />
knowledge worker in his/her reasoning. Second, by giving the possibility to capture<br />
people-related information in depth (via user modelling), they are able to provide some<br />
support for the management of the tacit knowledge of the organisation. They are able to<br />
support the implementation of much more sophisticated mechanisms, such as<br />
personalization, or active mechanisms, such as agents. Finally, they also offer a way to<br />
integrate seamlessly the different information of an organization into a single system by<br />
acting as the wrapper of all the information systems of the organization (one of the ideas<br />
that appeared to be the most appreciated by the users).<br />
These findings should be considered as globally positive. Traditional knowledge<br />
management systems have mostly failed to be adopted, for a number of deep reasons. These<br />
reasons include the lack of flexibility and poor “alignment” with the company processes<br />
(Malhotra, 2002), little or no support for the management of tacit knowledge and of complex<br />
knowledge management processes, and ignorance of the human and social factors; a whole<br />
set of reasons to which ontology-based systems seem to offer some answers and some new<br />
ideas. Trying to make the traditional knowledge management system simply evolve was<br />
hopeless.<br />
Will ontology-based knowledge management systems be more successful than their more<br />
traditional counterparts? There is much hope and belief that this will be the case, and that<br />
ontologies will represent a major innovation in the next generation of information systems<br />
(Pisanelli, Gangemi and Steve, 2002). Perhaps they will also enable the design of the<br />
knowledge management “killer app” that we have been expecting (for instance, systems able<br />
to better support knowledge networking (Smith and McKeen, 2003))?<br />
Still, a lot of issues and problems remain open. We have the impression of having only<br />
scratched the surface in this project, and the whole field of knowledge management should<br />
be revisited from a knowledge representation, engineering, and cognitive perspective<br />
in order to address questions of design, population, evolution, and usage.<br />
Besides, many additional questions deeply rooted in the ontology concepts and the global<br />
vision of knowledge management would be worth investigating, such as: (1)
the effectiveness of serendipitous knowledge discovery versus simple search retrieval; (2) the<br />
cultural implications of the introduction of an ontology in an organization (influence on<br />
knowledge sharing, contribution to a shared set of values); (3) the cultural and personal-style<br />
adequacy of an ontology-based (semantic web) representation. Concerning the latter, we can<br />
mention recent work that tends to demonstrate a difference in cognitive orientation between<br />
East Asians and Westerners (typically Americans) in the way of perceiving objects and<br />
networks (Nisbett et al., 2001; Chiu, 1972), and the possibility that some cultures (Asian<br />
ones) may be more able to adopt a semantic web point of view.
7 References<br />
Abecker Andreas, Bernardi Ansgar, van Elst Ludger, and Klein Bertin (2003); Organizational<br />
Memory Information Systems for Global Organizations - Design Principles and Research<br />
Directions. Submitted for: Hamid R. Nemati & Riad Ajami (eds.): Global Knowledge<br />
Management - Challenges and Opportunities. Ivy League Publishing. To appear 2003.<br />
ACM SIGCHI (1992). Curricula for Human-Computer Interaction. ACM Special Interest<br />
Group on Computer-Human Interaction Curricula Development Group, New York<br />
Aitken, S. and Reid, S. (2000); Evaluation of an Ontology-Based Information Retrieval Tool.<br />
Workshop on the Applications of Ontologies and Problem-Solving Methods, (eds) Gómez-<br />
Pérez, A., Benjamins, V.R., Guarino, N., and Uschold, M. European Conference on<br />
Artificial Intelligence 2000, Berlin<br />
Andre, E., Klesen, M., Gebhard, O., Rist, T. (2000); ‘Exploiting Models of Personality and<br />
Emotions to Control the Behavior of Animated Interactive Agents’, Fourth International<br />
Conference on Autonomous Agents, pp. 3-7, Barcelona, 2000<br />
Angehrn A, J. Atherton (1999), A Conceptual Framework for Assessing Development<br />
Programmes for Change Agents, ECIS '99, COPENHAGEN, 1999.<br />
Angehrn A., J.-F. Manzoni (1997), Understanding Organisational Implications of Change<br />
Processes: A Multimedia Simulation Approach, 30th Annual Hawaii International<br />
Conference on Systems Sciences, Vol. II, IEEE Computer Society Press, 1997, pp. 655-664.<br />
Angele, J., Sure, Y. (2002); Whitepaper: Evaluation of Ontology-based Tools. Excerpt from<br />
the IST-2001-29243 Report, OntoWeb. D1.3. Tools. (2001). Available at:<br />
http://www.aifb.uni-karlsruhe.de/WBS/ysu/publications/eon2002_whitepaper.pdf<br />
Berners-Lee Tim, Hendler James, and Lassila Ora (2001); The Semantic Web; Scientific<br />
American, 284(5):34-43, May 2001<br />
Bateman John (2004), Ontology Portal,<br />
http://www.fb10.uni-bremen.de/anglistik/langpro/webspace/jb/infopages/ontology/ontology-root.htm<br />
Beyer, H. & Holtzblatt, K. (1998); Contextual Design: Defining Customer-Centered Systems.<br />
San Francisco: Morgan Kaufmann Publishers, 1998.<br />
Brusilovsky P. (2001); Adaptive Hypermedia, User Modeling and User-Adapted Interaction,<br />
Kluwer, Academic Publishers, 2001, Printed in the Netherlands, pp. 87-110<br />
Buckingham Shum, S. (1997); Balancing Formality with Informality: User-Centred<br />
Requirements for Knowledge Management Technologies. AAAI Spring Symposium on<br />
Artificial Intelligence in Knowledge Management (1997), Stanford University, Palo Alto,<br />
CA. AAAI Press. Available at:<br />
http://kmi.open.ac.uk/people/sbs/org-knowledge/aikm97/sbs-paper1.html<br />
CALT Team, 2000, Advanced Learning Approaches & Technologies: The CALT<br />
Perspective, Working paper, INSEAD CALT, October 2000.<br />
Chiu, L.-H. (1972). A cross-cultural comparison of cognitive styles in Chinese and American<br />
children. International Journal of Psychology, 7, 235-242.
Clark, P., John Thompson, Heather Holmback, and Lisbeth Duncan. (2000). Exploiting a<br />
Thesaurus-Based Semantic Net for Knowledge-Based Search. Proceedings of IAAI-2000.<br />
Pp. 988-995. AAAI Press.<br />
Dix A., Finlay J., Abowd G. and Beale R. (1993): Human-Computer Interaction, Prentice<br />
Hall, New York.<br />
Eisenhardt, Kathleen M. (1989): "Building Theories from Case Study Research", Academy of<br />
Management Review , Oct89, Vol. 14 Issue 4, p532<br />
Ehrig Marc (2002), Ontology-Focused Crawling of Documents and Relational Metadata;<br />
Master’s Thesis University of Karlsruhe, (MatrNr. 0926607), January 31, 2002<br />
Erskine, L. E., Carter-Tod, D. R. N., and Burton, J. K. (1997). Dialogical techniques for the<br />
design of web sites. International Journal of Human Computer Studies, 47, 169-195.<br />
Fink, J., Kobsa, A. (2000); A Review and Analysis of Commercial User Modeling Servers for<br />
Personalization on the World Wide Web, in User Modeling and User Adapted Interaction,<br />
Special Issue on Deployed User Modeling, 10, p.204-209, 2000<br />
Garigue Robert (2003); Managing Ontological Ambiguity: Extending the Knowledge<br />
Management Framework; 2nd Annual Knowledge Summit 2003 Doctoral Consortium,<br />
Queen’s KBE - Centre for Knowledge-Based Enterprises<br />
Giboin A., Gandon F., Corby O., and Dieng R. (2002); Assessment of Ontology-based Tools:<br />
Systemizing the Scenario Approach; Proceedings of EON2002: Evaluation of Ontology-based<br />
Tools Workshop at the 13th International Conference on Knowledge Engineering and<br />
Knowledge Management EKAW 2002, Siguenza (Spain), 30th September 2002. Available<br />
at: http://sunsite.informatik.rwth-aachen.de/Publications/CEUR-WS//Vol-62/<br />
Gruber T. R. (1993); A translation approach to portable ontologies. Knowledge Acquisition,<br />
5(2):199-220, 1993<br />
Guarino Nicola and Chris Welty (2002). Evaluating Ontological Decisions with OntoClean.<br />
Communications of the ACM. 45(2):61-65. New York: ACM<br />
Jirotka M. (1992) Ethnomethodology and Requirements Engineering. Technical Report.<br />
Oxford University.<br />
Kay, J. (2001); Scrutability for personalised interfaces, ERCIM NEWS, Special Theme Issue<br />
on Human Computer Interaction, 46, July, 49-50, 2001.<br />
Kirkpatrick, D.L. (1994), Evaluating Training Programs: The Four Levels, Berrett-Koehler<br />
Publishers, San Francisco, 1994-1996.<br />
Kobsa, A., Koenemann, J. and Pohl, W. (2000); Personalized hypermedia presentation<br />
techniques for improving online customer relationships, The Knowledge Engineering<br />
Review 16, p111-155<br />
Malhotra, Y. (2002), Why Knowledge Management Systems Fail? Enablers and Constraints<br />
of Knowledge Management in Human Enterprises. In Holsapple, C.W. (Ed.), Handbook on<br />
Knowledge Management 1: Knowledge Matters, Springer-Verlag, Heidelberg, Germany,<br />
577-599, 2002 http://www.brint.org/WhyKMSFail.htm<br />
McBride Rob and Schostak John (1995), An Introduction to Qualitative Research.<br />
http://www.uea.ac.uk/care/elu/Issues/Research/Res1Cont.html<br />
Menon Tanya, Jeffrey Pfeffer (2003); “Valuing Internal vs. External Knowledge: Explaining<br />
the Preference for Outsiders”, Management Science, Volume 49, Number 4, April 2003.<br />
Meredith, Jack (1998) "Building operations management theory through case and field<br />
research", Journal of Operations Management, 16(4): 441-454.<br />
Missikoff M., R. Navigli, and P. Velardi (2002); The Usable Ontology: An Environment for<br />
Building and Assessing a Domain Ontology. Proceedings of the International Semantic<br />
Web Conference 2002, Springer, 2002, pp. 39-53.<br />
Nielsen J. and Molich R. (1990). Heuristic evaluation of user interfaces. In Empowering<br />
People: CHI’90 Conference Proceedings. ACM Press, New York<br />
Nisbett, R. E., Peng, K., Choi, I., & Norenzayan, A. (2001). Culture and systems of thought:<br />
Holistic vs. analytic cognition. Psychological Review, 108, 291-310.<br />
OntoWeb (2002); Deliverable 2.1: Successful scenarios for ontology-based applications;<br />
OntoWeb Consortium, 2002.<br />
Oppermann Reinhard and Reiterer Harald (1997). Software Evaluation using the 9241<br />
Evaluator. Published in: Behaviour & Information Technology, 16 (1997), 4/5, 232<br />
Pisanelli D.M., Gangemi A., Steve G. (2002); Ontologies and Information Systems: the<br />
Marriage of the Century?; Proceedings of Lyee Workshop, Paris, 2002.<br />
Polson P., Lewis C., Rieman J. and Wharton C. (1992) Cognitive walkthroughs: A method<br />
for theory-based evaluation of user interfaces. International Journal of Man-Machine<br />
Studies, 36, 741-773<br />
Preece J., Rogers Y., Sharp H., Benyon D., Holland S. and Carey T. (1994). Human-<br />
Computer Interaction. Wokingham: Addison Wesley.<br />
Raghavan V. V., Jung G. S., and Bollmann P. (1989); A Critical Investigation of Recall and<br />
Precision as Measures of Retrieval System Performance; ACM Transactions on Office<br />
Information Systems , pages 205--229, July 1989.<br />
Smith Heather A. and McKeen James D. (2003); Network: Knowledge Management’s ‘Killer<br />
App’?; working paper 03-06, Queens University Centre for Knowledge-Based Enterprises,<br />
2003. http://business.queensu.ca/kbe/papers/abstract_03_06.htm<br />
Stephanidis, C. (2001); Adaptive Techniques for Universal Access, User Modeling and User-<br />
Adapted Interaction, 2001, 11: 159-179, Kluwer Academic Publishers<br />
Stuart, I., McCutcheon, D., Handfield, R., McLachlin, R. Samson, D. 2002. Effective case<br />
research in operations management: a process perspective. Journal of Operations<br />
Management, 20: 419-433<br />
Thomas J. C., W. A. Kellogg, and T. Erickson (2001); The knowledge management puzzle:<br />
Human and social factors in knowledge management; IBM Systems Journal, Volume 40,<br />
Number 4, 2001<br />
Uschold, M. and Gruninger, M. (1996); “Ontologies: principles, methods, and applications”,<br />
Knowledge Engineering Review, volume 11, number 2, pages 93–155, 1996.<br />
Weinberger H., Teeni D. and Frank A. (2003); Ontologies of Organizational Memory as a<br />
Basis for Evaluation; 11th ECIS'03 European Conference on Information Systems, Naples,<br />
Italy, June 2003.<br />
Yin, Robert K. (1994): “Case Study Research: Design and Methods”; Sage, Thousand Oaks,<br />
CA; first published in 1984.
8 Annex<br />
8.1 Annex 1: Ontologging Goals & focus. The consortium perspective.<br />
The different partners of the consortium were asked to identify what they considered to be the<br />
goals of the project, as well as their individual goals in the project.<br />
More concretely, each partner was asked to provide answers to the following questions:<br />
• Application goals: what we expect the Ontologging application to perform.<br />
• End-user goals: what our end-users (INDRA) expect to gain by using the<br />
Ontologging application.<br />
• Intangible goals: goals that are more intangible (hence more difficult/subjective<br />
to measure).<br />
• Individual partner goals: each partner may have different individual objectives.<br />
The answers were sent via email to the coordinator, who aggregated them. The result<br />
was then consolidated at a consortium meeting.<br />
The next paragraphs present the result of this process:<br />
Application goals:<br />
• User-friendly browsing capabilities.<br />
• Improved search by means of ontology capabilities.<br />
• Combination of ontology search with document keyword search.<br />
• Integration/embedding with MS Office.<br />
• Personalized interest notifications.<br />
• Integration with KM (or other) solutions.<br />
• Reusability + future exploitability.<br />
• Scalability.<br />
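One of the application goals above, the combination of ontology search with document keyword search, can be sketched as a two-stage filter: first narrow the candidate set by ontology annotation, then filter by keyword match. The document structure and field names below are hypothetical, chosen only to illustrate the idea:

```python
# Hypothetical hybrid-search sketch: each document carries both free text and
# a set of ontology concepts; a query filters on a concept first, then on a
# keyword within the remaining candidates (case-insensitive).
def hybrid_search(docs, concept, keyword):
    """Return the titles of documents annotated with `concept` whose text
    contains `keyword`."""
    keyword = keyword.lower()
    return [
        d["title"]
        for d in docs
        if concept in d["concepts"] and keyword in d["text"].lower()
    ]
```

The ontology filter prunes the candidate set semantically before the keyword filter runs, which is what gives the combined search its precision compared to keyword search alone.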
End-user goals:<br />
• Helping internal processes: reusing, classifying, searching documents, people.<br />
• Improving knowledge sharing, common understanding and the use of a common language,<br />
leading to better communication.<br />
• Enrich document definition.<br />
• Introduce semantic-based knowledge organisation and retrieval (there are no semantic<br />
tools currently at INDRA).<br />
• Help the tendering process.<br />
• Ease of use and deployment.<br />
• No interference with the current system and practices.<br />
• Customisation + flexibility.
• Mail integration + single sign-on (not for the pilot).<br />
Intangible goals:<br />
• Bringing new value into ontology management that has scientific merit.<br />
• Research on Ontology evolution-reuse-query answering.<br />
• Assess the impact of introducing ontologies in KM.<br />
• Assess the use of ontologies versus relational databases.<br />
• Ability to capture the complexity of corporate processes and knowledge.<br />
Individual goals:<br />
• Implement an open source ontology management system.<br />
• Added value for KnowNet: ontology navigation, data model flexibility, more<br />
powerful search + inference, ease creation process.<br />
• Modelling different aspects of the users in a deep way.<br />
• Experimenting with new technologies.<br />
The individual partners focus:<br />
• Ontology representation, querying and evolution (FZI).<br />
• User-friendly MS Office-integrated interface (Deltatec).<br />
• To improve KnowNet by integrating ontologies (Meta4).<br />
• JADE integration, contribution to JADE (Archetypon).<br />
• Evaluating a new knowledge management approach and its impact on organizations<br />
(CALT).<br />
• To help day-to-day processes such as tendering and engineering (Indra).<br />
8.2 Annex 2: Description of INDRA (the main user group)<br />
8.2.1 An overview<br />
Indra is the leading Spanish company in Information Technologies. Indra’s activities are<br />
distributed into three lines of business: Information Technologies (80%), and Simulation and<br />
Automatic Test Systems together with Electronic Defence Equipment (20%).<br />
In the year 2000, Indra had over 5,000 employees, of whom over 76% were specialised<br />
graduates.<br />
In terms of financial performance, Indra’s year 2000 revenues totaled 676.9 million Euros.<br />
About 40% of its revenues were derived from the international marketplace (outside of<br />
Spain). During 2001, Indra exceeded its growth objectives for the financial year in terms of<br />
both revenues and profits, earning a net profit of 48 million Euros (€), which represents a 25%<br />
increase (compared to the 18% figure established as the company’s initial objective). The<br />
company also achieved growth of 28% in revenues from its Information Technologies
business area (not counting balloting projects) and 26% in its Simulation and Automatic<br />
Testing Systems (SIM/ATS) and Defense Electronics Equipment (DEE) businesses.<br />
Indra’s head office is located in Madrid. Indra has 15 other locations in Spain as well as<br />
offices in Argentina, Chile, Peru, USA, Germany, China, Portugal and Venezuela.<br />
Worldwide, Indra is present in more than 40 countries on five continents.<br />
8.2.2 Activities of Indra’s Competence Centres<br />
Competence Centres are one of the basic pillars supporting Indra’s strategy of growth and<br />
adaptation to clients’ needs. They have come about as a result of market requirements: the<br />
clients that make up the market Indra serves demand a quick and effective capacity to react<br />
and respond.<br />
The Competence Centres’ mission is to lead the innovation of Indra’s services and solutions.<br />
Although the number and type of the Competence Centres vary depending on the market and<br />
the clients’ needs, the following lines of business are related to knowledge management and<br />
ontology building:<br />
1. Supply chain management (SCM): e-business services aimed at both private individuals<br />
(B2C, G2C, etc.) and companies (B2B).<br />
2. Customer relationship management (CRM): Business Intelligence systems and Contact<br />
Centres.<br />
3. Network and systems management: Internet/Intranet infrastructure (including access<br />
from mobile phones and network security) and knowledge management systems.<br />
The commercial operations which address customer demands and project execution are the<br />
two major activities carried out in Competence Centres. Thus, tight coordination is needed<br />
with other organisational units involved in the marketing, sale, and production of information<br />
systems.<br />
The following diagram depicts the relationships that exist between the Competence Centres<br />
and the Production Centres. The vertical axis depicts the sequential activities and the<br />
horizontal axis shows the participating units.
[Diagram omitted: sequential activities (user requirements, identification for marketing,<br />
commercial proposal, technical solution, execution plan, organization, installation &<br />
development, installation) mapped against the participating units (line of business,<br />
Competence Centres, Production Centres).]<br />
At the conclusion of these activities comes “installation.” Installation may occur at an Indra<br />
customer’s site, at Indra itself, or both.<br />
In the Competence Centres, the key work practices that are particularly important to focus on<br />
are:<br />
• Identify knowledge and skill gaps and manage the development of human resources<br />
competencies.<br />
• Provide expert, trained and skilled consultants and technicians to meet service<br />
requirements.<br />
• Plan and implement learning and training activities.<br />
• Select, test or prototype and certify new technology resources and providers as needed.<br />
• Provide and deliver the service to specific customers ensuring the maximum quality of<br />
service.<br />
The generic role of and the guiding principles of the Competence Centres are:<br />
- Provide engineering and commercial support to the remainder of Indra.<br />
- Set up a knowledge-sharing environment within Indra.<br />
- Mobilise the best practices in technology across the organisation.<br />
- Generate reusable components for problem resolution.<br />
- Enable Indra to maximise performance and innovation.
Evaluation report of the use of Onto-Logging<br />
platform in the user site<br />
Deliverable ID: D8b<br />
Page : 79 of 110<br />
Version: 1.0<br />
Date: 27 January 2004
Status: Final<br />
Confid.: Public<br />
- Reinforce the growth of both Indra and its people, aiming to enhance creativity and effectiveness.
In order to accomplish these key work practices and fulfil the guiding principles, the<br />
Competence Centres employ technologies that provide users with access to structured and<br />
unstructured information. Technologies also facilitate the ability to identify users with<br />
relevant skills and expertise.<br />
Here is a brief explanation of the Competence Centres’ tools.<br />
♦ Enterprise portal (Indr@web): integrates and manages a wide variety of corporate<br />
information channels and services.<br />
♦ Knowledge and information repository infrastructure: databases and electronic document management systems
♦ Knowledge maps and guides to available knowledge resources: thesaurus, taxonomy and<br />
ontology generation<br />
♦ Search and delivery information services to access analysis and strategy external sources<br />
of information<br />
♦ Workflow services for corporate process automation in the area of human resource<br />
management<br />
♦ Collaboration services<br />
♦ Directory, calendar, agenda services<br />
♦ Threaded discussions (forums)<br />
♦ Asynchronous communication (e-mail)
♦ Real-time communication (chat, net-meeting, conferencing, audio, video).<br />
Scheduled, not implemented yet.<br />
♦ Shared spaces (document sharing, white-boarding). Scheduled, not<br />
implemented yet.<br />
♦ E-learning and human resource management portal (Employee Application Portal). Prototyped, but not implemented yet.
For 2002, the Competence Centres are focused on these goals:<br />
• Increase the commercial contacts with clients.<br />
• Increase our offering volume in products and services.<br />
• Serve as a reference for the remaining business lines.<br />
• Provide counselling on project problems and improvement opportunities.
• Reach the maturity phase in the emergent KM line at Indra.
8.2.3 The tendering process at INDRA<br />
The tendering process at INDRA aims to select a supplier and a proposal for the considered<br />
services and systems, and to agree with the chosen supplier on a contract defining the<br />
deliveries and responsibilities of both parties.<br />
The tendering process includes the following sub-processes:<br />
• preparation of request for proposal,
• response preparation,
• supplier selection,
• contract preparation.
These sub-processes do not need to be executed in the sequence of Figure 2: they may<br />
overlap in time, and there may be iterations of groups of these sub-processes.<br />
[Figure: Acquisition Initiation is followed by one or more procurements (Procurement 1, Procurement 2, …, Procurement n), each comprising Tendering (preparation of request for proposal, response preparation, supplier selection, contract preparation), Contract monitoring (decision point execution), and Contract completion (administrative completion).]
Figure 2: The acquisition process model
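Assuming each procurement instance follows the same phase structure shown in Figure 2, the model can be sketched as a small data structure. This is a purely illustrative sketch (the phase names come from the figure; the code itself is not part of the Onto-Logging platform):

```python
# Illustrative sketch of the acquisition process model in Figure 2.
# Each procurement runs the same phases; sub-processes within
# "Tendering" may overlap in time or be iterated.

PHASES = {
    "Acquisition Initiation": [],
    "Tendering": [
        "Preparation of request for proposal",
        "Response preparation",
        "Supplier selection",
        "Contract preparation",
    ],
    "Contract monitoring": ["Decision point execution"],
    "Contract completion": ["Administrative completion"],
}

def procurement(n: int) -> dict:
    """Instantiate the shared phase model for procurement n."""
    return {"name": f"Procurement {n}", "phases": PHASES}

# Procurement 1, 2, ... n all share the same phase structure.
processes = [procurement(i) for i in (1, 2, 3)]
```

Sharing one `PHASES` table between all procurements mirrors the figure, where the same column of activities is repeated for Procurement 1 through n.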
See (INDRA, 2002) for a detailed description of the tendering process at INDRA.
8.2.4 References<br />
Albert A. Angehrn and Larry Todd Wilson (2002), ‘Knowledge Management & Ontology<br />
Building at INDRA’, <strong>INSEAD</strong> Case 2002.<br />
INDRA (2002). ‘Indra Case Study: Using Ontologies in Tendering Process’, INDRA internal<br />
document.<br />
8.3 Annex 3: Ten Usability Heuristics (Nielsen J. and Molich R.)
Visibility of system status<br />
The system should always keep users informed about what is going on, through<br />
appropriate feedback within reasonable time.<br />
Match between system and the real world<br />
The system should speak the users' language, with words, phrases and concepts<br />
familiar to the user, rather than system-oriented terms. Follow real-world conventions,<br />
making information appear in a natural and logical order.<br />
User control and freedom<br />
Users often choose system functions by mistake and will need a clearly marked<br />
"emergency exit" to leave the unwanted state without having to go through an<br />
extended dialogue. Support undo and redo.<br />
Consistency and standards<br />
Users should not have to wonder whether different words, situations, or actions mean<br />
the same thing. Follow platform conventions.<br />
Error prevention<br />
Even better than good error messages is a careful design which prevents a problem<br />
from occurring in the first place.<br />
Recognition rather than recall<br />
Make objects, actions, and options visible. The user should not have to remember<br />
information from one part of the dialogue to another. Instructions for use of the system should be visible or easily retrievable whenever appropriate.
Flexibility and efficiency of use<br />
Accelerators -- unseen by the novice user -- may often speed up the interaction for the<br />
expert user such that the system can cater to both inexperienced and experienced<br />
users. Allow users to tailor frequent actions.<br />
Aesthetic and minimalist design<br />
Dialogues should not contain information which is irrelevant or rarely needed. Every<br />
extra unit of information in a dialogue competes with the relevant units of information<br />
and diminishes their relative visibility.<br />
Help users recognize, diagnose, and recover from errors
Error messages should be expressed in plain language (no codes), precisely indicate<br />
the problem, and constructively suggest a solution.<br />
Help and documentation<br />
Even though it is better if the system can be used without documentation, it may be<br />
necessary to provide help and documentation. Any such information should be easy to<br />
search, focused on the user's task, list concrete steps to be carried out, and not be too<br />
large.<br />
8.4 Annex 4: Glossary of terms<br />
Flexibility: the multiplicity of ways the user and system exchange information.
Learnability: the ease with which new users can begin effective interaction and achieve maximal performance.
Robustness: the level of support provided to the user in determining successful achievement and assessment of goals.
Usability: a measure of the ease with which a system can be learned or used, its safety, effectiveness and efficiency, and the attitude of its user towards it.
Evaluation method: a procedure for collecting relevant data about the operation and usability of a computer system.
Formative evaluation: an evaluation that takes place before actual implementation and which influences the development of the product.
Summative evaluation: an evaluation that takes place after implementation and has the aim of testing the proper functioning of a product.
8.5 Annex 5: Pre-questionnaire for the participants in the Ontologging usability test
The following pre-questionnaire was distributed to a group of users at Indra in October 2003.
Title: Pre-questionnaire for the participants in the Ontologging usability test
This pre-questionnaire is intended to get to know the end-users/testers of the Ontologging system, and their real needs, better. It serves the evaluation of the Ontologging system as a first phase of the evaluation. The evaluation of the Ontologging system might include a comparison of a classical approach (current KM tools) with an ontology-based approach.
Name (optional)…………………………………<br />
Function…………………………………………<br />
Gender…………………………………………..<br />
Years of experience………………………………<br />
What are your main job tasks and responsibilities at INDRA? (e.g. programming, managing, consulting, marketing, sales, etc.)
Can you indicate some of your knowledge-oriented activities? (e.g. customer relationship management, work on various projects, writing tenders, writing reports, etc.)
Can you please indicate briefly what KM means for you?<br />
Have you worked with other KM tools before? Which ones? (KM tools can be a Knowledge Management System (KMS) or various separate tools such as intranet portals, groupware tools, discussion forums, databases, search engines, or e-mail systems.)
Are you using KM tools in your daily work processes?
(If yes, give 2 or 3 concrete examples.)
What is the main purpose of using these KM tools for you? (Or: what is the most important functionality of this system for your daily tasks?)
Does the KM tool in use improve your work? If yes, why and what is the main advantage
of using your current KM tool in your everyday work tasks?
From your perspective, do the KM tools you use require further improvements? Could they be designed better? What improvements would you suggest? (e.g. quality of content, organization of content, missing functionality, collaborative tools, filtering of irrelevant data, notifications, etc.)
How often have you used the Ontologging system?<br />
(never, once, more than once, several times)<br />
For any further clarifications please contact: Liana.Razmerita@insead.edu
8.6 Annex 6: Questionnaire – Ontology-based approach for the structuring of knowledge
Title: Ontology-based approach for the structuring of knowledge<br />
• What were the major problems/limitations of the initial domain ontologies?
Amongst the possible reasons are:
1. The ontology did not offer enough support for the given scenario (tender, development, technology).
2. The ontology was not complete.
3. The terminology was not consistent with general usage.
4. It was difficult to reach a consensus on the definition of concepts.
5. Other reasons.
• Can you describe this process of "refining" the domain ontology a little? How did you proceed?
• Can you describe the process of conceptual modelling of the domain? By conceptual modelling we understand the process of defining concepts, sub-concepts, and their relationships (adding new concepts, deleting concepts, creating new properties).
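As a purely illustrative sketch of the modelling operations this question refers to (adding concepts, sub-concepts, and properties), a minimal ontology structure might look like the following. The class below is hypothetical and is not the KAON API actually used in the project:

```python
# Minimal, hypothetical sketch of conceptual modelling operations:
# concepts, sub-concept (is-a) links, and properties.

class Ontology:
    def __init__(self):
        self.concepts = set()
        self.parents = {}     # concept -> parent concept (is-a link)
        self.properties = {}  # concept -> list of property names

    def add_concept(self, name, parent=None):
        self.concepts.add(name)
        if parent is not None:
            self.parents[name] = parent

    def add_property(self, concept, prop):
        self.properties.setdefault(concept, []).append(prop)

    def delete_concept(self, name):
        self.concepts.discard(name)
        self.parents.pop(name, None)
        self.properties.pop(name, None)

onto = Ontology()
onto.add_concept("Document")
onto.add_concept("Tender", parent="Document")  # sub-concept of Document
onto.add_property("Tender", "customer")
```

The concept names ("Document", "Tender") are examples chosen to fit the tendering scenario, not terms taken from the actual domain ontology.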
• What is the current complexity of the domain ontology? Number of concepts and<br />
subconcepts? Number of relations? Number of instances?<br />
• What are the difficulties and what are the advantages of using ontologies?<br />
• Is an ontology-based information management meta-model useful for better structuring the domain model? Why? (e.g. the ontology offers good support for the business process in general)
Could you tell us your views on the use of ontology to support the given scenario?<br />
Do you believe it is:<br />
1. Strongly beneficial<br />
2. Beneficial<br />
3. Not very beneficial<br />
4. Not at all beneficial<br />
5. Not sure
Ontology-based tools:
• What are the limitations and advantages of using the KAON ontology editor for building the domain ontology?
• Is any functionality missing?
• What are the features that you enjoyed the most?
• KAON Ontology editor
Rate each aspect of the KAON Ontology editor: Very poor / Poor / Adequate / Good / Very good
- Tool perspective
- Functionalities provided
- Interface design
- Ergonomics
- Friendliness
- Usability
- Help level
- Response time
- Effort to load data
- Effort required to use the tool
8.7 Annex 7: Spanish questionnaires
In order to facilitate the evaluation of the domain ontology, the following questionnaire was prepared. Since the users are Spanish speakers, its first version was written in Spanish.
Opinion Questionnaire on Characteristics for Selecting Ontologies
This annex presents a questionnaire with questions on the importance of ontology characteristics.
Map of the questionnaire
C - On the content.
C1) On the matching of the concepts (classes).
C2) On the relations.
C3) On the taxonomy of the concepts.
C4) On the axioms or rules.
T - On the ontology editor.
T2) On the visualisation of the ontology terms.
T3) On the editing of the ontology terms.
T8) On ontology integration.
QUESTIONNAIRE ON CHARACTERISTICS FOR SELECTING ONTOLOGIES
Interviewee: ____________________________________________________________________
The premise is that we are building a system and looking for an existing ontology to integrate into it. What should we pay attention to? What is important? What will make us choose one ontology over the others? Please answer this questionnaire without thinking of a specific project; instead, consider the characteristics that seem important to you when selecting an ontology for any kind of system. We will also assume that a large number of reusable ontologies exist.
Rate the questionnaire on ontology characteristics by indicating how important each characteristic is to you. To do so, tick one or more boxes and give your opinion on the rating you have made. If you are not sure what a question means, we prefer that you do not answer it. If you think we have forgotten a characteristic, or that one is unnecessary, we would be grateful if you told us.
ON THE CONTENT
This section rates the importance of the match between the terms represented in the ontology and the terms the new system is expected to use; that is, how much of the domain needed by our new system the ontology covers.
In general, on the degree to which the content of the ontology matches the needs of the system:
It is enough for the ontology to cover a domain similar to the one sought. It will serve as a basis for reworking an ontology for the system.
The ontology must partly match the needs of the system. An adaptation process will have to be carried out.
The ontology must match the needs of the system to a high degree. Adaptations should be minimal.
The ontology must match the needs of the system almost completely. Only occasional adaptations should be needed.
The ontology must match what is sought completely. No adaptation should have to be made.
I am not sure.
Another idea:
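The graded "degree of match" options in this section can be made concrete as a simple coverage ratio: the fraction of terms the system needs that the ontology actually defines. This is an illustrative sketch only, not a measure prescribed by the questionnaire:

```python
# Hypothetical coverage measure for the "degree of match" options:
# the fraction of terms needed by the system that the ontology defines.

def coverage(needed_terms, ontology_terms):
    needed = set(needed_terms)
    if not needed:
        return 1.0  # nothing required, trivially covered
    return len(needed & set(ontology_terms)) / len(needed)

# Example terms (invented for illustration):
score = coverage({"tender", "customer", "proposal"},
                 {"tender", "proposal", "contract"})
# 2 of the 3 needed terms are covered
```

A score near 1.0 corresponds to the "almost complete / complete match" options, while a low score corresponds to "similar domain, to be reworked".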
ON THE MATCHING OF THE CONCEPTS (CLASSES)
This section rates the match between the concepts in the ontology and the needs of the system to be developed.
Rate each item: Not important / Not essential / Important / Very important / Essential
C1 In general, say whether it seems important to you that the concepts in the evaluated ontology match what the system needs.
C11 The concepts that are essential for the system are present in the ontology.
C12 The concepts that are essential for the system are found in the upper levels of the ontology.
C13 The concepts are suitably described in natural language.
C14 The formal specification of the concepts matches the natural-language description of the concepts.
C15 The attributes of the concepts describe them precisely.
C16 The number of concepts represented in the ontology.
Comment on the rating:
ON THE RELATIONS
This section rates the match between the relations between concepts in the ontology and the needs of the system to be developed.
Rate each item: Not important / Not essential / Important / Very important / Essential
C2 In general, say whether it seems important to you that the relations defined in the ontology match what the system needs.
C21 The relations that are essential for the system are defined in the ontology.
C22 The concepts of the ontology are related exactly as the system needs them to be.
C23 The formal specification of the relations matches the natural-language description of the relations.
C25 The relations have the formal properties needed by the system specified (reflexivity, irreflexivity, symmetry, asymmetry, antisymmetry, transitivity, intransitivity).
C26 The number of relations defined in the ontology.
Comment on the rating.
ON THE TAXONOMY OF THE CONCEPTS
On how the concepts are classified in the ontology and how this matches the organisation the system needs.
Rate each item: Not important / Not essential / Important / Very important / Essential
C3 In general, say whether the match between the concept taxonomies and those the system needs seems important to you.
C31 The concepts of the ontology are classified from several perspectives.
C32 Relations of the type "Not-Subclass-Of" needed by the system are specified.
C35 The maximum depth reached by the concept hierarchy.
C36 The average number of subclasses that the classes have.
Comment on the rating:
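Items C35 and C36 refer to simple structural metrics of the taxonomy. As a hypothetical illustration (example data and code, not part of the questionnaire), both can be computed from the direct-subclass relation:

```python
# Illustrative computation of the two taxonomy metrics asked about:
# C35: maximum depth of the concept hierarchy;
# C36: average number of subclasses per class (among classes listed
#      as having subclasses).

subclasses = {            # class -> its direct subclasses (example data)
    "Thing": ["Document", "Person"],
    "Document": ["Tender", "Report"],
    "Person": ["Consultant"],
}

def max_depth(cls, tree):
    """Depth of the hierarchy rooted at cls, counting cls itself."""
    children = tree.get(cls, [])
    if not children:
        return 1
    return 1 + max(max_depth(c, tree) for c in children)

def avg_subclasses(tree):
    return sum(len(v) for v in tree.values()) / len(tree)

depth = max_depth("Thing", subclasses)  # Thing -> Document -> Tender
mean = avg_subclasses(subclasses)       # (2 + 2 + 1) / 3
```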
ON THE AXIOMS
Axioms in the ontology may be needed in the new system to make deductions, maintain consistency, or restrict the values of instances.
Rate each item: Not important / Not essential / Important / Very important / Essential
C4 In general, say whether it seems important to you that axioms or rules exist.
C41 The axioms can be used to make deductions, e.g. to resolve queries.
C42 Axioms exist to complete the values of the attributes of concepts.
C43 Axioms exist that are used to verify the consistency of the terms in the ontology.
C44 Axioms exist that are defined as independent elements (not linked to concepts).
C45 The number of axioms defined in the ontology.
Comment on the rating:
ON THE VISUALISATION OF THE ONTOLOGY TERMS
On how ontology-visualisation software applications display the terms and their content.
Rate each item: Not important / Not essential / Important / Very important / Essential
T2 In general, say whether the way the editor displays the content of the terms seems important to you.
T21 They allow all the information specified in the ontology to be visualised.
T22 They allow the information to be viewed at the desired level of detail.
T23 They allow the taxonomy of the concepts to be viewed in some way.
T24 They allow graphical mode to be used to display ad-hoc relations.
Comment on the rating
ON THE EDITING OF THE ONTOLOGY TERMS
If the terms of the ontology have to be modified to adapt it to the new system, certain characteristics of the applications that will help carry out these tasks must be assessed. Bear in mind that the environments may be limited by the expressive capabilities of the language underlying the ontology.
Rate each item: Not important / Not essential / Important / Very important / Essential
T3 In general, say whether the way the editor allows the terms to be modified seems important to you.
T31 With the environment we can implement everything that we can implement directly with the implementation language.
T32 It allows the terms to be modified at any time; that is, we can always edit the ontology and add new terms, or modify and delete terms, their properties, their relations, etc.
T33 It allows the taxonomic relations to be modified graphically.
T34 It allows ad-hoc relations to be worked with graphically.
Comment on the rating:
8.8 Annex 8 User modelling tools and knowledge distribution agents questionnaires<br />
GOAL
Evaluation of the user modelling processes and of the knowledge distribution agents:
• the evaluation of the user modelling tools;
• the evaluation of the knowledge distribution agents.
Hypotheses/questions to test:
• employees' views on sharing personal information and on user modelling processes;
• the perceived need for personalisation of KM tools and for the use of knowledge distribution agents.
Name (optional)…………………………………<br />
Function…………………………………………<br />
Gender…………………………………………..<br />
Years of experience………………………………<br />
Methodology: questionnaire<br />
After the end-users fill in user profiles using the User Profile Editor (UPE), they are asked to answer a<br />
set of questions:<br />
User Profile Editor
Rate each aspect: Very poor / Poor / Adequate / Good / Very good
- Tool perspective
- Functionalities provided
- Interface design
- Ergonomics
- Friendliness
- Usability
- Help level
- Response time
- Effort to load data
- Effort required to use the tool
- Functionality of the system
What are the main shortcomings of the UPE?
• Was the terminology (the concepts) used for describing user profiles clear? Is any characteristic of the user missing? Was the user profile too detailed?
• What did you like the most in the UPE?<br />
• What enhancements would you suggest?
• Are you happy to disclose personal data in exchange for personalisation, communication, and collaboration facilities? (e.g. being able to look easily for experts in different areas)
• Would you prefer to enter your data explicitly using the user profile editor, or to be modelled implicitly through user modelling techniques (based on your activity in the system, e.g. your queries or your browsing behaviour)?
• Through user modelling techniques the system can infer your "behaviour" in the system, e.g. your type of activity (reader, writer, lurker) and your level of activity (active, inactive, passive). These inferred characteristics would enable the organisation to provide incentives for you to share your knowledge and to be more active in the system (e.g. to provide answers to problems in the discussion forum).
What would be the appropriate incentives for you to be active in the system and to share your knowledge more (recognition and promotion versus other types of incentives, e.g. bonuses, money, virtual rewards)?
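The behaviour inference mentioned above can be sketched with simple thresholds on activity counters. The rules and threshold values below are hypothetical illustrations; Onto-Logging's actual user modelling component may use different criteria:

```python
# Hypothetical sketch of inferring type and level of activity
# (reader / writer / lurker; active / passive / inactive) from
# simple usage counters. Thresholds are arbitrary example values.

def activity_type(reads: int, writes: int) -> str:
    if writes > 0:
        return "writer"
    if reads >= 20:          # heavy reader who never contributes
        return "lurker"
    return "reader"

def activity_level(actions_per_week: float) -> str:
    if actions_per_week >= 10:
        return "active"
    if actions_per_week > 0:
        return "passive"
    return "inactive"
```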
Privacy issues<br />
• Are you concerned with any privacy issues related to your personal data being available in the<br />
system? If yes, what particular aspects are you concerned with?<br />
• Do you think that a “privacy policy” related to the user’s data would be required in Indra?<br />
Future enhancements of KM tools/KMSs and personalisation issues
How is the agent support for the distribution of knowledge perceived?
For example, suppose you are interested in Knowledge Management.<br />
Scenario of notification agents:<br />
* Raphael Smith from team unit X has started a new project in the area of Knowledge<br />
Management;<br />
* A new document related to optimizing the tendering process has been authored by Fernando<br />
Salinas;
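At its core, the scenario above is a matter of matching new events against a user's declared interests. The following is a hypothetical sketch of that matching (the names come from the scenario; this is not the Onto-Logging agent implementation):

```python
# Hypothetical interest-matching behind the notification scenario above.

interests = {"Knowledge Management"}  # the user's declared interest

events = [
    {"text": "Raphael Smith from team unit X has started a new project "
             "in the area of Knowledge Management",
     "topic": "Knowledge Management"},
    {"text": "A new document related to optimizing the tendering process "
             "has been authored by Fernando Salinas",
     "topic": "Tendering"},
]

# Notify the user only about events whose topic matches an interest.
notifications = [e["text"] for e in events if e["topic"] in interests]
```

In practice the match would go through the ontology rather than exact topic strings, but the filtering principle is the same.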
Would these notifications be an added value in a KMS for you? Why?<br />
What type of notification of news (e.g. new documents available in your domain of interest, new<br />
events at Indra) would you prefer?<br />
1. character-based notification<br />
2. e-mail notification<br />
3. pop-up windows<br />
• What kind of adaptations would you prefer to be included in a next generation of Knowledge Management tools? (Check the features that you would like to be available; put a question mark if one is unclear to you.)
Adaptation of content of the KMS system:<br />
Filtering of content according to my interests and expertise;<br />
Optional detailed information and automatic summarization of documents<br />
Notification agents similar to the previous scenario
Adaptation of the modality<br />
Different types of layouts,<br />
Different skins;<br />
Different colors;<br />
Adaptation of structure<br />
Personalized view of the KM system and adapted functionality according to my own<br />
work tasks and needs;<br />
Personalized view of the KM system and adapted functionality according to the<br />
specificity of the team I am working in;
Other suggestions:<br />
Which type of adaptations amongst the previous three is the most important for you? If you<br />
have other ideas suggest other types of personalization which KMSs could include:<br />
8.9 Comments to: Liana.Razmerita@insead.edu
Annex 9 Ontologging project questionnaire<br />
Ontologging Project Questionnaire
This questionnaire is divided into 5 sections:
A. Ontologging System – DUI, Usage Perspective ... 102
B. Ontologging System – DUI, Tool Perspective ... 104
C. Ontologging System – DUI, Ontology Perspective ... 105
D. Detailed KM Process View ... 106
E. Ontologging System – Future Perspective ... 110
Glossary<br />
Knowledge capitalisation – includes processes such as submitting valuable knowledge assets to<br />
the system (e.g. writing a tender and making it available in a database or KMS, or submitting other<br />
knowledge assets which you have not necessarily written). Adding annotations is part of the<br />
knowledge capitalisation process.<br />
Knowledge sharing – the process of knowledge sharing can be facilitated by knowledge<br />
management tools (shared folders, collaborative tools, discussion forums) but also by less<br />
formal means such as coffee-break conversations with peers.<br />
Knowledge retrieving – is facilitated by the querying/searching and browsing mechanisms included in<br />
KM tools, but also by a good conceptualization of the domain knowledge.
Name (optional)…………………………………<br />
Function…………………………………………<br />
Gender…………………………………………..<br />
Years of experience………………………………<br />
A. Ontologging System – DUI, Usage Perspective<br />
Describe three contexts/scenarios in which you used the DUI.<br />
Scenario 1:<br />
Scenario 2:<br />
Scenario 3:<br />
What have you enjoyed the most in your experience using the DUI?<br />
What have you disliked in the DUI?<br />
1 The functionalities provided by the DUI were useful<br />
2 The functionalities provided by the DUI were complete<br />
3 The functionalities provided by the DUI fulfilled my expectations<br />
4 The DUI supports the knowledge capitalisation process very well<br />
5 In general, the functionalities support navigation processes very well<br />
6 The DUI supports knowledge querying (searching) very well<br />
7 The DUI supports the knowledge sharing process very well<br />
Rating scale: Strongly disagree / Disagree / Slightly disagree / Slightly agree / Agree / Strongly agree<br />
B. Ontologging System – DUI Tool Perspective<br />
Ontologging system – DUI tool perspective<br />
1 Functionalities provided<br />
2 Interface design<br />
3 Ergonomics<br />
4 Friendliness<br />
5 Usability<br />
6 Help level<br />
7 Response time<br />
8 Effort to load data<br />
9 Effort required to use the tool<br />
Rating scale: Very poor / Poor / Adequate / Good / Very good<br />
What have you perceived as the main strengths and weaknesses of the different tools?<br />
Knowledge capitalisation<br />
Strengths<br />
Knowledge capitalisation<br />
Weaknesses<br />
Knowledge searching<br />
Strengths<br />
Knowledge searching<br />
Weaknesses<br />
Knowledge sharing<br />
Strengths<br />
Knowledge sharing<br />
Weaknesses
C. Ontologging System – DUI Ontology Perspective<br />
Have you noticed any advantage in using an ontology approach for defining the domain<br />
knowledge?<br />
DUI – an ontology perspective<br />
1 The ontology offers good support for the given scenario (tender, development, technology)<br />
2 The ontology offers good support for the business process in general<br />
4 The ontology is complete (concepts, relations and instances)<br />
5 The terminology is consistent with general usage<br />
Rating scale: Strongly disagree / Disagree / Slightly disagree / Slightly agree / Agree / Strongly agree<br />
Have you annotated documents using the DUI? From your perspective, are these annotations<br />
useful?
D. Detailed KM Process View<br />
Capitalising knowledge<br />
1 Report a concrete and personal experience<br />
of knowledge capitalization in your<br />
organization.<br />
What knowledge did you intend to<br />
store/capture? Why? How?<br />
2 Assessment of the effort to load<br />
documents<br />
How many documents have you submitted<br />
in the system?<br />
How long did it take you?<br />
3 What were the difficulties and limitations<br />
you encountered?<br />
Was it simple? Was it time consuming?<br />
Quality of the result?<br />
Were you able to capture what you<br />
wanted?<br />
4 What is your general conclusion about this<br />
experience?<br />
Was it positive? Would you do it again? If<br />
you had to do it again, what would you<br />
change?<br />
Additional comments<br />
Retrieving knowledge<br />
1 Report a concrete and personal experience<br />
of searching for knowledge that has been<br />
previously stored in a database, file system<br />
or Ontology-based system.<br />
What kind of knowledge were you looking<br />
for? What was the context?<br />
2 How did you proceed?<br />
Columns: Using the DUI system / Using traditional KM tools (DB or file system)
Keyword search, browsing, or navigating?<br />
3 Effort spent on the search and<br />
effectiveness of the result.<br />
Was the process of searching satisfactory (all right or too long, amount of noise)?<br />
What were the main limitations?<br />
4 What is your general conclusion about this<br />
experience?<br />
Was it positive? Would you do it again?<br />
What would you suggest as improvement?<br />
Additional comments<br />
Sharing knowledge<br />
What is the most important way to share knowledge for you?<br />
1. Through shared document repositories (databases, file systems, ontologies)<br />
2. Through informal ways to share knowledge (via your social network, via email,<br />
coffee-room)<br />
3. Through structured processes (for instance, your department organizes regular<br />
meetings)<br />
Efficiency of the process<br />
In your opinion, are these knowledge sharing processes adequate?<br />
What is working well or not working well in these knowledge-sharing processes? What are the actual<br />
barriers?
General conclusion about knowledge sharing as it is now<br />
Does it work well? What are the really important knowledge sharing processes or the ones that do not<br />
work? How do you think they could be improved?<br />
What would be the appropriate incentives for you to be active in a KM system and to share your<br />
knowledge (better recognition or promotion versus other types of incentives, e.g. bonus, money,<br />
virtual rewards)?<br />
Can you assess the value of using an ontology for group or inter-group communication?<br />
An ontology defines a shared vocabulary. Amongst the possible reasons are:<br />
1. It enables modelling the knowledge more deeply<br />
2. It enables connecting knowledge with people, etc.<br />
3. It helps people gain visibility, as it connects knowledge assets with the people who<br />
authored or commented on them.<br />
Other reasons:
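To make the "connecting knowledge with people" idea concrete, the toy sketch below models an ontology as plain Python dictionaries: two concepts (Person, Document), a few invented instances, and explicit relations linking documents to the people who authored or commented on them. All names (people, documents, relation labels, the `experts_on` helper) are illustrative assumptions, not part of the Onto-Logging system.<br />

```python
# Toy ontology sketch (illustrative only, not the Onto-Logging data model).
# Instances of two concepts: Person and Document.
people = {"alice": {"type": "Person"}, "bob": {"type": "Person"}}
documents = {
    "tender_2003": {"type": "Document", "topic": "tender"},
    "tech_note_7": {"type": "Document", "topic": "technology"},
}

# Explicit relations connect knowledge assets with people.
relations = [
    ("tender_2003", "authoredBy", "alice"),
    ("tender_2003", "commentedBy", "bob"),
    ("tech_note_7", "authoredBy", "bob"),
]

def experts_on(topic):
    """Return the people linked to documents on a given topic."""
    return sorted({
        person
        for doc, _rel, person in relations
        if documents[doc]["topic"] == topic
    })

print(experts_on("tender"))  # ['alice', 'bob']
```

Because the relations are explicit, a single traversal answers "who knows about tenders?", which is exactly the kind of people-to-knowledge connection the questionnaire asks respondents to assess.<br />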
E. Ontologging System – Future Perspective<br />
What would be your recommendations for further development of the system?<br />
Could you see yourself using a similar system in three years' time?<br />
If you had to focus your efforts on the design of the next-generation ontology-based system, where<br />
would it be?<br />
Can you imagine other uses of ontologies to support knowledge processes?