CONFERENCE PROCEEDINGS - IPSC - Europa
CONFERENCE PROCEEDINGS<br />
EUR 24948 EN - 2011
VALgEO 2011<br />
Workshop Proceedings<br />
JRC, Ispra, 18-19 October 2011<br />
Edited by<br />
C. Corbane, D. Carrion,<br />
M. Broglia and M. Pesaresi
The mission of the JRC-IPSC is to provide research results and to support EU policy-makers in<br />
their effort towards global security and towards protection of European citizens from accidents,<br />
deliberate attacks, fraud and illegal actions against EU policies.<br />
European Commission<br />
Joint Research Centre<br />
Institute for the Protection and Security of the Citizen<br />
Contact information<br />
Address: JRC - TP 267 - Via E. Fermi, 2749 - 21027 Ispra (VA), Italy<br />
E-mail: marco.broglia@jrc.ec.europa.eu<br />
Tel.: +39 0332 785435<br />
Fax: +39 0332 785154<br />
http://ipsc.jrc.ec.europa.eu/<br />
http://www.jrc.ec.europa.eu/<br />
Legal Notice<br />
Neither the European Commission nor any person acting on behalf of the Commission<br />
is responsible for the use which might be made of this publication.<br />
Europe Direct is a service to help you find answers<br />
to your questions about the European Union<br />
Freephone number (*):<br />
00 800 6 7 8 9 10 11<br />
(*) Certain mobile telephone operators do not allow access to 00 800 numbers or these calls may be billed.<br />
A great deal of additional information on the European Union is available on the Internet.<br />
It can be accessed through the Europa server http://europa.eu/<br />
JRC66899<br />
EUR 24948 EN<br />
ISBN 978-92-79-21379-3 (print)<br />
ISSN 1018-5593 (print)<br />
ISBN 978-92-79-21380-9 (PDF)<br />
ISSN 1831-9424 (online)<br />
doi:10.2788/73045<br />
Luxembourg: Publications Office of the European Union<br />
© European Union, 2011<br />
Reproduction is authorised provided the source is acknowledged<br />
Printed in Italy
CONTENTS<br />
RATIONALE.................................................................................................................................................................................1<br />
AGENDA .....................................................................................................................................................................................3<br />
FOREWORD ..................................................................................................................................................................7<br />
SESSION I ....................................................................................................................................................................................9<br />
THE ROLE OF VALIDATION IN INFORMATION AND COMMUNICATION TECHNOLOGIES FOR CRISIS MANAGEMENT ...........9<br />
Potential applications of tracking macro-trends within Crisis Management.................................................................11<br />
The OpenStreetMap way to data creation and validation in emergency preparedness and response .........................13<br />
Impact Opportunities and Methodology Challenges: Crisis Mapping and Geoanalytics in Human Rights Research....15<br />
Web and mobile emergencies network to real-time information and geodata management. .....................................17<br />
An Integrated Quality Score for Volunteered Geographic Information on Forest Fires .................................................29<br />
SESSION II .................................................................................................................................................................................35<br />
VALIDATION OF REMOTE SENSING DERIVED EMERGENCY SUPPORT PRODUCTS ...............................................................35<br />
Definition of a reference data set to assess the quality of building information extracted automatically from VHR<br />
satellite images ...............................................................................................................................................................37<br />
On the Validation of An Automatic Roofless Building Counting Process .......................................................................47<br />
Evacuation plans: interest and limits.............................................................................................................55<br />
Outside the Matrix, a review of the interpretation of Error Matrix results....................................................................57<br />
On the complexity of validation in the security domain – experiences from the G-MOSAIC project and beyond .........69<br />
SESSION III................................................................................................................................................................................71<br />
USABILITY OF WEB BASED DISASTER MANAGEMENT PLATFORMS AND READABILITY OF CRISIS INFORMATION .............71<br />
Emergency Support System: Spatial Event Processing on Sensor Networks ..................................................................73<br />
Near-real-time monitoring of volcanic emissions using a new web-based, satellite-data-driven, reporting system:<br />
HotVolc Observing System (HVOS) .................................................................................................................................81<br />
Image interpreters and interpreted images: an eye tracking study applied to damage assessment. ...........................83<br />
Crisis maps readability: first results of an experiment using the eye-tracker ................................................................93<br />
SESSION IV ...............................................................................................................................................................................95<br />
TOWARDS ROUTINE VALIDATION AND QUALITY CONTROL OF CRISIS MAPS ......................................................................95<br />
A methodological framework for qualifying new thematic services for an implementation into SAFER emergency<br />
response and support services........................................................................................................................................97<br />
A methodology for a user oriented validation of satellite based crisis maps...............................................................105<br />
Quality policy implementation: ISO certification of a Rapid Mapping production chain.............................................107<br />
AUTHORS INDEX ....................................................................................................................................................................109
RATIONALE<br />
Over the past decade, the international community has responded to an increasing number of major<br />
natural and man-made disasters. In parallel, emergency management has become increasingly complex<br />
and specialized, owing to the necessity for various authorities and organizations to cooperate during emergencies,<br />
and to the emergence of disasters of an unexpected or unknown nature. With these growing challenges, the<br />
need for more sophisticated tools for the production, sharing and integration of geospatial information<br />
without prejudice to the usability of end-user products, has given rise to a rapid development of geo-informational<br />
technologies to assist in crisis management operations. Recent events such as the earthquake in<br />
Japan, the flooding in Australia and the crisis in the Middle East and North African region showed that Earth<br />
observation, ICT and Web-mapping technologies are now playing a vital role in crisis management efforts,<br />
especially during the preparedness and response phases.<br />
Whatever the origin of crises, their geographical context and the dimension of their impacts, there is a<br />
common need among all actors involved in crisis management for timely, relevant, usable and, most of all, reliable<br />
information. For the community concerned with validation of geo-information, this poses new challenges in<br />
terms of having access to methodologies that can address the increasing variety and amount of data, and that<br />
help to render validation closer to a routine process.<br />
Following the two successful VALgEO workshops held in 2009 and 2010, we are pleased to announce the<br />
organization of the 3rd edition of the international workshop on validation of geo-information for crisis<br />
management. The annual VALgEO workshop sets out to act as an integrative agent between the needs of<br />
practitioners in situation centers and in the field, guiding the Research and Development community, with a<br />
special focus on the quality of information.<br />
The following topics will be addressed in four main sessions:<br />
• The role of validation in Information and Communication Technologies (ICT) for crisis<br />
management<br />
• Validation of Remote Sensing derived emergency support products<br />
• Usability of Web based disaster management platforms and readability of crisis<br />
information<br />
• Towards routine validation and quality control of crisis maps<br />
1
Workshop chair<br />
Martino Pesaresi, Joint Research Centre, Italy<br />
martino.pesaresi@jrc.ec.europa.eu<br />
Organizing committee<br />
Christina Corbane, Joint Research Centre, Italy<br />
Daniela Carrion, Joint Research Centre, Italy<br />
Marco Broglia, Joint Research Centre, Italy<br />
Barbara Secreto, Joint Research Centre, Italy<br />
christina.corban@jrc.ec.europa.eu<br />
daniela.carrion@jrc.ec.europa.eu<br />
marco.broglia@jrc.ec.europa.eu<br />
barbara.secreto@ec.europa.eu<br />
Scientific committee<br />
Michael Judex, German Federal Office of Civil Protection, Germany<br />
Daniel Stauffacher, ICT4Peace Foundation, Switzerland<br />
Peter Zeil, University of Salzburg, Austria<br />
Tom De Groeve, Joint Research Centre, Italy<br />
Marco Broglia, Joint Research Centre, Italy<br />
Daniela Carrion, Joint Research Centre, Italy<br />
Christina Corbane, Joint Research Centre, Italy<br />
michael.judex@bbk.bund.de<br />
daniel_stauffacher@ict4peace.org<br />
peter.zeil@sbg.ac.at<br />
tom.de-groeve@jrc.ec.europa.eu<br />
marco.broglia@jrc.ec.europa.eu<br />
daniela.carrion@jrc.ec.europa.eu<br />
christina.corban@jrc.ec.europa.eu<br />
2
AGENDA<br />
TUESDAY, October 18th, 2011<br />
9:00 Workshop Opening and Welcome Address<br />
Delilah Al Khudhairy – Joint Research Centre (Head, Global Security and Crisis Management Unit)<br />
Martino Pesaresi - Joint Research Centre (ISFEREA Action Leader)<br />
9:40 Torsten Redlinger - European Commission (DG ENTR, GMES Bureau)<br />
SESSION I– THE ROLE OF VALIDATION IN INFORMATION AND COMMUNICATION TECHNOLOGIES (ICT) FOR<br />
CRISIS MANAGEMENT<br />
Chair: Daniel Stauffacher, ICT4Peace Foundation, Geneva, Switzerland<br />
10:00 Invited Speaker: Daniel Stauffacher, ICT4Peace Foundation<br />
10:20 Potential applications of tracking macro-trends within Crisis Management<br />
Invited Speaker: Douglas Hubbard. Hubbard Decision Research, United States<br />
10:40 The OpenStreetMap way to data creation and validation in emergency preparedness and response<br />
Invited Speaker: Nicolas Chavent, Open Street Map, France<br />
11:00 Coffee Break<br />
11:10 Impact Opportunities and Methodology Challenges: Crisis Mapping and Geoanalytics in Human<br />
Rights Research<br />
Scott Edwards & Koettl C. – George Washington University/Amnesty International<br />
11:30 Web and mobile emergencies network to real-time information and geodata management<br />
Elena Rapisardi 1-3, Lanfranco M. 2-3, Dilolli A. 4 & Lombardo D. 4<br />
1 Openresilience<br />
2 Doctoral School in Strategic Sciences, SUISS, University of Turin<br />
3 NatRisk, Interdepartmental Centre for Natural Risks, University of Turin<br />
4 Vigili del Fuoco, Comando Provinciale di Torino<br />
11:50 An Integrated Quality Score for Volunteered Geographic Information on Forest Fires<br />
Frank Ostermann & Spinsanti L.<br />
European Commission, Joint Research Centre, SDI Unit, IES<br />
12:10 Lunch<br />
3
SESSION II – VALIDATION OF REMOTE SENSING DERIVED EMERGENCY SUPPORT PRODUCTS<br />
Chair: Dirk Tiede, University of Salzburg, Austria<br />
13:40 Recent experiences on the use of remote sensing for damage assessment and its validation: 2010<br />
Pakistan flood, 2011 Tohoku Tsunami and 2011 (February) Christchurch earthquake<br />
Keiko Saito 1,6, R. Eguchi 2, G. Lemoine 3, L. Barrington 4, J. Bevington 2, S. Gill 5, A. King 6, A. Lin 4, M. Green 7,<br />
R. Spence 1 & P. Wood 8<br />
1 Cambridge Architectural Research Ltd, UK - 2 ImageCat Inc, USA; ImageCat Ltd, UK - 3 Joint Research<br />
Centre, EU - 4 Tomnod, USA - 5 Global Facility for Disaster Risk Reduction, The World Bank - 6 GNS, New<br />
Zealand - 7 EERI, USA - 8 New Zealand Society for Earthquake Engineering<br />
14:00 Definition of a reference dataset to assess the quality of building information extracted<br />
automatically from VHR satellite images<br />
Annett Wania, Kemper T., Ehrlich D., Soille P. & Gallego J.<br />
Joint Research Centre<br />
14:20 On the Validation of An Automatic Roofless Building Counting Process<br />
Lionel Gueguen, Pesaresi M. & Soille P.<br />
Joint Research Centre<br />
14:40 Evacuation plans: interest and limits<br />
Alix Roumagnac & Moreau K.<br />
PREDICT Services<br />
15:00 Outside the Matrix, a review of the interpretation of Error Matrix results<br />
Pablo Vega Ezquieta 1, Tiede D. 2, Joyanes G. 1, Gorzynska M. 1 & Ussorio A. 1<br />
1 European Union Satellite Centre - 2 Z-GIS Research, University of Salzburg<br />
15:20 On the complexity of validation in the security domain – experiences from the G-MOSAIC project<br />
and beyond<br />
Thomas Kemper, Wania A. & Blaes X.<br />
Joint Research Centre<br />
15:40 Coffee Break<br />
SESSION III– USABILITY OF WEB BASED DISASTER MANAGEMENT PLATFORMS AND READABILITY OF CRISIS<br />
INFORMATION<br />
Chair: Tom De Groeve, Joint Research Centre<br />
15:50 Emergency Support System: Spatial Event Processing on Sensor Networks<br />
Roman Szturc, Horáková B., Janiurek D. & Stankovič J.<br />
Intergraph CS<br />
16:10 Near-real-time monitoring of volcanic emissions using a new web-based, satellite-data-driven,<br />
reporting system: HotVolc Observing System (HVOS)<br />
Mathieu Gouhier 1, Labazuy P. 1, Harris A. 1, Guéhenneux Y. 1, Cacault P. 2, Rivet S. 2 & Bergès J-C. 3<br />
1 Laboratoire Magmas et Volcans, CNRS, IRD, Observatoire de Physique du Globe de Clermont-Ferrand,<br />
Université Blaise Pascal - 2 Observatoire de Physique du Globe de Clermont-Ferrand, CNRS, Université<br />
Blaise Pascal - 3 PRODIG, UMR 8586, CNRS, Université Paris 1<br />
16:30 1- Image interpreters/interpreted images, study of recognition mechanisms<br />
2- Crisis maps readability: first results of an experiment using the eye-tracker<br />
Roberta Castoldi, Carrion D., Corbane C., Broglia M. & Pesaresi, M.<br />
Joint Research Centre<br />
16:50 Closure of DAY 1<br />
20:00 Social dinner at Hotel Belvedere<br />
4
WEDNESDAY, October 19th, 2011<br />
SESSION IV–TOWARDS ROUTINE VALIDATION AND QUALITY CONTROL OF CRISIS MAPS<br />
Chair: Michael Judex, German Federal Office of Civil Protection, Germany<br />
9:00 Validation, standardisation, innovation and ability to respond to user requirements<br />
Joshua Lyons - UNITAR/UNOSAT<br />
9:20 A methodological framework for qualifying new thematic services for an implementation into<br />
SAFER emergency response and support services<br />
Hannes Römer, Zwenzner H., Gähler M. & Voigt S. - German Aerospace Center (DLR)<br />
9:40 A methodology for a user oriented validation of satellite based crisis maps<br />
Michael Judex 1, Sartori G. 2, Santini M. 3, Guzmann R. 3, Senegas O. 4 & Schmitt T. 5<br />
1 Federal Office of Civil Protection and Disaster Assistance, Germany - 2 World Food Programme, Italy -<br />
3 Dipartimento della Protezione Civile - 4 United Nations Institute for Training and Research (UNOSAT) -<br />
5 Ministère de l’intérieur, Direction de la Sécurité Civile<br />
10:00 Quality policy implementation: ISO certification of a Rapid Mapping production chain<br />
Bernard Allenbach 1, Rapp JF. 1, Fontannaz D. 2 & Chaubet JY. 3<br />
1 SERTIT - 2 CNES - 3 APAVE<br />
10:20 Introduction to the LIVE EXERCISE AT THE CRISIS ROOM<br />
10:30 Coffee break & transfer to the crisis room<br />
SESSION V- LIVE EXERCISE AT THE CRISIS ROOM<br />
Coordinator: Alessandro Annunziato, Joint Research Centre, Italy<br />
11:00 Presentation of crisis room and related tools<br />
Tom de Groeve & Galliano D.<br />
European Commission, Joint Research Centre<br />
11:20 Simulation of a case study<br />
All<br />
12:30 Lunch<br />
SESSION VI- WRAP UP SESSION<br />
14:00 Panel discussion<br />
Scientific committee of VALgEO 2011<br />
15:00 Recommendations and conclusion<br />
All<br />
5
SESSION V- LIVE EXERCISE AT THE CRISIS ROOM<br />
Coordinator: Alessandro Annunziato, Joint Research Centre, Italy<br />
11:00 Presentation of the crisis room and the related tools<br />
Tom De Groeve & Alessandro Annunziato<br />
European Commission, Joint Research Centre (JRC)<br />
11:10 Presentation of collected data & discussion of interoperable mobile applications for data collection<br />
iPhone application: Beate Stollberg (JRC)<br />
Field Reporting tool: Daniele Galliano (JRC)<br />
11:30 Presentation and demonstration of the PDNA suite<br />
Daniele Galliano (JRC)<br />
11:45 End user tailored interface application for collaboration in GIS environment - solution example<br />
Michał Krupiński (Space Research Centre Polish Academy of Sciences)<br />
Piotr Koza (Astri Polska)<br />
12:00 IQ demonstration for automatic image information extraction<br />
Lionel Gueguen & Vasileios Syrris (JRC)<br />
12:20 Questions and Answers<br />
12:30 Lunch<br />
6
FOREWORD<br />
This third edition of the international workshop on validation of geo-information for crisis<br />
management confirms that the topics we are addressing are important and that we need this sort of platform<br />
to regularly discuss the new challenges we face as a result of continuous evolution in technology, especially ICT<br />
(including space), which is impacting both the quality and quantity of information relevant to crisis<br />
management.<br />
2010 marks the start of a new era in the way ICT is being used in crisis management. I would like to begin with<br />
the Haiti earthquake in 2010 which, even though it was not a typical disaster, marked a new epoch in the way<br />
various novel and traditional ICT solutions were used in an integrated manner by the emergency response<br />
communities as well as professional organizations and voluntary initiatives. But it also confirmed that we have<br />
yet to apply lessons learned from past major disasters. The rapid advances in ICT, including space, are not<br />
necessarily facilitating the work of the international humanitarian relief, emergency rescue and post-disaster<br />
recovery/reconstruction communities. On the contrary, today we are facing an increasing deluge of<br />
information, with the risk that only a small fraction of it is relevant to, or reliable enough for, effective crisis<br />
management.<br />
Sanjana Hattotuwa (2010) captures this helplessness very well in the quotation “Where is the knowledge we<br />
have lost in information?”<br />
In Haiti alone, there were hundreds of email messages exchanged amongst the disaster response community<br />
and hundreds of information products in the form of maps being produced by various entities on a daily basis.<br />
How much rich and relevant knowledge was present in the deluge of information, which cost a significant<br />
amount of resources to produce and disseminate? Can we measure this?<br />
Furthermore, Haiti showed that in addition to the contributions of traditional communities engaged in crisis<br />
response, the citizens of impacted countries were also making potentially important contributions, thereby<br />
shifting the balance we have become familiar with: from impacted communities that are at the receiving end of<br />
assistance and information, to impacted communities that are empowered and are becoming<br />
increasingly responsible for themselves through actively engaging in the crisis management process. It is only a<br />
matter of time before the type of community engagement we saw in Haiti becomes familiar as opposed to<br />
exceptional.<br />
The crowd sourcing/social media and collaborative analysis and mapping technologies we saw being used in<br />
Haiti at different levels mark the beginning of a new era in crisis management: the era of “citizen or<br />
community crisis management”.<br />
Time will tell if this new marriage between technology and the community will have a sustainable future. Some<br />
experts reckon it will take a decade. By then, we could expect to live in a world in which citizens have become<br />
important sources of local and regional human observations, able to meet information needs of the<br />
disaster response communities that are not readily met by increasingly capable remotely sensed data,<br />
including space-based and airborne. Equally important, we expect that citizens will likewise become increasingly<br />
responsible for guiding their recovery and reconstruction. I agree with the predictions of these experts.<br />
Moreover, I think there is a future for technology and communities together in crisis management. In a<br />
decade, children born between the 1990s and 2000s will be between their late teens and late 20s. These<br />
young adults will have grown up in a world where the digital camera and the internet are things that have<br />
7
always been there. So, they will be equipped to contribute and participate in a future, where they will be able<br />
to contribute directly to help themselves in the event of a crisis. This is something we, as an older generation,<br />
can only begin to imagine and contemplate.<br />
Today the crisis management community is living in an increasingly complex information and technological<br />
world. Interactive and real-time platforms are edging in on static maps, but many challenges and<br />
problems will have to be overcome before the static maps with which we have become familiar in crisis<br />
management become relics of the past. We cannot afford less effective responsiveness to disasters,<br />
with advances in technologies, and the ways they are being used, outweighing the benefits they can bring.<br />
In other words, today, we face important questions such as: Are rapid advances in ICT and new ways of using<br />
ICT helping us make improvements with regard to producing relevant and trusted information and making it<br />
available in a reliable and timely manner to the stakeholders engaged in crisis management? Not necessarily.<br />
For effective crisis management we do not need a fast growing and enormous amount of information available<br />
through a variety of media. With the increasing number of information sources and contributing actors<br />
(specialists, citizens, mapping and ICT volunteers) in crisis management, we risk even less knowledge and value<br />
at the expense of more information. Validation and trusted analysis have become more critical than before to<br />
creating value and trust in knowledge in crisis management.<br />
This is why we need a platform like VALgEO to bring us together to discuss the elements of validation that will<br />
result in value and trust in knowledge for crisis management. But in our discussions during this workshop and<br />
subsequent ones, we already have to think 10 years ahead. We have to discuss and identify the ‘validation and<br />
trust’ elements that accommodate not only the traditional geo-information products and information sources<br />
that are being used by today’s crisis management communities, but also the eventual regular use of new<br />
information sources such as the citizen and community/participatory<br />
ICT and mapping volunteers. Without agreed principles and standards related to validation and trusted<br />
analysis, we risk having even less content and trusted knowledge in the future at the expense of yet ever more<br />
tools, services and technologies, to the detriment of the disaster-affected communities and the disaster<br />
response and post-response communities.<br />
The VALgEO community which you have helped to establish at our first and second workshops and through<br />
your participation this year, can make important steps in developing recommendations for agreed principles<br />
and standards as well as other important components of validation in a new crisis management information<br />
landscape in which the information sources are extending beyond traditional remote and in-situ sources, and<br />
in which the community or the citizen will become increasingly engaged.<br />
We look forward to a vibrant and exciting workshop, and we are optimistic that we, the VALgEO Community,<br />
can make progress both at this year’s workshop and in future workshops towards achieving these goals. We<br />
are entering a very exciting period in which we have never had it so good in terms of the variety of sources and<br />
technologies which can be used to produce and disseminate crisis relevant information and knowledge. Let us<br />
now take the time to reflect and understand this landscape in order to come up with recommendations and<br />
principles that will benefit the crisis management process in the longer-term.<br />
AL-KHUDHAIRY D.<br />
Head, Global Security and Crisis Management Unit<br />
European Commission - Joint Research Centre, Institute for the Protection and Security of the Citizen (<strong>IPSC</strong>)<br />
8
SESSION I<br />
THE ROLE OF VALIDATION IN INFORMATION AND<br />
COMMUNICATION TECHNOLOGIES FOR CRISIS<br />
MANAGEMENT<br />
Chair: Daniel Stauffacher<br />
The contemporary global crisis management environment is increasingly relying on ICT<br />
as a source for critical, timely decision-making information. Effective crisis management requires<br />
not only quick decisions for an immediate response but most of all a co-ordinated reaction. In<br />
order to reach a coherent action at all levels, crisis management organizations need to rely on<br />
accurate information that must be produced, transmitted and shared with speed and precision.<br />
This places the challenges of ICT less on technical capacities than on the effective<br />
management and integration of an optimum amount of quality information.<br />
The challenge of ICT for crisis management today is in building trust in both the systems used to<br />
process the information and the people handling it. Validation and multiple checking of<br />
information flows are therefore essential to avoid the risk of having less knowledge at the<br />
expense of more information. The workshop aims to i) assess the needs for a formal validation<br />
within ICT for crisis management, ii) help in understanding the attitudes of the end-users towards<br />
these technologies and finally iii) define an agenda for research on valid methods and measures<br />
to assess the quality and accuracy of the information.<br />
9
ABSTRACT<br />
Potential applications of tracking macro-trends within Crisis<br />
Management<br />
HUBBARD D.<br />
Hubbard Decision Research<br />
dwhubbard@hubbardresearch.com<br />
Abstract:<br />
The objective of this workshop is to have a discussion exploring how crisis management might<br />
benefit from adding other macro-trend tracking to geolocation data. Douglas Hubbard, the author<br />
of Pulse: The New Science of Harnessing Internet Buzz to Track Threats and Opportunities will<br />
facilitate a discussion about how tools like Google Trends, Facebook, and Twitter might be used to<br />
track trends relevant to crisis management, including when the data does not include specific geolocation<br />
data. Methods of analyzing social networks have been developed that would not only track but<br />
forecast the transmission of disease throughout a population. Just as Twitter and Facebook helped<br />
to mobilize social unrest in the Middle East, they can also be used to forecast social upheavals<br />
before they become a humanitarian crisis. The possibility also exists for more elaborate models<br />
that use multiple data sources to actively track a macroscopic picture of certain kinds of risks.<br />
11
ABSTRACT<br />
The OpenStreetMap way to data creation and validation in<br />
emergency preparedness and response<br />
CHAVENT N.<br />
Open Street Map, France<br />
nicolas.chavent@gmail.com<br />
Abstract:<br />
This presentation will look back at past activations of the Humanitarian OpenStreetMap Team<br />
(HOT) since the Haiti earthquake of January 2010, featuring the remote and on-the-ground work of the<br />
OpenStreetMap (OSM) project in the context of emergency preparedness and emergency response,<br />
to discuss how this wiki approach to geodata management has been and is currently addressing<br />
“the challenge of ICT for crisis management today [which] is in building trust in both the systems<br />
used to process the information and the people handling it”.<br />
This discussion will feature the following elements:<br />
• The HOT/OSM approach to geodata creation in emergency preparedness and response<br />
as a source of “critical, timely decision-making information”.<br />
• The way that HOT/OSM ensures coordination with the humanitarian system throughout the<br />
emergency preparedness and response cycle, helping to make crisis management a coordinated<br />
reaction.<br />
• The typology of validation flows emerging from past operational contexts depending on<br />
the intensity of the remote mapping work, the strength of the local OSM groups, the level of<br />
coordination and interaction between OSM (remote and on the ground) and the humanitarian<br />
response system.<br />
We feel that the analysis of these use cases of OSM work in emergency preparedness and<br />
response is likely to contribute significantly to the goals of the workshop:<br />
i) assess the needs for a formal validation within ICT for crisis management,<br />
ii) help in understanding the attitudes of the end-users towards these technologies and finally<br />
iii) define an agenda for research on valid methods and measures to assess the quality and<br />
accuracy of the information.<br />
13
ABSTRACT<br />
Impact Opportunities and Methodology Challenges:<br />
Crisis Mapping and Geoanalytics in Human Rights Research<br />
EDWARDS S. 1 and KOETTL C. 2<br />
1 George Washington University/Amnesty International, USA<br />
2 Amnesty International, USA<br />
sedwards@aiusa.org<br />
Abstract<br />
Crises are inherently complex—with the intertwining of multiple interdependent causal processes<br />
and emergence of properties at differing levels of societal aggregation. This complexity is especially<br />
challenging when crises are approached from a rights-based perspective. As in disaster relief, the<br />
ability to source timely, geo-referenced information in human rights emergencies provides—at<br />
minimum—critical situational awareness for researchers in the midst of great need, and<br />
overwhelming complexity.<br />
Further—and based on a cursory evaluation of instances of web-based crowd maps—it is likely that<br />
these tools offer the ability to capture representative human rights data beyond what current legal<br />
research methodologies allow, in many contexts. By layering crowd-derived events data into GIS analytic<br />
products, human rights researchers and advocates may demonstrate the constituent elements of<br />
grave crimes, such as qualities of “widespread” or “systematic” in the case of Crimes Against<br />
Humanity. Additionally, the layering of events data into analytic tools can allow human rights<br />
researchers to offer policy recommendations with greater technical specificity, and thus with<br />
greater effect.<br />
In the context of human rights research, this paper will evaluate current opportunities and<br />
challenges as they relate to the integration of mapping and GIS research and analytic tools<br />
increasingly used in disaster response. Challenges related to the verification of events data entail—<br />
for most human rights organizations—serious risk to the credibility of reporting, and thus to policy<br />
impact. These and related challenges will be explored, as well as analytic measures that can be<br />
employed to minimize them, particularly in the context of crisis.<br />
SHORT PAPER<br />
Web and mobile emergency networks for real-time information<br />
and geodata management<br />
RAPISARDI E. 1-3 , LANFRANCO M. 2-3 , DILOLLI A. 4 and LOMBARDO D. 4<br />
1 Openresilience, http://openresilience.16012005.com/<br />
2 Doctoral School in Strategic Sciences, SUISS, University of Turin, Italy.<br />
3 NatRisk, Interdepartmental Centre for Natural Risks, University of Turin, Italy; www.natrisk.org.<br />
4 Vigili del Fuoco, Comando Provinciale di Torino, Italy.<br />
e.rapisardi@gmail.com<br />
Abstract:<br />
Major and minor disasters are part of our environment. The challenge we all have to face is to<br />
switch from relief to preparedness. Recent events, from Haiti to Japan, revealed a new scenario:<br />
web and mobile technologies can play a crucial role in managing disasters, increasing and<br />
improving the information flow between the different actors and players - citizens, civil protection<br />
bodies, local and central governments, volunteers, media. In this perspective, “the post-Gutenberg<br />
revolution” is changing our communication framework and practices. Mobile devices and advanced<br />
web data management may improve preparedness and boost crisis response in the shadow of<br />
natural and man-made disasters, and are defining new approaches and operational models. Key<br />
words are: crowdsourcing, geolocation, geomapping. A full integration of web and mobile solutions<br />
allows geopositioning and geolocalization, video and photo sharing, voice and data<br />
communications, and guarantees accessibility anytime and anywhere. This can also give a direct<br />
push to set up an effective two-sided operational system to “inform, monitor and control”. Starting<br />
from international experiences, the Open Resilience Network and the Italian Firefighters have carried<br />
out tabletop and full-scale exercises to test tools and procedures and to experiment with the use of new<br />
technologies to better manage the information flow from/to different actors. The paper will focus on the<br />
ongoing experimental work on a missing person emergency, led by the Italian Firefighters TAS team -<br />
Andrea Di Lolli and Davide Lombardo - and supported by a multi-competence team from the Open<br />
Resilience Network and the University of Turin - Elena Rapisardi and Massimo Lanfranco. The aim of<br />
the paper is to share methods and technologies used, and to show the operational results of the<br />
exercise carried out during PROTEC2011, in order to stimulate comments that will be taken into<br />
account in the further research steps.<br />
Keywords: missing person, disaster relief, crowdsourcing, geolocation, geomapping<br />
1. Introduction<br />
Disaster preparedness and relief operations have been widely studied and debated over the last 20 years.<br />
“At Risk” (Wisner et al., 1994) extends disaster consequences management to the<br />
preemptive measures linked to social vulnerability, switching from a “war against nature” (hazard reduction)<br />
to a “fight against poverty” (risk reduction); in the same year, UNDP introduced the human security concept<br />
in the Human Development Report (UNDP, 1994).<br />
Quarantelli (1998) drafted a comprehensive review of previous works, complementing the technical point of<br />
view with a sociological one that led to a full-spectrum approach to Disaster Risk Management.<br />
The 9/11 Twin Towers attack boosted and refreshed studies on disasters: the “war against terror” is a new paradigm<br />
that recalls the “fight” against natural disasters (tackling the effects rather than the root causes), and some<br />
authors (Alexander, 2001) pointed out that managing the effects of natural and anthropogenic disasters involves<br />
the same operational needs and procedures.<br />
On the other hand, the well-defined “disaster cycle” (fig. 1) has also been investigated from a sociological<br />
perspective, leading to community-based risk reduction and the resilience concept. These concepts fit well<br />
with UN efforts to move beyond simple humanitarian relief, which has become more and more costly over the last 10<br />
years.<br />
Figure 1. The disaster cycle: a lifelong task for web/mobile technologies<br />
Web access and mobile devices seem to be the key for achieving all the goals that scholars and practitioners<br />
were debating in the last 20 years at global and local levels:<br />
- Citizen engagement in preparedness, planning, relief and rebuilding;<br />
- Faster relief with improved situational awareness;<br />
- A communication strategy merging Bottom-Up and Top-Down flows (two-way data exchange);<br />
- Resilience enhancement through local storytelling.<br />
The UN Foundation report (HHI, 2011) highlights the involvement of mobile technologies during the Haiti earthquake,<br />
drawing a state-of-the-art picture.<br />
In the early 2000s, the “GeoSITLab” (GIS and Geomatics Laboratory) at the University of Turin started to<br />
enhance the application of geomatics technologies to geothematic mapping and to geological and<br />
geomorphological field activities (Giardino et al., 2004). These activities were further developed at the NatRisk<br />
Interdepartmental Centre (natural risks) and at the Strategic Sciences School (man-made risks), with different<br />
approaches related to the “natural sciences” and to the “social sciences”.<br />
In the aftermath of the Haiti earthquake, GeoSITLab developed a mobile GIS application based on ArcPad software<br />
for direct mapping and damage assessment with smartphones, and deployed it on the ground with the NGO AGIRE<br />
(Giardino et al., 2010). Data collected by NGO operators in Haiti were immediately transmitted to the Italian<br />
Operational Centre for retrofitting/rebuilding cost evaluation and donor search.<br />
OpenResilience, whose members started working in VGI with Ushahidi and the Crisis Mappers Net, offers<br />
professionals and practitioners of forest fire fighting the next step, meshing mobile technologies and web mapping<br />
2.0 (http://openforesteitaliane.crowdmap.com/).<br />
The aim of our research is to come up with ideas that link and connect governmental emergency<br />
operators and citizens (Civil Protection 2.0), both on the side of collaborative mapping (data exchange) and<br />
on that of information dissemination (http://www.slideshare.net/elenis/protec-informing-the-public).<br />
2. The Talent of the Crowd in the face of emergencies and disasters<br />
In 1455, Gutenberg's revolutionary printing system changed the institutionalized information model: it<br />
lowered production costs, increased book production, favored democratic access to knowledge,<br />
stimulated literacy and contributed to critical thinking.<br />
“For more than 150 years, modern complex democracies have depended in large measure on an industrial<br />
information economy for these basic functions. In the past decade and a half, we have begun to see a radical<br />
change in the organization of information production. Enabled by technological change, we are beginning to<br />
see a series of economic, social, and cultural adaptations that make possible a radical transformation of how<br />
we make the information environment we occupy as autonomous individuals, citizens, and members of<br />
cultural and social groups.” (Benkler, 2006).<br />
In this scenario, we are individuals with multiple, intersecting socio-cultural-economic memberships, where<br />
information could be seen as the channel of Simmel's “Intersection of Social Circles”: a sociological<br />
concept that Google+ has, in some ways, recently turned into a social medium, with a distinctive approach with<br />
respect to Facebook and Twitter.<br />
The first Web 2.0 Conference, in October 2004, can be taken as the turning point towards a new approach to<br />
information: Web 2.0 (O’Reilly, 2007) introduced a set of principles and practices that tie together a veritable<br />
solar system of sites, the first principle being: “The web as platform” [Tim O’Reilly].<br />
This stream of thoughts and actions proposes a new approach that considers collective<br />
knowledge/intelligence as superior to individual knowledge/intelligence. Web 2.0 radically changed<br />
the basis of, and the ways in which, information is created, spread and consumed. In the post-Gutenberg<br />
revolution, “with advances in technology, the gap between professionals and amateurs has narrowed, paving<br />
the way for companies to take advantage of the talent of the public.” [Darren Gilbert].<br />
Apart from the lights and shadows of the “social media” success, we can state that the post-Gutenberg<br />
revolution marks “The end of institutionalised mediation models” [Richard Stacy], with crowdsourcing as a<br />
participatory approach.<br />
#share, #collaborate, #communicate, #cooperate, #support, #include - e.g. #diversity.<br />
These are keywords that would have been appreciated by the utopian socialist models of society of the first quarter<br />
of the 19th century. In 2011 Web 2.0 has become an everyday reality, and it also has an impact on emergency and<br />
disaster response.<br />
When a disaster or an emergency occurs, it is crucial to collect and analyze volumes of data and to distil from<br />
the chaos the critical information needed to target the rescue mission most efficiently.<br />
Since the Haiti earthquake, a completely new “engagement” has taken place: “For the first time, members of the<br />
community affected by the disaster issued pleas for help using social media and widely available mobile<br />
technologies. Around the world, thousands of ordinary citizens mobilized to aggregate, translate, and plot<br />
these pleas on maps and to organize technical efforts to support the disaster response. In one case, hundreds<br />
of geospatial information systems experts used fresh satellite imagery to rebuild missing maps of Haiti and plot<br />
a picture of the changed reality on the ground. This work—done through OpenStreetMap—became an<br />
essential element of the response, providing much of the street-level mapping data that was used for logistics<br />
and camp management.” (HHI, 2011).<br />
“Without information sharing there can be no coordination. If we are not talking to each other and sharing<br />
information then we go back 30 years.” [Ramiro Galvez, UNDAC].<br />
This is a clear and undeniable effect of the post-Gutenberg revolution on emergency and crisis response,<br />
one that is leading to the creation of Volunteer and Technical Communities (VTCs) working on disaster and conflict<br />
management. This worldwide 2.0 community is enabling the establishment of technical development communities<br />
and operational processes/procedures that are refocusing risk and crisis management on “citizens as<br />
sensors” and on “preparedness”. On the other hand, the VTCs are now facing the issue of the trust<br />
and reliability of a wide information flow involving both the “crowd” and the emergency bodies.<br />
3. Italian Civil Protection system<br />
The Italian Civil Protection National Service is based on the horizontal and vertical coordination of central and local<br />
bodies (Regions, Provinces, municipalities, national and local public agencies, and any other public or private<br />
institution and organisation). One of the backbones of the Italian Civil Protection System is the civil protection<br />
volunteer organizations, whose duties and roles differ on a regional basis. The Civil Protection Volunteers are<br />
called to action during both small emergencies and major disasters. The Abruzzo earthquake, in 2009, highlighted<br />
the need for a more efficient communication flow between volunteer organizations and professionals, and<br />
for common shared protocols and tools to manage information. As a matter of fact, this “diversity” in managing<br />
information causes a sort of “friction” and weak collaboration in terms of data and information sharing.<br />
Despite the adoption of software and devices (radio), there is a low level of awareness of web 2.0<br />
usage, in line with the web 2.0 literacy of the internet population. Mobile phone and web penetration<br />
(Italy holds the European record for mobiles per capita, with 122 phones per 100 inhabitants; 70% of the population<br />
has internet access and 13% mobile internet access) and the social network “fever” can be<br />
considered driving factors to raise awareness and develop skills, so as to allow a wider adoption of web 2.0<br />
solutions and tools. Moreover, volunteer organisations have to cope with small budgets, which must<br />
cover equipment first. In this perspective, free and open tools (e.g. the Android Market, content sharing<br />
platforms) are a concrete opportunity to increase web 2.0 penetration and develop acknowledged<br />
practices for implementing web 2.0 information sharing in C3 activities (Command, Control, Communications).<br />
Fire and rescue services are provided by the Vigili del Fuoco (VVF - Fire Fighters), a national government<br />
department ruled by the Ministry of the Interior. Territorial divisions are based on provincial Fire Departments, with<br />
operational teams at the largest municipal level. The Fire Fighters are also the primary emergency response agency for<br />
HAZMAT and CBRN accidents.<br />
According to the national legal framework, fire and rescue departments have the duty to operate as first<br />
responders under a well-defined command structure providing 24-hour emergency response. Unlike law<br />
enforcement, which operates individually for most duties, fire departments operate under a highly organized<br />
team structure with the close supervision of a commanding officer. Fire departments also act at the direction<br />
of the Prefect (Ministry of Interior local coordinator) during major disasters.<br />
Full-time professional personnel staff the fire and rescue departments, but volunteers provide reinforcement at<br />
minor municipal stations.<br />
Recently, after a major procedural failure in the search for a kidnapped girl, the Fire Fighters were assigned the<br />
overall coordination of searches for missing persons.<br />
TAS Teams (Topografia Applicata al Soccorso - Topography Applied to Rescue) were set up during the L’Aquila<br />
earthquake (April 2009) to support relief operations and damage assessment through the use of GIS<br />
technology. The TAS teams coordinate Fire and Rescue teams from the Operational Room (SO115) and<br />
guide tactical activities from a mobile Incident Command Post (UCL - Unità di Comando Locale - Local<br />
Command Unit) mounted on special vans.<br />
4. The Real Time Data Management<br />
The use of digital base maps in relief operations can be considered the first step towards an innovation of the<br />
practices and procedures of the TAS teams and, in a broader sense, of relief activities as a whole. As stated<br />
in the previous paragraph, any emergency requires an information flow between different actors, physically<br />
located in different places.<br />
Starting from other experiences in the field, specifically that of the Centro Intercomunale di Protezione Civile<br />
Colline Marittime e Bassa Val di Cecina [COI] 1 , a joint research group [the authors of this article] was set<br />
up to test and experiment with open and free web solutions, in order to guarantee sharing and collaboration on<br />
geographical data. Despite the lack of budget, the choice to use easy and common tools and web solutions<br />
available for free on the internet, although used in other scenarios and for different purposes, made it<br />
possible to start trials. The concrete experiences of the wider VTC community played a fundamental role,<br />
allowing the team to avoid starting from scratch.<br />
After some testing, the team focused the testing phase on two different tools: Ushahidi (to ensure the<br />
participation of the citizens - crowdsourcing) and Google Maps (see also Google Crisis response).<br />
On the 27th of June, in the town of Carignano (TO), for the first time during a real rescue mission for a missing<br />
person emergency, the TAS used geodata software to acquire and record the geolocated information related<br />
to the occurrence. The geographic data, processed with GIS software by the staff of the TAS of the<br />
Turin Provincial Fire Department, were published on the web using Google My Maps, so as to be shared by a<br />
restricted number of users: the Operational Rooms (SO115) in Turin and Aosta, the Municipal Police Station<br />
of Carignano and the local media.<br />
This process allowed a real-time information flow from the incident area: the data and physical condition of the<br />
missing person, the zoning of the area of operations, the point of last sighting, the geolocation of search units, and the<br />
geolocation of discovered personal effects.<br />
This was basic information, but very useful for the immediate reconstruction of the incident scenario, also<br />
for Judicial Police activities.<br />
1 During the exercise, the team used the tools and solutions tested and adopted by the Centro Intercomunale di Protezione<br />
Civile Colline Marittime e Bassa Val di Cecina (COI), to manage and share geolocated information between volunteer<br />
teams, the COI Operational Room, and the COC (Centro Operativo Comunale - municipal operational centre). These solutions,<br />
including a blog website to inform in real time the population and media representatives, have been successfully tested<br />
during a missing person intervention in Cecina.<br />
The missing person search procedure provides for the establishment of an ICP, based in the UCL van when possible, where TAS<br />
personnel must:<br />
1. zone the search area,<br />
2. upload GPS devices with the appropriate maps and search routes or areas,<br />
3. assign Search And Rescue (SAR) teams their areas of operation (AO) and tune radio devices (TETRA system for<br />
VVF teams),<br />
4. monitor communications, facilitate cooperation and head operations,<br />
5. download GPS tracks (once SAR teams come back) to check uncovered areas,<br />
6. inform the Operational Room (SO115) about activities.<br />
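Step 5 of the procedure above, checking which zones the returned GPS tracks failed to cover, can be sketched as a simple coverage check. This is an illustrative sketch, not the TAS team's actual software: the function name, zone names, bounding boxes and track points are invented, and search zones are simplified to lat/lon rectangles.

```python
# Sketch of a search-zone coverage check: a zone counts as "covered"
# when at least one downloaded GPS track point falls inside it.

def uncovered_zones(zones, track_points):
    """Return the names of zones that no GPS track point entered.

    zones        -- dict name -> (lat_min, lon_min, lat_max, lon_max)
    track_points -- iterable of (lat, lon) tuples from downloaded tracks
    """
    covered = set()
    for lat, lon in track_points:
        for name, (lat0, lon0, lat1, lon1) in zones.items():
            if lat0 <= lat <= lat1 and lon0 <= lon <= lon1:
                covered.add(name)
    return sorted(set(zones) - covered)

# Two adjacent zones near Carignano (invented coordinates).
zones = {
    "A": (44.90, 7.66, 44.91, 7.68),
    "B": (44.91, 7.66, 44.92, 7.68),
}
track = [(44.905, 7.670), (44.906, 7.672)]  # the SAR team stayed inside zone A
print(uncovered_zones(zones, track))  # zone B was never entered
```

In a real deployment the rectangles would be replaced by the actual zone polygons drawn at the ICP, but the logic is the same.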
A common platform to share the information uploaded by professionals from different organizations (Fire Fighters,<br />
municipal and national police forces, Civil Protection volunteers, specialized SAR teams) should dramatically<br />
improve operational efficiency.<br />
Information sharing on a web 2.0 platform could be used for missing person searches as for any emergency<br />
operation.<br />
Nevertheless, this is a goal not only for internal Italian Fire Fighters procedures, which link the ICP to field teams<br />
and SO115, but also for all public bodies involved in emergency and disaster management.<br />
The platform is suitable for coordinating different emergency operations and major disaster relief.<br />
Real-time information sharing is well suited to provide, for example, technical support by geologists during severe<br />
storms that lead to floods and landslides, or by air analysts during CBRN terrorist attacks.<br />
At the same time, the platform would facilitate information dissemination to the media and directly to citizens.<br />
5. The Protec2011 Exercise<br />
The Protec2011 Exercise was based on a missing person search scenario and was carried out during the Protec<br />
2011 Exhibition (http://www.protec-italia.it/indexk.php). This made it possible to involve the conference attendees<br />
as VGI sensors and to get independent feedback on procedures and activities.<br />
The TAS team was interested in testing the interaction among GPS devices and data formats, radios, mobile phones<br />
and geo-mapping software, and also in verifying the IT infrastructure capacities.<br />
OpenResilience aimed to test VGI platforms such as Ushahidi, Google Maps and Twitter, to see if they satisfy the<br />
requirements of rescue operations. We were also interested in the results of real-time translation<br />
among different GIS data formats (shp, KML, wpt, GPX, PLT) and different software platforms using GIS or<br />
web-GIS (OziExplorer, ArcPad, Google Maps, Ushahidi, Global Mapper).<br />
Usually each format or platform is used for a specific purpose; this creates many difficulties in emergency<br />
management (U.S. House of Representatives, 2006). The winning idea is to develop a “black box” able to<br />
contain and share different information from different actors and make it available to everyone.<br />
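The "black box" idea can be illustrated with a minimal format translation, here from GPX (the waypoint format produced by the teams' GPS units) into KML, which Google Maps/Earth can ingest. This is a hedged sketch using only the Python standard library, not the platform described in the paper; the sample waypoint is invented, and a real converter would also handle tracks, routes and the other formats listed above.

```python
# Minimal GPX 1.1 waypoint -> KML translation with the standard library.
import xml.etree.ElementTree as ET

GPX_NS = "{http://www.topografix.com/GPX/1/1}"

def gpx_waypoints_to_kml(gpx_text):
    """Convert the <wpt> elements of a GPX 1.1 document into a KML string."""
    root = ET.fromstring(gpx_text)
    kml = ET.Element("kml", xmlns="http://www.opengis.net/kml/2.2")
    doc = ET.SubElement(kml, "Document")
    for wpt in root.iter(GPX_NS + "wpt"):
        pm = ET.SubElement(doc, "Placemark")
        name = wpt.find(GPX_NS + "name")
        ET.SubElement(pm, "name").text = name.text if name is not None else ""
        point = ET.SubElement(pm, "Point")
        # KML wants "lon,lat"; GPX stores them as lat/lon attributes.
        ET.SubElement(point, "coordinates").text = (
            wpt.get("lon") + "," + wpt.get("lat")
        )
    return ET.tostring(kml, encoding="unicode")

# An invented waypoint, e.g. the point of last sighting.
gpx = """<gpx xmlns="http://www.topografix.com/GPX/1/1" version="1.1">
  <wpt lat="44.905" lon="7.670"><name>last sighting</name></wpt>
</gpx>"""
print(gpx_waypoints_to_kml(gpx))
```

Each pairwise converter of this kind is one small component of the envisioned "black box"; in practice a library such as GDAL/OGR already covers most of the listed formats.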
An extra test concerns the opportunity offered by open source software for smartphones, with automatic delivery of<br />
georeferenced information (SMS, MMS, photos, videos) to an emergency service number (like the US 911 or EU<br />
112), which would allow a more effective rescue response.<br />
As the exercise location was ideal (full Wi-Fi, WiMAX, cellular phone and TETRA coverage), the interaction among<br />
the different infrastructures, and device switching among them, was to be tested too.<br />
This will allow better exercise tuning before field tests in difficult terrain. Moreover, the urban search<br />
provides interesting data for future improvements in fire operations, earthquake USAR and damage assessment,<br />
HAZMAT pollution and CBRN contamination.<br />
The exercise focused on testing web technologies and mapping instruments for the emergency<br />
management of information flows among different actors, and aimed to open a two-way communication<br />
channel with citizens.<br />
5.1. The scenario<br />
Mrs. Paola Bianchi, 75 years old and affected by Alzheimer’s disease, went missing from her house during the morning.<br />
Her family raised the alarm at 2:00 pm. Following protocol, the Police department called up the Fire Department,<br />
which alerted the Prefect and the Civil Protection volunteers responsible.<br />
At Operational Room (COC, placed inside Protec2011 Green Room) a Command Post is activated.<br />
The TAS Team reaches the last sighting area with the UCL van (Photo 2), which will be used as the ICP and technical rescue<br />
management centre (as decreed by Italian law). A TAS professional will receive the search area zoning defined by the OR<br />
and upload the GPS devices, while a second professional will facilitate the information exchange between the SAR teams<br />
and the OR.<br />
5.2. The crew<br />
OpenResilience and the TAS Team planned the exercise and participated as described in Table 1. The Turin and Aosta<br />
Fire Departments provided SAR personnel and K9 teams, while students from the University of Turin played<br />
civil protection volunteers, media reporters and citizens. A UNITO technician was a perfect Paola Bianchi,<br />
whose photo was published on the exercise blog (http://esercitazioneprotec.wordpress.com/). Some Protec2011<br />
conference attendees participated as witnesses.<br />
[1] UCL — Di Lolli A. & Lombardo D. (search coordinators) + 1 VVF + 2 Prisma Engineering (LSUnet)<br />
[2] Operational Room — Rapisardi E. (exercise coordinator) + 2 web 2.0 specialists + 2 VVF<br />
[3] Search team 1 — 2 VVF + K9 unit<br />
[4] Search team 2 — 2 VVF + K9 unit<br />
[5] Search team 3 — Lanfranco M. (devices tester) + 2 GIS specialists (UNITO students)<br />
[6] Civil Protection Volunteers — UNITO students<br />
[7] Citizens — UNITO students + Protec 2011 attendees<br />
[8] Audio / Video Operators — 2 VVF + 2 UNITO students<br />
[9] Media Observer — http://www.ilgiornaledellaprotezionecivile.it/<br />
Table 1. Crew composition<br />
5.3. Communication Infrastructure<br />
Commercial GSM/UMTS cellular network<br />
Lingotto Fiere internal Wi-Fi (plus an outdoor ad-hoc exercise network)<br />
Fire Department WiMAX<br />
The whole Province of Turin is covered by a WiMAX 5GB network, used by the Provincial Command Centre for data<br />
exchange among Detachments and SO115. This network can also be used for terminal connections within<br />
urban and suburban zones, through a network of fixed antennas and “on demand” mobile repeaters. Access<br />
is secured by password.<br />
Fire Department radio network<br />
The Italian Fire Department owns a nationwide radio network. The radio network links rescue teams to the Provincial<br />
Operational Rooms, while backbones link the Regional Commands with the National Crisis Room. The VVF radio<br />
network has never failed during a disaster since the Friuli earthquake, back in 1976. TAS teams are able to geolocate<br />
VHF vehicle devices and some UHF personal radios.<br />
GSM/UMTS cellular network: Prisma Engineering (http://www.prisma-eng.com/lsu_net.html)<br />
LSUnet makes it possible to carry (in a backpack or on a trolley) a GSM (or UMTS) base station wherever necessary.<br />
Disasters often undermine mobile networks directly (e.g. by interrupting the power supply) or indirectly (through<br />
network congestion due to an excess of information exchange among the people involved).<br />
An LSUnet emergency network allows first responders to restore cellular coverage in a short time (10 minutes),<br />
so that standard phones or smartphones can be used to coordinate relief efforts and/or to establish two-way contact<br />
with affected citizens.<br />
Photo 1. COC (Operational Room)<br />
Photo 2. UCL (Incident Command Centre)<br />
6. Discussion<br />
The Protec2011 Exercise was an important test to highlight how the VVF procedures could be transferred<br />
to a web 2.0 environment, and what the strong and weak points of the adopted solutions are.<br />
From a wider perspective, the exercise underlined that geolocated information sharing is perceived as a need<br />
in any rescue or relief operation, since real-time communication, e.g. between the UCL and the COC, at least<br />
enables situational awareness and remote tactical control.<br />
Citizen involvement [crowdsourcing] was undoubtedly considered a plus, never experimented with<br />
before. The new emerging geolocation tools and platforms, although considered “poor” and of low reliability by the<br />
academic community, represent a new challenge in a world where stakeholders’ needs for information and geolocated<br />
information are rapidly increasing; they are the expression of a democratization of geodata access that<br />
reflects a collaborative and proactive approach to coping with risks and disasters [“Towards a more resilient<br />
society” - Third Civil Protection Forum, 2009].<br />
However, to obtain more reliable data, the post-Gutenberg map makers should acquire some<br />
competence with mobile applications, or be prepared through specific information campaigns (web and<br />
mobile literacy). The challenge is to “design a more robust collaborative mapping strategy” (Kerle, 2011)<br />
by defining common guidelines.<br />
From the technological point of view, crowdmapping should take into account that geolocation accuracy is<br />
highly dependent on the device’s GPS quality [in tested commercial phones - iPhone, BlackBerry, HTC,<br />
Samsung - the GPS chips showed different levels of accuracy].<br />
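One way to quantify such per-device differences is to compare each phone's reported fix against a surveyed reference point using the haversine great-circle distance. A sketch under assumptions: the device names and GPS fixes below are invented for illustration, not measurements from the exercise.

```python
# Quantifying GPS positional error as the haversine distance between a
# device's reported fix and a surveyed reference point.
from math import radians, sin, cos, asin, sqrt

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two WGS84 points."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = (sin((lat2 - lat1) / 2) ** 2
         + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371000 * asin(sqrt(a))  # mean Earth radius in metres

reference = (44.9050, 7.6700)            # surveyed position of a test marker
fixes = {                                 # hypothetical per-device GPS fixes
    "phone_a": (44.90505, 7.67004),
    "phone_b": (44.90530, 7.66950),
}
for device, (lat, lon) in fixes.items():
    err = haversine_m(*reference, lat, lon)
    print(f"{device}: {err:.1f} m error")
```

Repeating this over many fixes per device gives an empirical accuracy figure that can feed the "common guidelines" mentioned above.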
One more feature to be introduced is an SMS channel allowing citizens to send reports, even though SMS<br />
carries no geolocation information.<br />
On the back-end side, we are aware that we should focus more on the capability of the VVF and COC operators<br />
to interpret and validate citizens’ reports. The Ushahidi experience teaches that a validation process must be set<br />
up and should follow specific rules: this means that the personnel in charge of validation should be trained<br />
on this specific issue and should develop some experience in the field.<br />
During the debriefing, the participants underlined that the whole system should use a single platform, in order to<br />
have all the data on the same map: tracks, citizen reports and VVF operations.<br />
On the connectivity side, the internal Wi-Fi infrastructure (used by the COC and UCL) was not appropriate for the<br />
purpose and the WiMAX didn’t work indoors, but the test of Prisma Engineering’s LSUnet for cellular voice<br />
communication was extremely positive; however, this communication network would not support any public<br />
web platform since, in this exercise, it set up only a local voice channel.<br />
7. Step forward<br />
Collaborative mapping is the crucial need in any rescue and relief operation. Our recent experience leads us to<br />
focus the research on the development of a single platform [web and mobile] that allows different levels of<br />
geolocated information sharing on a “user permissions” basis [anonymous user, registered user level 1, …]. Our<br />
approach is to use solutions that are free and open [such as Google Maps, Google Earth, Google 3D,<br />
Ushahidi, OpenStreetMap, or Android apps for route tracking] and to develop a stable tool through the<br />
integration of diverse solutions, ensuring a high level of sharing and collaboration among the different players.<br />
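The "user permissions" idea can be sketched as a filter that degrades geolocated reports for lower permission levels. The level names and the ~1 km coordinate-coarsening rule are assumptions for illustration, not part of the platform design described above.

```python
# Sketch of permission-based geolocated information sharing: anonymous
# viewers see reports with coarsened coordinates, registered users see
# the exact positions.

ANON, REGISTERED_L1 = 0, 1  # hypothetical permission levels

def visible_reports(reports, level):
    """Filter/degrade geolocated reports according to the viewer's level."""
    out = []
    for rep in reports:
        lat, lon = rep["lat"], rep["lon"]
        if level < REGISTERED_L1:
            # ~0.01 deg is roughly 1 km: enough for public awareness,
            # not precise enough for tactical use.
            lat, lon = round(lat, 2), round(lon, 2)
        out.append({"text": rep["text"], "lat": lat, "lon": lon})
    return out

reports = [{"text": "personal effects found", "lat": 44.90537, "lon": 7.66951}]
print(visible_reports(reports, ANON))           # coarsened position
print(visible_reports(reports, REGISTERED_L1))  # exact position
```

Finer gradations (e.g. hiding certain report categories entirely from anonymous users) would follow the same pattern.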
Step by step<br />
The next steps of our research team, apart from the crucial fundraising task, should start with a stricter<br />
evaluation of the information formats and standards used by the different civil protection players and bodies,<br />
and with an analysis of the information flow in some sample operations (e.g. missing person searches, critical<br />
infrastructure crippling, USAR).<br />
Further on, we will carry out a review of platforms/projects/solutions to draw a bigger picture, so as to acquire the<br />
necessary information to design and implement the whole web/mobile system, which will be tested in TAS<br />
Team exercises and operations during the next winter season.<br />
A vademecum and the setting up of a training package targeting different users will complete the basic<br />
research and support the spread of the adopted platform to all other Fire Departments and the further involvement<br />
of local governments.<br />
Acknowledgements<br />
The project was started on behalf of the Turin Provincial Chief of Fire Fighters (Ing. Silvio Saffiotti) and<br />
with the authorization of the Ministry of the Interior. The Lingotto Fiere crew strongly supported our activities during<br />
PROTEC2011, also with an ad-hoc Wi-Fi network. Prisma Engineering provided the LSUnet station. Students from the<br />
University of Turin staffed the volunteer teams. Barbara Bersanti and Antonio Campus from the Centro Intercomunale<br />
Protezione Civile Colline Marittime e Bassa Val di Cecina ran the Operational Room.<br />
References<br />
ALEXANDER D.E., 2001, Nature's Impartiality, Man's Inhumanity. Disasters 26(1), pp. 1-9.<br />
BENKLER Y., 2006. The wealth of networks. http://cyber.law.harvard.edu/wealth_of_networks/<br />
BURNINGHAM K., FIELDING J., THRUSH D., 2008, "It'll never happen to me": understanding public awareness of local<br />
flood risk. doi:10.1111/j.0361-3666.2007.01036.x., pp. 216-238.<br />
DRABEK T.E., 1999, Understanding Disaster Warning Responses. The Social Sciences Journal 36(3), pp. 515-523.<br />
GIARDINO, M., GIORDAN, D., AND AMBROGIO, S., 2004. GIS technologies for data collection, management and<br />
visualization of large slope instabilities: two applications in the Western Italian Alps. Natural Hazards and Earth<br />
System Sciences 4, pp. 197–211.<br />
GIARDINO M., PEROTTI L., LANFRANCO M., PERRONE G., 2010. GIS and Geomatics for disaster management and<br />
Emergency relief: a proactive response to natural hazards. Proceedings of the Gi4DM 2010 Conference – Geomatics<br />
for Crisis Management. Turin (I).<br />
HARVARD HUMANITARIAN INITIATIVE, 2011, Disaster Relief 2.0: The Future of Information Sharing in Humanitarian<br />
Emergencies. Washington, D.C. and Berkshire, UK: UN Foundation & Vodafone Foundation Technology<br />
Partnership.<br />
KERLE N., 2011., Remote Sensing Based Post-Disaster Damage Mapping - Ready for a collaborative approach?,<br />
www.earthmagazine.org.<br />
MECHLER R., KUNDZEWICZ Z.W., 2010, Assessing Adaptation to Extreme Weather Events in Europe - Editorial. Mitig<br />
Adapt Strateg Glob Change 15(7), pp. 611-620.<br />
MCCLEARY P., 2011, Battlefield 411. Defense Technology International 6, vol. 5, p. 48.<br />
MCCLEARY P., 2011, Small-Unit Comms. Defense Technology International 7, vol. 5, p. 47.<br />
PEEK L.A., SUTTON J.N., 2003, An Exploratory Comparison of Disasters, Riots and Terrorism Acts. Disasters 27(4),<br />
pp. 319-335.<br />
PERRY R.W., LINDELL M.K., 2003, Preparedness for Emergency Response: Guidelines for the Emergency Planning<br />
Process. Disasters 27(4), pp. 336-350.<br />
PLOTNICK L., WHITE C., PLUMMER M., 2009, The Design of an Online Social Network for Emergency Management:<br />
A One-Stop Shop. In: Proceedings of the 15th ACIS, San Francisco (USA).<br />
QUARANTELLI, E.L. (ed.), 1998. What is a Disaster? Perspectives on the Question. Routledge, Oxon (UK).<br />
O'REILLY T., 2007. What Is Web 2.0: Design Patterns and Business Models for the Next Generation of Software.<br />
http://oreilly.com/web2/archive/what-is-web-20.html<br />
UN/ISDR, 2005, Hyogo Framework for Action 2005-2015: Building the resilience of nations and communities to<br />
disasters (HFA). United Nations International Strategy for Disaster Reduction, Kobe, Hyogo (J).<br />
UNDP, 1994. Human Development Report. http://hdr.undp.org/en/reports/global/hdr1994/<br />
U.S. HOUSE OF REPRESENTATIVES, 2006. A Failure of Initiative. Final Report of the Select Bipartisan Committee to<br />
Investigate the Preparation for and Response to Hurricane Katrina.<br />
http://www.gpoaccess.gov/katrinareport/mainreport.pdf<br />
WISNER B., BLAIKIE P., CANNON T., DAVIS I., 2005, At Risk. 2nd edition, Routledge, Oxon (UK).<br />
WHITE C., PLOTNICK L., KUSHMA J., HILTZ S.R., TUROFF M., 2009, An Online Social Network for Emergency<br />
Management. In: Proceedings of the 6th ISCRAM Conference, Gothenburg (S).<br />
SHEN X., 2010, Flood Risk Perception and Communication within Risk Management in Different Cultural<br />
Context. Graduate Research Series 1, UNU-EHS, Bonn (D).<br />
ZLATANOVA S., DILO A., 2010, A Data Model for Operational and Situational Information in Emergency Response:<br />
the Dutch Case. In: Proceedings of Gi4DM 2010, Turin (I).<br />
SHORT PAPER<br />
An Integrated Quality Score for Volunteered Geographic<br />
Information on Forest Fires<br />
OSTERMANN F. and SPINSANTI L.<br />
Joint Research Centre of the European Commission, Italy<br />
frank.ostermann@jrc.ec.europa.eu<br />
Abstract: The paper presents the most recent developments in an exploratory research project<br />
that investigates the potential utility of volunteered geographic information (VGI) for fighting forest<br />
fires. As social media platforms change the way people communicate and share information in<br />
crisis situations, we focus on the value and options to integrate VGI with existing spatial data<br />
infrastructures (SDI) and crisis response procedures. Two major obstacles to using VGI in crisis<br />
situations are (1) a lack of quality control and (2) an increasing amount of information.<br />
Consequently, the overall quality and fitness-for-use of VGI need to be assessed first. One year ago,<br />
we proposed a workflow for automatically processing and assessing the quality and the accuracy of<br />
VGI in the context of forest fires. This contribution presents the advancements since then, focusing<br />
on the approach to define and implement a measure of the overall quality/fitness-for-use of the<br />
content analyzed. A proposed integrated quality score (IQS) consists of two main criteria, i.e.<br />
relevance and credibility. For both criteria, we have identified several contributing components.<br />
The geographic context of a message has crucial significance, since we argue that it allows<br />
assessing both relevance and credibility. However, the geographic context is difficult to establish,<br />
since a single piece of VGI can contain multiple types of geographic references, each of varying<br />
quality itself.<br />
Keywords: VGI, Forest fire, quality measure, crisis management, social networks.<br />
1. Introduction<br />
There is already a substantial amount of information provided by the general public on/during natural and<br />
man-made disasters (Palen & Liu, 2007a). However, the expected future development and adoption of<br />
integrated mobile devices such as smart phones makes it likely that the amount of near-real time,<br />
geographically referenced volunteered information will increase manifold during the coming years. In our case<br />
study, as can be observed in Table 1 in the next section, the amount of data retrieved in 2011 more than<br />
doubled with respect to 2010.<br />
This development is going to change the way information is collected, distributed and used. The uni-directional<br />
vertical flow of information from officials to the public via traditional broadcast media like radio or television<br />
is being replaced by horizontal peer-to-peer communication. The lines between public and official<br />
already blur when official administrative agencies (e.g. in British Columbia 2 ) use regular accounts of private<br />
companies like Facebook or Twitter for communication services. However, until now these newly created<br />
back-channels do not yet integrate well with traditional established emergency response protocols. Clearly,<br />
there are many open questions to be investigated, and recent examples of research on the role of<br />
volunteered information during concrete disasters include wildfires (De Longueville et al., 2010; De<br />
Longueville, Smith, & Luraschi, 2009; Hudson-Smith, Crooks, Gibin, Milton, & Batty, 2009; Liu & Palen, 2010).<br />
In the case of volunteered information on crisis events, its potential utility depends on the possibility to<br />
georeference the information - if we cannot locate the user-generated content, it is impossible to act on it.<br />
While some volunteered information is explicitly geographic by itself (e.g. OpenStreetMap), other information is only<br />
implicitly geographic, since it mentions a place or has geographic coordinates as meta-data. We group both<br />
types under the label of Volunteered Geographic Information (VGI). Another notable aspect is that this VGI is<br />
intrinsically heterogeneous as it is provided by different people, using different media such as photographs,<br />
text, or video, and authors often overcome device and software limitations in imaginative and unpredictable<br />
ways.<br />
The work presented here is part of an exploratory research project with the objectives to (i) develop, test,<br />
and deploy workflows able to quality-control volunteered geographic information and (ii) assess the value of<br />
volunteered geographic information in supporting both early warning and local impact assessments of forest<br />
fires. For more details see (Spinsanti and Ostermann, 2010).<br />
As test cases for a proof-of-concept implementation, we decided to analyze two different social networks:<br />
Twitter and Flickr. The first is a micro-blogging network while the second is a photo sharing network. The<br />
research aims to study the two separately to identify their specific characteristics, but also to investigate<br />
how the two can complement each other.<br />
We started harvesting VGI at the beginning of the forest fire season in July 2010, and continued until the end of<br />
September 2010. At the moment of writing, we are collecting the 2011 season data. Using the public Twitter<br />
streaming API with a filter of 12-17 wildfire related keywords (e.g. fire, forest, evacuation) in 8 different<br />
languages, we collected a total of 24.5 GB of data, equaling around 8 million Tweets for 2010. Using a similar<br />
2 British Columbia Forest Fire Information - http://www.bcforestfireinfo.gov.bc.ca/<br />
The Twitter profile http://twitter.com/#!/BCGovFireInfo<br />
The Facebook profile http://www.facebook.com/group.php?gid=2290613964<br />
set of keywords for searching Flickr, we retrieved meta-data for around 700 thousand images for 2010. For<br />
2011 we can see (Table 1) that the amount of retrieved information increased by more than 200% for both Twitter<br />
and Flickr. All of this VGI is potentially related to a forest fire. The large amount of information makes an<br />
automatic methodology to assess the quality and accuracy of the VGI essential. First, however,<br />
we have to geocode any location information.<br />
2. Creating VGI - geocoding user-generated content<br />
As we have defined in the previous section, VGI is information (text, image, video, etc.) with one or more<br />
geographic references, which can be explicit (coordinates), or implicit in the form of placenames (toponyms).<br />
The explicit georeferences can be generated in two ways: either automatically by the device, if it has Global<br />
Positioning System (GPS) hardware, and then added by the software used to transmit the information; or<br />
alternatively, some platforms allow the user to select a place from a list (or a point on a map), which is<br />
then added to the message meta-data in the form of text or coordinates. Implicit geo-references are created<br />
when the contributor uses toponyms in the message content or adds them as tags. As we show below, even<br />
already explicitly geocoded information needs to be examined for toponyms and possible re-geocoding.<br />
Looking at the Twitter and Flickr data for August 2010 and 2011, we can observe that the amount of explicit<br />
geographic information is very low compared with the large amount of retrieved information.<br />
Table 1: Number of retrieved VGI and explicit geographic information<br />
                            TWITTER                      FLICKR 3<br />
                            August 2010   August 2011    August 2010   August 2011<br />
Number of retrieved VGI     2,904,065     7,996,228      7,991         17,850<br />
Percentage with toponym     35%           27%            53%           50%<br />
Percentage with geocode     1.1%          0.92%          20%           21%<br />
Yet a simple string-matching search against a large database of toponyms reveals that a much higher<br />
percentage of messages potentially contains an implicit geographic reference: for Twitter the implicit<br />
geo-references are around 30% against 1% of explicit ones; for Flickr the implicit geo-references are around<br />
50% against 20% of explicit ones. For both data types, in 28% of the cases the implicit geographic<br />
reference is the only reference. In order to make this implicit VGI usable for crisis management, we have to<br />
make it explicit first (i.e. geocode it), and this can be done using different applications and strategies.<br />
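As a minimal sketch of this string-matching step — with an invented commune list and made-up messages, not the actual GISCO gazetteer or harvested data — the filtering could look like this:

```python
# Minimal sketch of toponym string matching against a gazetteer.
# The commune set and the messages below are hypothetical examples.
communes = {"funchal", "ajaccio", "valencia", "siena"}

def find_toponyms(text: str, gazetteer: set) -> list:
    """Return gazetteer entries that appear as whole words in the text."""
    words = {w.strip(".,!?#@\"'()").lower() for w in text.split()}
    return sorted(words & gazetteer)

messages = [
    "Huge smoke over the mountains behind Funchal #forestfire",
    "Stay safe everyone, fires are terrible this year",
]
# Keep only messages carrying at least one implicit geographic reference.
implicit = [m for m in messages if find_toponyms(m, communes)]
```

In practice, multi-word commune names, diacritics and abbreviations make the matching considerably harder than this word-level intersection suggests.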
Independently of the methodology we choose, we have to consider the different types of geographic<br />
reference we are likely to encounter. Let us consider the next two examples and the information we want to<br />
analyze: the part of the VGI that describes a location, the associated real-world object or event that is located,<br />
and the geo-referencing of this location information.<br />
In the Tweet example in the left part of Figure 2, the geographic reference is contained in the text in the<br />
toponym “Funchal”, more precisely the mountains behind the city of Funchal. The information refers to a<br />
forest fire event. Suppose the Tweet message itself has some coordinates originating from the source GPS<br />
device: this represents the location of the message. Nevertheless, we can suppose that the user was sending<br />
the message safely far away from the forest fire, so the three different locations do not overlap. In the<br />
Flickr example in the right part of Figure 2, consider a person taking a picture with a camera or a smartphone<br />
3 Retrieved without English keywords<br />
with an integrated GPS system. The coordinates of the person and the device overlap, while the subject of the photo<br />
is at some distance from the camera. This distance could be considerable. Let us imagine that the content<br />
represented in the photo is Mount Everest. The user with the camera is necessarily far from the mountain<br />
peak in order to include it in the photo. As shown on the right side of Figure 2, the mountain and the camera could be<br />
considerably far from each other. Moreover, the user can set the photo coordinates manually: in this case<br />
the precision depends on several factors such as his/her knowledge of the surroundings or the application interface.<br />
Figure 2. Geographic Information in VGI<br />
These examples illustrate that there is a discrepancy between the location of the content (the registered<br />
device location at the time the message is sent or the photo is taken) and the geographic content contained in<br />
the message itself. The location of the device is not necessarily equal to the location of the reported content:<br />
they can overlap or be far apart, as in the examples. This discrepancy is not of a technological nature, but will<br />
always include semantic aspects.<br />
Because the amount of explicit geographic information in VGI is low, and also because of the semantic<br />
uncertainty of this geographic information, the geocoding of the VGI is a crucial step.<br />
3. Geocoding VGI and context information<br />
Geocoding is the process of finding associated geographic coordinates (often expressed as latitude and<br />
longitude) from other geographic data, such as street addresses, or zip codes (postal codes). In our case we are<br />
looking for place names to be associated with a fire event. In natural language, place names (toponyms) are<br />
used to refer to these locations without having to mention the actual geographic coordinates. We select the<br />
granularity level of commune names in Europe because the European Forest Fire Information System<br />
(EFFIS) uses this level for its forest fire database collection, and at the end of the workflow we want to be able<br />
to integrate the data. We decided to limit the proof-of-concept experiments to four Mediterranean countries:<br />
Italy, France, Spain and Portugal. This choice was driven by several reasons: these are the most likely places<br />
for forest fires in summer, the languages of these countries use Latin characters, and the choice excludes most US<br />
users. We used commune names extracted from GISCO 4 , a Eurostat service which promotes and stimulates<br />
the use of GIS within the European Statistical System and the Commission. There are more than 57,000<br />
communes in the four countries. We search for the presence of the commune name in the title and tags of the<br />
photos and in the text for the Tweets. Toponym disambiguation (a.k.a. toponym resolution) is the task of<br />
determining which real location is referred to by a certain instance of a name. Toponyms, as with named<br />
entities in general, are highly ambiguous. From Habib and van Keulen (2011) it can be observed that<br />
around 46% of toponyms have two or more references, 35% three or more, and 29% four or more. In natural<br />
4 http://epp.eurostat.ec.europa.eu/portal/page/portal/gisco_Geographical_information_maps/introduction<br />
language, humans rely on the context to disambiguate a toponym. In human communication, the context used<br />
for disambiguation is broad: not only the surrounding text matters, but also the author and recipient, their<br />
background knowledge, the activity they are currently involved in, even the information the author has about<br />
the background knowledge of the recipient, and much more. Moreover, in our task, we are dealing with short<br />
text messages (often with poor grammar and syntax) and tags: the methods implemented for larger text<br />
corpora are often not valid. Our approach is to use several existing systems and combine the final result in a<br />
geographic score. The first step is to identify VGI that can potentially be geocoded, by filtering it using the<br />
commune name labels. This brute-force approach ensures that VGI without any geographic reference in the text<br />
is discarded. The remaining VGI is sent to the Europe Media Monitor (EMM) geographic module for<br />
toponym extraction. The results are compared with the previous ones to reinforce or decrease the confidence<br />
in the retrieved place name. For the uncertain ones, a further step sends the VGI to the<br />
Yahoo! Placemaker service and compares the results again with those previously retrieved. At the end, a geocoding score is<br />
generated and used in the Integrated Quality Score (IQS), as described in the next section.<br />
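As an illustration of how agreement between the three stages (commune-name matching, EMM, Yahoo! Placemaker) could be turned into a single geocoding confidence, consider the sketch below. Reducing each stage to a plain set of toponyms and scoring by the fraction of agreeing sources is our simplification, not the actual reinforcement rule of the workflow:

```python
def geocoding_score(string_match: set, emm: set, placemaker: set):
    """Pick the toponym most sources agree on, with a confidence in [0, 1].

    Each argument is the set of toponyms one stage retrieved. A toponym
    found by more stages is considered more reliable; the agreement
    fraction stands in for the reinforce/decrease logic of the workflow.
    """
    sources = [string_match, emm, placemaker]
    candidates = set().union(*sources)
    if not candidates:
        return None, 0.0
    best = max(candidates, key=lambda t: sum(t in s for s in sources))
    confidence = sum(best in s for s in sources) / len(sources)
    return best, confidence

# Two of three stages agree on "funchal", so confidence is 2/3.
place, conf = geocoding_score({"funchal"}, {"funchal"}, set())
```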
4. Integrated Quality Score<br />
Assigning a score to each piece of VGI has to deal with several facets, each of them contributing to the final<br />
value. We aim to score each of these facets independently, and then arrive at an integrated quality score<br />
combining all aspects. While the idea of an additive quality score is not new (Friberg, Prödel, and Koch 2011), we<br />
intend to focus on the spatial context of the information, which to our knowledge has not been attempted so<br />
far. The following figure shows the sequence of the procedure: after the information has been successfully<br />
geocoded (see previous section), we gather and assess information on the geospatial context, and rate the<br />
degree of topicality, i.e. the likelihood of the information being about forest fires. We intend to plug in further<br />
modules dealing with source credibility later.<br />
[Flow diagram: VGI → Geocoding → Geo-Spatial Context information → Topicality → Integrated Quality Score]<br />
Figure 3. Several aspects of integrated quality score<br />
In more detail, the Geo-Spatial Context considers land use and land cover, population density, and distance to<br />
known hot spots discovered by satellite imagery.<br />
Topicality is about the content of the message: is the VGI referring to a forest fire? To calculate this score, each<br />
keyword is assigned a value based on a statistical analysis of VGI evaluated by hand as a ground-truth<br />
basis.<br />
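The keyword values are derived from the hand-labelled ground truth and are not reported here; with invented values, the topicality computation might look like this:

```python
# Hypothetical keyword values, standing in for the statistically derived ones.
keyword_values = {"fire": 0.6, "forest": 0.5, "evacuation": 0.8, "smoke": 0.4}

def topicality(text: str) -> float:
    """Sum the values of the matched keywords, capped at 1.0."""
    words = {w.strip(".,!?#").lower() for w in text.split()}
    score = sum(v for k, v in keyword_values.items() if k in words)
    return min(score, 1.0)

t = topicality("Forest fire near the village, evacuation ongoing")
# fire + forest + evacuation = 1.9, capped at 1.0
```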
These values are combined in a weighted sum: IQS(io_j) = Σ_{i=1..N} w_i · v_i(s_ji), with w_i being the weight for<br />
criterion i, v_i the value function for criterion i, and s_ji the score of information object j on criterion i.<br />
In our specific case, IQS = (topicality × weight1) + (geocoding × weight2) + (context × weight3).<br />
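The weighted sum is straightforward to implement; the weights and per-criterion values below are illustrative placeholders, since the paper does not report the values actually used:

```python
def integrated_quality_score(scores: dict, weights: dict) -> float:
    """IQS(io_j) = sum_i w_i * v_i(s_ji), here with pre-computed criterion values."""
    return sum(weights[c] * scores[c] for c in scores)

# Hypothetical per-criterion values in [0, 1] and hypothetical weights.
weights = {"topicality": 0.5, "geocoding": 0.3, "context": 0.2}
scores = {"topicality": 0.8, "geocoding": 0.9, "context": 0.5}
iqs = integrated_quality_score(scores, weights)
# 0.5*0.8 + 0.3*0.9 + 0.2*0.5 = 0.77
```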
5. Conclusion<br />
In this paper, we have argued that the emerging use of social media by members of the public will become an<br />
important pathway for communication during a crisis event, while the notion of citizens as sensors can provide<br />
the decision-makers of the crisis management team with important information. A large part of volunteered<br />
information has a geographic component, and this number will increase constantly. However, the increasing<br />
usage will also amplify two main challenges: the volume of information will need some sort of filtering, and in<br />
order to be useful for official emergency response, its quality, relevance and credibility need to be assessed.<br />
Humans have carried out both tasks so far, but the tasks need to be automated to cope with the expected<br />
increase of volunteered information. We propose an integrated quality score for VGI assessment from social<br />
media. The geographic component will play an important part in assessing the data and in clustering the data<br />
to fit specific (sub-) events. The next steps will include the integration of VGI with official spatial data<br />
infrastructures, and its evaluation for fire events.<br />
Acknowledgements<br />
This work was partially funded by the exploratory research project "Next Generation Digital Earth: Engaging<br />
the citizens in forest fire risk and impact assessment" from the Institute for Environment and Sustainability of<br />
the European Commission – Joint Research Centre.<br />
Thanks to the EFFIS and EMM teams for their contribution and support.<br />
References<br />
De Longueville, B. D., Luraschi, G., Smits, P., Peedell, S., & Groeve, T. D. (2010). Citizens as Sensors for Natural<br />
Hazards: A VGI integration Workflow. Geomatica, 64(1), 355-363.<br />
De Longueville, B. D., Smith, R. S., & Luraschi, G. (2009). "OMG, from here, I can see the flames!": a use case of<br />
mining location based social networks to acquire spatio-temporal data on forest fires. Proceedings of the 2009<br />
International Workshop on Location Based Social Networks. doi:http://doi.acm.org/10.1145/1629890.1629907<br />
Friberg, Therese, Stephan Prödel, and Rainer Koch. 2011. Information Quality Criteria and their Importance for<br />
Experts in Crisis Situations. In Proceedings of the 8th International ISCRAM Conference. Lisbon.<br />
Habib, M.B. and van Keulen, M. (2011) Named Entity Extraction and Disambiguation: The Reinforcement<br />
Effect. In: Proceedings of the 5th International Workshop on Management of Uncertain Data, MUD 2011, 29<br />
Aug 2011, Seattle, USA. pp. 9-16. CTIT Workshop Proceedings Series WP11-02. Centre for Telematics and<br />
Information Technology, University of Twente. ISSN 0929-0672<br />
Hudson-Smith, A., Crooks, A., Gibin, M., Milton, R., & Batty, M. (2009). NeoGeography and Web 2.0: concepts,<br />
tools and applications. Journal of Location Based Services, 3(2), 118 - 145.<br />
Liu, S. B., & Palen, L. (2010). The New Cartographers: Crisis Map Mashups and the Emergence of<br />
Neogeographic Practice. Cartography and Geographic Information Science, 37, 69-90.<br />
Palen, L., & Liu, S. B. (2007a). Citizen Communications in Crisis: Anticipating a Future of ICT-Supported Public<br />
Participation. In CHI 2007 Proceedings (pp. 727-726). Presented at the Computer Human Interaction 2007, San<br />
Jose, USA.<br />
Spinsanti, Laura, and Frank O. Ostermann (2010). Validation and Relevance Assessment of Volunteered<br />
Geographic Information in the Case of Forest Fires. In Proceedings of the 2nd International Workshop On<br />
Validation Of Geo-Information Products For Crisis Management, ed. Christina Corbane, Daniela Carrion, Marco<br />
Broglia, and M. Pesaresi, 101-108. Ispra, Italy: Publications Office of the European Union, Luxembourg.<br />
SESSION II<br />
VALIDATION OF REMOTE SENSING DERIVED<br />
EMERGENCY SUPPORT PRODUCTS<br />
Chair: Dirk Tiede<br />
New generation remote sensing technologies today open up new application areas<br />
and demonstrate their effectiveness in providing geo-information in support of all the<br />
phases of the crisis management cycle. The presentations in this session establish a<br />
consistent framework on the use of remotely sensed data and its validation during the<br />
preparedness phase (e.g. baseline data on built-up areas), the early warning phase (e.g.<br />
evacuation plans), the post-disaster phase (e.g. damage assessment) and the<br />
reconstruction phase (e.g. monitoring reconstruction). A particular emphasis is given to<br />
the technical (e.g. sampling issues), the practical (e.g. validation in the secutiry domain)<br />
and even the theoretical challenges (e.g. review of the error matrix) related to the<br />
collection of reference data and their use within the different validation methodologies.<br />
SHORT PAPER<br />
Definition of a reference data set to assess the quality of building<br />
information extracted automatically from VHR satellite images<br />
WANIA A., KEMPER T., EHRLICH D., SOILLE P. and GALLEGO J.<br />
Joint Research Centre of the European Commission, Italy<br />
Annett.wania@jrc.ec.europa.eu<br />
Abstract<br />
Rapid urbanisation continues to be an issue with one third of the world’s urban population living<br />
under poor living conditions in informal settlements or shanty towns. Improving the lives of this<br />
population, which is one objective of the United Nations’ Millennium Development Goals, requires<br />
knowledge about the areas of concern. This information is currently still collected in intensive<br />
field studies. Earth observation data could be an alternative source of information that can support<br />
the process of information collection and, in the longer term, also serve for monitoring the evolution<br />
of those areas. Today’s sub-meter resolution satellites provide very detailed information allowing<br />
the identification of different urban patterns. However, the huge amount of data requires automatic<br />
information extraction to derive relevant information in a fast and consistent way.<br />
Reference data are crucial for the assessment of such algorithms, but in the absence of a<br />
relevant data set, which is especially the case in developing countries, an alternative database<br />
needs to be defined. In this paper we present a robust approach to produce a reference data set<br />
with limited field surveys for the city of Harare (Zimbabwe). This data set is defined to serve two<br />
objectives: first, quality assessment of the automatically extracted information and second, further<br />
analysis to identify built-up patterns.<br />
Two reference data sets are defined using systematic and cluster sampling. The building stock of<br />
the city is systematically sampled by visual image interpretation of regular grid points covering the<br />
entire image area. Cluster sampling in several stages is performed to define a small sample of<br />
buildings that are surveyed in the field. The first stage consists of constructing clusters for the<br />
entire city area based on building density and height. From these clusters, a sample is<br />
randomly selected, and within each selected cluster a sample of buildings is finally randomly drawn.<br />
Keywords: reference data set, sampling, building stock, VHR, quality assessment.<br />
1. Introduction<br />
The monitoring and evaluation of global urban conditions and trends has become an important issue against<br />
the background of rapid, continuous urbanisation worldwide and the high percentage of urban citizens living<br />
under poor conditions. UN-HABITAT has established the Global Urban Observatory (GUO) to address the<br />
urgent need to improve the world-wide base of urban knowledge by helping governments and local<br />
authorities and organisations to develop and apply urban indicators, statistics and other urban information.<br />
Earth observation data, and especially today’s generation of sub-meter resolution satellites could be one<br />
source of information that allows collecting information about the physical characteristics of the building stock<br />
systematically and world-wide. It could complement field studies and could be used to optimise field sampling.<br />
With the recurring acquisition of images it could also support the monitoring of urban areas.<br />
Against this background, the Joint Research Centre is currently developing a workflow to extract information<br />
on the building stock (Kemper et al. 2009). The ultimate goal is to develop a workflow that can be applied to<br />
any image in any region of the world, with a parameterisation that is consistent and as much as possible<br />
independent from local conditions. The information is extracted from very high resolution optical imagery<br />
(panchromatic resolution ≤ 1 m) and provides information at fine scale relating to homogeneous built-up<br />
structures. Since its first set-up at the end of last year, the workflow has been continuously improved and run for<br />
several cities on different continents. The first results were discussed with potential users from the GUO<br />
network, who outlined the potential of the derived urban indicators. At that time the quality of the results<br />
was assessed using manually digitised buildings which were available only for two of the ten analysed cities.<br />
The data sets were used to validate information which relates to the footprint of buildings. No information<br />
about the height was available. Furthermore it was problematic that one of the data sets was older than the<br />
image used.<br />
In the frame of the GMES project G-MOSAIC the same workflow was recently run on the city of Harare,<br />
Zimbabwe. As there is no ground truth available for the quality assessment, we decided to build a systematic<br />
reference data set of building samples. The sample set should serve two purposes. First, it will be used to<br />
assess the quality of the building stock extracted using the automatic workflow. Second, the building samples<br />
will be used to classify the entire city. The aim here is to identify spatial clusters according to type, usage,<br />
height of buildings and the housing quality (poor, rich).<br />
This paper presents an approach for the sample definition, which combines systematic and cluster sampling.<br />
Systematic sampling is used to define a data set for the entire city area. Systematic sampling with a random<br />
origin is used because it provides a smaller variance than random sampling in spatially correlated populations<br />
(Bellhouse, 1988, Dunn and Harrison, 1993). The main drawback of spatial sampling is that there are no<br />
unbiased estimators for its variance. Available estimators are usually conservative in the sense that the<br />
variance is overestimated (Osborne, 1942, Wolter, 1984). We have considered that this is not a major problem<br />
for this application. Clustering points does not give a major advantage for the first sampling phase (points to<br />
be photo-interpreted), but is necessary to reduce working time in the in-situ survey (second phase sample).<br />
The data set is created by visual interpretation of the input image. Cluster sampling is used to define a small<br />
subset of buildings that are surveyed in the field. While the visual interpretation allows collecting very limited<br />
two-dimensional building characteristics, the field survey provides the opportunity to collect height<br />
information and characteristics that require a close or front view of the sampled object.<br />
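The first-phase grid could be generated as in the following sketch of systematic sampling with a random origin; the window size and grid spacing are hypothetical, not those of the Harare survey:

```python
import random

def systematic_grid(xmin, ymin, xmax, ymax, spacing, rng):
    """Regular grid of sample points whose origin is shifted by a random
    offset in [0, spacing), i.e. systematic sampling with a random start."""
    ox, oy = rng.uniform(0, spacing), rng.uniform(0, spacing)
    points = []
    y = ymin + oy
    while y < ymax:
        x = xmin + ox
        while x < xmax:
            points.append((x, y))
            x += spacing
        y += spacing
    return points

rng = random.Random(42)  # fixed seed for reproducibility
# Hypothetical 10 km x 10 km study window with a 500 m grid spacing.
pts = systematic_grid(0, 0, 10_000, 10_000, 500, rng)
```

The second-phase cluster sample would then draw buildings only within a few selected clusters, which is what keeps the in-situ survey effort manageable.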
2. Study area and data<br />
The study area covers 340 km² of the larger urban zone of Harare, which is the capital of Zimbabwe and the<br />
centre of industrial production and commerce in the country (Figure 1, left). In comparison to other sub-<br />
Saharan countries, Harare, with its approx. 1.6 million inhabitants (2007), does not follow the general trend of<br />
fast urban growth in mostly slum settlements. According to UN-HABITAT (2008), only 6.3% of the population of<br />
Harare was living in slum conditions (sub-Saharan average: 63%). However, with limited expansion of Harare’s<br />
housing stock, backyard shacks and illegal extensions to formal housing units are a dominant feature for the<br />
poor and much of the middle class whose incomes do not qualify them for private sector housing (UN-HABITAT<br />
2008).<br />
For this area, GeoEye-1 satellite data with 0.5 m spatial resolution for the panchromatic band and 2.0 m for<br />
the multispectral bands were acquired on 26 April 2010. The data were delivered already as a pan-sharpened<br />
multispectral data set with 4 bands in the visible and near infrared spectrum.<br />
The ultimate goal of the project is the characterisation of the building stock of Harare through automated<br />
information extraction. This is achieved with a processing chain based on morphological image<br />
analysis taking into account the local image contrast and shadows. The processing chain provides information<br />
on the building height derived from the shadow length, the building size based on the contrast, and the<br />
vegetation density based on a vegetation index. More detailed information on the methodology is provided in<br />
Kemper et al. (2009).<br />
Figure 1. GeoEye-1 image over the city of Harare (left) and automatically extracted building stock (right).<br />
3. Methodology<br />
The sampling scheme for the definition of the reference data set of buildings has two phases: in the first phase<br />
a grid of points covering the entire image is defined for visual interpretation. In the second phase a small set of<br />
buildings is selected for the field survey. In the following paragraphs we will first specify which building<br />
characteristics are collected in each of those sample sets and second, describe the sampling method.<br />
a. Building characteristics<br />
In view of building a reference data set for quality assessment and further analysis of the building stock,<br />
several building attributes were defined for both the visual interpretation and the field survey. Table 1 shows<br />
the attributes and categories for each and whether the attribute is collected in the visual interpretation, in the<br />
field survey or in both. For the visual interpretation a procedure was implemented in ESRI ArcGIS to collect the<br />
information. In the field survey the information is collected on print-out evaluation sheets.<br />
Table 1. Building attributes and values for visual interpretation (VI) and/or field survey (FS).<br />
Attribute | VI | FS | Values<br />
Functional use | x | x | Industrial; Commercial/business; Education/government/hospital; Mixed use; Residential<br />
Level of income (only for residential use) | x | x | High or medium; Low; Very low (squatter)<br />
Size | x | x | Area of the digitised building footprint<br />
Height | | x | Number of storeys<br />
Arrangement | | x | Attached buildings; Single buildings<br />
Main construction material | | x | Concrete; Bricks; Corrugated iron; Assembled material; Other<br />
Degree of planning | | x | Planned; Not planned (informal)<br />
b. Master sample grid and systematic sampling by visual image interpretation<br />
The basis for the sampling is a regularly spaced 400 m grid which covers the image mosaic in its full extent,<br />
as shown in Figure 2 (master grid). From the initial grid (1988 cells), cells which are not fully covered by the<br />
image and those which are to a large extent affected by cloud cover were removed (see “No data” cells in<br />
Figure 2). Likewise, cells affected by the associated cloud shadow were removed, as the thematic information<br />
extracted in those areas is not reliable. In total, the final sample grid is composed of 1662 cells covering an<br />
area of approximately 266 km².<br />
In each of the 400 m grid cells the four centroids of the 200 m sub-grid are selected (see Figure 2 inset). Those<br />
6648 points are used for the visual interpretation of the building stock. If the point falls on a building, three<br />
attributes are collected: functional use, likely income level (for residential use) and size (see Table 1).<br />
Figure 2. Sampling design: the main figure shows the sampling grid over the image extent with the stratification into<br />
three classes and the location of the 50 sample cells (clusters) for the field survey. The inset shows the example of one<br />
grid cell with the four sample points (systematic) for the visual interpretation.<br />
c. Cluster sampling for field survey<br />
A smaller sample of points was defined for the field survey. Field surveys provide the opportunity to observe<br />
more than the three attributes from visual image interpretation, because the surveyor is close to the object of<br />
interest and can view it from several sides. In addition to the attributes from the visual interpretation, the field<br />
survey also aims at collecting the height, arrangement, main construction material, and the degree of planning<br />
(see Table 1).<br />
We applied cluster sampling to reduce the survey cost and to obtain, at the same time, a set of buildings<br />
representative of the building stock of Harare. The sampling was performed in three steps:<br />
1. Stratification of the automatically extracted building stock (classification of the 1662 cells from the 400<br />
m x 400 m grid)<br />
2. Stratified sampling of 50 cells<br />
3. Sampling of single buildings.<br />
In the first step, the 1662 cells were stratified on the basis of the automatically extracted built-up information.<br />
Three strata were defined based on the relative built-up area and the average height per cell. Table 2 shows<br />
the cluster definition and the number of cells included in each.<br />
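The first two steps can be sketched as follows; the `stratify` helper, its input fields and the stratum thresholds below are illustrative assumptions, not the cut-offs actually used for Harare:

```python
import random

def stratify(cells, n_sample=50, seed=0):
    """Assign each cell to one of three strata from its built-up share
    and average height, then draw a stratified sample of n_sample cells
    proportional to stratum size. Thresholds are illustrative only."""
    rng = random.Random(seed)
    strata = {1: [], 2: [], 3: []}
    for c in cells:                      # c: dict with 'builtup' (0-1), 'height' (m)
        if c['builtup'] < 0.05:
            strata[1].append(c)          # sparsely built-up cells
        elif c['height'] < 5.0:
            strata[2].append(c)          # dense, low-rise cells
        else:
            strata[3].append(c)          # dense, taller cells
    total = sum(len(v) for v in strata.values())
    sample = []
    for members in strata.values():
        if not members:
            continue
        # allocate sample cells proportionally to the stratum size
        n_k = max(1, round(n_sample * len(members) / total))
        sample.extend(rng.sample(members, min(n_k, len(members))))
    return strata, sample
```

Proportional allocation keeps the 50 surveyed cells distributed like the full grid across the three strata.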
Table 2. Stratification of the 1662 cells and definition of the subsample of 50 master grid cells for the building sample<br />
definition.<br />
Cluster | Built-up area (%) | Average height (m) | Nb of cells | Percentage of cells (%) | Built-up area (km²) | Percentage of total built-up area (%) | Nb sampling cells | Percentage of sampling cells (%)<br />
Figure 3. Final step of the cluster sampling to identify buildings for the field survey.<br />
In some cases, this method led to the selection of fewer than five buildings. This was the case either in cells<br />
where only few buildings were present or where buildings were very small and dispersed or concentrated in<br />
one area. In those cases the following was applied:<br />
• All buildings were included in case there were five or fewer buildings.<br />
• Probability Proportional to Size (PPS) sampling (using the building size) was applied in case of more than<br />
five buildings. For this purpose all present buildings were digitised and the area computed. The method is<br />
most useful when the sampling units vary considerably in size, because it assigns each building a selection<br />
probability proportional to its footprint area.<br />
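A probability-proportional-to-size draw can be sketched as follows; the `pps_sample` helper and its with-replacement simplification are illustrative assumptions, not the procedure used in the survey:

```python
import random

def pps_sample(buildings, n, seed=0):
    """Probability-proportional-to-size draw (with replacement for
    simplicity): a building's chance of selection at each draw is
    proportional to its footprint area. Minimal sketch only."""
    rng = random.Random(seed)
    total = sum(b['area'] for b in buildings)
    picks = []
    for _ in range(n):
        r = rng.uniform(0, total)        # position on the cumulative area line
        acc = 0.0
        for b in buildings:
            acc += b['area']
            if r <= acc:                 # the building whose segment contains r
                picks.append(b)
                break
    return picks
```

A building covering 100 m² is therefore drawn about a hundred times more often per draw than a 1 m² shack.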
Besides the coordinates of the sample buildings, the person performing the field survey is equipped with one<br />
map for each of the selected 50 cells showing the relative location in the city area and the exact location of the<br />
sample buildings. The building characteristics are recorded on an evaluation sheet (see Figure 4).<br />
Figure 4. Material for the field survey. Left: Example of the maps that were provided for each of the 50 cells showing<br />
their location in the city area and the location of the sample buildings. Right: evaluation form.<br />
4. Results<br />
The visual interpretation was accomplished by four different interpreters, who spent approximately 34 hours<br />
on the visual interpretation. To ensure that they followed the same criteria, they were equipped with an<br />
interpretation key that also provided examples. In addition, interpreters marked doubtful cases and discussed them<br />
together. Finally, the data were randomly checked for consistency.<br />
Out of the 6648 points that were visually interpreted on the image, 562 buildings were identified and the<br />
footprint of each of them was digitised. Table 3 below summarises the main characteristics of the building<br />
stock. More than half of the mapped buildings were identified as residential buildings and the majority of the<br />
residential buildings (78%) were marked as high or medium income levels. These groups are also clearly<br />
distinguishable by the dwelling size.<br />
Table 3. Summary of the results of the visual interpretation of 6648 points in Harare.<br />
Category | Residential high or medium income | Residential low income | Industrial | Commercial/business | Education/government | Mixed use<br />
Number | 249 | 70 | 160 | 38 | 31 | 12<br />
Percentage | 44 | 12 | 29 | 7 | 6 | 2<br />
Average size [m²] | 221 | 164 | 3237 | 973 | 742 | 1505<br />
At the time of writing the field assessment was not yet available. Hence, the quality of the visual interpretation<br />
could not be assessed.<br />
5. Discussion and conclusion<br />
The methodology described in this paper was designed to allow collecting information on the building stock of<br />
a city in a systematic and repeatable way. This is important in situations where alternative reference data is<br />
missing or out-dated. The information collected remotely from the visual interpretation can be used to<br />
validate automated information extraction procedures. This can be based on standard confusion matrices with<br />
information about omission, commission and overall quality, or using more sophisticated tools, which take into<br />
account also the area and/or shape of structures (Congalton & Green, 2009).<br />
The experience from the visual interpretation shows that determining the usage categories is ambiguous in<br />
some cases (especially between different residential classes), and defining the classes as objectively as<br />
possible requires local knowledge. Similarly, the definition of a<br />
building‘s footprint is not always straightforward for complex buildings or complex roof structures (e.g.<br />
industrial buildings with non-flat roof segments). In both cases it is important to establish clear interpretation<br />
keys and to maintain exchange among the interpreters during the process. In this context, it might also be<br />
important to consider what type of processing the data are compared with. Will the procedure map built-up areas,<br />
which may also include open spaces (e.g. Pesaresi et al. 2008), or will it extract building footprints?<br />
One advantage of our sampling approach lies in the partial overlap of the two data sets, which allows<br />
evaluating the consistency of the reference data. Building characteristics collected by two<br />
different persons (field surveyor and interpreter) can be compared and thereby assessed. Such a comparison will<br />
allow drawing conclusions not only on the quality of the reference data set but also on the<br />
design of future samplings.<br />
Acknowledgements<br />
The research is conducted within the frame of the FP7 GMES project G-MOSAIC (GMES Services for<br />
Management of Operations, Situation Awareness and Intelligence for Regional Crises). The GeoEye-1 image<br />
was acquired in the frame of DAP ID DAP_MG2b_23. We would like to thank Xavier Blaes from JRC for his<br />
contribution in the visual image interpretation.<br />
References<br />
BELLHOUSE, D.R., 1988, Systematic sampling, Handbook of Statistics, vol. 6, ed. P.R. Krisnaiah, C.R. Rao, pp. 125-<br />
146, North-Holland, Amsterdam.<br />
CONGALTON, R.G. and GREEN, K., 2009, Assessing the accuracy of remotely sensed data: principles and practices,<br />
second edition. CRC Taylor & Francis, Boca Raton, 183 p.<br />
DUNN, R. and HARRISON, A.R., 1993, Two-dimensional systematic sampling of land use. Journal of the Royal<br />
Statistical Society Series C: Applied Statistics 42 (4), pp. 585-601.<br />
KEMPER, T., WANIA, A. and PESARESI, M., 2009, Supporting slum mapping using very high resolution satellite data.<br />
In Conference Proceedings: 33rd International Symposium on Remote Sensing of Environment. Tucson,<br />
Arizona: International Center for Remote Sensing of Environment (ICRSE); p. 480-483.<br />
OSBORNE, J.G., 1942, Sampling errors of systematic and random surveys of cover-type areas, Journal of the<br />
American Statistical Association 37, pp. 256-264.<br />
PESARESI, M., GERHARDINGER, A. and KAYITAKIRE, F., 2008, A robust built-up area presence index by anisotropic<br />
rotation-invariant textural measure. IEEE J. Sel. Topics Appl. Earth Observ. Remote Sens. 1 (3), pp. 180-192<br />
WOLTER, K.M., 1984, An investigation of some estimators of variance for systematic sampling. Journal of the<br />
American Statistical Association 79 (388), pp. 781-790.<br />
UN-HABITAT, 2008, The state of African cities 2008: a framework for addressing urban challenges in Africa.<br />
Accessed 12.09.2011: http://www.unhabitat.org/pmss/getElectronicVersion.aspx?nr=2574&alt=1<br />
SHORT PAPER<br />
On the Validation of an Automatic Roofless Building Counting<br />
Process<br />
GUEGUEN L., PESARESI M. and SOILLE P.<br />
Joint Research Centre of the European Commission, Italy<br />
lionel.gueguen@jrc.ec.europa.eu<br />
Abstract: Roofless buildings are encountered in conflicts and disasters as well as on<br />
construction sites. Thanks to recent progress in image information extraction methodologies,<br />
characterising and counting roofless buildings can now be done by an automatic process. The<br />
result is a map of roofless building fuzzy membership, which can be used to estimate the number<br />
of roofless buildings. While less accurate than photo-interpretation, such a methodology<br />
allows assessing a crisis situation in a very short period of time (a few minutes). In this paper, the<br />
validity of such automatic products is assessed by comparison with a photo-interpretation based<br />
roofless map. The validation is conducted on a WorldView-1 panchromatic image representing the<br />
city of Tskhinvali, Georgia.<br />
Keywords: automatic detection, ROC.<br />
1. Introduction<br />
The contribution of space technologies was demonstrated to be effective for regional/continental damage<br />
assessment using low- or medium-resolution remotely sensed data input (ranging from 30m to 1 km), and<br />
both automatic and manual interpretation approaches have been successfully used for extraction of<br />
information [1]. With new image products providing detailed scene description, information extracted at high<br />
resolution (ranging from 5m to 10m) is crucial for calibration and estimation of the reliability of low- and<br />
medium-resolution assessment, planning logistics for relief action in the field immediately after the event, and<br />
planning the resources needed for recovery and reconstruction.<br />
Local or detailed damage assessment can be addressed using very-high-resolution (VHR) satellite data with a<br />
spatial resolution ranging from 0.5 to 2.5m. At this level, the operational methodology for extraction of the<br />
information is based on manual photo-interpretation of the satellite data which are processed on the screen<br />
by the photo-interpreter as for any other aerial imagery. The drawbacks of traditional photo-interpretation<br />
methodology are linked first to the time and cost needed for manual processing of the data and second to the<br />
difficulty in maintaining coherent interpretation criteria in case there are large numbers of photo-interpreters<br />
working in parallel for interpretation of wide areas in a short time. In order to tackle the problem, automatic<br />
processes for detecting damaged buildings were presented in the literature, either exploiting optical sensors<br />
[2]-[4] or SAR images [5], [6]. Nevertheless, these methods are generally dedicated to one type of damage and<br />
require pre- and post-damage images. Following some particular events, like an armed conflict or a hurricane,<br />
the observable damages are roofless buildings, as illustrated in Figure 1.<br />
Figure 1. Example of roofless buildings observed in VHR optical images. (a) Following the conflict in<br />
Georgia (WorldView-1). (b) Following the conflict in Sri Lanka (WorldView-2). (c) Following the conflict in Nagorno-<br />
Karabakh (QuickBird). (d) Construction site in Haiti (aerial sensor).<br />
Some automatic methods were proposed in the literature [7] for characterising roofless buildings in VHR<br />
panchromatic images. The output product is a fuzzy membership map which assigns to each pixel an index<br />
value between 0 and 1 representing its membership of the semantic class “roofless building”. Such methods are<br />
based on the aggregation of morphological characteristics. More recently, a method for estimating the number<br />
of roofless buildings was presented in [8].<br />
The validation of such methods is crucial in understanding the reliability of the given products. This paper<br />
addresses the problem of validating the automatic characterisation and counting of roofless buildings in VHR<br />
optical images. In order to perform the validation, a WorldView-1 panchromatic image has been photo-interpreted,<br />
resulting in geolocalised points on top of the roofless buildings. First, such a collection of points<br />
enables assessing the validity of the fuzzy membership map through a receiver operating characteristic analysis,<br />
estimating the probabilities of false alarms and missed detections (commissions, omissions). Secondly, a<br />
method for estimating the global number of roofless buildings is recalled [8]. Such a method exploits the<br />
availability of partial ground truth in order to learn an optimal parameter, which then gives the number estimate.<br />
The bias and variance of this estimate are experimentally derived, giving an understanding of the product<br />
reliability.<br />
2. Approximate ROC Analysis<br />
A VHR image can be formally represented by a map function I(x) from the grid space to the measurement<br />
space. By automatic processing of the image I(x), an automatic detection of relevant structures can be<br />
derived. Let m(x) be the roofless building fuzzy membership function associating a confidence to each pixel x.<br />
Such a function takes its values in the interval [0, 1]. Due to the underlying process, the fuzzy membership<br />
contains blobs around roofless buildings, as depicted in Figure 2.<br />
Figure 2. (a) A subregion of the VHR image I(x). (b) The corresponding fuzzy membership function m(x).<br />
Since photo-interpretation and digitisation are time consuming tasks, roofless buildings are generally<br />
identified by geolocalised points. The comparison of points to blobs is not trivial and requires some<br />
approximations. Instead of reducing the fuzzy membership map to points, a mask of roofless buildings is<br />
synthesised from the points. It is assumed that the roofless buildings have an average shape corresponding to<br />
a disk of diameter 15 metres, such that a disk is centred at each geolocalised point, as depicted in Figure 3.<br />
Such a mask can formally be represented by the binary function g(x).<br />
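The mask synthesis can be sketched as follows, assuming a 0.5 m pixel size for illustration; `disk_mask` is a hypothetical helper, not the authors' code, and it avoids raster libraries for self-containedness:

```python
def disk_mask(shape, points, diameter_m=15.0, pixel_m=0.5):
    """Build the binary ground-truth mask g(x) by stamping a disk of the
    assumed average building diameter on each photo-interpreted point.
    shape is (rows, cols); points are (row, col) centres in pixels."""
    rows, cols = shape
    r_px = (diameter_m / 2.0) / pixel_m          # disk radius in pixels
    mask = [[0] * cols for _ in range(rows)]
    for (pr, pc) in points:
        # only scan the bounding box of the disk, clipped to the image
        r0, r1 = max(0, int(pr - r_px)), min(rows, int(pr + r_px) + 1)
        c0, c1 = max(0, int(pc - r_px)), min(cols, int(pc + r_px) + 1)
        for i in range(r0, r1):
            for j in range(c0, c1):
                if (i - pr) ** 2 + (j - pc) ** 2 <= r_px ** 2:
                    mask[i][j] = 1
    return mask
```

With a 0.5 m pixel, the 15 m disk is a 15-pixel-radius stamp around each point.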
In order to validate the automatic process, both functions m(x) and g(x) are compared by computing the<br />
receiver operating characteristic pixelwise, providing an approximation of the false alarm and missed<br />
detection probabilities:<br />
P_fa(t) = Pr[ m(x) ≥ t | g(x) = 0 ]   (1)<br />
P_md(t) = Pr[ m(x) &lt; t | g(x) = 1 ]   (2)<br />
where t is the threshold applied to m(x). Varying the threshold t gives the parametric function<br />
t ↦ (P_fa(t), P_md(t)), which is commonly called the ROC curve. As the ground truth is synthesised from geolocalised<br />
points, the ROCs are approximations of the real errors.<br />
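Equations (1)-(2) can be computed pixelwise as in this sketch; flat lists stand in for rasters, and the `roc_point`/`roc_curve` names are illustrative:

```python
def roc_point(membership, mask, t):
    """Approximate ROC point at threshold t:
    P_fa = Pr[m >= t | g = 0], P_md = Pr[m < t | g = 1].
    membership and mask are flat sequences of equal length."""
    fa = md = n0 = n1 = 0
    for m, g in zip(membership, mask):
        if g == 0:
            n0 += 1
            if m >= t:
                fa += 1       # negative pixel above threshold: false alarm
        else:
            n1 += 1
            if m < t:
                md += 1       # positive pixel below threshold: missed detection
    return fa / n0 if n0 else 0.0, md / n1 if n1 else 0.0

def roc_curve(membership, mask, n_thresh=11):
    """Sample the parametric curve t -> (P_fa(t), P_md(t)) on [0, 1]."""
    return [roc_point(membership, mask, k / (n_thresh - 1))
            for k in range(n_thresh)]
```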
Figure 3. A WorldView-1 panchromatic scene of size 13 × 5 km², representing the city of Tskhinvali, is depicted on the top.<br />
Geolocated points indicating the roofless buildings, which were obtained by photo interpretation, are overlaid in<br />
green. A zoom of an area of interest is displayed below. The points are extended to 15m diameter disks to build a<br />
ground truth mask.<br />
The ROC associated with the full scene analysis is depicted in Figure 4. At the equal error point, the fuzzy<br />
membership m(x) provides 20% of missed detections and false alarms in comparison to the synthesised mask<br />
g(x). Thus, such an automatic product can be used in crisis scenarios in order to quickly assess the extent of<br />
damages.<br />
Figure 4. The receiver operating characteristic curve associated with the roofless building detector m(x).<br />
The global probability of error is obtained knowing the prior probability of damaged locations in the image,<br />
π = Pr[ g(x) = 1 ]. Assuming spatial independence of the variables, the pixelwise probability of error is then<br />
obtained by:<br />
P_e(t) = π P_md(t) + (1 − π) P_fa(t)   (3)<br />
t_e = argmin_t P_e(t)   (4)<br />
The threshold t_e giving the minimum global error probability provides the best pixel classification. However,<br />
the minimisation of the pixel error can be meaningless depending on the application.<br />
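The error minimisation can be sketched as follows; this is a self-contained toy version on flat lists, and the `error_probability`/`best_threshold` names are illustrative:

```python
def error_probability(membership, mask, t):
    """Pixelwise error P_e(t) = pi * P_md(t) + (1 - pi) * P_fa(t),
    with pi = Pr[g = 1] estimated directly from the mask."""
    n = len(mask)
    n1 = sum(mask)                       # positive (roofless) pixels
    n0 = n - n1
    pi = n1 / n
    fa = sum(1 for m, g in zip(membership, mask) if g == 0 and m >= t)
    md = sum(1 for m, g in zip(membership, mask) if g == 1 and m < t)
    p_fa = fa / n0 if n0 else 0.0
    p_md = md / n1 if n1 else 0.0
    return pi * p_md + (1 - pi) * p_fa

def best_threshold(membership, mask, thresholds):
    """Threshold minimising the global pixel error probability."""
    return min(thresholds, key=lambda t: error_probability(membership, mask, t))
```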
3. Roofless Building Count Estimation<br />
A method for estimating the number of roofless buildings from a fuzzy membership map was proposed in [8].<br />
In this paragraph, the method is briefly reviewed.<br />
We assume that a subpart of the whole scene is photo-interpreted and the corresponding ground truth is used<br />
to select the best threshold of the roofless membership function m(x). Knowing the average size of a roofless<br />
building and the total destroyed area in the scene, the ratio of both quantities indicates the number of<br />
roofless buildings. The total hit area from the ground truth mask is given by A_g = Σ_{i=1}^{N} g(x_i), where N<br />
is the number of pixels in the image. The average building size s is given by the selected disk area (a disk of<br />
diameter 15 m). Then, by construction, the true number of roofless buildings is n = A_g / s.<br />
To estimate this number, an estimate of the ROC curve and of the prior probability is required. The ROC curve<br />
can be estimated on the photo-interpreted subregion thanks to equations (1)-(2). For some threshold t,<br />
the estimated total area is thus given by A(t) = Σ_{i=1}^{N} [ m(x_i) ≥ t ]. The roofless building detection produces<br />
two types of errors: the false alarms and the missed detections. While the false alarms increase the estimated<br />
area, the missed detections decrease it. Therefore, an optimal threshold t* can be selected such that both types<br />
of error compensate:<br />
π P_md(t*) = (1 − π) P_fa(t*)   (5)<br />
where π is the prior probability estimated from the considered subregion. The total roofless area<br />
estimate is thus given by A(t*) and the number of roofless buildings is obtained by n* = A(t*) / s.<br />
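The compensation criterion and the resulting count can be sketched as follows; a self-contained illustration on flat lists (real inputs would be rasters), with illustrative names:

```python
def count_estimate(membership, mask, disk_area_px, thresholds):
    """Count estimator sketch: pick t* where pi * P_md(t) best matches
    (1 - pi) * P_fa(t) (errors compensate, eq. 5), threshold the whole
    membership map, and divide the hit area by the average building area."""
    n = len(mask)
    n1 = sum(mask)
    pi = n1 / n
    best_t, best_gap = thresholds[0], float('inf')
    for t in thresholds:
        fa = sum(1 for m, g in zip(membership, mask) if g == 0 and m >= t)
        md = sum(1 for m, g in zip(membership, mask) if g == 1 and m < t)
        p_fa = fa / (n - n1) if n != n1 else 0.0
        p_md = md / n1 if n1 else 0.0
        gap = abs(pi * p_md - (1 - pi) * p_fa)   # how far from compensation
        if gap < best_gap:
            best_t, best_gap = t, gap
    area = sum(1 for m in membership if m >= best_t)  # A(t*), in pixels
    return area / disk_area_px                        # n* = A(t*) / s
```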
4. Validation of the Count Estimation<br />
Having the ground truth of a subpart of the scene, the number of roofless buildings can be estimated. In this<br />
section, the count estimate is analysed depending on the available ground truth coverage.<br />
To run the first experiment, a coverage size is selected, expressed as a percentage of the scene area. Then,<br />
a square of the chosen size is randomly selected from the scene and is considered as representing the<br />
available ground truth. Finally, an optimal threshold t* is derived from the knowledge of the approximate<br />
ground truth and the membership function in this subregion thanks to the optimisation criterion of (5).<br />
Applying the threshold on the whole membership function enables estimating the total area A(t*), and thus the<br />
global number of roofless buildings n* = A(t*) / s. For one chosen size, the estimation is run 50 times on<br />
randomly selected subregions, providing information about the estimator variability. The obtained count<br />
estimates are summarised in Figure 5, where the ground truth coverage varies. The graph represents the<br />
25th, 50th and 75th percentiles of the estimate depending on the ground truth coverage. For any<br />
coverage, the median estimate is close to the true number n. However, the variation of the<br />
estimator decreases when the available ground truth coverage increases, such that in half of the cases the<br />
count estimate lies within the interquartile range shown in Figure 5.<br />
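The variability experiment can be mimicked on toy 1-D data as follows; the slice-based subregions and the median-based threshold rule are simplifying assumptions, not the paper's exact protocol:

```python
import random

def subregion_variability(membership, mask, n_runs=50, frac=0.06, seed=0):
    """Repeatedly take a random contiguous slice covering `frac` of the
    scene as surrogate ground truth, derive a threshold from it, and
    record an area estimate per run; the spread across runs mirrors the
    box plots of Figure 5. Toy sketch on flat lists."""
    rng = random.Random(seed)
    n = len(mask)
    w = max(1, int(n * frac))
    estimates = []
    for _ in range(n_runs):
        s = rng.randrange(0, n - w + 1)
        sub_m, sub_g = membership[s:s + w], mask[s:s + w]
        # threshold from the subregion: median membership of positives,
        # falling back to 0.5 when the slice holds no positives
        pos = sorted(m for m, g in zip(sub_m, sub_g) if g == 1)
        t = pos[len(pos) // 2] if pos else 0.5
        estimates.append(sum(1 for m in membership if m >= t))
    estimates.sort()
    q = lambda p: estimates[int(p * (len(estimates) - 1))]
    return q(0.25), q(0.5), q(0.75)   # interquartile summary of the runs
```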
Figure 5. Box plot representation of the roofless count estimator variability, where each box represents the 25th,<br />
50th and 75th percentiles. The horizontal axis represents the percentage of ground truth available with respect to<br />
the whole scene.<br />
The previous analysis does not incorporate the expert knowledge used for selecting the subregion to be photo-interpreted.<br />
To gain understanding, a second analysis is conducted with the same data set. Random<br />
subregions R of fixed size are selected from the image and their optimal threshold is computed, giving an<br />
estimate n*_R of the global number. Then, each pixel in the region is associated with this estimate, such<br />
that an average of the estimates can be computed per pixel:<br />
n̄(x) = ( Σ_{R ∋ x} n*_R ) / |{ R : x ∈ R }|   (6)<br />
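Equation (6) can be implemented as in this sketch, using 1-D intervals in place of image subregions for brevity; `average_estimate_map` is an illustrative name:

```python
def average_estimate_map(n_pixels, regions):
    """Per-pixel average of region-level estimates (eq. 6): every pixel
    covered by a region R receives that region's estimate n_hat, and the
    map averages over all regions covering the pixel.
    regions is a list of (start, stop, n_hat) half-open intervals."""
    sums = [0.0] * n_pixels
    counts = [0] * n_pixels
    for start, stop, n_hat in regions:
        for i in range(start, stop):
            sums[i] += n_hat
            counts[i] += 1
    # pixels covered by no region get 0.0 by convention
    return [s / c if c else 0.0 for s, c in zip(sums, counts)]
```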
Such a map n̄(x) gives insight into the contribution of each pixel to the global number estimate given a fixed<br />
subregion size. In addition, it enables understanding the effects of over- or underestimation. The map is<br />
computed for regions occupying 6% of the total area and is displayed in Figure 6.<br />
Figure 6. The map n̄(x) is represented and colour coded for the whole scene presented in Figure 3.<br />
Underestimation is colour coded in blue, while overestimation is coded in red and yellow. The ground truth points are<br />
overlaid as black dots.<br />
One can observe that considering subregions which do not contain any roofless buildings leads to an<br />
underestimation of the total number of destroyed buildings (blue parts). Two subregions, one producing an accurate<br />
estimate and one an overestimation, are selected and represented in Figure 7.<br />
Figure 7. a) A subregion producing an accurate estimate. b) A subregion producing an overestimate.<br />
The subregions producing overestimations are contaminated by clouds of smoke modifying the illumination<br />
conditions and the automatic process response. The subregions producing accurate estimations contain a<br />
variety of patterns, including roofless buildings, and they are representative of the global scene content.<br />
By rapid visual inspection, an expert is able to select a subregion to be photo-interpreted such that it<br />
represents the overall scene content, avoiding problematic subregions containing clouds or a low density of<br />
roofless buildings. Analysing the histogram of the green part of Figure 7 gives a confidence interval for the<br />
count estimate of [745, 1453]. By sampling correctly, the interval is improved in comparison to the<br />
worst-case interval [450, 2000] displayed in Figure 5 for subregions of 6% coverage.<br />
5. Conclusion<br />
This paper presents the validation of an automatic roofless building counting process exploiting photo-interpreted<br />
ground truth. The intermediate fuzzy membership map enables recovering the photo-interpretation<br />
up to 20% of false alarms and missed detections. Then, the roofless membership map is exploited<br />
in order to estimate the number of roofless buildings in the whole scene, exploiting partial photo-interpreted<br />
ground truth. The validation results show that the number of roofless buildings can be well approximated<br />
considering a subregion representative of the situation.<br />
References<br />
[1] A. S. Belward, H.-J. Stibig, H. Eva, F. Rembold, T. Bucha, A. Hartley, R. Beuchle, D. Al Khudhairy, M.<br />
Michielon, and D. Mollicone, “Mapping severe damage to land cover following the 2004 Indian Ocean<br />
tsunami,” International Journal of Remote Sensing, vol. 28, no. 13, pp. 2977–2994, 2007.<br />
[2] M. Pesaresi, A. Gerhardinger, and F. Haag, “Rapid damage assessment of built-up structures using VHR<br />
satellite data in tsunami-affected areas,” International Journal of Remote Sensing, vol. 28, no. 13, pp. 3013–<br />
3036, 2007.<br />
[3] M. Pesaresi and E. Pagot, “Post-conflict reconstruction assessment using image morphological profile and<br />
fuzzy multicriteria approach on 1-m- resolution satellite data; application test on the Koidu village in Sierra<br />
Leone, Africa,” in Proc. Urban Remote Sensing Joint Event, Apr. 11–13, 2007, pp. 1–8.<br />
[4] E. Pagot and M. Pesaresi, “Systematic study of the urban postconflict change classification performance<br />
using spectral and structural features in a support vector machine,” IEEE Journal of Selected Topics in Applied<br />
Earth Observations and Remote Sensing, vol. 1, no. 2, pp. 120–128, Jun. 2008.<br />
[5] M. Matsuoka and F. Yamazaki, “Use of satellite SAR intensity imagery for detecting building areas damaged<br />
due to earthquakes,” Earthquake Spectra, vol. 20, no. 3, pp. 975–994, 2004.<br />
[6] P. Gamba, F. Dell’Acqua, and G. Trianni, “Rapid damage detection in the Bam area using multitemporal SAR<br />
and exploiting ancillary data,” IEEE Transactions on Geoscience and Remote Sensing, vol. 45, no. 6, pp. 1582–<br />
1589, Jun. 2007.<br />
[7] L. Gueguen, M. Pesaresi, P. Soille, A. Gerhardinger, “Morphological Descriptors and Spatial Aggregations<br />
for Characterizing Damaged Buildings in Very High Resolution Images,” Proc. of the ESA-EUSC-JRC 2009<br />
conference. Image Information Mining: automation of geospatial intelligence from Earth Observation, Nov.<br />
2009.<br />
[8] L. Gueguen, M. Pesaresi, A. Gerhardinger, P. Soille, “Characterizing and Counting Roofless Buildings in Very<br />
High Resolution Optical Images,” IEEE Geoscience and Remote Sensing Letters, in press.<br />
ABSTRACT<br />
Evacuation plans: interest and limits<br />
ROUMAGNAC A. and MOREAU K.<br />
PREDICT Services, France<br />
alix.roumagnac@predictservices.com<br />
Abstract<br />
Facing natural disasters, the elaboration and use of evacuation plans becomes crucial. Emergency<br />
management requires timely, fast and reliable information. To this end, the<br />
use of space data has been identified and used to elaborate evacuation plans in a short time. The<br />
purpose of this communication is to present, through an example, the interest as well as the limits<br />
and areas for improvement of such initiatives.<br />
As a subsidiary of EADS Geo-information Services, Météo-France and BRL, Predict Services has been<br />
working for 8 years on the elaboration of safety plans and on supporting decisions on the activation of these<br />
plans, in France and Haïti. Its crisis management expertise and knowledge have been employed to<br />
evaluate the efficiency of evacuation plans elaborated with a short-time method and space data in<br />
Haïti.<br />
The communication will present the analysis of the short-time, space-data mapping method used<br />
for the elaboration of emergency plans in Haïti. The analysis will highlight the limits and the aspects<br />
that should be improved through two examples: the plan elaborated for Port-au-Prince, and the<br />
one elaborated for Léogâne facing a cyclone warning.<br />
The analysis will focus on the spatialisation of threats, the location of final accommodation, priority<br />
evacuation areas, escape roads, and public reception facilities.<br />
SHORT PAPER
Outside the Matrix, a review of the interpretation of Error Matrix results
VEGA EZQUIETA P. 1, TIEDE D. 2, JOYANES G. 1, GORZYNSKA M. 1, USSORIO A. 1
1 European Union Satellite Centre
2 Z-GIS Research, University of Salzburg
p.vega@eusc.europa.eu, a.jimenez@eusc.europa.eu
1. Introduction
In the context of the validation of a geographic information product, several methodologies are potentially applicable. The Error Matrix is a widely accepted validation methodology, for many reasons. First of all, it is a scientific approach that follows the same methodology for all validation processes, so the results of different processes are comparable to each other. Also, the Error Matrix “…is a very effective way to represent map accuracy in that the individual accuracies of each category are plainly described along with both the errors of inclusion (commission errors) and errors of exclusion (omission errors) present in the classification.” (Congalton and Green, 2009).
As defined by Congalton and Green, the Error Matrix is without any doubt an optimal way of measuring the degree of convergence of two datasets: the one used for the map, and the one used as reference data, also referred to as “ground truth”.
The implementation of this methodology as a valid validation method (if you’ll forgive the repetition) has to take into consideration the conditions of the input data. The Error Matrix processes classified data: both the Map Data (or Classified Data) and the Reference Data (or Ground Truth) need to be provided to the Error Matrix under a classification scheme. The classification scheme must follow certain conditions; in particular, its classes must be mutually exclusive and totally exhaustive. To these two conditions we would add the concept of being semiotically balanced, which will be explained later in this article.
Applying the Error Matrix methodology to products that are not classified following these criteria produces distortions in the interpretation. The problem starts when some products, commonly accepted by the user community, cannot match the conditions of Congalton and Green. The lack of any analytical methodology alternative to this one means that many products are validated with an Error Matrix even if they do not match these conditions.
The distortions in the interpretation of these products actually come from the implementation of a methodology that, even if optimal in some cases, is not universal. The distortions of the Error Matrix lie, actually, outside the Matrix.
In this paper we make an intellectual exercise: we take several example products from operational activations of the G-MOSAIC project and test them against the Error Matrix (at least theoretically). This exercise will help us understand the limits of application of the Error Matrix.
2. Error Matrix in a nutshell
The purpose of this paper is not to explain in detail how an Error Matrix works; it is written under the assumption of a basic knowledge of Error Matrices and their application in validation. Nevertheless, a very schematic explanation is provided here for the sake of those who are not so familiar with the topic.
As defined by Congalton, “…an error matrix is a square array of numbers set out in rows and columns that expresses the number of sample units assigned to a particular category in one classification relative to the number of sample units assigned to a particular category in another classification.”
An example of an Error Matrix can be seen here:

                          Reference Data
                   Water   Crop   Trees   Row Total
  Map      Water      75      5      12          92
  Data     Crop       13     50      14          77
           Trees       2      3      76          81
     Column Total     90     58     102         250

Figure 1 Error Matrix Example
The columns represent one dataset (the Reference Data, for example), and the rows represent the Map Data (or Classified Data). So, on 50 occasions a sample of Crop was correctly assigned, because the Reference and the Map data matched.
Out of 250 samples (the sum of all row totals, or of all column totals), 201 (75 + 50 + 76) were correctly assigned, an overall accuracy of 80%. Nevertheless, this figure represents only an overall accuracy. The accuracy can also be calculated per class, and it differs between the point of view of the producer of the map and the point of view of the user of the map. For example, the classification of Trees has an accuracy of 74% from the point of view of the producer (76/102), but an accuracy of 93% from the point of view of the user of the map (76/81). This difference means that too few areas were classified as Trees in the map.
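The per-class figures above follow directly from the matrix. As a purely illustrative sketch (not part of the original paper), the overall, producer's and user's accuracies for the example in Figure 1 can be computed like this:

```python
# Illustrative sketch: computing overall, producer's and user's
# accuracy from the example Error Matrix of Figure 1.
# Rows = Map Data, columns = Reference Data.
matrix = {
    "Water": {"Water": 75, "Crop": 5,  "Trees": 12},
    "Crop":  {"Water": 13, "Crop": 50, "Trees": 14},
    "Trees": {"Water": 2,  "Crop": 3,  "Trees": 76},
}
classes = list(matrix)

total = sum(sum(row.values()) for row in matrix.values())  # 250 samples
correct = sum(matrix[c][c] for c in classes)               # 201 on the diagonal
overall = correct / total

for c in classes:
    col_total = sum(matrix[r][c] for r in classes)  # reference (column) total
    row_total = sum(matrix[c].values())             # map (row) total
    producer = matrix[c][c] / col_total             # producer's accuracy
    user = matrix[c][c] / row_total                 # user's accuracy
    print(f"{c}: producer {producer:.1%}, user {user:.1%}")
```

For Trees this yields roughly 74.5% producer's and 93.8% user's accuracy, consistent with the rounded figures quoted above.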
3. Typologies of products to be evaluated
The products to be tested against an Error Matrix were selected following a classification of products oriented to the type of data used to compose a map. A map itself, as a final product, is always a raster file where the values of the pixels are grouped to represent meaningful (if possible) information that the reader of the map can decode in his or her mind. The problem is that, from the point of view of validation, looking at the final map is in fact an error: the resampling method used to visualize the map can alter the values of the pixels, and some pixels do not represent geographical information (such as a scale bar) or do not represent ground information (such as a geographical grid).
For this reason, the data to be assessed should rather be the information of the map, not the map itself. And here, in assessing the information used to compose the map, is where we find different types of information. There is a commonly accepted difference between raster and vector as the two basic ways of storing geographical information. There are other ways to classify the information, however, and attending to the conditions for classification schemes exposed by Congalton, an appropriate classification of geographical data could be according to its continuity or discontinuity. Under this classification perspective, the geographical information used to compose a map can be either Coverage Composed or Feature Composed. What is meant by each of these two types is now explained:
Coverage Composed: Information is coverage composed if all the places of one specific area have a meaningful value. The first interpretation of this would be to believe that “coverage composed” is equal to “raster”, but this is not necessarily true. The following examples illustrate continuous datasets.
This dataset, for example, is a continuous vector coverage. The vector information represents the urban blocks, and the color the amount of damage identified. Transparent has been chosen as the color for “Not Damaged”, being meaningful for that reason. This dataset is continuous and mutually exclusive, so it could potentially be tested with the Error Matrix, as long as the process used to create the Reference Data is equivalent to the one used to create the Map Data.
This dataset is an abstract surface representing damage derived automatically. The indicators of damage are, in this case, the changes in the shadows of buildings. These indicators are then interpolated, generating a continuous surface. This product does not fully meet the conditions exposed by Congalton and Green, and because there is a difference between the data and the representation of the data, this can lead to distortions if validated with an Error Matrix.
This dataset is a classified image where all the information belongs to one of the classes of the scheme. This is the type of dataset that Congalton and Green use as an example when explaining the use of the Error Matrix, and it matches all the conditions perfectly. This dataset can be perfectly validated with that method, and the validation will provide relevant information.
© DigitalGlobe, 2010
Figure 2 Damage Assessment, Chile Earthquake. Concepción, Feb 2010.
Figure 3 Automatic Damage Assessment. Carrefour, Haiti, Jan 2010.
© DigitalGlobe, 2011
Figure 4 Example of a classified image
Feature Composed: Information is Feature Composed if not all places in the area have a meaningful value. The first interpretation of this would be to believe that “vector” is equivalent to “Feature Composed”, but this is not necessarily true, because a raster layer can be composed of mask-type data, in which certain areas are highlighted with a value and the background is left at a value of 0, meaning areas with no relevant meaning.
This dataset is a situation map. With a certain level of complexity, and if it includes terrain information, such maps are also known as topographic maps. This dataset is discontinuous vector data, with different geometries used to represent different objects. For this map, 29 different elements are extracted, each with a differentiated symbol in the legend. Some of these symbols are semantically related (main road and highway) and some are completely different classes. If this dataset is validated using an Error Matrix, distortions of the result will be produced.
This is a crisis map. It differs from the situation map in that it adds a layer of analysis to the background data. From this point of view it is more of an object-oriented dataset, where the quality of the information is directly related to the capacity to properly associate objects related to the issue that needs to be mapped. A map of built-up areas at risk would be one example. Another example is the one presented here in the figure on the right: the information represented are changes related to the altering of a river course. Once again, this dataset is not continuous, and it is not mutually exclusive either (there is only one class).
© Geoeye, 2011
Figure 5 Example of a situation map over Abidjan, Ivory Coast.
© Geoeye, 2011
Figure 6 Example of a crisis map, where relevant changes are highlighted. Costa Rica - Nicaragua, December 2010.
This is an analytical map. It includes information that cannot be validated from the ground, because it requires contextual interpretation: in this case, an evacuation plan representing the optimal routes to escape a given location. Each and every object (the evacuation routes) must be validated as a whole; taking samples along a route will not provide a real measure of how good the plan is.
© Digitalglobe, 2011<br />
Figure 7 Evacuation Plan Sana'a, Yemen. June 2011.<br />
4. Conditions of the Error Matrix, and how they are not met
The conditions exposed by Congalton and Green are clear and a good filter for the information to be processed by an Error Matrix. As mentioned, however, not all products accepted today as useful meet these conditions.
4.1. Mutually Exclusive:
The classification scheme of the input data can be considered mutually exclusive when “…each mapped area fall[s] into one and only one category or class.”
Surface Abstraction: Take a density analysis, for example. The automatic damage assessment of Carrefour, as seen in Figure 3, can be represented as different classes, but that is just a representation method. The number of classes is no more than an option in an algorithm that calculates the density of damage indicators. The classes cannot be considered mutually exclusive since, in fact, there are no classes, but an infinite range of density values that are, for cartographic purposes, represented as classes.
Topographic Maps: Topographic maps do not have their information organized with this principle in mind. For example, buildings are understood as belonging to a Built Up Area, and point features, such as a tower pylon, are decoded in our mind as sitting “on top” of another feature (like a crop field, or a forest).
Crisis Map: There can be no exclusion because there are no classes. In the example of Figure 6, all the information is “changes related with the alteration of a river course”. This level of abstraction is not trying to attach classification tags to particular geometries, but something completely different.
Analytical Map: Some objects of the map are more relevant than others, and in fact they live in different layers of information. In this case, for example, the same road can be both a “highway” and an “evacuation route”. In a mutually exclusive environment, the only relationship allowed between objects in a map is “they touch each other and do not overlap”, which is clearly not true in an analytical map such as an evacuation plan.
4.2. Totally Exhaustive:
The classification scheme is totally exhaustive when “…every piece of the landscape will be labeled” (Congalton and Green, 2009). This condition can only be met by Coverage Composed data; all the examples classified above as Feature Composed will not match it.
Of the examples seen before, several products would not be totally exhaustive, such as the relevant-changes map of Figure 6, topographic maps, or evacuation plans like the one in Figure 7.
The creation of an “unclassified” or “other” class solves the issue only in a formal way. With such a class the Error Matrix works, but the meaning, the interpretation of the validation result, is altered, as confusion between classes means something different depending on whether we are talking about two meaningful classes or not. Also, the creation of an “unclassified” class introduces a distortion between the positional accuracy of an object and its thematic assignment. How an object is extracted (where its limit is drawn) will influence the result, giving the idea that a commission of one class (class X) is the omission of another class (the unclassified class), when this is actually not a thematic error but a positional one.
All this can be easily explained by the diagram in Figure 8, whose labels read “Unclassified Class” and “Thematic Agreement between classes”.
Figure 8 The inclusion of an “Unclassified” class can distort the perception of positional and thematic validation.
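The distortion in Figure 8 can also be shown numerically. The following is a toy sketch with invented one-dimensional data, not taken from any of the products discussed:

```python
# Invented data: one object of class "X" on an "unclassified" ("U")
# background, extracted one cell too far to the right. The object
# itself was found; only its boundary is drawn in the wrong place.
reference = ["U", "X", "X", "X", "U", "U"]
mapped    = ["U", "U", "X", "X", "X", "U"]  # same object, shifted by one cell

# Tally the Error Matrix cells as (map class, reference class) pairs.
confusion = {}
for m, r in zip(mapped, reference):
    confusion[(m, r)] = confusion.get((m, r), 0) + 1

# The shift shows up as one omission of "X" (the ("U", "X") cell)
# plus one commission of "X" (the ("X", "U") cell): a positional
# error disguised as a thematic one.
print(confusion)
```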
4.3. Semiotically Balanced
There is another condition, not expressed by Congalton when enumerating the conditions for a classification scheme. It is not specified as a condition, but it can be inferred when exploring the possibilities of fuzzy classification. For this reason, this condition should be explicitly included at the same level as the other two. Semiotically balanced means that all classes should have something in common: if they need to be mutually exclusive and totally exhaustive, they also need to be on a similar semiotic level. For example, “tower pylon” and “crops” are two non-semiotically-balanced classes because they have nothing in common; the existence of one (the commission of one) does not imply the non-existence of the other (the omission of the other). “Water” and “Built Up Area” are semiotically balanced because they are both land use or land cover features: that is what they have in common. Depending on how they are balanced, the semiotic aspect of the classes can be:
- Thematically balanced: For example, the list (Main Road, Secondary Road, Local Route, River) is an unbalanced selection, because mistakenly tagging an object as Local Route when it is a Secondary Road is not the same as mistakenly tagging that same object as River.
- Geometrically balanced: Objects of different geometries (points, lines and polygons) are not balanced, because lines do not exclude polygons, points can be inside polygons… the topological relationships among these objects are complex.
5. Distortions in the interpretation derived from a wrong input in the Matrix
The distortions derived from using datasets that do not match the above-mentioned conditions are many. The following list presents many of those found so far in operational products; it does not claim to cover all possible distortions.
5.1. Confusion of Spatial and Thematic Accuracy
As anticipated in Figure 8, the comparison of thematic accuracy between a thematic object of a given class and the background can be confused with positional inaccuracies. This is particularly true in the case of situation and topographic maps.
In the example on the right, several objects are extracted. In particular, one place is identified as “Checkpoint”. If a sample for an Error Matrix is taken there, that sample would return “Checkpoint”. If this point were displaced, the ground sample would probably be classified as “Harbour”; if the point were missing altogether, the ground sample would probably be classified as “Harbour” as well. Both a positional inaccuracy (wrong location of the checkpoint) and a thematic inaccuracy (omission of a checkpoint) would thus be read by the Error Matrix as a thematic inaccuracy: an omission of the “Checkpoint” class combined with a commission of the “Harbour” class.
© Geoeye, 2011
Figure 9 Situation map over Ivory Coast. Entrance of a harbour facility.
5.2. Equal Value of Errors
In maps with a high complexity of classes, like a topographic map, different errors can have a different operational meaning. It is true that the Error Matrix allows us to identify the relationships of confusion between classes, but that does not imply a graduated quality of the errors, so the overall quality is not affected by the different gravities of the errors.
In the example on the right, in very dry areas streams can look like trails (and in fact they are used as such). Some classes are very close in concept, like an Unpaved Road and a Trail: confusing one with the other has a low impact on the usability of the map. Confusing a dry stream with a trail has a much higher impact.
The Error Matrix can deal with this by applying a “fuzzy approach”. In this fuzzy approach, a step of one class of error is accepted as a correct result, in a certain way “widening” the central diagonal of the Matrix.
© Geoeye, 2010
Figure 10 This dry stream could be mistaken for a path. Somalia, June 2010.
Unfortunately, this is only true for classification schemes where all classes cover the same topic and are gradually different (like types of roads, or levels of damage). In an Abstract Surface like the one presented in Figure 3, this kind of error could indeed be handled by applying a fuzzy approach.
5.3. Different results of validation depending on the representation method
Density surface maps (i.e. damage density, like the automated Carrefour damage assessment) start from base data (here: damage indications) and create a surface of information (a density map), in a certain way blurring the information in order to decrease inaccuracies (avoiding selling a high level of accuracy the data does not have, since it is based on damage indications, which in the case of the automated Carrefour damage assessment were the changes in the shadows cast by the buildings). Again, however, the application of standard accuracy assessment routines to these kinds of maps is limited (Kerle, 2010).
This dataset was validated (see also Tiede et al. 2011) in a post-validation exercise against the Haiti Earthquake 2010 “Remote sensing damage assessment: UNITAR/UNOSAT, EC JRC and World Bank”, performed several weeks after the event. For a comparison of damage intensities between the datasets, a kernel density map was also derived from the reference dataset, using the damage grades which were likely to have affected the shadows cast by buildings, since that was the indicator of damage used in the automatic damage assessment of Carrefour performed by G-MOSAIC.
A rank-difference calculation between the damage density classes was used to evaluate the similarity between the two maps. A fuzzy approach was applied, accepting neighboring classes (rank differences between -1 and +1) as still valid. Following Congalton and Green (2009), such an approach is acceptable if the classes are very similar or even continuous (as in this case) and not discrete.
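The rank-difference acceptance described above can be sketched in a few lines. The class arrays below are invented for illustration and are not the Carrefour data:

```python
# Sketch of the fuzzy rank-difference comparison: two density maps are
# classified into ranked classes (1 = lowest density), and a cell counts
# as agreeing when the rank difference is within -1..+1.
map_classes = [1, 2, 2, 3, 4, 4, 5, 1, 3]  # density classes, automatic map
ref_classes = [1, 1, 3, 3, 5, 4, 5, 2, 1]  # density classes, reference map

agree = sum(1 for a, b in zip(map_classes, ref_classes) if abs(a - b) <= 1)
fuzzy_agreement = agree / len(map_classes)
print(f"fuzzy agreement: {fuzzy_agreement:.1%}")
```

In effect this widens the central diagonal of the matrix by one class in each direction, as described in Section 5.2.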
The validation was thus oriented only to the distribution of damage densities, and not to the damage assessments themselves (since the two datasets were produced with very different focuses). In this way the validation was applied to the representation of both datasets (a kernel density map derived from each dataset). For the validation, the density surfaces were categorized into different damage density classes, and each density surface was then represented in steps. Changes in this process of representation can alter the final look of the product (though not the essential idea of the map, which is to give the user knowledge of how the damage intensity is spread over the area). This final look is what will be validated. The fact that changes in the representation can change the validation result highlights the possible inadequacy of the methodology used for validation.
Figure 11 Cloud of Points<br />
Figure 12 Kernel Density derived from<br />
the previous points<br />
Figure 13 The same density layer<br />
represented in classes<br />
In the figures above we can see how different the data can look, even though everything is derived from the same dataset. Each of these steps is subject to parameters that may alter the final look. These alterations may not be important for the interpretation of the map by a human being (the human brain will understand which areas are more damaged and which are not, and will process the area as a whole), yet an Error Matrix may return different values depending on these changes, which is a major flaw in a validation system.
5.4. Limitations in the interpretation of abstraction
Some datasets are not exactly a direct representation of what is in the real world at the same exact location; some imply a certain level of abstraction.
- Abstract Surfaces: An Abstract Surface can be plainly false over 99% of its surface and still be a very good product. As in the example below, an Abstract Surface claims that the damage is evenly distributed. This is not true: damage is discrete and, in the case of an earthquake, can be found in individual buildings at different levels. If samples are taken on the ground, many places will probably be considered “Not Damaged” by the reference data while still being represented as “Damaged” because of their proximity to damaged buildings. The only way to avoid this would be to take a theoretical sample of the same shape and size as one class and process it as one single sample, as explained in the diagram below. Most of the time such a sample is too big, and it will always be very open to subjective interpretation.
The diagram’s annotations make the point: a point sample does not take into consideration the proximity of other values, while a valid sample, shaped like the class itself, would be very open to subjectivity.
Figure 14 Diagram showing the sampling problems in abstract surfaces
- In the case of an evacuation map, some features depend on the context. An evacuation route, for example, is only good as long as the whole feature is good. Taking a sample in a part of the map defined as “evacuation route” will only reflect whether that part matches the criteria at that point, as it cannot take context information into consideration.
An evacuation route could have an unacceptable choke point somewhere, making the whole feature invalid. It is also interesting to assess whether it is in fact the most efficient path between points A and B; that will not be reflected by an Error Matrix. As in the image on the right, taking it as a sample does not provide us with information about the starting and end points of the route.
© Geoeye, 2011
Figure 15 An evacuation route in Yemen, June 2011
5.5. Object Oriented Correlation
Congalton and Green already mentioned this in their book: Non-Site Specific assessments do not account for spatial correspondence, while Site Specific assessments do. However, it is not clear how the application of an Error Matrix approach can help to reduce this problem. A stratified random sampling technique will consider the distribution of classes in a map, but it is still not object oriented.
In the next example it can be seen how the lack of an object-oriented correlation can produce distortions in the interpretation:
© DigitalGlobe, 2011 (A, B and C)
Figure 16 (A) Ground truth data for the Costa Rica / Nicaragua water stream modifications, March 2011
Figure 17 (B) One possible dataset for the Costa Rica / Nicaragua water stream modifications, March 2011
Figure 18 (C) An alternative dataset for the Costa Rica / Nicaragua water stream modifications, March 2011
These three figures represent changes that are relevant as indicators of a modification of a river stream in a border area between Costa Rica and Nicaragua.
- Dataset A: Represents the Reference Data or Ground Truth. This is the data that B and C could be validated against following the Error Matrix method.
- Dataset B: A dataset where one of the areas is missing, but the other two are exactly equal to those in Dataset A. The correspondence between the two datasets could be 65%.
- Dataset C: All three areas are detected, but roughly depicted, not matching Dataset A well. As a result, the correspondence between the two datasets could also be 65%.
In conclusion, both datasets are validated as equally good (or equally wrong). Now, if the idea was to identify relevant indicators of the modification of a river stream, which of the two datasets better represents the Reference Data? B and C are not equivalent in their capacity to convey information, as B completely misses one change that is relevant. The result from an Error Matrix point of view, however, could be the same.
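The asymmetry between B and C can be made concrete with a toy object-level check; the cell indices below are invented and only illustrate the idea:

```python
# Invented data: each "object" is a set of cell indices. A reference
# object counts as detected when at least one candidate object
# overlaps it, regardless of how roughly it is outlined.
reference = [{0, 1, 2, 3}, {10, 11}, {20, 21, 22}]

dataset_b = [{0, 1, 2, 3}, {20, 21, 22}]  # one object missed entirely
dataset_c = [{0, 1}, {10}, {20, 21}]      # all objects found, rough outlines

def detected(ref_objs, cand_objs):
    """Count reference objects overlapped by at least one candidate."""
    return sum(1 for ref in ref_objs if any(ref & cand for cand in cand_objs))

print(detected(reference, dataset_b))  # finds 2 of the 3 objects
print(detected(reference, dataset_c))  # finds all 3 objects
```

A per-object measure of this kind separates B from C, while a sample-based Error Matrix may score them identically.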
6. Conclusions
The main conclusion of this paper is that the Error Matrix is a methodology developed to validate geographical information produced through classification. Extrapolating this technique to other typologies of datasets introduces distortions that are not so easily detected: they are not easy to spot because the Error Matrix still works and produces results. Only a detailed examination based on experience can spot these distortions, which are, on the other hand, impossible to measure.
It is not an easy task to develop a full proposal for new validation systems, and that falls beyond the reach of this article, but a few ideas can be drafted:
- Any evaluation of the quality of a product should take into consideration the purpose of the product, and not only the actual quality of some data parameters. When evaluating the data, some features of the dataset might be easy to measure, but that does not mean they are related to the purpose of the data. Geographical data can undergo several levels of abstraction, making the information found in the dataset less connected with the actual features found on the ground, but without decreasing the overall quality. Examples of very abstract datasets can be found in Analytical Maps and Crisis Maps.
- Identifying different typologies of datasets and applying different validation methodologies to each might seem cumbersome, but could in fact be the simpler approach. With ad-hoc solutions applied to the different types of datasets (like the ones proposed in this article, for example), the variability among the datasets to be validated is drastically reduced, and optimized systems can be developed. Error Matrices could then be considered the optimal way of validating classified imagery (thematic rasters).
References:
Congalton, R.G., and K. Green, 2009. Assessing the Accuracy of Remotely Sensed Data: Principles and Practices. CRC Press, Boca Raton, London - New York, 183 p.
Kerle, N., 2010. Satellite-based damage mapping following the 2006 Indonesia earthquake - How accurate was it? International Journal of Applied Earth Observation and Geoinformation, 12(6):466-476.
Tiede, D., Lang, S., Füreder, P., Hölbling, D., Hoffmann, C., Zeil, P., 2011. Automated damage indication for rapid geospatial reporting. An operational object-based approach to damage density mapping following the 2010 Haiti earthquake. Photogrammetric Engineering & Remote Sensing, 77(9), 933-942.
Vega Ezquieta, González, Grandoni, Di Federico, 2010. Product Design, the in-line quality control in the context of rapid geospatial reporting. VALgEO 2010, Ispra, Italy, 3 p.
ABSTRACT<br />
On the complexity of validation in the security domain –<br />
experiences from the G-MOSAIC project and beyond<br />
KEMPER T., WANIA A and BLAES X.<br />
Joint Research Centre of the European Commission, Italy<br />
thomas.kemper@jrcj.ec.europa.eu<br />
Abstract<br />
The geospatial and remote sensing communities have understood that geospatial products in<br />
support of crisis management will be used in decision making only if they are validated properly<br />
and the accuracy is known – this workshop is a manifestation of this. Crisis managers need to know<br />
how accurate the information they receive is. With information on the accuracy of a product, they<br />
can often even accept less accurate information.<br />
Crisis management is not limited to natural or man-made disasters. Earth observation and<br />
geospatial information also provide ‘intelligence’ for humanitarian aid and civil protection<br />
operations. Applications include, for example, damage assessment during or after conflicts, the<br />
identification and enumeration of refugees or IDPs, or the monitoring of land-use changes in the<br />
context of a crisis. While validation is very demanding in itself, it is even more complex in the<br />
security domain. In many cases, in particular in violent conflicts, access to the areas is<br />
limited or impossible, and the availability of trusted, unbiased reference sources is limited. On<br />
top of this, questions of confidentiality may have to be taken into account, where it might be<br />
impossible to disclose information related to such products.<br />
This presentation will give examples of the problems encountered when validating services in the<br />
security domain and aims at providing ideas for validation strategies that take the above-mentioned<br />
peculiarities into account, presented on the basis of case studies. In such strategies, an important<br />
role is played on the one hand by the involvement of people in the field; on the other hand,<br />
‘remote’ validation based on visual interpretation and cross-comparison is also an option to<br />
consider.<br />
SESSION III<br />
USABILITY OF WEB BASED DISASTER<br />
MANAGEMENT PLATFORMS AND READABILITY<br />
OF CRISIS INFORMATION<br />
Chair: Tom De Groeve<br />
Crisis management is a compelling domain for applying web-based GIS services as a<br />
means of responding efficiently to the demands of time-pressured disaster response. Crisis management<br />
applications require web mapping technologies that are accessible and customizable to non-specialist end<br />
users. Interoperable and open web-based geographic information services are being developed<br />
and increasingly used in operational crisis management. What are the characteristics of these<br />
systems that make them suitable for emergency response and crisis management and<br />
differentiate them from traditional web mapping platforms? What are the specific usability issues<br />
that need to be taken into account when designing the interfaces and defining the functionalities<br />
of these systems? How are decision makers in emergency management operationally exploiting<br />
web-based GIS services for monitoring and responding to crisis situations? This session will shed<br />
light on current state-of-the-art WebGIS implementations developed in support of crisis<br />
management. The focus will be on usability studies as a critical validation checkpoint in the<br />
development of these applications. End-users’ feedback on their experience using WebGIS services<br />
in everyday operations will also be discussed.<br />
Thanks to the increasing availability and capability of EO sensors, the monitoring of phenomena<br />
occurring on the Earth’s surface is improving continuously. The effort of public and private<br />
institutions is translated into the implementation of new services providing near-real-time<br />
information about disaster events. These services allow the actors involved in emergency<br />
management and rescue operations to have maps displaying up-to-date information about the<br />
crisis situation. It is crucial to evaluate how much these crisis products are actually used during<br />
the operations, and for which specific purposes. In addition, it must be analyzed whether map layouts are<br />
optimized with respect to the users’ needs to allow a quick and effective interpretation. In this<br />
session, map readability is also explored through real cases.<br />
SHORT PAPER<br />
Emergency Support System: Spatial Event Processing on Sensor<br />
Networks<br />
SZTURC R., HORÁKOVÁ B. , JANIUREK D. and STANKOVIČ J.<br />
Intergraph CS<br />
roman.szturc@intergraph.com<br />
Abstract:<br />
Command and control systems used in crisis and emergency management are designed for<br />
decision making and for controlling resources to successfully accomplish missions. An important aspect<br />
of a command and control system is therefore situational awareness – information about the<br />
location and status of resources. The Emergency Support System (ESS) is a suite of real-time,<br />
spatial-data-centric technologies useful in abnormal events, as well as a 7th Framework Programme<br />
project of the European Commission under the Security theme.<br />
ESS integrates data from various data collection tools, such as cell-phones, unmanned ground stations,<br />
unmanned aerial vehicles, air balloons, etc. Measurements from these data collection tools are based<br />
on the OGC Sensor Web Enablement (SWE) series of standards and on standards for video transmission.<br />
Proprietary communication protocols and data formats are replaced by open interfaces. All data<br />
and services are harmonized in the Data Fusion and Mediation System (DFMS), a central<br />
component of ESS, and published on the ESS Portal.<br />
Information sources made available through systems such as ESS may continuously generate a<br />
significant amount of data. If not filtered and processed appropriately, this data load may<br />
overwhelm a decision support system. The DFMS addresses these issues through the Spatial<br />
Event Processing (SEP) mechanism, which defines a publish/subscribe messaging pattern on top<br />
of the SWE request/response pattern to facilitate the dissemination of information in a timely<br />
fashion.<br />
This approach enables command and control systems to receive the information they are<br />
interested in as soon as it becomes available. The mechanism supports the definition<br />
of precise filter and processing rules so that only the relevant information is transmitted. This saves<br />
both the communication and the processing resources of systems outside the DFMS, such as the ESS<br />
Portal and command and control systems.<br />
Keywords: data fusion, OGC, spatial event processing, sensor, subscribe/notify.<br />
Introduction<br />
The complexity of tasks in crisis and emergency management has increased significantly in recent decades. One of the<br />
main reasons is the necessity for cooperation between various institutions and domains. More sophisticated<br />
tools are being developed to satisfy the needs for producing, processing and displaying information to the<br />
users of command and control systems. The volume of data within these systems is growing. Effective crisis and<br />
emergency management is based on quick decisions; at the same time, a coordinated approach and<br />
accurate, processed information are needed as well.<br />
The Emergency Support System (ESS) has been designed as a suite of real-time, spatial-data-centric technologies<br />
that collects, processes and provides relevant information about a crisis event. It should be seen as a<br />
system supporting existing systems in the crisis and emergency domain rather than a replacement for existing<br />
command and control systems. At the same time, ESS is an FP7 European research project, under the<br />
Security theme, funded between 2009 and 2013. The ESS consortium, consisting of 19 partners, is developing a crisis<br />
communication system intended to reliably transmit filtered and re-organized information streams to crisis<br />
command systems, providing the relevant information that is actually needed to make critical<br />
decisions.<br />
This paper presents the ESS high-level architecture, the adoption of the OGC Sensor Web<br />
Enablement (SWE) standards, and the principles and pilot implementation of the spatial event processing<br />
mechanism.<br />
ESS architecture<br />
The ESS architecture is the basis for developing the Emergency Support System. For this reason it was analysed<br />
from several points of view:<br />
• functional and non-functional requirements model,<br />
• components derived from the requirements and the relationship matrix,<br />
• detailed component design model,<br />
• use case and dynamic model,<br />
• detailed description of the fundamental architectural aspects that shall be taken into account for the<br />
major ESS layers (sensing, service and portal) as well as the ESS alert system.<br />
Only the overall component model architecture will be described further, due to the limited extent of this paper.<br />
The ESS component model (see Figure 1 below) provides an overview of the high level architecture of ESS. The<br />
main purpose of this model is to define the organization and dependencies of the system components.<br />
External components are modelled as well in order to improve the understanding of ESS boundaries and<br />
potential interactions with external systems as described above. The architecture depicted in Figure 1 is not a<br />
monolithic package but a system consisting of components and subsystems. Note that not all aspects of the<br />
architecture are covered herein.<br />
ESS consists of three layers – sensor (Data Collection Tools), service (Data Fusion and Mediation System) and<br />
portal (ESS Portal). The sensor layer collects and pre-processes data from various sensors; the service layer processes<br />
sensor data, adds different kinds of resources (such as background spatial data and information from external<br />
services) and publishes the information via Web services. The ESS Portal is a client of the ESS services, offering<br />
advanced business logic.<br />
ESS integrates several existing front-end data collection technologies (sensor measurements, cell-phone data<br />
obtained from an IMSI catcher, etc.) into a single platform, which is the primary task of the Data Collection Tools<br />
(DCT). Besides inputs from the DCT, other inputs (such as external Web map services, non-ESS resources and<br />
simulations) are foreseen as well.<br />
Figure 1. Emergency Support System architecture – main components and subsystems.<br />
The Data Fusion and Mediation System (DFMS) is the centralized subsystem working over the ESS database, which<br />
is connected to all front-end sensors – through SWE, as described in section 3 – and to other resources activated in<br />
or connected to the system. The DFMS oversees communication between the sensors and the database, the<br />
harmonization of data from various sensor products of one type, the fusion of data from various types of sensors,<br />
spatial data localization, and the transmission of data to the ESS Portal subsystem via standardized interfaces.<br />
Transmission is based on open interfaces compliant with the OGC (Open Geospatial Consortium) specifications –<br />
especially Web Map Service (WMS), Web Feature Service (WFS), Filter Encoding (FE) and Catalogue Service for the<br />
Web (CSW), as described for crisis and emergency management by Řezník (2010). Intergraph CS, as one of the<br />
industrial partners, is involved in most of the project tasks, including the leadership of WP5 (DFMS).<br />
The ESS Portal is the client application of the DFMS within the Emergency Support System. It represents the<br />
user interface, which contains all graphical components, contextual components, log access, etc., and manages<br />
data exchange between the underlying layers. It provides functionalities to export data from ESS to other systems.<br />
The ESS Portal can provide any kind of functionality to external systems on the basis of its internal capabilities.<br />
These “applications” are offered in the form of Web services.<br />
Sensor Web Enablement adoption<br />
Sensor Web Enablement (SWE) is a set of OGC standards, a working group within the OGC, and hundreds of<br />
implementations around the world. According to Botts et al. (2008), SWE “refers to web accessible sensor<br />
networks and archived sensor data that can be discovered, accessed and, where applicable, controlled using<br />
open standard protocols and interfaces (APIs).” SWE is not a stand-alone initiative, since it is harmonized with<br />
other OGC standards dealing with spatial data. Typical applications of SWE are water sensors (flood<br />
applications), radiological sensors, pollution sensors, Webcams, air- and space-borne sensors, mobile heart<br />
sensors and countless other sensors and sensor systems. Only the aspects highly relevant to ESS will be<br />
described further, due to the limited extent of this paper. More detailed descriptions may be found, for instance, in Botts<br />
et al. (2008) or Jirka et al. (2009).<br />
ESS uses four basic standards of the OGC SWE portfolio – the Observations & Measurements Schema (O&M), the<br />
Sensor Model Language (SensorML), the Sensor Observation Service (SOS) and the Sensor Planning Service (SPS). O&M<br />
is the general conceptual schema, while SensorML is the exchange format used within the two ESS sensor services<br />
– SOS and SPS. SOS is used for accessing sensor data and metadata, while SPS serves for the parameterization<br />
of a sensor (system) such as a UAV (Unmanned Aerial Vehicle). SPS is responsible for the customization of<br />
measurements from several points of view – positioning the sensor, setting the range of sensor<br />
measurements, etc. The OGC Sensor Alert Service (SAS) and the Web Notification Service (WNS) are not included<br />
in the ESS architecture, since their original functionality has been replaced and extended by the spatial event<br />
processing mechanism (as described in the following section).<br />
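To make the division of roles concrete, the following sketch shows how a client such as the DFMS might build a key-value-pair GetObservation request against an SOS endpoint. The endpoint URL, offering and observed-property identifiers are invented for illustration and are not taken from the ESS deployment.<br />

```python
from urllib.parse import urlencode

def build_get_observation_url(endpoint, offering, observed_property,
                              version="2.0.0"):
    """Build a KVP GetObservation request for an OGC SOS endpoint."""
    params = {
        "service": "SOS",
        "version": version,
        "request": "GetObservation",
        "offering": offering,
        "observedProperty": observed_property,
    }
    return endpoint + "?" + urlencode(params)

# Hypothetical sensor service and identifiers, for illustration only.
url = build_get_observation_url(
    "http://example.org/sos", "uav-camera-1", "temperature")
print(url)
```

A real client would then parse the returned O&M document; only the endpoint and identifiers would change for another SOS-compliant service.<br />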
To be compliant with the SWE standards, ESS needs to support the core functionality defined by SOS and SPS.<br />
In addition, the SOAP binding of these service operations has to be supported for both services.<br />
Further constraints for SOS are (beyond issues of data format profiling):<br />
• The latest SOS version defines a spatial filtering profile that enables enhanced observation retrieval by<br />
defining additional rules to perform more precise spatial filtering of observations. This profile should<br />
be supported by the SOSs accessed by the ESS system.<br />
• The GetFeatureOfInterest operation should be supported to enable clients like the Portal to prepare the<br />
graphical display.<br />
• Some kind of indicator may be needed in the SOS metadata to distinguish whether a specific sensor is<br />
stationary or not. This would help to display the observation data appropriately. For example, creating<br />
a chart of observations made for one feature of interest is useful only if there are multiple<br />
observations for that feature – which is usually the case for stationary sensors but not for the moving<br />
ones.<br />
• The GetObservationById operation may be needed for auditing purposes, especially if derived<br />
information provides lineage information – such as which basic information ultimately led to the<br />
derived data.<br />
• The transactional operations are not needed in ESS at this stage as DFMS is primarily acting as a client<br />
to existing SWE services. As such, sensors that are deployed in the field should already bring their own<br />
service support with them. However, if that is not the case, then these operations would be required<br />
for sensor plug-and-play.<br />
• The DeleteSensor operation is not needed in ESS, as sensor data should be permanently available to<br />
the system (primarily for auditing purposes).<br />
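The effect of the spatial filtering profile mentioned in the first constraint can be sketched client-side: keep only the observations whose position falls inside a bounding box. The flat observation records below are a simplified stand-in for the actual O&M encoding, and the coordinates are illustrative.<br />

```python
def in_bbox(lon, lat, bbox):
    """bbox = (min_lon, min_lat, max_lon, max_lat)."""
    min_lon, min_lat, max_lon, max_lat = bbox
    return min_lon <= lon <= max_lon and min_lat <= lat <= max_lat

def filter_observations(observations, bbox):
    """Keep only the observations located inside the bounding box."""
    return [o for o in observations if in_bbox(o["lon"], o["lat"], bbox)]

# Simplified observation records with invented coordinates.
obs = [
    {"id": 1, "lon": 8.61, "lat": 45.81, "value": 21.5},
    {"id": 2, "lon": 9.19, "lat": 45.46, "value": 19.0},
]
print(filter_observations(obs, (8.5, 45.7, 8.7, 45.9)))  # keeps only id 1
```

With the spatial filtering profile, the same selection happens server-side, so only the matching observations ever cross the network.<br />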
Further constraints and discussions for SPS are:<br />
• The definition of tasking profiles. Control of sensors like video cameras stationed on Unmanned<br />
Ground Stations (UGS) or Unmanned Aerial Vehicles (UAV) is an important task in ESS. The definition<br />
of a simple SPS tasking profile would allow using a uniform and open interface to control such sensors.<br />
• Cancelling a task is supported by sensor systems.<br />
• Task reservations are not required in ESS. At this stage of the project, a task prioritization mechanism<br />
is based on assignment of the SPS tasking functionality for certain sensor resources to the user with<br />
the highest priority.<br />
• Computation of the feasibility of a specific tasking action is supported, as the service internally has to<br />
support such checks upon task submission.<br />
• In order to support auditing functionality, the state logger conformance class of SPS, which defines<br />
that a complete status log is stored for a task or tasking request, is supported in ESS.<br />
• The minimum time period for which status information of a task / tasking request is stored by an SPS is<br />
not defined in the standard; it only requires that each SPS instance defines such a time, which can be one<br />
month, one hour, but also only one second. Thus ESS defines a minimum time period that needs to<br />
be supported by all SPSs that ESS interacts with.<br />
• Publish/Subscribe functionality is realized by the services to support event processing functionality as<br />
discussed in the following section.<br />
Both SPS and SOS support the DescribeSensor operation defined by SWE. Support for the sensor history<br />
provider and sensor history manager conformance classes (which basically enable the time-based retrieval and<br />
modification of sensor metadata) is not required in ESS. A minimal profile of the Sensor Model Language<br />
(SensorML) is used within ESS to describe the most basic aspects of an ESS sensor.<br />
Profiling of the SWE services may continue during the ESS implementation phase (i.e. between 2011 and 2012) in<br />
order to be able to adjust to new developments and requirements.<br />
Spatial Event Processing<br />
4.1. Principles<br />
Information sources made available through systems such as ESS may continuously generate a significant<br />
amount of data from heterogeneous information sources. This data load may overwhelm the ESS Portal or a<br />
decision support system if not filtered and processed appropriately. Usually, there are several manual or<br />
semi-manual processes for filtering the information. The Spatial Event Processing (SEP) mechanism offers<br />
significant advantages, among others the following:<br />
• an automatic filtering and processing mechanism with the possibility of manual inputs,<br />
• filtering based on the location of the event (thus discarding information from non-relevant places),<br />
• a repeatable mechanism that may be re-used for other processes.<br />
The flood of information within a system such as ESS depends on the number of deployed, active and connected<br />
sensors, the frequency with which they gather data, the frequency of data received via services and the settings of the<br />
video transmissions. An ESS or command and control system operator would be overwhelmed by this<br />
information. Therefore, advanced visualization as well as aggregation techniques are needed inside ESS. In<br />
other words, the concept of a system like ESS should enable the aggregation of low-level data into the higher-level<br />
information that is relevant for the ESS/command and control system operator.<br />
The principles of event processing are described in general, for instance, in Everding and Echterhoff (2009). Spatial event<br />
processing adds the location point of view. A so-called spatial window defines the location relationship between<br />
incoming events. As depicted in Figure 2, the spatial window is dynamic and computed as a buffer around<br />
a line. In this example, the start and end of this line are defined by the vehicle location and destination. The interior<br />
nodes and edges of the line are determined by the route that the vehicle intends to drive. This example is<br />
familiar from car navigation systems. Events that are located within the buffer zone are part of the window's<br />
event set. At T1, the spatial window is shown in grey and events one and two are added to the event set. Let<br />
us assume that these events signal slow traffic. If a cluster with a significant number of such events<br />
is detected in a given time interval, the driver would be notified and an alternative route suggested. This<br />
clustering could be performed as a special select function. The vehicle moves on along its route. At T2, it has<br />
moved about half its way. The window's buffer has been adjusted. Event four has been detected and is now<br />
part of the window. Event three is automatically rejected, as it does not fulfil the window's entry criteria<br />
(it is not within the buffer zone). At T3, events one and two have been removed from the window's event set,<br />
while event seven is added. A number of events happened in the area where events one and two occurred.<br />
These events, e.g. indicating a traffic jam, would usually form a cluster. However, as they do not fall within the<br />
spatial window, they are no longer relevant and thus are not reported to the driver.<br />
Figure 2. Dynamic spatial window for the Event Processing.<br />
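The behaviour of the dynamic spatial window can be sketched in a few lines: the window is a buffer around the remaining route polyline, and an event belongs to the window's event set only while it lies within the buffer distance of that polyline. This is a planar, plain-Python approximation for illustration, not the production implementation.<br />

```python
import math

def point_segment_distance(p, a, b):
    """Planar distance from point p to the segment a-b."""
    (px, py), (ax, ay), (bx, by) = p, a, b
    dx, dy = bx - ax, by - ay
    if dx == 0 and dy == 0:
        return math.hypot(px - ax, py - ay)
    # Clamp the projection of p onto the segment to [0, 1].
    t = max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / (dx * dx + dy * dy)))
    return math.hypot(px - (ax + t * dx), py - (ay + t * dy))

def window_event_set(events, route, buffer_dist):
    """Events within buffer_dist of any remaining route segment."""
    return [
        e for e in events
        if any(point_segment_distance(e["pos"], route[i], route[i + 1]) <= buffer_dist
               for i in range(len(route) - 1))
    ]

route = [(0, 0), (10, 0)]           # vehicle position -> destination
events = [{"id": 1, "pos": (5, 1)},  # inside the buffer
          {"id": 2, "pos": (5, 5)}]  # outside the buffer
print([e["id"] for e in window_event_set(events, route, 2.0)])  # -> [1]
```

Recomputing the route from the vehicle's current position and re-evaluating the set reproduces the T1-T3 behaviour: events behind the vehicle drop out of the window, while events near the remaining route enter it.<br />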
4.2. OGC Event Service<br />
In mid-2011, OGC published a draft discussion paper, in version 0.9, dealing with the Event Service (see<br />
Echterhoff and Everding, 2011). Work on this paper has been supported by two European research projects – the<br />
Emergency Support System project and the GENESIS project (European Commission, 2008).<br />
The OGC Event Service is based on WS-Notification (WS-N). According to Echterhoff and Everding (2011), “the Event<br />
Service was developed to satisfy the need for having relevant data available at an OWS pushed to a client as<br />
soon as it is available rather than having the client repeatedly poll the service.” Such a service is obviously<br />
not intended to stand alone. Combination with other (OGC) Web services was foreseen from the<br />
beginning, with the primary area of interest being the OGC Sensor Web Enablement domain. It is more valuable to<br />
have a sensor with the publish-subscribe mechanism than to use the request-response mechanism. An OGC<br />
Event Service consumer receives real-time data matching the filter criteria of the respective subscriptions.<br />
The publish-subscribe mechanism reduces the number of needed workflows, which results in a reduction of the<br />
transmitted data. Therefore, a publish/subscribe mechanism brings significant advantages in applications of<br />
emergency and crisis management.<br />
The OGC SWE working group developed the Sensor Alert Service (SAS), which supported its own<br />
publish/subscribe-based operations. The Sensor Event Service (SES) was afterwards defined as the successor of the<br />
SAS and tested in OGC testbeds. Since most of the SWE-specific functionality was not present in SES, only a<br />
pure WS-Notification-based Event Service was finally deployed. SOAP (Simple Object Access Protocol) bindings<br />
in versions 1.1 and 1.2 were foreseen during the proposals and tests.<br />
The general OGC Event Service uses both patterns: request-response and publish-subscribe. The request-response<br />
pattern has to be used in the first phase of communication to discover the details of an OGC Sensor Event<br />
Service (e.g. through a GetCapabilities operation). The publish-subscribe pattern, on the other hand, requires<br />
specific considerations, since one-way messages are sent after subscription. In particular, the problem of returning a<br />
fault in one-way communication arises. Another problem may arise when a simple client (i.e. not a Web<br />
service) cannot be reached by the service (for various reasons, such as a firewall or being offline).<br />
The event processing mechanism may contain filter statements. If there are none, all available data are<br />
provided. Otherwise, the data need to be matched against the defined filtering criteria. The OGC Sensor Event Service<br />
defines three filter levels for such a service:<br />
• XPath for filters based on the event structure;<br />
• OGC Filter Encoding for more sophisticated filtering (including spatial and temporal operators);<br />
• OGC Event Pattern Markup Language for Complex Event Processing (CEP) and Event Stream Processing<br />
(ESP) capabilities.<br />
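The first, structural filter level can be sketched with Python's xml.etree module, which supports a limited XPath subset. The event document and the threshold below are invented examples, not the actual ESS event encoding.<br />

```python
import xml.etree.ElementTree as ET

# Invented event document, standing in for a real event encoding.
EVENT = """
<Event>
  <sensor>S1</sensor>
  <Temperature>38.2</Temperature>
</Event>
"""

def matches(event_xml, path, predicate):
    """Apply a structural XPath, then a value predicate, to one event."""
    node = ET.fromstring(event_xml).find(path)
    return node is not None and predicate(node.text)

# Forward only events whose Temperature element exceeds 36.
print(matches(EVENT, "Temperature", lambda v: float(v) > 36))  # -> True
```

The second and third levels (OGC Filter Encoding and the Event Pattern Markup Language) add spatial, temporal and complex-event operators on top of this purely structural selection.<br />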
4.3. Implementation<br />
As today’s systems use a client-server architecture, data may in general be processed on either the client or the server<br />
side. Client-side processing has a significant disadvantage, since all data have to be passed to the client, which<br />
might result in a high traffic load. Server-side processing, on the other hand, is a much more efficient solution. As<br />
mentioned in section 4.2, several standards support the server-side approach.<br />
WS-Notification, an OASIS standard currently in version 1.3, was selected as the ESS event processing system.<br />
WS-Notification is a very complex and modular standard containing both mandatory and optional parts.<br />
Fortunately, the mandatory parts form the very base of the standard, while all the rest are optional. ESS<br />
implements the parts necessary to support the required functionality of the system.<br />
Events dispatched by WS-Notification can be categorized by so-called topics, so that a subscriber can receive only<br />
events related to a given category. Several topics have been defined for ESS, for example<br />
NewResourceAvailable, MaintenanceEvent, SensorAvailabilityChanged and ExceptionDetected. They indicate,<br />
respectively, that a new resource has been introduced to the system, that a resource needs maintenance, that the<br />
availability of a sensor has changed, and that an unexpected situation has occurred.<br />
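The topic mechanism can be sketched as a small dispatcher in which subscribers register callbacks per topic and receive only matching notifications. This illustrates the pattern only; the actual WS-Notification protocol exchanges SOAP messages between services, and the topic and event payloads shown are invented.<br />

```python
from collections import defaultdict

class TopicBroker:
    """Minimal publish/subscribe dispatcher keyed by topic name."""

    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, topic, callback):
        self._subscribers[topic].append(callback)

    def notify(self, topic, event):
        # Deliver the event only to subscribers of this topic.
        for callback in self._subscribers[topic]:
            callback(event)

broker = TopicBroker()
received = []
broker.subscribe("SensorAvailabilityChanged", received.append)
broker.notify("SensorAvailabilityChanged", {"sensor": "S1", "available": False})
broker.notify("MaintenanceEvent", {"resource": "UAV-2"})  # no subscriber, dropped
print(received)  # -> [{'sensor': 'S1', 'available': False}]
```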
Still, categorization by topics is not adequate in some cases, and an even finer selection of received events is<br />
required. For that purpose, topics are able to support expressions specifying a further reduction of the events sent to a<br />
subscriber. In the case of ESS, the topics ConditionSatisfied and GeographicalEvent have been defined. They enable<br />
specifying a condition under which an event is sent to the subscriber, for instance “/S1/Temperature > 36,”<br />
“/S2/WindSpeed > 12 AND /S2/WindDirection BETWEEN(30, 60)” or “/S3/Position IN BBOX(…).”<br />
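A condition of the simple “/S1/Temperature > 36” form could be evaluated with a helper like the one below. The dialect handled here (a slash path, a comparison operator and a numeric threshold) is a deliberately reduced illustration, since WS-Notification leaves the dialect definition open.<br />

```python
import operator

OPS = {">": operator.gt, "<": operator.lt, "=": operator.eq}

def satisfies(readings, expression):
    """Evaluate a condition like '/S1/Temperature > 36' against readings.

    readings maps a slash path to its latest value, e.g.
    {"/S1/Temperature": 38.2}.
    """
    path, op, threshold = expression.split()
    return path in readings and OPS[op](readings[path], float(threshold))

readings = {"/S1/Temperature": 38.2, "/S2/WindSpeed": 8.0}
print(satisfies(readings, "/S1/Temperature > 36"))  # -> True
print(satisfies(readings, "/S2/WindSpeed > 12"))    # -> False
```

Compound conditions such as AND, BETWEEN or BBOX would require a real expression parser rather than this single-comparison sketch.<br />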
Note that the WS-Notification standard does not define any formal language describing the form of such conditions.<br />
Instead, it allows specifying a so-called dialect, which contains a precise definition of the language used. Therefore,<br />
it is up to the user and the service provider to define such a language.<br />
Acknowledgements<br />
This research has been supported by funding from the EU FP7 project Emergency Support System (Grant<br />
Agreement No. 217951).<br />
References<br />
European Commission: CORDIS: FP7 : ICT : Projects : GENESIS : Generic European sustainable information space<br />
for environment. (2008). Retrieved 2011-08-25, from<br />
http://cordis.europa.eu/fetch?CALLER=PROJ_ICT&ACTION=D&CAT=PROJ&RCN=87874.<br />
ESS Project. (2009). Retrieved 2011-08-15, from http://www.ess-project.eu/.<br />
BOTTS, M., PERCIVALL, G., REED, C., DAVIDSON, J. (2008). OGC® Sensor Web Enablement: Overview and High<br />
Level Architecture. In S. Nittel, A. Lambrinidis, & A. Stefanidis, GeoSensor Networks (Vol. 4540, 271 p.).<br />
Berlin Heidelberg: Springer-Verlag.<br />
ECHTERHOFF, J., EVERDING, T. (2011-08-11). OGC Event Service - Review and Current State. Retrieved 2011-08-<br />
22, from Open Geospatial Consortium (OGC):<br />
https://portal.opengeospatial.org/files/?artifact_id=45115.<br />
ESS consortium. (2010-02-15). Deliverable D2.2 Report on High Level Software Architecture. Retrieved 2011-<br />
08-22, from ESS Project: http://www.ess-project.eu/images/stories/Deliverables/ess%20d2.2%20.pdf.<br />
EVERDING, T., ECHTERHOFF, J. (2009). Event Processing in Sensor Webs. Proceedings of the Geoinformatik<br />
2009 - benefits for environment and society conference, 4 p. Osnabrück: Universität Osnabrück (IGF).<br />
JIRKA, S., BRÖRING, A., STASCH, C. (2009). Applying OGC Sensor Web Enablement to Risk Monitoring and<br />
Disaster Management. GSDI 11 World Conference, 13 p. Rotterdam.<br />
OASIS. (2006). OASIS Web Service Notification (WSN) TC. Retrieved 2011-08-24, from http://www.oasis-open.org/committees/wsn/.<br />
ŘEZNÍK, T. (2010). Metainformation in Crisis Management Architecture - Theoretical Approaches, INSPIRE<br />
Solution. In M. KONECNY, S. ZLATANOVA, & T. BANDROVA, Geographic Information and Cartography<br />
for Risk and Crisis Management, 429 p. Berlin: Springer.<br />
ABSTRACT<br />
Near‐real‐time monitoring of volcanic emissions using a new<br />
web‐based, satellite‐data‐driven, reporting system: HotVolc<br />
Observing System (HVOS)<br />
GOUHIER M. 1 , LABAZUY P. 1 , HARRIS A. 1 , GUEHENNEUX Y. 1 , CACAULT P. 2 , RIVET S. 2 , BERGES J. 3<br />
1 Laboratoire Magmas et Volcans, CNRS, IRD, Observatoire de Physique du Globe de Clermont-Ferrand,<br />
Université Blaise Pascal, Clermont-Ferrand, France<br />
2 Observatoire de Physique du Globe de Clermont-Ferrand, CNRS, Université Blaise Pascal, Clermont-Ferrand,<br />
France<br />
3 PRODIG, UMR 8586, CNRS, Université Paris 1, Paris, France<br />
M.Gouhier@opgc.univ-bpclermont.fr<br />
Abstract<br />
We present here a web-based system developed to achieve near-real-time detection and tracking<br />
of volcanic emissions using onsite ingestion of satellite data and output of useable products via<br />
implementation of off-the-shelf algorithms. The system, named the HotVolc Observing System (HVOS),<br />
was set up to allow near-real-time tracking of ash cloud and lava flow emissions, and was first<br />
tested during the April-May 2010 eruption of Eyjafjallajökull and Etna’s January 2011<br />
eruption. HVOS is hosted by the Laboratoire Magmas et Volcans (LMV) which is part of the<br />
Observatoire de Physique du Globe de Clermont-Ferrand (OPGC) based at the Université Blaise<br />
Pascal (Clermont-Ferrand, France). This system is based on the real-time reception and processing<br />
of the full constellation of geostationary satellite data (MSG0, MSG-RSS, GOES-E, GOES-W, MTSAT,<br />
Meteosat-7), allowing worldwide monitoring of volcanic events every 15 minutes. Currently, we<br />
provide open access, in real time, to semi-quantitative data (ash index, lava index, SO2 index, RGB<br />
index). This capability was used by French authorities (CMVOA) during both Eyjafjallajökull and<br />
Grimsvötn volcanic crises. Quantitative products (i.e., ash concentration and radius, ash cloud<br />
altitude, SO2 concentration) are also provided in near-real-time either on request or during<br />
volcanic (explosive) crises, as are estimates of lava discharge rates during effusive crises.<br />
SHORT PAPER<br />
Image interpreters and interpreted images: an eye tracking study<br />
applied to damage assessment.<br />
CASTOLDI R., BROGLIA M. and PESARESI M.<br />
Joint Research Centre of the European Commission, Italy<br />
Roberta.castoldi@jrc.ec.europa.eu<br />
1. Rationale<br />
The JRC is exploring the improvement of the human assessment of building damage by applying image<br />
enhancement processing before the photo-interpretation phase. The JRC has designed a set of experiments to<br />
assess the effect of such processing on recognition mechanisms.<br />
In the frame of the Geo-Information and Visual Perception project, we apply a cognitive approach 5 to the<br />
remotely sensed imagery photo-interpretation process, exploring the possibility of improving the assessment of<br />
building damage, traditionally carried out through time-consuming and error-prone human interpretation. This<br />
task is often performed following disasters to support the information needs of emergency rescue for<br />
humanitarian relief interventions. Therefore, while on the one hand there is high pressure to deliver a result<br />
as quickly as possible, on the other hand it is of the highest importance to ensure the quality of the<br />
assessment.<br />
The ISFEREA action (Globesec Unit, <strong>IPSC</strong>, JRC) has developed several algorithms aimed at promoting the<br />
salience of targets in complex backgrounds, with the purpose of improving semi- and fully automatic image<br />
information extraction. As a plethora of different processing methods could be at the photo-interpreter’s<br />
disposal, it becomes increasingly useful to test whether different processing methods have an effect on the subjective<br />
task performance (quality and speed) of identifying building damage.<br />
There are severe limits on our capacity to process visual information, due to the limits of brain energy and<br />
neuronal activity 6 . Stimuli compete; attention filters. The more attractive a stimulus is, the more likely the<br />
information it incorporates will be processed 7 .<br />
Therefore, a processing method should support the interpreter - involved in a damage assessment - in visually<br />
filtering the huge amount of information at her/his disposal.<br />
5 See the work done by R. Hoffman (Hoffman 1984, 1989, 1990)<br />
6 M.Carrasco, Vision Research 51 (2011) 1484-1525<br />
7 Wolfe, J. M., Vo, M. L.-H., Evans, K. K., & Greene, M. R. (2011). Visual search in scenes involves selective and non-selective pathways.<br />
Trends Cogn Sci, 15(2), 77-84 and Wolfe, J.M., Horowitz, T.S. (2004). What attributes guide the deployment of visual attention and<br />
how do they do it? Nature Reviews Neuroscience, 5 1-7.<br />
As a case study, we chose the magnitude 7.0 earthquake in Haiti on 12 January 2010, because of the availability<br />
of i) airborne imagery, whose resolution allows for visual building damage assessment and ii) an official<br />
damage assessment, which can be the starting point for measuring the task performance.<br />
The building damage assessment for the affected area has been carried out jointly by the United Nations<br />
Institute for Training and Research (UNITAR) Operational Satellite Applications Programme (UNOSAT), the<br />
European Commission Joint Research Centre (EC JRC), the World Bank Global Facility for Disaster Reduction<br />
and Recovery (GFDRR) and Centre National d’Information Géo-Spatial (CNIGS) representing the Government of<br />
Haiti.<br />
The damage assessment has been conducted through the use of aerial photos provided by the World Bank<br />
(World Bank-ImageCat-RIT Remote Sensing Mission), Google and NOAA, as well as satellite imagery from<br />
GeoEye and Digitalglobe, by comparing pre-earthquake satellite imagery to post-earthquake aerial photos. The<br />
spatial resolution (level of detail) for the satellite imagery used is approximately 50 cm while for the aerial<br />
photos it is approximately 15 to 23 cm.<br />
Image analysts have categorized buildings into different damage classes through manual photo-interpretation.<br />
Several teams worked with a coordinated approach at UNITAR/UNOSAT, at the EC JRC and at the World Bank, which<br />
worked with a network of volunteer collaborators, GEO CAN (Global Earth Observation – Catastrophe<br />
Assessment Network), and with ImageCat. The results of the photo-interpretation have been harmonized in a<br />
point dataset. Each point represents a damaged building and the damage grade is classified according to the<br />
European Macroseismic Scale (EMS) 1998 8 . The visual interpretation of aerial ortho-photos allowed only the<br />
proper identification of damage classes 4 (very heavy damage) and 5 (destruction), which in the following are<br />
referred to as t4 and t5 respectively.<br />
Figure 1 - images a) and b) represent respectively examples of a t4 target AOI and a t5 target AOI, both<br />
“unprocessed”.<br />
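The harmonized point dataset described above pairs each damaged building with an EMS-98 grade. As a purely illustrative sketch (the grade descriptions paraphrase EMS-98, and the `target_label` helper is a hypothetical name), only grades 4 and 5 are kept as targets detectable from aerial ortho-photos:

```python
# Hypothetical sketch: EMS-98 damage grades and the subset recoverable
# from vertical aerial imagery (grades 4 and 5, here "t4" and "t5").
EMS98_GRADES = {
    1: "negligible to slight damage",
    2: "moderate damage",
    3: "substantial to heavy damage",
    4: "very heavy damage",   # target class "t4"
    5: "destruction",         # target class "t5"
}

def target_label(grade):
    """Return 't4'/'t5' for grades visible in aerial ortho-photos, else None."""
    return {4: "t4", 5: "t5"}.get(grade)
```

Grades 1–3 return `None` because, as noted above, visual interpretation of the imagery only allows proper identification of the two highest damage classes.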
The damage assessment was produced in rush mode and is affected by a certain degree of error; in<br />
particular, validation using ground collected data showed an overall accuracy between 61 percent and 73<br />
percent 9 , varying with respect to the damage classes. In order to improve the accuracy of the dataset, two<br />
expert photo interpreters of the ISFEREA team performed a detailed revision of the images included in this<br />
8 G. Grünthal (ed.), “European Macroseismic Scale 1998 (EMS-98)”, CONSEIL DE L’EUROPE, Cahiers du Centre Européen de<br />
Géodynamique et de Séismologie, Volume 15, Luxembourg 1998<br />
9 C. Corbane et al, “A Comprehensive Analysis of Building Damage in the 12 January 2010 Mw7 Haiti Earthquake Using High Resolution<br />
Satellite and Aerial Imagery”, Photogrammetric Engineering & Remote Sensing Vol. 77, No. 10, October 2011<br />
experiment. The resulting output has been used as the reference dataset to measure the subjective task<br />
performance in identifying building damage.<br />
The images below (Figure 2) show examples of JRC image processing chains under test in the current<br />
experiment. a) “unprocessed” sub-sample of input image used during post-earthquake damage assessment in<br />
Haiti (2010) in the operational image interpretation tasks, b) “unsharpened” the same image after conditional<br />
local convolution enhancing small details, c) “cc64” the same image after a simplification based on alpha-omega<br />
constrained connectivity on connected components and d) “rubble” the same image with injected<br />
knowledge-driven image information extracted by multi-scale differential morphological profiles (DMP).<br />
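The exact JRC chains cannot be reproduced from this description; as a rough, hypothetical illustration of the "unsharpened" idea (enhancing small details by adding back the residual of a local average), a plain, unconditional unsharp mask can be sketched as:

```python
import numpy as np

def unsharp_mask(img, amount=1.0):
    """Enhance small details by adding back the residual of a 3x3 box blur.

    NOTE: this is a generic unsharp mask, NOT the JRC conditional local
    convolution, whose exact formulation is not given in the paper.
    """
    h, w = img.shape
    # 3x3 box blur via edge-replicated padding and shifted sums.
    p = np.pad(np.asarray(img, dtype=float), 1, mode="edge")
    blur = sum(p[i:i + h, j:j + w] for i in range(3) for j in range(3)) / 9.0
    return img + amount * (img - blur)
```

A flat region is left untouched (its blur equals itself), while an isolated bright pixel is boosted, which is the small-detail enhancement the "unsharpened" chain aims at.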
In these experiments the JRC is examining human photo-interpreters while they perform a target detection<br />
task on a given set of differently processed images, in order to obtain a measure of the efficiency of each single<br />
enhancement processing method. During these experiments we record and analyse the thinking aloud – semi-structured<br />
interview, mouse click responses and eye movements. Figure 3 shows some examples of the output<br />
obtained during the eye tracking sessions: (left) “heat map” representing density of duration and localization<br />
of the fixations during the image analysis and (right) “gaze plot” representing the fixations order.<br />
Figure 2 – example of image used for damage assessment in Haiti (2010) and some of the processing chains under test in<br />
the current experiment: a) “unprocessed” raw image data, b) “unsharpened”, c) “cc64” and d) “rubble”<br />
processing.<br />
Figure 3 – example of eye tracking analysis output: (left) “heat map” (right) “gaze plot”.<br />
2. Experiment #1<br />
Before running the full experiment involving a considerable number of participants and a rich set of images,<br />
we stepped into preliminary interviews and a pilot test, as described below:<br />
Thinking aloud – semi-structured interviews: The thinking aloud – semi-structured interview was based on<br />
4 different image processing methods developed by the JRC. The aim of this preliminary semi-structured<br />
interview was to collect individual opinions aimed at fine-tuning the pilot experiment. Three groups of<br />
participants were involved, each one representative of a particular skill level: no, basic or good<br />
experience. There were 9 participants in total. The participants were shown simultaneously the same<br />
image tile produced by 4 different processing methods in order to detect destroyed buildings; no time<br />
constraint was given; the participants were asked to think aloud while performing the task and, at the end, to<br />
answer some specific questions about the perception of the images. 4 Dell monitors - 1280x1024 resolution -<br />
were used to display 1 image tile (1024x1024 pixels, 15 cm resolution) each, in 4 different processing methods:<br />
“unprocessed”; “cc64”; “rubble”; “unsharpened”. The interviews were audio-recorded and the verbal data<br />
were assessed to identify the processing methods to put under test in the pilot experiment. After<br />
ranking the processing methods according to the verbal data analysis, the “unprocessed” tiles were<br />
never ranked the worst; the “cc64” tiles were ranked the worst in 82% of the cases, while the “rubble” tiles were<br />
ranked the best in 40% of the cases. Consequently, the “unprocessed” and the “rubble” processing chains were<br />
selected as the material for the pilot experiment phase.<br />
The pilot experiment used the 2 processing methods selected according to<br />
the results of the thinking aloud – semi-structured interviews. The test was run on a Tobii T120 remote eye<br />
tracker and involved 2 photo-interpreters, 1 skilled and 1 with basic knowledge. A small set of 37 tiles (1024x1024<br />
pixels, 15 cm resolution), 19 unprocessed and 18 rubble, containing a total of 81 targets, was used as stimulus:<br />
the single image tiles were randomly presented and displayed for 5 seconds each. The subjects were fully<br />
instructed about the task and asked to identify the targets – completely destroyed and severely damaged<br />
buildings - by clicking on them. Clicking responses and eye movements were recorded and analysed.<br />
Provisional results showed in both cases (skilled photo-interpreter and basic-knowledge photo-interpreter) a<br />
positive influence of the “rubble” processing on the task performance. A marginally significant improvement in<br />
accuracy was found with the rubble-processed images over the unprocessed ones (p-value = 9.92%) (see Fig. 4).<br />
Figure 4 – example of eye tracking analysis output: (left) “heat map” and clicking responses on AOI in<br />
“rubble” image tile; (right) “heat map” and clicking responses on AOI in “unprocessed” image tile.<br />
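The paper does not state which statistical test produced the 9.92% p-value, nor the underlying counts; as an illustration only, the kind of comparison involved can be sketched as a one-sided two-proportion z-test on hypothetical hit counts:

```python
from math import erf, sqrt

def two_proportion_ztest(hit1, n1, hit2, n2):
    """One-sided two-proportion z-test (H1: p1 > p2).

    A generic sketch: the paper reports p = 9.92% but does not specify
    the test or the counts, so the numbers used below are invented.
    """
    p1, p2 = hit1 / n1, hit2 / n2
    p = (hit1 + hit2) / (n1 + n2)              # pooled proportion
    se = sqrt(p * (1 - p) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    return z, 0.5 * (1 - erf(z / sqrt(2)))     # one-sided p-value

# e.g. 30/40 targets hit on rubble vs 24/40 on unprocessed (hypothetical):
z, pval = two_proportion_ztest(30, 40, 24, 40)
```

With equal hit rates the test returns z = 0 and p = 0.5; the larger the rubble advantage, the smaller the one-sided p-value.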
The full experiment was designed so as to increase the number of participants, the experience classes and<br />
the image tile set. The experiment was run on a Tobii T120 remote eye tracker and organized into 2 sub-tests<br />
(Test1 and Test2).<br />
The stimulus was composed of 80 image tiles, each one present in 2 versions, “unprocessed” and “rubble”, with 10<br />
images per number of targets (from 0 up to 3 targets). If in Test1 an image tile was presented in the<br />
“unprocessed” version, in Test2 the same image tile was presented in the “rubble” one, and vice versa.<br />
Test1: image_A_un; image_B_rub; image_C_un; etc.<br />
Test2: image_A_rub; image_B_un; image_C_rub; etc.<br />
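The flipped assignment between Test1 and Test2 can be sketched as follows (the alternating order follows the paper's example above; the actual randomization procedure is not specified, so this is an assumption):

```python
def counterbalance(tile_ids):
    """Assign each tile the opposite processing in Test1 vs Test2.

    Alternating assignment as in the paper's example (image_A_un,
    image_B_rub, ... in Test1 and the flipped versions in Test2);
    the real randomization details are an assumption.
    """
    methods = ("unprocessed", "rubble")
    test1 = [(t, methods[i % 2]) for i, t in enumerate(tile_ids)]
    test2 = [(t, methods[(i + 1) % 2]) for i, t in enumerate(tile_ids)]
    return test1, test2

t1, t2 = counterbalance(["image_A", "image_B", "image_C"])
```

Across the two tests every tile is seen in both versions, so the processing effect is not confounded with tile content.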
There were 30 participants in total: 10 skilled photo-interpreters, 10 basic-knowledge photo-interpreters and 10<br />
non-experienced subjects, divided into the 2 tests:<br />
Test1: 5 skilled photo-interpreters, 5 basic-knowledge photo-interpreters, 5 non-experienced subjects.<br />
Test2: 5 skilled photo-interpreters, 5 basic-knowledge photo-interpreters, 5 non-experienced subjects.<br />
Procedure: every participant observed 80 image tiles and was asked to sit in front of the monitor and<br />
perform the task of a photo-interpreter involved in a damage assessment. Written instructions were displayed<br />
on the screen of the eye tracker at the very beginning of the recording session of each single test. The task was<br />
to explore the displayed image, search for destroyed or severely damaged buildings and click on them. The<br />
single image tiles were displayed randomly for 5 seconds each. The mouse cursor was visible and the subject<br />
could click on those he/she recognized as targets. The clicking response did not leave any mark on the image:<br />
the clicking responses were marked on the image tiles only in the replay and in the visualisations<br />
computed in Tobii Studio.<br />
Analysis: the eye tracker output was analyzed in Tobii Studio and in R. The analysis took into account the<br />
clicking responses and two eye movement metrics – time to first fixation (how long it takes before a participant<br />
fixates on a target AOI) and time from first fixation to next mouse click (how long it takes before a participant<br />
left-mouse clicks on a target AOI) – to assess, between subjects, how the processing method and the level of<br />
damage impacted the subjective performance.<br />
AOIs have been drawn on the targets identified as t4 and t5.<br />
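As a hypothetical sketch of the two eye movement metrics (the paper computes them inside Tobii Studio and R; the event-log format and the rectangular AOI below are assumptions):

```python
def in_aoi(x, y, aoi):
    """aoi = (xmin, ymin, xmax, ymax) rectangle in image pixel coordinates."""
    xmin, ymin, xmax, ymax = aoi
    return xmin <= x <= xmax and ymin <= y <= ymax

def aoi_metrics(fixations, clicks, aoi):
    """Time to first fixation on the AOI, and delay to the next click on it.

    `fixations` and `clicks` are lists of (t_seconds, x, y) events;
    this format is an assumption made for the sketch.
    """
    t_fix = next((t for t, x, y in fixations if in_aoi(x, y, aoi)), None)
    if t_fix is None:
        return None, None                      # target never fixated
    t_click = next((t for t, x, y in clicks
                    if t >= t_fix and in_aoi(x, y, aoi)), None)
    return t_fix, None if t_click is None else t_click - t_fix
```

A target that is fixated but never clicked yields a fixation time and no click delay, which distinguishes "seen but not recognized" from true positives.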
3. Results:<br />
False positives: a false positive is a clicking response outside the target AOIs. Taking into account the<br />
processing method, the false positives increased by 32.9% (see Table 1) in rubble processed images<br />
with respect to the unprocessed ones. Most of the false positives were generated by clicking responses<br />
on buildings without a roof, like the ones shown in Figure 5.<br />
Table 1<br />
 unprocessed rubble Rate (%)<br />
FP tot 1281 1702 32.9<br />
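The "Rate" columns in Tables 1 and 2 are the relative increase of the rubble count over the unprocessed count, e.g. (1702 − 1281) / 1281 ≈ 32.9%. A one-line helper reproduces the reported rates:

```python
def increment_rate(unprocessed, rubble):
    """Relative increase (%) going from the unprocessed to the rubble count."""
    return 100 * (rubble - unprocessed) / unprocessed

fp_rate = increment_rate(1281, 1702)   # false positives, Table 1 (≈ 32.9)
tp_rate = increment_rate(518, 606)     # true positives t5+t4, Table 2 (≈ 17)
```

The same formula recovers the per-class true positive rates of Table 2 (9.5% for t5, 36.9% for t4).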
Figure 5 - examples of eye tracking analysis output: a) and c) show “heat map” and clicking responses on a t4<br />
target AOI respectively in “unprocessed” and “rubble” image tile. b) and d) show, for the same t4 target,<br />
“unprocessed” and “rubble”, clusters of attention automatically generated by the software representing<br />
the percentage of the participant attention.<br />
i) True positives – Clicking response: the clicking responses inside the target t4 and t5 AOIs were considered as<br />
true positives. The general results are shown in Table 2 below:<br />
Table 2<br />
unprocessed rubble Rate %<br />
TP t5+t4 518 606 17<br />
TP t5 377 413 9.5<br />
TP t4 141 193 36.9<br />
Overall, taking into account the processing method, the percentage of clicked targets increased by 17%. Taking<br />
into account the damage level and the processing method, the improvement of the test participants’<br />
performance on t4 was higher than that on t5.<br />
The impact of the rubble processing on true positives was higher on the skilled participants, who improved<br />
their performance on t4 and t5 by 54.1% and 20.3% respectively in rubble processed image tiles. The non-experienced<br />
participants also improved their performance on t4 significantly on rubble processed image<br />
tiles. The impact of the processing method, as far as the clicking responses on t4 and t5 are concerned, was lower for the basic<br />
participants (see Table 3 and Chart 1).<br />
Table 3<br />
Unprocessed → Rubble, % increment rate<br />
 t4 t5<br />
skilled 54.1 20.3<br />
basic 18 6<br />
no exp 42.6 3.14<br />
Chart 1<br />
ii) True positives – Observation count and time to first fixation:<br />
The “observation count” – how many observations were spent on t4 and t5 in the unprocessed and rubble<br />
processed images – shows an increase of 10.4% in the t4 observations in the rubble processed image tiles<br />
with respect to the unprocessed ones. For the observations on t5, the increase was 7% (see<br />
Chart 2).<br />
Chart 2<br />
The “time to first fixation” decreases, on average for all the participant classes, from t4 in unprocessed image<br />
tiles (2.5 seconds) to t5 in rubble processed ones (2 seconds): this means that the damage level and the processing<br />
method impact the time needed to fixate a target (Chart 3). The main trend is mostly reflected by the skilled<br />
participants’ ocular behaviour.<br />
Chart 3<br />
iii) True positives – time from first fixation to next mouse click: this metric shows that the “reaction time” to<br />
the target, given by the clicking response, is faster when the damage level is higher. In the case of t4 the<br />
processing method helps to markedly reduce the time between the first fixation on the target AOI and<br />
the clicking response (see Chart 4).<br />
Chart 4 – time from first fixation to click (seconds): t4 unprocessed 1.2, t4 rubble 1.0, t5 unprocessed 1.1, t5 rubble 1.1.<br />
4. Conclusion:<br />
The experiment shows that the level of damage and the processing method of the image tiles impact the<br />
performance and ocular behaviour of all participant groups, as far as the metrics taken into account are concerned. In particular, the<br />
lower damage level targets, the t4s, which are more difficult to find, are more likely to be detected and recognized<br />
when enhanced by the rubble processing method. Given the time pressure and the huge amount of information (visual<br />
and cognitive) to process during a damage assessment, the enhancement given by the rubble processing<br />
method offers valid support, decreasing the detection time and speeding up the clicking response (see Chart 5).<br />
Chart 5 – summarizing chart (values per condition):<br />
 t4 un t4 rub t5 un t5 rub<br />
observation count 335 370 512 548<br />
click count 141 193 377 413<br />
time to first fixation (s) 2.5 2.3 2.2 2.0<br />
time from first fixation to click (s) 1.2 1.0 1.1 1.1<br />
5. Further steps:<br />
It is still necessary to decrease the rate of false positives, which occur mostly in the rubble processed image<br />
tiles. To investigate the issue further, we designed a tool, based on eye tracking technology, combining eye gaze data with<br />
Computer-Assisted Detection algorithms to improve detection rates 10 of true positives.<br />
10 G. D. Tourassi, M. A. Mazurowski, B. P. Harrawood and E. A. Krupinski, "Exploring the potential of context-sensitive CADe in<br />
screening mammography," Medical Physics 37, 5728-5736<br />
ABSTRACT<br />
Crisis maps readability: first results of an experiment using the<br />
eye-tracker<br />
CASTOLDI R., CARRION D., CORBANE C., BROGLIA M. and PESARESI M.<br />
Joint Research Centre of the European Commission, Italy<br />
Roberta.castoldi@jrc.ec.europa.eu<br />
Abstract<br />
An eye tracker is a device for measuring eye positions and eye movements. Eye trackers are used in<br />
research in many different fields: from ophthalmology to visual perception, from linguistics to visual<br />
design, from psychology to human-computer interfaces. At the JRC we are applying eye tracking to damage<br />
assessment analysis based on satellite images and aerial photos. According to the eye-mind<br />
hypothesis, eye movements can be a window on the cognitive processes and are able to reveal the<br />
reasoning strategies involved in task execution.<br />
An empirical study has been designed in the specific application of crisis maps in support of first<br />
responders, field operators and decision-makers involved in emergency events management.<br />
The purpose was to run an experiment on a group of crisis event management actors to explore<br />
the way they interact with emergency products (e.g. digital maps) and to come up with<br />
recommendations regarding the best practices for making crisis maps more efficient,<br />
comprehensible and usable for the end-users. To analyse the user/map interaction we relied on<br />
eye movement data and cognitive task analysis methods (e.g. retrospective thinking aloud and<br />
questionnaires). The first qualitative results of this experiment will be presented.<br />
SESSION IV<br />
TOWARDS ROUTINE VALIDATION AND QUALITY<br />
CONTROL OF CRISIS MAPS<br />
Chair: Michael Judex<br />
Crisis maps are gradually moving from the research domain to the production domain<br />
and if, on the one hand, standardization is still a long way off, on the other hand the awareness of<br />
the importance of reliable, consistent and usable products is steadily rising. Several actors are<br />
currently involved in some kind of check, quality control or validation of the maps during their<br />
ordinary workflow: users may not formally accept or refuse a product but, de facto, they have to<br />
decide whether they will use it and whether they ingest it into their GIS, in<br />
particular if the product is a vector layer; providers verify their outputs against requirements,<br />
specifications or a formal checklist before releasing them; sometimes validation is<br />
assigned to an independent party. This session will present and discuss practical cases and<br />
experiences of map validation and quality control integration in the ordinary workflow, involving<br />
different points of view.<br />
SHORT PAPER<br />
A methodological framework for qualifying new thematic services<br />
for an implementation into SAFER emergency response and<br />
support services<br />
RÖMER H., ZWENZNER H., GÄHLER M. and VOIGT S.<br />
German Aerospace Center (DLR), German Remote Sensing Data Center (DFD), Oberpfaffenhofen, Germany<br />
hannes.roemer@dlr.de<br />
Abstract:<br />
In the FP-7 GMES SAFER project a pre-operational service for emergency response and emergency<br />
support products was implemented to reinforce the European capacity to respond to emergency<br />
situations. SAFER focused not only on “rapid mapping” and validated products during the crisis<br />
phase but also on the enrichment of the service with a wider set of thematic services. For the<br />
selection of new thematic services, not only the accuracy of the products was of interest: service<br />
maturity, user interest and compliance with the SAFER operational model are also important<br />
issues for guaranteeing a validated service.<br />
The aim of this contribution is to present a methodological framework that was developed and<br />
applied for the evaluation and qualification of selected thematic services into the SAFER portfolio<br />
Version 2 (V2). The concept is characterized by strong user involvement including European Civil<br />
Protection Organisations and Humanitarian Aid Organisations. The framework consists of several<br />
steps comprising – among others – the definition of assessment criteria (here termed as Service<br />
Evolution Criteria), the Service Maturity Analysis (SMA), a ranking of interest/relevance by involved<br />
users and an operational performance check (operational check = OC). In total 19 Service Evolution<br />
Criteria were defined in collaboration with the users and were applied for both the SMA and the<br />
OC. The criteria cover aspects of software and data sustainability, service production time, user<br />
support and user availability, service transferability, metadata compliance and the reliability of the<br />
map contents. The SMA was designed to assess whether the services are mature and sustainable<br />
whereas during the OC the services were tested under operational conditions. The OC was<br />
conducted in collaboration with several project partners, e.g., the JRC conducting a scientific and<br />
technical validation of the delivered products.<br />
The qualification process led to a substantiated suggestion of thematic services to be implemented<br />
into SAFER V2 and thus served as an important decision support for the project stakeholders.<br />
Finally, the selected approach ensured that the thematic variety of the existing “rapid<br />
mapping” services has been substantially increased.<br />
Keywords: rapid mapping, qualification, emergency response, thematic services<br />
1. Introduction<br />
With the aim to strengthen the European capacity to respond to emergency situations, a pre-operational<br />
service for emergency response and emergency support products was implemented in the FP-7 GMES SAFER<br />
project. Two major aims of SAFER are (a) the improvement, consolidation and validation of information<br />
services focussing on rapid mapping during the response phase and (b) the enrichment of existing pre-operational<br />
services with a wider set of information products covering more widely the response cycle, from<br />
the prevention phase to the post-crisis phase. This second priority implies a longer-term qualification process,<br />
which started at the beginning of the project on 1 January 2009, finished in July 2011 and is termed in<br />
the following Service Evolution.<br />
The focus of this contribution is to present the methodological framework that was designed and developed<br />
within the context of Service Evolution of SAFER. Furthermore, as the framework was already successfully<br />
applied and implemented within SAFER, the developed methodology is not only of theoretical but also of great<br />
practical value.<br />
A crucial element of the multi-stage concept includes a strong involvement of European users, such as<br />
European Civil Protection Organisations and Humanitarian Aid Organisations represented by the UN. Their<br />
main role in the framework encompasses particularly the identification of qualification criteria (Service<br />
Evolution Criteria) and their contribution to the evaluation of the added-value of the new services in<br />
comparison to the existing Emergency response and support services (Core Services = CS).<br />
In general the framework was developed to evaluate the maturity and operability of the new thematic services as<br />
well as their added value in relation to the CS. In addition to the user community, the Service Evolution<br />
process was supported by the JRC, CNES, e-GEOS, Infoterra (UK and Germany) and EUSC.<br />
The following chapter 2.1 provides a comprehensive overview of all steps of the qualification framework,<br />
whereas chapters 2.2 – 2.5 will pick up some of these steps in more detail.<br />
2. Methodological framework<br />
2.1. General approach<br />
As illustrated in figure 1, Service Evolution describes the process of qualifying new thematic services for<br />
implementation into the existing pre-operational model of the SAFER project. Service Evolution explicitly<br />
focuses on the evaluation and qualification of the services themselves, rather than on an in-depth analysis and<br />
validation of the provided products. The latter was conducted by the JRC in parallel and includes a technical and<br />
scientific validation in which external experts from different research domains were also involved. As<br />
indicated by the red dashed frames on fig. 1, the involvement of users played a fundamental role throughout<br />
the qualification process. In the first step, the identification of Service Evolution Criteria, users (i.e. the Italian<br />
Civil Protection Authority, DPC), service providers (i.e. DLR, EUSC, ITUK) and other project partners agreed on<br />
the definition of 19 Service Evolution Criteria to be used for further qualification steps, in particular the Service<br />
Maturity Analysis (SMA). During the SMA, all thematic service providers gave a detailed inventory of the<br />
maturity of their services.<br />
The provided information was then checked against the predefined criteria. A further qualification stage<br />
was a ranking of interest by the involved users. In total 13 national civil protection organisations and five<br />
humanitarian aid organisations were asked to rank the thematic services that passed the SMA by level of<br />
interest or relevance. In the operational check (OC) a realistic test scenario was created, in which the service<br />
providers had to show the operational performance of their services.<br />
A prioritisation of the thematic services was made on the basis of the SMA, the ranking statistics and the<br />
OC. The required budget and the separation from the CS were also taken into<br />
account during this pre-selection. This prioritisation served as an essential basis for the final decision on the<br />
qualification of the services, which was taken by the SAFER executive committee (EXCOM). It needs to be<br />
emphasized that the users were also involved in the decision phase of the framework and were represented<br />
in the committee by the Project User Board (PUB). The qualification process leads to the implementation<br />
phase, where the existing product portfolio versions 0 and 1 and also many other elements of the pre-operational<br />
model of SAFER had to be updated.<br />
The following sections give a more comprehensive overview of the four major qualification steps, the criteria<br />
selection, the SMA, user ranking and the OC.<br />
Figure 1. The concept and workflow of Service Evolution (flow diagram: Service Portfolio V0/V1 with thematic services → identification of Safer Service Evolution Criteria → Service Maturity Analysis (SMA) → ranking of interest of users → Operational Performance Check (OC) → prioritization of thematic services → decision by the Project Executive Committee (EXCOM) → updated Service Portfolio V2 with qualified thematic services; scientific & technical validation and user involvement accompany the process).<br />
2.2. Selection of Service Evolution Criteria<br />
As illustrated in fig. 1, the Safer Service Evolution Criteria were first defined collaboratively during a<br />
workshop on 9 June 2009 in cooperation with the user forum, represented by the Italian Civil Protection<br />
Authority (DPC), the rapid mapping service provider community (DLR, ITUK) and other project partner<br />
organisations that are responsible for the product dissemination and geo-data infrastructure (e-GEOS), the<br />
product validation (JRC) and the quality control (CNES).<br />
In a first step all contributing partners defined their own Service Evolution Criteria from their point of<br />
view and areas of expertise. The second step was the synthesis of these criteria, which was done by DLR. At this<br />
stage, the consortium agreed on the definition of 19 Service Evolution Criteria to be used for the further<br />
service qualification process. The criteria can in general be divided into the following four groups:<br />
Service performance, Product quality, Dissemination and Usability/Additional value.<br />
Service performance criteria were mainly defined by the well-established service providers and the user<br />
community. A major criterion here is sustainability with regard to the required EO and non-EO data<br />
sources and the support of additional software/tools used for the product generation. Furthermore, service<br />
performance refers to the time requirements for different activation modes, the 24 hours / 7 days availability in the<br />
case of Emergency Response services, the required costs and the technical support provided for the users.<br />
Even though a scientific and technical validation of the products was carried out by JRC on a sample basis, each<br />
new service provider should be familiar with the validation scheme. Product quality criteria include map and<br />
layout criteria, such as consistency between map and legend symbols, compatibility between geographic<br />
projections of the different entities or geographic information layers included in the same product.<br />
The criteria dealing with the product dissemination cover the type of delivered data sets, the metadata<br />
compliance to ISO 19115 standards and, in case of data publication as remote services, the compliance to OGC<br />
reference standards (WMS, WFS etc.).<br />
The Usability/Additional value group refers to the innovative and additional value of the service compared to existing<br />
European services and the CS, as well as to the service transferability. The latter addresses the question of<br />
whether the service is limited to specific areas/regions or whether there are dependencies on<br />
specific data availability. A further criterion is the user feedback from previous GMES projects.<br />
The selected Service Evolution Criteria were only slightly modified and updated after the first workshop and<br />
played a fundamental role in the next qualification step, the SMA, presented in chapter 2.3.<br />
Figure 2. Safer service criteria identification (workshop diagram: the users (DPC), the emergency (support) mapping providers (DLR, ITUK), validation (JRC), quality control (CNES) and technical compliance / the Gateway 1 (e-GEOS) all fed the workshop on the definition of Service Evolution Criteria, which in turn fed the Service Maturity Analysis).<br />
1 The Safer Gateway is a web application sustaining the GMES Emergency Response Service and providing several interfaces according to the user profile.<br />
2.3. Service Maturity Analysis<br />
During the Service Maturity Analysis (SMA) all thematic service providers (SPs) gave a detailed inventory of the maturity of their<br />
services by filling in a dedicated Service Maturity Questionnaire (SMQ). The questions in the SMQ were closely<br />
oriented to the predefined Service Evolution Criteria described in section 2.2.<br />
In the evaluation of the SMQ, two of the questions were treated as mandatory criteria:<br />
firstly, the thematic SP had to state that its product/service can be considered mature and that the SP<br />
wants to have it implemented in the next SAFER version; secondly, the SP had to guarantee that a sustainable<br />
supply of EO and non-EO data can be assured. Only if these criteria were fulfilled was the service/product<br />
checked against the remaining questions. For those, a quantitative evaluation scheme comprising three<br />
levels of importance with corresponding weighting factors was applied. For example,<br />
knowledge and usage of the SAFER template was considered less important than the general transferability of<br />
the product to other areas or the ability to provide support in English. In order to achieve<br />
maximum transparency in the evaluation, each SP was provided with an evaluation sheet in addition to the<br />
SMQ. This sheet lists the evaluation points and the weighting factors assigned to each<br />
question, and indicates which questions are considered mandatory criteria.<br />
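The evaluation scheme just described (a mandatory-criteria gate followed by weighted scoring) can be sketched as follows. The weights and question names below are hypothetical illustrations; the actual point values and weighting factors are defined in the SAFER evaluation sheet:<br />

```python
# Hypothetical weights for the three importance levels.
WEIGHTS = {"high": 3, "medium": 2, "low": 1}


def evaluate_smq(answers, questions):
    """Return the achieved percentage, or None if a mandatory criterion fails.

    `questions` maps a question id to ("mandatory", None) or to
    ("weighted", importance_level); `answers` maps a question id to a
    bool for mandatory questions, or a score in [0, 1] for weighted ones.
    """
    # Gate: every mandatory criterion must be fulfilled first.
    for qid, (kind, _) in questions.items():
        if kind == "mandatory" and not answers[qid]:
            return None  # excluded from further consideration
    # Weighted scoring of the remaining questions.
    achieved = total = 0.0
    for qid, (kind, level) in questions.items():
        if kind == "weighted":
            w = WEIGHTS[level]
            achieved += w * answers[qid]
            total += w
    return 100.0 * achieved / total
```

An SP failing either mandatory question receives no score at all, mirroring the exclusion step; otherwise the percentage serves as the quantitative basis described below.<br />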
The number of evaluation points (or the corresponding percentage values) achieved by each SP served as an<br />
important quantitative basis for the Service Evolution in general. Furthermore, the results were related to the<br />
quantitative results derived from the ranking of interest of the involved users (cf. section 2.4).<br />
Figure 3. The Service Maturity Analysis (SMA): SPs fill in the SMQ and receive the evaluation sheet; services that do not meet the mandatory criteria are excluded from further consideration, while the remainder undergo the quantitative evaluation feeding the SAFER Service Evolution.<br />
2.4. Ranking of interest of users<br />
Generally, the user community in SAFER is represented by the Project User Board (PUB). In order to obtain<br />
wider feedback than from the five PUB members alone, the members of the External User Advisory Committee<br />
(EUAC) were addressed during an EUAC conference. The participants comprised the five humanitarian aid<br />
organisations WFP, UNOSAT, UNHCR, UNICEF and IFRC as well as 13 National Focal Points from Germany, the UK,<br />
Hungary, Austria, Bulgaria, the Netherlands, Portugal, Croatia, France, Bosnia &amp; Herzegovina, Italy, Greece and<br />
Sweden. They were asked to rank the thematic services that passed the Service Maturity Analysis for SAFER<br />
versions 1 and 2 according to their level of importance or interest. The ranking scheme ranges from 1 for<br />
very low interest to 5 for very high interest (2 = low, 3 = medium and 4 = high interest).<br />
Since SAFER is a strongly user-driven project, the user ranking was a major component of the<br />
general qualification process of the thematic services. To account for the unequal interest in and impact of<br />
the different disaster types (e.g. flood is of much more interest than earthquake for European users), user<br />
interest was categorized by disaster type, because the aim of SAFER was to enrich the Service with a<br />
wide variety of thematic services, and not only flood services, for example.<br />
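Aggregating the 1-5 votes per disaster type, as described above, can be sketched as follows (the data layout is an assumption for illustration; the actual ranking was compiled from the EUAC conference responses):<br />

```python
from collections import defaultdict
from statistics import mean


def rank_by_disaster_type(rankings):
    """Average the 1-5 interest scores per service, grouped by disaster type.

    `rankings` is a list of (disaster_type, service, score) tuples, one per
    user vote; services are compared only within their own disaster type,
    so e.g. flood services do not crowd out earthquake services.
    """
    per_type = defaultdict(lambda: defaultdict(list))
    for dtype, service, score in rankings:
        per_type[dtype][service].append(score)
    return {
        dtype: sorted(
            ((service, mean(scores)) for service, scores in services.items()),
            key=lambda item: item[1],
            reverse=True,
        )
        for dtype, services in per_type.items()
    }
```
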
2.5. Operational performance check<br />
The operational performance check (OC) is a key step of the Service Evolution process. It aims at assessing<br />
whether the new thematic services can be offered under operational conditions. As with the SMQ, the<br />
assessment criteria were closely related to the predefined evolution criteria (cf. section 2.2). In contrast to the<br />
SMA, the focus was on the criteria related to operational performance, in particular user<br />
support, time requirements, service transferability, technical compliance and service sustainability. The<br />
OC thus comprises different sub-exercises for the respective criteria groups, carried out in collaboration<br />
with other project partner organisations such as eGEOS, ITUK, EUSC and the JRC (cf. Figure 4). As already<br />
mentioned in section 2.1, the product validation was carried out in parallel to the Service Evolution<br />
process and the OC, but was not a component of the Service Evolution in a narrower sense.<br />
At the beginning of the exercise, each SP was given a time window during which the OC had to be carried out,<br />
so as to keep them on alert. During the first week of this period, each SP was provided with the Service Request<br />
Form (SRF), which contains general information about the test scenario, such as the area of interest (AOI), the<br />
deadline for product delivery and the information required for product dissemination. The SRF represents the<br />
official SAFER document used to specify and standardise the service request of the user. Figure 4 illustrates<br />
that most of the sub-exercises were carried out after product generation; however, some tests were also<br />
carried out right after the triggering of the service.<br />
Figure 4. The operational performance check (OC): from scenario definition through product generation by the SP(s) and product dissemination to the general evaluation (DLR/JRC), with sub-exercises on the reliability of information content (JRC), user support (DLR), sustainability of non-EO data, EO data and software (ITUK, DLR), time requirements (DLR/JRC) and technical compliance (eGEOS), feeding the SAFER Service Evolution.<br />
Service transferability was assessed by choosing test scenarios outside the SP’s working area and was<br />
carried out by the JRC and DLR. User support was evaluated via phone interviews in order to check the SP’s<br />
availability, its ability to provide user support in English and its flexibility in dealing with potential user<br />
requirements, such as making small adjustments to the product (e.g. changing the projection from UTM to a<br />
local projection) even after product finalisation. The time requirement was checked by comparing the<br />
deadline for product delivery indicated in the SRF with the actual time of product delivery (upload of<br />
the data). The time requirements closely follow those that apply to the CS and depend on the respective<br />
activation mode (rush mode or emergency support mode). The technical compliance<br />
check was conducted after product dissemination in order to evaluate the metadata quality, i.e.<br />
conformity to the ISO 19115/19139 and INSPIRE standards. The service sustainability check encompasses checks<br />
of the sustainability of the software applied (e.g. technical support, licence model) and of the EO and non-EO data<br />
sets that were required either for processing or for product improvement (i.e. data sources, time required for<br />
data acquisition, etc.).<br />
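The time-requirement check described above reduces to comparing the elapsed time since triggering with the window allowed for the activation mode. A minimal sketch, with hypothetical window durations (the actual values follow the CS time requirements):<br />

```python
from datetime import datetime, timedelta

# Hypothetical maximum delivery windows per activation mode; the real
# values are those that apply to the Core Service.
MAX_DELAY = {
    "rush": timedelta(hours=6),
    "emergency_support": timedelta(hours=24),
}


def meets_time_requirement(mode, triggered, delivered):
    """True if the product upload fell within the window for this mode."""
    return delivered - triggered <= MAX_DELAY[mode]
```
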
As the OC was the only practical test within the Service Evolution process, its results served as an essential<br />
basis for the general evaluation of the service performance. The products delivered in the frame of<br />
the OC exercise provided a good basis for the users to assess what they can expect from the thematic services<br />
under operational conditions.<br />
3. Concluding remarks and outlook<br />
The objective of this contribution was to present a methodological framework that has already demonstrated<br />
its practical value within the GMES SAFER project. In summary, the framework has two major strengths: a) the<br />
close cooperation with involved users throughout the evaluation process and b) the integration and<br />
consideration of many different assessment criteria within the evaluation process, realized through close<br />
cooperation with different partner organisations and experts. It can therefore be concluded that the selected<br />
framework provided a comprehensive and reliable basis for a fair and transparent qualification process of the<br />
thematic services that were implemented in the SAFER operational model. Based on the practical<br />
experience with the application of the framework, it can be assumed that the general structure of the<br />
methodology is also transferable and useful in comparable application cases.<br />
Even though the independent scientific product validation was carried out in parallel to the Service Evolution,<br />
the authors agree that a closer link between the two parts would have simplified the qualification process in<br />
general. However, the extent to which users will benefit from the new thematic services will only<br />
become apparent in the future. Here the most important indicators are the number of user requests per time period<br />
(activations) and the degree of user satisfaction in case of an activation of a new thematic service.<br />
References<br />
GAEHLER, M.; FOERSTER, A.; ZOSSEDER, K.; ZWENZNER, H. (2010): Report of defining the selection criteria for transfer<br />
of thematic services to the pre-operational emergency response services. Project report SAFER-<br />
D21000.1-SDD-DLR-01.03, 32p.<br />
ROEMER, H.; ZWENZNER, H. (2011): Service Maturity Analysis (V2). Project report SAFER-D21000.4-SDD-DLR-<br />
01.00, 310p.<br />
ROEMER, H.; ZWENZNER, H. (2011): Portfolio of thematic and technologic innovation services within the project<br />
and impact on operational architecture - Version 2. Project report SAFER-D21000.5-SDD-DLR-01.00,<br />
171p.<br />
EXTENDED ABSTRACT<br />
A methodology for a user oriented validation of satellite based<br />
crisis maps<br />
JUDEX M. 1, SARTORI G. 2, SANTINI M. 3, GUZMANN R. 3, SENEGAS O. 4, SCHMITT T. 5<br />
1 Federal Office of Civil Protection and Disaster Assistance, Germany<br />
2 World Food Programme, Italy<br />
3 Dipartimento della Protezione Civile, Italy<br />
4 United Nations Institute for Training and Research – Operational Satellite Applications Programme (UNOSAT), Switzerland<br />
5 Ministère de l’intérieur, Direction de la Sécurité Civile, France<br />
michael.judex@bbk.bund.de<br />
Abstract<br />
The European Commission, together with the European Space Agency, has established a civilian geospatial<br />
initiative unparalleled by any other civilian-minded project today. The initiative “Global Monitoring for<br />
Environment and Security” (GMES) has the objective of providing access to geo-information, be it from Earth<br />
observation resources or in-situ measurements. Expected benefits range from political decision<br />
making to the security of citizens, and the development is hence first and foremost user driven. One of<br />
the ongoing projects, Services and Applications For Emergency Response (SAFER), develops<br />
the provision of satellite-based cartographic products and analyses to responsible authorities in case of<br />
natural or man-made disasters.<br />
Being a user-driven project, SAFER encompasses a component designed to have the project’s<br />
products validated by the users. Unlike ‘traditional’ scientific and technical validation, whereby processes<br />
and products are tested against predefined protocols using a representative and significant<br />
sample, the main challenge for the Project User Board (PUB) lay in establishing a methodology<br />
that struck the right balance between validating objective and subjective criteria from the users’<br />
perspective. Users have differing requirements, multiple expectations and different levels of<br />
technological sophistication; how to reconcile these multiplicities is at the core of establishing an<br />
appropriate validation methodology.<br />
For the validation process, the PUB compares the product requested by the user – based on the<br />
Service Request Form (SRF) and the emergency activation Data Sheets – with the User<br />
Feedback Form (UFF), and then juxtaposes these with the product Portfolio to analyze whether the<br />
product was delivered as promised. The PUB evaluates both documents – the SRF and UFF –<br />
using two distinct indexes: the ‘Coherence Index’ – how closely the product reflected the promised<br />
product in terms of technical content, delivery time, etc. – and the ‘Satisfaction Index’ – how useful<br />
and valuable the product was in planning and supporting the emergency response. The Coherence<br />
Index focuses on objective criteria extracted from the forms, measured contextually within a<br />
specific disaster type; in short, it is ‘content’ centred. The Satisfaction Index, on the other hand,<br />
measures subjective criteria and is thus ‘experience’ centred.<br />
The PUB therefore compares the product ‘requested’ with the product ‘delivered’ and analyzes the<br />
answers with weighted indicators derived from prioritized needs. The PUB has worked on assessing<br />
the results, fine-tuning the models by adjusting the indicators’ weights on the basis of product<br />
quality, and establishing ‘thresholds’ that determine whether the product is acceptable or not.<br />
To support this fine-tuning, the concept of macro-indicators has been introduced to simplify the<br />
process. Roughly speaking, the PUB members have a ‘sense’ of the weight allocation, based on<br />
the experience of specialists and on the realities in the field – the questions ‘is the product good,<br />
useful, coherent?’ become the locus of the equation. Macro-indicators (non-quantitative) are the<br />
general criteria used to translate the process into logic and to realign the micro-indicators<br />
(the indicators of the Coherence and Satisfaction Indexes) with these macro-indicators.<br />
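A weighted index of micro-indicators with an acceptance threshold, as outlined above, can be sketched as follows. The indicator names, weights and threshold value are hypothetical; the actual weights are those tuned by the PUB:<br />

```python
def weighted_index(scores, weights):
    """Weighted average of micro-indicator scores, normalised to [0, 1].

    `scores` and `weights` map micro-indicator names to a score in [0, 1]
    and to its (PUB-assigned) weight, respectively.
    """
    total = sum(weights.values())
    return sum(weights[k] * scores[k] for k in weights) / total


def product_acceptable(coherence, satisfaction, threshold=0.6):
    """Hypothetical acceptance rule: both indexes must reach the threshold."""
    return coherence >= threshold and satisfaction >= threshold
```

Re-weighting against the macro-indicators then amounts to adjusting the `weights` mapping until the computed indexes agree with the PUB members’ overall ‘sense’ of the product.<br />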
The analysis was applied to the activations performed between December 2010 and the beginning of<br />
May 2011; the case study covers 15 activations. The validation methodology and<br />
mechanism developed by the PUB has been successfully applied and has proven to function well.<br />
The results of the validation show that products did not reach the maximum coherence score<br />
because what is delivered does not completely reflect what was requested by the users. However,<br />
in most cases the Satisfaction Index gives high scores, indicating that the users were happy with<br />
the results. The reasons behind this phenomenon could be that a) the user is still satisfied with the<br />
product despite the deviations, b) the user changed the request after submitting the SRF without proper<br />
documentation in the SOR, or c) parts of the products were delivered without being mentioned in the<br />
operational documents.<br />
EXTENDED ABSTRACT<br />
Quality policy implementation: ISO certification of a Rapid<br />
Mapping production chain<br />
ALLENBACH B. 1, FONTANNAZ D. 2, RAPP JF. 1, PEDRON JL. 2, CHAUBET JY. 3<br />
1 SERTIT, France<br />
2 CNES, France<br />
3 APAVE, France<br />
Bernard.allenbach@sertit1.u-strasbg.fr<br />
Abstract<br />
One of the challenges initially pursued by the SAFER project was to qualify and validate an<br />
Emergency Response Service in order to increase its acceptance by the users. The initial view of quality, as<br />
set up by the writers of the project, split "Quality" into several work packages or tasks handled<br />
by different partners. Additionally, a qualification-validation was also expected from the users. A<br />
quality management system was implemented aiming at coordinating quality issues through the<br />
following means: a quality assurance plan, a service control plan, the computation of specific service<br />
performance indicators and a set of associated regular quality reports. Close to the end of the<br />
project, a main lesson learnt concerning quality management is the difficulty of ensuring the<br />
application of such a quality assurance plan when the contractual links between partners are weak. The short<br />
duration of the project and the number of expected service portfolio versions are also major<br />
handicaps for the integration of quality requirements into operational procedures. Nevertheless,<br />
the positive results from this experience are numerous, the first of course being the quality awareness<br />
of all the actors of the project and the global improvement in the understanding of what quality<br />
should or could be in our realm. As a conclusion, a draft attempt to consider the roles and<br />
responsibilities, from the quality viewpoint, of the different actors involved in the Emergency<br />
Response is proposed. This effort relies strongly on the statement that quality control is closely<br />
linked with the core production know-how of each actor, and is therefore logically and usually a<br />
corporate and mainly private undertaking in industry. Overall quality (service quality) is<br />
expected to arise from the addition of the internal quality of all the segments (actors, procedures)<br />
chained to create the service. Hence, "quality can be published" at all scales at the interfaces<br />
between processes, by comparing results with internal process references and/or external<br />
references such as the portfolio specifications, possibly standardized. It should be noted that,<br />
unfortunately, it is not always possible to quantify everything we would like to measure to ensure<br />
quality conformity. A more comprehensive way to set up quality management is then to<br />
use a generic standardized quality methodology like ISO 9001:2008. SERTIT has made this choice.<br />
Moreover, the aim was to have a strong impact on production; thus the domain of certification was<br />
focused on the Rapid Mapping production chain, integrating the fundamental time constraint for<br />
rush production. Certification has proved to be a tough task requiring a lot of resources over a one-<br />
year process. Despite the focused perimeter, quality management has truly, often deeply,<br />
impacted all major management processes and resources of the service. But the story had a happy<br />
ending, as SERTIT has obtained the certification of its Rapid Mapping Service under the following<br />
denomination and scope: “Production and publication within 6 hours after reception of the first<br />
satellite data of crisis geo-information for civil protection services.” This successful endeavor,<br />
consistent with SAFER targets and industry methodology, materializes for the user the jump<br />
from best effort to certified production means, methodologies and resources.<br />
AUTHORS INDEX<br />
A<br />
AL-KHUDHAIRY D................................................................ 8<br />
ALLENBACH B. ................................................................. 107<br />
B<br />
BERGES J...........................................................................81<br />
BLAES X. ...........................................................................69<br />
BROGLIA M. ................................................................ 83, 93<br />
C<br />
CACAULT P........................................................................81<br />
CARRION D. ......................................................................93<br />
CASTOLDI R................................................................. 83, 93<br />
CHAUBET JY. ................................................................... 107<br />
CHAVENT N.......................................................................13<br />
CORBANE C. ......................................................................93<br />
D<br />
DILOLLI A. .........................................................................17<br />
E<br />
EDWARDS S. .....................................................................15<br />
EHRLICH D. .......................................................................37<br />
F<br />
FONTANNAZ D. ............................................................... 107<br />
G<br />
GÄHLER M. .......................................................................97<br />
GALLEGO J. .......................................................................37<br />
GORZYNSKA M. .................................................................57<br />
GOUHIER M. .....................................................................81<br />
GUEGUEN L.......................................................................47<br />
GUEHENNEUX Y. ...............................................................81<br />
GUZMANN R. .................................................................. 105<br />
H<br />
HARRIS A. .........................................................................81<br />
HORÁKOVÁ B. ...................................................................73<br />
HUBBARD D. .....................................................................11<br />
J<br />
JANIUREK D. ..................................................................... 73<br />
JOYANES G. ...................................................................... 57<br />
JUDEX M. ....................................................................... 105<br />
K<br />
KEMPER T....................................................................37, 69<br />
KOETTL C. ......................................................................... 15<br />
L<br />
LABAZUY P. ...................................................................... 81<br />
LANFRANCO M. ................................................................ 17<br />
LOMBARDO D. .................................................................. 17<br />
M<br />
MOREAU K. ...................................................................... 55<br />
O<br />
OSTERMANN F. ................................................................. 29<br />
P<br />
PEDRON JL...................................................................... 107<br />
PESARESI M. .......................................................... 47, 83, 93<br />
R<br />
RAPISARDI E. .................................................................... 17<br />
RAPP JF. ......................................................................... 107<br />
RIVET S............................................................................. 81<br />
RÖMER H. ........................................................................ 97<br />
ROUMAGNAC A. ............................................................... 55<br />
S<br />
SANTINI M...................................................................... 105<br />
SARTORI G. ..................................................................... 105<br />
SCHMITT T...................................................................... 105<br />
SENEGAS O. .................................................................... 105<br />
SOILLE P. .....................................................................37, 47<br />
SPINSANTI L...................................................................... 29<br />
STANKOVIČ J..................................................................... 73<br />
T<br />
TIEDE D. ...........................................................57<br />
U<br />
USSORIO A. .......................................................57<br />
V<br />
VEGA EZQUIETA P..............................................57<br />
VOIGT S. ...........................................................97<br />
W<br />
WANIA A. ....................................................37, 69<br />
Z<br />
ZWENZNER H. ................................................... 97<br />
EUROPEAN COMMISSION<br />
EUR 24948 EN – Joint Research Centre – Institute for the Protection and Security of the Citizen<br />
Title: Conference Proceedings: VALgEO 2011 - 3rd International workshop on Validation of geo-information<br />
products for crisis management.<br />
Authors: Christina Corbane, Daniela Carrion, Marco Broglia, Martino Pesaresi<br />
Luxembourg: Publications Office of the European Union<br />
2011 – 111 pp. – 21 x 30 cm<br />
EUR – Scientific and Technical Research series – ISSN 1018-5593 (print), ISSN 1831-9424 (online)<br />
ISBN 978-92-79-21379-3 (print)<br />
ISBN 978-92-79-21380-9 (PDF)<br />
doi:10.2788/73045<br />
Abstract<br />
This report is a collection of contributions presented at the 3rd International Workshop on Validation of Geo-information<br />
Products for Crisis Management - VALgEO 2011 - organized by the JRC on October 18-19, 2011.<br />
The annual VALgEO workshop sets out to act as an integrative agent between the needs of practitioners in<br />
situation centers and in the field and the Research and Development community that it guides, with a special focus on<br />
the quality of information.<br />
The conference proceedings reflect the work presented and discussed during the workshop. The chapters are<br />
organized following the four thematic sessions:<br />
• The role of validation in Information and Communication Technologies (ICT) for crisis management<br />
• Validation of Remote Sensing derived emergency support products<br />
• Usability of Web based disaster management platforms and readability of crisis information<br />
• Towards routine validation and quality control of crisis maps
How to obtain EU publications<br />
Our priced publications are available from EU Bookshop (http://bookshop.europa.eu), where you can place<br />
an order with the sales agent of your choice.<br />
The Publications Office has a worldwide network of sales agents. You can obtain their contact details by<br />
sending a fax to (352) 29 29-42758.
The mission of the JRC is to provide customer-driven scientific and technical support<br />
for the conception, development, implementation and monitoring of EU policies. As a<br />
service of the European Commission, the JRC functions as a reference centre of<br />
science and technology for the Union. Close to the policy-making process, it serves<br />
the common interest of the Member States, while being independent of special<br />
interests, whether private or national.<br />
LB-NA-24948-EN-C