A Visual Dashboard for Linked Data - Semantic Web Journal


S. Mazumdar, D. Petrelli and F. Ciravegna / A Visual Dashboard for Linked Data: An Exploration of User and System Requirements

Once the individual widgets are loaded with their visualizations, the user can further interact with local filters and drill down to individual instances, or to groups of homogeneous instances as in the case of maps and tag clouds. On clicking an instance, a query is sent to the backend, which responds with a JSON object containing all the information regarding that instance. JavaScript modules then parse the object to create an HTML string that is rendered in a popup dialog, as shown in Figure 6.

Fig. 6. Popup dialog providing details on individual instances - here, the details on grass.

This separates the user from the raw data instances. Providing aggregated views and combinations of data instances as visualizations gives users a high-level overview of the data; at the same time, users can drill down to individual instances, which gives them direct access to the underlying data. The benefit of such a mechanism is that users need not be semantic-web or database experts: their interactions alone identify the subset of the data they are interested in.

6. User Needs: A focus group validation

As discussed previously, it is essential for any visualization of linked data to take user needs into account. Following a user-centred design approach [6], a group of potential end users was involved in the formative evaluation of .views. A formative evaluation differs from a summative evaluation in several ways (16): it is done earlier in the design-development cycle, and it aims at exploring the design space (e.g. alternative possibilities) and at gaining an overall sense of the users' reaction to the system under design. As such it uses less formal techniques than a summative evaluation, but provides richer data to support understanding and, eventually, redesign.

Two sets of formative evaluations were carried out over one year. The first used a focus-group technique with hands-on sessions and provided evidence of use (via observations), together with participants' comments and suggestions that were used to redesign the system. The second was a usability test conducted in pairs, in order to provoke a natural discussion between the participants and to reveal what is in their minds better than other techniques, e.g. think-aloud. The data collected in this way, narratives and discussions, were analysed qualitatively, looking for emerging patterns of consensus across groups.

While .views. has been tested on 4 different data sets in the system evaluations, only the grass data set (17) was used for the user evaluation. The set holds ecological and evolutionary data collected by biologists around the world, namely grass species descriptions and their global distribution, and is of high interest to biologists, who were the participants involved in the two formative user evaluations.

The goal of the evaluation was to understand how .views. matched expert users' expectations, as well as to gain feedback on its usability. Over a period of a week, 8 students from the Animal and Plant Sciences department took part in 5 focus groups, each involving 1 to 3 participants (18). Each session lasted between 1.5 and 2 hours; the screen interaction and the comments were recorded for later analysis. Participants ranged from first-year BSc students to MSc graduates. They were first briefed on the project as a whole, and a 15-minute demonstration of the data and system was given. Then it was their turn to get hands-on with the system, following a trace provided as a set of questions. A user satisfaction questionnaire was then used to start the conversation around their experience. Questions on a 5-point Likert scale asked participants to rate different criteria of the system, ranging from ease of use to reliability. The response was overall positive: the system

(16) A summative evaluation occurs later in the development phase, when decisions have already been taken, and aims at ascertaining the status of the system, e.g. by measuring its usability.

(17) The grass data set was kindly provided by the Kew Gardens via the GrassPortal project.

(18) Sessions with only one participant were due to a partner missing the meeting. Although this is not the ideal setting, we believe valuable data were collected in the individual sessions.
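The instance drill-down described at the start of this section (click an instance, query the backend, parse the JSON response, render an HTML popup) can be sketched as below. This is a minimal illustration only: the endpoint path, the JSON field names (`label`, `properties`), and the `showPopup` dialog helper are assumptions for the sketch, not the system's actual API.

```javascript
// Build the popup HTML string from a JSON object describing one instance,
// mirroring the paper's "JS modules parse the object to create an HTML
// string" step. Field names are illustrative assumptions.
function buildPopupHtml(instance) {
  const rows = Object.entries(instance.properties || {})
    .map(([prop, value]) => `<tr><td>${prop}</td><td>${value}</td></tr>`)
    .join("");
  return `<div class="popup"><h3>${instance.label}</h3><table>${rows}</table></div>`;
}

// On click, query the backend for the instance's details and render
// the result in a popup dialog (showPopup is an assumed UI helper).
async function onInstanceClick(instanceUri) {
  const resp = await fetch(`/backend/instance?uri=${encodeURIComponent(instanceUri)}`);
  const instance = await resp.json(); // JSON object with all information on the instance
  showPopup(buildPopupHtml(instance));
}
```

Separating the pure HTML-building step from the network call keeps the rendering logic testable independently of the backend.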
