
October 2007
Volume 10 Number 4

Educational Technology & Society
An International Journal

Aims and Scope

Educational Technology & Society is a quarterly journal published in January, April, July and October. Educational Technology & Society seeks academic articles on the issues affecting the developers of educational systems and the educators who implement and manage such systems. Articles should discuss the perspectives of both communities and their relation to each other:

• Educators aim to use technology to enhance individual learning as well as to achieve widespread education, and they expect the technology to blend with their individual approach to instruction. However, most educators are not fully aware of the benefits that may be obtained by proactively harnessing the available technologies, or of how they might influence further developments through systematic feedback and suggestions.

• Educational system developers and artificial intelligence (AI) researchers are sometimes unaware of the needs and requirements of typical teachers, with the possible exception of those in the computer science domain. In transferring the notion of a 'user' from human-computer interaction studies and assigning it to the 'student', the educator's role as the 'implementer/manager/user' of the technology has been forgotten.

The aim of the journal is to help both communities better understand each other's role in the overall process of education and how they may support each other. Articles should be original, unpublished, and not under consideration for publication elsewhere at the time of submission to Educational Technology & Society, and for three months thereafter.

The scope of the journal is broad. The following topics are considered to be within its scope:

Architectures for Educational Technology Systems, Computer-Mediated Communication, Cooperative/Collaborative Learning and Environments, Cultural Issues in Educational System Development, Didactic/Pedagogical Issues and Teaching/Learning Strategies, Distance Education/Learning, Distance Learning Systems, Distributed Learning Environments, Educational Multimedia, Evaluation, Human-Computer Interface (HCI) Issues, Hypermedia Systems/Applications, Intelligent Learning/Tutoring Environments, Interactive Learning Environments, Learning by Doing, Methodologies for Development of Educational Technology Systems, Multimedia Systems/Applications, Network-Based Learning Environments, Online Education, Simulations for Learning, Web Based Instruction/Training

Editors
Kinshuk, Athabasca University, Canada; Demetrios G Sampson, University of Piraeus & ITI-CERTH, Greece; Ashok Patel, CAL Research & Software Engineering Centre, UK; Reinhard Oppermann, Fraunhofer Institut Angewandte Informationstechnik, Germany.

Editorial Assistant
Barbara Adamski, Athabasca University.

Associate Editors
Nian-Shing Chen, National Sun Yat-sen University, Taiwan; Alexandra I. Cristea, Technical University Eindhoven, The Netherlands; John Eklund, Access Australia Co-operative Multimedia Centre, Australia; Vladimir A Fomichov, K. E. Tsiolkovsky Russian State Tech Univ, Russia; Olga S Fomichova, Studio "Culture, Ecology, and Foreign Languages", Russia; Piet Kommers, University of Twente, The Netherlands; Chul-Hwan Lee, Inchon National University of Education, Korea; Brent Muirhead, University of Phoenix Online, USA; Erkki Sutinen, University of Joensuu, Finland; Vladimir Uskov, Bradley University, USA.

Advisory Board
Ignacio Aedo, Universidad Carlos III de Madrid, Spain; Luis Anido-Rifon, University of Vigo, Spain; Alfred Bork, University of California, Irvine, USA; Rosa Maria Bottino, Consiglio Nazionale delle Ricerche, Italy; Mark Bullen, University of British Columbia, Canada; Tak-Wai Chan, National Central University, Taiwan; Darina Dicheva, Winston-Salem State University, USA; Brian Garner, Deakin University, Australia; Roger Hartley, Leeds University, UK; Harald Haugen, Høgskolen Stord/Haugesund, Norway; J R Isaac, National Institute of Information Technology, India; Mohamed Jemni, University of Tunis, Tunisia; Paul Kirschner, Open University of the Netherlands, The Netherlands; William Klemm, Texas A&M University, USA; Rob Koper, Open University of the Netherlands, The Netherlands; Ruddy Lelouche, Universite Laval, Canada; David McConnell, Lancaster University, UK; Rory McGreal, Athabasca University, Canada; David Merrill, Brigham Young University - Hawaii, USA; Marcelo Milrad, Växjö University, Sweden; Riichiro Mizoguchi, Osaka University, Japan; Hiroaki Ogata, Tokushima University, Japan; Toshio Okamoto, The University of Electro-Communications, Japan; Thomas C. Reeves, The University of Georgia, USA; Gilly Salmon, University of Leicester, United Kingdom; Norbert M. Seel, Albert-Ludwigs-University of Freiburg, Germany; Timothy K. Shih, Tamkang University, Taiwan; Yoshiaki Shindo, Nippon Institute of Technology, Japan; Brian K. Smith, Pennsylvania State University, USA; J. Michael Spector, Florida State University, USA; Chin-Chung Tsai, National Taiwan University of Science and Technology, Taiwan; Stephen J.H. Yang, National Central University, Taiwan.

Assistant Editors
Sheng-Wen Hsieh, Far East University, Taiwan; Taiyu Lin, Massey University, New Zealand; Kathleen Luchini, University of Michigan, USA; Dorota Mularczyk, Independent Researcher & Web Designer; Carmen Padrón Nápoles, Universidad Carlos III de Madrid, Spain; Ali Fawaz Shareef, Massey University, New Zealand; Jarkko Suhonen, University of Joensuu, Finland.

Executive Peer-Reviewers
http://www.ifets.info/

Subscription Prices and Ordering Information
For subscription information, please contact the editors at kinshuk@ieee.org.

Advertisements
Educational Technology & Society accepts advertisements for products and services of direct interest and usefulness to the readers of the journal, that is, those involved in education and educational technology. Contact the editors at kinshuk@ieee.org.

ISSN 1436-4522 (online) and 1176-3647 (print). © International Forum of Educational Technology & Society (IFETS). The authors and the forum jointly retain the copyright of the articles. Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear the full citation on the first page. Copyrights for components of this work owned by others than IFETS must be honoured. Abstracting with credit is permitted. To copy otherwise, to republish, to post on servers, or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from the editors at kinshuk@ieee.org.


Abstracting and Indexing
Educational Technology & Society is abstracted/indexed in Social Science Citation Index, Current Contents/Social & Behavioral Sciences, ISI Alerting Services, Social Scisearch, ACM Guide to Computing Literature, Australian DEST Register of Refereed Journals, Computing Reviews, DBLP, Educational Administration Abstracts, Educational Research Abstracts, Educational Technology Abstracts, Elsevier Bibliographic Databases, ERIC, Inspec, Technical Education & Training Abstracts, and VOCED.

Guidelines for Authors

Submissions are invited in the following categories:
• Peer reviewed publications: full-length articles (4000-7000 words)
• Book reviews
• Software reviews
• Website reviews

All peer-reviewed publications will be refereed in a double-blind review process by at least two international reviewers with expertise in the relevant subject area. Book, software and website reviews will not be refereed, but the editors reserve the right to refuse or edit reviews.

For detailed information on how to format your submissions, please see:
http://www.ifets.info/guide.php

Submission Procedure

Authors submitting articles for a particular special issue should send their submissions directly to the appropriate Guest Editor. Guest Editors will advise the authors regarding the submission procedure for the final version.

All submissions should be in electronic form. The editors will acknowledge receipt of a submission as soon as possible.

The preferred submission formats are Word document and RTF, but the editors will do their best to accommodate other formats. For figures, GIF and JPEG (JPG) are the preferred formats. Authors must supply figures separately in one of these formats, in addition to embedding them in the text.

Please provide the following details with each submission: author(s) full name(s) including title(s), name of corresponding author, job title(s), organisation(s), and full contact details of ALL authors including email address, postal address, telephone and fax numbers.

Submissions should be uploaded at http://www.ifets.info/ets_journal/upload.php. In case of difficulties, they can also be sent via email (Subject: Submission for Educational Technology & Society journal) to kinshuk@ieee.org. In the email, please state clearly that the manuscript is original material that has not been published and is not being considered for publication elsewhere.



Journal of Educational Technology & Society
Volume 10 Number 4 2007

Table of contents

Special issue articles
Theme: Current Approaches to Network-Based Learning in Scandinavia
Guest Editors: Marcelo Milrad and Per Flensburg

Current Approaches to Network-Based Learning in Scandinavia (Guest Editorial) (pp. 1-2)
Marcelo Milrad and Per Flensburg

Design and Use of Collaborative Network Learning Scenarios: The DoCTA Experience (pp. 3-16)
Barbara Wasson

Dynamic Assessment and the “Interactive Examination” (pp. 17-27)
Anders Jönsson, Nikos Mattheos, Gunilla Svingby and Rolf Attström

Participation in an Educational Online Learning Community (pp. 28-38)
Anders D. Olofsson

Framing Work-Integrated e-Learning with Techno-Pedagogical Genres (pp. 39-48)
Lars Svensson and Christian Östlund

Netlearning and Learning through Networks (pp. 49-61)
Mikael Wiberg

Anytime, Anywhere Learning Supported by Smart Phones: Experiences and Results from the MUSIS Project (pp. 62-70)
Marcelo Milrad and Daniel Spikol

Structuring and Regulating Collaborative Learning in Higher Education with Wireless Networks and Mobile Tools (pp. 71-79)
Sanna Järvelä, Piia Näykki, Jari Laru and Tiina Luokkanen

Full length articles

Implementation of an Improved Adaptive Testing Theory (pp. 80-94)
Mansoor Al-A'ali

Measures of Partial Knowledge and Unexpected Responses in Multiple-Choice Tests (pp. 95-109)
Shao-Hua Chang, Pei-Chun Lin and Zih-Chuan Lin

Standardization from Below: Science and Technology Standards and Educational Software (pp. 110-117)
Kenneth R. Fleischmann

Factors Influencing Junior High School Teachers’ Computer-Based Instructional Practices Regarding Their Instructional Evolution Stages (pp. 118-130)
Ying-Shao Hsu, Hsin-Kai Wu and Fu-Kwun Hwang

Analysis of Computer Teachers’ Online Discussion Forum Messages about their Occupational Problems (pp. 131-142)
Deniz Deryakulu and Sinan Olkun

The Learning Computer: Low Bandwidth Tool that Bridges Digital Divide (pp. 143-155)
Russell Johnson, Elizabeth Kemp, Ray Kemp and Peter Blakey

Reliability and Validity of Authentic Assessment in a Web Based Course (pp. 156-173)
Raimundo Olfos and Hildaura Zulantay

Transforming Classroom Teaching & Learning through Technology: Analysis of a Case Study (pp. 174-186)
Rosa Maria Bottino and Elisabetta Robotti

The Relationship of Kolb Learning Styles, Online Learning Behaviors and Learning Outcomes (pp. 187-196)
Hong Lu, Lei Jia, Shu-hong Gong and Bruce Clark

In an Economy for Reusable Learning Objects, Who Pulls the Strings? (pp. 197-208)
Tim Linsey and Christopher Tompsett

I Design; Therefore I Research: Revealing DBR through Personal Narrative (pp. 209-223)
Dave S. Knowlton

Artificial Intelligence Approach to Evaluate Students’ Answerscripts Based on the Similarity Measure between Vague Sets (pp. 224-241)
Hui-Yu Wang and Shyi-Ming Chen

A Web-based Educational Setting Supporting Individualized Learning, Collaborative Learning and Assessment (pp. 242-256)
Agoritsa Gogoulou, Evangelia Gouli, Maria Grigoriadou, Maria Samarakou and Dionisia Chinou

Seven Problems of Online Group Learning (and Their Solutions) (pp. 257-268)
Tim S. Roberts and Joanne M. McInnerney

Using Computers to Individually-generate vs. Collaboratively-generate Concept Maps (pp. 269-280)
So Young Kwon and Lauren Cifuentes

Instructional Design for Best Practice in the Synchronous Cyber Classroom (pp. 281-294)
Megan Hastie, Nian-Shing Chen and Yen-Hung Kuo

Book review(s)

Cases on Global E-learning Practices: Successes and Pitfalls (pp. 295-297)
Reviewer: Richard Malinski

Website review(s)

WordChamp: Learn Language Faster (pp. 298-299)
Reviewer: Ferit Kılıçkaya


Milrad, M., & Flensburg, P. (2007). Current Approaches to Network-Based Learning in Scandinavia (Guest Editorial). Educational Technology & Society, 10 (4), 1-2.

Current Approaches to Network-Based Learning in Scandinavia (Guest Editorial)

Marcelo Milrad
Center for Learning and Knowledge Technologies (CeLeKT), Växjö University, Sweden // marcelo.milrad@msi.vxu.se

Per Flensburg
Department of Economy and Informatics, University West, Sweden // per.flensburg@hv.se

This special issue of Educational Technology & Society aims to give the reader an overview of current Scandinavian research in network-based learning. The articles in this issue are a selection of the best research papers from the Netlearning 2006 conference, held at Blekinge Technical University, Sweden, in May 2006.

There is a long tradition of distance education in Scandinavia. The first form of this type of education was the folk high school (an institution of informal education for adults), which started in 1844 in Rødding, Denmark. The folk high school was created for adults and was most often run as a boarding school. The main ideas behind the folk high school approach were formulated by N.F.S. Grundtvig (1783–1872), a Danish teacher, philosopher and pastor. Grundtvig's pedagogical ideas focused on learners' active participation and experimentation during their studies. More than fifty years later, the first correspondence study institute, Hermods, was created in Malmö, Sweden, in 1898. The main idea of Hermods was to provide educational materials and feedback to students by mail. The Hermods institute still exists, although the traditional letter-based courses have been replaced by Internet-based materials. The concepts of learner involvement and active participation inspired by Grundtvig's ideas are still central components in many of the current network-based learning approaches used in Scandinavia. Thus, each of the papers selected for this special issue can be seen as an extension and evolution of these ideas.

The first paper in this collection, “Design and Use of Collaborative Network Learning Scenarios: The DoCTA Experience”, is by Barbara Wasson. Taking a sociocultural perspective on learning activity that focuses on interpersonal social interaction in collaborative learning settings, Wasson contributes to knowledge about the pedagogical design of network-based learning scenarios, the technological design of the learning environment to support these scenarios, and the organisational design for the management of such environments. She describes two learning scenarios that took place in the DoCTA (Design and use of Collaborative Telelearning Artefacts) project, together with a discussion and reflection based on the lessons learned in these activities. This paper is an outstanding example of how the design and use of technology enhanced learning environments are tightly intertwined with the institutional, pedagogical and technological aspects of a learning environment.

Jönsson, Mattheos, Svingby & Attström, in their paper “Dynamic Assessment and the ’Interactive Examination’”, are interested in assessment and in developing a methodology and technological support to assist in the assessment process. A method for supporting university students in carrying out self-assessment and comparing it with the examiner’s assessment was developed and used in two different evaluation studies. During the examination, students assessed their own competence, and their self-assessment was matched against the judgment of their instructors or against their examination results. Students then received a personal task, to which they had to respond in writing. After submitting their response, the students received a document representing the way an “expert” in the field chose to deal with the same task. They then had to prepare a “comparison document” in which they identified differences between their own answer and the “expert” answer. The results of this study indicate that the Interactive Examination might be a valid methodology for evaluating students’ self-assessment skills, and thus a potential tool for assisting the development of certain metacognitive skills in higher education.
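The examination flow described above is essentially a fixed sequence of steps: self-assessment, a written response, and only then a structured comparison with an expert answer. The following minimal Python sketch illustrates that sequence; the class, function and document names are hypothetical illustrations, not the authors' actual system.

class InteractiveExamination:
    """Toy model of the 'Interactive Examination' sequence described above."""

    def __init__(self, task, expert_answer):
        self.task = task
        self.expert_answer = expert_answer

    def run(self, self_assessment, answer_fn, compare_fn):
        response = answer_fn(self.task)  # the student writes a response first
        # The expert document is revealed only after submission:
        comparison = compare_fn(response, self.expert_answer)
        return {"self_assessment": self_assessment,
                "response": response,
                "comparison": comparison}

exam = InteractiveExamination(
    task="Outline a treatment plan for patient case X",
    expert_answer="The expert's documented approach to case X",
)
result = exam.run(
    self_assessment="I rate my competence on this task as 4 out of 5",
    answer_fn=lambda task: "My written response to: " + task,
    compare_fn=lambda mine, expert: "Differences between my answer and the expert's",
)
print(result["comparison"])  # the student's 'comparison document'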

The context for the results reported by Anders Olofsson in his paper “Participation in an Educational Online Learning Community” is students’ participation in net-based higher education courses. Olofsson claims that the understanding of distance education has moved from being a question of transferring knowledge towards



a question of learning together in an educational online learning community, and he raises the issue of which pedagogical aspects need to be considered in order to support active student participation in these types of learning environments. Using data gathered through semi-structured interviews with 19 trainees on a Swedish network-based teacher training program supported by information and communication technologies, Olofsson shows that what seems to be required from each trainee, in order to be a member of an educational online learning community, is active participation and an inclusive attitude towards other members of the community. Based on these results, the author calls for a pedagogical approach to network-based learning in which the students' being-together is taken as the starting point, and where the pedagogical issues are firmly based on an ethical view of people and education.

Svensson & Östlund, in their paper “Framing Work-Integrated e-Learning with Techno-Pedagogical Genres”, take up another Scandinavian strength, namely design. The paper argues that design concepts should be used to bridge the gap between design theories and distance educational practice. It is also argued that genre theory can be instrumental in framing the characteristics of such techno-pedagogical genres in a way that provides a powerful means of communicating and disseminating new ideas within and across educational communities. The framework for techno-pedagogical genres is presented together with three illustrative examples.

Mikael Wiberg elaborates the concept of learning through networks in his paper “Netlearning and Learning through Networks”. This paper is inspired by recent research into the interaction society and by the Scandinavian tradition in systems development, which has always highlighted the importance of user-driven processes, of users as creative social individuals, and of users as creative contributors to both the form and content of new interaction technologies. Wiberg proposes the concept of netlearning as a general label for the traditional use of computer-based learning environments as educational tools, and then suggests the concept of learning through networks as a challenging concept for addressing user-driven technologies that support social, collaborative and creative learning processes in, via, or outside typical educational settings.

Milrad and Spikol present their efforts in supporting learning using mobile phones in their paper “Anytime, Anywhere Learning Supported by Smart Phones: Experiences and Results from the MUSIS Project”. This paper presents the results of two pilot studies exploring the use of mobile phones in educational settings and the design of mobile services to support learning and collaboration in university courses. The results and discussion regarding the outcome of these trials are presented together with an explanation of how students experienced the mobile services. Issues and problems are discussed with regard to the technology and its use. The authors emphasize the importance of usability, institutional support, and tailored educational content in order to increase the potential for successful implementation of mobile services in higher education.

To conclude, Järvelä, Näykki, Laru & Luokkanen present their efforts to explore the possibilities of scaffolding collaborative learning in higher education with wireless networks and mobile tools in their paper “Structuring and Regulating Collaborative Learning in Higher Education with Wireless Networks and Mobile Tools”. The authors investigate how pedagogical ideas grounded in concepts of collaborative learning, including the socially shared origin of cognition, can be supported using mobile phones. Three design experiments are presented, investigating novel ways to structure and regulate individual and collaborative learning supported by smartphones. Based on the results illustrated in this paper, the authors conclude that there is a need to place students in various situations in which they can engage in effortful interactions in order to build a shared understanding. Wireless networks and mobile tools can provide multiple opportunities for bridging different contents and contexts, as well as virtual and face-to-face learning interactions, in higher education.

Given the current stage of development, implementation and assessment of different network-based learning approaches, the papers in this special issue provide an illustration of current Scandinavian research efforts in this direction. There may be contrasts as well as similarities between the papers, but this was our intention from the start: to open up the discussion about network-based learning and to look at it from different perspectives. The research results presented here provide the chance to look beyond the details of particular studies and to consider broader, more fundamental questions. In our capacity as Guest Editors of this special issue, we hope that readers of ET&S will value the content and results presented in the different contributions featured here.



Wasson, B. (2007). Design and Use of Collaborative Network Learning Scenarios: The DoCTA Experience. Educational Technology & Society, 10 (4), 3-16.

Design and Use of Collaborative Network Learning Scenarios: The DoCTA Experience

Barbara Wasson
InterMedia & Department of Information Science and Media Studies, University of Bergen, Norway //
Tel: +47 55584120 // Fax: +47 55584188 // barbara.wasson@uib.no

ABSTRACT

In the Norwegian DoCTA and DoCTA NSS projects we aimed to bring a theoretical perspective to the design of ICT-mediated learning environments that support the sociocultural aspects of human interaction, and to evaluate their use. By taking a sociocultural perspective on learning activity, focussing on the interpersonal social interaction in collaborative learning settings, we contribute to knowledge about the pedagogical design of network-based learning scenarios, the technological design of the learning environment to support these learning scenarios, and the organisational design for the management of such learning environments. Through various empirical studies we improved our understanding of the pedagogy and technology of networked learners, and increased our understanding of learner activity. This paper reports on the VisArt artefact design scenario and the gen-etikk collaborative knowledge building scenario, focusing on their design and use. Both scenarios comprised co-located and distributed students collaborating over the Internet during a 3-4 week period.

Keywords
Collaborative Network Learning; Design and use; Distributed Collaboration; Technology Enhanced Learning

Introduction

According to the Scandinavian tradition there is a tight relationship between design and use, where one is always designing for future use situations (Bannon & Bødker, 1991; Wasson, 1998). The human-computer interaction researchers Liam Bannon and Susanne Bødker argue for studying artefacts in use: for studying how they mediate use and how they are incorporated into social praxis, as the basis for designing future use situations. Design in this human activity framework is “a process in which we determine and create the conditions which turn an object into an artifact of use. The future use situation is the origin for design, and we design with this in mind … To design with the future use activity in mind also means to start out from the present praxis of the future users” (Bannon & Bødker, 1991, p. 242). This has implications for the design, implementation, use and evaluation of network-based learning environments.

In this paper I show how these implications were manifested in the Norwegian DoCTA (Design and use of Collaborative Telelearning Artefacts) project (http://www.intermedia.uib.no/docta/hoved2.html) through the application of theory to both the pedagogical and technological design, through the composition of the design teams, and through the way continuous evaluation, by both participants and researchers, feeds into redesign.

In DoCTA the focus was on the design and use of technological artefacts to support collaborative learning in distributed settings (Wasson, Guribye, & Mørch, 2000; Wasson & Ludvigsen, 2003). The objectives of DoCTA included:
• taking a sociocultural perspective on learning activity, focusing on the interpersonal social interaction in a networked collaborative learning setting
• contributing to knowledge about the pedagogical design of learning scenarios, the technological design of the learning environment to support these learning scenarios, and the organisational design for the management of such learning environments, including a reflection on teacher and learner roles for collaborative learning in distributed settings, and
• studying and evaluating the social and cultural aspects of collaborative learning in distributed settings

Through these objectives we aimed to improve our understanding of the pedagogy and technology of networked learners, and increase our understanding of learner activity, in order to lead to better design, management and



affordances of networked learning spaces. DoCTA 1 (1998-1999) and DoCTA NSS (2000-2004) were interdisciplinary research projects funded by the Network for IT-Research and Competence in Education (ITU), a measure taken by the Ministry of Education to support ICT and learning in the Norwegian educational system.

In DoCTA 1 (Wasson, Guribye & Mørch, 2000) we focused on the design and use of technological artefacts to support collaborative networked learning aimed at teacher training. The research was not limited to studying these artefacts per se, but included social, cultural, pedagogical and psychological aspects of the entire process in which these artefacts are an integral part. This means that we both provided and studied virtual learning environments that were deployed to students organised in geographically distributed teams. Various scenarios utilising the Internet were used to engage the students in collaborative learning activities. Through participation, the students gained experience not only with collaborative learning, but also with networked learning through the collaborative design of a textual (Scenarios IDEELS and Demeter) or visual artefact (Scenario VisArt). Details of these studies can be found in the ITU DoCTA report (Wasson, Guribye & Mørch, 2000). In this paper, the VisArt scenario is in focus.

In DoCTA NSS (Wasson & Ludvigsen, 2003) we investigated, through a design experiment, how the pedagogical design of an ICT-mediated collaborative learning environment enables students to learn complex concepts and how they can go about discussing these concepts in the broader learning community. Design experiments (Brown, 1992) can be seen as interventions in educational practice, since the researchers, in collaboration with teachers, try to change the way students work (Ludvigsen & Mørch, 2003). In our design experiment we intervened in grade 10 natural science education by introducing an ICT-mediated collaborative learning scenario in gene technology, gen-etikk, where students collaborated in both co-located and distributed settings. The aim was to investigate how the pedagogical design of an ICT-mediated collaborative learning environment enables students to talk science and how this mediates learning. Augmenting our methodological toolbox with Interaction Analysis (Jordan & Henderson, 1995), we added studies (Arnseth, 2004; Arnseth et al., 2002, 2004; Rysjedal & Wasson, 2005) of the interaction between collaborating students and uncovered how the students make their evolving understanding visible to each other (Stahl, 2002) and how the artifacts that they use are an integral part of this process. In this paper I focus on how the underlying theoretical model of learning had implications for the pedagogical and technological design and for our evaluations.

This paper is organized as follows. The next section looks at the design and use of technology enhanced learning environments and presents a conceptual model that illustrates the complexity of this relationship. Then the two scenarios, VisArt and gen-etikk, are presented, with a focus on their design and use. The paper concludes with a general discussion.

Design and Use of Technology Enhanced Learning Environments

From a sociocultural perspective on learning, the notion of activity is the basic concept for design and analysis (Ludvigsen & Mørch, 2003). This view, together with Bannon & Bødker's view of designing for future use situations, implies that when we look at a technology-rich learning environment we need to look at activity from both a design and a use perspective. Figure 1 illustrates this tight relationship. The TEL design addresses institutional (or organisational), pedagogical and technological aspects of the learning environment, and the activity that emerges from implementing the design, that is, the TEL environment in use, can be evaluated from an institutional, pedagogical or technological perspective. This means that when designing a learning scenario, the pedagogical and technological design is important and is tightly entwined in an institutional context. It also implies that understanding the use involves a complex relationship between institutional, pedagogical and technological perspectives. The institutional aspects often set the constraints for a learning scenario and are the aspect on which the designer has the least impact.

The tight interaction between design and use, as illustrated in Figure 1, shows that the design of a technology enhanced learning scenario requires the design of the institutional (or organisational), the pedagogical and the technological aspects. A theoretical perspective can influence the design (as illustrated in the section on the gen-etikk scenario). Implementation of the design can entail aspects such as tailoring or developing technology, intervention in existing practice (e.g., in a classroom), or pedagogical redesign of existing learning activities. Understanding the use of the technology enhanced learning environment can be approached from an institutional, a pedagogical or a technological perspective (examples of this will be shown later in the description of the VisArt scenario). In order to understand the use, the evaluation, influenced by one's theoretical perspective, can take the form of experiments, field trials, ethnographies, etc., or some combination of these. The results of the evaluation inform the (re)design process.

Figure 1. Design and Use of Technology Enhanced Learning Environments

VisArt

In DoCTA 1, the design and use of the VisArt scenario involved a networked learning environment situated in a higher education setting, where the students participated in the collaborative design of a visual artefact for use in teaching. A number of research areas influenced DoCTA 1 (see Wasson, Guribye & Mørch (2000) for details). The two most significant are the conceptual framework offered by sociocultural perspectives (Wertsch, del Río & Alvarez, 1995) and the field of computer support for collaborative learning (CSCL), in particular Salomon's (1992) work on genuine interdependence. We also took inspiration and guidance from CSCW theory, in particular ideas on awareness (Dourish & Bellotti, 1992; Gutwin et al., 1995) and coordination science (Malone & Crowston, 1994). These theoretical perspectives influenced our choice of groupware tool, the design of the VisArt scenario and its collaborative activity, and the design of our evaluation studies.

Design of VisArt

The design of the VisArt activity took place over four months. The design team comprised the DoCTA researchers and the instructors for the participating courses. We developed the students' design activity, specified the technological environment, coordinated dates for the activity, designed the training, help and assistance for the deployment, and designed our evaluations.

Figure 2 identifies the institutional, technological and pedagogical aspects of the VisArt scenario. The institutional aspects encompass students taking different courses at three geographically dispersed Norwegian higher education institutions: the University of Bergen, Nord-Trøndelag College, and Stord/Haugesund College. The learners participating in the scenario had different backgrounds, ranged in age from 23 to 68 years, and many had family responsibilities and full-time jobs. The University of Bergen students were taking a graduate course in pedagogical information science and were learning about computer support for collaborative learning. They were a blend of pedagogical information science graduate students (with a teaching background) and information science graduate students (with a social science background). The students at Stord/Haugesund College were senior undergraduate students training to be teachers, taking a distance learning course on pedagogical information science that included a unit on collaborative learning. The Nord-Trøndelag students were taking an undergraduate introductory course on the uses of technology in learning and could choose to participate in the scenario to learn about networked learning.

Figure 2. Design of VisArt

A number of research areas influenced the pedagogical design. The two most significant are the conceptual framework offered by sociocultural perspectives and the field of computer support for collaborative learning, in particular Salomon's (1992) work on genuine interdependence. We therefore designed a collaborative learning activity in which the students not only participated in teams collaborating to design a learning activity, but also had to reflect on their participation. The students were informed that in the VisArt activity they would be part of a team of three students, one from each institution. Each team thus combined complementary backgrounds, in alignment with Salomon's (1992) idea that collaboration is only successful when there is a genuine interdependence between the collaborators. There were no opportunities for the team members to meet face-to-face. The team was to:
• Organise a collaborative team effort guided by Salomon's definition of genuine interdependence: 1) sharing information, 2) division of labour, 3) joint thinking
• Carry out the design activity in TeamWave Workplace (TW)
  • the room you choose to design should enable students to learn more about a concept, a procedure, a theory, a process, etc.
  • with the aid of help pages, assistance (from a course assistant), and the Help room in TW
• Produce two items:
  • a document of your pedagogical decisions (e.g., who the room is intended for, the content, etc.)
  • a TW room for teaching/learning

In addition to participating in the VisArt activity, the UiB students were to produce an individual report on their experience with VisArt that contained:
• an introduction to CSCL and networked learning
• a description of the design activity, including the tools provided and used
• a presentation of their team's room and the pedagogical decisions made
• a discussion of how the team met Salomon's requirements for genuine interdependence, and whether or not TeamWave Workplace supported the activities resulting from attempts at meeting those requirements
• a discussion of Gutwin et al.'s awareness concept and what it means in conjunction with their distributed collaboration through TW
• their general reaction to collaborative telelearning as they experienced it, including a reflection on the team's work, the process of carrying out the assignment, general comments about the entire assignment, and their reaction to TW



The technological aspects of the VisArt scenario comprised three tools: TeamWave Workplace, the students' own email systems, and a web browser. TeamWave Workplace (TW), a groupware tool developed at the GroupLab at the University of Calgary, was used as the main information and communication technology. One important distinction between TW and other real-time groupware systems is that TW is based on the metaphor of a place (i.e., a room), while most others are based on the metaphor of a meeting. A major strength of TW is that it provided a well-integrated set of varied collaboration tools with a good blend of real-time (synchronous) and asynchronous communication tools that enable anytime team collaboration. Furthermore, TW augments both existing user interaction tools, such as email, newsgroups and conferencing, and existing conventional applications, such as word processors and spreadsheets. Another strength of TW's integrated approach is that spontaneous as well as pre-planned intra-team interactions are supported. From a CSCL theoretical perspective, TW enables collaboration and supports genuine interdependence between team members. Team members are able to share information, meanings, thoughts, conceptions and conclusions through their choice of operational tool objects. From a sociocultural perspective it can be said that these thinking tools (signs) in turn facilitate both team-mate knowledge construction and collective growth. Furthermore, the thinking tools provide a means for thoughts to be examined, changed and elaborated upon by fellow team members.
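To make the place metaphor concrete, the following minimal Python sketch models a persistent room that aggregates synchronous and asynchronous tools. The class and field names are illustrative assumptions, not TeamWave's actual API.

from dataclasses import dataclass, field

@dataclass
class Room:
    """A persistent shared place: its contents survive between logins,
    so the same space serves both synchronous and asynchronous work."""
    name: str
    whiteboard: list = field(default_factory=list)   # shared notes/drawings
    chat: list = field(default_factory=list)         # synchronous talk
    todo: list = field(default_factory=list)         # asynchronous coordination
    doors: dict = field(default_factory=dict)        # links to other Rooms
    online: set = field(default_factory=set)         # awareness list

    def enter(self, user):
        self.online.add(user)

    def leave(self, user):
        self.online.discard(user)

# One student works alone and logs off ...
classroom = Room("Classroom")
classroom.enter("student_A")
classroom.whiteboard.append("Draft layout for the polar-bear learning room")
classroom.leave("student_A")

# ... and a teammate later finds the shared state still in place, unlike a
# meeting-based system, where the session state would be gone.
classroom.enter("student_B")
print(classroom.whiteboard)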

TeamWave had been used for the IDEELS scenario in DoCTA 1, so we had already determined how to use the tool with teams, and we attempted to design a new tool to include in TW. This proved impossible for reasons beyond our control, so we changed the design activity to use the existing tools, which highlights the relationship between the technological and pedagogical aspects. Figure 3 shows the Classroom room that was the starting room for the students. The tools Calendar, To Do List, Doors (to other rooms), Address Book, URL links, Chat, Awareness Lists and a File (Visart oppgaver) can be seen.

Implementation of VisArt

The VisArt activity was deployed for one month, beginning on February 25th and ending on March 26th. The students were given 5 days to download TW and test their accounts and team email addresses. A training phase (in TW) lasted for ten days; the main goal of this activity was for the students to get to know TW, to get to know the other members of their team, and to gain some ideas on how to work and collaborate in TW. A three-week design activity followed, in which the students collaborated to create a learning room in TW. There were 11 teams, and the topics they chose to design learning rooms for included: endangered species, gothic art, publishing on the internet, triangles, the big bang, travelling in Denmark, renewable energy sources, between the world wars, polar bears, and astronomy. Figure 4 shows the two rooms that were designed for learning about polar bears.

Evaluation of use of VisArt

The evaluation of the VisArt scenario was carried out on several levels and from several perspectives. The theoretical, or conceptual, approach to the evaluation of VisArt was rooted in a sociocultural perspective that emphasises an understanding of language, culture and other aspects of the social setting, and focuses on the use situation. Ethnographic studies, favouring naturalistic and qualitative research methods, were employed. In addition to the students' own theoretical reflections (which are important in a sociocultural perspective), VisArt was evaluated as part of eight Master's theses, including two activity theory studies of how students (Andreassen, 2000) and instructors and facilitators (Wake, 2002) organise their work. These ethnographically flavoured studies were augmented with a usability study of TW (Rysjedal, 2000); a qualitative study of the efficiency of TW using the data logs generated by TW (Meistad, 2000; Meistad & Wasson, 2000); and a formative evaluation of how to support collaborative design activities, how TW supports coordination, and how to design training and assistance in a collaborative telelearning setting (Underhaug, 2001). Some of the results of these studies are presented below.

In general, the students were very satisfied with TW. As one student writes in his reflection on TW:

“An important side of TeamWave is that one can work both asynchronously and synchronously. … For example one can use the shared whiteboard synchronously when the users are online at the same time and write on it together, but it is also possible to use the whiteboard asynchronously when the different users log on at different times and work individually on tasks on the whiteboard. … That it supports both forms of work makes the program package flexible and accessible at all times.”

Figure 3. Screenshot of the Classroom in TeamWave Workplace

Figure 4. Screenshots of rooms for learning about Polar Bears

Several students wrote that successful use of TW was tied not just to its ease of use, but to its being used in an activity that meets Salomon's requirements. As one student succinctly put it:

“I think that a requirement for successful use of it [TW] is that the participants are motivated and have mindful engagement and that the tool [TW] is used for something meaningful.”

The majority of the groups had a heterogeneous makeup, with the group members having different backgrounds. As one group said, this meant that they had different preconceptions and different experiences with collaboration. They said that,

“according to Salomon it is exactly these differences that make collaboration work … to use each other's competence and pull something useful out of these competencies through collaboration.”

From a sociocultural research perspective the students' own reflections are a very important part of the evaluation, and, as illustrated in the previous excerpts, they demonstrated an ability to reflect theoretically on practice. The research reports they submitted in the course contained comments and reflections that were both thoughtful and insightful, and these will feed into improvements in future versions of the scenario.

In an ethnographically flavoured study of how students organized their work in VisArt, Andreassen (2001) used different qualitative data-gathering techniques and a variety of data sources, ranging from electronically collected TW chat logs to transcribed informal interviews. The data analysis dealt with aspects such as coordination, communication mode, division of labour, and feedback. During the entire scenario the students met regularly in TW and coordinated their actions by using TW or email. Sometimes both TW and email were used as a means of coordination, producing a form of double communication. This form of communication disappeared as the scenario went on, perhaps as a result of the establishment of regular meetings and patterns for collaboration. The task decision marked a noticeable line of demarcation in the communication mode: before the task was decided, the rate of synchronous meetings and communication was higher than after the decision had been made. The asynchronous nature of the post-decision work may be rooted both in the diminished need for synchronous meetings and in the fact that each student was assigned her/his own area of responsibility, contributing to a cooperative, rather than a collaborative, form of work. In spite of a mutual agreement to provide feedback on each other's work, this hardly ever occurred. Time pressure and a feeling that one had to concentrate on one's own work are probably the main reasons for the lack of feedback.

Collaboration patterns define sequences of interaction among members of a team (such as students). In the VisArt scenario we have searched for collaboration patterns by analysing interaction data from data logs, videotapes, observations, and interviews, both between students and between students and facilitators (instructors and assistants). We have identified several instances that we believe can be characterised as collaboration patterns (Wasson & Mørch, 2000):

Adaptation: This pattern describes how students gradually adapted to each other's practices when working together to solve a common problem.

Coordinated desynchronisation: This pattern describes how coordination of activities between team members changes after they have identified a common goal.

Constructive commenting: This pattern describes commenting behaviour. Comments that are neutral (e.g., just to the point) are perceived to be less useful than comments that are also constructive (e.g., suggesting what to do next) or supportive (e.g., encouraging).

Informal language: This pattern describes how interaction often starts in a formalistic style and gradually becomes more informal as team members get to know each other. Frequent use of slang words or dialects local to the community working together is common in instances of this pattern.

Collaboration patterns are useful for the (re)design of the learning scenario. For example, in the initial phases of a collaboration effort, a sort of double communication might occur: more than one tool is used to inform other team members about a changed meeting time. This type of adaptation pattern was observed in VisArt and led to inefficiencies. This sort of communication may be reduced or disappear with improved technical understanding or changed work coordination over time, but might be avoided with sufficient training and examples of how different tools can be used for coordination purposes.
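To make the double-communication pattern concrete, the following minimal sketch shows how such instances could be flagged in merged coordination logs. It is an illustration only, not the actual analysis tooling used in DoCTA; the event format and all values are invented for the example.

    from datetime import datetime, timedelta

    # Hypothetical coordination events merged from TW server logs and collected
    # email: (timestamp, tool, group, normalised message topic).
    events = [
        (datetime(1999, 11, 2, 10, 15), "tw_chat", "group3", "meeting moved to thursday"),
        (datetime(1999, 11, 2, 10, 40), "email", "group3", "meeting moved to thursday"),
        (datetime(1999, 11, 3, 9, 5), "email", "group3", "task division proposal"),
    ]

    def double_communication(events, window=timedelta(hours=2)):
        """Flag pairs where the same coordination topic was sent through two
        different tools within a short time window."""
        hits = []
        for i, (t1, tool1, g1, msg1) in enumerate(events):
            for t2, tool2, g2, msg2 in events[i + 1:]:
                if g1 == g2 and msg1 == msg2 and tool1 != tool2 and abs(t2 - t1) <= window:
                    hits.append((g1, msg1, tool1, tool2))
        return hits

    print(double_communication(events))
    # [('group3', 'meeting moved to thursday', 'tw_chat', 'email')]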

Our general findings in DoCTA 1 include:
• the ease of use of the many collaborative tools tells us that the technological problems are no longer the prime issue in CSCL design
• the main issues are related to the broader institutional contexts in which the tools are designed and used
• coordination issues remain a challenge for collaboration with distributed learners
• the numerous evaluation studies that we have carried out not only contribute to our understanding of the social and cultural aspects of collaborative networked learning environments but, equally important, they have also addressed methodological issues related to studying online environments
• the reflections of the students played an important part in our understanding of the learning activity and feed into future designs

Gen-etikk

In DoCTA NSS we intervened in grade 10 natural science education by introducing an ICT-mediated collaborative learning scenario in gene technology, gen-etikk. In gen-etikk, a cross-curricular scenario spanning natural science, religion & ethics (KRL) and Norwegian was developed collaboratively by the researchers and teachers, with learning goals related to the biological, ethical and societal aspects of gene technology. The pedagogical approach was progressive inquiry learning (Muukkonen, Hakkarainen, & Lakkala, 1999), and a web-based groupware system, FLE3, that supports this model was used as the main learning technology. Figure 5 illustrates that the progressive inquiry learning theoretical model was operationalised in both the pedagogical and technological design of gen-etikk. Students in two classes collaborated in both co-located (within groups in a class) and distributed (between groups in two different Norwegian cities) settings to share and discuss ideas and arguments around scientific and ethical questions related to gene technology. In this section we elaborate on the design rationale behind the scenario by detailing the pedagogical approach and the didactic design, and then introduce the technological environment and describe the deployment of the scenario.

Design of Gen-etikk

A key aspect of this kind of design experiment is both to adapt to the school's everyday practice and, at the same time, to challenge and extend that practice. Progressive inquiry learning is an approach to collaborative knowledge building in which students engage in a research-like process to gain understanding of a knowledge domain by generating their own problems, proposing tentative hypotheses and searching collaboratively for deepening knowledge. As a starting point for progressive inquiry learning, a context and a goal for a study project need to be established in order for the students to understand why the topic is worth investigating. Then the instructor or the students present the research problems/questions that define the direction of the inquiry. As the inquiry cycle proceeds, more refined questions emerge. Focusing on the research problems, the students construct their working theories, hypotheses, and interpretations based on their background knowledge and their research. The students then assess the strengths and weaknesses of different explanations and identify contradictions and gaps in knowledge. To refine the explanation, fill in the knowledge gaps and provide a deeper explanation, the students have to do research and acquire new information on the related topics, which may result in new working theories. In so doing, the students move step by step toward building up knowledge to answer the initial question. The role of the teachers is to facilitate: they can stimulate self-regulation by the students by giving comments and advice, both within the classroom and in the online environment.

Figure 5. Operationalisation of Progressive Inquiry Learning


The pedagogical design was inspired by the progressive inquiry approach to knowledge building. Animated by a trigger video (we edited a Norwegian National Broadcasting Corporation (NRK) documentary on gene technology into four 5-minute segments, each presenting a different theme within gene technology) to set the context, and supported by the structure and resources in the learning environment, the students would themselves identify problems on which to work, decide where they wanted to search for information, participate in inquiry learning cycles and create newspaper articles. We developed a set of activities with instructions that included assignments related to the inquiry learning cycle (e.g., generate scientific and ethical questions about gene technology; engage in inquiry about selected questions; compose scientific explanations) and products expressing what they had learned (scientific and ethical questions, science questions for use on a test, individual and collaborative texts presenting opinions about an argument or a discussion of a scientific or ethical question, to be published in the national school newspaper).

For the technological design, support for gen-etikk was provided through a web portal designed to give the students a shared online space (see Figure 6). From this portal the students had access to various learning resources, collaboration tools, and a tool for Internet publishing called Skoleavisa (an online newspaper generator available to all schools in Norway). Among the learning resources they could find an online textbook (previously written by two of the DoCTA researchers), a Norwegian encyclopaedia, animations, a special search engine, Atekst (which searches Norwegian newspaper archives), and some selected links to external resources on the Internet.

The main tool for collaboration was Future Learning Environment 3, FLE3 (http://fle3.uiah.fi). FLE3 is designed to support collaborative knowledge building and progressive inquiry learning (Muukkonen et al., 1999) and comprises several modules. The Web Top provides each group with a place where they can store and share digital material with other groups. An automatically generated message telling what has happened since the last time they visited FLE3 also appears here. The Knowledge Building module is considered the scaffolding module for progressive inquiry and can be seen as a semi-structured communication interface (Dillenbourg, 2002). It is a shared database where the students can publish problem statements or research questions, and engage in knowledge building dialogues around these problems by posting their messages to the common workspace according to predefined categories that structure the dialogue. These categories are defined to reflect the different phases of the progressive inquiry process, thus operationalising the theory in the tool. They included: Question, Our explanation, Scientific explanation, Summary, Comment and Process Comment. We added a digital assistant to FLE3 (Chen & Wasson, 2003) to support both the students and teachers in monitoring what happened inside FLE3 (Dragsnes, Chen, & Baggetun, 2002). In addition to FLE3, a combined chat and mind mapping tool (Dragsnes, 2003) was developed and made available to the students to add support for synchronous communication.
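As a rough illustration of how predefined categories can structure a knowledge building dialogue, the sketch below models category-typed notes. It is not FLE3's actual data model; only the six category names are taken from the description above, and the example thread is invented.

    from dataclasses import dataclass, field
    from typing import List

    # The categories that operationalise progressive inquiry in the
    # Knowledge Building module (names as listed in the text above).
    CATEGORIES = {"Question", "Our explanation", "Scientific explanation",
                  "Summary", "Comment", "Process Comment"}

    @dataclass
    class Note:
        author: str
        category: str                      # must be one of CATEGORIES
        text: str
        replies: List["Note"] = field(default_factory=list)

        def __post_init__(self):
            if self.category not in CATEGORIES:
                raise ValueError(f"unknown category: {self.category}")

    # A thread starts from a problem statement and grows through typed replies.
    thread = Note("group1", "Question", "Can gene therapy cure inherited diseases?")
    thread.replies.append(Note("group1", "Our explanation",
                               "We think it can repair single-gene defects."))
    thread.replies.append(Note("group2", "Scientific explanation",
                               "Trials deliver working genes with viral vectors."))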

Implementation of Gen-etikk

Gen-etikk took place over 31 hours during the last three weeks of September 2002, and involved two grade 10 classes, one from Bergen (24 students) and one from Oslo (27 students). Five of the 31 hours were concurrent (i.e., both classes worked on gen-etikk at the same time), so synchronous communication was possible. The scenario began with each class viewing the trigger video on gene technology. The students then brainstormed questions related to gene technology. This brainstorming session generated a long list of questions from the two classes, and the teachers used these questions to produce a single list of 12 scientific questions and 12 ethical questions about genetics. This list of questions was published on the web portal.

The two classes were then divided into local groups of 3 or 4 members, and each local group in Bergen was connected to a local group in Oslo to form a composed group. The scenario had two phases. In the first phase the composed groups discussed the list of questions and decided on three scientific questions to work on. These questions were posted as problem statements in FLE3 before the groups started to search for and discuss information around their questions. Whenever they found something relevant, they could post it as a note in the Knowledge Building module in FLE3. After having explored the questions for about a week, the students were to use the information they had gathered to write at least two different articles about genetics. These articles were published in Skoleavisa, the online newspaper generator.

In the second phase of the scenario the focus turned to the ethical aspects of gene technology. The list of questions was revisited, and this time the composed groups were to decide on three ethical questions on which they wanted to work. The same inquiry process was repeated in this phase, with about one week of inquiry before publishing articles in Skoleavisa. It was believed that focusing on the scientific aspects before turning to the ethical aspects would strengthen the students' ability to argue for their ethical viewpoints. By the end of the project every group had contributed, and 60 articles were published in the online newspaper.

Evaluation of the use of gen-etikk

Figure 6. The Web Portal for gen-etikk

A number of empirical studies have been carried out on the DoCTA NSS data. The design of a learning environment needs to account for institutional, technological and pedagogical aspects at different levels. Three of the evaluations that take an institutional perspective on learning are summarised in this section. An institutional perspective takes student actions and activities as a starting point, not the goals in the curriculum or some scientific template. Diversity, multiple voices, the actors' different goals and intentions, and the institutional history are some of the aspects that constitute a specific practice. These aspects create a basis for understanding how students act in specific situations.

Rysjedal and Baggetun (2003) discuss issues related to infrastructure and the design of learning environments. The established infrastructure in an institution creates both constraints and affordances for how new technology can be integrated. In the design of a learning environment that is to work across institutional boundaries, it is important to take the local infrastructures into consideration. Rysjedal and Baggetun take a broad perspective on infrastructure, and thereby build an important bridge between technological and social perspectives on how new technology can be introduced into social systems. They discuss how the design of a learning environment needs to take technological, organisational and pedagogical aspects into consideration.

Arnseth, Ludvigsen, Guribye and Wasson (2002) and Arnseth (2004) describe how rhetorical aspects of human talk and discourse become important if we want to understand how students co-construct knowledge in schools. Their empirical analyses show very clearly that students make specific interpretations of the task to which they are exposed and of how the institution actually works. The authors argue that knowledge building as a metaphor, as used in some of the literature, seems to be too rationalistic. In a similar vein, Ludvigsen and Mørch (2003) criticize the progressive inquiry model proposed by Muukkonen et al. (1999) for its too distinct focus on the conceptual artefacts developed by the students, and for privileging the progressive inquiry model as the analytic starting point.

A selection of the lessons learned from DoCTA NSS, as reported in Wasson & Ludvigsen (2003), includes:
• Our major finding is that too few students use higher order skills as part of their learning activities. This confirms the findings reported in many international studies. Students and teachers have a tendency to place more importance on solving the task than on the domain concepts to be learned. Students need to employ higher order skills when dealing with knowledge building in complex and conceptually-oriented environments in order to go beyond fact finding.
• The teacher is extremely important in supporting, stimulating and motivating the students to integrate previous knowledge with the new knowledge they are learning through the gen-etikk tasks.
• We find the same tendency as shown in the PISA study (Lie et al.): students do not have good enough learning strategies. When meeting new ICT-supported learning situations, students need time and training in their integrated use before their learning strategies become effective.
• Prompting categories triggered a more critical and analytic stance in some of the students towards the learning resources and how they reason about ethical issues in the domain of gene technology.
• The students who engage with the task at a deep level show evidence of the skills needed to critically examine the relationship between information and the argumentation that is part of the problem solving process.
• The design, which includes small group collaboration, creates increased motivation and curiosity.
• When schools work together to create a distributed environment where students solve tasks together, the time schedules of the two schools need to be adjusted to each other – or the schools need to have flexible time schedules. Practical arrangements create tensions and problems with the coordination between the schools.
• Students have little problem with the practical use of ICT tools as long as the tools and network function as they should.
• Several types of digital resources were created to support the development of the ability to integrate information from different resources as part of knowledge construction. This is one important aspect of designing for the cultivation of higher order skills.

Discussion and Conclusions

This paper has attempted to illustrate the relationship between the design and use of technology enhanced learning environments, and to show how this relationship is tightly intertwined with the institutional, pedagogical and technological aspects of a learning environment. Furthermore, the view of design and use is heavily influenced by a sociocultural perspective on learning that views activity as central to both design and analysis. The DoCTA 1 and DoCTA NSS scenarios have been described in a way that highlights the pedagogical, technological and institutional design aspects and how the evaluation studies have looked at use, or activity, as it emerges. Several general observations have been presented for each of the projects.

Having the opportunity to work with networked learning over a number of years has had its advantages. Early on we learned that introducing a new technology to our students was rarely a problem, and this was true both for 15 year olds and for university students. What we encountered in both groups, however, was their desire to use, in our scenarios, the technological tools that they already used in their daily lives, and we tried to accommodate this when possible. For example, in VisArt we incorporated their own email into the scenario and adjusted our data collection to collect this email as well. In gen-etikk we found that the teenagers wanted to use IRC for contact between the distributed groups, so we incorporated this as well. In later projects we found that they used their mobile phones for coordination and collaboration.

We also learned a lot about the evaluation of networked learning, the main lesson being that there is no recipe for how to evaluate and analyse networked learning. In DoCTA 1 we learned that there are many perspectives from which a technology enhanced learning scenario can be viewed, and that a single view will not tell the whole story. For example, a technology that is incorporated into a pedagogical design may prove to be the wrong tool, or may have problems due to the institution's infrastructure. It is not as simple as saying "the technology did not support learning": perhaps the usability of the tool is poor, or it did not fit the task for which it was chosen. Thus human computer interaction usability studies have a place in the evaluation repertoire, but they are only one part of it. In DoCTA NSS we paid attention to the unit of analysis. For example, we tried to build on state-of-the-art knowledge in order to design gen-etikk. The pedagogical and technological designs form conditions for how and what students could learn. The designed environment, however, is only one important aspect of what we need to understand. As Wasson & Ludvigsen (2003) have argued, learning and knowledge building are always part of an institutional arrangement, and we need to take this as a starting point. The sociocultural perspective gives us possibilities to understand how higher order skills can be developed. By having insight into students' learning trajectories, the kind of talk in which they are engaged, and how the division of labour is distributed between the students and teachers, we begin to understand how the cultivation of higher order skills becomes part of institutionalized activities; otherwise it will be serendipitous. Only by looking at the chosen technology in relation to these other aspects can we say anything about how it supports learning. Thus we can argue that institutional, technological and pedagogical aspects need to be treated as a unit of analysis in the design process. Furthermore, the theoretical underpinnings, technological artefacts and the evaluation of use need to mutually inform each other. As illustrated in both scenarios, the designer's theoretical perspective on learning influences the design of the pedagogy and can also, as in VisArt, be embedded in the technological tools. The theoretical perspective also has implications for the methodology and methods of analysis.

Looking to the future, the challenges for networked learning are many, but many of them are exciting. Given increased mobility in work situations and in society in general, mobile and wireless technologies are becoming vital artefacts in all aspects of our lives. New technological advancements make collaboration across devices (e.g., mobile phones, PCs, PDAs, PocketPCs) and networks (e.g., GSM, GPRS, 3G, LANs, WLANs, WMANs) possible, and open up new arenas for learning. For example, third generation (3G) handsets allow users of 3G services to view and record video content and television in addition to WAP functionalities such as reading e-mail or surfing the Web. Furthermore, software is becoming available or accessible from a variety of devices (e.g., PDAs, Pocket PCs, mobile phones). These new technological advancements place high demands on the digital and mobile literacies of learners. At the same time, in-depth and structured knowledge of how wireless and mobile technologies impact human actions in learning is limited, which raises a plethora of specific questions about how these technologies change collaborating institutions and their pedagogy. Recent studies are beginning to make headway in understanding these new conditions for learning (the collection in Arnedillo-Sánchez, Sharples & Vavoula, 2007; Baggetun & Wasson, 2006; Kukulska-Hulme, 2007; Milrad & Jackson, 2007; Thackara, 2005).

Finally, I believe that we are dealing with a new type of student. Technology-savvy students, used to configuring their own virtual worlds, organise their own interactions with peers, instructors and the world beyond. Games such as World of Warcraft and social tools such as Flickr, del.icio.us, YouTube, Facebook, blogging, Wirehog and Groove are in the everyday repertoire of our current and future students, and I ask: how are we to design learning environments for youth who have the world at their fingertips, and how are we to capture their attention so that they learn something that we think is important?

Acknowledgments

DoCTA was funded by the Norwegian Ministry of Education under its Information Technology in Education (ITU) programme. I thank the teachers/instructors and students from the participating schools/colleges. Finally, I commend all the project participants for creating such an exciting and stimulating project environment.

References

Andreassen, E. F. (2000). Evaluating how students organise their work in a collaborative telelearning scenario: An Activity Theoretical Perspective, Masters dissertation, Department of Information Science, University of Bergen, Norway.

Arnseth, H. C. (2004). Discourse and artefacts in learning to argue: Analysing the practical management of computer supported collaborative learning, Ph.D. dissertation, University of Oslo, Norway.

Arnseth, H. C., Ludvigsen, S., Guribye, F., & Wasson, B. (2002). From Categories of Knowledge Building to Trajectories of Participation: Analysing the Social and Rhetorical Organisation of Collaborative Learning. Paper presented at ISCRAT 2002, June 18-22, 2002, Amsterdam.

Arnseth, H. C., Ludvigsen, S., Mørch, A., & Wasson, B. (2004). Managing Intersubjectivity in Distributed Collaboration. Psychnology, 2 (2), 189-204.

Bannon, L. J., & Bødker, S. (1991). Beyond the interface: Encountering artifacts in use. In J. M. Carroll (Ed.), Designing interaction: Psychology at the human-computer interface, Cambridge, UK: Cambridge University Press, 227-253.

Brown, A. L. (1992). Design Experiments: Theoretical and methodological challenges in creating complex interventions in classroom settings. The Journal of the Learning Sciences, 2 (2), 141-178.

Chen, W., & Wasson, B. (2003). Coordinating Collaborative Knowledge Building. International Journal of Computers and Applications, 25 (1), 1-10.

Dourish, P., & Bellotti, V. (1992). Awareness and Coordination in Shared Workspaces. Proceedings of the Conference on Computer-Supported Cooperative Work, 107-114, retrieved October 15, 2007, from http://www.ics.uci.edu/~jpd/publications/1992/cscw92-awareness.pdf.

Dragsnes, S. (2003). Development of a synchronous, distributed and agent-supported framework: Exemplified by a mind map application, Masters dissertation, Department of Information Science, University of Bergen, Norway.

Dragsnes, S., Chen, W., & Baggetun, R. (2002). A Design Approach for Agents in Distributed Work and Learning Environments. Paper presented at the International Conference on Computers in Education, December 3-6, 2002, Auckland, New Zealand.

Guribye, F. (1999). Evaluating a collaborative telelearning scenario: A sociocultural perspective, Masters dissertation, Department of Information Science, University of Bergen, Norway.

Guribye, F., & Wasson, B. (2002). The ethnography of distributed collaborative learning. In G. Stahl (Ed.), Proceedings of CSCL 2002, Boulder, CO, USA, 637-638, retrieved October 15, 2007, from http://www.cis.drexel.edu/faculty/gerry/cscl/cscl2002proceedings.pdf.

Gutwin, C., Stark, G., & Greenberg, S. (1995). Support for Workspace Awareness in Educational Groupware. Proceedings of the ACM Conference on Computer Supported Collaborative Learning, Hillsdale, NJ: Lawrence Erlbaum, 147-156.

Jordan, B., & Henderson, A. (1995). Interaction Analysis: Foundations and Practice. Journal of the Learning Sciences, 4 (1), 39-103.

Kukulska-Hulme, A. (2007). Mobile Usability in Educational Contexts: What have we learnt? The International Review of Research in Open and Distance Learning, 8 (2), 1-16.

Ludvigsen, S., & Mørch, A. (2003). Categorisation in knowledge building. In B. Wasson, S. Ludvigsen & U. Hoppe (Eds.), Proceedings of the 6th International Conference on Computer Support for Collaborative Learning, Dordrecht: Kluwer, 67-76.

Malone, T., & Crowston, K. (1994). The Interdisciplinary Study of Coordination. ACM Computing Surveys, 26 (1), 87-119.

Meistad, Ø. (2000). Collaborative telelearning: Using log-files to identify collaboration patterns, Masters dissertation, Department of Information Science, University of Bergen, Norway.

Meistad, Ø., & Wasson, B. (2000). Using server-logs to support collaborative telelearning research. In J. Bourdeau & R. Heller (Eds.), Proceedings of Educational Multimedia & Educational Telecom '2000, Charlottesville, VA: AACE, 679-683.

Milrad, M., & Jackson, M. (2007). Designing and Implementing Educational Mobile Services in University Classrooms Using Smart Phones and Cellular Networks. International Journal of Engineering Education, 23 (4).

Muukkonen, H., Hakkarainen, K., & Lakkala, M. (1999). Collaborative Technology for Facilitating Progressive Inquiry: Future Learning Environment Tools. Paper presented at the International Conference on Computer Support for Collaborative Learning, 12-15 December 1999, Palo Alto, CA, USA.

Rysjedal, K. H. (2000). TeamWave Workplace in Use: A usability study, Masters dissertation, Department of Information Science, University of Bergen, Norway.

Rysjedal, K., & Wasson, B. (2005). Local and distributed interaction in a collaborative knowledge building scenario. Paper presented at the International Conference on Computer Support for Collaborative Learning, 30 May - 4 June 2005, Taipei, Taiwan.

Salomon, G. (1992). What does the design of effective CSCL require and how do we study its effects? SIGCUE Outlook, 21 (3), 62-68.

Stahl, G. (2002). The complexity of a collaborative interaction. Paper presented at the International Conference of the Learning Sciences (ICLS 2002), October 23-26, 2002, Seattle, WA, USA.

Thackara, J. (2005). In the Bubble: Designing in a complex world, Cambridge, MA: The MIT Press.

Underhaug, H. (2001). Facilitating Training and Assistance in a Collaborative Telelearning Scenario, Masters dissertation, Department of Information Science, University of Bergen, Norway.

Wake, J. (2002). How instructors organise their work in a collaborative telelearning scenario, Masters dissertation, Department of Information Science, University of Bergen, Norway.

Wasson, B. (1998). Identifying Coordination Agents for Collaborative Telelearning. International Journal of Artificial Intelligence in Education, 9, 275-299.

Wasson, B., Guribye, F., & Mørch, A. (2000). Project DoCTA: Design and use of Collaborative Telelearning Artefacts, Oslo: Unipub forlag.

Wasson, B., & Ludvigsen, S. (2003). Designing for knowledge building, Oslo: Unipub forlag.

Wasson, B., & Mørch, A. I. (2000). Identifying collaboration patterns in collaborative telelearning scenarios. Journal of Educational Technology & Society, 3 (3), 237-248.

Wertsch, J. V., del Río, P., & Alvarez, A. (1995). Sociocultural studies: History, action and mediation. In J. V. Wertsch, P. del Río & A. Alvarez (Eds.), Sociocultural Studies of Mind, Cambridge: Cambridge University Press, 1-34.


Jönsson, A., Mattheos, N., Svingby, G., & Attström, R. (2007). Dynamic Assessment and the "Interactive Examination". Educational Technology & Society, 10 (4), 17-27.

Dynamic Assessment and the "Interactive Examination"

Anders Jönsson, Nikos Mattheos, Gunilla Svingby and Rolf Attström
Malmö University, SE-205 06 Malmö, Sweden
anders.jonsson@mah.se // nikolaos.mattheos@mah.se // gunilla.svingby@mah.se // rolf.attstrom@mah.se

ABSTRACT

Assessing one's own actions and defining individual learning needs is fundamental for professional development. The development of self-assessment skills requires practice and feedback during the course of studies. The "Interactive Examination" is a methodology aiming to assist students in developing their self-assessment skills. The present study describes the methodology and presents the results from a multicentre evaluation study at the Faculty of Odontology (OD) and the School of Teacher Education (LUT) at Malmö University, Sweden. During the examination, students assessed their own competence, and their self-assessments were matched against the judgement of their instructors (OD) or their examination results (LUT). Students then received a personal task, to which they had to respond in written text. After submitting their response, the students received a document representing the way an "expert" in the field chose to deal with the same task. They then had to prepare a "comparison document", in which they identified differences between their own and the "expert" answer. Results showed that students in both institutions appreciated the examination. There was a somewhat different pattern of self-assessment in the two centres, and the qualitative analysis of students' comparison documents also revealed some interesting institutional differences.

Keywords

Assessment, Self-assessment, Oral health education, Teacher education

Introduction

One of the major challenges for profession-directed higher education today is not only to equip students with knowledge and skills, but also to help them develop into independent learners, able to cope with an ever-increasing amount of information and learning needs. The basis of this process lies in the individual's ability to continuously assess his or her actions and define individual learning needs accordingly. Research has shown that the ability to assess ourselves, especially within professional settings, is not a quality we are born with, but rather a metacognitive skill that can be learned, improved and perfected (Brown et al., 1997). It has also been shown that not all professionals have developed this ability to a satisfactory degree, and they might consequently be unable to identify shortcomings in their own professional competence (Hays et al., 2002; Ngan & Amini, 1998; Reisine, 1996). But if professional education is supposed to foster reflective, self-assessing practitioners, the students must be given the opportunity to practice these skills (Yeh, 2004), as well as be assessed on them. Assessing the students' self-assessment skills is of central importance, since assessment has a very strong influence on students' learning (Brown et al., 1997).

Examination schemes in profession-directed education traditionally provide educators with a thorough insight into students' profession-related skills and competences, but little is known about students' ability to self-assess their proficiency, to define their own learning objectives, and to independently direct their competence development during their professional life. A structured assessment methodology focused on such metacognitive skills, alongside traditionally examined skills and knowledge, would therefore be a very important tool in higher education.

De la Harpe and Radloff (2000) give several examples of both qualitative methods, such as learning logs and interviews, and quantitative methods, mainly Likert-scale based, for assessing metacognitive skills. Most of these methods are, however, not integrated into the learning activities in an authentic manner, and the authors also point to the fact that "Students may be reluctant to engage in activities that focus on learning rather than on course content and may not devote the time and effort needed to complete assessment tasks effectively" (p. 177).

To avoid this pitfall, the "Interactive Examination", a structured assessment methodology developed and evaluated at the Faculty of Odontology at Malmö University, Sweden, has included the assessment of self-assessment skills in a regular examination. The methodology aims to evaluate students' content-specific skills and competences in parallel with their self-assessment skills, while expanding and supplementing the learning process. The self-assessment skills are assessed by both quantitative and qualitative means. The methodology also makes use of modern information and communication technology in order to facilitate training and feedback without necessarily increasing the workload of the personnel (Mattheos et al., 2004b).

The present study aims to describe the model of the Interactive Examination and to present the results from a multicentre evaluation study with undergraduate students at the Faculty of Odontology (OD) and the School of Teacher Education (LUT) at Malmö University. It should be emphasized from the start, however, that this study does not aim at a direct comparison of the two student groups, as differences in educational context and experimental settings would make such a comparison meaningless. Rather, what is attempted is a "parallel execution", where differences and similarities between the two institutions can be identified, leading to improvements of the methodology, as well as giving rise to new questions for further investigation.

Material and method

General Principle of the "Interactive Examination"

In principle, the methodology is based on six explicit stages:

1. Quantitative self-assessment. At the beginning of the process, the students assess their own competence through a number of Likert-scale questions, graded from 1 (poor) to 6 (excellent). In addition there are three open text fields, where the students can elaborate further on their self-assessment. When possible, the self-assessments are compared with the instructors' judgements of the students' competence, and feedback is given – a process that to some extent can be automated by the software. The purpose of this comparison is to highlight differences between the student's and the instructor's judgements, not to constitute a judgement per se. Possible deviations between self-assessment and instructor's assessment are only communicated to the students as a subject for reflection or as a possible discussion issue with the instructor.
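The automatable part of this comparison amounts to computing per-field deviations between the two sets of ratings. The following minimal sketch illustrates the idea; the field names and ratings are invented, and this is not the actual examination software.

    # Hypothetical 1-6 ratings for one student on a few assessment fields.
    self_assessment = {"diagnostics": 5, "patient communication": 4, "radiology": 3}
    instructor = {"diagnostics": 4, "patient communication": 5, "radiology": 3}

    def deviation_feedback(self_assessment, instructor):
        """Report fields where the two judgements deviate, as a prompt for
        reflection or discussion -- not as a judgement per se."""
        lines = []
        for field, own in self_assessment.items():
            diff = own - instructor[field]
            if diff:
                direction = "above" if diff > 0 else "below"
                lines.append(f"{field}: you rated yourself {abs(diff)} step(s) "
                             f"{direction} your instructor's rating")
        return lines

    print("\n".join(deviation_feedback(self_assessment, instructor)))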

2. Personal task. After the completion of the initial self-assessment, students receive a personal task in the form of a problem they might encounter during their professional life. This is an interactive part of the examination, where the interaction takes place between the student and the different affordances provided (such as links, pictures, background data, etc.). The students have to come up with a solution strategy and elaborate their choices in written text.

3. Comparison task. After the personal task, the students receive a document representing the way an "expert" in the field chose to deal with the same task. This "expert" answer does not correspond to the best or the only solution, but rather to a justified rationale from an experienced colleague, which remains open to discussion. The "expert" documents have been written in advance, and the students are given access to them as they submit their responses to the personal task. This is a way of dealing with the problem of providing timely feedback to a large number of students, but the "expert" answers also provide a kind of social interaction, although in a fixed (or "frozen") form. The stance taken here is thus that, although interaction is needed in order for learning to take place, this interaction does not necessarily involve direct communication or collaboration between humans (cf. Wiberg in this issue); the interaction can also be mediated by technology.

With the aid of the "expert" answer, the students can, according to the concept of "the zone of proximal development" (Vygotsky, 1978), potentially reach further than they can on their own, thus making the assessment dynamic. Dynamic assessment means that interaction can take place, and feedback can be given, during the assessment or examination, which separates it from more "traditional assessments" (Swanson & Lussier, 2001). In this way, dynamic assessment provides the possibility to learn from the assessment, but also to assess the student's potential ("best performance"), rather than (or together with) his or her "typical performance" (Gipps, 2001). Empirical studies have shown that dynamic assessment indeed helps to improve student performance, and also that low-performing students are those who benefit the most, thus making the difference between high- and low-performing students less pronounced (Swanson & Lussier, 2001).

After receiving the "expert" document, the students must, within a week, prepare a comparison document, where they identify differences between their own and the "expert" answer. The students are also expected to reflect on the reasons for these differences and try to identify their own needs for further learning. This comparison document is a part of the qualitative self-assessment in the Interactive Examination, which, in contrast to the quantitative self-assessment, is used for summative purposes as well.

4. Evaluation. After the examination the students evaluate the whole experience through a standardized form. At this point students have had no feedback on whether they have successfully completed the exam or not.

5. Assessment of students. The students are assessed on the basis of: (1) their competence and knowledge of course-specific objectives, and their ability to relate theoretical knowledge to the displayed scenarios and to think critically, as expressed in their personal task; and (2) their ability to reflect on their choices, identify weaknesses and define future learning objectives, as expressed in the comparison document.

When poor performance is demonstrated in any of the above fields, students are assigned additional tasks. In this way students cannot "fail" the exam completely, but might be requested to practice and improve the respective skills until a satisfactory level of competence is reached.

6. Personalized feedback. One month after the examination, individual feedback is sent electronically to all students. This feedback includes comments on the students' self-assessment and how it relates to the judgement of the clinical instructor, as well as comments on the personal task and the comparison document. Finally, the feedback contains suggestions for future tasks if necessary.

Current Experimental Settings at OD and LUT

The current experimental settings, as presented below, show how the Interactive Examination was applied in autumn 2004 to undergraduate students at OD and LUT, both of which are faculties at Malmö University. OD was founded in 1946 and provides undergraduate education in Dentistry, Dental Technology, and Dental Hygiene. The Interactive Examination is used within the dentistry programme, where 40 students are accepted every autumn. Within this programme, Problem-Based Learning (PBL) has been used since 1990 (Malmö University, 2006). LUT, with approximately 8,000 students, is the largest faculty at the university, and its undergraduate education covers the whole range from pre-school teaching to the upper secondary level. The undergraduate education at LUT is organized in five major areas, or fields of knowledge, and the Interactive Examination is used within the field called Science, Environment and Society (Malmö University, 2007). The Interactive Examination was made available over the Internet through e-learning platforms, making it possible for the students to take the examination at any place they found suitable.

The platforms used are non-commercial, and both have been developed locally to meet the requirements of the specific learning activities, as well as to facilitate the research conducted. At LUT a platform called ALHE (Accessibility and Learning in Higher Education) has been used, and this is the one presented more thoroughly in this article. ALHE is in many respects a conventional educational platform with both asynchronous (discussion forums and e-mail) and synchronous (online chat) communication tools, but it also includes some quite specific features. For example, students' discussions are logged and displayed in a way that helps the students reflect upon their own dialogic patterns and mutual knowledge building (e.g., who communicates with whom and to what extent, and what kinds of contributions the students have made), but also facilitates research on the same issues. Another feature is the use of questionnaires, where the results can be exported to data sheets (such as Microsoft Excel or SPSS) for further analysis. Furthermore, ALHE is built to allow for the addition of new modules, and the Interactive Examination is one such module implemented into the main platform. In the teacher interface, the files necessary for the examination (e.g., movies and "expert" documents) can be uploaded in a sequence mirroring the methodology, and the files are then accessible in the student interface as hyperlinks. This is complemented with the use of questionnaires in the quantitative self-assessment and the student evaluation, as described below.
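As a rough sketch of what such a questionnaire export might look like, the code below writes logged responses to a CSV file that Excel or SPSS can open. The record layout and file name are invented; this is not ALHE's actual code.

    import csv

    # Hypothetical logged questionnaire responses: one row per student,
    # one column per self-assessment question (1-6 ordinal scale).
    responses = [
        {"student": "s01", "q1": 4, "q2": 5, "q3": 3},
        {"student": "s02", "q1": 6, "q2": 4, "q3": 4},
    ]

    with open("self_assessment_export.csv", "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=list(responses[0]))
        writer.writeheader()
        writer.writerows(responses)  # openable in Excel; importable into SPSS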

1. Quantitative self-assessment. Students at both OD and LUT started the Interactive Examination with a questionnaire-based self-assessment of specific course-directed competencies.

Each OD student was assigned to one of six clinical instructors. The clinical instructors held regular meetings, where they revised the learning objectives for the 3rd semester and the quality assessment criteria for clinical work. As a result, 11 specific self-assessment fields were prepared for the students, reflecting 11 basic competencies. In the Interactive Examination, the students self-assessed their competence in the 11 knowledge and skill areas through an online form connected to a database. The self-assessment was carried out on ordinal scales marked from one to six, with six meaning "excellent" and one "poor". The same form had been used by the clinical instructors when assessing students' clinical competence at the end of the semester. The results from the students' self-assessment were later compared to those originating from their clinical instructors.

The self-assessment form that the LUT students completed was based on a scoring guide, or rubric, developed for this particular examination. The criteria in the rubric were equivalent to the self-assessment questions, making it possible to compare students' self-assessments with their actual results. The questionnaire had 13 questions relating to basic teacher competencies and three questions regarding reflection and self-assessment skills.

2. Personal task. The OD students received a clinical patient case, together with the possibility to access relevant images and diagnostic data. Their task was to identify the problem (diagnosis) and propose a treatment plan. Figure 1 illustrates the personal task: on the left side, links to movies and other affordances are visible just below the Swedish title "Interaktiv examination". The three forms described in the text are seen in the central portion of the screen. The right-hand picture shows how one of the movies is displayed in the browser using Flash software. Movies were also available as mpeg files, with higher resolution and higher-quality sound, for those students not using a dial-up Internet connection.

Figure 1. Screenshots from the student interface in the ALHE version of the Interactive Examination

The LUT students watched short movie sequences showing different problematic situations in a classroom context. Along with the movie sequences, the students could access some background data for the situation displayed, as well as the dialogue in text format. Of the total examination time, about one hour per movie was allocated for this part. With the movies as a starting point, the students filled in three different forms on the screen (see Figure 1):
i. Describe the situation objectively and without prejudice,
ii. Analyze the displayed situation on the basis of relevant literature and knowledge developed in the course, and
iii. Consider different alternatives and propose how the teacher in the movie sequence should act.

3. Expert response and comparison document. Both OD and LUT students received a text representing the way a qualified colleague chose to deal with the same problem. All students had to come up with a written reflection as directed by the previously described principles.

4. Evaluation. After the examination the students evaluated the whole experience through a standardized form. The form included ten fields to which students could respond on an ordinal scale from 1 to 9, as well as some multiple choice questions and free text fields. Free text comments were possible alongside all fields. Eight fields were identical in the two centres, and two fields were similar. Along with practical issues, the form contained questions about the examination as a learning experience from the students' point of view and its perceived relevance for their future profession.

5. Assessment of the students. The students were assessed on an "acceptable"/"not acceptable" basis, depending on their performance in the written task and the comparison document. In the Faculty of Odontology, one assessor evaluated all personal tasks and comparison documents, whereas six clinical instructors (4 female, 2 male) provided judgements for comparison with the students' initial self-assessments. The personal task was assessed through specific discipline-related evaluation criteria, while the comparison document was assessed through a specific scoring guide (Table 1).

At LUT, a scoring rubric covering the personal task as well as the comparison document was developed to inform both assessor and students about what was to be assessed (Table 2). The students had access to the rubric well before the examination to enable a discussion of the assessment criteria with their instructors. The guide was also intended to enable a more reliable assessment, despite the complexity of the task. All examinations were assessed by an external assessor.

6. Personalized feedback. One month after the examination, individual feedback was sent to all students. Feedback to the OD students included their performance in the written task, commentary on their comparison document, as well as suggestions for future learning.

For the LUT students, examination results were provided for each criterion in the scoring rubric. This very specific feedback showed both which criteria to give more attention in the future and the direction of that attention, in order to steer future learning. The students, as well as the researcher, could also easily compare the initial self-assessment to the actual results.

Sample

Both studies were carried out in autumn 2004, with a cohort of first-year student teachers in science and mathematics and second-year dental students respectively. While all dental students were included in the study (n=34, 18 female, 16 male), some student teachers did not show up for the exam (n=171 out of 174, 103 female, 68 male). Some student answers are also missing from the first part of the examination, due to technical problems, making the LUT sample somewhat smaller for the self-assessment (n=166). All students in both centres were exposed to the Interactive Examination for the first time.

Statistical Analysis

Students' responses in the self-assessment fields were compared for agreement with those from their clinical instructors (OD), or with the actual examination results (LUT), using a two-tailed Wilcoxon signed-rank test. This test is a non-parametric analogue of the t-test and is used to determine whether two paired sets of data differ significantly from each other. The comparison was carried out for each individual student and also independently for each of the self-assessment fields. A frequency analysis was performed for the totals of students' and instructors' scores in the assessment forms. The potential influence of gender or instructor on the examination scores, as well as on the pattern of self-assessment (higher, lower or in agreement), was investigated with regression analysis. Non-parametric linear regression was used to correlate gender, group and examination scores with the students' pattern of self-assessment.
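The paired comparison described above can be reproduced with standard statistics libraries. The sketch below uses scipy's implementation of the Wilcoxon signed-rank test (not necessarily the software used in the study); the ratings are invented for illustration.

    from scipy.stats import wilcoxon

    # Hypothetical paired 1-6 ratings for one self-assessment field: each
    # position is one student, rated by him/herself and by the instructor.
    self_scores = [4, 5, 3, 4, 6, 5, 4, 3, 5, 4]
    instructor_scores = [4, 4, 3, 5, 5, 5, 3, 3, 4, 4]

    # Two-tailed Wilcoxon signed-rank test on the paired differences.
    stat, p = wilcoxon(self_scores, instructor_scores, alternative="two-sided")
    print(f"W = {stat}, p = {p:.3f}")  # small p -> the paired sets differ significantly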

Qualitative Analysis

The qualitative analysis of the students' answers to the comparison task aims to answer the following questions:
1. What kinds of differences or similarities between the student and "expert" answers were identified?
2. Which reasons are stated for the identified differences?
3. Which weaknesses in their own competence as a teacher or dentist, and which learning needs, do the students identify?

Table 1. Criteria for grading OD students' comparison documents

Comparison of content
- Excellent (3 pts): The student has identified most/all the important differences.
- Acceptable (2 pts): The student has identified half of the major differences.
- Not acceptable (1 pt): The student has only identified very few or irrelevant differences.

Analysis/explanation of the differences
- Excellent (3 pts): The student is able to analyze/attribute differences.
- Acceptable (2 pts): The student can only partly analyze/attribute differences.
- Not acceptable (1 pt): The student does not attempt to analyze the differences.

Defining learning objectives
- Excellent (3 pts): The student reaches the learning objectives deriving from the analysis of differences.
- Acceptable (2 pts): The student provides learning objectives only partly relevant to his analysis of differences.
- Not acceptable (1 pt): The student does not reach learning objectives, or they are irrelevant to his analysis of differences.

Table 2. Part of the LUT scoring guide

Reflection: Can you use your own, as well as others', experiences as a basis for reflection and development?
- Acceptable: The reflection identifies differences between the student's own and the other teacher's interpretation of the situation. It presents some reason, or reasons, for the identified differences. It identifies shortcomings in the student's own professional competence.
- Excellent: The reflection identifies most of, or all, relevant differences between the student's own and the other teacher's interpretation of the situation. It argues in favour of the student's own standpoints on the basis of relevant literature. It identifies shortcomings in the student's own professional competence and states learning needs resulting from these shortcomings.

In a previous study (Mattheos et al., 2004b), the differences which students identified when comparing their own answers to the “expert” answer were categorized into differences in (1) form, (2) content, and (3) attitude towards the content. This classification is used in the current study as well, with the addition of a fourth category which was mainly present among LUT students: differences in interpretation. A difference in interpretation occurs when the student and the “expert” have interpreted the same situation in totally different ways, so that attempts to compare the two become extremely difficult. One example is when a student interprets the situation as a gender issue, while the “expert” writes about the same situation from an assessment point of view. Due to the nature of the problem dealt with in the cases of the OD students, differences in interpretation were much less prevalent there.

Results

Evaluation – Students’ Attitudes

Students’ acceptance of the methodology was positive, with the median values in most evaluation fields lying between 6 and 8 (OD), or around 6 (LUT), on a scale from 1 to 9 (Table 3). Based on their free-text comments, OD students favoured the opportunity to reflect on their own self-assessment and the contact with their educators in this type of assessment. Some students would have preferred more timely and personal feedback.

Student teachers appear to have found this mode of examination interesting and instructive, especially the comparison part. Some mentioned enhanced motivation and engagement as a consequence of the authentic tasks. A few students wrote about less stress and anxiety compared to more traditional modes of examination. Negative comments mainly concerned the lack of background information for the movie situations.



Table 3. Some results from the student evaluation

How do you value the Interactive Examination as a learning experience?
1 (not effective) - 9 (very effective)
Median OD: 8 (n = 33); Median LUT: 6 (n = 140)

Was it clear what was expected from you in the Interactive Examination?
1 (very unclear) - 9 (very clear)
Median OD: 6 (n = 33); Median LUT: 5 (n = 138)

To what extent do you feel you got the chance to show what you know?
1 (very little) - 9 (very much)
Median OD: 7 (n = 33); Median LUT: 6 (n = 140)

How much do you think this type of examination can help you to prepare for your working tasks as a dentist/teacher?
1 (very little) - 9 (very much)
Median OD: 6 (n = 33); Median LUT: 6 (n = 139)

How difficult were the examination cases?
1 (very easy) - 9 (very difficult)
Median OD: 6 (n = 33); Median LUT: 6 (n = 138)

Self-Assessment

The students’ self-assessment forms provided a total of 369 scores from 34 students (OD) and 2647 scores from 166 students (LUT). The corresponding forms from the clinical instructors at the OD amounted to 374 scores, as some students had probably omitted some of the evaluation fields. The unmatched scores were excluded and the comparison was based on 369 scores from 34 students.
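As a sketch of how such a matched comparison can be derived (the field names and scores below are hypothetical, not taken from the study), unmatched fields are dropped and each remaining pair is classified as higher, lower or in agreement:

```python
# Hypothetical forms mapping field -> score; not actual study data.
from collections import Counter

self_form = {"diagnosis": 7, "treatment_plan": 6, "communication": 8}
instructor_form = {"diagnosis": 6, "treatment_plan": 6,
                   "communication": 8, "documentation": 7}

# Exclude unmatched scores: keep only fields present on both forms.
matched = self_form.keys() & instructor_form.keys()

tally = Counter()
for field in matched:
    diff = self_form[field] - instructor_form[field]
    tally["higher" if diff > 0 else "lower" if diff < 0 else "agreement"] += 1

print(dict(tally))  # e.g. {'agreement': 2, 'higher': 1}
```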

A total of 142 (38 %) dental student scores were higher than the judgement of the clinical instructors, while 88 (24 %) were lower and 139 (38 %) were in agreement. On an individual basis, 12 students’ (35 %) judgement was significantly higher than that of their instructors (p


content. Differences in attitude were rarely reflected in the defined future learning objectives. On the other hand, the attitude differences were the field where students would most likely choose to defend their choice and argue against the response of the “expert”. Differences in interpretation were something very rarely encountered among the OD students.

In the majority of cases the OD students proved to be very skilful in locating the weak points and gaps in their knowledge, as opposed to the LUT students. Six students (2 females, 4 males) failed to identify the actual problems with their essays and were assigned additional tasks.

Discussion

The main focus of this study was the reflective process initiated by comparing one’s own work with that of someone else. This process is well rooted in students’ self-assessment ability, a necessary professional skill. The Interactive Examination is a methodology developed on this principle and has been carried out with several cohorts of students with promising results (Mattheos et al., 2004a; Mattheos et al., 2004b).

The present application of the Interactive Examination is unique in that it brings together two different educational environments. As emphasized initially, this study does not aim at a direct comparison of the two student groups, but rather at a “parallel execution”, investigating the self-assessment pattern and the acceptance by the students in two different institutions. Furthermore, the study aimed to identify institutional differences and similarities, hopefully leading to improvements in the effectiveness and applicability of the methodology, as well as providing new insights into the respective institutional learning cultures.

The students appeared to receive this form of examination favourably in both institutions. Students’ appreciation and acceptance of the examination methodology, as well as of the value of the reflective process, are seen as prerequisites for affecting their learning. The positive experience of the students therefore justifies a further analysis and discussion of the results from the Interactive Examination.

In the studies reported here, there was a somewhat different pattern of self-assessment in the two centres. While a large portion of the scores at OD were in agreement with the judgements of the clinical instructors, the corresponding value at LUT was much lower. Also, whereas the dental students had self-assessment scores both higher and lower than the instructors’, by far the greatest number of scores from the student teachers were higher than the examination results. It should be kept in mind, however, that the self-assessment was somewhat different in the two centres. While the OD student scores were compared to their instructors’ judgement, the LUT students’ scores were compared to their examination results. This difference, along with other contextual factors (such as the formulation of the self-assessment questions), has a potential influence on the students’ self-assessment pattern. Also, as OD students and instructors had been spending a whole semester together, a “calibration” effect might bring their judgements closer to each other’s. This means that the differences between student teachers and dental students must be interpreted with caution. There are, however, striking differences in the frequency of scores in agreement, as well as in the distribution of higher and lower scores, warranting further discussion and research.

According to previous research on self-assessment, low-ability students often overestimate their grade or score as compared to the teacher’s judgement, while high-ability students are more often in agreement with their teacher (Kruger & Dunning, 1999). Progress in the course of studies also seems to affect self-assessment skills: students at the beginning of a course produce less reliable assessments of themselves (Topping, 2003). At OD there was no relation between the students’ success on the exam and their self-assessment pattern, while at LUT there was such a relation. A possible explanation is that the dental students have progressed somewhat further in their education (they were in the 3rd semester of their studies), but might also be more homogeneous and calibrated as a group. By the 3rd semester the dental students have already spent a considerable amount of time working together in an environment where peer learning and group dynamics play a significant role, with continuous contact with the instructors.

No other factor, such as students’ gender, group or clinical instructor, could be found to relate to the self-assessment pattern at either OD or LUT. Studies reported in the literature on self-assessment show no uniform results on gender differences in self-assessment skills either (Arnold et al., 1985; Ericson et al., 1997; Topping, 2003).



The qualitative analysis of students’ comparison documents provided some very interesting findings. Evidently, the dental students were primarily focused on differences from the “solution” provided, while the student teachers seem to have focused extensively on similarities. Similarities tend to be only briefly mentioned, if at all, by dental students and are usually not accompanied by further arguments. It appears that the dental students treat the similarities as something well expected, almost self-evident, not worthy of special attention, and choose to focus on the explanation of differences instead. This attitude has been repeated in almost every cohort of dental students so far, to the extent that the assessors in the dental faculty have come to consider it a standard attitude, the reasons for which were never questioned. The execution of the Interactive Examination in the teacher education has, however, brought valuable insight in this field. In addition, differences in interpretation are encountered in student teachers’ documents, while they are rarely observed with dental students. Besides contextual factors, such as the nature of the expert document, several reasons are likely to have contributed to these differences:

Different nature of the assessed task. Diagnosis and treatment planning, as encountered in the second-year dental students’ cases, require a well-defined array of knowledge fields and competences. Although controversies are often encountered, the existence of specific guidelines and accepted practices narrows the spectrum of viable choices as well as the importance of subjective factors. The great majority of dental students identified the same main problem in each clinical case. With most students having the same starting point, differences might be more likely to attract attention than similarities.

On the other hand, the task which the student teachers were asked to complete covered a wider area of subjects, including social and moral issues, where the application of standards and guidelines is sometimes unclear. Furthermore, the cases could be approached from different points of view, defining different problems as starting points. This resulted in different intervention strategies and in differences in interpretation.

Difference in the institutional learning cultures. It appeared that dental students tend to see more authority in the “qualified dentist” than student teachers see in a “qualified teacher”. Dental students, at least at this early stage of their studies, seem less eager to question the opinions of a qualified dentist than student teachers those of an experienced teacher. If this is true, it might reflect differences in how the students see their future role as “end products” of their education.

To the dental students, a dentist might represent a strictly defined set of competences, accompanied by a certain degree of “authority”, which they are most likely uncomfortable challenging. Student teachers, however, seem to adopt more of a peer attitude towards their qualified colleagues. This is reflected in the fact that some students have chosen to criticize the “expert”. The criticism is mainly about views and values, but to some degree also about the interpretation and the examples chosen. In other cases the students regard their own solutions as qualitatively better than the qualified teacher’s; here it is foremost the choices of specific examples or actions taken that are considered better.

In future studies it would be very interesting to investigate this assumption further and to see whether there are differences between students in different profession-directed educations in terms of how they perceive their development towards the “final product” of their studies.

Conclusions

The added value of this multicentre study is threefold. The first lies in the validation of the methodology. There is often a problem in estimating the quality of new modes of assessment, since they cannot always be evaluated on the basis of traditional psychometric criteria. Gielen et al. (2003) argue that “To do right to the basic assumptions of these assessment forms [“authentic assessment” and “performance assessment”], the traditionally used psychometric criteria need to be expanded, and additional relevant criteria for evaluating the quality of assessment need to be developed” (p. 38). In this widened set of criteria, referred to as “edumetric” criteria, the validity concept has been expanded to include the tasks used, considering authenticity and complexity in relation to the knowledge domain being assessed, but also consequences of the assessment, such as the influence on students’ learning or learning strategies (Sambell et al., 1997; Gielen et al., 2003).



Even though further research on the quality of the Interactive Examination is needed to better determine the consequences of the methodology in terms of students’ learning and learning strategies, efforts have been made to include features aiming specifically at self-assessment skills, thus trying to make the examination more valid for the proposed purpose (Frederiksen & Collins, 1989). As described earlier, two parts of the Interactive Examination involve self-assessment. First, the students estimate their own competence on Likert-like questions, where the results are compared either to judgements from the instructor (OD) or to the actual examination results (LUT). This comparison, however, does not constitute a judgement per se; possible deviations between self-assessment and results are used only to draw the students’ attention to the difference and thus make reflection and learning possible. Secondly, the students compare their answers with the answer of an “expert”. This comparison is assessed, and feedback is given, according to scoring criteria.
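Since the first comparison is explicitly non-judgemental, the feedback can be reduced to flagging deviations. A minimal sketch of such attention-drawing feedback follows; the field names, messages and data are hypothetical, as the actual wording used in the examination is not specified here:

```python
# Hypothetical sketch: flag deviations between self-assessment and results
# without grading them, so the student's attention is drawn to the gap.
def reflection_prompts(self_scores: dict, results: dict) -> list:
    prompts = []
    for field, own in self_scores.items():
        actual = results.get(field)
        if actual is None or own == actual:
            continue  # no matching result, or self-assessment agrees
        direction = "above" if own > actual else "below"
        prompts.append(f"{field}: your self-assessment was {direction} the "
                       f"result ({own} vs. {actual}); consider why.")
    return prompts

print(reflection_prompts({"reflection": 8, "analysis": 5},
                         {"reflection": 6, "analysis": 5}))
```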

But addressing self-assessment skills is not the same thing as really capturing them. By investigating how the methodology is used and perceived in the two different institutions, however, an estimation of the validity can be made. Even though there are some institutional differences, displayed for instance in the different ways LUT and OD students handle the comparison task, the overall applicability of the methodology is similar in both centres and the students respond to it in an analogous manner. This indicates that the Interactive Examination might be a valid methodology for assessing students’ self-assessment skills in authentic settings, and thus a potential tool for assisting the development of certain metacognitive skills in higher education.

The second added value of cross-sectional, multicentre studies such as this is the provision of better insight into students’ self-assessment abilities. Future studies could investigate the longitudinal changes of students’ self-assessment abilities throughout the curriculum. Such follow-up studies are necessary in order to understand how these skills evolve, and also to allow educators to design proper interventions that identify and support students with weak self-assessment abilities early.

The last added value to be commented upon relates to the use of ICT in the Interactive Examination. Information and communication technology is used in several ways in the examination methodology, and for several reasons. For example, it makes possible an automated comparison of the quantitative self-assessment and the instructors’ judgement, and it also provides the necessary interactivity in the personal task, where the dental students have access to relevant images and diagnostic data, and the student teachers watch movie sequences accompanied by links to background data and other affordances. In both cases the students need to access the “expert” document after submitting their personal task, while at the same time saving their answers to the personal task in a database available to both assessors and researchers. Most importantly, however, the use of technology makes it possible to carry out valid assessments of student competences in a way not possible without this technological support. For instance, the authenticity of the examination could not be brought about by a paper-and-pencil test (cf. Lam, Williams, & Chua, 2007), nor could the same effectiveness be achieved if the students were assessed while actually performing in practice – this is especially true for the teacher education, with its large number of students. A conclusion is thus that training and valid assessment of self-assessment skills can be facilitated through the Interactive Examination, and that this can be done without necessarily increasing staff numbers or workload. In addition, as the examination is available online, the methodology could easily be used for distance education purposes. The Internet accessibility was used by a majority of the LUT students, who preferred to carry out the examination at home or, in a few cases, from other parts of the world (e.g. Afghanistan and Iceland).

References

Arnold, L., Willoughby, T., & Calkins, E. (1985). Self-evaluation in undergraduate medical education: A longitudinal perspective. Journal of Medical Education, 60 (1), 21-28.

Brown, G., Bull, J., & Pendlebury, M. (1997). Assessing student learning in higher education, London: Routledge.

De la Harpe, B., & Radloff, A. (2000). Informed teachers and learners: The importance of assessing the characteristics needed for lifelong learning. Studies in Continuing Education, 22 (2), 169-182.

Ericson, D., Christersson, C., Manogue, M., & Rohlin, M. (1997). Clinical guidelines and self-assessment in dental education. European Journal of Dental Education, 1 (3), 123-128.

Frederiksen, J. R., & Collins, A. (1989). A systems approach to educational testing. Educational Researcher, 18 (9), 27-32.

Gielen, S., Dochy, F., & Dierick, S. (2003). Evaluating the consequential validity of new modes of assessment: The influence of assessment on learning, including pre-, post-, and true assessment effects. In Segers, M., Dochy, F., & Cascallar, E. (Eds.), Optimizing new modes of assessment: In search of qualities and standards, Dordrecht: Kluwer Academic Publishers, 37-54.

Gipps, C. (2001). Sociocultural aspects of assessment. In Svingby, G., & Svingby, S. (Eds.), Bedömning av kunskap och kompetens, Stockholm: Lärarhögskolan i Stockholm, PRIM-gruppen, 15-67.

Hays, R. B., Jolly, B. C., Caldon, L. J., McCrorie, P., McAvoy, P. A., McManus, I. C., & Rethans, J-J. (2002). Is insight important? Measuring capacity to change performance. Medical Education, 36 (10), 965-971.

Kruger, J., & Dunning, D. (1999). Unskilled and unaware of it: How difficulties in recognizing one's own incompetence lead to inflated self-assessments. Journal of Personality and Social Psychology, 77 (6), 1121-1134.

Lam, W., Williams, J. B., & Chua, A. Y. K. (2007). E-xams: Harnessing the power of ICTs to enhance authenticity. Educational Technology & Society, 10 (3), 209-221.

Malmö University (2006). About the Faculty, Retrieved October 1, 2007, from http://www.mah.se/templates/Page____13096.aspx.

Malmö University (2007). About the School, Retrieved October 1, 2007, from http://www.mah.se/templates/Page____13082.aspx.

Mattheos, N., Nattestad, A., Christersson, C., Jansson, H., & Attström, R. (2004a). The effects of an interactive software application on the self-assessment ability of dental students. European Journal of Dental Education, 8 (3), 97-104.

Mattheos, N., Nattestad, A., Falk Nilsson, E., & Attström, R. (2004b). The Interactive Examination: Assessing students' self-assessment ability. Medical Education, 38 (4), 378-389.

Ngan, P., & Amini, H. (1998). Self-confidence of general dentists in diagnosing malocclusion and referring patients to orthodontists. Journal of Clinical Orthodontology, 32 (4), 241-245.

Reisine, S. (1996). An overview of self-reported outcome assessment in dental research. Journal of Dental Education, 60 (6), 488-493.

Sambell, K., McDowell, L., & Brown, S. (1997). "But is it fair?" An exploratory study of student perceptions of the consequential validity of assessment. Studies in Educational Evaluation, 23 (4), 349-371.

Swanson, H. L., & Lussier, C. M. (2001). A selective synthesis of the experimental literature on dynamic assessment. Review of Educational Research, 71 (2), 321-363.

Topping, K. (2003). Self and peer assessment in school and university: Reliability, validity and utility. In Segers, M., Dochy, F., & Cascallar, E. (Eds.), Optimizing new modes of assessment: In search of qualities and standards, Dordrecht: Kluwer Academic Publishers, 55-87.

Vygotsky, L. S. (1978). Mind in society, Cambridge: Harvard University Press.

Yeh, Y-C. (2004). Nurturing reflective teaching during critical-thinking instruction in a computer simulation program. Computers & Education, 42 (2), 181-194.



Olofsson, A. D. (2007). Participation in an Educational Online Learning Community. Educational Technology & Society, 10 (4), 28-38.

Participation in an Educational Online Learning Community

Anders D. Olofsson
Umeå University, Sweden // Tel: +46 90 786 78 09 // Fax: +46 90 786 66 93 // Anders.D.Olofsson@pedag.umu.se

ABSTRACT

This paper discusses the issue of learner participation in a net-based higher education course. With the starting point in recent educational policies formulated by the European Union and the results of an evaluation report from the Swedish Net University, I raise the question of which pedagogical aspects need to be considered in order to support active learner participation in these types of learning environments. Based on analyzed data from 19 semi-structured interviews with trainees on a Swedish net-based teacher training programme supported by information and communication technologies, I attempt to show that in order to become a member of such an educational online learning community, each trainee is required to be active and to hold an inclusive attitude towards the other members. Further, it seems that the trainees often had to rely on and trust each other due to the sparse communication with their teacher trainers. I conclude this paper by discussing the need for a pedagogical approach that relies heavily on social, collaborative and ethical aspects of learning as a starting point for the design of online learning communities to support the kind of education needed for the 21st century.

Keywords
Distance Education, Learning Community, Participation, Pedagogy, Teacher Training

Introduction

In Lisbon, the European Council stated that the European Union (EU) should become the world's most competitive and dynamic knowledge-based economy (EU legislation, 2003a). The field of education has been identified as one of the target areas for the implementation of this declaration, thus demanding an elaborate educational strategy. In response to this demand, a number of tangible goals for how the educational system in Europe should be run in the future have been identified (EU legislation, 2003b). These goals are: first, to improve the quality of the education system; second, to facilitate access to the education system for members of the European Union; and third, to open up the educational system in the European Union to the rest of the world.

Sweden has been working actively in this field, focusing on the creation of an extended number of university programmes and courses following the EU recommendations. Programmes and courses which are provided off-campus, using the internet as the means for participation, collaboration and dialogue, have increased significantly during the last decade (Lindberg & Olofsson, 2006).

Recently, the Swedish National Agency for Higher Education (2005) stated, in an evaluation of the Swedish Net University, that net-based education could be understood as an educational form that contributes to widening the base for recruitment to higher education. Higher education in Sweden has traditionally had problems recruiting students who do not have middle- or upper-class backgrounds. Additionally, the evaluation stated that the students' geographical location was less important within this educational form. This is claimed to be due to the possibility of attending university programmes and courses all over Sweden using the internet. For that reason it seems no longer necessary for students to physically attend the university campus a number of times each semester. Higher education today appears to be open to people from every walk of life in Swedish society. This trend within higher education is, however, not revolutionary and has, according to Berg (2003), been present in the USA for quite a long time.

Nevertheless, one issue that probably needs to be noted as an area for re-thinking is how to organize such higher net-based education. How is it possible to convert net-based education into a venture characterized by active participation, collaborative learning, and negotiating and sharing of meaning (Wenger, 1998)? What kind of pedagogy could be used to support and sustain the knowledge-building process of each student, and at the same time guarantee that education does not become merely a question of transferring information? How can social aspects of learning be included in the design of net-based education, and what are some of its pitfalls? This paper intends to investigate some possibilities within a net-based teacher training programme in which the students are organized in smaller study groups, which here are understood as Educational Online Learning Communities (E-OLCs) (Carlén & Jobring, 2005; Olofsson & Lindberg, 2006). I especially wish to shed light on the issue of values underpinning all activities within the Educational OLCs. The overall research question is formulated as follows: what is required from each teacher trainee to become a member of one of the Educational OLCs within the programme?


Additionally, I will describe in further detail how the understanding of distance education has moved from being a question of transferring knowledge towards being a question of learning together in an E-OLC. Thereafter I describe the results of a study conducted with 19 trainees on a Swedish net-based teacher training programme supported by information and communication technologies. I will conclude by discussing the outcomes of the study in connection to the ideas discussed in section two and the challenges that might be present in the design of an Online Learning Community (OLC). The paper concludes by stressing the importance of including pedagogical issues, firmly based in an ethical view of people and education, when educating online.

From Individuality to Community – Net-based Learning in Transition

Traditionally, distance education has been an educational form with limited possibilities for students to learn with and from each other (Howard, Schenk & Discenza, 2004). This means that students were often reduced to using technology that, more or less, only made it possible for them to download ready-made educational material within the programme or course they attended (Bonk & Cunningham, 1998). If the students wanted to work with their co-students, for example in order to gain a deeper understanding or to collaborate, they had to use ordinary letters, the telephone, or even physically attend the university campus.

Today this picture has changed. A new movement within higher education is present, which is also expressed in legislative documents regarding education in Europe. For example, the recommendation “eLearning – Designing tomorrow's education” (EU legislation, 2003a) states the need for establishing virtual forums and campuses that will promote the development of distance teaching and learning and the exchange of best practice and experience. This call for virtual forums can be seen as a result of research investigating the concept of community within education, teaching and learning.

Community is a concept that has been used for different purposes within research for a long time, but the renewed interest stems from the work on communities of practice initially carried out by Lave and Wenger (1991) and further developed by Wenger (1998). Learning is to be understood as the movement, or trajectory, in which a process of legitimate peripheral participation is developed between people in a certain context. People, or members, in a community are fostered into the core of the practice by processes characterized by negotiation of meaning; processes that not only include aspects of how something should be learnt and understood, but also the inculcation of the values, assumptions and beliefs underlying the practice (see also Sergiovanni, 1999), ensuring a common ground for the activities within the community (Bauman, 2001).

The concept of community has lately been used in relation to education and learning located on the internet (Sorensen & Ó Murchú, 2004), referred to as the OLC (Jaldemark, Lindberg & Olofsson, 2005; Lindberg & Olofsson, in press; Lock, 2002; Olofsson & Lindberg, 2005; Palloff & Pratt, 2005; Seufert, Lechner & Stanoevska, 2002). An OLC is said to cross geographical borders, and the aspect of time as a barrier is reduced (Haythornthwaite, Kazmer, Robins & Shoemaker, 2000; Lewis & Allan, 2005). According to Carlén and Jobring (2005), there are different types of OLCs depending on their focus. The first type is the Professional OLC, which focuses on work-related questions. The second type is the Interest OLC, which focuses on topics related to the members' leisure activities. The third type is the Educational OLC (E-OLC), which focuses on educational issues.

The concept of community seems very promising in relation to higher net-based education, not only today but also tomorrow. However, when considering the concepts of community and OLC, the impression is one of a rather uncomplicated process of both becoming and being a member of a community, regardless of whether the community is located on the internet or not (compare Söderström, Hamilton, Dahlgren & Hult, 2006). The rationale behind the use of community seems to be that it is only necessary for people to connect to each other (perhaps through the internet) for a process of bonding, the building of a community, and a process of collaborative learning by means of active participation and dialogue to appear. Is it the case that by interconnecting people in a forced or fictive community, in this case teacher trainees in an E-OLC, a joint learning enterprise appears and learning occurs? What aspects of moral underpinnings and ethical considerations underlie a membership within such an E-OLC?



Etzioni (1993) points out that an important aspect of a community is shared morality. The community speaks to its members with a moral voice, and restricts and sets demands on its members. Within a community there are always, according to Etzioni, possibilities for negotiations between members; negotiations that may decide which activities, behaviours, opinions, values etc. should be considered right or wrong within the community. If any violation or obstruction of the negotiated meaning of being a member occurs, the community will respond with power in order to stop the unwanted, non-sanctioned activities.

Bauman (2001) describes in a similar way the double-edged character of community; how it cuts both ways. On the one hand, a community carries connotations of a state of being together and living side by side in harmony with others. On the other hand, Bauman continues, the demands for conformity with the community undermine the rights to one's freedom and self-assertion. The continuous struggle between individuality and the desire to be included in the community and share a feeling of togetherness with others is by Elias (1991) called “the I/We balance”. This balance can be tilted in either direction, depending on the meaning of the negotiated understanding of being a member of a certain community.

If membership in a community is about sharing values, negotiating values and creating shared meaning, research is required to investigate and try to understand how these processes work in educational settings. In the next section, an empirical study is presented that is aimed at these aspects of membership in the context of an E-OLC, positioned in net-based teacher training in Sweden. The analysis shows how the trainees perceive the programme as a venture characterized by active participation, collaborative learning, and negotiating and sharing of meaning, and some of the potential pitfalls in taking these processes for granted.

The empirical context is a net-based teacher training programme, and for that reason the type of OLC considered here is the E-OLC, which “…relates to learning activities in schools, colleges and universities. An institution or faculty promotes and structures education programs for learners in which the students get credit for what they know and what they do.” (Carlén & Jobring, 2005, p. 275). By using tools like blogs, chats and e-mail, the members of an E-OLC can connect with each other, and central features within net-based higher education, like active participation, collaboration and dialogue, can take place (Schwier, 2002; Sorensen & Takle, 2004).

Research Related to the Present Study – A Swedish Perspective

An increasing amount of research related to the study presented in this paper has been carried out in Sweden. In that body of research, it seems possible to identify at least four themes relevant to the ideas presented here. The first theme relates to teacher training: Lindberg (2002) investigates the discourses of teacher training and analyzes contemporary discussions of teacher training as they have taken shape in relation to the present teacher training reform in Sweden, and Bernmark-Ottosson (2005), who investigates teacher trainees' conceptions of democracy, is another example of current efforts in this direction. A second theme relates to distance education: Thórsteinsdóttir (2005) has investigated the information-seeking behaviour of distance students, and Rydberg Fåhræus (2003) has studied collaborative learning and how this approach can be applied and supported in distance education. A third theme is ICT and the internet: Keller (2005) has analyzed students' acceptance of Virtual Learning Environments (VLEs) in a mixed learning environment, and Stigmar (2002) investigated how students used the internet for information seeking. A fourth and final theme relates to research on the concept of community: Karlsson (2004) has explored the concept of community in order to understand how a teacher team functions as a vehicle for developing competencies in the pedagogical use of ICT, and Svensson (2002) uses the concept of community as a theoretical tool to understand and support IT-mediated communities of distance education.

The ambition here is not to present a synthesis of all research conducted in Sweden that can be related to the study presented in this paper. Rather, the themes mentioned above are attempts to identify what has already been researched in Sweden regarding objects such as teacher training, distance education, ICT and the internet, and community. According to Bransford, Brown and Cocking (1999), it is important to conduct extensive research on teacher learning, teacher training and how to develop learning opportunities for teachers, in order to teach in line with new theories of learning. Further, they claim that research evidence shows that successful professional development activities for teachers are extended over time and encourage the development of teachers' learning communities. They also stress the importance of providing teacher trainees with a chance to form teams that stay together throughout their teacher training. In net-based teacher training, such an idea



could be to give the trainees a chance to create an E-OLC characterized by active participation, collaboration and dialogue. Furthermore, according to Lankshear and Knobel (2006), there is a demand for new ways to train future teachers, who will face new teaching and learning challenges with young children and students growing up with mobile devices and new media. To conclude, in net-based teacher training all these ideas and challenges are intertwined. It is therefore important to continuously research and develop net-based teacher training programmes if the teacher trainees of today are to be able to meet the demands of the young children and students of tomorrow.

The Context of the Study

The context of the study was a net-based teacher training programme provided by a university in northern Sweden. This study is a follow-up investigation from a larger case study concentrating on the issue of training teachers through technology (Lindberg & Olofsson, 2005). The participants were teacher trainees in the aforementioned net-based teacher training programme. The programme investigated was three and a half to four and a half years long, depending on the teacher trainees' choice of degree, and was mainly aimed at training teachers to teach in sparsely populated areas in Sweden. The programme was mainly conducted via a net-based learning environment, and only a small number of on-campus gatherings were included. The trainees were divided into smaller study groups, and the net-based learning environment enabled collaboration and communication throughout the programme, both synchronously and asynchronously, without meeting physically. The net-based learning environment used in the programme was WebCT, and examples of learning tools are email, chat, discussion groups with threaded discussions, electronic portfolios, study-group conferences, and a students' café.

The study group in this paper should be regarded as an E-OLC. The trainees were continuously encouraged by the educators to actively use the net-based learning environment; this was, for example, stressed in the study guides provided for the teacher trainees. The teacher training programme was aimed at educating future teachers with the competence both to handle ICT and to use it to get in contact with other people in the surrounding society who could enrich their own practice.

Data Gathering and Procedure

The total number of teacher trainees enrolled at the time of data gathering was 77. All trainees were asked to be interviewed. A group of 22 trainees volunteered, but out of these, three were unable to take part, and the interviews were carried out with the remaining 19. The interviewed trainees were between 20 and 50 years old; 13 were female and 6 were male. The interviews were semi-structured, and each interview was conducted in relation to a pre-specified interview guide. The study was conducted in spring 2004. The interview guide was divided into 4 themes, containing 15 questions in total. Before the interviews, each trainee was given the interview guide, with the intention of allowing the trainees to prepare. Each interview lasted approximately 25 to 60 minutes. For this paper, analysis was conducted on data from the theme concerning educational issues. This theme included four questions and was constructed with the intention of functioning as a basis for an open discussion. All answers were recorded on tape, transcribed and analyzed. Each interviewee was given the opportunity to comment on the transcripts before the analysis. The questions included in the theme were as follows:

A student in your study group is having difficulties keeping up in the programme and therefore needs help. What are your thoughts about being involved in and taking responsibility for the situation?
There is a new student in the study group. What are your thoughts about being involved and taking responsibility for helping the student to feel at ease in the new group?
What are the important issues that you discuss in the programme?
What have you learnt in the programme that is especially important for the future?

Analysis

The analysis used an approach inspired by hermeneutics (Gadamer, 1976), intended to preserve the complexity of the educational setting being researched. In the analysis, three categories were constructed and presented. The categories emanated from interpretations of the transcribed data and should not be seen as objective truths or as capturing actually existing practices within the net-based teacher training programme.


They are rather constructions of how the trainees discuss aspects of studying, learning and bonding together in a programme mostly carried out via the internet. The aim of the analysis is to put forth some aspects of what appears to be required from each trainee to be a member of an E-OLC, in terms of active participation and collaborative learning, and how this can be understood as a process of negotiating and sharing meaning with other trainees.

The categories and the analysis are based on interpretations of the data collected from all four questions included in the theme concerned with educational issues. In the process of analysis, each question within the theme was analyzed separately and in the light of the other three questions. The process of interpretation is reflexive in the sense that the interpretation of the teacher trainees' discussions consisted of the interplay between the whole body of data collected within the theme and the discussion of each trainee. This corresponds to the idea of relating the parts to the whole and back again in an ongoing circle, or spiral, of interpretations, all in order to construct an understanding of the data (Gadamer, 1976; Risser, 1997). Further, the interpretations are not to be seen as categorizations into mutually exclusive categories, but rather as a way of describing differences in how the teacher trainees viewed participation in the programme. Consequently, quotations used in each category can emanate from data collected within all four questions included in the theme. The quotations provide specific accounts from the data from which the interpretations are made, and are typical of the trainees' discussions about being part of a net-based teacher training programme.

Category 1 – The Importance of Contributing to the E-OLC

In order for the teacher trainees in a study group to graduate from the programme, or just to pass a single course within it, they are individually expected to continuously contribute to the learning activities in the E-OLC. It appears that the trainees have to be able to adapt to the norms defining how to participate in the study group. For instance, this could involve taking an active part in the collaborative process of solving a group-related task provided by the teacher trainers on the programme. One trainee, discussing co-trainees who do not do what is expected, expresses this:

“…if the problem comes from the co-trainee's lack of pre-knowledge, or that they just don't manage the studies, or aren't able to assimilate their understanding, the question is if we, as a study group, should help her or him to pass the test. If so, you can question if we in the study group are doing something morally wrong…”

Furthermore, the contribution should not only agree with what is formulated as a productive contribution in relation to, for example, how the task is conceived within the study group, but should also be made when the other members of the group expect it. Failure to fulfil what has been negotiated often seems to be attributed to a single trainee's lack of individual qualities for studying or lack of self-discipline. Discussing co-trainees not delivering what is expected on time, one trainee states:

“…I think that you shall take responsibility when you get involved in something. If it [the student's failure] is because this person has problems, because she or he has 1000 other things besides the studies, I think that you should confront them…”

Additionally, it seems that the trainees are not interested in acting in the role of a teacher for their co-trainees; rather, they reflect an understanding in which each single trainee, with her or his individual contribution, forms the collective that together can solve, for example, an examination task. Another trainee relates to the same issue by saying:

“…if someone in the [study] group didn't understand, the others tried to explain. That is something I think works well. However, to be the one who tries to help all those who have a problem in understanding could mean that you don't have enough time to manage your own studies and instead, like Florence Nightingale, you flutter around and try to help everyone…”

The overall understanding within this category seems to be that the valued way to participate in the E-OLC is characterized by, for example, individualization, activity, self-regulation and goal-orientation. Being a teacher trainee, in terms of membership in an E-OLC in this particular programme, means being part of a collective of individuals.



Category 2 – The Importance of Being-Together

The meanings embedded in this category seem to stand in rather strong contrast to category one. Instead of an idea in which the individual teacher trainees have to take personal responsibility for their own learning and educational success within the programme, another aspect of membership of an E-OLC seems possible to present. Following a net-based programme with only a few physical on-campus gatherings seems to call for another way of fulfilling the social dimension in the educational programme. One way of realizing a social dimension without meeting face-to-face (f2f) appears to be to use ICT to form an E-OLC. This time, however, it does not primarily seem to be the learning process and the goal of graduating from teacher training that is the reason for connecting with co-trainees, but instead the search for a feeling of being together. In relation to being separated in space and in need of someone to discuss with, one trainee says:

“…in distance education you are so isolated. The teachers are far away, so you are far away from each other. We have learnt to give and take from each other and that we need each other…”

Several trainees underline the importance of everybody feeling at ease in the group, and that it is every member's responsibility in the E-OLC to make this happen. This could be due to the risk that something could happen to you, in which case you would need support within the study group; a study group that is mostly accessible via the E-OLC. One trainee expresses such an understanding when stating:

“Of course I would take responsibility for that [everyone feels welcome in the study group]. Everyone has the duty to push each other because you, yourself, could be in a low phase when everything feels difficult. It could be, for example, that you have problems at home or something else that makes you need support. It is our [the study group's] mission to support each other.”

Another aspect of the teacher training programme that seems to require a continuous dialogue between the members of the E-OLC concerns the trainees' future as teachers: not specifically how to solve problems related to a certain course or task, but instead, for example, what courses to choose within the programme and what it will be like to work as a teacher in “the real world”. One trainee says:

“…yes, we discuss a lot. Which attitude to have in the classroom together with the pupils and how to handle different situations, which working methods to use and of course the salary we will get after completing the education…”

Another trainee discusses the issue and says:

“…we discuss study loans, economy of course. How to act in order to get through this training and of course how you will work afterwards…”

An overall interpretation within this second category seems to be that the trainees need the possibility of continuous access to each other. The trainees are located in different geographical settings, unable to meet physically on a regular basis, and their solution is therefore the E-OLC. The contrast with the first category seems to be that, when discussing issues more related to everyday life or the future life as a teacher in school, the norms for how to contribute to activities and dialogues within the E-OLC are not as restricted as when contributing to activities and dialogues concerning more specific tasks and examinations within the programme.

Category 3 – The Importance of Bridging Distances Between Trainees and Trainers

It seems that the teacher training programme in question requires continuous contact both trainee-to-trainee and trainee-to-trainer. As seen in categories 1 and 2, this contact can take diverse forms: having dialogues concerning different issues related to the programme, receiving responses to completed work, and obtaining additional perspectives on the discussed issues. It seems, however, that the trainees more or less lack support from the trainers and instead have to rely on one another and on each trainee's individual capacity to move forward within the programme. Sometimes they do not know whether they have passed previous tasks or assignments. The students feel the programme lacks the support and the possibilities to influence communication that are required in order to achieve a secure and improving teacher training in collaboration with the staff at the university. The norm seems



to be that each individual trainee must take responsibility and stay motivated to keep up their studies even if they receive no feedback or support from the trainers. One trainee expresses this by saying:

“…they [the trainers] just wait, and wait, and because I haven't received anything back I assume that I have failed the test… The same thing happens when we post questions on the net, and it takes three weeks before we get any answers, or even worse when we don't receive any answers at all, and the trainers say that they have seen us discussing these questions on the net… I hope that I will remember that feeling when I teach my pupils in school and when we talk about democracy. It [democracy] is non-existing in the university…”

Furthermore, it seems as if problems needing to be solved, and questions, are saved for when the trainees attend the university physically. The trainees put forth that they use the E-OLC to support each other between on-campus gatherings and, as one trainee puts it:

“…we write e-mails to each other and then, during on-campus gatherings, we try to straighten things out…”

Another aspect of the importance of bridging distances between trainees and trainers relates to how the on-campus gatherings are carried out. The gatherings are mostly built around lectures and seminars during which different content matters are problematized. The analysis suggests that the trainees would like more social activities during on-campus gatherings, which would provide opportunities for bonding and for strengthening their relationships. These relationships could thereafter continue to be strengthened via the E-OLC. Additionally, there seems to be a wish for more occasions on which trainees and trainers can collaboratively discuss the complex task of being a teacher. One trainee states:

“…a thing that I have thought a lot about is the social dimension in the teacher profession. We are becoming so much more than just a person that has to transfer blocks of knowledge to them [the pupils]. We have to actually be there and foster them and give them [the pupils] values and everything. We think that many damn fine words have been written, but there is a great distance to the pupils. It would be so much better if some time was spent on trainers connecting with the trainees instead of writing fancy papers…”

Within this third category, the overall understanding that can be put forth is that there is a lack of continuous contact between trainees and trainers: for example, a dialogue in which the two parties could negotiate issues such as how to meet the pupils in the classroom and the meaning of the constitutive values underpinning the Swedish school system. Furthermore, it seems as if the trainers, to some extent, reject using ICT to report results from different assignments and refrain from joining the discussions taking place within the E-OLC. The trainees put forth that they want a social dimension in the programme, despite their separation in space, but the staff at the university do not appear to share this ambition.

Discussion

The net-based teacher training programme studied in this paper seems to be of a complex nature, and it seems to be difficult to be a successful participant in it. When contrasting the three categories described in the analysis, three issues supporting this statement became particularly apparent. The first issue suggests that the individual trainee is held responsible for their own learning. Success appears to depend on each trainee’s capacity to contribute to the collaborative work conducted within the study group. Failure, though, seems to be just around the corner for a trainee who has problems living up to this negotiated norm. It appears to be difficult to trust the other members of the E-OLC and to act as a united community aimed at solving different kinds of educational problems or tasks through a joint process of collaboration. Provocatively formulated: if you do not contribute or participate in line with what is expected, you had better not contribute or participate at all!

The second issue suggested how the individual trainee needs others. There seems to be a fear among the teacher trainees: a fear of being the next one with problems and of being left to cyberspace in order to get in contact with co-trainees. This fear seems to promote a more inclusive attitude towards other members and towards how they participate and express their ideas within the E-OLC. Being-together in the E-OLC is important, which is in opposition to the embedded meaning of the first issue.



The third issue suggested difficulties in bridging the distance between trainees and between trainees and trainers. This seems strange, as ICT is the very tool meant to connect the trainees to the university. It seems to reflect a view that the trainees are supposed to manage on their own in the programme. The trajectory discussion very much becomes a question of the trainees’ preconceptions of what to expect after graduating and of how to handle morally and ethically complex situations when working as teachers. This result is also interesting in the light of a study by the Swedish Knowledge Foundation (2005) reporting on teacher trainees’ attitudes towards ICT. The teacher trainees stress that teacher trainers’ competence in using ICT in education and teaching is poor, as is their knowledge of ICT as a pedagogical tool. A majority of the teacher trainees further state that they are positive towards using ICT and the internet in their teaching and that they will use ICT as a pedagogical tool.

What can we learn from the study presented in this paper about how to support learners in this kind of educational programme? It seems possible to claim that a pedagogy relying on designed participation, in this case, neglects that individuals may also need their own individual design. The lack of collective solutions to satisfy the trainees’ need for others outside the study group may restrict their access to the educational content. Furthermore, the designed participation restricts the trainees’ studies, leaving them to wait for trainers to respond and for fellow trainees to react to their needs. In conclusion, if viewed as a pedagogical idea for higher education, designing E-OLCs might bring about severe difficulties.

On the other hand, it seems possible to claim that the need for socializing and support, from fellow trainees as well as from trainers, suggests that the E-OLC has an important role to fulfil. The negotiation of the meanings and values that underpin the education in question is not to be taken for granted or merely handed over to the trainees without support. In order to support such a development in net-based teacher training programmes, more research seems to be required to construct an understanding of how the processes of participation and collaboration can be brought about while at the same time laying an important foundation for processes of negotiating meaning in which the ethical aspects of being-together are in focus.

The educational visions of the Bologna process and the policies of the European Union will affect the way teachers are trained in the future. The next step in higher education today seems to be to build individual knowledge and at the same time build a future citizenship: a membership in Europe. In this process, the teachers of tomorrow will play a crucial role. However, the question remains how people will react to the intention of being designed to become the Europeans of tomorrow. Will belonging to the future European community of learners be tempting; will it offer such safety that people are willing to give up parts of their sovereignty and become what they are expected to become? To what extent, and towards what ends, is the I/We balance (Elias, 1991) to be tilted? In each of the three categories described, different aspects are in focus and different rationales seem to be at work.

Conclusions

When being part of an educational experience, in this case a net-based teacher training programme, students transcend the intended use of both ICT and design. A pedagogy in which being-together is the starting point seems to be called for when designing this kind of educational programme. E-OLCs need to be designed with the pedagogical issues firmly based in an ethical view of people and education. Having to rely on each other in the study group situated within an E-OLC, simply because this is the structure designed to support the studies, becomes part of a process of negotiation in which the trainees seem left to their own devices. What is negotiated can hardly be designed, it seems. Therefore, it might well be that access to education (in the intended way, based on participation, collaboration and negotiation) is denied some trainees if they do not adhere to the negotiated ways. I conclude this paper by emphasizing the need for a pedagogical approach that relies heavily on the social, collaborative and ethical aspects of learning as a starting point when designing the E-OLCs that are to support the kind of education that Sweden, as well as the EU, needs for the twenty-first century.

References

Bauman, Z. (2001). Community: Seeking safety in an insecure world. Cambridge: Polity Press.

Berg, G. A. (2003). The knowledge medium: Designing effective computer-based learning environments. Hershey, PA: Information Science Publishing.

Bernmark-Ottosson, A. (2005). Demokratins stöttepelare: en studie av lärarstuderandes demokratiuppfattningar [Pillars of democracy: A study of teacher students’ perceptions of democracy]. Doctoral thesis, Karlstad: Avdelningen för pedagogik, Institutionen för utbildningsvetenskap, Karlstads universitet.

Bonk, C. J., & Cunningham, D. J. (1998). Searching for learner-centered, constructivist, and sociocultural components of collaborative educational learning tools. In Bonk, C. J. & King, K. S. (Eds.), Electronic Collaborators: Learner-Centered Technologies for Literacy, Apprenticeship, and Discourse, Mahwah, NJ: Lawrence Erlbaum, 25-50.

Bransford, J. D., Brown, A. L., & Cocking, R. R. (1999). How People Learn: Brain, Mind, Experience, and School. Washington, DC: National Academy Press.

Carlén, U., & Jobring, O. (2005). The rationale of online learning communities. International Journal of Web Based Communities, 1 (3), 272-295.

Elias, N. (1991). The Society of Individuals. London: Continuum.

Etzioni, A. (1993). The Spirit of Community: Rights, Responsibilities and the Communitarian Agenda. London: Fontana Press.

EU legislation (2003a). eLearning - Designing tomorrow’s education. Retrieved October 7, 2007, from http://europa.eu.int/scadplus/scad_en.htm.

EU legislation (2003b). Concrete future objectives of education systems. Retrieved October 7, 2007, from http://europa.eu.int/scadplus/leg/en/cha/c11049.htm.

Haythornthwaite, C., Kazmer, M. M., Robins, J., & Shoemaker, S. (2000). Community development among distance learners: Temporal and technological dimensions. Journal of Computer-Mediated Communication, 6 (1). Retrieved October 7, 2007, from http://jcmc.indiana.edu/vol6/issue1/haythornthwaite.html.

Howard, C., Schenk, K., & Discenza, R. (2004). Distance learning and university effectiveness: Changing educational paradigms for online learning. London: Information Science Publishing.

Jaldemark, J., Lindberg, J. O., & Olofsson, A. D. (2005). Att förstå hur man deltar via redskap i en lärgemenskap [To understand how one participates through tools in a learning community]. In Jobring, O. & Carlén, U. (Eds.), Att förstå lärgemenskaper och mötesplatser på nätet [To understand learning communities and places to meet on the net], Lund: Studentlitteratur, 109-147.

Karlsson, M. (2004). An ITiS Teacher Team as a Community of Practice. Doctoral thesis, Göteborg: Acta Universitatis Gothoburgensis.

Keller, C. (2005). Virtual learning environments in higher education: A study of students’ acceptance of educational technology. Doctoral thesis, Linköping: Department of Computer and Information Science, Linköping University.

Lankshear, C., & Knobel, M. (2006). New Literacies: Everyday Practices and Classroom Learning. Berkshire, UK: Open University Press.

Lave, J., & Wenger, E. (1991). Situated learning: Legitimate peripheral participation. Cambridge: Cambridge University Press.

Lewis, D., & Allan, B. (2005). Virtual learning communities: A guide for practitioners. Berkshire, UK: Open University Press.

Lindberg, J. O., & Olofsson, A. D. (2005). Training Teachers Through Technology: A case study of a distance-based teacher training programme. Doctoral thesis, Umeå: Umeå University.

Lindberg, J. O., & Olofsson, A. D. (2006). Distancing Democracy: Organising on-line teacher training to promote community values. UCFV Research Review, 1 (1). Retrieved October 7, 2007, from http://journals.ucfv.ca/rr/RR11/article-PDFs/lindberg-olofsson.pdf.

Lindberg, J. O., & Olofsson, A. D. (in press). OLC in the context of the Other: Face, trace and cyberspace. International Journal of Web Based Communities.

Lindberg, O. (2002). Talet om lärarutbildning [Teacher education: How we talk about it and what it means]. Doctoral thesis, Örebro: Universitetsbiblioteket.

Lock, J. V. (2002). Laying the groundwork for the development of learning communities within online courses. Quarterly Review of Distance Education, 3 (4), 395-408.

Olofsson, A. D., & Lindberg, J. O. (2005). Assumptions About Participating in Teacher Education Through the Use of ICT. Campus-Wide Information Systems, 22 (3), 154-161.

Olofsson, A. D., & Lindberg, J. O. (2006). Enhancing Phronesis: Bridging Communities through Technology. In Sorensen, E. K. & Ó Murchú, D. (Eds.), Enhancing Learning Through Technology, London: Information Science Publishing, 29-55.

Palloff, R. M., & Pratt, K. (2005). Collaborating online: Learning together in community. San Francisco, CA: Jossey-Bass.

Risser, J. (1997). Hermeneutics and the voice of the other: Re-reading Gadamer’s philosophical hermeneutics. New York: State University of New York Press.

Rydberg Fåhræus, E. (2003). A triple helix of learning processes: How to cultivate learning, communication and collaboration among distance-education learners. Doctoral thesis, Stockholm: Stockholms universitet.

Schwier, R. A. (2002). Shaping the Metaphor of Community in Online Learning Environments. Paper presented at the International Symposium on Educational Conferencing, June 1, 2002, Banff, Canada. Retrieved October 7, 2007, from http://cde.athabascau.ca/ISEC2002/papers/schwier.pdf.

Sergiovanni, T. (1999). The story of community. In Retallick, J., Cocklin, B. & Coombe, K. (Eds.), Learning Communities in Education, London: Routledge, 9-25.

Seufert, S., Lechner, U., & Stanoevska, K. (2002). A reference model for online learning communities. International Journal on E-Learning, 1 (1), 43-55.

Sorensen, E. K., & Ó Murchú, D. (2004). Designing Online Learning Communities of Practice: A Democratic Perspective. Journal of Educational Media, 29 (3), 189-200.

Sorensen, E. K., & Takle, E. S. (2004). A cross-cultural cadence: Knowledge building with networked communities across disciplines and cultures. In Brown, A. & Davis, N. (Eds.), World Yearbook of Education 2004: Digital Technology, Communities & Education, London: RoutledgeFalmer, 251-263.

Stigmar, M. (2002). Metakognition och Internet: En undersökning om gymnasieelevers informationsanvändning [Metacognition and the Internet: A study of high-school students’ use of information]. Doctoral thesis, Växjö: Växjö University Press.

Svensson, L. (2002). Communities of Distance Education. Doctoral thesis, Göteborg: Göteborgs universitet.

Söderström, T., Hamilton, D., Dahlgren, E., & Hult, A. (2006). Premises, promises: Connection, community and communion in online education. Discourse, 27 (4), 533-549.

The Knowledge Foundation (2005). IT och lärarstuderande 2005 [IT and Teacher Trainees 2005]. Stockholm: Stiftelsen för kunskaps- och kompetensutveckling.

The Swedish National Agency for Higher Education (2005). A follow-up on the Swedish Net University. Final report 2: Accessibility, recruitment and extra compensation. Report No. 49, Stockholm: The Swedish National Agency for Higher Education.

Thórsteinsdóttir, G. (2005). The information seeking behaviour of distance students: A study of twenty Swedish library and information science students. Doctoral thesis, Göteborg: Institutionen för biblioteks- och informationsvetenskap, Göteborgs universitet.

Wenger, E. (1998). Communities of practice: Learning, meaning and identity. New York: Cambridge University Press.


Svensson, L., & Östlund, C. (2007). Framing Work-Integrated e-Learning with Techno-Pedagogical Genres. Educational Technology & Society, 10 (4), 39-48.

Framing Work-Integrated e-Learning with Techno-Pedagogical Genres

Lars Svensson
Forum for Work-Integrated Learning, University West, Sweden // lars.svensson@hv.se // Tel: +46 733 975133

Christian Östlund
Laboratory for Interaction Technology, University West, Sweden // christian.ostlund@hv.se // Tel: +46 520 223567

ABSTRACT
Distance educational practice is today supported by a range of information systems (IS) design theories. Still, there are surprisingly few strong pedagogical ideas and constructs that are communicated across distance educational institutions. Instead, it is often the technology, the software and the medium that are at the centre of attention, as we frequently discuss notions such as learning management systems, courseware, chat rooms, streaming media and blogs. This paper argues that design concepts should be used to bridge the gap between design theories and distance educational practice. It is also argued that genre theory could be instrumental in framing the characteristics of such techno-pedagogical genres in a way that constitutes a powerful level for communicating and disseminating new ideas within and across educational communities.

Keywords
Genre, IS design theory, Design concept, Techno-Pedagogical Genre

Introduction

Looking back on the last decades of development within the field of distance education reveals a rich spectrum of initiatives that have been implemented and evaluated in educational as well as organisational settings. It is probably fair to state that the extent to which these initiatives and innovations are related to, or derived from, theory varies to a large degree. In an influential paper, Markus, Majchrzak and Gasser (2002) presented a model for IS design theory (based on the work of Walls et al. (1992)), in which they stated that a design framework should be firmly rooted in a kernel theory that guides the elicitation of requirements, which are subsequently transformed into principles for design and development. Examples of work that uses this (or a similar) approach to design for learning and competence development are Herrington & Oliver (1995), Hung & Chen (2001) and Hardless (2005). Applying the approach suggested by Markus et al. (2002) is likely to generate design frameworks (i.e. theories) that are theoretically sound. Still, a central problem is to bridge a generic design theory with the specific contexts and content where the theory is to be applied in actual design practice. Hardless (2005) suggests that design concepts could be instrumental to that effect.

“Design concepts served the role as an intermediate conceptualization between design theory and concrete prototype. A design concept is here a collection of general ideas and principles for a type of CDS [Competence development system]. In other words, a definition of a particular type of learning intervention abstracted beyond specific instances or realizations based on the design concept.” (Hardless, 2005)

Hence, design concepts are to be derived from, and evaluated against, the design framework, where the kernel theory (fig 1:1) generates requirements for the phenomenon the design is intended to support and develop. Requirements (fig 1:2) are then transformed into design guidelines, and design concepts (fig 1:3) are derived from the design theory. Design concepts (fig 1:4) are realized as prototypes and systems in various practices, and the system evaluation (fig 1:5) feeds back to design concepts and design theory. In the context of e-learning, we believe there is a need for this type of intermediary level of abstraction between educational practice and IS design theory. However, the question of how to describe and frame a design concept still remains. In this paper we argue that the notion of genre is a promising approach to framing new and innovative techno-pedagogical ideas, and we provide a structured approach to how such ideas can be deconstructed and presented. The ideas are illustrated through the presentation of three simple design concepts for work-integrated e-learning: (i) Web Lecture, (ii) Blog Reflection, and (iii) Competence Kick-off.
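To make the cycle above easier to trace, the following minimal Python sketch models the five numbered steps of figure 1 as plain data structures. This is an editorial illustration only, not code from Markus et al. (2002) or Hardless (2005); every identifier is invented for the example.

    # A minimal sketch of the Figure 1 cycle: a kernel theory (fig 1:1)
    # generates requirements (fig 1:2) that are transformed into design
    # guidelines; design concepts (fig 1:3) derived from the theory are
    # realized as prototypes (fig 1:4); evaluation (fig 1:5) feeds back.
    from dataclasses import dataclass, field

    @dataclass
    class DesignTheory:
        kernel_theory: str                                      # fig 1:1
        requirements: list[str] = field(default_factory=list)   # fig 1:2
        guidelines: list[str] = field(default_factory=list)

    @dataclass
    class DesignConcept:                                        # fig 1:3
        name: str
        theory: DesignTheory
        realizations: list[str] = field(default_factory=list)  # fig 1:4

        def feed_back(self, finding: str) -> None:
            """Evaluation results (fig 1:5) flow back to concept and theory."""
            self.theory.guidelines.append("revise in light of: " + finding)

In these terms, each of the three design concepts presented below would be one DesignConcept instance whose realizations are the concrete course implementations.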



Figure 1. Design Theories in context (inspired by Markus et al. (2002) and Hardless (2005))

The next section briefly presents IS design theories for learning. This is followed by a section that outlines the fundamental concepts of genre theory. In section four, the framework for techno-pedagogical genres is presented together with three illustrative examples. The paper concludes with a discussion of the relationship between intentional design and actual educational practice.

IS Design Theory for Learning

There is a growing body of research addressing various aspects of online learning design, with a focus on, for example, student-centered learning (Uskov, 2004), reusability of learning resources (Uskov, 2004; Wills et al., 2002; Aroyo & Dicheva, 2004), team approaches to iterative re-design and maintenance of educational online resources (Sims & Jones, 2002), strategies to enhance student motivation, confidence and control (Astleitner & Hufnagl, 2003), problem-based learning (Slough et al., 2004) and quality assurance (Bohl et al., 2002).

According to Herrington and Herrington (2006), based on a paper by Herrington and Oliver (1995), online learning environments can and should use situated learning as an approach to design. The elements that should be incorporated into the design in order to achieve what Herrington and Herrington (2006) refer to as authentic learning are authentic context, authentic activities, access to expert performances, multiple roles & perspectives, collaboration, reflection, articulation, coaching & scaffolding and, lastly, authentic assessment. The context is authentic when it is all-embracing and provides the purpose for learning; simply providing examples from real-world situations is consequently not enough. The tasks and activities the students perform should be ill-defined but still have real-world relevance, and should be completed over a sustained period of time rather than as many short and disconnected examples. Access to expert performances is achieved by giving the students a model of how a real practitioner behaves in a real situation, e.g. case-based learning or video showing expert performance. Students should be encouraged to explore multiple roles & perspectives; providing one “correct” view is not false but inadequate. Collaboration needs to be designed to engage higher-order thinking, so that the collaboration between learners requires them to predict and hypothesize, and then suggest a solution. Along the same line, reflection upon a broad base of knowledge is supposed to be encouraged. The environment should also ensure that the learners work and discuss in groups and present their findings, in order for them to articulate, negotiate and defend their knowledge. An authentic learning environment should accommodate a coaching & scaffolding role for the teacher, and not just a didactic role of telling students what they need to know. The students can also assume a coaching & scaffolding role if a collaborative environment is provided. Authentic assessment needs to be integrated with the learning activity. It does not necessarily have to be done by conventional methods such as examinations and essays, but can take the form of statistics of the learner’s path through multimedia programs, diagnosis, or reflection and self-assessment (Herrington & Herrington, 2006).



Hung and Chen’s (2001) design framework identifies four principles of learning (from a situated learning and Vygotskian perspective) and derives from them four design considerations for e-learning:
• Situatedness: e-learning environments should be Internet-based so that learners can access the learning environment in their situated contexts and thereby focus on tasks and projects, thus enabling learning through doing and reflection-in-action.
• Commonality: e-learning environments should create situations of continual interest and interaction through the tools, and capitalize on the social, communicative and collaborative dimensions. E-learning environments should also have scaffolding structures that contain the genres and common expressions used by the community.
• Interdependency: e-learning environments should create interdependencies between individuals, where novices need more capable peers, capitalizing on the zone of proximal development and the diverse expertise in the community. E-learning environments should be personalized, with tasks that are meaningful to the learner in their context, and with personalized strategies and content based on tracking the learner’s history, profile and progress.
• Infrastructure: e-learning environments should have structures and mechanisms set up to facilitate, in a flexible way (anywhere and anytime), the projects in which learners are engaged.

Genre theory

To some extent, genre is an intuitive concept that points to typified and recognizable features of a phenomenon, but the notion of genre has also frequently been used as a theoretical construct for various purposes and within different theoretical traditions (Roberts, 1998; Ryan et al., 2002; Svensson, 2002; Shepherd et al., 2004; Saebø & Päivärinta, 2005). The primary strength of genres seems to be that they provide tools to describe a phenomenon, but when resting on supporting theories, such as structuration theory or social theories of learning, the scope sometimes stretches to understanding the processes that put the genres in play. Orlikowski and Yates (1994) focus on how electronic interaction within an organisational community is typified into genres with a characteristic purpose, structure and form. A genre is defined as a:

Typified communicative act having a socially defined and recognised communicative purpose with regard to its audience (Orlikowski & Yates, 1994).

In addition to the definition, Orlikowski and Yates (1994) make several clarifications that are helpful for grasping the genre concept. Firstly, they emphasise that the purpose referred to in the definition should be recognised and shared within the community/group/organisation, and is not to be interpreted as the purpose of individual community members. Secondly, they state that a stable substance and form should be connected to such a shared purpose in order to constitute a genre. Substance refers to the topics and the discursive structure of the interaction, and form has three sub-dimensions: structural features, communicative medium and language. In a similar manner, Shepherd and Watters (1998) define a cybergenre as characterised by content, form and functionality. Furthermore, Orlikowski and Yates (1994) state that a collection of genres can constitute a genre repertoire, i.e. the complete set of genres used for interaction within a community. They say:

Both the composition as well as the frequency with which they are used are important aspects of a repertoire. When genres are heavily intertwined and overlapping it may be useful to talk of a genre system, where genres are enacted in a certain sequence with interdependent purpose and form (Orlikowski & Yates, 1994).

Similar to the way Wenger (1998) sees the shared repertoire of a physical community as an important element in the definition of that community, Orlikowski and Yates (1994) underscore how electronic communication genres frame, and indirectly define, the community space by serving as a “social template” for work. However, it is important not to perceive genres as inherently static. Instead, genres evolve over time, partly as a result of technology adaptation and innovation (Shepherd & Watters, 1998) and partly as a result of community negotiations (Wenger, 1998).

Since genres, represented by simple labels such as “western movie”, “mystery novel” or “academic seminar”, carry so much detailed information on form, content and functionality for the audiences or communities that are familiar with the genre, they could also be interesting with respect to design. In the context of distance educational practice, it is surprising to notice how few strong pedagogical ideas and constructs are communicated across distance educational institutions. Instead, it is often the technology, the software and the medium that are at the centre of attention, as we frequently discuss notions such as learning management systems, courseware, chat rooms, streaming media and blogs. At the same time, we can easily find many examples of stable educational genres in traditional (non-technology-mediated) education. Concepts such as the lecture and the seminar date back to the very beginning of schools and education, but more novel examples such as the multiple-choice exam or the term paper are also widely recognized with respect to form and structure across institutions and countries.

A genre framework for techno-pedagogical design concepts

This section outlines a tentative framework for the description of techno-pedagogical genres. The framework is organised as a table with three rows, one for each of the primary genre dimensions (form, content and functionality). The framework uses the narrative structure, i.e. a series of activities organized, sequentially or in parallel, in time, as the primary structural element. The narrative, being an observable structural feature, obviously relates to the form of the genre. Each activity in the narrative generates a column in the framework, where information on content and functionality can be added. Figure 2 shows how the narrative structure can be illustrated graphically. The symbols used to illustrate the narrative structure also capture additional form elements. Rectangles imply information activities, whereas circles relate to communication activities, and by tilting a rectangle 45 degrees it is possible to distinguish between student-centered and teacher-centered information activities. A specific symbol is also used to mark the position of a synchronization point (Lundin, 2003), i.e. a point in time in the sequence of activities where participants need to be temporally coordinated (typically a deadline).

Figure 2. Graphical illustration of the symbols used to describe the narrative structure of techno-pedagogical genres: teacher information activity and student information activity (rectangles), communication activity (circle), group work, parallel activities, and synchronization point
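Before turning to the framework’s tabular form, it may help to see the same idea as a data structure. The toy Python model below is our illustration only (none of these identifiers come from the paper): a genre is a named narrative, i.e. a sequence of activities, and each activity carries the three genre dimensions plus an optional synchronization point.

    # A toy model of the genre framework: form (activity kind and deadline),
    # content (substance) and functionality (medium) per narrative activity.
    from dataclasses import dataclass
    from enum import Enum

    class Kind(Enum):                                    # the Figure 2 symbols
        TEACHER_INFO = "teacher information activity"    # rectangle
        STUDENT_INFO = "student information activity"    # tilted rectangle
        COMMUNICATION = "communication activity"         # circle

    @dataclass
    class Activity:
        name: str               # column label in the framework matrix
        kind: Kind              # form dimension
        content: str            # substance dimension
        functionality: str      # medium dimension
        deadline: bool = False  # synchronization point (Lundin, 2003)

    @dataclass
    class Genre:
        name: str
        narrative: list[Activity]   # the sequential narrative structure

        def sync_points(self) -> list[str]:
            """Activities at which participants must be temporally coordinated."""
            return [a.name for a in self.narrative if a.deadline]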

In the following sub-sections, three different techno-pedagogical genres that have been developed in connection with various design-oriented research projects at the Laboratory for Interaction Technology (University West, Sweden) are presented, in order to highlight how the framework can be used to describe a techno-pedagogical genre. These design concepts have not been chosen on the merits of innovation and novelty, but rather as examples where IS design and instructional design are integrated.

Web Lecture

The web lecture design concept was developed through a grounded approach in which existing “web lecture practice” in four different courses was examined. The study identified central design elements, which were subsequently evaluated through a student survey and through comparison with design theories (Hung & Chen, 2001; Herrington & Oliver, 1995); see Svensson & Östlund (2005) for further details.

The first activity is the publication of a lecture guide document on the course web-site. The document typically contains student support for engaging with central concepts, and supplements to text books such as examples, exercises etc. Shortly afterwards, the lecture notes (a PowerPoint file) are published on the web and can be used for lecture preparations and as a basis for more detailed lecture notes. The lecture is a student-controlled video-stream, typically consisting of three frames: (1) video of the teacher, (2) animated slides, and (3) an interactive table of contents (fig 3). One of the closing modules of the video lecture typically specifies the assignment that the students should work on, relating to the topic of the web lecture. Assignment work is coached and tutored using a discussion forum, and the module closes with the students receiving feedback on their submitted work. In Table 1, the web lecture genre described above is deconstructed into a genre framework matrix with three rows corresponding to the genre dimensions of form, content and functionality.

Figure 3. Screen dump from web lecture on “How to search the internet”

Table 1. Genre framework for Web Lecture. The matrix rows (narrative structure/form, content/substance, functionality/medium) are rendered here as labelled fields under each activity of the narrative:
• Pre-Lecture Material
  Content: study guide and lecture slides containing a presentation of central concepts, supplements to lecture readings, examples and exercises, and support for lecture notes.
  Functionality: electronic documents with text, images, graphs and hyperlinks to web resources.
• Video Lecture
  Content: modularized information on the subject matter; presentation of the students’ assignment(s).
  Functionality: hyperlinked table of contents; student control of the video-stream (fast forward, rewind, pause etc.).
• Coaching
  Content: student-student and student-instructor interaction.
  Functionality: text-based interaction in a threaded discussion forum or a multiparty video-conference session.
• Student Paper(s) (deadline)
  Content: students’ individual papers.
  Functionality: electronic document with text, images and graphs.
• Feedback
  Content: teacher’s individual feedback to students.
  Functionality: anchored annotations in the electronic document.
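Expressed in the toy model sketched after figure 2 (again an illustration of the framework, not an artifact from the authors’ projects; the activity kinds assigned below are our own reading of the narrative symbols), Table 1 can be instantiated directly:

    # The Web Lecture genre of Table 1 instantiated in the toy model above.
    web_lecture = Genre(
        name="Web Lecture",
        narrative=[
            Activity("Pre-Lecture Material", Kind.TEACHER_INFO,
                     content="Study guide and lecture slides",
                     functionality="Electronic documents with hyperlinks"),
            Activity("Video Lecture", Kind.TEACHER_INFO,
                     content="Modularized subject matter; assignment",
                     functionality="Student-controlled video-stream"),
            Activity("Coaching", Kind.COMMUNICATION,
                     content="Student-student and student-instructor talk",
                     functionality="Discussion forum or video-conference"),
            Activity("Student Paper(s)", Kind.STUDENT_INFO,
                     content="Students' individual papers",
                     functionality="Electronic document", deadline=True),
            Activity("Feedback", Kind.TEACHER_INFO,
                     content="Teacher's individual feedback",
                     functionality="Anchored annotations in the document"),
        ],
    )
    print(web_lecture.sync_points())    # -> ['Student Paper(s)']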


Blog Reflection

This design concept was developed in a Ph.D. course on work-integrated learning with participants from several disciplines within the social sciences. The narrative structure depicted below was repeated four times (in four consecutive modules). The core IT support for this simplistic design was a standard blog system (fig 4).

Figure 4. Standard system used for the Blog Reflection

The overall purpose is for students to reflect on and discuss a common reading assignment. The first activity is a streamed video-vignette, which introduces and problematizes central themes, concepts and theories from the literature. This is followed by individual reading in parallel with a joint discussion on the web. Each student is then expected to publish a brief text in which she relates the literature to her own dissertation project. The reflections are posted as entries to the blog. Each reflection receives commentary from two other students, and the module is closed by the teacher, who writes a meta-reflection on all contributions. Table 2 outlines schematically how the blog reflection genre can be described using the genre framework matrix.

Table 2. Genre framework for Blog Reflection (same row dimensions as Table 1):
• Vignette
  Content: introduction of the reading assignment; presentation of central concepts and theories.
  Functionality: video-stream with student control.
• Joint Discussion
  Content: student-student discussion parallel to individual reading.
  Functionality: threaded text-based discussion forum.
• Publish Reflections (deadline)
  Content: short personal reflection on the literature.
  Functionality: text-based blog entry.
• Writing and Publishing Reviews (deadline)
  Content: comments on and reviews of at least two other entries.
  Functionality: standard blog entries.
• Teacher Wrap-Up
  Content: teacher’s meta-comments on all reflections and reviews.
  Functionality: follow-up blog entry.
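The same toy model (ours, not the authors’) captures Table 2, and the fact that the narrative is repeated in four consecutive modules can be expressed by reusing one genre description per module:

    # The Blog Reflection genre of Table 2; the Ph.D. course repeats this
    # narrative in four consecutive modules that share one design.
    blog_reflection = Genre(
        name="Blog Reflection",
        narrative=[
            Activity("Vignette", Kind.TEACHER_INFO,
                     content="Introduction of the reading assignment",
                     functionality="Video-stream with student control"),
            Activity("Joint Discussion", Kind.COMMUNICATION,
                     content="Discussion parallel to individual reading",
                     functionality="Threaded text-based forum"),
            Activity("Publish Reflections", Kind.STUDENT_INFO,
                     content="Short personal reflection on the literature",
                     functionality="Text-based blog entry", deadline=True),
            Activity("Publish Reviews", Kind.STUDENT_INFO,
                     content="Reviews of at least two other entries",
                     functionality="Blog comments", deadline=True),
            Activity("Teacher Wrap-Up", Kind.TEACHER_INFO,
                     content="Meta-comments on all reflections and reviews",
                     functionality="Follow-up blog entry"),
        ],
    )
    modules = [blog_reflection] * 4    # four modules, one shared design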



Competence Kick-off

The competence kick-off was developed in an action research project together with a network organization consisting of SMEs in the automotive sector of Sweden. The design concept was developed together with a group of practitioners and was a response to the fact that many of the participating practitioners expressed frustration over past experiences of using external experts for introductions to new knowledge domains. The frustration related firstly to a mismatch between what was expected and what was delivered by the expert, or as expressed by one of the practitioners:

“Most times the so-called experts fail to give you what you really need. Either their talk is too basic or they are far too advanced” (HR Manager)

Secondly, many of the practitioners stated that they often felt that the audience of an expert lecture or seminar was ill-prepared; consequently, it was often difficult to see that these activities left any trace in the organization. To address these shortcomings, a design concept with six distinct activities was developed. The Competence Kick-off genre is to some extent inspired by the pedagogical traditions of liberal adult education (Hawke, 1999), in which “good conversations” in “learning circles” are central elements. In many cases, learning circles lack the traditional teacher/expert. Instead, the participants are more or less equally novice to the subject, and goals and objectives are discussed among the participants, one of whom acts as a circle leader or moderator. In the first step, the title of the competence kick-off is set and advertised in the network. Participants enrol, and an expert is contacted and contracted. After having studied some introductory material, the students gather for a moderated discussion seminar where the goals and knowledge levels of each participant are presented, discussed and negotiated, resulting in a jointly agreed-upon requirement specification that clearly states what themes and questions the expert should address. After the requirements have been negotiated with the expert, an interactive expert presentation is performed. Subsequently, the participants meet to evaluate and discuss the outcome and how to proceed. Finally, the experiences are documented in a white paper that could be distributed to other interested parties in the network.

Figure 5. Mindmap tool for phase two of the competence kick-off

The Competence Kick-off is at present being evaluated in three different settings with three different themes (“Digital Video” for a group of journalists, “Geographical Information Systems” for a network of public administrators, and “Organizational Culture” for SME managers). The evaluations have generated several implications for design, regarding for instance online templates for the requirement specification document and the white paper, as well as a mind-mapping tool for framing, deconstructing and prioritizing the theme (fig 5). The current synthesis of design implications for the competence kick-off genre is summarized in the genre framework of Table 3.



Table 3. Genre framework for Competence Kick-off (same row dimensions as Tables 1 and 2):
• Introducing the Theme
  Content: a title is set for the circle, and an expert is contracted; participants work with introductory material.
  Functionality: introductory texts and/or video clips.
• Framing and Prioritizing
  Content: participants discuss and negotiate their objectives and needs, thereby identifying and prioritizing core questions and themes.
  Functionality: computer conference system or video conference; graphical tool for mind-mapping.
• Requirement Specification Document
  Content: a formal document, negotiated with the expert, regulating the content and goals of the expert presentation.
  Functionality: collaborative authoring tool or blog.
• Expert Presentation
  Content: interactive seminar.
  Functionality: streaming video supplemented with a tool for synchronous text communication.
• De-Briefing
  Content: group discussions following up on and evaluating the expert seminar.
  Functionality: threaded discussion forum or video conference.
• White Paper
  Content: a jointly authored document describing the outcome of the competence kick-off, intended also for sharing with non-participants.
  Functionality: collaborative authoring tool or blog.

Conclusion

This paper has made a fairly simple point with respect to the design of educational technology: innovation and dissemination of techno-pedagogical ideas need design concepts that can bridge the gap between highly abstracted IS design theories for learning and the situated nature of e-learning practices. We have presented concrete examples where techno-pedagogical design concepts, framed by the genre dimensions of form, content and functionality, are instrumental to that effect.

We think there are primarily two major merits of using a genre approach to e-learning design. Firstly, we think that the structure dimension of a genre captures the inherently narrative property that is central to any educational design; by viewing the design as a series of activities that unfolds as a narrative or a story, we can capitalize on the communicative strengths of narratives, which can be effective as agents for innovation and change (Bolin et al., 2004). Secondly, the interplay between structure, content and functionality has the potential of letting the designer work with an integrated design rationale rather than with separate design agendas for instruction and technology (Svensson & Östlund, 2005).

To back up these claims, we draw on the experiences of the collaborative design work in which the design concepts presented in this paper have been realised as systems and system prototypes. These experiences, and the evaluations of the systems, further stress that the genre framework functioned well as a tool for collaborative and multi-disciplinary design. Furthermore, the fact that the Competence Kick-off genre has been applied in three different contexts, with varying content and audiences, suggests that the level of abstraction of a techno-pedagogical genre makes it suitable for flexible translations.

However, it is important to acknowledge that the proposed framework for techno-pedagogical genres does not replace the need for design theories for various learning contexts. Instead, genres should be carefully developed in close dialogue with theory. Furthermore, the level of description with respect to IS support provided by the functionality element of the framework can hardly be fine-tuned enough to capture all the requirements and specifications of the systems and applications needed to realise a certain instantiation of a genre, a fact that stresses the importance of design theories that explicitly guide design choices and development strategies.

Finally, it must be stressed that acts of intentional design cannot fully determine what will constitute a genre once it is put into use. Intentional design can at best suggest a “scope of action” or a “social action space”, where some actions are encouraged and supported and others are obstructed and discouraged (Löwgren & Stolterman, 2004; Köhler, 2006). In other words, the full flavour of an educational genre is negotiated and enacted within a situated community of participants. Hopefully, over time, strong and successful genres will be designed, communicated, disseminated and enacted across educational cultures.

Acknowledgement

This work has been sponsored partly by the European Union Structural Fund (Area 2) and partly by the Swedish Knowledge and Competence Foundation (Learn-IT programme).

References

Aroyo, L., & Dicheva, D. (2004). The New Challenges for E-learning: The Educational Semantic Web. Educational Technology & Society, 7 (4), 59-69.

Astleitner, H., & Hufnagl, M. (2003). The Effects of Situation-Outcome-Expectancies and of ARCS-Strategies on Self-Regulated Learning with Web-Lectures. Educational Multimedia and Hypermedia, 12 (4), 361-376.

Bohl, O., Winand, U., & Schellhase, J. (2002). A Conceptual Framework for the Development of WBT-Guidelines. Paper presented at E-Learn 2002, 15-19 October 2002, Montreal, Canada.

Bolin, M., Bergqvist, M., & Ljungberg, J. (2004). A Narrative Mode of Change Management. In Flensburg, P. & Ihlström, C. (Eds.), Proceedings of IRIS27, Falkenberg, Sweden.

Hardless, C. (2005). Designing Competence Development Systems. Doctoral dissertation, Department of Informatics, Gothenburg University, Sweden.

Hawke, B. (1999). Adult Education Research Trends in the Western European Countries. In Mauch, W. (Ed.), Report on the International Seminar on World Trends in Adult Education Research, Hamburg, Germany: UNESCO Institute for Education.

Herrington, A., & Herrington, J. (2006). Authentic learning environments in higher education. Hershey, PA: Information Science Publishing.

Herrington, J., & Oliver, R. (1995). Critical Characteristics of Situated Learning: Implications for the Instructional Design of Multimedia. In Pearce, J. & Ellis, A. (Eds.), Learning With Technology, Parkville, Victoria: University of Melbourne, 235-262.

Hung, D., & Chen, D. (2001). Situated Cognition, Vygotskian Thought and Learning from the Communities of Practice Perspective: Implications for the Design of Web-Based E-Learning. Education Media International, 38 (1), 3-12.

Köhler, V. (2006). Co-creators of Scope of Action: An exploration of the dynamic relationship between people, IT, and work in a nursing context. Licentiate thesis, Luleå University of Technology.

Lundin, J. (2003). Synchronizing Asynchronous Collaborative Learners. In Huysman, M., Wenger, E. & Wulf, V. (Eds.), Communities and Technologies, Dordrecht: Kluwer Academic, 427-433.

Löwgren, J., & Stolterman, E. (2004). Thoughtful Interaction Design: A design perspective on information technology. Cambridge, MA: MIT Press.

Markus, M. L., Majchrzak, A., & Gasser, L. (2002). A Design Theory for Systems that Support Emergent Knowledge Processes. Management Information Systems Quarterly, 26 (3), 179-212.

Orlikowski, W., & Yates, J. (1994). Genre Repertoire: The Structuring of Communicative Practices in Organizations. Administrative Science Quarterly, 39, 541-574.

Roberts, G. F. (1998). The Home Page as Genre: A Narrative Approach. Paper presented at the Thirty-First Annual Hawaii International Conference on System Sciences, January 6-9, 1998, Hawaii.

Ryan, T., Field, H. G. R., & Olfman, L. (2002). Homepage Genre Dimensionality. Paper presented at the Eighth Americas Conference on Information Systems. Retrieved October 15, 2007, from http://melody.syr.edu/hci/amcis02_minitrack/CR/Ryan.pdf.

Saebø, Ø., & Päivärinta, T. (2005). Autopoietic Cybergenres for e-Democracy? Genre Analysis of a Web-Based Discussion Board. Paper presented at the 38th Annual Hawaii International Conference on System Sciences, January 3-6, 2005, Hawaii.

Shepherd, M., & Watters, C. (1998). The Evolution of Cybergenres. Paper presented at the Thirty-First Annual Hawaii International Conference on System Sciences, January 6-9, 1998, Hawaii.

Shepherd, M., Watters, C., & Kennedy, A. (2004). Cybergenre: Automatic Identification of Home Pages on the Web. Web Engineering, 3 (3 & 4), 236-251.

Sims, R., & Jones, D. (2002). Enhancing Instructional Development Processes for E-Learning. Paper presented at E-Learn 2002, 15-19 October 2002, Montreal, Canada.

Slough, S., Aoki, J., Hoge, B., & Spears, L. (2004). Development of an E-Learning Framework for Web-based Project-Based Learning in Science. Paper presented at E-Learn 2004, 1-5 November 2004, Washington, DC, USA.

Svensson, L. (2002). Communities of Distance Education. Doctoral dissertation, Department of Informatics, Gothenburg University, Sweden.

Svensson, L., & Östlund, C. (2005). Emergent Design Concepts: An inductive approach to bridging design theory and educational practice. WSEAS Transactions on Advances in Engineering Education, 3 (2), 207-217.

Uskov, V. (2004). Advanced Online Courseware for Student-Centered Learning: The Results of a 4-Year NSF CCLI Project at Bradley University. Paper presented at E-Learn 2004, 1-5 November 2004, Washington, DC, USA.

Walls, J. G., Widmeyer, G. R., & El Sawy, O. A. (1992). Building an Information System Design Theory for Vigilant EIS. Information Systems Research, 3 (1), 36-59.

Wenger, E. (1998). Communities of Practice: Learning, Meaning and Identity. New York: Cambridge University Press.

Wills, S., Agostinho, S., Harper, B., Oliver, R., & Hedberg, J. (2002). Developing Reusable Learning Design Resources. Paper presented at E-Learn 2002, 15-19 October 2002, Montreal, Canada.


Wiberg, M. (2007). Netlearning and Learning through Networks. Educational Technology & Society, 10 (4), 49-61.

Netlearning and Learning through Networks

Mikael Wiberg
Umeå University, Sweden // Tel: +46 90 786 61 15 // Fax: +46 90 786 65 50 // mikael.wiberg@informatik.umu.se

ABSTRACT
Traditional non-computerized learning environments are typically founded on an understanding of learning as requiring silence for an effective individual learning process. Recently, it has also been reported that the high expectations for the impact of computer-based technology on educational practice have not been realized. This paper sets out to challenge the assumptions made about the requirements for effective learning environments, by pointing in the direction of social, creative learning processes, as well as the technologies for effective and creative learning processes, by redirecting the focus from what has been labeled “traditional computer-based learning environments” towards user-driven learning networks. Thus, this paper proposes the concept of netlearning as a general label for the traditional use of computer-based learning environments as education tools, and it then suggests the concept of learning through networks as a challenging concept for addressing user-driven technologies that support social, collaborative and creative learning processes in, via, or outside typical educational settings. The paper is inspired by recent research into the interaction society and by the Scandinavian tradition in systems development, which has always highlighted the importance of user-driven processes, of users as creative social individuals, and of a perspective on users as creative contributors to both the form and the content of new interaction technologies. The paper ends with a presentation of a participatory design project in which children developed their own computer-based tools for editing film; this technology is presented, followed by a discussion of the user-driven design of learning technologies in which the technology is not just a container for something else but a tool that directly enables the children to do new things, i.e. to collectively learn through their computer-supported social network.

Keywords
Device cultures, Interaction, Learning through networks, Mobility, Netlearning, Web 2.0

Introduction – silence please!

Think about a typical non-computerized learning environment, e.g. a library or a student room. One thing that is obvious about these two settings is that the library design and the student room layout, as well as the activities these two rooms are expected to support, can be characterized by the word “silence”. Traditional non-computerized learning environments are typically founded on an understanding of learning processes as requiring silence for an effective individual learning process. These two aspects of learning processes, i.e. silence and individual isolation, are two aspects that this paper sets out to challenge from a modern computer-based learning perspective, drawing on recent studies into the importance of social interaction and creativity for effective computer-based learning processes (Muirhead, 2007). Most recently, it has also been reported that the high expectations for the impact of computer-based technology on educational practice have not been realized (e.g. Gifford & Enyedy, 1999; Muirhead, 2007). According to Beynon (2007) and Norris et al. (2002), this has led researchers to place greater emphasis on cultural issues. Following this trend, this paper sets out to explicitly explore the concept of learning technologies, or netlearning, within the socio-technical culture of an emerging interaction society (Wiberg, 2004). On a more detailed level, this paper sets out to challenge both the assumptions made about the requirements for effective learning environments, by pointing in the direction of social, creative learning processes, and the technologies for effective and creative learning processes, by redirecting the focus from what has been labeled “traditional computer-based learning environments” towards user-driven learning networks supported by social internet-based applications.

Let's get loud – from isolated & individual learning environments to social networking and learning through networks

While the traditional, individual, and isolated learning environment can function as a calm place in which to reflect upon e.g. textbooks, articles or other literature, and while this setting is often romanticized, the individual is in fact also restrained by a number of factors. In an isolated learning environment, the single individual has no access to a second opinion from another person, no access to a complementary perspective or external critique, and no chance of obtaining complementary literature from someone with a different reference library. Given this, there is not much social interaction in this kind of traditional learning environment.

More recently, we can see this focus on isolated learning environments gaining new terrain in the form of e.g. self-studies and "teach yourself x in only three weeks" book series. Technology-wise, we have as a field followed a similar trend in our focus on learning technologies that support the individual and his or her learning processes. Some of these technologies, e.g. systems for online universities or distance education, do offer some support for social interaction. However, most of these systems assume a centralized communication model in which the learning peers (i.e. the students) mostly communicate with one central peer (i.e. a mentor or advisor). In many cases this leads to communication about the structure rather than the content of an online education, and it does not support spontaneous, creative social learning processes.

On the other hand, it is today widely acknowledged that social networking is an important aspect of learning processes (Castells, 1996), and the following conversation can serve as an illustrative example of how much of what we know is in fact part of threads that include other persons in our social networks:

"Do you think me a learned, well-read man?"
"Certainly," replied Zi-gong. "Aren't you?"
"Not at all," said Confucius. "I have simply grasped one thread which links up the rest."
(Recounted in Sima Qian (145-ca. 89 BC), "Confucius," in Hu Shi, The Development of Logical Methods in Ancient China, Shanghai: Oriental Book Company, 1922; quoted in Qian 1985: 125, in Castells 1996, p. 1.)

This quotation also illustrates that being knowledgeable can be defined either in terms of how much one person has read and learned in isolation, or in terms of how many threads a particular person can grasp in order to gain access to other peers in different social networks. In more general terms, the latter understanding of what it is to be knowledgeable corresponds to the "learning through networks" concept proposed in this paper. This concept pinpoints the social dimension of learning processes and the social interaction setting, and it goes back to a Socratic understanding of gaining knowledge through conversation and argumentation with others.

The concept of "learning through networks" also corresponds well to the current development towards an interaction society (Wiberg, 2004) in which IT plays an important role, since these new interaction technologies enable people to communicate and interact "anytime, anywhere". In this view, modern interaction technologies enable social networks to stay connected even when they are not collocated, to collaborate over distances, and to keep each other updated whenever they need or want to get in touch. In the interaction society, the focus is not on one-way information flows or information processing. Rather, the focus is on the technology-enabled interplay between the inhabitants of the interaction society, their social networks, their innovations of new interaction technologies, and the new channels they invent for their own social learning processes. Today, we can see new computer-related behaviors arising as an effect of this interplay, including e.g. community-building activities, the establishment of sharing cultures, and innovative linkages of different interaction technologies (sometimes referred to as "mash-ups").

In line with this trend, this paper proposes the concept of netlearning as a general label for the traditional use of computer-based learning environments as education tools, and the concept of learning through networks as a challenging concept for addressing user-driven technologies that support creative, social learning processes in, via, or outside typical educational settings.

There is a wide range of literature today that points in the direction of user-driven, innovative and creative processes in which modern IT plays an important role as a mediating and enabling platform and as a support engine for social interaction. For instance, the book "The Interaction Society" (Wiberg, 2004) illustrates the turn to social interaction in modern IT use and points out that IT might be better described as an interaction technology rather than just an information technology, since much of our everyday IT use (e.g. email, Skype, ICQ/MSN) is, from the individual's viewpoint, more closely related to supporting social interaction than to information processing (e.g. calculation or transactions). Another example of this kind of literature is the book "Connections – New Ways of Working in the Networked Organization" (Sproull & Kiesler, 1998), which illustrates the complexity of social networks and the highly intertwined role of modern IT as a mediating channel for social interaction.

Finally, in the book "The Rise of the Network Society", Castells (1996) provides an in-depth analysis of how this new technology is not only reshaping our everyday IT use, our social networks and our organizations, but in fact has large-scale implications for our society as a whole. In his view, our modern society is today best described as a network society in which people use their social and computational networks to do business and live their everyday lives. In the next section I take one step back and discuss, on a general level, how this focus on interaction relates to the Scandinavian tradition and the emerging interaction society, before presenting a more detailed description of interaction as a core concept in learning through networks.

Interaction as a vehicle to support new learning processes

This paper is inspired by recent research into the interaction society (Wiberg, 2004), and it also has some of its roots in the Scandinavian tradition (e.g. Hirschheim & Klein, 1989; Spinuzzi, 2002), i.e. in Scandinavian system development projects that have always highlighted the importance of user-driven processes (see e.g. Ehn, 1988) and of users as creative social individuals (Warr & O'Neill, 2005), and that have worked for a perspective on users as creative contributors to both the form and the content of new interaction technologies (Ehn et al., 1983).

Interaction is thus an approach with clear roots in Scandinavia, and at the same time it is a modern concept able to capture a movement towards an interaction society. In such a society the technology enables new learning cycles and new paths for generating new knowledge. As formulated in the International Herald Tribune:

"We are moving from the information society to the interaction society, and Lunarstorm is leading the trend," said Ola Ahlvarsson, chairman of Result, a Stockholm-based technology consulting company. "Young people here no longer accept a flow of information from above. They trust what they hear from friends on their network."

This quote describes Lunarstorm, a Swedish Internet service to which 90% of Swedish high school students subscribe. As the quote illustrates, we are heading towards an emerging interaction society in which new technologies enable socially connected, creative, online learning environments. It also illustrates the bottom-up approach, or the learning through networks learning cycle. Today, this learning through networks approach can also be found elsewhere on the Internet, e.g. in the form of the Web 2.0 concept, or what is sometimes described as user-generated content.

The basic concept of interaction

Before going further into the importance of social interaction for creative learning processes, we present a close-up view of the concept of interaction and discuss how it relates to the concept of interaction technologies and to CSCW (Computer Supported Cooperative Work) and CSCL (Computer Supported Collaborative Learning) technologies. The concept of interaction can, according to Dix & Beale (1996), be decomposed into the concepts of communication and collaboration. These two dimensions of interaction form the two basic levels in the person-person-object model as formulated by Ljungberg (1999), based on Dix & Beale (1996) (see figure 1).

In this model, communication is the exchange of information between people, e.g. video conferencing. Collaboration is when two or more people operate on a common object or artifact, e.g. co-operative authoring, where the shared document is the common object. In collaboration, operations produce "feedback" to the operator, but also "feedthrough" to co-workers. Support for collaboration is sometimes combined with support for communication, e.g. a collaborative authoring system (collaboration) equipped with a chat feature (communication). In the context of this model, communication and collaboration can be conceived as subsets of "interaction". As suggested by Ljungberg (1999b), we can use "CSCW technologies" to frame the technological support for interaction, i.e. communication technologies, collaboration technologies, and interaction technologies.

Figure 1. Definition of the concept of interaction and interaction technologies (from Ljungberg (1999), based on Dix & Beale (1996))

According to Dix & Beale (1996), the concept of interaction is precisely this combination of person-to-person communication and collaboration around a shared object. In other terms, interaction can be conceived as a unifying concept for communication and collaboration. This social interplay between different people around shared objects, and how these people and objects are intertwined in different social structures and contact networks, is thus the primary focus for studies of the interaction society.
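To make the distinction concrete, the following sketch models the two levels of the person-person-object model in code: an operation on a shared object yields feedback to the operator and feedthrough to co-workers, while a chat message is plain person-to-person communication. This is only an illustration of the vocabulary introduced above; all class and method names are our own, not anything defined by Ljungberg (1999) or Dix & Beale (1996).

    import java.util.ArrayList;
    import java.util.List;

    class SharedDocument {
        private final StringBuilder text = new StringBuilder();
        private final List<String> collaborators = new ArrayList<>();

        void join(String person) { collaborators.add(person); }

        // Collaboration: an operation on the shared object. The return value is
        // the "feedback" to the operator; the notification loop is the
        // "feedthrough" to the other collaborators.
        String append(String operator, String passage) {
            text.append(passage);
            for (String peer : collaborators) {
                if (!peer.equals(operator)) {
                    System.out.println("feedthrough to " + peer + ": document changed");
                }
            }
            return "feedback to " + operator + ": document is now " + text.length() + " chars";
        }

        // Communication: direct person-to-person exchange, e.g. a chat feature
        // attached to the collaborative authoring system.
        void chat(String from, String to, String message) {
            System.out.println(from + " -> " + to + ": " + message);
        }
    }

    public class InteractionModelDemo {
        public static void main(String[] args) {
            SharedDocument doc = new SharedDocument();
            doc.join("Anna");
            doc.join("Björn");
            System.out.println(doc.append("Anna", "Introduction..."));   // collaboration
            doc.chat("Anna", "Björn", "Can you review my introduction?"); // communication
        }
    }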

Today, we can see new such structures arising as an effect of clever design of social network technologies that enable social interaction, networking and new ways of learning, i.e. learning through networks. These services include interaction technologies such as YouTube, Flickr, Skype, and del.icio.us, and in the next section we take a close-up look at a couple of the services that support this new learning through networks cycle.

Interaction technologies in support of learning through networks

During the last couple of years we have witnessed a shift in the development of learning technologies. While traditional learning technologies focused on teacher-centered learning processes (Hiltz & Turoff, 2005; Gertzman & Kolodner, 1996) and specific learning objects, we can now see the technology developing in a new direction (Shackelford, 1990; Gifford & Enyedy, 1999). The new technologies are characterized as being open to everybody and general in terms of content, and they build upon a model of user-centered production of content instead of centralized, teacher-centered production. Furthermore, the architecture is flat, and typically peer-to-peer, to facilitate direct people-to-people interaction, collaboration, and learning.

These new web-based technologies (including e.g. Skype, Flickr, YouTube, Orkut, LinkedIn, wikis, blogs, and new forms of social network services such as the Four Word Film Review web site, www.fwfr.com) enable people to come together in new ways to share ideas, opinions, content, humor or ideals, and as such they enable new forms of creativity, socialization, and learning. As described by Jenkins and colleagues (2006), these new social network services enable and scaffold new "participatory cultures", including: 1) affiliations (memberships, formal and informal, in online communities centered around various forms of media, such as Friendster, Facebook, message boards, metagaming, game clans, or MySpace); 2) expressions (producing new creative forms, such as digital sampling, skinning and modding, fan videomaking, fan fiction writing, zines, and mash-ups); 3) collaborative problem-solving (working together in teams, formal and informal, to complete tasks and develop new knowledge, such as through Wikipedia, alternative reality gaming, or spoiling); and, finally, 4) circulations, i.e. shaping the flow of media (such as podcasting and blogging).

While email and mobile phones are two basic technologies that have enabled people to communicate at a distance, we can now see new forms of interaction support being developed. In this section we point at four such digital services that all demonstrate new ways of learning through networks. More specifically, we will take a closer look at the digital services YouTube, Creative Commons, StumbleUpon, and del.icio.us.

One thing these four Internet-based services have in common is that they rely on user-generated content, i.e. the company behind the service offers only the digital structure, and leaves the content to be produced and consumed by anyone interested in contributing to the community forming around the service. Taking a closer look at YouTube (www.youtube.com), for instance (see figure 2, left), the service enables people to upload their own short movies to the YouTube website, and others can browse the uploaded movie clips and play any clip they find interesting. YouTube continuously presents "videos being watched right now" by others, as well as a list of featured videos. Besides this support, inspired by "social navigation" (Dieberger et al., 2000), the site also supports standard search of its content. On YouTube, everybody can post new movie clips, and everybody can rate, comment on, or save a clip as a personal favorite. Each time a clip is watched, a counter is incremented, so it is always possible to know how many times a certain clip has been played. A clip can also easily be shared by pushing a "share" button at the end of the clip, which sends its URL to anyone over email.

Another example of a web-based sharing service is Creative Commons (www.creativecommons.org) (see figure 2, right). Creative Commons has the tagline "Share, reuse, and remix — legally", and that is the essence of what the service is all about. A big issue for open web-based services for sharing user-generated content has been how to deal with copyright. On Creative Commons, everybody is able to share their content with anyone. What is different here with respect to YouTube, however, is that e.g. authors, scientists, artists, and educators can easily mark their own work with the rights they want their content to carry. Anyone who contributes content to Creative Commons can apply any copyright terms from "All Rights Reserved" to "Some Rights Reserved". Thus, Creative Commons supports a bottom-up, user-driven, and open learning and sharing culture while protecting the individual's rights.

Figure 2. Screenshots of YouTube (left) and Creative Commons (right)

In relation to the main argument of this paper, these two services illustrate another important aspect of the learning through networks phenomenon. While traditional learning materials are directed, produced, packaged, distributed, and then consumed by learners, new material in these digital networks follows another cycle, characterized by user-generated content that is continuously spread, remixed, re-spread, and so on. As such, this new media follows a new media life cycle (Wiberg, 2007).

The sharing culture is important for social learning processes, for maintaining a community, and for the creation of shared points of reference. But in these new learning environments it is not only the content that is shared and circulated, but also the peers themselves (Hoppe et al., 2005; Milrad et al., 2005). Today we can see several similar sites growing fast in terms of associated and active members, including e.g. StumbleUpon.com and del.icio.us (see figure 3). Below we take a closer look at these two services from the perspective of new learning environments for learning through networks.

One important requirement for the learning through networks concept to work is tools for finding new peers and content in the network, and for sharing found peers with other members of the network. Two such examples of technical support are the StumbleUpon.com site and the del.icio.us site.

StumbleUpon.com (figure 3, left) works actively with the tagline "Discover new sites", and that is also the essence of the site. On StumbleUpon.com, people can recommend and review sites so that others can more easily come across relevant information and find sites related to their own interests. The overall idea behind StumbleUpon.com is to serve as a public recommendation service. The del.icio.us site (figure 3, right), on the other hand, serves a similar but slightly different purpose. Here, the focus is on providing information that makes it easier for others to find new interesting peers in the network. But it is not just any peer that one might want to recommend to others. Instead, the central idea behind the del.icio.us site is the assumption that what is interesting to the single individual might also be interesting to others. Thus, on the del.icio.us site anyone can share their personal favorites (similar to "my bookmarks" in a browser) with anyone else. As such, the del.icio.us site builds upon the idea of finding new interesting material in the network by glancing at other points of reference in the network.

Figure 3. Screenshots from the StumbleUpon site (http://www.stumbleupon.com/) (left) and the del.icio.us site (http://del.icio.us/) (right)

Del.icio.us can thus be thought of as a new way of sharing one's points of reference, which is indeed an important aspect of any body of knowledge. While Albert Einstein once said, "The secret to creativity is knowing how to hide your sources", we can now start to see the creative power behind mass interaction around shared sources. The del.icio.us site is only one example. Another example is Wikipedia, the online, user-generated, and user-maintained encyclopedia, which contains almost as much information as Webster's dictionary (although its accuracy has been frequently debated during the last two years).
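As a concrete illustration of the del.icio.us idea, the sketch below models a shared bookmark pool in which whatever one person saves under a tag becomes a point of reference for everyone else, with the number of savers as a simple popularity signal. All class and method names are hypothetical; this is not del.icio.us's actual data model or API.

    import java.util.*;

    class BookmarkPool {
        // tag -> set of URLs saved under that tag, across all users
        private final Map<String, Set<String>> byTag = new HashMap<>();
        // URL -> users who bookmarked it (a simple popularity signal)
        private final Map<String, Set<String>> byUrl = new HashMap<>();

        void save(String user, String url, String... tags) {
            byUrl.computeIfAbsent(url, k -> new HashSet<>()).add(user);
            for (String tag : tags) {
                byTag.computeIfAbsent(tag, k -> new HashSet<>()).add(url);
            }
        }

        // "Glancing at other points of reference": everything the community
        // has filed under a tag, most-bookmarked first.
        List<String> browse(String tag) {
            List<String> urls = new ArrayList<>(byTag.getOrDefault(tag, Set.of()));
            urls.sort(Comparator.comparingInt(u -> -byUrl.get(u).size()));
            return urls;
        }
    }

    public class BookmarkDemo {
        public static void main(String[] args) {
            BookmarkPool pool = new BookmarkPool();
            pool.save("anna", "http://del.icio.us/", "sharing", "web2.0");
            pool.save("bjorn", "http://del.icio.us/", "web2.0");
            pool.save("bjorn", "http://www.fwfr.com/", "web2.0");
            System.out.println(pool.browse("web2.0")); // del.icio.us first (two savers)
        }
    }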

With all these new services in place for uploading and sharing content across the Internet, it also becomes obvious that we need not only tools for finding new peers, but also tools for keeping track of changes at already known peers of interest (Hoppe et al., 2005; Jones et al., 2005; Chen et al., 2007). Today, we can therefore see a growing interest in RSS and similar technologies that let people automate the checking for new content on remote peers (Miao et al., 2005; Glotzbach et al., 2007), and in mash-up technologies (Ankolekar et al., 2007), including e.g. Yahoo Pipes, that let people more easily combine different sources of information and form new services, again supporting the new learning vehicle of uploading, sharing, and circulating interesting content across the networks.
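The following sketch illustrates the basic RSS-watching idea in a few lines of Java: poll a feed and report any items not seen before. The feed URL is a placeholder, and real RSS handling (namespaces, per-item parsing, conditional GET) is deliberately left out of this minimal version.

    import java.net.URL;
    import java.util.HashSet;
    import java.util.Set;
    import javax.xml.parsers.DocumentBuilderFactory;
    import org.w3c.dom.Document;
    import org.w3c.dom.NodeList;

    public class FeedWatcher {
        private final Set<String> seen = new HashSet<>();

        // Returns true if the feed contained at least one link not seen before.
        public boolean poll(String feedUrl) throws Exception {
            Document doc = DocumentBuilderFactory.newInstance()
                    .newDocumentBuilder()
                    .parse(new URL(feedUrl).openStream());
            NodeList links = doc.getElementsByTagName("link");
            boolean changed = false;
            for (int i = 0; i < links.getLength(); i++) {
                String link = links.item(i).getTextContent().trim();
                if (seen.add(link)) {          // add() returns false for known links
                    System.out.println("new item: " + link);
                    changed = true;
                }
            }
            return changed;
        }

        public static void main(String[] args) throws Exception {
            new FeedWatcher().poll("http://example.org/feed.rss"); // placeholder URL
        }
    }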



Taking one step back and two steps forward – towards new physical (and social) learning environments

Taking one step back for a moment to look at the direction in which the development of many traditional learning environments is heading today, we can notice a similar trend around traditional, physical learning environments as well. In the classical library, silence and individual search for literature were key factors in any library design. Now, however, we have a new set of design requirements for these classical learning environments.

For instance, reviewing the redesign of the city library in Umeå (see figure 4), a city in the north of Sweden, we can notice a couple of fundamental changes. In the old library, bookshelves were arranged so as to create narrow corridors all across the library. The library still holds the same number of books, but a third of them have been moved to a newly built upper floor. A staircase enclosed in glass enables library visitors on the ground floor to see other people moving up and down the stairs, and it gives people using the stairs a good overview of the library area below. Furthermore, the upper floor covers only a third of the ground floor, which enables people on the upper floor and the ground floor to see each other and communicate if they wish to do so. Finally, a new sofa was installed around one of the supporting pillars in the library. One important aspect of this sofa is its central place in the new open area; another is that the seats face outwards. This might not be ideal for a sofa conversation, but it fulfills the idea of an open library in which people notice and acknowledge each other. According to the architects behind the redesign, a guiding principle was "social awareness", a not so unfamiliar term in the area of information technologies these days.

Figure 4. Illustration from the redesign of the city library in Umeå as an example of a new approach to traditional learning environments

What we can learn from this example is that the "learning through networks" trend is more far-reaching than we can grasp by looking solely at digital, purpose-built learning environments. Instead, we need to redirect our attention and acknowledge that this trend moves across the digital and the physical landscape, and across new as well as traditional, old-style learning environments. While the traditional library was about silence and individual browsing of literature, the new library might be better labeled "the social interaction library". While this may be too extreme, the example still illustrates the movement towards more open and more socially encompassing learning environments.



Device cultures

It is not only the physical learning environments that are affected and changed by this movement towards socially encompassing learning settings. Another change concerns where the learning processes, in terms of knowledge sharing, interaction, and collaboration, take place. In the knowledge society, sometimes referred to as an "interaction society" (Wiberg, 2004), people bring along a myriad of interaction tools and digital devices (typically mobile phones, laptop computers, or various handheld devices) to enable themselves to interact with anyone, at any time, and anywhere. These devices, and their appropriation, are interesting in several different ways. The way in which people customize their devices to fit their needs and everyday behaviors is very individual, and it tells a lot about the person who customizes them. At the same time, the adoption and customization of devices is not only an individual process but also a highly social one. When customizing my own devices, I do so in relation to others: for instance, if my friends are running Skype or similar VoIP services on their computers, it is likely that I will also install and set up my own Skype account on my device. The customization of digital devices is therefore an issue for the individual as well as a highly social process. In our view, this is also an example of a delicate symbiotic interplay between acknowledging the peers in one's social networks and simultaneously acknowledging oneself and one's relation to the peers these networks consist of. In the end, we are just individual peers in these new social networks, but this time around we have the technology available for reaching out and interacting with others who might not be geographically close to us.

The device culture, characterized by people wearing, carrying, using and constantly configuring digital devices, can now be observed all over our modern society. People see these devices as their ontological security (Lowry & Moskos, 2005), think that they cannot manage without them (Fortunati, 2001), and carry them along to ensure constant access to their online social networks (e.g. Wiberg & Whittaker, 2005; Sadler et al., 2006). From a learning through networks perspective, and in line with Lankshear & Knobel (2006), this is a way for people to ensure their belonging to their social networks as an important point of reference for their learning and collaborative processes.

The device culture is also characterized by a few additional factors in relation to the learning through networks phenomenon. First of all, we have the "I, I, I trend", i.e. the acknowledgement of the individual in services like iTunes and iGoogle. At the same time, we can also see digital services that build upon the strength of individual contributions to the public, which is about our social networks, shared digital content, and mass interaction (Keen, 2007), including e.g. the recent and highly popular internet service Facebook (www.facebook.com). Furthermore, we should not underestimate the importance of fun as a means to support creative learning processes. This has been acknowledged previously in the literature (e.g. by Neal et al., 2004 and MacFarlane, 2005), and specific terms such as "edutainment" (e.g. Rasmussen, 1994; Rapeepisarn et al., 2006) have been coined to highlight the importance of fun for good learning processes. Finally, we should not forget the coming generation, i.e. children, as an important component of today's device cultures. These young people are often early adopters of new technology, creators of innovative products and services, and quick to learn and spread new things (Jenkins et al., 2006). Without making it into a cliché, the young people of today are the nodes in tomorrow's networks.

To summarize this trend, which today is typically labeled a movement towards "Web 2.0", we can make the following distinctions between "Web 1.0" and "Web 2.0". In "Web 1.0", most web sites were quite static in terms of frequency of information updates, designed to be centrally (and seldom) administrated, corporate-driven, and seen as showrooms for internet visitors. "Web 2.0" services, on the other hand, focus on user-driven and user-invented Internet projects, user-generated content, and digital media socially exchanged via these new digital services, and they are characterized by a high frequency of updates that typically involve various forms of digital media rather than text-based corporate information.

Supporting children to learn film editing through participatory design and social collaboration

While it may be fine to theorize about the important aspects of device cultures and the development towards Web 2.0 in relation to the phenomenon of learning through networks, it is equally important to illustrate these ideas in a concrete project. In this section I therefore take the aspects of learning through networks in the modern device cultures to an applied setting and present a participatory design project in which children developed their own computer-based tools for editing films and movie clips. On a general level, our project is related to e.g. the work of Hiroaki Ogata, especially the project with RFID tags for language learning (http://www-yano.is.tokushima-u.ac.jp/ogata/projects.html), the research conducted by Masanori Sugimoto on tangible learning environments (e.g. Sugimoto et al., 2003), the research conducted by Kurti et al. (2007) in the AMULETS (Advanced Mobile and Ubiquitous Learning Environments for Teachers and Students) project, and the Savannah project at FutureLab in the UK (http://www.futurelab.org.uk/resources/documents/project_reports/Savannah_research_report.pdf).

In this section, I present our project and the technology we have developed, followed by a discussion of user-driven design of learning technologies in which the technology is not just a container for other learning objects but instead a novel technology that in itself serves as a tool directly enabling children to do new things, i.e. to learn collectively through their computer-supported social network; in other words, a digital support for learning through networks.

In 2005, we initiated a participatory design project together with 12-year-old children as part of the EU-funded Meetings/vITal research project. During six months, a group of 13 children from a local school participated in a creative process of identifying needs for novel collaboration technologies. Their participation included the needs identification process, the system requirements analysis, and the design of a concrete system to support their vision of a novel collaboration technology. In this project, the children developed a large number of paper mock-ups, interface sketches, and low-fi prototypes to illustrate and communicate their visions of novel technical support for editing film (see figure 5). Their main vision included, among other ideas, a circular, table-sized area for tangible editing of movie clips by simply arranging physical tags (RFID tags) in the order they should be played in the finalized video. The whole project is described in detail in Vaucelle et al. (2005).

Figure 5. Two illustrations from the creative process of identifying the needs and visions for a future computer-based tool for editing movie clips. To the left, a picture from one of the sessions with the children; to the right, an example of an interface sketch drawn during the project

From a "learning through networks" point of view, however, there are some interesting aspects to acknowledge here. As formulated above, we took those principles as a guide and involved the children as creative and active participants in the project. We also allowed them to play freely with paper-and-pencil materials to make the process fun. We encouraged them to work together rather than in isolation, and we wanted them to develop a tool that would work for each of them individually as well as being a good support for collectively editing film.

These criteria turned out to work very well for us, and in only six months the children went from vague initial ideas to an implemented system (although they had some help at the end with the programming of the actual system and the construction of the table, the tags, and the special cameras developed in the project). Figure 6 illustrates the final version of the system, which contains: 1) a circular editing table on which the children can arrange their RFID tags in any order they want the clips to appear in the final cut of the movie; and 2) a special camera built using a camera-equipped PDA running Windows Mobile, which we also equipped with a WLAN card and an RFID-tag reader, to enable the association of video clips with RFID tags and to transfer the recorded clips seamlessly to the editing table over WLAN. As illustrated in figure 6 (right), a child could then simply take an RFID tag, place it in the holder on the camera (see the small light-blue circle), shoot a sequence, and then take the RFID tag from the camera and place it in the proper position on the editing table.
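The logic that ties the tags to the clips can be summarized in a short sketch: each RFID tag is bound to a clip when filming, and the final cut is simply the clips read back in the order in which the tags lie on the table. The tag identifiers and file names below are invented for illustration; the actual system is described in Vaucelle et al. (2005).

    import java.util.ArrayList;
    import java.util.LinkedHashMap;
    import java.util.List;
    import java.util.Map;

    public class TangibleEditor {
        private final Map<String, String> clipForTag = new LinkedHashMap<>();

        // Called when a child shoots a sequence with a tag in the camera holder.
        void bind(String tagId, String clipFile) {
            clipForTag.put(tagId, clipFile);
        }

        // Called when the table scans the tags; the order on the table is the cut.
        List<String> finalCut(List<String> tagOrderOnTable) {
            List<String> playlist = new ArrayList<>();
            for (String tagId : tagOrderOnTable) {
                String clip = clipForTag.get(tagId);
                if (clip != null) playlist.add(clip);
            }
            return playlist;
        }

        public static void main(String[] args) {
            TangibleEditor editor = new TangibleEditor();
            editor.bind("tag-01", "intro.avi");
            editor.bind("tag-02", "interview.avi");
            editor.bind("tag-03", "ending.avi");
            // The children rearrange the physical tags; the cut follows.
            System.out.println(editor.finalCut(List.of("tag-02", "tag-01", "tag-03")));
        }
    }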

Figure 6. The editing table (left), the RFID/WLAN camera (center), and a couple of the children in the project playing with the cameras and the editing table (right)

So, while this project on the one hand illustrates the design of a novel technology for film editing, in terms of a robust, tangible interface for collaborative film editing, it at the same time illustrates something else of importance from a learning through networks perspective, namely the power of social, user-driven design of learning technologies in which the technology is not just a container for something else but a tool that directly enables the children to do new things, i.e. to learn collectively through their computer-supported social network. For the children who participated in this project, the most important lesson might not be how to design this specific tool, but rather the discovery of the power of playing around together, playing with digital technology, and participating in a design process for new technologies (Vaucelle et al., 2005), and how such a process can stimulate creativity and thus new ways of learning.

Conclusions – A final note on netlearning vs. learning through networks

In this paper we have elaborated on the notion of netlearning as a labeling concept for learning technologies, versus learning through networks as a label for the new ways in which people use the Internet and wireless mobile networks to communicate, collaborate, participate and generate new knowledge around their own special topics. As such, this paper has challenged both the assumptions made about the requirements for effective learning environments, by pointing in the direction of social, creative learning processes, and the technologies for effective and creative learning processes, by redirecting the focus from what has been labeled "traditional computer-based learning environments" towards user-driven learning networks.

Furthermore, we have proposed the concept of netlearning as a general label for the traditional use of computer-based learning environments as education tools, and the concept of learning through networks as a challenging concept for addressing user-driven technologies that support creative, social learning processes in, via, or outside typical educational settings. The general outline of this paper has been inspired by recent research into the interaction society and by the Scandinavian tradition in system development, which has always highlighted the importance of user-driven processes, of users as creative social individuals, and of a perspective on users as creative contributors to both the form and the content of new interaction technologies.



Finally, we have presented a participatory design project in which children developed their own computer-based tools for editing film, and we have presented this technology followed by a discussion of user-driven design of learning technologies in which the technology is not just a container for something else but a novel technology that in itself serves as a tool directly enabling children to do new things, i.e. to learn collectively through their computer-supported social network; in other words, a digital support for learning through networks.

While this paper started out with a quick step away from the traditional learning environment and the idea of silence as an important factor for effective individual learning, I think that the biggest challenge for further research in this field is to generate new knowledge about the other side of the individual "silence coin", i.e. research efforts directed towards understanding and generating new theories and concepts that can explain and shed new light upon the processes of collaborative and creative concentration that result from learning through, and playing with, new forms of digital networks.

References

Ankolekar, A., Krötzsch, M., Tran, T., & Vrandecic, D. (2007). Semantic web and web 2.0: The two cultures: Mashing up web 2.0 and the semantic web. Paper presented at the 16th International Conference on World Wide Web, May 8-12, 2007, Banff, Canada.

Castells, M. (1996). The Rise of the Network Society, Oxford: Blackwell.

Chen, Y., Fabbrizio, G., Gibbon, D., Jora, S., Renger, B., & Wei, B. (2007). Geotracker: Geospatial and temporal RSS navigation. Paper presented at the 16th International Conference on World Wide Web, May 8-12, 2007, Banff, Canada.

Dieberger, A., Dourish, P., Höök, K., Resnick, P., & Wexelblat, A. (2000). Social navigation: Techniques for building more usable systems. Interactions, 7 (6), 36-45.

Ehn, P. (1988). Work-Oriented Design of Computer Artifacts, Stockholm: Arbetslivscentrum.

Ehn, P., Kyng, M., & Sundblad, Y. (1983). The UTOPIA Project: On training, technology, and products viewed from the quality of work perspective. In U. Briefs, C. Ciborra, & L. Schneider (Eds.), Systems Design For, With and By the Users, Amsterdam: North-Holland, 439-449.

Fortunati, L. (2001). The mobile phone: An identity on the move. Personal and Ubiquitous Computing, 5 (2), 85-98.

Gifford, B., & Enyedy, N. (1999). Activity centered design: Towards a theoretical framework for CSCL. Paper presented at the 1999 Conference on Computer Support for Collaborative Learning, CSCL '99, 11-12 December 1999, Stanford, USA.

Glotzbach, R., Mohler, J., & Radwan, J. (2007). RSS as a course information delivery method. Paper presented at the International Conference on Computer Graphics and Interactive Techniques, 7-9 August 2007, San Diego, California, USA.

Hirschheim, R., & Klein, H. (1989). Four paradigms of information systems development. Communications of the ACM, 32 (10), 1199-1216.

Hoppe, H., Pinkwart, N., Oelinger, M., Zeini, S., Verdejo, F., Barros, B., & Mayorga, J. I. (2005). Building bridges within learning communities through ontologies and "thematic objects". Paper presented at the CSCL 2005 Conference, 30 May - 4 June 2005, Taipei, Taiwan.

Jenkins, H., Clinton, K., Purushotma, R., Robison, A., & Weigel, M. (2006). Confronting the Challenges of Participatory Culture: Media Education for the 21st Century, retrieved October 15, 2007, from http://www.projectnml.org/files/working/NMLWhitePaper.pdf.

Jones, C., Dirckinck-Holmfeld, L., & Lindström, B. (2005). CSCL - the next ten years: A view from Europe. Paper presented at the CSCL 2005 Conference, 30 May - 4 June 2005, Taipei, Taiwan.

Keen, A. (2007). The Cult of the Amateur: How Today's Internet Is Killing Our Culture, New York: Currency.

Kurti, A., Milrad, M., & Spikol, D. (2007). Designing innovative learning activities using ubiquitous computing. Paper presented at the ICALT 2007 Conference, July 18-20, 2007, Niigata, Japan.

Lankshear, C., & Knobel, M. (2006). New Literacies: Everyday Practices and Classroom Learning, Berkshire, UK: Open University Press.

Lowry, D., & Moskos, M. (2005). Hanging on the mobile phone: Experiencing work and spatial flexibility. Paper presented at the 4th Critical Management Studies Conference, 4-6 July 2005, Cambridge, UK.

MacFarlane, S., Sim, G., & Horton, M. (2005). Assessing usability and fun in educational software. Paper presented at the 2005 Conference on Interaction Design and Children, June 8-10, 2005, Boulder, CO, USA.

Miao, Y., Hoeksema, K., Hoppe, H., & Harrer, A. (2005). CSCL scripts: Modelling features and potential use. Paper presented at the CSCL 2005 Conference, 30 May - 4 June 2005, Taipei, Taiwan.

Milrad, M., Björn, M., & Jackson, M. H. (2005). Designing networked learning environments to support intercultural communication and collaboration in science learning. International Journal of Web Based Communities, 1 (3), 308-319.

Neal, L., Miller, D., & Perez, R. (2004). Online learning and fun. eLearn, retrieved October 15, 2007, from http://www.elearnmag.org/subpage.cfm?section=articles&article=4-1.

Norris, C., Soloway, E., & Sullivan, T. (2002). Examining 25 years of technology in education. Communications of the ACM, 45 (8), 15-18.

Rapeepisarn, K., Wai Wong, K., Che Fung, C., & Depickere, A. (2006). Similarities and differences between "learn through play" and "edutainment". Paper presented at the 3rd Australasian Conference on Interactive Entertainment, December 4-6, 2006, Perth, Australia.

Rasmussen, M. (1994). Interactive edutainment on the Internet. ACM SIGGRAPH Computer Graphics, 28 (2), 139.

Russell, L., & Shackelford, R. (1990). Educational computing: Myths versus methods - why computers haven't helped and what we can do about it. Paper presented at the Conference on Computers and the Quality of Life, September 13-16, 1990, Washington DC, USA.

Sadler, K., Robertson, T., & Kan, M. (2006). "It's always there, it's always on": Australian freelancers' management of availability using mobile technologies. Paper presented at the 8th Conference on Human-Computer Interaction with Mobile Devices and Services, September 12-15, 2006, Espoo, Finland.

Spinuzzi, C. (2002). A Scandinavian challenge, a US response: Methodological assumptions in Scandinavian and US prototyping approaches. Paper presented at the 20th Annual International Conference on Computer Documentation, October 20-23, 2002, Toronto, Canada.

Sproull, L., & Kiesler, S. (1998). Connections - New Ways of Working in the Networked Organization, Cambridge: The MIT Press.

Sugimoto, M., Kusunoki, F., Inagaki, S., Takatoki, K., & Yoshikawa, A. (2003). Design of a system and a curriculum to support group learning for school children. Paper presented at the CSCL 2003 Conference, June 14-18, 2003, Bergen, Norway.

Vaucelle, C., Africano, D., Davenport, G., Wiberg, M., & Fjellstrom, O. (2005). Moving Pictures: Looking Out / Looking In. Paper presented at the SIGGRAPH 2005 Conference, 31 July - 4 August 2005, Los Angeles, USA.

Warr, A., & O'Neill, E. (2005). Understanding design as a social creative process. Paper presented at the 5th Conference on Creativity & Cognition, 12-15 April 2005, London, UK.

Wiberg, M. (2004). The Interaction Society - Practice, Theories, and Supportive Technologies, Hershey, PA: Idea Group.

Wiberg, M., & Whittaker, S. (2005). Managing availability: Supporting lightweight negotiations to handle interruptions. ACM Transactions on Computer-Human Interaction, 12 (4), 356-387.

Wiberg, M. (2007). Midgets - Towards truly liquid media. Paper presented at the CMID '07 - The 1st International Conference on Cross-Media Interaction Design, March 22-25, 2007, Hemavan, Sweden.


Milrad, M., & Spikol, D. (2007). Anytime, Anywhere Learning Supported by Smart Phones: Experiences and Results from the MUSIS Project. Educational Technology & Society, 10 (4), 62-70.

Anytime, Anywhere Learning Supported by Smart Phones: Experiences and Results from the MUSIS Project

Marcelo Milrad and Daniel Spikol
Center for Learning and Knowledge Technologies (CeLeKT), Växjö University, Sweden
marcelo.milrad@msi.vxu.se // daniel.spikol@msi.vxu.se

ABSTRACT

In this paper we report the results of our ongoing activities regarding the use of smart phones and mobile services in university classrooms. The purpose of these trials was to explore and identify which content and services could be delivered to smart phones in order to support learning and communication in the context of university studies. The activities were conducted within the MUSIS (Multicasting Services and Information in Sweden) project, in which more than 60 students from different courses at Växjö University (VXU) and Blekinge Institute of Technology (BTH) participated during the course of their studies. Generally, the services integrated transparently into students' previous experience with mobile phones. Students generally perceived the services as useful to learning; interestingly, attitudes were more positive when the instructor adapted pedagogical style and instructional material to take advantage of the distinctive capabilities of multicasting. To illustrate, we describe a number of educational mobile services we have designed and implemented at VXU and BTH. We conclude with a discussion and recommendations for increasing the potential for successful implementation of multicasting mobile services in higher education, including the importance of usability, institutional support, and tailored educational content.

Keywords
Ubiquitous Learning, Educational Mobile Services, Smart Phones

Introduction

In the past decade, the Internet has spawned many innovations and services that stem from its interactive character. The emergence of ubiquitous and inexpensive microprocessors and wireless networks has led to the wide deployment of mobile devices that allow us to access and handle information almost anytime and anywhere (Roussos et al., 2005). Diverse multimedia applications have flourished with recent advances in hardware and network technology, the proliferation of inexpensive video-capture devices, and the widespread adoption of the worldwide web via these mobile devices. All these forms of interactive multimedia and communication offer new possibilities for supporting innovative ways of learning, collaborating and communicating (Milrad, 2003; Thornton & Houser, 2004). These technologies and new forms of mobile communication and collaboration have been widely adopted by young people and integrated into their everyday lives. Clear indications of this trend can be found on sites such as www.youtube.com, www.flickr.com, and www.facebook.com. However, this transformation does not live up to the promises and expectations when it comes to the use of mobile technologies at schools and universities (Norris et al., 2002; Tatar et al., 2003).

Lankshear and Knobel (2006) claim that formal education ignores some of these trends, and argue that mobile and wireless technologies and new media should be integrated into current school educational activities, as they are transforming and defining new literacies in teaching and learning. Thus, there are a number of challenging questions that deserve further exploration. What are the implications of using mobile computing and wireless communication for supporting learning and teaching? What new scenarios and applications will emerge? In order to understand the possible impact of using smart phones to facilitate learning and teaching, we proceed by presenting the results of one of our ongoing projects, MUSIS (MUlticasting Services and Information in Sweden).

This paper presents the results of two pilot studies conducted within the framework of the MUSIS project between 2005 and 2007. By presenting these two periods of trials, we hope to gain new insights regarding how attitudes and expectations towards using mobile phones in educational settings may have changed over the last two years. The next section describes the MUSIS project and its technical infrastructure. The method section describes the implementation of the trials and the data collection techniques we have used. The results and discussion section describes the outcome of our trials and explains how students experienced the mobile services. Issues and problems are discussed with regard to the technology and its use. Overall conclusions are provided in the final section of the paper.

A brief overview of the MUSIS project

The main objective of the MUSIS project is to explore, identify and develop a number of innovative multicast mobile services to support learning, with multimedia information distributed over wireless networks using multicasting solutions at university level. The project has had two pilot phases, the first during 2005 and the second in 2007. MUSIS (http://www.musis.se) has brought together different partners. The key partners have been TeliaSonera (TS), Sweden's largest telecom operator, the City of Stockholm, Växjö University (VXU), and Bamboo MediaCasting, a pioneering company in the field of cellular multicasting. Luleå University of Technology (LTU), the Royal Institute of Technology (KTH) in Stockholm, and the Blekinge Institute of Technology (BTH) have also been actively involved in the project.

The multicasting mobile services developed in the MUSIS project are organized as a range of content channels to which users can subscribe. Each user can build a personal portfolio of channels of interest. Multimedia content is sent to subscribers, according to a predefined time schedule, over the GPRS (General Packet Radio Service) network using wireless multicast technology (Varshney, 2002). It is also possible to program the MUSIS system to send content to the phones based on discrete events. The content sent to a phone is downloaded in the background and stored on the phone's memory card. Once the content has arrived, the phone beeps to announce that a new message has been received, similar to standard message services. Users can then interact with the MUSIS client installed on the smart phone in order to view and save the content. This approach differs from the latest type of mobile services offered by the telecom industry, which use streaming technology. The digital content used in these trials included TV news, music, entertainment videos, general information related to students' activities, such as lecture notes (including video and audio), and specific information related to the different courses.
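The channel model described above can be summarized in a small sketch: users subscribe to channels, and one multicast transmission reaches every subscriber of a channel, with delivery happening in the background on the handset. The class and method names are illustrative only and do not reflect the actual MUSIS implementation.

    import java.util.*;

    public class ChannelServer {
        private final Map<String, Set<String>> subscribers = new HashMap<>();

        // A user adds a channel to his or her personal portfolio.
        public void subscribe(String user, String channel) {
            subscribers.computeIfAbsent(channel, k -> new HashSet<>()).add(user);
        }

        // Stands in for a scheduled multicast transmission over GPRS:
        // one send, delivered to every subscriber of the channel.
        public void multicast(String channel, String contentFile) {
            for (String user : subscribers.getOrDefault(channel, Set.of())) {
                deliverInBackground(user, contentFile);
            }
        }

        private void deliverInBackground(String user, String contentFile) {
            // On the handset this would store the file on the memory card
            // and beep once the download completes.
            System.out.println("delivered " + contentFile + " to " + user);
        }

        public static void main(String[] args) {
            ChannelServer server = new ChannelServer();
            server.subscribe("student-1", "lecture-notes");
            server.subscribe("student-2", "lecture-notes");
            server.multicast("lecture-notes", "lecture-week4.3gp");
        }
    }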

During the second phase of the project, we introduced additional content tools that allowed users to multicast video, audio, images, and text directly from the handset. We also expanded the web interface that controlled the subscriptions to include the ability to upload and convert content for multicast delivery. The fundamental change in this trial was shifting the traditional broadcast model used in phase 1 of the project from a one-to-many model to a many-to-many model, thus providing students with the ability to explore how these concepts could be used in an educational environment. The content in this phase was created by the students and the instructors and included text, calendar events, photographs, and video. All these materials were sent between the student groups and back and forth between the instructors and the students.

Technical aspects

A complex technical infrastructure has been developed in order to deliver the different mobile services to the students. This task requires complex software solutions in order to connect and combine the content coming from different content providers. Figure 1 illustrates the generic technical architecture and the different hardware and software components used in the project.

Bamboo's equipment provides the multicasting feature in the GPRS network. The content management system (CMS) located at TS is responsible for scheduling the content transmissions. The MUSIS CCS (Collect, Convert and Send), developed and implemented at VXU, is responsible for collecting, organizing, and converting the different digital material coming from all content providers (including educational material produced by the teachers) as described above. The MUSIS CCS system provides tools to manipulate content automatically and transmit it to Bamboo's router for distribution to the users. The CCS can get the content from the content sources based on predefined rules, convert it to formats that are supported by the mobile handsets and transmit it to Bamboo's server. These activities can be done automatically, without human intervention.

Figure 2 describes the generic architecture of the MUSIS CCS system. As seen in the illustration, the system is based on several different inputs and outputs. The system, which has been implemented using Java-related technologies, is scalable and consists of modular, reusable and easily expandable components so that it can deal with new types of content. This applies to all features, i.e. the collecting, converting and sending mechanisms. The system is programmed in Java using JSP and Java Beans on a Linux platform. It also uses open-source tools and applications.
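
Although the CCS itself is written in Java, the collect-convert-send flow can be sketched compactly. The Python sketch below is an assumption-laden illustration of that flow; the Rule and Converter abstractions are our own names, not part of the MUSIS code base.

```python
from typing import Callable, Iterable, List

# Hypothetical type aliases: a collection rule pulls raw items from one
# content source; a converter rewrites an item into a handset-friendly format.
Rule = Callable[[], Iterable[bytes]]
Converter = Callable[[bytes], bytes]

def ccs_cycle(rules: List[Rule], converters: List[Converter],
              send_to_router: Callable[[bytes], None]) -> None:
    """One automatic Collect-Convert-Send cycle, with no human intervention."""
    for rule in rules:                  # Collect: apply each predefined rule
        for item in rule():
            for convert in converters:  # Convert: e.g. transcode for the phone
                item = convert(item)
            send_to_router(item)        # Send: hand off for multicast delivery
```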

Figure 1. MUSIS generic architecture

Figure 2. Generic illustration of the CCS system (Collect, Convert and Send)



In the next section we concentrate on the activities carried out at Växjö University and Blekinge Institute of Technology. We present the results of two pilot studies conducted over a two-year period, focusing specifically on the question of whether students would find a mobile phone useful for supporting their learning, and in particular whether multicasting mobile services would be suitable for supporting learning and other activities related to their academic life. These studies aimed to look at the patterns of use of the various mobile services and their impact on students' learning habits. We were also interested in determining what type of functionality is required for educational mobile services to be considered useful. Our results lead us to advocate a comprehensive approach to the introduction of smart phones and mobile services in university classes that considers not only technical features but also the individual, social and organizational aspects of technology adoption.

Method

Participants

For the first set of trials, we solicited volunteers from students enrolled in two courses offered at VXU during the spring term of 2005. One course was offered at the School of Humanities, and the other at the School of Mathematics and Systems Engineering. After a short presentation delivered by members of the research team at the beginning of the term, students from these two courses volunteered to participate in the pilot. Twenty-two students from the course in the School of Humanities and nineteen from the School of Mathematics and Systems Engineering volunteered. Each volunteer was given a "smart phone" for the duration of the school term (3 months). Although the number of smart phones available limited the number of participants, we were able to provide phones to all students who wanted to volunteer as participants. Each student signed a contract of use that specified their obligation to participate in the project in return for free use of the phone and a small amount of money they could use to make phone calls. The project also provided continuously available online and face-to-face support. The project began with a workshop session to familiarize the students with the smart phone and the software. Participants ranged from 19 to 40 years of age, with a mean age of 26. Nineteen were female and twenty-two male. All 41 students already owned at least one mobile phone at the start of the project. With regard to how much they spent on their own phone services before joining the project, on average a student in this group paid 28 USD a month. Twenty per cent of the 41 students participating in this study spent more than 45 USD a month.

For the second trial, we worked with BTH students during the spring term of 2007 in a special project course in the Literature, Culture and Digital Media in the Humanities program (END011). Twenty-one students and two instructors participated in the trial over a five-week period. Participants ranged from 20 to 26 years of age, with a mean age of 24. Eleven were female and ten male. The students organized themselves into five groups consisting of 4 persons each. The expected outcome of the course was for the students to produce 5 pilot mobile applications that would help tourists explore the history of the local city in novel ways using mobile phones and interactive storytelling techniques. The students in this trial all owned at least one mobile phone and spent a similar amount in phone costs compared to the 2005 trial.

Equipment and Services

The participants of the studies were each equipped with smart phones. For the first trial the students were supplied with Nokia 6630 phones, and for the second trial Nokia N70 phones were used. Both phone models run the Symbian operating system and have mobile internet browsers; cameras with digital zoom; video, still, and audio recording; and the RealOne player for playback and streaming of 3GPP-compatible and RealMedia video clips. Additional applications include personal information management (PIM), a calendar, and a contacts database. Users could synchronize contacts and calendar data stored on the phone with data stored on a personal computer. Since the Symbian operating system is open, we were able to develop a Python application that enables mobile multicasting from the handset. This particular feature was implemented during the second trial.

Technical development of MUSIS services took place concurrently in both studies, enabling refinements during the project cycles. For the first pilot phase, which took place during the period March 1 to April 30, 2005, all participants accessed the same set of channels, receiving approximately 5 to 7 MUSIS messages (push technology) daily. One of these channels carried educational content related to their VXU course. Subscription to the educational channel was compulsory throughout the project. However, beginning May 1, 2005, users were able to subscribe to up to 30 channels of their choice using a web interface (available via both a PC and a mobile phone) specially developed for this project. During the second phase, which took place from April 1 to May 24, 2007, all participants and two instructors accessed two public channels, and each of the five groups additionally had a group channel for inter-group communication. In this paper, we focus specifically on our experience and results with the different educational channels only.

Figure 3. The MUSIS client interface (left) and the mobile multicast client interface (right)

Implementation of phase 1

For the first trial, the educational materials delivered for this project included small "micro lectures" in video format, voice-based course information and assignments, and specific information related to the logistics of the different courses (calendar information, cancellation of lectures and so on). In the case of the "micro lectures" and the audio-based and text information, the contents were developed for (and sometimes tailored to) the phone by the course instructor. In order to send this material to the phones, the teacher used a special web interface we designed for this purpose. We also developed a number of solutions that allow internet-based educational resources used in the course to be sent automatically to the phones. Instructors were also given a smart phone of the same type given to the students.

FirstClass (FC) is a communication platform used at Växjö University mainly for distance education but also for campus-based courses. There are two ways of accessing the FirstClass application: students can use the FC client software or a web-based client directly from any browser. In the version currently implemented at VXU, the only way to deliver FC content to mobile phones is by purchasing a very expensive SMS module. We therefore developed an application, using Java and XML, that converts the instructor's contributions in the FC forum to an RSS (Really Simple Syndication, an XML format for syndicating web content) feed that is then multicast to the phones. The Java application ran in the background of the FC forum, so the instructor's contributions to the forum were automatically transformed into a format suitable for the phone. The content from the FC forum arrives at the phones as a file in HTML format that can be viewed with the phone's Internet browser.
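
As an illustration of the conversion step, a minimal sketch follows. The original converter was written in Java and XML; this Python version only shows the idea of wrapping forum posts in an RSS 2.0 feed, and the post field names are assumptions.

```python
from xml.etree import ElementTree as ET

def posts_to_rss(posts, feed_title):
    """Build a minimal RSS 2.0 feed from forum posts.

    `posts` is assumed to be an iterable of dicts with 'title', 'link' and
    'body' keys; the real converter read the instructor's contributions
    directly from the FirstClass forum.
    """
    rss = ET.Element("rss", version="2.0")
    channel = ET.SubElement(rss, "channel")
    ET.SubElement(channel, "title").text = feed_title
    for post in posts:
        item = ET.SubElement(channel, "item")
        ET.SubElement(item, "title").text = post["title"]
        ET.SubElement(item, "link").text = post["link"]
        ET.SubElement(item, "description").text = post["body"]
    return ET.tostring(rss, encoding="unicode")
```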

Implementation of phase 2

In the second trial, the MUSIS system was used to foster collaboration and communication between the instructors and the students. Figure 4 below illustrates how communication and collaboration between the teachers and the student groups was envisioned. Instructors and students could multicast to group members, the entire class, specific groups and instructors, using both the mobile application we developed for the handset and a web-based interface developed to achieve the same goals. The system and the smart phones supported group and class interaction. This allowed the students to schedule group work and gave the instructors an additional way to provide feedback to the class and the student groups using video, audio, and text messages. It also provided ways to coordinate and organize the class work with calendar and text-based messages sent to the phones.



Figure 4. Communication and collaboration between instructors and student groups

In this second phase, we modified the multicast model to allow any user, at any time and from anywhere, to multicast content from the smart phone. This opened up different ways to use the technology, not only in educational contexts but also to support students' daily activities. The key application that enabled this feature was a Python application we developed that enables digital content generated with the smart phones (including video, still photographs, and audio) to be uploaded to the system and then multicast as an event to a particular channel. In addition, the application allowed plain text and calendar events to be multicast.
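
The handset-side step can be sketched as follows. The actual client was written in Python for the Symbian platform; the server URL scheme, request format, and parameter names below are invented for illustration.

```python
import urllib.request

def multicast_from_handset(server_url: str, channel: str, media_path: str) -> None:
    """Upload a captured file and ask the server to multicast it as an event.

    The real client spoke to the MUSIS back end; the endpoint and header
    here are assumptions.
    """
    with open(media_path, "rb") as f:
        payload = f.read()
    request = urllib.request.Request(
        url=f"{server_url}/multicast?channel={channel}",
        data=payload,
        headers={"Content-Type": "application/octet-stream"},
    )
    # POST is implied when data is given; the server side converts the file
    # and schedules it as a multicast event on the chosen channel.
    urllib.request.urlopen(request)
```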

Data collection across both projects

Given the exploratory nature of these studies, we used multiple methods to collect data for both phases of the MUSIS project. This allowed us to scan for patterns of use and attitudes that could be investigated more specifically in future studies. For the first phase, participants completed web-based questionnaires in weeks 1, 5, and 10 of the project. The first survey included items that measured personal attitudes toward mobility, attitudes toward media formats, and how much different media formats were used. The second and third surveys included items regarding the perceived effect of the phones on learning, preference for different media formats, preference for channels, and perceptions of telephone functionality and usability. Additionally, members of the research team facilitated four focus group interviews with 15 participants, which were videotaped.

The focus groups ranged in size from 3 to 6 participants. The interviews covered issues regarding the participants' perception of the project in connection to the services, their functionality and their usefulness. Additionally, the participants were asked to suggest and discuss additional educational mobile services that could be developed. Finally, a 90-minute workshop with the students was held at the end of the term, which was videotaped. The purpose of these sessions was to carry out an open discussion with most of the students in order to get an overall view of how the students experienced the project. In the second phase of the project we continued using multiple methods for data collection. Focus group sessions with each of the groups were videotaped, covering perceptions and future uses of the technology. Additionally, the communication data was collected, providing additional insight into how collaboration and communication unfolded in each group and how the mobile application supported this. The main objectives of all these activities were to assess the usefulness and quality of the services, to identify problems experienced by the students and to explore what future MUSIS services could look like.

Results and discussion

General use and attitudes

The majority of the students participating in the two MUSIS pilots had mobile phones of their own before they joined the project. Therefore, delivering content directly to the smart phones was transparent for them, integrating not only with their existing day-to-day practices, but also with their views of mobility and accessibility as central to their lifestyle. It is important to note that even though their personal handsets supported a variety of features such as e-mailing, surfing the Internet, calendar support, and so forth, most participants used their phones only for making ordinary voice calls or for receiving/sending SMS. During the course of the first trial, students' attitudes toward the services improved when they could, for instance, start choosing the channels of their preference and explore the additional services free of cost. Throughout both trial periods the students perceived the MUSIS mobile services as potentially useful, dynamic and as something that could be integrated into their everyday life.

Did students find mobile phones and multicasting useful for supporting their learning?

In the first set of trials, the participants were more likely to see the multicasting service as useful the more it was integrated into their course content. The cases presented in this paper differed substantially in how the instructors and the students used the technology. In the earlier trials, one instructor (for MEA708, in the School of Mathematics and Systems Engineering) did not adapt his assignments or activities for the technology. The other instructor (for GIX131, in the School of Humanities) actively produced content for this new medium, sending a relatively high number of MUSIS messages (41) to students, 7 of which were multimedia in form (video and audio).

Table 1. Perceived usefulness of the educational mobile services after 5 weeks (n=41) and 10 weeks (n=41)

Course    Week   Very useful   Useful   Fairly useful   Not useful
GIX131      5       27.3%       45.5%       18.2%          9.0%
MEA708      5       10.5%       52.6%       21.1%         15.8%
GIX131     10       40.0%       26.7%       20.0%         13.3%
MEA708     10        5.9%       35.3%       41.2%         17.6%

The importance of integrating the service into the pedagogy or instructional style of the course is illustrated in Table 1, which reports results from the survey item, "How useful did you experience the course-related information sent to the educational channel?". In both classes, the majority of students saw the educational multicast services as useful or very useful in week 5. However, by week 10, that figure had dropped to less than 50% for MEA708. At the same time, the number of students in GIX131 viewing the services as "very useful" grew substantially.

In the most recent trial, conducted at BTH (END011), the students' and the instructors' initial ideas of how to use the system differed. The students expressed interest in using the system for communication and relevant school information, while the instructors' focus was on supplementing student feedback with the technology. The student groups sent 24 messages, 10 of which concerned specific project work scheduling while the remainder were more social. The instructors sent a high number of messages (25), of which 12 were video feedback to the different groups about ongoing work and the remainder were organizational, about course times and deadlines. Table 2 illustrates how perceptions changed as the system was used during the trial. Over the five-week trial period, the students' perceptions changed: team communication was used and perceived as being helpful or very helpful. The instructors' feedback over the trial period was evaluated as not helpful by more than a third of the students, while slightly less than a third felt more positive about the feedback and the remainder were undecided. The analysis of these results gives us the opportunity to think about further research issues regarding how best to provide feedback with mobile devices in future trials.

Table 2. Perceived usefulness of the educational mobile services for the END011 course after 5 weeks (n=21)

Initial perceptions    Week   No interest   Low interest   Interested   High interest
Team communication       1       0.0%          19.0%          28.6%         52.4%
Instructor feedback      1      19.0%          33.3%          28.6%         19.0%

Final perceptions      Week   Not helpful    Helpful     Very helpful    Undecided
Team communication       5      17.0%          21.0%          43.0%         19.0%
Instructor feedback      5      38.0%           5.0%          24.0%         33.0%



With regard to the usability and functionality of the phone itself, participants reported dissatisfaction with the small size of the mobile phone buttons, the quality of video, the small screen size, and the limited battery life of the devices.

Discussion

These two trials clearly illustrate that both students and teachers are open to, and intrigued by, using everyday mobile communication and collaboration tools in education. What is still lacking is an understanding of how these tools provide new collaboration modes and how self-organizing environments can provide educational benefits (Dron, 2007). The perceived needs of the instructors and students remain unsynchronized, with the instructors wanting to use the smart phones for providing feedback to the students while the students prefer more logistical and practical information to be delivered to the handsets. The creation of rich media like audio and video generated by the students requires more effort than the traditional use of SMS and chat. From this perspective, having students work and communicate using these new media types may have some impact on the different educational activities (Lai & Wu, 2006).

An unexpected finding emerged in both trials, based on the outcomes of the assignments developed by the instructor in the earlier pilot, and then again in the most recent trial regarding the use of the mobile application for many-to-many multicasting. In contrast to email, SMS, chat and other more instantaneous types of communication, students and instructors spent significant time staging and composing their answers and feedback, often recording multiple "takes" before the final video or audio was submitted to be multicast. This suggests that the common practice of "composing" text messages may extend to audio and video messaging as well. Indeed, preliminary analysis of these recordings shows that the users tried to compress information not through indistinct, fast talk (similar to the abbreviations of SMS), but through concentrated, effectively expressed sentences.

Conclusions and future development

These studies were designed to explore the patterns of use of a number of mobile services experienced by students at a couple of university campuses and other locations of their choice. In the second trial these patterns of use were extended to give individuals the ability to multicast to the channels of their choice and to explore new patterns of collaboration. Impact on learning itself was not measured, nor would it have been possible to measure it meaningfully when the devices were used for such diverse purposes. Phone-optimized content was heavily used, and there was a clear request from the students that more resources be made available in this format, including administrative information from the universities. It is also important to recognize the need to address the technical requirements of producing and sharing content across multiple types of devices and networks. This clearly points towards a low barrier for adoption of these mobile services by students in the near future, if the ease of use of smart phones with traditional e-learning materials can be effectively harnessed in ways that make sense and provided that the cost is comparable to wireless broadband.

Ownership of the technology is clearly important. As long as the phones are loaned, students are reluctant to invest time and money in personalization. This will prevent better evaluation of the impact of technology on learning. Greater institutional support is needed in order for the smart phones to be used more fully. Regular updates of timetables and content, as well as adequate training and hardware provision, are needed. As more students bring the technology with them to the university, change will most likely be driven by their demands as learners.

Our results confirm the importance of designing applications and services for learners that are easy to use "on the road" and that can be completed in short bursts of time (Wuthrich et al., 2003). Multicasting is one way to support what Brodersen et al. (2005) call "nomadic learners", who are more project oriented and who spend much of their daily life in "transit between many physical places ('oasis') such as classrooms, labs, workshops, libraries, museums, the city, nature, clubs and at home" (p. 298). However, our results also suggest that in higher education a challenge lies in designing social technologies that allow for bridging different pedagogic goals (control of learning) and ways of communication between the different actors in the learning environment. These aspects require more than just designing services to connect people and content (Dron, 2007); they also require creating new didactic sequences and educational activities that can connect formal and informal learning settings.



As our work continues, we will try to enhance the educational aspects of the mobile services by developing and implementing various solutions to specific problems we have identified based on our observations and the data we have collected from the students. Our future efforts will continue to refine both the technology and the activities, providing learners with more meaningful experiences of smart phones in educational settings through more tools for collaboration that take into consideration the needs of both instructors and students. Upcoming research activities include the continuation of our efforts within the framework of a new international project exploring the use of mobile devices for game-based learning and for field studies in natural science, math, and physical fitness supported by mobile applications.

Acknowledgement

MUSIS is partially funded by governmental bodies – Sweden's VINNOVA and Israel's MATIMOP – as part of the SIBED program, a joint Swedish-Israeli mobile technology research effort.

References

Brodersen, C., Christensen, B. G., Grønbæk, K., Dindler, C., & Sundararajah, B. (2005). eBag: A Ubiquitous Web Infrastructure for Nomadic Learning. Paper presented at the 14th International Conference on World Wide Web, May 10-14, 2005, Chiba, Japan.

Dron, J. (2007). Designing the Undesignable: Social Software and Control. Educational Technology & Society, 10 (3), 60-71.

Katz, J. E., & Aakhus, M. (2002). Perpetual Contact: Mobile Communication, Private Talk, Public Performance, Cambridge: Cambridge University Press.

Lai, C.-Y., & Wu, C.-C. (2006). Using handhelds in a Jigsaw cooperative learning environment. Journal of Computer Assisted Learning, 22 (4), 284-297.

Lankshear, C., & Knobel, M. (2006). New Literacies: Everyday Practices and Classroom Learning, Berkshire, UK: Open University Press.

Milrad, M. (2003). Mobile Learning: Challenges, Perspectives and Reality. In K. Nyíri (Ed.), Mobile Learning: Essays on Philosophy, Psychology and Education, Vienna: Passagen Verlag, 25-38.

Norris, C., Soloway, E., & Sullivan, T. (2002). Examining 25 years of technology in U.S. education. Communications of the ACM, 45 (8), 15-18.

Roussos, G., Marsh, A. J., & Maglavera, S. (2005). Enabling Pervasive Computing with Smart Phones. IEEE Pervasive Computing, 4 (2), 20-27.

Tatar, D., Roschelle, J., Vahey, P., & Penuel, W. (2003). Handhelds Go to School: Lessons Learned. IEEE Computer, 36 (9), 30-37.

Thornton, P., & Houser, C. (2004). Using Mobile Phones in Education. Paper presented at the 2nd IEEE International Workshop on Wireless and Mobile Technologies in Education, March 23-25, 2004, Taoyuan, Taiwan.

Varshney, U. (2002). Multicast over Wireless Networks. Communications of the ACM, 45 (12), 31-37.

Wuthrich, C., Kalbfleisch, G., Griffin, T., & Passos, N. (2003). On-line Instructional Testing in a Mobile Environment. Journal of Computing Sciences in Colleges, 18 (4), 23-29.



Järvelä, S., Näykki, P., Laru, J., & Luokkanen, T. (2007). Structuring and Regulating Collaborative Learning in Higher Education with Wireless Networks and Mobile Tools. Educational Technology & Society, 10 (4), 71-79.

Structuring and Regulating Collaborative Learning in Higher Education with Wireless Networks and Mobile Tools

Sanna Järvelä, Piia Näykki, Jari Laru and Tiina Luokkanen
University of Oulu, Finland // sanna.jarvela@oulu.fi // Fax +358 8 553 3744

ABSTRACT

In our recent research we have explored possibilities to scaffold collaborative learning in higher education with wireless networks and mobile tools. The pedagogical ideas are grounded in concepts of collaborative learning, including the socially shared origin of cognition, as well as self-regulated learning theory. This paper presents our three design experiments on mobile, handheld-supported collaborative learning. All experiments are aimed at investigating novel ways to structure and regulate individual and collaborative learning with smartphones. In the first study a Mobile Lecture Interaction tool (M.L.I.) was used to facilitate higher education students' self-regulated learning in university lectures. In the second study smartphones were used as regulation tools to scaffold collaboration by supporting externalization of knowledge representations at individual and collaborative levels. The third study demonstrates how face-to-face and social-software-integrated collaborative learning supported with smartphones can be used for facilitating socially shared collaboration and community building. In conclusion, it is stressed that there is a need to place students in various situations in which they can engage in effortful interactions in order to build a shared understanding. Wireless networks and mobile tools will provide multiple opportunities for bridging different contents and contexts as well as virtual and face-to-face learning interactions in higher education.

Keywords

Collaborative learning, Higher education, Mobile tools, Self-regulated learning, Wireless networks

Introduction

Recent developments in mobile technologies have contributed to the potential to support learners studying a variety of subjects (Scanlon, Jones & Waycott, 2005; Sharples, 2000) in elementary education (Zurita & Nussbaum, 2007) as well as in higher education (Baggetun & Wasson, 2006; Näykki & Järvelä, 2007; Milrad & Jackson, in press). Furthermore, there have also been efforts to improve the performance of knowledge workers in workplace settings (Brodt & Verburg, 2007). The integration of social software (web 2.0) (Kolbitsch & Maurer, 2006; Cress & Kimmerle, 2007) and new mobile technologies (Kurti, Milrad & Spikol, 2007) has created interesting new possibilities for organizing novel learning and working situations.

In higher education much effort has been made to find new ways to support individual student learning, but also to find ways for effective collaboration. Previous studies have explored how mobile technology can be used to offer an additional channel for lecture interaction. The Classtalk project focused on giving lecturers the ability to pose questions to students (Dufresne, Gerace, Leonard, Mestre, & Wenk, 1996), while the eClass project provided facilities for structured capture of and access to classroom lecture activities (Abowd, 1999). Ratto, Shapiro, Truong and Griswold (2003) developed the ActiveClass application for encouraging lecture participation by using personal wireless devices. Furthermore, new possibilities of mobile technologies have been explored in order to support collaborative learning. The Interactive Logbook (Chan, Corlett, Sharples, Ting & Westmancott, 2005) provided technology for knowledge sharing and multimedia note-taking, while many campus-wide laptop initiatives have provided students with access to social computing tools, such as instant messaging or chat (Gay, Stefanone, Grace-Martin, & Hembrooke, 2001).

Overall, the general claim has been that when new technologies and software have been used in an educational setting, new learning opportunities have arisen. Thus far there have been plenty of case studies and design experiments where mobile technologies have been used for innovative pedagogical ideas and design studies. However, only a few studies give detailed arguments as to what these new opportunities are in terms of learning interaction and collaboration, and what exact processes mobile tools can scaffold. We claim that it is not only the learner being "mobile" that matters. A stronger argument for applying mobile tools in education is that of increasing students' opportunities for interaction and sharing ideas and thus, increasing opportunities for an active mind in multiple contexts (Dillenbourg, Järvelä, & Fischer, in press). In this paper we describe our current research focusing on exploring possibilities to scaffold collaborative learning in higher education with wireless networks and mobile tools. The pedagogical ideas are grounded in collaborative learning, including the socially shared origin of cognition, as well as self-regulated learning theory. This is to say that special effort has been put into enhancing and scaffolding collaborative learning as a cognitive, social, and motivated activity.

Structuring and regulating collaborative learning

Earlier research on collaborative learning has pointed out that shared understanding is not easy to achieve (Häkkinen & Järvelä, 2006; Leinonen, Järvelä & Häkkinen, 2005), and that students face difficulties engaging in learning and achieving their learning goals in a variety of learning contexts, including technology-supported learning environments (Volet & Järvelä, 2001). In order to favour the emergence of productive interactions and to improve the quality of learning, different pedagogical models and technology-based regulation tools have been developed to support collaboration between participants. One way to enhance the process of collaboration, as well as to integrate individual and group-level perspectives on learning, is to structure learners' actions with the aid of scaffolding or scripted cooperation (Fischer, Kollar, Mandl, & Haake, 2007). One of the ideas in this field is to design scripts, which can be defined as "a set of instructions prescribing how students should perform in groups, how they should interact and collaborate and how they should solve the problem" (Dillenbourg, 2002, p. 63), and which can be modified according to what kind of interaction, learning or outcomes are expected to be achieved. In scripted collaboration participants are supposed to follow the prescriptions and engage in the learning tasks.

In addition to scripting as a mechanism to structure collaboration, we suggest that structuring can be enriched with technology-based regulation tools, which offer an individual and a group of learners opportunities to self-regulate their collaborative learning processes. Self-regulated learning theory concerns how learners develop learning skills and use them effectively (Boekaerts, Pintrich & Zeidner, 2000). Self-regulated learners take charge of their own learning by choosing and setting goals, using individual strategies in order to monitor, regulate and control the different aspects influencing the learning process, and evaluating their actions. Eventually, they become less dependent on others and on the contextual features of a learning situation. Although research into self-regulation has traditionally focussed on the individual perspective, there is increasing interest in considering the mental activities that are part of self-regulated learning at the social level, with reference to concepts such as social regulation, co-regulation and shared regulation (McCaslin, 2004).

Järvelä, Volet and Järvenoja (2005) characterize self-regulated learning from three perspectives. Self-regulated learning focuses on the individual as a regulator of behavior and refers to the process of becoming a strategic learner by regulating one's cognition, motivation and behavior to optimize learning (Schunk & Zimmerman, 1994). Conceptualizing self-regulated learning as co-regulation has been influenced by socio-cultural theory and emphasizes the gradual appropriation of sharing common problems and tasks through interpersonal interaction (Hadwin, Wosney & Pontin, 2005; McCaslin & Hickey, 2001). The third perspective frames the regulation process in terms of shared cognition and recent research on collaborative learning, which is in essence the co-construction of shared understanding (Roschelle & Teasley, 1995). This is collective regulation, where groups develop shared awareness of goals, progress, and tasks through co-constructed regulatory processes, thereby sharing regulation processes as collective processes.

Self-regulated learning theory has been used in our studies as a theoretical framework to develop learning activities that give potential to individual and collaborative learning, stimulating active minds and interactions at individual and social levels. There are as yet no studies that use mobile technology for supporting self-regulated learning, but recently researchers working on self-regulated learning theory have made specific efforts to design technology that helps students develop better learning strategies and regulate their learning process (e.g. Winne et al., 2006).

Previous studies have explored how visualization as a form of regulation tool can be used for supporting individuals' understanding (Larkin & Simon, 1987) or awareness of each other's ideas (Fischer, Bruhn, Gräsel, & Mandl, 2002; Leinonen & Järvelä, 2006). Visualizing individuals' understanding can create a shared reference point for a group of learners, which supports focusing on central issues, for example shared or non-shared knowledge in a group's interaction (Pea, 1994). Opportunities have also been sought in computer-based regulation tools, which aim to promote cognitive regulation processes. Learning tools are meant to promote motivated learning from the point of view of the individual learner as well as to open new learning opportunities for social and interactive learning (Azevedo, 2005). This developmental work can be used for compensating weak study and collaboration skills in different interactive learning environments and for studying in different domains. Azevedo and his colleagues (Azevedo, Guthrie & Seibert, 2004) have investigated the effects of goal-setting conditions on the ability of learners to regulate their learning in a hypermedia environment. Their results show that students use various types of self-regulatory behavior in learning with hypermedia, such as planning, monitoring, strategy use, statements about task difficulty and demands, and interest statements, but that students differ in their ability to regulate their learning. Later studies have put effort into designing computer-based scaffolds for self-regulated learning (Azevedo & Hadwin, 2005). For example, in a study that focused on collaborative planning and monitoring of students working within an online scientific inquiry learning environment, Manlove, Lazonder and de Jong (2006) examined the effect of a tool designed to support planning and monitoring in a scientific inquiry into fluid dynamics on students' model quality. The results showed a significant correlation between planning and model quality, indicating an overall positive effect for the support tool.

Three studies of using wireless networks and mobile tools in higher education

In this paper our three design experiments on mobile, handheld-supported collaborative learning are presented in order to demonstrate different pedagogical models and levels of scaffolds for socially shared learning. Each experiment is aimed at investigating novel ways to structure and regulate collaborative learning with mobile tools.

Mobile lecture interaction tool for activating students' participation in the lecture interaction

The aim of this study was to explore how the Mobile Lecture Interaction tool (M.L.I.) can be used for regulating and supporting students' thinking and participation in the lecture interaction. We studied how higher education students used the M.L.I. tool during lectures and in what ways the students viewed the M.L.I. tool as a support for their learning.

Participating in the study were 173 higher education students (114 male and 59 female). The data were collected as part of authentic lecture situations in nine lectures: five lectures in economics studies, two in technical studies and two in educational psychology studies. The lecture interaction was supported with the M.L.I. tool, which was developed for this experiment (© Costa, 2006). The basic idea of the M.L.I. tool is as follows: using personal mobile devices (smartphones), students can anonymously ask questions, answer polls, and give feedback during the lecture (see Table 1). The tool allows every student and the lecturer to see the list of questions. Furthermore, students have the possibility to vote on presented questions. Voting raises a question's ranking in the display, encouraging the lecturer to give that question precedence.

Table 1. The M.L.I. pedagogical structure

Description of activities in the lecture, with the pedagogical idea of each:
Send a question: encourage students' cognitive activity and self-regulation in the lecture; engage students in the learning in the lecture.
Send a comment: enhance reflection.
Vote for a question or comment: enhance students' metacognition and engage students' learning in the lecture.
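
For readers interested in the interaction mechanics, a minimal sketch of the vote-driven ranking follows; the data structures and method names are illustrative assumptions, not the actual M.L.I. implementation.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Question:
    text: str       # questions are posted anonymously, so no author field
    votes: int = 0

@dataclass
class LectureSession:
    questions: List[Question] = field(default_factory=list)

    def ask(self, text: str) -> Question:
        question = Question(text)
        self.questions.append(question)
        return question

    def vote(self, question: Question) -> None:
        question.votes += 1

    def display_order(self) -> List[Question]:
        # Most-voted questions rise to the top of the shared display,
        # encouraging the lecturer to give them precedence.
        return sorted(self.questions, key=lambda q: q.votes, reverse=True)
```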

The data were collected via a questionnaire (including Likert-scale questions as well as open-ended questions), group interviews, lecture observations and log files. The process-oriented data, in the form of observations and log files, were collected in order to explore how students use M.L.I. tools as part of their lecture interaction, e.g. what kinds of questions or comments students present. The questionnaire and interview data were collected to explore how students reflected on the use of the M.L.I. tool. The results show that the students used the M.L.I. tool mostly for voting. The students reported that with the M.L.I. tool they were more active in thinking of questions and evaluating the meaning of the presented questions for themselves than they normally are during lectures. Furthermore, the use of the M.L.I. tool supported students' feelings of belonging to a group. The students mentioned that the use of the M.L.I. tool supported their engagement with the content of the lecture; their concentration did not stray as much as it normally did in lectures. Awareness of other students' questions offered the students new ideas and was therefore seen as valuable for their learning.



Mobile mind map tool for stimulating collaborative knowledge construction in groups

The aim of this study was to investigate the process of collaborative knowledge construction when technology and self-generated pictorial knowledge representations are used for visualizing individual and group-shared ideas (see Näykki & Järvelä, 2007). In particular, the aim was to find out how students contribute to the group's co-regulation of collaborative knowledge construction and use each other's ideas and cognitive tools as a provision for their jointly evolving cognitive systems.

The participants of this study were teacher education students (N = 13, 5 male, 8 female) who were randomly assigned to work in groups. Their work was scaffolded with the Mobile Mind Map tool (© Scheible, 2005, see Figure 1) and a problem-oriented pedagogical structure. Student activities were structured around different phases in which they brainstormed, explored real-life examples to visualize their thinking, and used pictures as knowledge representations to answer the learning task. The Mobile Mind Map tool allowed students to take pictures with a smart phone and to add text annotations to the pictures. The annotated pictures were sent to the server, where they were used to construct a mind map on the computer (see Table 2).

Table 2. "The Mobile Mind Map" pedagogical structure

Description of group activities, with the pedagogical idea and outcome of each:
1. Mind mapping with paper and pen: Grounding; outcome: mind map with paper and pen.
2. Campus area exploration for evidence with mobile phones: Inquiring; outcome: annotated pictures.
3. Mind mapping with the Mind Map tool and pictures on the laptop computer: Constructing; outcome: mind map with pictures.
4. Reflection on the experience: Reflecting; outcome: shared experiences.
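
The tool's core data flow, annotated pictures becoming nodes of a shared mind map, can be sketched as follows; the class and field names are assumptions for illustration, not the actual tool's data model.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class AnnotatedPicture:
    # A photo taken with the smart phone plus the text annotation typed on it.
    image_path: str
    annotation: str

@dataclass
class MindMapNode:
    label: str
    picture: Optional[AnnotatedPicture] = None
    children: List["MindMapNode"] = field(default_factory=list)

def attach_picture(parent: MindMapNode, picture: AnnotatedPicture) -> MindMapNode:
    """Turn a picture received from a phone into a new node of the shared map."""
    node = MindMapNode(label=picture.annotation, picture=picture)
    parent.children.append(node)
    return node
```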

Figure 1. The Mind Map tool

The process-oriented data involved students' videotaped face-to-face group activities (mind mapping with paper and pen, and mind mapping with the Mobile Mind Map tool and pictorial knowledge representations) as well as stimulated recall interviews (Ericsson & Simon, 1980). The data-driven qualitative content analysis revealed that pictures as self-generated knowledge representations were used for carrying individuals' abstract meanings. Furthermore, students' activity in processing each other's ideas further, as well as the level of cognitively challenging activities, indicates that pictorial knowledge representations and technology tools scaffolded co-regulation of collaborative knowledge construction. However, students were of the opinion that pictorial knowledge representations are challenging and that thorough negotiation is needed for grounding the pictures in the content discussions.

Mobile "Edufeeds" for creating shared understanding among virtual learning communities

The aim of this study was to explore how mobile technologies and social software (weblogs, wikis, RSS-aggregators and file-sharing services) can be used for scaffolding collaborative learning: sharing understanding and building virtual communities (see Näykki & Järvelä, 2007). The main assumption was that students' interactions would be enriched when the possibilities of social software are integrated into the learning situations, and thus, building virtual communities would be more fluent than in more traditional virtual learning environments.

The participants of this study (N = 22, 5 male and 17 female) worked in groups of 4-5 students for a period of three months. Group work was structured into different phases, which were facilitated with social software (weblogs, wikis and RSS-aggregators) as well as mobile phones and laptop computers (see Table 3). Visualizing ideas with pictorial knowledge representations was a tool for students to plan and monitor their individual-level and group-level learning processes.

Table 3. Pedagogical structure

Description of activities, with the pedagogical idea, outcome and technological tools of each:
Lecture (6 sessions): Theoretical grounding; outcome: theoretical concepts; tools: none.
Face-to-face group working sessions (6 sessions): Groups' grounding; outcome: shared concepts; tools: none.
Individual working sessions (6 sessions): Constructing, monitoring; outcome: constructing knowledge representations; tools: mobile phone, weblog, RSS-aggregator.
Group's meaning making sessions (2 sessions): Elaborating; outcome: negotiating knowledge representations; tools: weblog, wiki.
Virtual group working sessions (2 sessions): Constructing, monitoring; outcome: using knowledge representations; tools: weblog, wiki, RSS-aggregator.

The students were introduced to the content of the course in six lectures, and after each lecture the students reflected on the content of the lecture in groups. The task given to each group was first to reflect on the content and to name five important themes in the lecture. After that, the students were asked to choose one of the themes and to formulate their group's working problem based on that theme, with which they continued to work by finding real-life examples to represent their shared discussions. In practice, the group work was followed by a one-week phase of independent online work, where students were asked to use mobile phones to take pictures and/or video clips to represent their ideas of the learning content. While taking the pictures/videos, students were also asked to answer the following questions by typing a short description for the picture: what is the name of this picture, what does this picture represent, and how is it related to the learning content? The pictures and their descriptions were sent automatically to each student's own weblog. This same task continued after each lecture. Weblogs were used as personal journals, where students reflected on their ideas further by writing journal entries around the respective pictures/videos. Furthermore, students were asked to follow each other's contributions to their personal weblogs by using RSS-aggregators on their mobile phones. In the middle of the course, students had a "meaning-making session" where they reviewed all the group members' weblogs to see the pictorial material everyone had collected. The students were asked to introduce the pictures by explaining what they represent and to negotiate and choose, among the pictorial knowledge representations, those pictures that could represent the group's shared understanding. This session was held twice, in the middle of the course and at the end of the course, and each session was followed by a virtual group working phase, where students continued to share their ideas with the chosen pictorial knowledge representations.
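
A rough sketch of this flow is given below, under the assumption that an entry consists of a picture, its typed description, and a timestamp; it only illustrates what the weblog-plus-aggregator combination did, not how the actual services were implemented.

```python
from dataclasses import dataclass, field
from datetime import datetime
from typing import List, Tuple

@dataclass
class WeblogEntry:
    title: str         # "what is the name of this picture"
    description: str   # what it represents and how it relates to the content
    image_path: str
    posted: datetime = field(default_factory=datetime.now)

@dataclass
class Weblog:
    owner: str
    entries: List[WeblogEntry] = field(default_factory=list)

def aggregate(blogs: List[Weblog], since: datetime) -> List[Tuple[str, WeblogEntry]]:
    """What an RSS-aggregator on the phone would show: every group member's
    entries since the last check, newest first."""
    items = [(blog.owner, entry)
             for blog in blogs for entry in blog.entries
             if entry.posted >= since]
    return sorted(items, key=lambda pair: pair[1].posted, reverse=True)
```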

The data were collected using video observation, a questionnaire, and stimulated recall interviews. In addition, log files of the students' activities, and students' and groups' products in weblogs and wikis, were used as data. The results showed that the construction and sharing of knowledge representations scaffolded students' shared regulation of collaborative learning by activating their cognitive processes: explaining and elaborating their own understanding.

Conclusions

In this paper three different studies were presented in order to illustrate how collaborative learning can be structured and regulated in higher education with wireless networks and mobile tools. The pedagogical ideas derive from collaborative learning, including the socially shared origin of cognition, as well as self-regulated learning theory. Each study shows that, with a novel use of technology, certain aspects of collaboration can be supported.

The Mobile Lecture Interaction tool (M.L.I.) is an example of how individual self-regulation can be supported in university lectures. The results show that students' cognitive activities, such as metacognition and reflection, were stimulated by focusing on questions about the content of the lecture. This kind of learning tool can be used for compensating weak study skills in different domains (Azevedo, Guthrie & Seibert, 2004), and especially in university lectures. Even though the M.L.I. tool was credited as beneficial for lecture interaction, some students felt that there were too many things to concentrate on at the same time. Students pointed out that listening to the lecturer, writing lecture notes and using the M.L.I. tool at the same time was too much for them to handle. Therefore, aspects related to cognitive load (Kirschner, 2002) need to be recognized. To lower the cognitive load, the M.L.I. tool could be developed further so that students and the teacher could use questions and comments also after the lecture. Another possibility is structuring or scripting the lecture situation so that there are specific times for questions and comments, so that students would not feel that they are missing important things while they are using the M.L.I. tool.

The results of the Mobile Mind Map tool study imply that the tool can be used for enhancing co-regulation in terms of sharing and externalizing visual knowledge representations and developing them in interpersonal interactions. The students should be encouraged, when appropriate, to create, modify or co-design the learning environments in which they are working. Construction of external representations can be a successful learning strategy that not only helps students by externalizing and sharing their thoughts, but also provides a source of information for teachers about students' current task understanding (Butler & Cartier, 2004). Students' co-regulated learning can be supported by scaffolding the externalization of their own thinking and by letting them see what others are thinking and, when possible, continue their own and others' flow of thinking. However, externalizing knowledge representations is cognitively challenging. In this study pictures were used as external knowledge representations, and the study indicates that pictures can carry individuals' co-created meanings, but since those meanings are personal and ambiguous, negotiation processes are highly valuable.

The Edufeed study indicates that web 2.0 technologies can be designed to scaffold shared regulation processes within groups. In the study the students constructed a variety of pictorial knowledge representations and shared their meaning collaboratively. However, the students thought that it was challenging to represent their own ideas with pictures, and even more challenging to explain the representations to others. Nevertheless, the students who explained their pictures to others felt that the elaboration of pictures was beneficial for their learning. The results showed that the construction and sharing of knowledge representations activated students' self-regulated learning: explaining and elaborating their own understanding. Other studies from collaborative learning contexts have highlighted that in collaboration, individuals should share their understanding in discussion to converge their knowledge representations, which might lead to new shared knowledge representations (e.g. Dillenbourg & Traum, 2006; Fischer & Mandl, 2005; Roschelle, 1992). It is concluded that even though self-generated pictorial knowledge representations are often personal and ambiguous, they may give an impetus for sharing and explaining individual knowledge representations at the socially shared level.



In conclusion, this paper stresses the need to place students in various situations in which they can engage in effortful interactions in order to create opportunities for active minds (Dillenbourg, Järvelä, & Fischer, in press). Wireless networks and mobile tools will provide multiple opportunities for bridging different contents and contexts, as well as virtual and face-to-face learning interactions, in higher education. Since the learning environment in higher education is more open and less teacher-guided than at other educational levels, there is a need to increase students' opportunities for self-regulating their learning on an individual as well as a socially shared level. Wireless networks and mobile tools will provide future potential for developing learning in higher education, which needs to be explored in detail.

References

Abowd, G. D. (1999). Classroom 2000: An experiment with the instrumentation of a living educational environment. IBM Systems Journal, 38 (4), 508–530.

Azevedo, R. (2005). Computers as metacognitive tools for enhancing learning. Educational Psychologist, 40 (4), 193-197.

Azevedo, R., & Hadwin, A. (2005). Scaffolding self-regulated learning and metacognition – implications for the design of computer-based scaffolds. Instructional Science, 33 (5-6), 367-379.

Azevedo, R., Guthrie, J. T., & Seibert, D. (2004). The role of self-regulated learning in fostering students' conceptual understanding of complex systems with hypermedia. Journal of Educational Computing Research, 30 (1), 87-111.

Baggetun, R., & Wasson, B. (2006). Self-regulated learning and open writing. European Journal of Education, 41 (3-4), 453-472.

Boekaerts, M., Pintrich, P. R., & Zeidner, M. (2000). Handbook of self-regulation, San Diego, CA: Academic Press.

Brodt, T., & Verburg, R. (2007). Managing mobile work – insights from European practice. New Technology, Work and Employment, 22 (1), 52-65.

Butler, D. L., & Cartier, S. C. (2005). Multiple complementary methods for understanding self-regulated learning as situated in context. Paper presented at the Annual Meeting of the American Educational Research Association, April 11-15, 2005, Montreal, Canada.

Chan, T., Corlett, D., Sharples, M., Ting, J., & Westmancott, O. (2005). Developing interactive logbook: A personal learning environment. Paper presented at the 2005 IEEE International Workshop on Wireless and Mobile Technologies in Education, November 28-30, 2005, Tokushima, Japan.

Cress, U., & Kimmerle, J. (2007). A theoretical framework of collaborative knowledge building with wikis – a systemic and cognitive perspective. Paper presented at the 7th International Computer Supported Collaborative Learning Conference, July 16-21, 2007, New Brunswick, NJ, USA.

Dillenbourg, P. (2002). Over-scripting CSCL: The risks of blending collaborative learning with instructional design. In P. A. Kirschner (Ed.), Three worlds of CSCL: Can we support CSCL?, Heerlen: Open Universiteit Nederland, 61-91.

Dillenbourg, P., Järvelä, S., & Fischer, F. (in press). The evolution of research on computer-supported collaborative learning: From design to orchestration. To appear in Kaleidoscope Legacy Book.

Dillenbourg, P., & Traum, D. (2006). Sharing solutions: Persistence and grounding in multimodal collaborative problem solving. The Journal of the Learning Sciences, 15 (1), 121–151.

Dufresne, R. J., Gerace, W. J., Leonard, W. J., Mestre, J. P., & Wenk, L. (1996). Classtalk: A classroom communication system for active learning. Journal of Computing in Higher Education, 7 (2), 3-47.

Ericsson, K. A., & Simon, H. A. (1980). Verbal reports as data. Psychological Review, 87 (3), 215-251.

Fischer, F., Bruun, J., Gräsel, C., & Mandl, H. (2002). Fostering collaborative knowledge construction with visualization tools. Learning and Instruction, 12 (2), 213-232.

Fischer, F., Kollar, I., Mandl, H., & Haake, J. M. (2007). Scripting computer-supported collaborative learning – cognitive, computational, and educational perspectives, New York: Springer.

Fischer, F., & Mandl, H. (2005). Knowledge convergence in computer-supported collaborative learning: The role of external representation tools. The Journal of the Learning Sciences, 14 (3), 405–441.

Gay, G., Stefanone, M., Grace-Martin, M., & Hembrooke, H. (2001). The effects of wireless computing in collaborative learning environments. International Journal of Human-Computer Interaction, 13 (2), 257-276.

Hadwin, A. F., Wozney, L., & Pontin, O. (2005). Scaffolding the appropriation of self-regulatory activity: A sociocultural analysis of changes in student-teacher discourse about a graduate research portfolio. Instructional Science, 33 (5-6), 413-450.

Häkkinen, P., & Järvelä, S. (2006). Sharing and constructing perspectives in web-based conferencing. Computers and Education, 47 (4), 433-447.

Järvelä, S., Volet, S., & Järvenoja, H. (2005). Motivation in collaborative learning: New concepts and methods for studying social processes of motivation. Paper presented at the EARLI 2005 conference, 22-27 August 2005, Nicosia, Cyprus.

Kirschner, P. A. (2002). Cognitive load theory. Learning and Instruction, 12 (1), 1-10.

Kolbitsch, J., & Maurer, H. (2006). The transformation of the Web: How emerging communities shape the information we consume. Journal of Universal Computer Science, 12 (2), 187-213.

Kurti, A., Milrad, M., & Spikol, D. (2007). Designing innovative learning activities using ubiquitous computing. Paper presented at the ICALT 2007 conference, July 18-20, 2007, Nagaoka, Japan.

Larkin, J. H., & Simon, H. A. (1987). Why a diagram is (sometimes) worth ten thousand words. Cognitive Science, 11 (1), 65-99.

Leinonen, P., & Järvelä, S. (2006). Facilitating interpersonal evaluation of knowledge in a context of distributed team collaboration. British Journal of Educational Technology, 37 (6), 897-916.

Leinonen, P., Järvelä, S., & Häkkinen, P. (2005). Conceptualizing the awareness of collaboration: A qualitative study of a global virtual team. Computer Supported Cooperative Work, 14 (4), 301-322.

Manlove, S., Lazonder, A. W., & De Jong, T. (2006). Regulative support for collaborative scientific inquiry learning. Journal of Computer Assisted Learning, 22 (2), 87-98.

McCaslin, M. (2004). Coregulation of opportunity, activity, and identity in student motivation. In D. McInerney & S. Van Etten (Eds.), Big theories revisited: Research on sociocultural influences on motivation and learning, Greenwich, CT: Information Age, 249-274.

McCaslin, M., & Hickey, D. T. (2001). Self-regulated learning and academic achievement: A Vygotskian view. In B. Zimmerman & D. Schunk (Eds.), Self-regulated learning and academic achievement: Theory, research, and practice, Mahwah, NJ: Lawrence Erlbaum, 227-252.

Milrad, M., & Jackson, M. (in press). Designing and implementing educational mobile services in university classrooms using smart phones and cellular networks. International Journal of Engineering Education.

Näykki, P., & Järvelä, S. (2007). Pictorial knowledge representations and technology tools for regulating collaborative learning. Paper presented at the 12th Biennial Conference for Research on Learning and Instruction, August 28-September 1, 2007, Budapest, Hungary.

Pea, R. D. (1994). Seeing what we build together: Distributed multimedia learning environments for transformative communications. The Journal of the Learning Sciences, 3 (3), 285-299.

Ratto, M., Shapiro, R. B., Truong, T. M., & Griswold, W. G. (2003). The ActiveClass Project: Experiments in encouraging classroom participation. Paper presented at the CSCL 2003 Conference, June 14-18, 2003, Bergen, Norway.

Roschelle, J. (1992). Learning by collaborating: Convergent conceptual change. The Journal of the Learning Sciences, 2 (3), 235–276.

Roschelle, J., & Teasley, S. (1995). The construction of shared knowledge in collaborative problem solving. In C. E. O'Malley (Ed.), Computer supported collaborative learning, Heidelberg: Springer, 69-97.

Scanlon, E., Jones, A., & Waycott, J. (2005). Mobile technologies: Prospects for their use in learning in informal science settings. Journal of Interactive Media in Education, 25, retrieved October 15, 2007, from http://jime.open.ac.uk/2005/25/scanlon-2005-25.pdf.

Schunk, D. H., & Zimmerman, B. J. (1994). Self-regulation of learning and performance: Issues and educational applications, Hillsdale, NJ: Erlbaum.

Sharples, M. (2000). The design of personal mobile technologies for lifelong learning. Computers & Education, 34 (3-4), 177–193.

Volet, S. E., & Järvelä, S. (2001). Motivation in learning contexts: Theoretical advances and methodological implications, Amsterdam: Elsevier Science.

Winne, P. H., Nesbit, J. C., Kumar, V., Hadwin, A. F., Lajoie, S. P., Azevedo, R. A., & Perry, N. E. (2006). Supporting self-regulated learning with gStudy software: The Learning Kit Project. Technology, Instruction, Cognition and Learning, 3 (1), 105-113.

Zurita, G., & Nussbaum, M. (2007). A constructivist mobile learning environment supported by a wireless handheld network. Journal of Computer Assisted Learning, 20 (4), 235-243.



Al-A'ali, M. (2007). Implementation of an Improved Adaptive Testing Theory. Educational Technology & Society, 10 (4), 80-94.

Implementation of an Improved Adaptive Testing Theory

Mansoor Al-A'ali

Department of Computer Science, College of Information Technology, University of Bahrain, Kingdom of Bahrain
malaali@itc.uob.bh // mansoor.alaali@gmail.com

ABSTRACT

Computer adaptive testing is the study of scoring tests and questions based on assumptions concerning the mathematical relationship between examinees' ability and their responses. Adaptive student tests, which are based on item response theory (IRT), have many advantages over conventional tests. We use the least squares method, a well-known statistical method, to estimate the IRT question parameters. Our major goal is to minimize the number of questions in the adaptive test needed to reach the final level of a student's ability, by modifying the equation for estimating the student ability level. This work is a follow-up on Al-A'ali (2007). We consider new factors, namely initial student ability, subject difficulty, number of exercises covered by the teacher, and number of lessons covered by the teacher. We compared our conventional exam results with the calculated adaptive results and used them to determine the IRT parameters. We developed the IRT formula for estimating the student ability level and had positive results in minimizing the number of questions in the adaptive tests. Our method can be applied to any subject and to school and college levels alike.

Keywords

IRT, Item response theory, Testing methods, Adaptive testing, Student assessment

Introduction

Item response theory (IRT) is the study of scoring tests and questions based on assumptions concerning the mathematical relationship between the examinee's ability (or other hypothesized traits) and the question responses. Adaptive student tests, which are based on IRT, have many advantages over conventional tests. The first advantage is that IRT adaptive testing contributes to reducing the length of the test, because the adaptive test gives the most informative questions when the student shows a mastery level in a certain field. Secondly, the test can be better tailored to individual students.

Adaptive assessment would undoubtedly improve methods of assessment, especially with the availability of computers in all schools. As we know, evaluation and assessment are an integral part of learning. A good objective test at the end of each learning objective can reveal a great deal about the level of understanding of the learner. The possibility of differential prediction of college academic performance was discussed by researchers (Young, 1991). A good analysis of item response theory was presented by Fraley, Waller, and Brennan (2000), who applied the theory to self-report measures of adult attachment. Error-free mental measurements resulting from applying qualitative item response theory to assessment and program validation, including a developmental theory of assessment, were discussed in Hashway (1998). The general applicability of item response models was extensively discussed by Stage (1997a, 1997b, 1997c, 1997d). The use of item response theory for the issue of gender bias in predicting college academic performance was discussed in Young (1991). Some researchers proposed implementing decision support systems for IRT-based test construction (Wu, 2000). The IRT algorithm aims to provide information about the functional relation between the estimate of the learner's proficiency in a concept and the likelihood that the learner will give the correct answer to a specific question (Gouli, Kornilakis, Papanikolaou, & Grigoriado, 2001). Figure 1 is a schematic representation of an adaptive test.

In a conventional test, two matters are considered: the time and the length of the test. In an adaptive test (Figure 1), possible termination criteria are:

1. The number of questions posed exceeds the maximum number of questions allowed.
2. The accuracy of the estimation of the learner's proficiency reaches the desired value.
3. Time limitations: most popular adaptive tests have a time limit. Although a time limit is not necessary in adaptive testing, it can be beneficial; students who spend too much time on tests may get tired, which can negatively affect the score.



4. No more relevant items in the item bank: when the item bank is small, or questions with a difficulty level suitable for the student do not exist, the test must be terminated.
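To make the interplay of these criteria concrete, here is a minimal, illustrative Python sketch of an adaptive-test loop. It is not the paper's actual system (which, as described later, was implemented in C++ with an Access database): the item tuple layout, the nearest-difficulty selection rule, and the crude step-size update are simplifying assumptions chosen only to show where each termination criterion fires; realistic information-based selection and updating are sketched further below.

```python
import math
import time

def run_adaptive_test(item_bank, answer_fn, theta0=0.0,
                      max_items=30, target_se=0.3, time_limit_s=None):
    """Administer an adaptive test until one of the four criteria fires.
    item_bank: list of (item_id, difficulty_b) tuples (hypothetical layout).
    answer_fn(item) -> 0 or 1, standing in for the examinee's response."""
    start = time.monotonic()
    theta, used, se = theta0, set(), 1.0
    while True:
        if len(used) >= max_items:                            # criterion 1
            break
        if se < target_se:                                    # criterion 2
            break
        if time_limit_s and time.monotonic() - start > time_limit_s:
            break                                             # criterion 3
        remaining = [it for it in item_bank if it[0] not in used]
        if not remaining:                                     # criterion 4
            break
        # toy selection rule: item whose difficulty is nearest the estimate
        item = min(remaining, key=lambda it: abs(it[1] - theta))
        correct = answer_fn(item)
        used.add(item[0])
        step = 1.0 / len(used)             # shrink adjustments over time
        theta += step if correct else -step
        se = 1.0 / math.sqrt(len(used))    # toy precision proxy
    return theta
```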

Figure 1. A schematic representation of an adaptive test

Conventional exams suffer from certain problems that must be considered carefully. First, assessments and assignments are normally given to test the different capabilities of students ranging from poor to excellent, and thus all students have to answer the same standard questions, which mostly do not match their own capabilities, being either too easy or too hard for them. This means that students with high capabilities may waste their time solving average assignments that do not excite, challenge, or interest them. Another aspect to keep in mind is that a score of 75% on a test containing easy items has a different meaning from a score of 75% on a test containing difficult items (Segall, 2000). Second, testing for knowledge and understanding in the context of a specific course typically involves administering the same set of test items or questions to all the students enrolled in the course, usually at the same examination sitting.

When we consider classical test theory (CTT), we realize that CTT has a number of deficiencies. One of the problems with CTT is that it is test-oriented rather than item-oriented; that is, in the classical true-score model there is no regard for an examinee's response to a given item. As a result, CTT does not allow predictions to be made about how an individual or group of examinees will perform on a given item (Hildegarde & Jacobson, 1997).

To form a complete understanding of the assessment process in schools, we look at some drawbacks of pencil-and-paper tests. The main drawbacks of conventional pencil-and-paper tests concern scoring and feedback. Instructors need a lot of time to correct test papers, which means that examinees cannot be informed of the result immediately after completing the test. We know that immediate feedback is important to students for psychological reasons: it is motivational, helps them focus, and tells them whether they have to work harder.

One of the solutions to these problems is to use computer adaptive tests (CATs). Advantages of CATs can include shorter and quicker tests, flexible testing schedules, increased test security, better control of item exposure, better balancing of test content areas for all ability levels, quicker test-item updating, quicker reporting, and a better test-taking experience for the test-taker. CATs are widely used these days, and they give good results in many educational fields. CATs are used in many professional certification programs. Novell successfully introduced CATs into its certification program in 1991. The Educational Testing Service, the world's largest testing organization, published the Graduate Record Exam (GRE) as an adaptive test in 1993. TOEFL also uses a CAT. The Nursing Boards converted completely from paper-based testing to a computerized adaptive test in 1994.

Assessments can guide improvement, presuming they are valid and reliable, if they motivate adjustments to the educational system (Shute & Towle, 2003). "The question is no longer whether assessment must incorporate technology. It is how to do it responsibly, not only to preserve the validity, fairness, utility, and credibility of the measurement enterprise but, even more so, to enhance it" (Bennett & Persky, 2002).

Intelligent tutoring systems permit the modeling of an individual learner, and with that modeling comes the knowledge of how to perform an individualized assessment. Examinations can once again be tailored to meet the individual needs of a particular learner. In contrast with paper-and-pencil multiple-choice tests, new assessments for complex cognitive skills involve embedding assessments directly within interactive, problem-solving, or open-ended tasks (Bennett & Persky, 2002).

One of our main objectives in this research was to study IRT in order to reduce the number of questions needed before reaching stability. It has been shown (Gouli et al., 2001; Eggen & Straetmans, 2000) that after 13–15 questions the level of a student's ability becomes stable; therefore, the length of time required to complete an adaptive test is shorter than that required for a pencil-and-paper test. Our hypothesis is that the starting level of the adaptive test is arbitrary. Our starting point in the adaptive test was the conventional level that we obtained from stage 1 of the method previously mentioned. Our target was to reach a stable level after only seven questions, hence reducing the length of the test and the time required to complete it. In order to test our hypothesis, we had to build a system and evaluate and enhance IRT. We measured the effectiveness of IRT by comparing the students' achievement levels under the new IRT-based system with the results achieved by the students after taking an ordinary written test prepared by the teachers.

Testing, Adaptive Testing, and Item Response Theory

Linear tests are not adaptively administered and are thus placed on the far-left side of Figure 2. Items on these tests are presented in sequence, that is, linearly: the examinee is presented with the first item, then the second, then the third, and so on, in a predetermined fashion. Linear tests administered on the computer are also known as fixed-form tests.

In linear-on-the-fly testing (LOFT), unique, fixed-length tests are constructed for each examinee. The target content and psychometric specifications should be met in constructing the test. However, the examinee's proficiency level is not a consideration when constructing the test form; thus, these tests are not adaptive. A large pool of items is needed to develop this type of test because the test forms should be unique. The benefits of constructing LOFT tests lie in limiting item exposure and in meeting rigorous content-ordering requirements. Therefore, the LOFT model has one major advantage over the linear model: improved security that comes from presenting different items across forms.

Figure 2. Types of testing

Testlets are groups of items that are considered a unit and administered together. Usually, testlets are constructed based on previous knowledge of the difficulty of the items or based on content. More specifically, testlets are developed according to the order of the items' difficulty or their ability to meet content specifications. Testlets are presented to examinees in units; within a testlet, examinees are given the opportunity to review, revise, and omit items. Items within a testlet may be assembled by similarity in level of difficulty, by subject matter, or both. Hence, multistage testing becomes possible.

Mastery model tests are developed to provide accurate information about mastery or non-mastery. The main goals of mastery models are (1) covering the content domain and (2) making accurate mastery decisions. There are various ways to implement mastery models, and they all share a major advantage, namely efficiency, which shows in the ease of classifying all examinees based on simple rules. Eggen and Straetmans (2000) and Rudner (2002) provide good descriptions of classifying examinees into three categories.

Tests based on CAT delivery models present items depending on the performance of the examinee. The items that are presented have been pre-tested, and item parameter estimates have been calculated. Using this information, examinees receive items that match their proficiency level at that time. Adaptive assessment systems can ask the most informative questions and determine when the student has displayed mastery of a particular concept, at which point there is no need to pose further test items. The test may thus be better tailored to individual students.



Adaptive assessment has two main goals. First, the length of the test may be decreased, because the adaptive test gives the most informative questions when the student shows a mastery level in a certain field. Second, the test may be better tailored to individual students. Adaptive assessment can provide an accurate estimation of the learner's proficiency in an efficient way without forcing him/her to answer questions that are either too easy or too difficult (Gouli et al., 2001).

In the example shown in Figure 3, the examinee's level was about 50 out of 100, i.e., his capability for answering questions of varying difficulty was average. The first question the computer gave the examinee was of level 52, i.e., slightly more difficult than his initial estimated capability. The examinee correctly answered the first two questions and his level rose. When he answered the third question incorrectly, his level went down. The process continued until there was minimal error in estimating the examinee's level; the computer program became more and more certain that the examinee's ability level was close to 50 (Linacre, 2000).

Figure 3. Example CAT test administration

In this research project, we use item response theory (IRT). IRT is a modern test theory designed to address the shortcomings inherent in classical test theory methods for designing, constructing, and evaluating educational and psychological tests (Hambleton, Swaminathan, & Rogers, 1991). One of the functions of IRT is to plot respondent behavior across the continuum of abilities underlying the concept of interest. In other words, IRT is adaptive.

IRT was used in the implementation of adaptive assessments because the proficiency estimate is essentially independent of the particular set of questions selected for the assessment. Each learner gets a different set of questions with different difficulty levels while taking the adaptive assessment (Weiss, 1983). The IRT-based item selection strategies (Weiss, 1983) are the maximum-information item selection strategy and the Bayesian item selection strategy.

Kingsbury and Weiss used the maximum-information item selection strategy, in which the item pool is searched for the item that gives maximum information about the examinee. In the Bayesian item selection strategy, used by McBride and Martin, the item selected from the pool is the one that will maximally reduce the posterior variance of the individual's ability estimate. The Bayesian strategy uses prior information about the examinee more completely than the maximum-information strategy does (Weiss, 1983).
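As an illustration of the maximum-information strategy, the following sketch scores each remaining item by its Fisher information at the current ability estimate and picks the best one. It assumes the two-parameter logistic model, for which the item information is I(θ) = a²P(1 − P), and a hypothetical item tuple layout (item_id, a, b); the paper does not prescribe this exact representation.

```python
import math

def item_information(theta, a, b):
    """Fisher information of a two-parameter logistic item at ability theta:
    I(theta) = a^2 * P(theta) * (1 - P(theta))."""
    p = 1.0 / (1.0 + math.exp(-a * (theta - b)))
    return a * a * p * (1.0 - p)

def select_max_information(remaining_items, theta):
    """Maximum-information selection: the unadministered item that tells us
    most about an examinee near the current ability estimate.
    remaining_items: list of hypothetical (item_id, a, b) tuples."""
    return max(remaining_items,
               key=lambda it: item_information(theta, it[1], it[2]))
```

A Bayesian variant would instead pick the item that minimizes the expected posterior variance of θ, which uses the prior information about the examinee more fully.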

Eggen and Straetmans (2000) showed that optimum item selection is largely responsible for the major efficiency gains of CAT over a fixed, linear paper-and-pencil test. Efficiency gains mean that fewer items are required to assess candidates with the same degree of accuracy, or that candidates can be assessed much more accurately with the same number of items. Table 1 shows that, compared to the pencil-and-paper test, CAT with maximum-information selection achieves a higher percentage of correct decisions while requiring far fewer items.



Table 1. CAT and a linear mathematics intake test

Method                       Average number of items    % correct decisions
Paper-and-pencil             25                         87.0
CAT (maximum information)    14.2                       88.3
CAT (random selection)       20.2                       85.2

Adaptive Assessment Algorithm

The IRT algorithm illustrated in Figure 1 aims to provide information about the functional relation between the estimate of the learner's proficiency in a concept and the likelihood that the learner will give the correct answer to a specific question (Gouli et al., 2001).

Item Characteristic Curve in IRT

The item characteristic curve (ICC) is the basic building block of item response theory. An item characteristic curve has two technical properties. The first is the difficulty of the item: under item response theory, the difficulty of an item describes where the item functions along the ability scale. The second is discrimination, which describes how well an item can differentiate between examinees with abilities below the item location and those with abilities above it (Baker, 2001).

At each ability level, there will be a certain probability that an examinee with that ability will give a correct answer to the item. This probability is denoted by P(θ), and its formula is given by

\[ P(\theta) = \frac{1}{1 + e^{-L}} = \frac{1}{1 + e^{-a(\theta - b)}} \]

where:
b is the difficulty parameter,
a is the discrimination parameter,
L = a(θ − b) is the logistic deviate (logit), and
θ is an ability level.

The importance of item discrimination comes from the fact that it reflects the strength of the relationship between a test item and the underlying (and unobservable) attribute being measured, for example, knowledge or learning.

In the case of a typical test item, P(θ) will be small for examinees of low ability and large for examinees of high ability (Baker, 2001). Adding one more factor, c, the formula becomes (Gouli et al., 2001):

\[ P(\theta) = c + \frac{1 - c}{1 + e^{-2(\theta - b)}} \]

where c is unknown. Notice that a = 2 is fixed for the purpose of simplification.
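For readers who prefer code, here is a direct transcription of the two formulas above into a single Python function (an illustrative sketch; the parameter names follow the text, and the example values are invented):

```python
import math

def p_correct(theta, b, a, c=0.0):
    """Probability of a correct response under the logistic IRT model.
    With c = 0 this is P(theta) = 1 / (1 + e^{-a(theta - b)});
    with a nonzero c it is c + (1 - c) / (1 + e^{-a(theta - b)}).
    The paper's simplified form fixes a = 2."""
    return c + (1.0 - c) / (1.0 + math.exp(-a * (theta - b)))

# Example: an item of difficulty b = 0.5 with a fixed at 2 and c = 0.2,
# evaluated for an average examinee (theta = 0).
print(p_correct(0.0, 0.5, a=2.0, c=0.2))   # ~0.41
```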

Item Information Function

The item information function (IIF) is a very important quantity in IRT. The IIF is used in estimating the value of the ability parameter for an examinee. Moreover, it is related to the standard deviation of the ability estimate. If the amount of information is large, it means that an examinee whose true ability is at that level can be estimated with precision; that is, all the estimates will be reasonably close to the true value. If the amount of information is small, it means that the ability cannot be estimated with precision, and the estimates will be widely scattered about the true ability (Baker, 2001).

In statistics, Sir R. A. Fisher defined information as the reciprocal of the precision with which a parameter could be estimated (Baker, 2001). Statistically, the precision with which a parameter is estimated is measured by the variability of the estimates around the value of the parameter. The amount of information is given by the formula

\[ I = \frac{1}{\sigma^2} \]

where σ² is the variance of the estimators.
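As a quick worked check of this relation (the same two numbers reappear later as the values of I_initial in the improved model):

\[ \sigma = 0.45 \;\Rightarrow\; I = \frac{1}{0.45^2} \approx 4.9 \approx 5, \qquad \sigma = 1 \;\Rightarrow\; I = \frac{1}{1^2} = 1. \]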

Estimating Item Parameters

I) Rasch model for estimating item parameters
Mathematical analysis shows that the Rasch model is statistically strong. Rasch model estimates are sufficient, consistent, efficient, and unbiased. The model estimates the difficulty parameter and the student ability (student level). There is an efficient method for approximating parameter estimates that can easily be calculated by hand. The drawback of this model is that it has no guessing factor or discrimination parameter. Besides, estimating item parameters and student abilities in a test with 20 or 30 items requires at least 100 examinees.

II) Chi-square goodness of fit
When the conventional test finishes, we observe a sample of M examinees' responses to N items in the exam. According to the examinees' achievement results, we determine the examinees' ability levels, which will be distributed over the ability scale. According to Baker, "The agreement of the observed proportions of correct response and those yielded by the fitted item characteristic curve for an item is measured by the chi-square goodness-of-fit index" (2001).

III) Method of least squares
This method works when the regression is approximately linear. The equation of a straight line is determined by the points that it passes through (a sketch of this idea applied to item calibration follows the list).

IV) Level estimation method
The student level estimator approach is a modification of the Newton-Raphson iterative method for solving equations, outlined by Lord (1980).
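The least-squares idea in III can be made concrete: under the two-parameter logistic model, logit(P) = a(θ − b) = aθ − ab is linear in θ, so fitting a straight line to the logits of the observed proportions correct recovers a and b. The following is an illustrative sketch under that assumption, not the paper's exact procedure; the clipping constant and the use of numpy.polyfit are choices of this sketch.

```python
import numpy as np

def least_squares_item_params(thetas, p_observed, eps=1e-3):
    """Estimate (a, b) for one item from proportions of correct answers
    observed at several ability levels. Since logit(P) = a*theta - a*b,
    the fitted slope is a and the intercept is -a*b."""
    p = np.clip(np.asarray(p_observed), eps, 1.0 - eps)  # keep logits finite
    y = np.log(p / (1.0 - p))                            # logit transform
    slope, intercept = np.polyfit(np.asarray(thetas), y, 1)
    a = slope
    b = -intercept / slope
    return a, b
```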

Item Types

I) Dichotomous items
With dichotomous items, there is only one correct answer: the examinee either answers the question correctly and gets the full mark or answers it incorrectly and gets no mark. The common dichotomous item types are:

Multiple choice: this kind of question gives a number of options to choose from (usually four), and the examinee has to choose the correct option.
True–false: this kind of question presents a statement, and the examinee has to decide whether it is correct or incorrect.
Short answer: this kind of question provides a space that the student has to fill in with a short answer. If the answer is fixed and no alternative answer is correct, then the question is a dichotomous item.

II) Polytomous items
Items in this category can be partly correct; the examinee might solve part of the question correctly and will receive part of the mark.

Results

Before estimating question parameters and starting the adaptive test, students were given conventional tests. Students answered about 100 questions in three stages; in each stage, they solved an exam of 34 questions. Forty-five students of the intermediate class were examined in these conventional tests. The questions were of first-intermediate level and related to three math topics of that level.

The least squares method was used to estimate the difficulty level and discrimination parameter of each question. We used this method because it decreases the processing load and requires no statistical tables such as chi-square.



It is agreed that the difficulty level ranges from +3, a very difficult level, to –3, a very easy level (Baker, 2001). Table 2 shows that the difficulty of the questions ranges from +2.9 to –2.9, which means that the questions cover all possible ability levels of students. In Table 3, discrimination ranges from –0.3 to –3.15, with an average of –1.22. Positive discrimination values mean that the question is not valid and should be revised. There are questions with discrimination greater than –0.3; however, we do not include them in the item selection process because they fall outside the scope of almost all students being tested and would not contribute anything of value to the results.

Table 2. Analysis of question difficulty

Max difficulty    Min difficulty    Average difficulty
2.9               –2.9              0.51

Table 3. Analysis of discrimination

Max discrimination    Min discrimination    Average discrimination
–0.3                  –3.15                 –1.22

After the conventional test we tested the adaptive assessment on 24 students. Two of the students did not complete the exam. According to the hypothesis, we had to modify the formula to shorten the number of questions needed to reach the stable level. We started the adaptive test at each student's conventional level, with the consideration that we were 85% certain of it, that is, σ = 0.15. The graph in Figure 4 shows the conformance of the adaptive test level and the conventional test level.

Figure 4. Conformance between conventional and adaptive levels [line chart: conventional level vs. adaptive level across students 1–21, ability scale –3 to 4]

In Figure 4, about 20% of the points of the conventional levels are not close to their corresponding points at the adaptive level. While the average student level in the conventional tests is –0.17, the average in the adaptive tests is 0.031. Transforming these values to the 100 scale, we find that they are 47.14 and 50.51, respectively; the averages are close to each other. The difference between the average conventional level and the average adaptive level is –0.20, which on the 100 scale is –3.37.
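The transformation implied by these numbers appears to be the linear map of the ability scale [−3, +3] onto [0, 100] (an inference from the reported figures, not a formula stated in the paper):

\[ \text{score} \approx 50 + \frac{100}{6}\,\theta, \qquad \text{e.g. } \theta = -0.17 \mapsto 50 - 2.8 \approx 47.2, \]

which is close to the reported 47.14.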

As an example, the fourth point shows that the conventional level is close to the adaptive level: the conventional level is about 0.47 and the adaptive level is about 0.2, so the difference is 0.27, which on the 100 scale is about 4 marks. The nineteenth point shows that the conventional level is 0.13 and the adaptive level is 1.9; the difference between these points is 1.8, which on the 100 scale is about 29 marks.

The correlation between the conventional level and the adaptive level is 0.63, which is considered a high-moderate relation. The correlation is considered high if it is 0.70 and above.

It is not strange to have some deviation between the results of the two kinds of exams due to human error. After eliminating the 20% of points that are very far from their corresponding points, we get Figure 5, which is more accurate than Figure 4. The average difference between the conventional level and the adaptive level in this case is 0.028; on the 100 scale, it is 0.46.

After the elimination of 20% of the points, the correlation increases to 0.74. This value is considered high.

Figure 5. Conformance between conventional and adaptive levels [line chart: conventional level vs. adaptive level across the remaining students 1–15, ability scale –3 to 3]

In Figure 6, we used the subject difficulty as the starting point of the adaptive test. The results here were obtained by simulating real students' results. Notice that the previous chart used the student's initial conventional level as the starting point. In this graph, the conformance of the two lines is acceptable. The students' conventional levels were the same as in the previous chart. While the average of these students' levels in the conventional tests is –0.17, the average in the adaptive tests is –0.0036. Transforming these values to the 100 scale, we find that they are 47.14 and 49.9, respectively; both charts have almost the same average. The average difference between the conventional levels and the adaptive levels is –0.12, which on the 100 scale is –2.8. The correlation between the conventional and adaptive test levels is 0.81. The average number of questions needed to reach the stable (final) level is 12.

A question arises after finding that both starting points, the student's conventional level and the difficulty of the subject, reached final levels after an average of 12 questions: why do the levels of students change? This can be explained by the fact that students' levels changed depending on their readiness for the exam, motivation, time of the exam, health and other conditions, and the fact that some students wanted to get better results. However, these changes were within an acceptable range in general.

Figure 6. Conformance between conventional and adaptive levels [line chart: conventional vs. adaptive level across students 1–21, ability scale –3 to 4]

Our improved IRT formula and model

One of the objectives was to find a new formula that adds new factors to the IRT model. These factors are initial student ability, subject difficulty, number of exercises covered by the teacher, and number of lessons covered by the teacher. As shown in Figure 7, the number of lessons covered and the number of exercises covered directly affect the student's ability level.



Figure 7. Our newly added factors in the IRT model [diagram: the number of exercises covered, the number of lessons covered, the initial student ability level, and the subject difficulty feed into measuring the student level adaptively]

Moreover, an increase in students' ability levels will in turn increase the ease of the subject, and vice versa. As a matter of fact, the number of exercises and the number of lessons covered do not by themselves measure students' levels, because the difficulty of the subject affects these two factors directly. For instance, if the subject is very difficult, increasing the number of exercises and lessons might not help in producing good student results. Therefore, further research should measure these factors under different subject difficulties.

For now, we considered these two factors: subject difficulty and initial student ability level. The new factors give the formula its new shape:

\[ \theta_{n+1} \;=\; \theta_n \;+\; \frac{\sum_{i=1}^{n} S_i(\theta_n)}{I_{\mathrm{initial}} + \sum_{i=1}^{n} I_i(\theta_n)} \]

where the starting level is

\[ \theta_0 \;=\; \begin{cases} \theta_{\mathrm{init}} & \text{(the initial student level from the database), if } \theta_{\mathrm{init}} \text{ exists} \\ \theta_{\mathrm{diffLevel}} & \text{(the difficulty level of the subject), if } \theta_{\mathrm{init}} \text{ does not exist} \end{cases} \]

and I_initial is

\[ I_{\mathrm{initial}} \;=\; \begin{cases} 5 & \text{if } \theta_{\mathrm{init}} \text{ exists} \\ 1 & \text{if } \theta_{\mathrm{init}} \text{ does not exist.} \end{cases} \]

The value 5 means that the standard error σ equals 0.45, because we know that θ0 gives some certainty about the level; in fact, we used this value because it is near the middle of the student ability scale and will not negatively affect the estimation process. The value 1 means that the standard error σ = 1, which means that we are not sure and know nothing about the student level.
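A sketch of one update step of this modified estimator follows. The paper leaves the exact forms of S_i and I_i to Lord (1980); here they are instantiated with the standard two-parameter logistic scoring function S_i(θ) = a_i(u_i − P_i) and item information I_i(θ) = a_i²P_i(1 − P_i), which is an assumption of this sketch rather than the paper's specification.

```python
import math

def update_theta(theta, responses, items, info_initial):
    """One step of theta_{n+1} = theta_n + sum(S_i) / (I_initial + sum(I_i)).
    responses: 0/1 answers so far; items: matching list of (a, b) pairs;
    info_initial: 5 if a stored initial level exists, 1 otherwise.
    S_i and I_i below are the standard 2PL forms, assumed for illustration."""
    num, den = 0.0, info_initial
    for u, (a, b) in zip(responses, items):
        p = 1.0 / (1.0 + math.exp(-a * (theta - b)))
        num += a * (u - p)               # score function S_i(theta)
        den += a * a * p * (1.0 - p)     # item information I_i(theta)
    return theta + num / den
```

Because info_initial enters the denominator, a confident starting level damps early swings of the estimate, which is consistent with the paper's aim of reaching a stable level in fewer questions.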

Figure 8 shows the performance of a typical student; it is one of the ideal results that we found. This student started at a level of 46 and was almost stable for the first 7 questions. Then his level increased slowly until it stabilized at 50. The difference is only 4 marks. This change is expected because he might be a little better or a little worse than his conventional level.

Now consider Figure 9. About 57% of the students follow the best pattern, in which their levels changed little up or down. Eighteen percent of the students fluctuated in their levels, 15% decreased constantly, and 10% increased constantly. Most students were actually at their correct levels from the beginning of the exam, which shows that students start almost from the right level position. Students whose level changed constantly, increasing or decreasing, were the excellent or the weak students. Students who fluctuated in their level were students who hesitated or had revised only one or two of the three exam topics.

Figure 8. Number of questions vs. level of student [line chart: ability level (0–100) across questions 1–19]

Figure 9. Number of questions vs. level of a student [line chart: student level and average across items/questions 1–27]

The improved IRT System Implementation

Figure 10. System context diagram

The model shown in Figure 10 was implemented using C++ as the programming platform and Access as the database. The system provides a number of functionalities through a number of processes, as illustrated in Figures 11, 12, 13, and 14. The system covers a number of functional requirements: a login process; a student registration process; adding new exams and their questions; creating, deleting, and editing exam questions; calculating and saving question factors according to IRT; and a wide variety of reports on students' results. Most importantly, the system implements the adaptive testing process for each student and for a group of students in a given class.

Figure 11. The main processes of the system [data-flow diagram: the admin, instructor, and student actors interact with process 1 'Login in', process 2 'Accounts & Basics', process 3 'Question Generator', process 4 'Examine', and process 5 'Reporting', backed by the exam storage data store (D1) and a data store for new, deleted, or edited data (D2)]

Exam selection

The system provides a search based on criteria such as the exam creation date and the subject of an appropriate exam. The student can select the exam he/she wants to take and answer its questions. The system makes sure that the IRT factors exist for each question of that exam. The system selects an appropriate question from the database depending on the student's level and presents it to the student. Depending on the student's answer, the system calculates and saves the level of the student as he/she answers each question, using IRT. If the stopping criteria are met, the system finishes the exam.

In order to achieve all the functional requirements through the system processes, the system uses a database consisting of a number of tables or data stores, including: student table, instructor table, admin table, exam table, question table, question choice table, class table, stage table, teach table, subject table, question solution table, and student-level table.

The context diagram shown in Figure 10 gives an overall view of the system. There are three main players in the system, namely the instructor, the student, and the system administrator.

Figure 11 shows the main processes of the system. Process 1 is the main process responsible for auditing the login activities of all types of users. Process 2 is responsible for all user accounts, including new classes, subject details, and instructor information; it is further detailed in Figure 12. Process 3 is the question-generator process that stores all questions in the exam storage data store; it is further detailed in Figure 13. This process receives from the instructor the new exam questions, the edited questions, and the deleted questions. Process 4 is the process for conducting the exams. It deals with the student and allows exam selection, shows question information, and provides question solutions. Process 4 is further detailed in Figure 14.

Figure 12. Detailed questions process [data-flow diagram for process 2: sub-processes 2.1 'Add Stage', 2.2 'Add Subject', 2.3 'Add Class', 2.4 'Edit stage', 2.5 'Edit Subject', 2.6 'Edit Class', 2.7 'Delete stage', 2.8 'Delete Subject', 2.9 'Delete Class', 2.10 'Add-user', 2.11 'Edit user info', 2.12 'Delete user', and 2.13 'Self-edit user info', with stage, subject, class, and user storage]


Figure 13. Detailed question generator process [data-flow diagram for process 3: 3.1 'Define Exam or Assignment', 3.2 'Generate Question', 3.3 'Delete Exam or assignment', 3.4 'Search exam', 3.5 'Selecting Question', 3.6 'Delete Question', 3.7 'Editing Question', 3.8 'Update Exam or assignment info', and 3.9 'Estimate Question Factors (IRT)', with question and solutions files]

Figure 14. Detailed question solution process [data-flow diagram for process 4: 4.1 'Select Exam with exam type', 4.2 'Select Question', 4.3 'Select Question adaptively', 4.4 'Solve question', 4.5 'Update student level', and 4.7 'Terminate', with question storage and the student file]


Conclusion

This paper presented a description of adaptive testing based on IRT and experimented with IRT in order to evaluate its applicability and benefits. It also presented enhancements to IRT. Our estimation of the IRT question parameters was based on the least squares method, a well-known statistical method. We demonstrated that it is possible to reduce the number of questions in the adaptive test needed to reach the final level of the students by modifying the equation for estimating the student's ability level. In order to make IRT more realistic and applicable, we incorporated new factors into it, namely initial student ability, subject difficulty, number of exercises covered by the teacher, and number of lessons covered by the teacher. Students' attitudes toward adaptive tests and their results were measured by a questionnaire. Our conventional exam results were compared with the adaptive results and used in determining the IRT parameters. We have presented an enhancement and modification of the IRT formula for estimating the student ability level and had positive results in minimizing the number of questions in adaptive tests.

References

Al-A'ali, M. (2007). A method for improving adaptive testing by evaluating and improving the item response theory. WSEAS Transactions on Information Science and Applications, 4 (3), 466-471.

Baker, F. (2001). The basics of item response theory, ERIC Clearinghouse on Assessment and Evaluation, College Park, MD: University of Maryland.

Bennett, R. E., & Persky, H. (2002). Problem solving in technology-rich environments. In C. Richardson (Ed.), Assessing gifted and talented children, London, England: Qualifications and Curriculum Authority, 19–33.

Eggen, T. J. H. M., & Straetmans, G. J. J. M. (2000). Computerized adaptive testing for classifying examinees into three categories. Educational and Psychological Measurement, 60, 713–734.

Fraley, R. C., Waller, N. G., & Brennan, K. A. (2000). An item response theory analysis of self-report measures of adult attachment. Journal of Personality and Social Psychology, 78 (2), 350–365.

Gouli, E., Kornilakis, H., Papanikolaou, K., & Grigoriado, M. (2001). Adaptive assessment improving interaction in an educational hypermedia system, Greece: University of Athens.

Hambleton, R. K., Swaminathan, H., & Rogers, H. J. (1991). Fundamentals of item response theory, Newbury Park, CA: Sage Publications.

Hashway, R. M. (1998). Error-free mental measurements: Applying qualitative item response theory to assessment and program validation including a developmental theory of assessment, San Francisco, CA: Austin & Winfield.

Hildegarde, S., & Jacobson, Z. (1997). A comparison of early childhood assessments and a standardized measure for program evaluation, Ph.D. dissertation, Virginia Polytechnic Institute & State University, Blacksburg, VA.

Linacre, J. M. (2000). Computer-adaptive testing: A methodology whose time has come (MESA Memorandum No. 69), MESA Psychometric Laboratory, University of Chicago.

Lord, F. M. (1980). Application of item response theory to practical testing problems, Hillsdale, NJ: Lawrence Erlbaum Associates.

Rudner, L. M. (2002). An examination of decision-theory adaptive testing procedures. Paper presented at the annual meeting of the American Educational Research Association, April 1-5, 2002, New Orleans, USA.

Segall, D. O. (2000). Principles of multidimensional adaptive testing. In W. J. van der Linden & C. A. W. Glas (Eds.), Computerized adaptive testing: Theory and practice, Dordrecht: Kluwer Academic, 53-73.

Shute, V., & Towle, B. (2003). Adaptive e-learning, Hillsdale, NJ: Lawrence Erlbaum Associates.

Stage, C. (1997a). The applicability of item response models to the SweSAT: A study of the DTM subtest, retrieved October 15, 2007, from http://www.umu.se/edmeas/publikationer/pdf/enr2197sec.pdf.

Stage, C. (1997b). The applicability of item response models to the SweSAT: A study of the ERC subtest, retrieved October 15, 2007, from http://www.umu.se/edmeas/publikationer/pdf/enr2497sec.pdf.

Stage, C. (1997c). The applicability of item response models to the SweSAT: A study of the READ subtest, retrieved October 15, 2007, from http://www.umu.se/edmeas/publikationer/pdf/enr2597sec.pdf.

Stage, C. (1997d). The applicability of item response models to the SweSAT: A study of the WORD subtest, retrieved October 15, 2007, from http://www.umu.se/edmeas/publikationer/pdf/enr2697sec.pdf.

Weiss, D. J. (1983). New horizons in testing: Latent trait test theory and computerized adaptive testing, New York, NY: Academic Press.

Wu, I.-L. (2000). Model management system for IRT-based test construction decision support system. Decision Support Systems, 27 (4), 443–458.

Young, J. W. (1991). Gender bias in predicting college academic performance: A new approach using item response theory. Journal of Educational Measurement, 28 (1), 37-47.

94


Chang, S.-H., Lin, P.-C., & Lin, Z. C. (2007). Measures of Partial Knowledge and Unexpected Responses in Multiple-Choice Tests. Educational Technology & Society, 10 (4), 95-109.

Measures of Partial Knowledge and Unexpected Responses in Multiple-Choice Tests

Shao-Hua Chang
Department of Applied English, Southern Taiwan University, Tainan, Taiwan // shaohua@mail.stut.edu.tw

Pei-Chun Lin
Department of Transportation and Communication Management Science, National Cheng Kung University, Taiwan // peichunl@mail.ncku.edu.tw

Zih-Chuan Lin
Department of Information Management, National Kaohsiung First University of Science & Technology, Taiwan // u9324819@ccms.nkfust.edu.tw

ABSTRACT

This study investigates differences in the partial scoring performance of examinees in elimination testing and conventional dichotomous scoring of multiple-choice tests implemented on a computer-based system. Elimination testing that uses the same set of multiple-choice items rewards examinees with partial knowledge over those who are simply guessing. This study provides a computer-based test and item analysis system to reduce the difficulty of grading and item analysis following elimination tests. The Rasch model, based on item response theory for dichotomous scoring, and the partial credit model, based on graded item response for elimination testing, are the kernel of the test-diagnosis subsystem to estimate examinee ability and item-difficulty parameters. This study draws the following conclusions: (1) examinees taking computer-based tests (CBTs) have the same performance as those taking paper-and-pencil tests (PPTs); (2) conventional scoring does not measure the same knowledge as partial scoring; (3) the partial scoring of multiple choice lowers the number of unexpected responses from examinees; and (4) the different question topics and types do not influence the performance of examinees in either PPTs or CBTs.

Keywords<br />

Computer-based tests, Elimination testing, Unexpected responses, Partial knowledge, Item response theory<br />

Introduction<br />

The main missions of educators are determining learning progress and diagnosing difficulty experienced by students<br />

when studying. Testing is a conventional means of evaluating students, and testing scores can be adopted to observe<br />

learning outcomes. Multiple-choice (MC) items continue to dominate educational testing owing to their ability to<br />

effectively and simply measure constructs such as ability and achievement. Measurement experts and testing<br />

organizations prefer the MC format to others (e.g., short-answer, essay, constructed-response) for the following<br />

reasons:<br />

• Content sampling is generally superior to other formats, and the application of MC formats normally leads to highly content-valid test-score interpretations.
• Test scores can be extremely reliable with a sufficient number of high-quality MC items.
• MC items can be easily pre-tested, stored, used, and reused, particularly with the advent of low-cost, computerized item-banking systems.
• Objective, high-speed test scoring is achievable.
• Diagnostic subscores are easily obtainable.
• Test theories (i.e., item response, generalizability, and classical) easily accommodate binary responses.
• Most content can be tested using this format, including many types of higher-level thinking (Haladyna & Downing, 1989).

However, the conventional MC examination scheme requires examinees to evaluate each option and select one<br />

answer. Examinees are often absolutely certain that some of the options are incorrect, but still unable to identify the<br />

correct response (Bradbard, Parker, & Stone, 2004). From the viewpoint of learning, knowledge is accumulated<br />

continuously rather than on an all-or-nothing basis. The conventional scoring format of the MC examination cannot<br />



distinguish between partial knowledge (Coombs, Milholland, & Womer, 1956) and the absence of knowledge. In<br />

conventional MC tests, students choose only one response. The number of correctly answered questions is counted,<br />

and the scoring method is called number scoring (NS). Akeroyd (1982) stated that NS makes the simplifying<br />

assumption that all of the wrong answers of students are the results of random guesses, thus neglecting the existence<br />

of partial knowledge. Coombs et al. (1956) first proposed an alternative method for administering MC tests. In their<br />

procedure, students are instructed to mark as many incorrect options as they can identify. This procedure is referred<br />

to as elimination testing (ET). Bush (2001) presented a multiple-choice test format that permits an examinee who is<br />

uncertain of the correct answer to a question to select more than one answer. Incorrect selections are penalized by<br />

negative marking. The aim of both the Bush and Coombs schemes is to reward examinees with partial knowledge<br />

over those who are simply guessing.<br />

Education researchers have been continuously concerned not only about how to evaluate students’ partial knowledge<br />

accurately but also about how to reduce the number of unexpected responses. The number of correctly answered<br />

questions is composed of two numbers: the number of questions to which the students actually know the answer, and<br />

the number of questions to which the students correctly guess the answer (Bradbard et al., 2004). A higher frequency<br />

of the second case indicates a less reliable learning performance evaluation. Chan & Kennedy (2002) compared<br />

student scores on MC and equivalent constructed-response questions, and found that students do indeed score better<br />

on constructed-response questions for particular MC questions. Although constructed-response testing produces<br />

fewer unexpected responses than the conventional dichotomous scoring method, the change of item constructs raises<br />

the complexity of both creating the test and of the post-test item grading and analysis, whereas ET uses the same set<br />

of MC items and makes guessing a futile effort.<br />

Bradbard et al. (2004) suggested that the greatest obstacle in implementing ET is the complexity of grading and the<br />

analysis of test items following traditional paper assessment. Accordingly, examiners are not very willing to adopt<br />

ET. To overcome this problem, this study provides an integrated computer-based test and item-analysis system to<br />

reduce the difficulty of grading and item analysis following testing. Computer-based tests (CBTs) offer several<br />

advantages over traditional paper-and-pencil tests (PPTs). The benefits of CBTs include reduced costs of data entry,<br />

improved rate of disclosure, ease of data conversion into databases, and reduced likelihood of missing data (Hagler,<br />

Norman, Radick, Calfas, & Sallis, 2005). Once set up, CBTs are easier to administer than PPTs. CBTs offer the<br />

possibility of instant grading and automatic tracking and averaging of grades. In addition, they are easier to<br />

manipulate to reduce cheating (Inouye & Bunderson, 1986; Bodmann & Robinson, 2004).<br />

Most CBTs measure test item difficulty based on the percentage of correct responses. A higher percentage of correct<br />

responses implies an easier test item. This approach to test-item analysis disregards the relationship between the

examinee’s ability and item difficulty. For instance, if the percentage of correct responses for test item A is quite<br />

small, then the test item analysis system categorizes it as “difficult.” However, statistics also reveal that more failing<br />

examinees than passing examinees answer item A correctly. Therefore, the design of test item A may be<br />

inappropriate, misleading, or unclear, and should be further studied to aid future curriculum designers to compose<br />

high-quality items. To avoid the fallacy of percentage of correct responses, this study constructs a CBT system that<br />

applies the Rasch model based on item response theory for dichotomous scoring and the partial credit model based<br />

on graded item response for ET to estimate the examinee ability and item difficulty parameters (Baker, 1992;<br />

Hambleton & Swaminathan, 1985; Zhu & Cole, 1996; Wright & Stone, 1979; Zhu, 1996; Wright & Masters, 1982).<br />

Before computer-based ET is broadly adopted, we still need to examine whether any discrepancy exists between the performance of examinees who take elimination tests on paper and the performance of those who take CBTs. This study compares the scores of examinees taking tests using the NS dichotomous scoring

method and the partial scoring of ET using the same set of MC items in CBT and PPT settings, where the content<br />

subject is operations management. This study has the following specific goals:<br />

1. Evaluate whether the partial scoring for the MC test produces fewer unexpected responses of examinees.<br />

2. Compare the examinee performance on conventional PPTs with their performance on CBTs.<br />

3. Analyze whether different question content, such as calculation and concept, influences the performance of<br />

examinees on PPTs and CBTs.<br />

4. Investigate the relationship between an examinee’s ability and the item difficulty, to help the curriculum<br />

designers compose high-quality items.<br />



The rest of this paper will first present a brief literature review on partial knowledge, testing methods, scoring<br />

methods, multiple choice, and CBTs. Then this paper will describe the configuration of a computer-based assessment<br />

system, formulate the research hypotheses, and provide the research method, experimental design, and data<br />

collection in detail. The statistics analysis and hypothesis testing results will be presented subsequently. Conclusions<br />

are finally drawn in the last section.<br />

Related literature<br />

This section first defines related domain knowledge, discusses studies of conventional scoring and partial

scoring, compares scoring modes and investigates the principle of designing MC items, and finally summarizes the<br />

pros and cons of CBT systems.<br />

Partial knowledge<br />

Reducing the opportunities to guess and measuring partial knowledge improve the psychometric properties of a test.<br />

These methods can be classified by their ability to identify partial knowledge on a given test item (Alexander,<br />

Bartlett, Truell, & Ouwenga, 2001). Coombs et al. (1956) stated that the conventional scoring format of the MC<br />

examination cannot distinguish between partial knowledge and absence of knowledge. Ben-Simon, Budescu, &<br />

Nevo (1997) classify examinees’ knowledge for a given item as full knowledge (identifies all of the incorrect<br />

options), partial knowledge (identifies some of the incorrect options), partial misinformation (identifies the correct<br />

answer and some incorrect options), full misinformation (identifies only the correct answer), and absence of<br />

knowledge (either omits the item or identifies all options). Bush (2001) conducted a study that allows examinees to<br />

select more than one answer to a question if they are uncertain of the correct one. Negative marking is used to<br />

penalize incorrect selections. The aim is to explicitly reward examinees who possess partial knowledge as compared<br />

with those who are simply guessing.<br />

Number-scoring (NS) of multiple choice

Students choose only one response. The number of correctly answered questions is composed of the number of<br />

questions to which the student knows the answer, and the number of questions to which the student correctly guesses

the answer. According to the classification of Ben-Simon et al. (1997), NS can only distinguish between full<br />

knowledge and absence of knowledge. A student’s score on an NS section with 25 MC questions and three points per<br />

correct response is in the range 0–75.<br />

Elimination testing (ET) of multiple choice<br />

Alternative schemes proposed for administering MC tests increase the complexity of responding and scoring, and the<br />

available information about student understanding of material (Coombs et al., 1956; Abu-Sayf, 1979; Alexander et<br />

al., 2001). Since partial knowledge is not captured in conventional NS format of an MC examination, Coombs et al.<br />

(1956) describe a procedure that instructs students to mark as many incorrect options as they can identify. One point<br />

is awarded for each incorrect choice identified, but k points are deducted (where k equals the number of options<br />

minus one) if the correct option is identified as incorrect. Consequently, a question score is in the range (–3, +3) on a<br />

question with four options, and a student’s score on an ET section with 25 MC questions with four options each is in<br />

the range (–75, +75). Bradbard and Green (1986) have classified ET scoring as follows: completely correct score<br />

(+3), partially correct score (+2 or +1), no-understanding score (0), partially incorrect score (–1 or –2), completely<br />

incorrect score (–3).<br />
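To make the arithmetic concrete, here is a minimal Python sketch of the Coombs elimination rule for one item; the function name and the set-based answer representation are illustrative assumptions, not part of the authors' system.

    def et_item_score(eliminated, correct, n_options=4):
        """Coombs elimination scoring: +1 per eliminated distractor,
        minus k = n_options - 1 if the keyed answer is eliminated."""
        k = n_options - 1
        score = sum(1 for opt in eliminated if opt != correct)
        if correct in eliminated:
            score -= k
        return score

    # Four-option item keyed 'c': the scores span Bradbard and Green's categories.
    assert et_item_score({'a', 'b', 'd'}, 'c') == 3       # completely correct
    assert et_item_score({'a', 'b'}, 'c') == 2            # partially correct
    assert et_item_score({'a', 'b', 'c', 'd'}, 'c') == 0  # no understanding
    assert et_item_score({'c'}, 'c') == -3                # completely incorrect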

Subset selection testing (SST) of multiple choice<br />

Rather than identifying incorrect options, the examinee attempts to construct subsets of item options that include the<br />

correct answer (Jaradat & Sawaged, 1986). The scoring for an item with four options is as follows: if the correct<br />

response is identified, then the score is 3; while if the subset of options identified includes the correct response and

other options, then the item score is 3 – n (n = 1, 2, or 3), where n denotes the number of other options included. If<br />

subsets of options that do not include the correct option are identified, then the score is –n, where n is the number of<br />

options included. SST and ET are probabilistically equivalent.<br />
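A matching sketch of the subset-selection rule, under the same illustrative assumptions:

    def sst_item_score(subset, correct, n_options=4):
        """Subset-selection scoring: 3 - n if the chosen subset contains the
        key plus n other options; -n if it misses the key (four options)."""
        n_other = sum(1 for opt in subset if opt != correct)
        if correct in subset:
            return (n_options - 1) - n_other
        return -n_other

    assert sst_item_score({'c'}, 'c') == 3        # key identified alone
    assert sst_item_score({'b', 'c'}, 'c') == 2   # key plus one other option
    assert sst_item_score({'a', 'b'}, 'c') == -2  # subset misses the key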

Comparison studies of scoring methods<br />

Coombs et al. (1956) observed that tests using ET are somewhat more reliable than NS tests and measure the same<br />

abilities as NS scoring. Dressel & Schmid (1953) compared SST with NS using college students in a physical science<br />

course. They observed that the reliability of the SST test, at 0.67, was slightly lower than that of the NS test at 0.70.<br />

They also noted that academically high-performing students scored better than average compared to low-performing<br />

students, with respect to full knowledge, regardless of the difficulty of the items. Jaradat and Tollefson (1988)<br />

compared ET with SST, using graduate students enrolled in an educational measurement course. No significant<br />

differences in terms of reliability were observed between the methods. Jaradat and Tollefson reported that the<br />

majority of students felt that ET and SST were better measures of their knowledge than conventional NS, but they<br />

still preferred NS. Bradbard et al. (2004) concluded that ET scoring is useful whenever there is concern about<br />

improving the accuracy of measuring a student’s partial knowledge. ET scoring may be particularly helpful in<br />

content areas where partial or full misinformation can have life-threatening consequences. This study adopted ET<br />

scoring as the measurement scheme for partial scoring.<br />

Design of multiple choice<br />

An MC item is composed of a correct answer and several distractors. The design of distractors is the largest<br />

challenge in constructing an MC item (Haladyna & Downing, 1989). Haladyna & Downing summarized the common<br />

rules of design found in many references. One such rule is that all the option choices should adopt parallel grammar<br />

to avoid giving clues to the correct answer. The option choices should address the same content, and the distractors<br />

should all be reasonable choices for a student with limited or incorrect information. Items should be as clear and<br />

concise as possible, both to ensure that students know what is being asked, and to minimize reading time and the<br />

influence of reading skills on performance. Haladyna and Downing recommended some guidelines for developing<br />

distractors:<br />

• Employ plausible distractors; avoid illogical distractors.
• Incorporate common student errors into distractors.
• Adopt familiar yet incorrect phrases as distractors.
• Use true statements that do not correctly answer the items.

Kehoe (1995) recommended improving tests by maintaining and developing a pool of “good” items from which<br />

future tests are drawn in part or in whole. This approach is particularly true for instructors who teach the same course<br />

more than once. The proportion of students answering an item correctly also affects its discrimination power. Items<br />

answered correctly (or incorrectly) by a large proportion of examinees (more than 85%) have a markedly low power<br />

to discriminate. In a good test, most items are answered correctly by 30% to 80% of the examinees. Kehoe described<br />

the following three methods to enhance the ability of items to discriminate among abilities:<br />

• Items that correlate less than 0.15 with total test score should probably be restructured.
• Distractors that are not chosen by any examinees should be replaced or eliminated.
• Items that virtually all examinees answer correctly are unhelpful for discriminating among students and should be replaced by harder items.
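These screening rules are straightforward to automate. The sketch below computes the two statistics they rely on, proportion-correct difficulty and corrected item-total correlation, for a 0/1 response matrix; the function and the random demonstration data are assumptions for illustration only.

    import numpy as np

    def item_screen(X):
        """X: (examinees x items) 0/1 matrix. Returns each item's difficulty
        (proportion correct) and corrected item-total correlation."""
        p = X.mean(axis=0)
        total = X.sum(axis=1)
        r = np.array([np.corrcoef(X[:, i], total - X[:, i])[0, 1]
                      for i in range(X.shape[1])])
        return p, r

    p, r = item_screen(np.random.randint(0, 2, size=(100, 25)).astype(float))
    # Kehoe-style flags: restructure items with r < 0.15; replace items that
    # almost everyone answers correctly or incorrectly (p outside ~0.30-0.80).
    flagged = (r < 0.15) | (p > 0.85) | (p < 0.15)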

Pros and cons of CBTs<br />

CBTs have several benefits:<br />

• The single-item presentation is not restricted to text, is easy to read, and allows combining with pictures, voice, images, and animation.
• CBTs shorten testing time, give instantaneous results, and increase test security. The online testing format of an exam can be configured to enable instantaneous feedback to the student, and can be more easily scheduled and administered than PPTs (Gretes & Green, 2000; Bugbee, 1996).
• The test requires no paper, eliminating the battle for the copy machine. The option of printing the test is always available.
• The student can take the test when and where appropriate. Even taking the test at home is an option if the student has Internet access at home. Students can wait until they think they have mastered the material before being tested on it.
• The test is a valuable teaching tool. CBTs provide immediate feedback, requiring the student to get the correct answer before moving on.
• CBTs save 100% of the time taken to distribute the test to test-takers (a CBT never has to be handed out).
• CBTs save 100% of the time taken to create different versions of the same test (re-sequencing questions to prevent an examinee from cheating by looking at the test of the person next to him).

However, CBTs have some disadvantages:<br />

• The test format is normally limited to true/false and MC. Computer-based automatic grading cannot easily judge the accuracy of constructed-response questions such as short-answer, problem-solving exercises, and essay questions.
• When holding an onsite test, instructors must prepare many computers for examinees, and be prepared for the difficulties caused by computer crashes.
• The computer display is not suitable for question items composed of numerous words, since the resolution might make the text difficult to read (Mazzeo & Harvey, 1988).
• Most items in mathematics and chemistry testing need manual calculation. The need to write down and calculate answers on draft paper might decrease the answering speed (Ager, 1993).

Research method<br />

This study first constructs a computer-based assessment system, and then adopts experimental design to record an<br />

examinee’s performance with different testing tools and scoring approaches. This section describes research<br />

hypotheses, variables, the experimental design, and data collection.<br />

System configuration<br />

As well as providing a platform for computer-based ET, this system implements the Rasch one-parameter logistic item characteristic curve (ICC) model for dichotomous scoring and the graded-response model for partial scoring of ET to estimate the item and ability parameters (Hambleton & Swaminathan, 1985; Wright & Masters, 1982; Wright & Stone, 1979; Zhu, 1996; Zhu & Cole, 1996). Furthermore, item difficulty analysis allows the system to maintain a high-quality test bank. Figure 1 shows the system configuration, which comprises databases and three main subsystems: (1)

The computer-based assessment system is the platform used by examinees to take tests, and enables learning<br />

progress to be tracked, grades to be queried, and examinees’ response patterns to be recorded in detail; (2) the<br />

testbank management system, the platform used by instructors to manage testing items and examinee accounts; (3)<br />

the test diagnosis system, which collects data from the answer record and the gradebook database to analyze the<br />

difficulty of test items and the ability of examinees. These subsystems are described in detail as follows (Figure 1):<br />

Computer-based assessment system<br />

The computer-based assessment system links to the answer record, and the gradebook database collects the complete<br />

answer record and incorporates a feedback mechanism that enables examinees to check their own learning progress<br />

and increase their learning efficiency by means of constructive interaction. The main functions of the computer-based

assessment system are as follows: The system first verifies the examinee’s eligibility for the test and then<br />

displays the test information, including allowed answering time, scoring methods, and test items. The system permits<br />

examinees to write down keywords or mark on the test sheet, as in a paper test. The complete answering processes<br />



are stored in the answer record and the gradebook database. The answer record and gradebook database enable the<br />

computer-based assessment system to collect information for examinees during tests, improving examinees’<br />

understanding of their own learning status. Examinees can understand the core of a problem by reading the remarks<br />

they themselves made during a test to identify any errors and improve in areas of poor understanding.<br />

Figure 1. System configuration


Testbank management system<br />

The testbank management system is normally accessed by instructors who are designing test items, revising test<br />

items, designing examinations, and reusing tests. The main advantage of the item banking is in test development.<br />

This system allows a curriculum designer to edit multiple-choice questions, constructed-response items, and true/false

items and specify scoring modes. Designed test items are stored in the testbank database. Test items can be displayed<br />

in text format or integrated with multimedia images. This system supports three parts of the preparation of<br />

examinations: (1) creating original questions, (2) browsing questions from a testbank, and (3) using questions from a<br />

testbank by selecting questions items at random. The system provides dichotomous scoring and partial scoring for<br />

MC items. After the item sheet has been prepared, the curriculum designer must specify “examinee eligibility,”<br />

“testing date,” “scoring mode,” “time allowed,” and “question values.” The computer-based assessment system<br />

permits only eligible examinees to take the test at the specified time. Although the system can automatically and<br />

instantly grade multiple-choice and true/false questions, the announcement of testing scores is delayed until all<br />

examinees have finished the test.<br />

Test diagnosis system<br />

The test diagnosis system analyzes items and examinees’ ability based on scoring methods (dichotomous scoring and<br />

partial scoring) and the data retrieved from the answer record and gradebook database. The diagnostic process is<br />

summarized as follows: (1) Sorting the results from the test records. The system first sorts the item-response patterns<br />

for all examinees. The answer records are rearranged according to each examinee’s ability and the number of<br />

students who have given the correct answer to a given question. (2) Matrix reduction. The system then deletes<br />

useless item responses and examinee responses, such as those questions to which all examinees gave correct answers<br />

or wrong answers, or received a full or a zero score, since these data do not help to assess the ability of examinees<br />

(Baker, 1992; Bond & Fox, 2001). Matrix reduction can reduce the resources and time required to conduct the<br />

calculation. (3) The joint maximum likelihood estimation (JMLE) procedures (Hambleton & Swaminathan, 1985; Wongwiwatthananukit, Popovich, and

Bennett, 2000) depicted in Figure 2 are applied to estimate the ability and item parameters. Then the diagnosis<br />

procedures are followed by the item and person fit analysis, and the procedures are described as follows: First, let<br />

random variable \(X_{ni}\) denote examinee n's response on item i, in which \(X_{ni} = 1\) signifies the correct answer; \(\theta_n\) is the personal parameter of examinee n; and \(b_i\) denotes the item parameter, which determines the item location and is called the item difficulty in attainment tests. The expected value and variance of \(X_{ni}\) are shown in equations (1) and (2).

(1) \( E(X_{ni}) = \exp(\theta_n - b_i) / [1 + \exp(\theta_n - b_i)] = \pi_{ni} \)

(2) \( \operatorname{Var}(X_{ni}) = \pi_{ni}(1 - \pi_{ni}) = W_{ni} \)

The standardized residual and kurtosis of \(X_{ni}\) are shown in equations (3) and (4).

(3) \( Z_{ni} = [X_{ni} - E(X_{ni})] / \sqrt{\operatorname{Var}(X_{ni})} \)

(4) \( C_{ni} = (1 - \pi_{ni})^{4}\,\pi_{ni} + (0 - \pi_{ni})^{4}\,(1 - \pi_{ni}) \)

The person fit analysis shows the mean square (MNSQ) and standardized weighted mean square (Zstd), as represented in equations (5) and (6).

(5) \( \mathrm{MNSQ} = \sum_{i} W_{ni} Z_{ni}^{2} \Big/ \sum_{i} W_{ni} = \nu_{n} \)

(6) \( \mathrm{Zstd} = t_{n} = (\nu_{n}^{1/3} - 1)(3/q_{n}) + (q_{n}/3), \text{ where } q_{n}^{2} = \sum_{i}(C_{ni} - W_{ni}^{2}) \Big/ \Big(\sum_{i} W_{ni}\Big)^{2} \)

The item fit analysis shows the mean square (MNSQ) and standardized weighted mean square (Zstd) in equations (7) and (8).

(7) \( \mathrm{MNSQ} = \sum_{n} W_{ni} Z_{ni}^{2} \Big/ \sum_{n} W_{ni} = \nu_{i} \)

(8) \( \mathrm{Zstd} = t_{i} = (\nu_{i}^{1/3} - 1)(3/q_{i}) + (q_{i}/3), \text{ where } q_{i}^{2} = \sum_{n}(C_{ni} - W_{ni}^{2}) \Big/ \Big(\sum_{n} W_{ni}\Big)^{2} \)
n n<br />

Figure 2. JMLE procedures for estimating both ability and item parameters<br />

Table 1 lists the principles to distinguish examinees and items from unexpected responses, summarized by Bond and<br />

Fox (2001), and Linacre and Wright (1994). The acceptable range for MNSQ is between 0.75 and 1.3, and the Zstd<br />

value should be in the range (-2, +2). Person and item fitness outside the range are considered unexpected responses.<br />

Incorporating the calculation module in the online testing system is helpful for curriculum designers to maintain the<br />

quality of test items by modifying or removing test items with unexpected responses.<br />

Table 1. Fit statistics

MNSQ      Zstd      Variation    Misfit type
> 1.3     > 2.0     Too much     Underfit
< 0.75    < -2.0    Too little   Overfit
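Assuming ability and difficulty estimates are already available from the JMLE step, equations (1)-(8) and the Table 1 flags might be computed as in the following sketch; all names, and the simulated responses standing in for real data, are illustrative.

    import numpy as np

    rng = np.random.default_rng(0)
    theta = rng.normal(size=102)              # illustrative ability estimates
    b = rng.normal(size=25)                   # illustrative item difficulties
    p_true = 1 / (1 + np.exp(-(theta[:, None] - b)))
    X = (rng.random(p_true.shape) < p_true).astype(float)  # simulated responses

    def rasch_fit(X, theta, b):
        """Person- and item-fit statistics from equations (1)-(8)."""
        pi = 1 / (1 + np.exp(-(theta[:, None] - b)))   # eq. (1)
        W = pi * (1 - pi)                              # eq. (2)
        Z = (X - pi) / np.sqrt(W)                      # eq. (3)
        C = (1 - pi) ** 4 * pi + pi ** 4 * (1 - pi)    # eq. (4)

        def fit(axis):
            v = (W * Z ** 2).sum(axis=axis) / W.sum(axis=axis)           # MNSQ
            q = np.sqrt((C - W ** 2).sum(axis=axis)) / W.sum(axis=axis)
            return v, (v ** (1 / 3) - 1) * (3 / q) + q / 3               # Zstd
        return fit(1), fit(0)   # (person MNSQ, Zstd), (item MNSQ, Zstd)

    (pm, pz), (im, iz) = rasch_fit(X, theta, b)
    person_misfit = (pm > 1.3) | (pm < 0.75) | (np.abs(pz) > 2.0)  # Table 1 flags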


Research hypotheses<br />

This study proposes four hypotheses based on literature reviews. First, Alexander et al. (2001) explored students in a<br />

computer technology course who completed either a PPT or a CBT in a proctored computer lab. The test scores were<br />

similar, but students in the computer-based group, particularly freshmen, completed the test in the least amount of<br />

time. Bodmann and Robinson (2004) investigated the effect of several different modes of test administration on scores

and completion times, and the results of the study indicate that undergraduates completed the computer-based tests<br />

faster than the paper-based tests with no difference in scores. Stated formally:<br />

Hypothesis 1. The average score of CBTs is equivalent to that of PPTs.<br />

Second, the statistical analysis of Bradbard et al. (2004) indicates that the Coombs procedure is a viable alternative to<br />

the standard scoring procedure. In Ben-Simon et al.’s (1997) classification, NS performed very poorly in<br />

discriminating between full knowledge and absence of knowledge. Stated formally:<br />

Hypothesis 2. ET detects partial knowledge of examinees more effectively than NS.<br />

Third, Bradbard and Green (1986) indicated that elimination testing lowers the amount of guesswork, and the<br />

influence increases throughout the grading period. Stated formally:<br />

Hypothesis 3. ET lowers the number of unexpected responses for examinees more effectively than NS.<br />

Finally, most items in mathematics and chemistry testing need manual calculation. The need to write down and<br />

calculate answers on draft paper might lower the answering speed (Ager, 1993). Stated formally:<br />

Hypothesis 4. Different types of question content, such as calculation and concept, influence the performance of<br />

examinees on PPTs or CBTs.<br />

Research variables<br />

The independent variables used in this study and the operational definitions are as follows:<br />

• Scoring mode: including partial scoring and conventional dichotomous scoring, to analyze the influence of different answering and scoring schemes on partial knowledge.
• Testing tool: comparing conventional PPTs with CBTs and recognizing appropriate question types for CBTs.

The dependent variable adopted in this study is the students’ performance, which is determined by test scores.<br />

Experimental design<br />

Tests were provided according to the two scoring modes and two testing tools, which were combined to form four<br />

treatments. Table 2 lists the multifactor design. Treatment 1 (T1) was CBTs, using the ET scoring method;<br />

Treatment 2 (T2) was CBTs, using NS scoring method; Treatment 3 (T3) was PPTs, using ET scoring method; and<br />

Treatment 4 (T4) was PPTs, using NS scoring method.<br />

Table 2. Multifactor design

        CBTs    PPTs
ET      T1      T3
NS      T2      T4

Data collection

The subjects of the experiment were 102 students in an introductory operations management module, which is a required course for students of the two junior classes in the Department of Information Management at National Kaohsiung First University of Science and Technology in Taiwan. All students were required to take all four exams


in the course. A randomized block design, separating each class into two cohorts, was adopted for data collection<br />

before the first test. The students were thus separated into the following four groups: class A, cohort 1 (A1); class A,<br />

cohort 2 (A2); class B, cohort 1 (B1); and class B, cohort 2 (B2). The four tests were implemented separately using<br />

CBTs and PPTs. Each test was worth 25% of the final grade. The item contents were concept oriented and<br />

calculation oriented. Because all subjects participating in this study needed to take both NS and ET scored tests, they<br />

were given a lecture describing ET and given several opportunities to practice, to prevent bias in personal scores due<br />

to unfamiliarity with the answering method, thus enhancing the reliability of this study. Students in each group were<br />

all given exactly the same questions.<br />

Data analysis<br />

Reliability<br />

The split-half reliability coefficient was calculated to ensure internal consistency within a single test for each group.<br />

Linn and Gronlund (2000) proposed setting the reliability coefficient between 0.60 and 0.85. Table 3 lists the<br />

reliability coefficient for each treatment and cohort combination, and all Cronbach α values were in this range.<br />

Table 3. Reliability coefficients for each group<br />

Cronbach α<br />

Test 1 .6858 (T2, A1) .6684 (T3, A2) .6307 (T4, B1) .6901 (T1, B2)<br />

Test 2 .6649 (T4, A1) .7011 (T1, A2) .6053 (T2, B1) .7791 (T3, B2)<br />

Test 3 .7450 (T4, A2) .7174 (T1, A1) .7043 (T2, B2) .8078 (T3, B1)<br />

Test 4 .7358 (T2, A2) .7270 (T3, A1) .7041 (T4, B2) .6341 (T1, B1)<br />
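For reference, a split-half coefficient of the kind reported in Table 3 can be computed as below; the odd/even halving and the Spearman-Brown step-up are one common convention, since the paper does not spell out its exact splitting rule.

    import numpy as np

    def split_half(X):
        """X: (examinees x items) matrix of item scores."""
        odd = X[:, 0::2].sum(axis=1)
        even = X[:, 1::2].sum(axis=1)
        r = np.corrcoef(odd, even)[0, 1]
        return 2 * r / (1 + r)   # Spearman-Brown correction to full length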

Correlation analysis<br />

The Pearson and Spearman correlation coefficients were derived to determine whether response patterns of the<br />

examinees were consistent. Table 4 lists the correlation coefficient of ET and NS on each test, and Table 5 presents<br />

the correlation coefficients of CBTs and PPTs. The analytical results in both tables demonstrate a highly positive<br />

correlation for each scoring mode and test tool.<br />

Table 4. Correlation coefficients of ET vs. NS

          Pearson               Spearman
          CBTs       PPTs       CBTs       PPTs
Test 1    0.894**    0.903**    0.887**    0.902**
Test 2    0.698**    0.692**    0.679**    0.722**
Test 3    0.747**    0.787**    0.676**    0.833**
Test 4    0.871**    0.768**    0.887**    0.743**

** indicates significance at the 0.01 level for a two-tailed test.

Table 5. Correlation coefficients of CBTs vs. PPTs

          Pearson             Spearman
          NS        ET        NS        ET
Test 1    .868**    .836**    .875**    .800**
Test 2    .759**    .846**    .791**    .803**
Test 3    .840**    .805**    .802**    .766**
Test 4    .832**    .830**    .842**    .796**

** indicates significance at the 0.01 level for a two-tailed test.
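The coefficients in Tables 4 and 5 are ordinary Pearson and Spearman correlations between paired score vectors; a minimal sketch with invented scores:

    from scipy.stats import pearsonr, spearmanr

    et_scores = [54, 61, 47, 70, 58]   # hypothetical ET totals
    ns_scores = [51, 66, 45, 69, 60]   # the same examinees' NS totals
    r_pearson, p1 = pearsonr(et_scores, ns_scores)
    r_spearman, p2 = spearmanr(et_scores, ns_scores)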



Hypothesis testing<br />

One-way analysis of variance (ANOVA) was adopted to test Hypothesis 1, that the average score of the CBTs is equivalent to that of the PPTs. Table 6 shows that all p-values are greater than 0.05. No individual comparison is statistically significant at the 5% level, so there is insufficient evidence to reject Hypothesis 1.

Table 6. One-way ANOVA

         Scoring mode   Mean score                        F-statistic   p-value
Test 1   NS             45.60 (PPTs) / 42.84 (CBTs)       1.622         .209
         ET             27.16 (PPTs) / 32.08 (CBTs)       1.509         .225
Test 2   NS             55.12 (PPTs) / 55.44 (CBTs)       0.16          .901
         ET             43.32 (PPTs) / 39.72 (CBTs)       .671          .417
Test 3   NS             47.04 (PPTs) / 49.44 (CBTs)       .495          .485
         ET             44.12 (PPTs) / 45.70 (CBTs)       .143          .707
Test 4   NS             41.12 (PPTs) / 36.48 (CBTs)       1.688         .200
         ET             30.72 (PPTs) / 24.40 (CBTs)       2.223         .143
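Each row of Table 6 corresponds to a one-way ANOVA comparing the PPT and CBT groups for one test and scoring mode; a minimal sketch with invented scores:

    from scipy.stats import f_oneway

    ppt = [45, 52, 38, 61, 47]   # hypothetical PPT scores for one cell
    cbt = [43, 49, 41, 57, 44]   # hypothetical CBT scores for the same cell
    f_stat, p_value = f_oneway(ppt, cbt)
    # p_value > 0.05 gives no evidence against Hypothesis 1 for this cell.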

Hypothesis 2: ET can detect partial knowledge of examinees more effectively than NS. According to the scoring<br />

classification of Bradbard and Green (1986), and Ben-Simon et al.’s classification (1997), Table 7 summarizes the<br />

average number of items for NS and ET scoring taxonomy. For Test 1, the average number of correct answers for<br />

class A, cohort 1 was 14.28; the number of answers revealing full knowledge for class A, cohort 2 was 11.67; the<br />

number of answers indicating partial knowledge for class A, cohort 2 was 2.38; the number of answers revealing<br />

absence of knowledge for class A, cohort 2 was 1.42; the number of answers indicating partial misinformation for<br />

class A, cohort 2 was 9.25; and the number of answers revealing full misinformation for class A, cohort 2 was 0.29.<br />

Table 7. Average number of items for scoring taxonomy

         NS            ET
         correct       full          partial     absence of   partial          full
                       knowledge     knowledge   knowledge    misinformation   misinformation
Test 1   14.28 (A1)    11.67 (A2)    2.38        1.42         9.25             0.29
         15.20 (B1)    12.48 (B2)    2.56        1.36         8.44             0.16
Test 2   18.38 (A1)    14.16 (A2)    2.40        2.20         6.20             0.04
         18.48 (B1)    15.12 (B2)    2.40        2.24         5.08             0.16
Test 3   15.88 (A2)    15.33 (A1)    3.46        1.13         4.83             0.25
         16.48 (B2)    15.80 (B1)    2.12        0.96         6.08             0.04
Test 4   12.16 (A2)    11.08 (A1)    3.63        2.83         7.38             0.08
         13.16 (B2)    10.36 (B1)    2.56        2.68         9.08             0.32



To test whether ET can effectively detect partial knowledge of examinees, the number of correct items of NS was<br />

compared to the number of full-knowledge items of ET in each test. Examinees who did not know the correct answer to an

NS test item would have either guessed or given up answering. The number of correctly answered items is composed<br />

of (1) correct response by lucky blind guesses, and (2) correct response by examinees’ knowledge. Burton (2002)<br />

proposed that the conventional MC scoring can be described by equation B = (K + k + R), where B is the number of<br />

correct items; K is the number of correct items for which an examinee possesses accurate knowledge; k is the number<br />

of correct items for which an examinee possesses partial knowledge and guesses correctly; R is the number of correct<br />

items for which the examinee has no knowledge but makes a lucky blind guess. Burton considered that the examinee<br />

could delete distractors and increase the proportion of correct guesses based on partial knowledge. Our study<br />

randomly assigns subjects to each group so the ability should be approximately equivalent. Table 7 demonstrates that<br />

the number of correct NS items for each of the four tests is greater than the full knowledge number of ET, which<br />

shows that ET can distinguish between full knowledge and partial knowledge by partial scoring.<br />
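Burton's decomposition can be illustrated with invented numbers for a 25-item, four-option NS test:

    # B = K + k + R for one hypothetical examinee
    K = 10            # items answered from full knowledge
    k = 9 * (1 / 3)   # nine items with one distractor eliminated: guess among 3
    R = 6 * (1 / 4)   # six blind guesses among four options
    B = K + k + R     # expected "correct" count under NS: 14.5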

Hypothesis 3: ET lowers the number of unexpected responses for examinees more effectively than NS. Table 8<br />

shows the number of unexpected responses based on the calculation described in the research method section,<br />

equations (1) to (8), and reveals that the number of unexpected responses in NS is greater than ET for each test,<br />

which demonstrates that ET reduces the unexpected responses of examinees.

Table 8. <strong>Number</strong> of unexpected responses<br />

NS ET<br />

Test 1 25 17<br />

Test 2 10 9

Test 3 17 7<br />

Test 4 29 15<br />

Next, one-way ANOVA was adopted to test Hypothesis 4, that different question contents, such as calculation and<br />

concept, influence the performance of examinees on PPTs or CBTs. Table 9 shows that p-value > 0.05 for each<br />

subgroup. Thus, experimental results reject Hypothesis 4, and indicate that different question content does not<br />

influence the performance of examinees on PPTs or CBTs. This result did not confirm Ager’s study (1993). The first<br />

reason might be the course content: an introductory course focused on principles and concepts of operations<br />

management, instead of much complex calculation. The second explanation is that the subjects in this study are all

students from the information management department. These students are quite used to interacting with computer<br />

interfaces; consequently, the performance difference between PPTs and CBTs is insignificant.<br />

Table 9. One-way ANOVA

                  Scoring   Mean                                  F-statistic   p-value
Test 1   PPTs     NS        1.97 (concept) / 1.87 (calculation)   .163          .688
                  ET        1.22 (concept) / 1.45 (calculation)   .744          .394
         CBTs     NS        1.08 (concept) / .90 (calculation)    .206          .666
                  ET        .385 (concept) / .39 (calculation)    .000          .989
Test 2   PPTs     NS        2.15 (concept) / 2.13 (calculation)   .014          .907
                  ET        1.83 (concept) / 1.64 (calculation)   .552          .462
         CBTs     NS        2.34 (concept) / 2.45 (calculation)   .304          .591
                  ET        1.49 (concept) / 1.46 (calculation)   .007          .936
Test 3   PPTs     NS        1.91 (concept) / 1.95 (calculation)   .054          .818
                  ET        1.85 (concept) / 1.80 (calculation)   .072          .789
         CBTs     NS        2.24 (concept) / 2.20 (calculation)   .005          .944
                  ET        2.17 (concept) / 2.06 (calculation)   .052          .831
Test 4   PPTs     NS        1.62 (concept) / 1.51 (calculation)   .248          .622
                  ET        1.37 (concept) / .90 (calculation)    3.222         .084
         CBTs     NS        1.68 (concept) / 1.40 (calculation)   1.229         .281
                  ET        1.05 (concept) / 1.08 (calculation)   .005          .947

Conclusion

This study demonstrates the feasibility of adopting ET and CBTs to replace conventional NS and PPTs. Under the<br />

same MC testing item construct, the researchers investigate the performance difference among examinees between<br />

partial scoring by the elimination testing and the conventional dichotomous scoring method. This study first builds a<br />

computer-based assessment system, then adopts experimental design to record an examinee’s performance with<br />

different testing tools and scoring approaches. One-way ANOVA does not show sufficient proof to reject the<br />

hypothesis that the performance of students taking CBTs is the same as the performance of students taking PPTs.<br />

This finding is in agreement with Alexander et al. (2001) and Bodmann and Robinson (2004). We conclude that no discrepancy exists between the performance of examinees who take PPTs and the performance of those who take CBTs when ET is used.

Next, the number of correct NS items was compared to the number of full-knowledge ET items in each test. Data analysis demonstrates that the number of correct NS items is greater than the full-knowledge count of ET for each test, which shows that ET can distinguish between full knowledge and partial knowledge by partial scoring. The number of unexpected responses, calculated by the Rasch model (based on item response theory for dichotomous scoring) and the partial credit model (based on graded item response for elimination testing) to estimate the examinee ability and item-difficulty parameters, is greater in NS than in ET for each test, which demonstrates that ET reduces the unexpected responses of examinees. ET scoring is helpful whenever

examinees’ partial knowledge and unexpected responses are concerned. Moreover, the instructors can more<br />

accurately assess examinees’ partial knowledge by adopting ET and CBTs, which is not only helpful in teaching, but<br />

also increases examinees’ eagerness and willingness to learn.<br />

Experimental results also indicate that different question content does not influence the performance of examinees on<br />

PPTs or CBTs. This result did not confirm Ager’s study (1993). The first reason might be the course content, an<br />

introductory course focused on principles and concepts of operations management, instead of on lots of complex<br />

calculation. The second explanation is that the subjects in this study are all students from the information<br />

management department. These students are quite used to interacting with computer interfaces; consequently, the difference in their performance on PPTs and CBTs is insignificant. We remain aware that the validity of any experimental study is limited to the scope of the experiment. Since the study involved only two classes and one course, more comparison tests could be performed on other content subjects to identify those suited for adopting ET

and CBTs to replace conventional NS and PPTs.<br />

The trend in e-learning technologies and system development is toward the creation of standards-based distributed computing applications. To maintain a high-quality testbank and promote the sharing and reusing of test

items, further research should consider incorporating the IMS question and test interoperability (QTI) specification,<br />

which describes a basic structure for the representation of assessment data groups, questions, and results, and allows<br />

the CBT system to go further by using the shareable content object reference model (SCORM) and IMS metadata<br />

specification.<br />

References

Abu-Sayf, F. K. (1979). Recent developments in the scoring of multiple-choice items. Educational Review, 31, 269–270.

Ager, T. (1993). Online placement testing in mathematics and chemistry. Journal of Computer-Based Instruction, 20 (2), 52–57.

Akeroyd, F. M. (1982). Progress in multiple-choice scoring methods. Journal of Further and Higher Education, 6, 87–90.

Alexander, M. W., Bartlett, J. E., Truell, A. D., & Ouwenga, K. (2001). Testing in a computer technology course: An investigation of equivalency in performance between online and paper and pencil methods. Journal of Career and Technical Education, 18 (1), 69–80.

Baker, F. B. (1992). Item response theory: Parameter estimation techniques, New York, NY: Marcel Dekker.

Ben-Simon, A., Budescu, D. V., & Nevo, B. (1997). A comparative study of measures of partial knowledge in multiple-choice tests. Applied Psychological Measurement, 21 (1), 65–88.

Bodmann, S. M., & Robinson, D. H. (2004). Speed and performance differences among computer-based and paper-pencil tests. Journal of Educational Computing Research, 31 (1), 51–60.

Bond, T. G., & Fox, C. M. (2001). Applying the Rasch model: Fundamental measurement in the human sciences, Mahwah, NJ: Lawrence Erlbaum Associates.

Bradbard, D. A., & Green, S. B. (1986). Use of the Coombs elimination procedure in classroom tests. Journal of Experimental Education, 54, 68–72.

Bradbard, D. A., Parker, D. F., & Stone, G. L. (2004). An alternate multiple-choice scoring procedure in a macroeconomics course. Decision Sciences Journal of Innovative Education, 2 (1), 11–26.

Bugbee, A. C. (1996). The equivalence of PPTs and computer-based testing. Journal of Research on Computing in Education, 28 (3), 282–299.

Burton, R. F. (2002). Misinformation, partial knowledge and guessing in true/false tests. Medical Education, 36, 805–811.

Bush, M. (2001). A multiple choice test that rewards partial knowledge. Journal of Further and Higher Education, 25 (2), 157–163.

Chan, N., & Kennedy, P. E. (2002). Are multiple-choice exams easier for economics students? A comparison of multiple-choice and "equivalent" constructed-response exam questions. Southern Economic Journal, 68 (4), 957–971.

Coombs, C. H., Milholland, J. E., & Womer, F. B. (1956). The assessment of partial knowledge. Educational and Psychological Measurement, 16, 13–37.

Dressel, P. L., & Schmid, J. (1953). Some modifications of the multiple-choice item. Educational and Psychological Measurement, 13, 574–595.

Gretes, J. A., & Green, M. (2000). Improving undergraduate learning with computer-assisted assessment. Journal of Research on Computing in Education, 33 (1), 46–4.

Hagler, A. S., Norman, G. J., Radick, L. R., Calfas, K. J., & Sallis, J. F. (2005). Comparability and reliability of paper- and computer-based measures of psychosocial constructs for adolescent fruit and vegetable and dietary fat intake. Journal of the American Dietetic Association, 105 (11), 1758–1764.

Haladyna, T. M., & Downing, S. M. (1989). A taxonomy of multiple-choice item-writing rules. Applied Measurement in Education, 2, 37–50.

Hambleton, R. K., & Swaminathan, H. (1985). Item response theory: Principles and applications, Boston, MA: Kluwer-Nijhoff.

Inouye, D. K., & Bunderson, C. V. (1986). Four generations of computerized test administration. Machine-Mediated Learning, 1, 355–371.

Jaradat, D., & Sawaged, S. (1986). The subset selection technique for multiple-choice tests: An empirical inquiry. Journal of Educational Measurement, 23 (4), 369–376.

Jaradat, D., & Tollefson, N. (1988). The impact of alternative scoring procedures for multiple-choice items on test reliability, validity, and grading. Educational and Psychological Measurement, 48, 627–635.

Kehoe, J. (1995). Basic item analysis for multiple-choice tests. Practical Assessment, Research & Evaluation, 4 (10), retrieved October 15, 2007, from http://PAREonline.net/getvn.asp?v=4&n=10.

Linacre, J. M., & Wright, B. D. (1994). Chi-square fit statistics. Rasch Measurement Transactions, 8 (2), 360, retrieved October 15, 2007, from http://rasch.org/rmt/rmt82.htm.

Linn, R. L., & Gronlund, N. E. (2000). Measurement and assessment in teaching (8th Ed.), Upper Saddle River, NJ: Prentice-Hall.

Mazzeo, J., & Harvey, A. L. (1988). The equivalence of scores from automated and conventional educational and psychological tests, New York, NY: College Board Publications.

Wongwiwatthananukit, S., Popovich, N. G., & Bennett, D. E. (2000). Assessing pharmacy student knowledge on multiple-choice examinations using partial-credit scoring of combined-response multiple-choice items. American Journal of Pharmaceutical Education, 64 (1), 1–10.

Wright, B. D., & Masters, G. N. (1982). Rating scale analysis, Chicago, IL: MESA Press.

Wright, B. D., & Stone, M. H. (1979). Best test design, Chicago, IL: MESA Press.

Zhu, W. (1996). Should total scores from a rating scale be used directly? Research Quarterly for Exercise and Sport, 67 (3), 363–372.

Zhu, W., & Cole, E. L. (1996). Many-faceted Rasch calibration of a gross motor instrument. Research Quarterly for Exercise and Sport, 67 (1), 24–34.



Fleischmann, K. R. (2007). Standardization from Below: Science and Technology Standards and Educational Software. Educational Technology & Society, 10 (4), 110-117.

Standardization from Below: Science and Technology Standards and Educational Software

Kenneth R. Fleischmann<br />

College of Information Studies, University of Maryland, College Park, MD, USA // kfleisch@umd.edu<br />

ABSTRACT
Education in the United States is becoming increasingly standardized, with the standards being initiated at the national level and then trickling down to the state level and finally the local level. Yet, this top-down approach to educational standards carries with it significant limitations, such as loss of local autonomy and restrictions on the creativity of educational software designers. This paper reports findings from a study of the design and use of frog dissection simulations used in middle school and high school biology classes. The paper builds on the existing literatures on science and technology standards in education, using interviews, participant observation, and content analysis guided by grounded theory. The results highlight the ways that top-down educational standards constrain science teachers and software designers. The discussion presents an alternative to the top-down regime of educational standards, namely, a bottom-up approach of standardization from below. Finally, the conclusion argues that local control of educational experiences in the form of standardization from below can improve upon the traditional regime of top-down standards.

Keywords
Information technology, Computers, Science standards, Technology standards, Frog dissection simulations

Introduction: Top-Down Educational Standards

Standardization in education, including the growth of both science and technology standards, is a major ongoing trend in education in the United States. Although educational standards have historically been created and enforced at the state level, the recent trend has been toward the creation and enforcement of national educational standards. The explicit goal of this standardization is to enable all students within the United States to use the same computers with the same educational software. Yet, the danger is that these standards represent a one-size-fits-all approach to education that may overlook the specific needs of local communities. Going too far in the direction of standardizing education may endanger more socially, culturally, and geographically appropriate education. This paper asks two key questions. First, how do current top-down science and technology standards influence the design, marketing, and use of educational software? Second, how might bottom-up science and technology standards, in contrast, differently influence the design, marketing, and use of educational software?

Prior to the 20th Century, town meetings, gossip, and small newspapers gave citizens a local context and a shared experience. In the 20th Century, new technologies such as the telephone, the radio, and the television expanded the shared experience of humanity. Now in the 21st Century, the Internet has the potential to shape this shared experience at a global level. At each of these stages, the standardization of shared experience has depended largely on available technologies. One important form of shared experience is the educational experience, such as the secondary school experience. Educational policies and technologies are now being used to standardize this experience from one locality to another, so that the transmission of knowledges from one generation to another can become increasingly uniform. The goal of this paper is to determine the advantages and disadvantages of this top-down approach to educational standards as well as to consider the possibility of an alternative approach, a bottom-up approach to educational standards.

Background: Top-Down Science Standards

Traditionally, educational content standards focused on the “Three R’s” of reading, (w)riting, and (a)rithmetic. High school exit examinations focused on these three areas. These examinations not only are required for graduation, but also may be used to determine funding for different schools, and may even be used to fire teachers or principals or even close high schools. As a consequence, English and mathematics were considered the most important areas of learning, and privileged above other domains such as the sciences, social sciences, and arts. Further, these standards were typically set at the state level, with relatively little involvement by the federal government, including the United States Department of Education. Yet, recently this situation appears to be changing.

One important example of the growing importance of science standards is the National Science Education Standards, published in 1996 by the National Research Council. This document was publicized as the first nationwide list of science standards ever produced, and its primary goal was to increase the level of science understanding among all students. These national standards then trickled down to the state level, adding pressure on states to develop their own science standards in line with the National Science Education Standards. Indeed, as a direct result, “by the 2007–08 school year, all states are expected to measure students' science knowledge at least once, at each level, in elementary, middle, and high school, under the federal ‘No Child Left Behind’ Act of 2001” (Hoff, 2002, p. 10). The “No Child Left Behind” Act further sped up the trickling-down process for state science standards by two years, requiring that “by 2005-06, states must develop science standards” (National Education Association, 2003, p. 31). Thus, the National Science Education Standards and the No Child Left Behind Act were both responsible for promoting the implementation of top-down science standards.

How do state science standards work in practice? Since schools and school districts rely on states for budgets for textbooks, software, and other educational materials, the states generally have the power to enforce their standards, especially when they are connected to statewide exit exams. For example, one California school district devoted two years to developing science standards for grades K-12. Yet, just as the school district finished this process, the state of California published its own science standards, spurred by the National Science Education Standards. Since there were significant differences between the organization of the state and district science standards for grades 6-8, the state demanded that the district immediately revise its standards to match the state standards, threatening to withdraw textbook purchasing support if the district did not immediately comply (Evans, 2002). Thus, the standards in this process began at the national level, then trickled down to the state level where they were enforced through the power of the purse strings, and finally trickled all the way down to the school district level, where they overruled the local standards that had recently been developed. This story provides a clear example of the top-down creation and enforcement of science standards.

Background: Top-Down Technology Standards

Technology standards are also an important consideration in educational software design, marketing, and use. Technology in general is particularly dependent on the notion of standards. Without standards, it may be difficult or impossible for different technologies to interface. In the case of education, some amount of technological standardization is necessary to guarantee that the same educational software can be used in different classrooms and computer laboratories (Owen, 1999). Tremendous amounts of money have been invested in improving the level of technology in schools to meet technical standards. Further, the implementation of technology standards can help all students, and thus may help teachers to bridge the Digital Divide between technological haves and have-nots (Swain & Pearson, 2002).

Swain and Pearson’s (2002) analysis is particularly fascinating and compelling when they explore differences in students’ experiences with technology. They find that student ethnicity and socioeconomic status correlate with students’ experiences with technology. For example, they cite studies demonstrating that “minority, poor, and urban students were more likely to use computers for lower-order thinking skills than their White, non-poor, and suburban counterparts” (p. 329). They use this finding to explain why increased access to technology does not always guarantee improved learning for students; learning depends not only on access to technology but also on the particular experiences with technology and the types of technology that are used. Similarly, Awalt and Jolly (1999) argue that technology standards would be useful for students, teachers, and administrators.

Certainly, technology standards play an important role in educational settings. Yet, it is important to consider that the use of educational technology in schools encompasses both technological infrastructure and educational software; the latter is shaped by content as well as by the computers that run the software and the networks that connect those computers. While technology standards in education focus primarily on infrastructure, there may be value in considering standardization for educational software as a technology, going beyond content standards such as the science standards that currently have a top-down impact on the development and use of educational materials such as textbooks and educational computer simulations. This paper examines the issue of educational standards in practice by studying how standards shape the design, marketing, and use of educational computer simulations.

Research Methods

This study was part of a larger dissertation project (Fleischmann, 2004) and other findings of this study have been published elsewhere (Fleischmann, 2003, 2005, 2006a, 2006b). Data from a variety of sources, including semi-structured interviews, participant observation, and content analysis of software and promotional materials, are used to examine the impact of science and technology standards on educational software design, marketing, and use. The larger case study that informs this paper included 51 interviews: 14 with designers of six frog dissection simulations (some of whom currently are or previously were biology teachers), 29 with users of frog dissection simulations (including three biology teachers, a principal, and 25 biology students), and 8 with animal advocates who play a role in marketing dissection simulations. Participant observation consisted of fifteen days of fieldwork over a six-month period, including six days spent at three dissection simulation design laboratories, four days spent at a science education conference, and five days spent in a high school biology classroom. Print materials included promotional materials produced by dissection simulation manufacturers and animal advocacy organizations, state educational standards, and other materials published by educational policymaking bodies. It is important to note that the scope of this study is limited almost entirely to the United States (although the study did discover some anecdotal evidence of United States educational standards influencing educational standards and software design abroad, specifically in Canada).

Data analysis for this study was based on the grounded theory approach to qualitative data analysis (Strauss & Corbin, 1998). Interviews were transcribed as soon as they were conducted, and each interview was coded according to an evolving set of categories: the set was initially based on the literature to date and was then continually modified as new categories emerged from the interviews and old categories lost their salience. Specifically, the issue of standardization emerged as a result of the interviews, participant observation, and content analysis of software and promotional materials. Memos were written based on the coded interviews, and these memos then led to the development of theories that could best explain various aspects of the data. In particular, the contrast between top-down and bottom-up standardization regimes was a result of this process.

Results: Teachers’ Experiences with Standards and Educational Software Use

The data collected for this study provide evidence that science and technology standards have a strong influence on the use of educational simulation software in biology classes, and this influence may be growing. As one teacher explains, “Everything that you’re doing will be aligned to the standards, or will be addressing the standards.” As one simulation designer and former teacher explains, “a teacher is looking for something that they can do their job with, and their job is to present those goals and objectives by the state or the nation.” Similarly, another simulation designer and former high school teacher argues, “as long as a teacher’s evaluation and salary is connected to the assessment tests, then what’s going to get taught is exactly what’s in the curriculum.” Thus, standards provide a strong motivation to teachers, not only to ‘do their job’ but also to continue to get paid for doing it. Another science teacher, in a state that has recently begun to implement science testing as a requisite for graduation, is concerned that this emphasis on testing may create more emphasis on textbook learning specifically geared toward improving test scores, at the expense of activities such as hands-on laboratories or educational simulation use that might provide more innovative and engaging opportunities for learning.

Perhaps the most vivid illustration of this point occurred during participant observation in the biology classroom. One day, the teacher began class by showing the students the state academic content standards for science (via the Web) and then explained, point by point, how the software that they were using, The Digital Frog 2, met specific standards in the area of physiology. Apparently, showing students the state standards is a typical activity for this teacher, since it helps to illustrate why it is important to learn the material. This example illustrates that not only are teachers aware of the state standards that they must meet, but they may also explicitly inform their students of these standards and use them to defend their use of educational software, in this case as a replacement for a hands-on laboratory activity, frog dissection. This state is also in the process of requiring graduating students to pass a science exam, making the issue of meeting educational standards particularly compelling and current.

The emphasis on science standards in schools in the United States significantly shapes the content of all curricular materials, ranging from textbooks to educational software. As a teacher asks, “if it’s not addressing the main topics of the curriculum, which are the standards, then what is the point?” While textbooks were the original model for this, the same forces are now shaping educational software adoption. A teacher explains:

Nobody these days will buy a book that’s not aligned [to state standards], and I see the same thing could definitely happen with software, although it doesn’t seem to be at that level yet because software is not as widely implemented as textbooks, but that will probably change too. So I think the companies would do well – it would serve their clientele better to align the curriculum with their software – but states have different standards.

Interestingly, as one informant pointed out, this leads to a major problem in terms of equity – large states traditionally run the textbook industry, and may now have the same impact on the educational software industry, because adoption by large states can determine whether or not a textbook is successful. Smaller states, then, tend to follow the lead of the large states, and are not able to have as much of a direct impact on textbook and educational software adoption. Thus, the reliance on state standards has the effect of privileging large states while disempowering smaller ones.

Given the constraints they are under, teachers naturally gravitate toward materials that make explicit how they correspond to state standards. For example, a teacher comments, “I think it would be awesome if the materials were provided with the standards links embedded, and some materials are.” The teacher continues, “I would think that software companies would do enormously well to produce software…that addresses the standards and make sure that it’s targeted to the standards.” The teacher then provides the following example:

For me as a teacher, if I was to purchase this particular software…the superintendent and the board, they will not approve it unless I can show that it’s aligned. I would choose software that already says to me, it’s aligned; these are the standards that are addressed and here’s where they are addressed in the package. Then I can present it and say, look, it’s aligned already, and here it is. If that’s work that I have to do, if I have to go through and find the links, then honestly I would rather find one that’s easier to present and easier to get approved.

This hypothetical example clearly illustrates why, under the current educational standards regime, it is important for educational software companies to meet state standards and to demonstrate explicitly how their software does so, at least for the large states that, as noted above, play the most powerful role in influencing nationwide textbook and educational software adoption.

An interesting feature of the high school, which was located in a very rural area, was its emphasis on technology. The high school had fairly sophisticated computing resources, and placed significant emphasis on these resources. The biology teacher and principal frequently referred to their high school as a “Digital High School,” since it was supported by a state educational initiative of the same name that provided technology resources to high schools across the state. As part of this program, the school must continue to make good use of these technology resources, and both the teacher and the principal explicitly explained that using dissection simulation software was one way to meet this requirement. The teacher and principal also emphasized to me that the computer use at the high school would help prepare students for the job market, since despite its rural surroundings, the high school was still relatively close to a major high-tech center. Educational simulation software use thus helps schools to spotlight and make good use of the technology resources provided as a result of technology standards.

Use of simulations is also often connected to technology standards that control the types of computers available in classrooms as well as the student-to-computer ratio (Fleischmann, 2005). Specifically, the student-to-computer ratio is important for science software design. One of the key findings of this study published elsewhere (Fleischmann, 2005) was that although an assumption is typically made, based on the conventional understanding of human-computer interaction, that students should each get their own computer and work individually to complete computer-based tasks, this social environment differs from the traditional K-12 dissection experience and other science laboratory activities, which are built upon interaction among students working in small groups. As a result of transforming the learning process in science from a richly interactive and social environment to a solitary activity, students have less opportunity for peer learning and social support. The overall educational experience may therefore suffer, an outcome presumably not intended or expected by educational software designers and technology standard-setters.

Results: Designers’ Experiences with Standards and Educational Software Design and Marketing

Science and technology standards also have a strong and growing influence on the design and marketing of educational simulation software. Indeed, this influence is so strong that a product may succeed or fail due largely to standards. A software designer explains, “Every software manufacturer that does anything with client software in particular has to look at goals and objectives [contained in science standards]. Maybe on a state level, maybe on a national level.” One particularly compelling example is of a software product that failed because of national and state standards. In this case, the product focused on a topic that was not covered within most state standards, and as a result, it was doomed to failure. As one of the designers explains, “we found out after that product that if our simulations aren’t part of the standards and part of the standard course of study, then there’s not going to be much interest in it.” Thus, when designing simulations, designers must be mindful to match the content of their simulation to existing educational standards.

Another educational software design company was also influenced by science standards in its development of educational simulations. After finishing its first simulation, a frog dissection simulation, the company began developing a series of digital field trips, which explored various ecosystems. Yet, the company then made a major strategic readjustment, moving away from digital field trips (even canceling a half-completed product) and choosing instead to focus on curriculum-specific units that meet particular standards. In this case, the standards seem to have robbed the software designers of some of their creativity, and certainly of their intellectual freedom to develop content, since they must develop content specifically to meet standards.

Educational standards also have a significant impact on the marketing of educational simulation software. For example, the website of one of the studied educational software manufacturers includes a page where teachers and administrators can find out how the company’s software meets specific science standards. The page includes the science standards of eight U.S. states (including four of the five largest states in the US) and two Canadian provinces. For nine of these states and provinces, the company has developed its own page, but in the case of California, the site links to an external page developed by the California Learning Resource Network, which evaluates the company’s simulations and how well they match up with California’s science standards. The California Learning Resource Network was established by the California Department of Education to evaluate educational software and its alignment to state academic content standards; its reviewers provide the information about particular software. Software must be submitted for review, and this particular software package is apparently the only frog dissection simulation that has been submitted for review by its designers. Thus, educational software manufacturers can conduct their own analyses of their software’s compliance with state and province standards and can also submit their software for review to bodies such as the California Learning Resource Network.

Teacher conferences are a good opportunity for simulation designers to interact directly with teachers. Frog dissection simulation designers routinely attend national, state, and local science teacher conferences in an effort to market their simulation products and boost their sales. Animal advocates representing various organizations also attend these conferences. The primary message of the animal advocates is that dissection simulation software and other alternatives to dissection can meet science standards as well as or better than the practice of wet-lab dissection. So, at teacher conferences, not only do dissection simulation designers argue that their products cover educational standards, but so do animal advocates. Further, animal advocates, both in their print materials and in the interviews, emphasize that state science standards do not contain any explicit requirement for students to participate in animal dissection, and that, in contrast, many states have passed dissection choice laws requiring teachers to provide alternatives such as dissection simulations to students who object to the activity of animal dissection on moral or ethical grounds (Fleischmann, 2003). Interviews with students revealed a variety of perspectives on the issue of alternatives to dissection, ranging from emphasis on student choice to a preference for required dissection.



Design and marketing of simulations for classroom use are also tied to technology standards. For many designers, it was their interest in technology and the growth of technology use that drew them to simulation design, rather than the particular content of the simulations that they participated in designing. Further, without the increased availability of computers in schools, brought about in part by technology standards, they would not have an adequate market for their products. In the marketing of educational simulations, an emphasis on the importance of learning about and with technology is often present in the literature of dissection simulation manufacturers as well as their allies in the animal advocacy movement (Fleischmann, 2003).

The most compelling quote relevant to the issue of standards in educational simulation design was provided by a simulation designer and former teacher, who argues, “The selection of the curriculum ought to reflect the local community…with a state-wide curriculum, there are needs that are not addressed or overly addressed in different areas of the state.” This designer argues that standards may be a step too far in the wrong direction – instead of standardizing everything, teachers should have the ability to tailor their curriculum to meet the needs of their local school district. Certainly, this issue has direct relevance to the design of educational software, since educational standards reduce not only teachers’ ability to meet the specific needs of their students but also designers’ ability to innovate and produce products that are relevant to particular target audiences.

Discussion: Top-Down Educational Standards Versus Local Knowledges

What are the motivations for implementing standards? According to Feng (2002), standards historically have been used to achieve several different goals, including consistency and efficacy. These motivations also apply in the case of educational standards such as science and technology standards. Feng argues that consistency is used to make arguments that standards can serve the cause of social justice, by serving as an equalizer. Swain and Pearson (2002) make a similar argument about technology standards. However, Feng cautions that the efficacy of standardization can lead to hegemony, as a form of uniformity from above. This potentially hegemonic nature is illustrated in the example provided by Evans (2002) above, where the state board of education uses standards to dictate content to school districts and schools. This power relationship is also present in the data provided above, such that teachers must follow state standards and software developers must produce products that not only follow state standards but also explicitly demonstrate which standards they address.

Do standards serve social justice? Swain and Pearson (2002) make a convincing argument that they do, since they can help to ensure that all schools, teachers, and students have equal access to equipment, training, and software that builds higher-order thinking skills. Yet, there is a danger that, swooping in from above, standards may ignore the social, cultural, and geographic context in which the students are learning. Monahan (2005a, 2005b) cautions that, in some cases, “the question of where standards are set and by whom determines where power is shifting to, on the one hand, and where autonomy is lost, on the other” (2005a, p. 601). When seen in this way, standards from above can be seen as endangering local control of educational experiences. The final quote provided in the results section above most clearly makes this point.

According to Geertz (1983) and Hess (1995), all knowledges are socially, culturally, and geographically situated. Hess relies on the case of medicine, and discusses the various non-Western medical traditions that, like biomedicine, have evolved over long periods of time, and which, at least in many cases, are becoming increasingly popular in the contemporary United States. Eglash (1999) demonstrates that local knowledges can also apply to areas such as mathematics. He finds that many African cultures developed a deep understanding of fractal geometry long before it was “discovered” in the West. He then puts this research into practice by using it to encourage achievement in mathematics by African-American students. In his current research, he makes a similar intervention into the teaching of Native American students. Thus, local knowledges can be useful for stimulating interest in an educational subject, especially among specific social, cultural, and geographical groups. While standards clearly have many benefits, as described above, they may also reduce the autonomy and specificity of local educational curricula when they are created and implemented through a largely top-down process.

Conclusion: A Bottom-Up Approach to Science and Technology Standards

Science and technology standards play a significant role in the design, marketing, and use of educational software. Certainly, there are benefits to science and technology standards, as discussed above. Yet, the standardization process is currently a largely top-down process, with design occurring primarily at the national and state level while implementation takes place at the local level with pressure applied from above, as explained by Hoff (2002) and Evans (2002). As demonstrated here, top-down state science standards have the effect of stifling innovation in software design, as in the case of two of the simulation design companies described above. Similarly, technology standards may not take into consideration the specific needs of local communities. After examining the impacts of the current top-down regime of science and technology standards on educational software design, marketing, and use in practice, it seems useful to consider an alternative approach to standard-setting, a bottom-up approach.

Is a top-down process the only or even the best way of designing and implementing science and technology standards? Standards-setting processes are often promoted as participatory, with input being sought from teachers and lower-level administrators. Yet, this is still a top-down approach, since standards are first set at the national or state level and then trickle down to districts and schools, which are compelled, often through incentives such as standardized testing and the purchasing of texts, software, and other equipment, to adopt the dominant standards. Further, at the state level, it is the large states that have the most impact on textbooks and software, creating inequality among the states. A bottom-up approach to educational standards would replace the centralized power of standard-setting bodies at the national level and within large states with a more diffuse power that is spread more evenly among schools and school districts, giving them more autonomy to control their own classroom content and giving software developers more room to innovate to meet the diverse needs of local schools.

Perhaps it would be useful to examine not only the effectiveness of this structure but also the potential of inverting it to allow for more local control of educational content and equipment. In such a scenario, schools and districts might begin the standard-setting process, which would then be built up to the state and then the national level through a process of consensus-building. Local standards-setters could get the input of larger educational bodies in an advisory role, rather than the reverse. A process of standardization from below might allow teachers, students, and administrators to reap the benefits of standardization discussed above while still retaining local autonomy and control over content, and leaving the door open for more innovation in educational software design, marketing, and use. Empowering teachers to serve not only as software designers (Fleischmann, 2006a) but also as standards-setters would allow them to control the content that they teach rather than merely implementing the wills of faceless standards boards, ensuring that they would be able to meet the real needs of the students in their classrooms.

Acknowledgements

Thanks go to David J. Hess, Bo Xie, and three anonymous reviewers for reading and commenting on earlier drafts of this paper, as well as to all of the interviewees (named and anonymous) who participated in this study. This study was funded by a Dissertation Research Improvement Grant from the Science and Society Program of the National Science Foundation (SES-0217996).

References

Awalt, C., & Jolly, D. (1999). An inch deep and a mile wide: Electronic tools for savvy administrators. Educational Technology & Society, 2 (3), 97-105.

Eglash, R. (1999). African fractals: Modern computing and indigenous design. New Brunswick, NJ: Rutgers University Press.

Evans, S. M. (2002). Aligning to state standards. Science Teacher, 69 (3), 54-57.

Feng, P. (2002). Designing a “global” privacy standard: Politics and expertise in technical standards-setting. Unpublished doctoral dissertation, Rensselaer Polytechnic Institute, Troy, NY.

Fleischmann, K. R. (2003). Frog and cyberfrog are friends: Dissection simulation and animal advocacy. Society and Animals, 11 (2), 123-143.

Fleischmann, K. R. (2004). Exploring the design-use interface: The agency of boundary objects in educational technology. Unpublished doctoral dissertation, Rensselaer Polytechnic Institute, Troy, NY.

Fleischmann, K. R. (2005). Virtual dissection and physical collaboration. First Monday, 10 (5), retrieved October 15, 2007, from http://www.firstmonday.org/issues/issue10_5/fleischmann/index.html.

Fleischmann, K. R. (2006a). Do-it-yourself information technology: Role hybridization and the design-use interface. Journal of the American Society for Information Science and Technology, 57 (1), 87-95.

Fleischmann, K. R. (2006b). Boundary objects with agency: A method for studying the design-use interface. The Information Society, 22 (2), 77-87.

Geertz, C. (1983). Local knowledge: Further essays in interpretive anthropology. New York: Basic Books.

Hess, D. J. (1995). Science and technology in a multicultural world. New York: Columbia University Press.

Hoff, D. J. (2002). Science standards have yet to seep into class, panel says. Education Week, 21 (37), 10.

Monahan, T. (2005a). The school system as a post-Fordist organization: Fragmented centralization and the emergence of IT specialists. Critical Sociology, 31 (4), 583-615.

Monahan, T. (2005b). Globalization, technological change, and public education. New York: Routledge.

National Education Association (2003). Timeline for future NCLB mandates. NEA Today, 21 (8), 34.

National Research Council (1996). National science education standards, retrieved October 15, 2007, from http://books.nap.edu/readingroom/books/nses/.

Owen, M. (1999). Appropriate and appropriated technology: Technological literacy and educational software standards. Educational Technology & Society, 2 (4), 62-69.

Strauss, A., & Corbin, J. (1998). Basics of qualitative research: Techniques and procedures for developing grounded theory (2nd Ed.). Thousand Oaks, CA: SAGE Publications.

Swain, C., & Pearson, T. (2002). Educators and technology standards: Influencing the Digital Divide. Journal of Research on Technology in Education, 34 (3), 326-335.


Hsu, Y.-S., Wu, H.-K., & Hwang, F.-K. (2007). Factors Influencing Junior High School Teachers’ Computer-Based Instructional Practices Regarding Their Instructional Evolution Stages. Educational Technology & Society, 10 (4), 118-130.

Factors Influencing Junior High School Teachers’ Computer-Based Instructional Practices Regarding Their Instructional Evolution Stages

Ying-Shao Hsu
Department of Earth Sciences & Science Education Center, National Taiwan Normal University, Taiwan // yshsu@ntnu.edu.tw

Hsin-Kai Wu
Graduate Institute of Science Education, National Taiwan Normal University, Taiwan // hkwu@ntnu.edu.tw

Fu-Kwun Hwang
Department of Physics, National Taiwan Normal University, Taiwan // hwang@phy.ntnu.edu.tw

ABSTRACT
Sandholtz, Ringstaff, & Dwyer (1996) list five stages in the “evolution” of a teacher’s capacity for computer-based instruction—entry, adoption, adaptation, appropriation and invention—which hereafter will be called the teacher’s computer-based instructional evolution. In this study of approximately six hundred junior high school science and mathematics teachers in Taiwan who have integrated computing technology into their instruction, we correlated each teacher’s stage of computer-based instructional evolution with factors such as attitude toward computer-based instruction, belief in the effectiveness of such instruction, degree of technological practice in the classroom, the teacher’s number of years of teaching experience (or “seniority”), and the teacher’s school’s ability to acquire technical and personnel resources (i.e. computer support and maintenance resources). We found, among other things, that the stage of computer-based instructional evolution and teaching seniority, two largely independent factors, both had a significant impact on the technical and personnel resources available in their schools. Also, we learned that “belief” in the effectiveness of computer-based instruction is the single biggest predictor of a teacher’s successful practice of it in the classroom. Future research therefore needs to focus on how we can shape teachers’ beliefs regarding computer-based learning in order to promote their instructional evolution.

Keywords
Technology adoption, Teachers’ beliefs, Educational technology, In-service teachers

Introduction

The rapid development of modern information and communication technologies has opened new possibilities for establishing and delivering distance learning. Given the popularity of the Internet, computer applications have recently become one of the most promising kinds of educational tools. Computers can now help educators in designing and promoting the teaching and learning process (Ministry of Education in Taiwan, 1999; Sinko & Lehtinen, 1999; Smeets, Mooij, Bamps, Bartolomé, Lowyck, Redmond, & Steffens, 1999). Studies (Angeli & Valanides, 2005; Hsu, Cheng, & Chiou, 2003) show that computers and/or Internet technology have positive impacts on students’ learning only when teachers know how to use them to promote students’ knowledge construction and thinking. How can teachers use computers and/or Internet technology to promote students’ meaningful learning?

Firstly, the teacher’s role should no longer be that of a traditional lecturer; rather, the teacher must now be a coach or co-learner (Beaudoin, 1990; Brophy & Good, 1986). Secondly, activities in the classroom should become learner-centered and flexible in order to help students organize information and undergo self-initiated, exploratory learning processes (McKenzie, Mims, Davidson, Hurt & Clay, 1996; Winn, 1993). With computers and Internet technology, a teacher can utilize online teaching resources to arrange flexible learning activities; these can assist students in analyzing and organizing large amounts of information. Thirdly, the teacher’s attitude toward computers will be important to the way computer-based technology is used in instruction (Beaudoin, 1990; Ercan & Ozdemir, 2006; Gardner, Discena & Dukes, 1993). Lloyd and Gressard (1984) have pointed out that a teacher’s positive feelings about computers will also help to generate or reinforce such feelings in the students. Comber et al. (1997) found that younger teachers might have more experience in computer use and thus a more positive attitude toward computers (Jennings & Onwuegbuzie, 2001). Braak (2001) noted that personal acceptance of technological innovation would influence attitudes toward computers, and furthermore that computer experience tends to directly affect attitudes toward computers (Kay, 1989; Gardner, Discena & Dukes, 1993; Woodrow, 1994; Yildirim, 2000).

Teachers need to adapt to the impact of computing technology while integrating it into their classrooms. During this adaptation process, a teacher may need to change his or her educational beliefs, values, and rationale in order to properly integrate computing and Internet technology into daily teaching. A researcher who wants to understand a teacher’s adaptation process should investigate the changes in the teacher’s teaching strategies, the design of his/her learning activities, and students’ assessments (Beaudoin, 1990; Brophy & Good, 1986; McKenzie, Mims, Davidson, Hurt & Clay, 1996; Thatch & Murphy, 1995; Verduin & Clark, 1991). Therefore, integrating technology into instruction requires dealing not only with hardware and software issues but also with complex issues like human cognition, policy and values (Sandholtz, Ringstaff, & Dwyer, 1996). For instance, teachers’ knowledge (OTA, 1995; Pelgrum, 2001), educational rationales (Czerniak & Lumpe, 1996; Niederhauser & Stoddart, 2001; Ruthven, Hennessy & Brindley, 2004) and instructional strategies (Becker, 2000a, 2000b; Ravitz, Becker & Brindley, 2000) about integrating computers into instruction affect students’ meaningful learning with computers. Educational policy, curriculum standards, school culture and peer support influence teachers’ intention to use computers in classrooms (Chiero, 1997; Mooij & Smeets, 2001; Rogers, 2000; Russell, Bebell, O’Dwyer & O’Connor, 2003; Ruthven et al., 2004; Windschitl & Sahl, 2002; Teo & Wei, 2001; Zhao & Frank, 2003). Above all, the key to successful computer-based instruction, especially with teachers who are new to it, is to find methods which can help teachers face and adjust their beliefs, attitude, and instructional strategies regarding computer-based instruction.

The studies above identify a comprehensive list of factors influencing computer-based instruction, but a systematic investigation is still needed to reveal the interactions among these factors and the possibility of incorporating computers into a broader educational reform context. This study takes structural factors such as teachers’ instructional evolution and teaching seniority into consideration and explores how they interact with teachers’ beliefs regarding, attitudes toward and practices of computer-based teaching. We conducted a national survey in Taiwan and collected a pool of six hundred science and mathematics teachers who had integrated computing technology into their instruction in junior high schools. The following questions guided this study: (1) How many teachers are in the different stages of instructional evolution as defined by Sandholtz, Ringstaff and Dwyer (1996)? (2) How do teachers’ instructional evolution and teaching seniority interact with their beliefs regarding, attitudes toward and practices of computer-based instruction? (3) What regression models can be proposed to examine the relationships among teachers’ beliefs, attitudes, practices, and resources when using computing technology in the classroom?

Research Methods

This research project employed a survey method to investigate teachers’ beliefs regarding, attitudes toward and practices of computing technology in the classroom.

Sample selection

The population under investigation included all science and mathematics teachers at junior high schools in Taiwan. A stratified cluster random sampling technique was employed to select the sample according to school size (large: more than 37 classes; middle: 16-36 classes; small: less than 15 classes) and school region (schools in the northern, middle, southern, and eastern parts of Taiwan). Among the 892 junior high schools in Taiwan, approximately 11% (99 schools) were selected. Of the 2019 questionnaires mailed out, 1002 replies from 82 schools were received (an 89.1% school response rate and a 49.6% teacher response rate). After discarding the questionnaires that were not filled in completely, the valid sample size was determined to be 940. Of the valid sample, the 613 respondents who reported that they had used computing technology in their teaching were finally examined.
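As a rough illustration of this sampling scheme, the sketch below draws whole schools (clusters) within each size-by-region stratum. The data file, column names, and the uniform ~11% per-stratum fraction are my assumptions; the paper reports only the strata and the overall 99-of-892 selection.

```python
import pandas as pd

# Hypothetical roster of all 892 junior high schools. 'size' is one of
# {'large', 'middle', 'small'} and 'region' one of {'north', 'middle',
# 'south', 'east'}; the file and column names are assumptions.
schools = pd.read_csv("junior_high_schools.csv")

# Stratify by school size x region, then randomly draw whole schools
# (clusters) within each stratum; all science and mathematics teachers
# in a drawn school receive the questionnaire.
sampled = schools.groupby(["size", "region"]).sample(frac=0.11, random_state=1)
print(len(sampled))  # roughly 99 of 892 schools
```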

Instrumentation

We developed the questionnaire to collect information on teachers’ use of computing technology in their classrooms. The questionnaire was divided into three sections: (1) demographic background, (2) current stage of instructional evolution with computing technology, and (3) perceptions and practices of computer-based instruction. Demographic background included information about age, gender, teaching seniority, school size and school region. Items in the latter two sections were rated on a 5-point Likert-type scale from 1 (strongly disagree) to 5 (strongly agree). From data in the second section, the respondents were classified into five evolutionary stages (entry, adoption, adaptation, appropriation and invention) to indicate their level of computer use, according to the definitions of Sandholtz, Ringstaff, & Dwyer (1996): (1) entry stage: teachers spend a lot of time installing software and managing hardware, and students spend most of their time learning computer skills instead of subject content; (2) adoption stage: teachers utilize software (e.g., word processors, Excel, etc.) to assist their traditional teaching; (3) adaptation stage: teachers apply various software for instructional purposes and integrate technology successfully in classrooms; (4) appropriation stage: teachers develop multiple teaching strategies to promote students’ cognitive ability, share computer-based teaching experience with other teachers, and feel confident in integrating technology into teaching; (5) invention stage: teachers lead students to use software as a learning tool, develop innovative teaching strategies and assessments with computers, and affirm the value of computer-based instruction.

Table 1 shows teachers’ instructional evolution with computers in terms of five categories: classroom management, software use, teaching strategies, learning efficiency, and confidence and beliefs. There is an item for each of the five “evolutionary” stages plotted against the five categories: thus 25 items in all. The respondents needed to pick one description for each category to represent their current level of experience with computing technologies. The rounded average of the values (from 1: entry to 5: invention) in these five categories represents the current stage of the teacher’s computer-based instructional evolution.
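In other words, the classification rule reduces to rounding the mean of the five category ratings. A minimal sketch of that rule follows; the function and variable names are mine, not the authors’, and the tie-breaking behaviour is an assumption since the paper does not say how x.5 averages are handled.

```python
# Stage labels keyed by the 1-5 codes used in the questionnaire's second
# section, per Sandholtz, Ringstaff, & Dwyer (1996).
STAGES = {1: "entry", 2: "adoption", 3: "adaptation",
          4: "appropriation", 5: "invention"}

def instructional_stage(category_values):
    """Map one teacher's five category picks (one per Table 1 category,
    each coded 1-5) to a stage via the rounded average. Note: Python's
    round() rounds halves to even; the paper's tie rule is unspecified."""
    assert len(category_values) == 5
    return STAGES[round(sum(category_values) / 5)]

# A teacher picking mostly adoption/adaptation descriptions averages 2.6,
# which rounds to 3: the adaptation stage.
print(instructional_stage([2, 3, 3, 2, 3]))  # -> adaptation
```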

Table 1. Characteristics of the Stages of Instructional Evolution (Sandholtz, Ringstaff, & Dwyer, 1996)

Entry. Classroom management: reacting to problems. Software use: dealing with problems in software installations and management. Teaching strategies: no change. Learning efficiency: students spend time in learning computer skills. Confidence and beliefs: no faith in computer-based instruction; having doubts most of the time.

Adoption. Classroom management: anticipating and developing strategies for solving problems. Software use: learning software. Teaching strategies: designing activities to teach students computer skills. Learning efficiency: promoting learning motivation but not improving conceptual understanding (sometimes having a negative impact on students’ grades). Confidence and beliefs: attempting to use technology in classrooms.

Adaptation. Classroom management: utilizing the technological advantage in managing the classroom. Software use: using software for instructional purposes. Teaching strategies: integrating technology to improve students’ knowledge comprehension. Learning efficiency: reducing students’ learning load, with no significant improvement in conceptual understanding. Confidence and beliefs: often integrating technology successfully in classrooms.

Appropriation. Classroom management: intertwining instruction approaches and management strategies. Software use: integrating software in learning processes and enhancing students’ mutual support in software use. Teaching strategies: using multiple methods to promote students’ cognitive ability. Learning efficiency: cultivating cognitive ability. Confidence and beliefs: having confidence in integrating technology into teaching; sharing experiences with other teachers.

Invention. Software use: leading students to use software as a learning tool. Teaching strategies: developing strategies for innovative teaching such as project-based learning, modeling, etc. Learning efficiency: promoting problem-solving ability. Confidence and beliefs: affirming the value of computer-based instruction.


In the third section, factor analytical techniques were used to determine the underlying structure of teachers’ responses to items. Principal axis factor analysis with varimax rotation was employed. The results for both the Kaiser-Meyer-Olkin Measure of Sampling Adequacy (0.85) and the Bartlett Test of Sphericity (χ2 = 6803.8, N = 613, p < 0.0001) were significant, indicating that factor analysis was suitable for this sample. By using Cattell’s scree test and examining the factor loadings of the items, we removed 13 items from the questionnaire; five factors emerged. According to the pattern of correlation between and among items and factors, we assigned a descriptive name to each factor: (1) belief in the use of computing technology in classrooms (e.g., “I believe that technology-based instruction can improve learning achievement”); (2) high degree of interactive use of technology (e.g., “I have had students learn collaboratively through the Internet”); (3) technical and personnel resources available in a given teacher’s school (e.g., “In my school, there are enough technicians to maintain computers”); (4) low degree of interactive use of technology (e.g., “I have used computers to play videos in classrooms”); and (5) attitude toward technology-based instruction (e.g., “Learning software will not make me nervous and uncomfortable”). The eigenvalues of the five factors were 4.82, 2.13, 1.73, 1.24, and 1.01. Across the 24 retained items, the average variance extracted by the factors ranged from 0.53 to 0.70, and 58% of the variance was explained by the five factors (see details in Table 2). As shown in Table 2, the composite reliability coefficients ranged from 0.69 to 0.89 and the overall instrument reliability reached 0.87. Therefore, the reliability of the instrument was established.
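A roughly equivalent analysis can be sketched with open-source tools. The sketch below assumes the Python factor_analyzer package and a hypothetical CSV of the retained items; it is an illustration of the reported procedure, not the authors’ original code.

```python
import pandas as pd
from factor_analyzer import FactorAnalyzer
from factor_analyzer.factor_analyzer import (calculate_bartlett_sphericity,
                                             calculate_kmo)

# Hypothetical file holding the retained Likert items (coded 1-5),
# one row per respondent (N = 613); the file name is an assumption.
items = pd.read_csv("questionnaire_section3.csv")

# Sampling adequacy checks mirroring those reported above
# (KMO = 0.85; Bartlett chi-square = 6803.8, p < 0.0001).
chi2, p = calculate_bartlett_sphericity(items)
_, kmo_total = calculate_kmo(items)
print(f"Bartlett chi2 = {chi2:.1f}, p = {p:.4f}; KMO = {kmo_total:.2f}")

# Principal axis factoring with varimax rotation, extracting five factors.
fa = FactorAnalyzer(n_factors=5, method="principal", rotation="varimax")
fa.fit(items)
print(fa.get_eigenvalues())   # compare with 4.82, 2.13, 1.73, 1.24, 1.01
print(fa.loadings_.round(2))  # item-by-factor loading matrix
```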

Data Analysis

We used frequency analysis to show the distribution of teachers across the stages of instructional evolution, and MANOVA techniques to examine the main and interaction effects of two factors (teaching seniority and stage of computer-based instructional evolution) on five measures (beliefs, high-interaction practices, technical and personnel resources, low-interaction practices, and attitudes). The stepwise method of multiple linear regression was applied to indicate the relationships between and among these variables. The coefficients of the regression models and the MANOVA were computed with the SPSS 12.0 package.
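As a companion to this description, here is a minimal sketch of the 5 × 5 MANOVA in Python with statsmodels; the data frame `df`, its column names, and the file name are hypothetical, and the code illustrates the design rather than reproducing the original SPSS run.

```python
# Sketch: two-factor MANOVA on the five dependent measures
# (hypothetical column names; categorical factors with 5 levels each).
import pandas as pd
from statsmodels.multivariate.manova import MANOVA

df = pd.read_csv("teacher_scales.csv")  # hypothetical data file

manova = MANOVA.from_formula(
    "beliefs + high_practice + resources + low_practice + attitudes"
    " ~ C(stage) * C(seniority)",
    data=df,
)
print(manova.mv_test())  # Wilks' lambda, Pillai's trace, etc. per effect
```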

Table 2. Questionnaire Items and Factor Loadings
(Per item: factor loading; subscale-score mean and S.D. Per factor: average variance extracted (AVE) and composite reliability (CR).)

Belief (AVE = 0.53; CR = 0.89)
e19 I think technology is helpful for my teaching. (loading 0.73; M 3.60, SD 0.73)
e26 I believe that technology-based instruction can improve learning achievement. (0.71; 3.51, 0.78)
e9 I think technology-based instruction is one of the future trends in education. (0.69; 3.93, 0.76)
e27 I believe that technology-based instruction can make my teaching more lively and energetic. (0.68; 3.89, 0.67)
e21 I should create different teaching strategies for technology-based instruction. (0.68; 3.74, 0.67)
e25 I believe that technology-based teaching can increase students' motivation. (0.68; 3.81, 0.66)
e20 Using technology can help me share my teaching experiences with others. (0.68; 3.59, 0.68)
e10 I am willing to follow school policy on implementing technology-based instruction. (0.66; 3.80, 0.71)
e22 I should develop different assessment strategies for technology-based instruction. (0.63; 3.72, 0.68)
e1 I believe that conventional teaching methods are more efficient than technology-based instruction. (0.54; 3.35, 0.85)

High-interaction practices (behaviors) (AVE = 0.70; CR = 0.84)
e35 I have designed activities that allow students to learn through the Internet. (0.84; 2.97, 1.04)
e36 I have had students learn collaboratively through the Internet. (0.84; 2.86, 1.02)
e37 I have used the Internet to support individual learning. (0.80; 2.66, 1.01)
e33 I have used computers and the Internet to collect and grade students' assignments. (0.72; 3.24, 1.09)

Resources (AVE = 0.55; CR = 0.72)
e17 In my school, there are enough technicians to maintain computers. (0.83; 3.06, 1.01)
e16 In my school, administrators can provide hardware and software for supporting technology-based instruction. (0.77; 3.40, 0.90)
e13 In my school, teachers often discuss computer-related topics and exchange ideas about computer hardware and software. (0.65; 3.03, 0.87)
e14 In my school, teachers often surf teaching websites. (0.56; 3.55, 0.77)

Low-interaction practices (behaviors) (AVE = 0.56; CR = 0.72)
e32 I have used educational software to promote learning. (0.74; 3.54, 0.95)
e30 I have used computers to play videos in classrooms. (0.70; 3.55, 0.96)
e34 I have used computer applications to create pictures, videos and animations and used them in classrooms. (0.61; 3.55, 0.92)
e31 I have used document management software, such as Word and PowerPoint, to display my syllabus and my lectures. (0.58; 4.02, 0.79)

Attitude toward computer technology (AVE = 0.62; CR = 0.69)
e3 I will not feel anxious when I take any computer-related courses. (0.87; 3.61, 0.97)
e2 Learning software will not make me feel nervous and uncomfortable. (0.82; 3.67, 0.84)
e5 Currently, information on the Internet is useful to my teaching. (0.51; 3.71, 0.80)

Instrument reliability: 0.87
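The paper does not spell out how the "average variance extracted" and "composite reliability" columns of Table 2 are computed; assuming the standard Fornell-Larcker definitions, for a factor with k standardized loadings λᵢ they are:

```latex
\mathrm{CR} \;=\; \frac{\bigl(\sum_{i=1}^{k}\lambda_i\bigr)^{2}}
                       {\bigl(\sum_{i=1}^{k}\lambda_i\bigr)^{2} + \sum_{i=1}^{k}\bigl(1-\lambda_i^{2}\bigr)},
\qquad
\mathrm{AVE} \;=\; \frac{1}{k}\sum_{i=1}^{k}\lambda_i^{2}
```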

Results and Discussion

The results are presented in three sections. The first section shows the descriptive statistics associated with the two factors (stage of computer-based instructional evolution and teaching seniority) and the five dependent measures (beliefs, high-interaction practices, low-interaction practices, resources, and attitudes). The second section presents the results of the MANOVA and the interactive effects between the stage of instructional evolution and teaching seniority in relation to the five dependent measures. The third section outlines the relationships among beliefs, high- and low-interaction practices, resources, and attitudes.

Descriptive analyses

According to the participants' responses in the second section of the questionnaire (current stage of instructional evolution with computing technology), more than one-third of the teachers (about 37%) were in the third or "adaptation" stage; teachers in that stage can use appropriate software and information technology to improve their teaching and students' learning in science and mathematics. Roughly one-fifth of the teachers were in each of the "entry" stage (15%), the "adoption" stage (23%), and the "appropriation" stage (18%). Very few teachers (about 7%) were in the final "invention" stage; teachers in this stage need to be able to use computer-based instruction in a creative way, guiding students to use software as a learning tool in their knowledge construction, modeling, and communication. In short, most of the surveyed Taiwanese junior high school science and mathematics teachers who had integrated computing technology into their instruction were in the adaptation and appropriation stages; that is, they usually applied computer software for instructional purposes, developed multiple teaching strategies to promote students' cognitive ability, and felt confident in integrating technology into teaching.

Teachers' degree of seniority can indicate the degree of their teaching experience, pedagogical content knowledge (PCK), subject knowledge, and computer skills. Teachers with less than 10 years of seniority tended to have more computer skills because they were more likely to have taken computer-based instruction courses in their teacher-training programs. In contrast, teachers who had already taught for more than 11 years tended to lack training in computer skills and to feel anxious about computers, while also having a higher degree of PCK due to their greater teaching experience. As Figure 1 shows, most teachers with low seniority are in the third or "adaptation" stage: 43% of the teachers who have taught for less than 5 years are in this stage. This means that most of them often integrate technology into their instruction but cannot use multiple methods to promote students' conceptual understanding and cognitive ability because they lack teaching experience, even though their computer skills are probably excellent. About 76% of this low-seniority group are in the first three stages (entry, adoption, and adaptation). On the other hand, more of the teachers whose seniority ranged between 16 and 20 years were in the fourth or "appropriation" stage (25%). This means that teachers with more teaching experience and greater PCK can use multiple methods to integrate technology into their classroom teaching; however, with greater seniority the percentage of teachers in the "entry" and "adoption" stages increased as well, suggesting again that younger teachers' greater computer skills and more positive attitudes toward computers also affect computer-based instructional evolution.

Figure 1. Frequencies of teachers' seniorities at each evolutionary stage

Effects of instructional evolution and teaching seniority

Table 3 outlines the mean scale scores and standard deviations for beliefs, high-interaction practices, low-interaction practices, resources, and attitudes by stage and by seniority. Compared with the teachers in the entry stage, the teachers in the appropriation and invention stages tended to have higher mean scores on all of the dependent measures. It is not surprising that the teachers in higher stages of instructional evolution seemed to hold more positive beliefs and attitudes toward technology and to have more resources to support technology-based instruction. In order to examine the effects of computer-based instructional evolution and teaching seniority on beliefs, high-interaction practices, low-interaction practices, resources, and attitudes, a 5 (stages) × 5 (levels of teaching experience or seniority) MANOVA was employed. As Table 4 shows, there were significant differences regarding teachers' beliefs in, attitudes toward, and practices of computer-based instruction. For instance, teachers in the "adaptation" and "appropriation" stages tended to have more positive beliefs about and attitudes toward computer-based instruction than those still in the "entry" stage; teachers in the "entry" stage also tended to perform fewer low-interaction computer-based activities than those in the "appropriation" and "invention" stages, and remained behind the "invention" stage as regards the degree of interactivity of their computer-based learning practices. The results imply a few things: (1) teachers in the later stages (adaptation, appropriation, and invention) of computer-based instructional evolution held more positive beliefs and attitudes and practiced more computer-based instruction in the classroom; (2) teachers with positive beliefs and attitudes possibly moved to the later stages of computer-based instructional evolution and intended to practice computer-based instruction; (3) successful teaching experiences in computer-based instructional evolution could promote teachers' positive beliefs and attitudes and encourage them to practice more computer-based instruction in the classroom. Therefore, teachers' differing computer-based instructional evolution could be due to their different beliefs and attitudes; in turn, experiences of computer-based teaching practice could feed back into teachers' beliefs, attitudes, and computer-based instructional evolution.

As we can see in Table 4, teachers' stage of instructional evolution and degree of teaching seniority had a significant impact on the amount of technical and personnel resources available at their schools. Since the interaction between teachers' instructional evolution and seniority reached a significant level, an ANOVA simple-effect analysis was conducted (see Table 5 and Figure 2). The results showed that teachers in the "entry" stage whose seniority was 6-10 years reported that they had fewer technical and personnel resources available in their school than "entry"-stage teachers with more than 21 years of seniority; these same teachers with 6-10 years of seniority in the "entry" stage also reported, very predictably, that they had fewer technical and personnel resources available in school than teachers with 6-10 years of seniority in the "appropriation" and "invention" stages. This suggests that teachers who hold computer-instruction skills and pedagogies could move to the later stages of computer-based instructional evolution if technical and personnel resources are available in their school.

Table 3. Stage and Seniority vs. Beliefs, Practices, Resources, and Attitudes (cell entries are Mean (SD))

Condition        Beliefs       Resources     High practice  Low practice  Attitudes

< 5 years
  Entry          3.51 (0.52)   3.23 (0.84)   2.86 (0.92)    3.66 (0.71)   3.32 (0.81)
  Adoption       3.63 (0.44)   3.17 (0.59)   2.88 (0.78)    3.62 (0.61)   3.65 (0.69)
  Adaptation     3.70 (0.49)   3.20 (0.67)   2.92 (0.87)    3.85 (0.56)   3.86 (0.62)
  Appropriation  3.94 (0.45)   3.39 (0.65)   3.10 (0.92)    3.99 (0.51)   4.00 (0.56)
  Invention      3.65 (0.40)   3.23 (0.79)   3.61 (0.66)    3.78 (0.76)   3.43 (0.83)

6-10 years
  Entry          3.29 (0.52)   2.77 (0.60)   2.52 (0.81)    3.42 (0.79)   3.49 (0.74)
  Adoption       3.64 (0.52)   3.20 (0.63)   2.95 (0.95)    3.73 (0.51)   3.71 (0.52)
  Adaptation     3.85 (0.45)   3.26 (0.66)   2.95 (0.93)    3.81 (0.63)   3.93 (0.59)
  Appropriation  3.99 (0.45)   3.61 (0.59)   3.32 (0.75)    4.01 (0.71)   4.09 (0.60)
  Invention      3.98 (0.40)   3.79 (0.54)   3.71 (0.91)    4.10 (0.55)   3.36 (1.00)

11-15 years
  Entry          3.46 (0.57)   3.36 (0.64)   2.71 (0.82)    3.25 (0.47)   3.58 (0.67)
  Adoption       3.63 (0.44)   3.10 (0.60)   2.90 (0.58)    3.33 (0.62)   3.56 (0.64)
  Adaptation     3.74 (0.51)   3.20 (0.72)   2.86 (0.86)    3.76 (0.57)   3.62 (0.68)
  Appropriation  3.82 (0.51)   3.22 (0.56)   3.17 (0.77)    3.56 (0.73)   3.74 (0.65)
  Invention      3.47 (0.60)   2.82 (0.76)   2.39 (0.81)    3.32 (0.70)   3.95 (0.36)

16-20 years
  Entry          3.19 (0.58)   3.09 (0.53)   2.25 (0.74)    3.03 (0.62)   3.33 (0.71)
  Adoption       3.62 (0.44)   3.21 (0.47)   2.71 (0.99)    3.56 (0.55)   3.62 (0.49)
  Adaptation     3.69 (0.41)   3.37 (0.65)   2.65 (0.55)    3.58 (0.72)   3.56 (0.63)
  Appropriation  3.90 (0.59)   3.17 (0.61)   3.21 (0.61)    3.75 (0.54)   3.72 (0.69)
  Invention      3.78 (0.57)   3.38 (0.48)   2.81 (1.07)    2.94 (1.23)   2.83 (0.33)

> 21 years
  Entry          3.53 (0.52)   3.43 (0.46)   3.00 (0.96)    3.32 (0.75)   3.13 (0.59)
  Adoption       3.56 (0.47)   3.30 (0.59)   2.76 (0.59)    3.35 (0.62)   3.27 (0.62)
  Adaptation     3.67 (0.58)   3.42 (0.70)   2.76 (0.80)    3.49 (0.74)   3.39 (0.71)
  Appropriation  3.76 (0.44)   3.57 (0.59)   2.70 (0.96)    3.57 (0.75)   3.67 (0.47)
  Invention      4.07 (0.12)   3.50 (0.43)   3.58 (1.28)    4.17 (0.52)   4.33 (0.33)

Total by seniority
  < 5 years      3.70 (0.48)   3.23 (0.68)   2.98 (0.87)    3.79 (0.61)   3.74 (0.69)
  6-10 years     3.76 (0.52)   3.29 (0.68)   3.03 (0.92)    3.80 (0.67)   3.79 (0.68)
  11-15 years    3.65 (0.51)   3.18 (0.65)   2.87 (0.77)    3.49 (0.63)   3.64 (0.64)
  16-20 years    3.65 (0.53)   3.24 (0.56)   2.75 (0.79)    3.49 (0.70)   3.52 (0.63)
  > 21 years     3.65 (0.51)   3.42 (0.60)   2.82 (0.83)    3.46 (0.71)   3.40 (0.66)

Total by stage
  Entry          3.42 (0.54)   3.17 (0.69)   2.71 (0.87)    3.39 (0.69)   3.39 (0.72)
  Adoption       3.62 (0.45)   3.18 (0.58)   2.87 (0.76)    3.53 (0.60)   3.58 (0.63)
  Adaptation     3.73 (0.49)   3.26 (0.68)   2.88 (0.85)    3.76 (0.62)   3.75 (0.66)
  Appropriation  3.90 (0.47)   3.41 (0.62)   3.13 (0.84)    3.83 (0.66)   3.90 (0.60)
  Invention      3.76 (0.47)   3.37 (0.73)   3.37 (0.95)    3.75 (0.79)   3.50 (0.83)


Table 4. Summary of MANOVA Results (F and effect size (E.S.) per dependent measure)

Effect          Beliefs           Resources     High practice   Low practice     Attitude
Main effect
  Stages        11.86*** (0.29)   1.76 (0.11)   4.29** (0.17)   6.51*** (0.21)   6.00*** (0.20)
Post Hoc (1


Regression models indicating relationships

The questions in the third section of the questionnaire were grouped by dependent variable (high- and low-interaction practices) and independent variable (beliefs, attitudes, and resources). The mean of the responses was then used to represent each variable, since the mean is one of the most widely used parameters for representing a group of values (Moore & Benbasat, 1991; Holcombe, 2000). After calculating these means, we analyzed the correlations between the variables. As we can see in Table 6, there were significant correlations among teachers' beliefs, attitude toward computers, available resources, and teaching practices (including high- and low-interaction practices). Beliefs show the greatest correlation with the dependent variables (high- and low-interaction practices); beliefs were therefore expected to have the greatest impact in the regression models.

Table 6. Correlation Matrix for the Five Teacher Factors (N = 613)

Pearson correlation   Beliefs   H-Practice   L-Practice   Attitude   Resources
Beliefs               1
H-Practice            0.31**    1
L-Practice            0.48**    0.44**       1
Attitude              0.33**    0.09*        0.32**       1
Resources             0.27**    0.21**       0.16**       0.05       1

** Correlation is significant at the 0.01 level (2-tailed).
* Correlation is significant at the 0.05 level (2-tailed).

As we can see in Table 7, the coefficients of the multiple linear regressions fall into two categories: (1) models in which the dependent variable is high-interaction practices, and (2) models in which the dependent variable is low-interaction practices. In each category six regression models were calculated, one for all subjects and five for the subjects grouped into the five stages, in order to compare the different contributions of the independent variables (beliefs, attitudes toward computers, and available resources). Teachers' beliefs and available resources turned out to be the major predictors of their high-interaction practices. Moreover, all of the t values reached significance (0.05), which means that the independent variables (beliefs and available resources) contributed significantly to the prediction of the dependent variable (high-interaction practices). The same procedure was used to analyze the subjects in the five stages of computer-based instructional evolution. As we can see in Table 7, beliefs are the greatest predictor of high-interaction practices except at the "entry" stage, where teachers' available resources and beliefs together are the best predictors of this variable.

By contrast, beliefs and attitudes contribute significantly to the prediction of low-interaction practices. As we can see, beliefs are the greatest predictor of these practices except at the "adaptation" stage. In this stage, teachers' beliefs and attitudes best predict that teachers will engage in low-interaction activities when they integrate computer-based technology into their classrooms. In conclusion, teachers' beliefs are the most reliable predictors of their computer-based instructional practices. Similar findings can be found in many studies (Czerniak & Lumpe, 1996; Niederhauser & Stoddart, 2001; Ruthven et al., 2004; Sandholtz, Ringstaff & Dwyer, 1996).
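The stepwise selection reported in Table 7 can be approximated outside SPSS. The sketch below implements simple forward selection with statsmodels (the removal step of full stepwise regression is omitted for brevity); the data frame `df` and its column names are hypothetical.

```python
# Sketch: forward stepwise multiple linear regression, similar in spirit
# to SPSS's stepwise method used in the paper.
import pandas as pd
import statsmodels.api as sm

def forward_stepwise(df, dependent, candidates, p_enter=0.05):
    """Add, one at a time, the candidate predictor with the smallest
    p-value, as long as that p-value is below p_enter."""
    selected = []
    while True:
        remaining = [c for c in candidates if c not in selected]
        pvals = {}
        for c in remaining:
            X = sm.add_constant(df[selected + [c]])
            model = sm.OLS(df[dependent], X).fit()
            pvals[c] = model.pvalues[c]
        if not pvals:
            break
        best = min(pvals, key=pvals.get)
        if pvals[best] < p_enter:
            selected.append(best)
        else:
            break
    X = sm.add_constant(df[selected])
    return sm.OLS(df[dependent], X).fit()

# Example (hypothetical columns): predict high-interaction practices
# from beliefs, attitudes, and resources.
# model = forward_stepwise(df, "high_practice", ["beliefs", "attitude", "resources"])
# print(model.summary())
```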

Table 7. Summary of Regression Models

High-Interaction Practice
Group                  Variable    Beta   S.E.   t (p)          R
Total (N=613)          Constant    0.63   0.26   2.41 (.016)    .34
                       Beliefs     0.47   0.07   6.97 (.000)
                       Resources   0.18   0.05   3.37 (.001)
Entry (N=89)           Constant    0.29   0.56   0.51 (.061)    .44
                       Resources   0.34   0.14   2.50 (.015)
                       Beliefs     0.39   0.18   2.19 (.031)
Adoption (N=141)       Constant    1.82   0.51   3.55 (.001)    .17
                       Beliefs     0.29   0.14   2.04 (.043)
Adaptation (N=229)     Constant    1.41   0.42   3.33 (.001)    .23
                       Beliefs     0.39   0.11   3.49 (.001)
Appropriation (N=111)  Constant    0.31   0.61   0.50 (.615)    .41
                       Beliefs     0.73   0.16   4.71 (.000)
Invention (N=43)       Constant    0.21   1.09   0.19 (.850)    .42
                       Beliefs     0.84   0.29   2.93 (.006)

Low-Interaction Practice
Group                  Variable    Beta   S.E.   t (p)          R
Total (N=613)          Constant    0.97   0.19   5.19 (.000)    .51
                       Beliefs     0.56   0.05   11.46 (.000)
                       Attitude    0.17   0.04   4.73 (.000)
Entry (N=89)           Constant    1.85   0.45   4.14 (.000)    .35
                       Beliefs     0.45   0.13   3.45 (.001)
Adoption (N=141)       Constant    1.38   0.37   3.74 (.000)    .45
                       Beliefs     0.59   0.10   5.88 (.000)
Adaptation (N=229)     Constant    1.00   0.30   3.34 (.001)    .53
                       Beliefs     0.48   0.08   6.23 (.000)
                       Attitude    0.26   0.06   4.55 (.000)
Appropriation (N=111)  Constant    0.65   0.47   1.37 (.172)    .56
                       Beliefs     0.48   0.12   4.00 (.000)
                       Attitude    0.34   0.09   3.69 (.000)
Invention (N=43)       Constant    0.15   0.83   0.18 (.855)    .57
                       Beliefs     0.96   0.22   4.39 (.000)

Conclusion

The purpose of this study was to identify the factors that influence teachers' computer-based instructional practices. After conducting a survey, we found teachers' beliefs, attitudes toward computers, and available school resources to be the factors that most affected their computer-based teaching practices. The contribution of each factor to teachers' computer-based instruction differs across the stages of instructional evolution. Teachers' beliefs and available resources are the major predictors of their high-interaction practices; beliefs and attitudes contribute significantly to the prediction of low-interaction practices. Overall, teachers' beliefs are the strongest predictor of the way in which they practice computer-based instruction. The next question is then: what affects teachers' beliefs, attitudes, and available resources in schools? The findings indicated that teachers in the "adaptation" and "appropriation" stages have more positive beliefs and attitudes regarding computer-based instruction than those still in the "entry" stage, while their teaching seniority has no significant influence on beliefs, attitudes, and available resources. The interaction between teachers' instructional evolution and seniority reached a significant level when measured against the dependent variable of available resources in school. Teachers with different seniorities and at different stages of instructional evolution regarded "available resources in school" as a crucial factor. Teachers who had taught for 6 to 10 years and were at the "entry" stage reported that they had fewer technical and personnel resources available in school than the teachers in the "appropriation" and "invention" stages. In other words, after teachers have taught for 6 to 10 years they could move to the "appropriation" and "invention" stages as long as they felt there were plentiful personnel and technical resources available in school.

The regression analysis shows that teachers' beliefs are the best predictor of their computer-based instruction practices. A useful focus of future research would be the issue of how to create learning environments in which teachers might best learn the necessary computer-instruction skills and pedagogies, and might best develop more positive beliefs and attitudes with regard to computer-based learning. Perhaps training programs should allow especially "entry"-stage teachers to visit expert teachers' classes, in order to see how they use computer technology to interact in intensive and exciting ways with their students. Such "instruction" is, after all, the best way to change teachers' beliefs about and attitudes toward computers. School culture is another factor that affects teachers' beliefs: the implementation of computer-based instruction at school must be consistent with the existing beliefs and practices of school members, including teachers and administrators (Zhao, Pugh, Sheldon & Byers, 2002). How to change school culture to support classroom technology innovations is an important issue, and educational policy makers need to find encouraging ways to promote it.

From the results, it is strongly suggested that teachers can move to the later stages of computer-based instructional evolution if technical and personnel resources are available in school and teachers hold positive beliefs about using computers in the classroom. Therefore, if educational policy is to integrate computers into teaching and learning, school managers need to provide the necessary technical and personnel support for teachers' computer-based instruction, encourage teachers to share their experiences in innovative instruction, and shape teachers' development of beliefs about integrating computers into teaching and learning. More instructional examples and design principles are needed to help teachers create efficient instruction that promotes meaningful learning.

Acknowledgements

This research project was funded by the National Science Council of the Republic of China under contract nos. NSC 92-2511-S-003-039 and NSC 95-2524-S-003-012. The author gratefully acknowledges the collaboration of her colleagues: Tai-Yih Tso and Chang Yung-Ta.

References

Angeli, C., & Valanides, N. (2005). Preservice elementary teachers as information and communication technology designers: An instructional systems design model based on an expanded view of pedagogical content knowledge. Journal of Computer Assisted Learning, 21, 292-302.

Beaudoin, M. F. (1990). The instructor's changing role in distance education. The American Journal of Distance Education, 4, 21-29.

Becker, J. (2000a). Findings from the teaching, learning, and computing survey: Is Larry Cuban right? Educational Policy Analysis Archives, 8, retrieved October 15, 2007, from http://www.crito.uci.edu/tlc/findings/ccsso.pdf.

Becker, J. (2000b). Who's wired and who's not: Children's access to and use of computer technology. The Future of Children, 10 (2), 44-75.

Brophy, J., & Good, T. L. (1986). Teacher behavior and student achievement. In M. C. Wittrock (Ed.), Handbook of Research on Teaching, New York, NY: Macmillan.

Chiero, R. T. (1997). Teachers' perspectives on factors that affect computer use. Journal of Research on Computing in Education, 30 (2), 133-126.

Comber, C., Colley, A., Hargreaves, D. J., & Dorn, L. (1997). The effects of age, gender and computer experience upon computer attitude. Educational Research, 39 (2), 123-133.

Czerniak, C. M., & Lumpe, A. T. (1996). Relationship between teacher beliefs and science education reform. Journal of Science Teacher Education, 7 (4), 247-266.

Ercan, K., & Ozdemir, D. (2006). The relationship between educational ideologies and technology acceptance in preservice teachers. Educational Technology & Society, 9 (2), 152-165.

Fetters, M. K., & Vellom, P. (2001). Linking schools and universities in partnership for science teacher preparation. In Lavoie, D. R., & Roth, W. M. (Eds.), Models of science teacher preparation: Theory into practice, The Netherlands: Kluwer Academic Publishers, 97-88.

Gardner, D. G., Discenza, R., & Dukes, R. L. (1993). The measurement of computer attitudes: An empirical comparison of available scales. Journal of Educational Computing Research, 4, 487-507.

Hargrave, C. P., & Thompson, A. D. (2001). TEAMS: A science learning and teaching apprenticeship. In Lavoie, D. R., & Roth, W. M. (Eds.), Models of science teacher preparation: Theory into practice, The Netherlands: Kluwer Academic Publishers, 11-30.

Holcombe, M. C. (2000). Factors influencing teacher acceptance of the Internet as a teaching tool: A study of Texas schools receiving a TIF or a TIE grant, Unpublished dissertation, Baylor University, School of Education, USA.

Hsu, Y.-S., Cheng, Y.-J., & Chiou, G.-F. (2003). Internet uses in the school: A case study of the Internet adoption in a senior high school. Innovations in Education and Teaching International, 40 (4), 356-368.

Jennings, S. E., & Onwuegbuzie, A. J. (2001). Computer attitudes as a function of age, gender, math attitude, and developmental status. Journal of Educational Computing Research, 25 (4), 367-384.

Jonassen, D. H. (1996). Computers in the classroom: Mindtools for critical thinking, Englewood Cliffs, NJ: Prentice-Hall.

Kay, R. (1989). A practical and theoretical approach to assessing computer attitudes: The Computer Attitude Measure (CAM). Journal of Educational Computing Research, 3, 456-463.

Lavoie, D. R. (2001). New technologies and science teacher preparation. In Lavoie, D. R., & Roth, W. M. (Eds.), Models of science teacher preparation: Theory into practice, The Netherlands: Kluwer Academic, 163-176.

Loyd, B. H., & Gressard, C. (1984). Reliability and factorial validity of computer attitude scales. Educational and Psychological Measurement, 2, 501-505.

McKenzie, B. K., Mims, N. G., Davidson, T. J., Hurt, J., & Clay, M. N. (1996). The design and development of a distance education training model. Paper presented at the Society for Information Technology and Teacher Education annual conference, March 13-16, 1996, Phoenix, Arizona, USA.

Ministry of Education in Taiwan (1999). National standards of 1-9 science and technology curriculum, Taipei, Taiwan: Ministry of Education in Taiwan.

Mooij, T., & Smeets, E. (2001). Modelling and supporting ICT implementation in secondary schools. Computers & Education, 36, 265-281.

Moore, G. C., & Benbasat, I. (1991). Development of an instrument to measure the perceptions of adopting an information technology innovation. Information Systems Research, 2 (3), 192-222.

Niederhauser, D. S., & Stoddart, T. (2001). Teachers' instructional perspectives and use of educational software. Teaching and Teacher Education, 17, 15-31.

Office of Technology Assessment (OTA) (1995). Teachers and technology: Making the connection, Washington, DC: Government Printing Office.

Pelgrum, W. J. (2001). Obstacles to the integration of ICT in education: Results from a worldwide educational assessment. Computers & Education, 37, 163-178.

Ravitz, J., Becker, H., & Wong, Y. (2000). Constructivist-compatible beliefs and practices among U.S. teachers, Irvine, CA: Center for Research on Information Technology and Organizations.

Rogers, P. L. (2000). Barriers to adopting emerging technologies in education. Journal of Educational Computing Research, 22 (4), 455-472.

Russell, M., Bebell, D., O'Dwyer, L., & O'Connor, K. (2003). Examining teacher technology use: Implications for pre-service and in-service teacher preparation. Journal of Teacher Education, 54 (4), 297-310.

Ruthven, K., Hennessy, S., & Brindley, S. (2004). Teacher representations of the successful use of computer-based tools and resources in secondary-school English, mathematics and science. Teaching & Teacher Education, 20 (3), 259-275.

Sandholtz, J. H., Ringstaff, C., & Dwyer, D. C. (1996). The evolution of instruction in technology-rich classrooms. In J. H. Sandholtz, C. Ringstaff, & D. C. Dwyer (Eds.), Teaching with technology: Creating student-centered classrooms, New York and London: Teachers College Press, 33-54.

Sinko, M., & Lehtinen, E. (1999). The challenge of ICT in Finnish education, Jyväskylä, Finland: Atena.

Smeets, E., Mooij, T., Bamps, H., Bartolomé, A., Lowyck, J., Redmond, D., & Steffens, K. (1999). The impact of information and communication technology on the teacher, Nijmegen, The Netherlands: KU/ITS, retrieved October 15, 2007, from http://webdoc.ubn.kun.nl/anon/i/impaofina.pdf.

Teo, H. H., & Wei, K. K. (2001). Effective use of computer aided instruction in secondary schools: A causal model of institutional factors and teachers' role. Journal of Educational Computing Research, 25 (4), 285-315.

Thach, E. C., & Murphy, K. L. (1995). Competencies for distance education professionals. Educational Technology Research and Development, 43, 57-72.

van Braak, J. (2001). Individual characteristics influencing teachers' class use of computers. Journal of Educational Computing Research, 25 (2), 141-157.

Verduin, J. R., & Clark, T. A. (1991). Distance education: The foundations of effective practice, San Francisco, CA: Jossey-Bass.

Windschitl, M., & Sahl, K. (2002). Tracing teachers' use of technology in a laptop computer school: The interplay of teacher beliefs, social dynamics, and institutional culture. American Educational Research Journal, 39, 165-205.

Winn, W. (1993). Instructional design and situated learning: Paradox or partnership? Educational Technology, 33 (3), 16-21.

Woodrow, J. J. (1994). The development of computer-related attitude of secondary students. Journal of Research on Computing in Education, 7 (2), 165-187.

Yildirim, S. (2000). Effects of an educational computing course on pre-service and in-service teachers: A discussion and analysis of attitude and use. Journal of Research on Computing in Education, 4, 479-496.

Zhao, Y., & Frank, K. A. (2003). Factors affecting technology uses in schools: An ecological perspective. American Educational Research Journal, 40 (4), 807-840.

Zhao, Y., Pugh, K., Sheldon, S., & Byers, J. L. (2002). Conditions for classroom technology innovations. Teachers College Record, 104 (3), 482-515.


Deryakulu, D., & Olkun, S. (2007). Analysis of Computer Teachers' Online Discussion Forum Messages about their Occupational Problems. Educational Technology & Society, 10 (4), 131-142.

Analysis of Computer Teachers' Online Discussion Forum Messages about their Occupational Problems

Deniz Deryakulu
Department of Computer and Instructional Technologies Education, Ankara University, Turkey
Tel: +90 (312) 363 3350 Ext: 3203 // Fax: +90 (312) 363 6145 // deryakul@education.ankara.edu.tr

Sinan Olkun
Department of Elementary Education, Ankara University, Turkey
Tel: +90 (312) 363 3350 Ext: 5111 // Fax: +90 (312) 363 6145 // sinanolkun@gmail.com

ABSTRACT

This study, using the content analysis technique, examined the types of job-related problems that Turkish computer teachers experienced and the types of social support provided by reciprocal discussions in an online forum. Results indicated that role conflict, inadequate teacher induction policies, lack of required technological infrastructure and technical support, and the status of the computer subject in the school curriculum were the most frequently mentioned problems. In addition, 87.9% of the messages were identified as providing emotional support, while 3.1% of the messages were identified as providing instrumental support. It is concluded that the content analysis technique provides an invaluable tool for understanding the nature of communication and social interaction patterns among users in online environments. CMC in education should not only be considered a tool for content delivery and instructional interaction, but also a feedback mechanism, a platform for professional support, and an informal learning environment.

Keywords
Teacher stress, Social support, Online discussion forums, Computer-mediated communication

Introduction

Many studies have examined job stress, burnout, and job dissatisfaction among teachers (Abel & Sewell, 1999; Farber, 1984; Friedman, 1991; van Dick & Wagner, 2001). These studies demonstrated that teachers often experience a great deal of stress when there is an imbalance between the demands of the job environment and their response capability. Wolpin, Burke, and Greenglass (1991) found that negative work-setting characteristics resulted in greater work stressors, which in turn were associated with increased teacher burnout and thus with decreased job satisfaction. Other studies revealed that when these negative working conditions merge with poor staff communication and a lack of social support, teacher stress and burnout might increase (Black, 2003; Brissie, Hoover-Dempsey, & Bassler, 1988; Burke & Greenglass, 1993).

Although there is a huge body of research on teacher stress and burnout, few studies have specifically dealt with computer teachers (see Deryakulu, 2005, 2006). Most studies concerning teacher stress and burnout have used self-report as a source of data. There are extensive critiques of self-report measures as having significant shortcomings in assessing the sources, types, and levels of job stress and burnout (see Guglielmi & Tatrow, 1998). In addition, self-report questionnaires are susceptible to the negative effects of social desirability (see Evers, Brouwers, & Tomic, 2002). Therefore, we decided to use a different data source to examine the sources of job stress for computer teachers in order to identify potential barriers to effective and efficient computer education.

The present study aims at analyzing the content of messages posted by Turkish computer teachers in an online asynchronous discussion forum about their occupational problems. We believe that examining the content of messages in this specific online discussion forum helps portray the major stress-inducing problems of computer teachers, as well as the types of social support provided by reciprocal online discussions. Furthermore, examining data derived from an open, spontaneous online discussion forum (not initiated by an external researcher) could help us identify new factors which otherwise could not be obtained through self-report measures.

Background

In the following sections, we first provide an overview of the nature of computer education in Turkey and review the existing research findings concerning stress and burnout in computer teachers. Second, we briefly introduce the concept of social support. Lastly, we present the potential use of computer-mediated communication tools for providing online social support.

The Nature of Computer Education in the Turkish Educational System

In 1998, the Turkish Ministry of National Education (MNE) received a loan from the World Bank for the Basic Education Program (BEP). The primary aims of the BEP are to expand the scope of basic education and to improve the quality of education. The MNE also set such aims as ensuring that each student and teacher becomes computer literate, integrating ICT into the school curriculum, and establishing computer laboratories in schools (MEB, 2004). In this context, computing was added as an elective subject to the elementary school curriculum in 1998, at 1 or 2 hours per week for grades 4 through 8, and later to the academic high school curriculum in 2000 for grades 9 and 10. The primary aim of this course is to increase the number of computer-literate students in order to facilitate and accelerate the diffusion of ICT usage across the school curriculum. In 2005, the MNE allowed students to take computer subjects as electives from the first to the eighth grade. However, total teaching time is restricted to 1 hour per week.

Turkey still has difficulties in widening computer literacy among students and integrating ICT into the school curriculum. Many students still do not have access to computers in the teaching-learning processes of other subjects such as science and mathematics in public schools (Olkun, Altun, & Smith, 2005). Thus, the only opportunity for many students to have access to computers (especially in high-poverty areas) might be elective computer subjects. However, there are 41,091 public schools housing 12.7 million students in Turkey, while the number of computer teachers who graduated from an accredited computer teacher-training program is only 10% of the number of schools (MEB, 2004, 2005b). According to these statistics, the majority of schools with computers have no computer teacher specifically trained for teaching computing. The MNE has been trying to solve the computer teacher shortage by employing one computer teacher for two or more schools, employing computer coordinators as computer teachers, or allowing schools to contract with individuals who have a relatively appropriate background and skills. In short, computer teachers may come from different disciplines in the Turkish educational system. This causes other problems, especially dissatisfaction among accredited computer teachers in terms of employee rights and responsibilities. For example, computer engineers employed as computer teachers enjoy better payment and seniority advantages, while accredited computer teachers do not.

Recently, Deryakulu (2005) examined the levels of burnout in Turkish computer teachers using the Maslach Burnout Inventory-Turkish Version. Because there are no norms for this inventory, she used percentage values to show that the surveyed computer teachers displayed few symptoms of depersonalization, relatively moderate symptoms of emotional exhaustion, and reduced personal accomplishment. In that study, using an open-ended form, lack of technical support, lack of student interest, and large class sizes were found to be the foremost stress-inducing problems that the computer teachers experienced.

Deryakulu (2006) also examined the factors predicting burnout in Turkish computer teachers. She found two significant predictors of emotional exhaustion and depersonalization: the types of problems the teachers encountered while teaching computers, and a heavy teaching load. These findings suggest that the more the teachers suffered from job stress, the more they showed symptoms of burnout. As is well known, job stress and burnout may cause a decline in teachers' job performance and may also be detrimental to teachers' physical and psychological health. Therefore, we have to think about ways to reduce teachers' job stress in order to protect our teachers and students from its harmful effects. One effective way to reduce job stress is to provide social support to those who are stressed.

The Concept of Social Support

Social support has been identified as a resource provided by another person that enables individuals to cope with stress (Russell, Altmaier, & Van Velzen, 1987). There are many forms of social support. Beehr and Glazer (2001) have mentioned two main social support categories: (a) structural and (b) functional. According to Cohen and Wills (1985), structural support means that a relationship with one or more others exists. Structural support does not indicate, however, what other people do or what functions they perform for the focal person. Functional support, on the other hand, implies that the supportive people are performing some functions for the focal person, and this kind of support must perform at least one of two functions: (a) emotional or (b) instrumental (Beehr & Glazer, 2001). Emotional support means that the supportive people make the focal person feel emotionally better in a number of ways, while instrumental support is the kind of help or assistance from other people that tangibly helps the focal person to solve a problem or get a task done (Beehr & Glazer, 2001). Communicating with the focal person and providing him/her with feedback, praise, or approval are examples of emotional support. Instrumental support, on the other hand, includes such support as doing physical or mental labor or providing informational or financial resources that make it easier for the focal person to solve his/her problems or complete a stress-inducing task (Beehr & Glazer, 2001).

In school settings, support from a teacher's colleagues has been identified as a preventive and remedial mechanism for job stress and burnout, as well as an aid in coping with job demands (see Brissie et al., 1988). Therefore, widening a teacher's social interaction network may increase the probability of available social support. Computer-mediated communication (CMC) tools can contribute to widening the social interaction network and can function as an alternative channel for giving and receiving social support.

Computer-Mediated Communication and Online Social Support

Computer-mediated communication refers to both synchronous and asynchronous modes of communication between individuals and among groups via networked computers (Naidu & Järvelä, 2006). Electronic mail, listservs, newsgroups, and online discussion forums are just a few examples of CMC tools. Among these, online discussion forums provide an open electronic environment that allows a member (user) to post a message on a specific topic for others to read. Other members can respond to this message asynchronously. This usually leads to the emergence of threads, where a number of participants provide responses and counter-responses to an original posting, thus forming a dialogue among contributors. Participation in an open online discussion forum is voluntary. Therefore, participants can be assumed to be self-motivated, purposeful, and willing to express their experiences, emotions, and thoughts, and to listen to others' concerns related to the issue(s) being discussed. In this process, participants must engage in a "conversation" to express themselves in order to obtain and provide social support. Burleson and Goldsmith (1998, p. 260) described conversation as "a medium in which a distressed person can express, elaborate, and clarify relevant thoughts and feelings." Obviously, this kind of personal participation requires friendly, safe, non-threatening, and comfortable environments. Caplan and Turner (2007) suggest that establishing such an environment may be easier and more effective if the conversation is computer-mediated. According to Walther and Parks (2002), the Internet is a successful medium for social support. Recent studies indicated that both emotional and instrumental support can be found in online communication (Eastin & LaRose, 2005; Weisskirch & Milburn, 2003). In addition, there has been strong evidence that frequent use of online communication increases perceived social support, which in turn reduces users' stress levels (Wright, 2000). Therefore, we tried to find answers to the following questions:
1. What types of job-related problems are reflected in online discussion forums by computer teachers?
2. What types of social support do computer teachers provide to one another within an online discussion forum?

Method

Sample

This study examined the content of messages posted by Turkish computer teachers to a specific online asynchronous discussion forum entitled "Unfairness that Computer Teachers Encounter" (http://forum.memurlar.net/topic.aspx?id=58164). This forum was voluntarily opened by computer teachers on September 27, 2005, and turned out to be a very active threaded-discussion forum for these civil servants. Its main aims were to provide a common ground for computer teachers to share and discuss their occupational problems with their colleagues and to let the policy and decision makers of the Ministry of National Education know about these problems. Participation in the forum was also voluntary; in other words, we made no effort to get teachers involved in the discussions. There were 128 anonymous participants. We collected all forum postings over a period of 12 weeks. A total of 543 messages were analyzed.


Procedure

Content analysis procedures were applied to the messages. Content analysis is "a research technique for the objective, systematic, quantitative description of the manifest content of communication" (Berelson, 1952, p. 18). The quantitative description process includes segmenting communication content into units, assigning each unit to a category, and providing tallies for each category (Rourke & Anderson, 2004). Hara, Bonk, and Angeli (2000) note that there is no standard scheme for analyzing computer-mediated communication, but they suggest gathering quantitative information about the number and type of messages and qualitative information about the content of messages. We used the individual text-based message as the unit of analysis (segment), which has significant advantages such as being objectively identifiable for raters (see Rourke, Anderson, Garrison, & Archer, 2001). Since the forum message is a fixed unit, delimited by horizontal lines marking where each message starts and ends in the communication transcript, a separate segmentation procedure was not applied. Thus, a segmentation reliability coefficient was not calculated.

The variables investigated in this study were the types of job-related problems that the computer teachers mentioned in their messages and the types of social support provided by teachers to one another via reciprocal discussions. When identifying the computer teachers' job-related problems, we used an inductive approach, since we did not have proper pre-determined problem categories; that is, the coding categories were derived from the data set by the authors. When classifying the types of social support, on the other hand, we used a deductive approach; in other words, a well-founded pre-existing coding schema (i.e., emotional and instrumental support; see Beehr and Glazer, 2001) was used.

After multiple readings of the messages, the authors derived tentative problem categories mentioned by the computer teachers. The first author trained a research assistant for coding. They independently assigned each unit to a category and provided tallies for each category in order to quantify the data. During the coding, however, the initial tentative coding categories were modified in accordance with the categories emerging from the data. As recommended by De Wever, Schellens, Valcke, and Van Keer (2006), we used more than one method for calculating inter-rater reliability coefficients in order to present more evidence about the reliability of the classification. These coefficients are reported in Table 1. Differences in classification between the two raters were resolved by discussion.

Results

Problem Analysis

Table 1. The inter-rater reliability coefficients

Method            Problem classification   Support classification
Cohen's kappa     0.98                     0.92
Kendall's tau-b   0.98                     0.93
Spearman's rho    0.98                     0.94
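All three coefficients in Table 1 can be computed directly from the two raters' category assignments; a minimal sketch with scikit-learn and SciPy follows (the rating vectors here are hypothetical illustrations, not the study's data).

```python
# Sketch: three inter-rater agreement coefficients from two raters'
# category codes for the same set of messages.
from sklearn.metrics import cohen_kappa_score
from scipy.stats import kendalltau, spearmanr

# Hypothetical category codes assigned by each rater
rater1 = [1, 2, 2, 3, 1, 4, 2, 3]
rater2 = [1, 2, 2, 3, 1, 4, 1, 3]

kappa = cohen_kappa_score(rater1, rater2)
tau, _ = kendalltau(rater1, rater2)   # tau-b is the default variant
rho, _ = spearmanr(rater1, rater2)
print(f"kappa={kappa:.2f}, tau-b={tau:.2f}, rho={rho:.2f}")
```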

The content analysis revealed that almost half of the messages (49.5%; f = 269) contained expressions of problems. Since some of these messages comprised more than one type of problem, a total of 375 problem expressions were identified. These problems were grouped under 12 categories, as depicted in Table 2.

Table 2. The types of job-related problems of Turkish computer teachers

Problem Categories                                                         f     %
1. Role Conflict                                                          96    25.6
2. Inadequate Teacher Induction Policies                                  95    25.3
3. Lack of Required Technological Infrastructure and Technical Support    77    20.5
4. The Status of Computer Subject in School Curriculum                    42    11.2
5. Lack of Appreciation and Positive Feedback from Colleagues             19     5.1
6. Unsupportive Administrators                                             9     2.4
7. Rapidly Changing Nature of Content Knowledge in Computer Education      8     2.1
8. Lack of Cohesive Computer Curriculum                                    8     2.1
9. Insufficiency of Pre-Service Teacher Training Programs                  7     1.9
10. Large Class Sizes                                                      7     1.9
11. Indifferent Students                                                   4     1.1
12. Inadequate Supervision and Inspection                                  3     0.8
Total                                                                    375   100

As can be seen in Table 2, many of the problems stated by the Turkish computer teachers are mainly related to educational policy and organizational factors. The three most common job-related problems the Turkish computer teachers faced were role conflict, inadequate teacher induction policies, and lack of required technological infrastructure and technical support. As stated earlier, these problem categories were derived from the messages posted by the computer teachers. For example, one computer teacher wrote the following about his/her experience of "role conflict":

Now I terribly regret that I became a computer teacher. It has only been a month since I started to work, but it seems like ten years. One thing that really upsets me and makes me angry is to be looked at as "a handy-man", "a repairman". Besides the computers that need to be repaired at school, I am also frequently asked to fix the computers at the teachers' boarding house. Honestly, I really want to leave my profession. We are teachers, for goodness sake, not repairmen... (Message 206)

As the excerpt shows, when teachers are expected to meet a demand that could be considered unrelated to their actual job description, stress is often the consequence. These kinds of demands often cause role conflict in teachers. Role conflict is defined as the simultaneous occurrence of two or more sets of inconsistent, expected role behaviors, and it has been found to be a major source of teacher stress in many different studies (see Cooper & Marshall, 1978; Kyriacou, 2001; Schwab & Iwanicki, 1982; Schwab, Jackson, & Schuler, 1986). Due to inadequate technical facilities and support services in most public schools, the computer teachers are expected to repair broken-down computers or to clean computer labs in addition to their routine teaching responsibilities. Therefore, computer teachers' roles and responsibilities should be described more clearly.

Another computer teacher wrote the following complaint about "inadequate teacher induction policies":

... Those who do not have relevant education are employed as computer teachers. As a result, those who graduated from universities after years of studying sometimes rank lower than computer coordinators. Who are these computer coordinators? They are teachers from other disciplines. Following short-term in-service education, they are employed as computer teachers in the case of a lack of accredited computer teachers. But now it seems there are plenty of those people around... I would now like to ask our Minister of National Education why we, the computer teachers, are put in a position to compete with computer coordinators who in reality lack real computer education. And why do you employ them at well-equipped schools, and leave repair and fixing jobs to the properly-educated computer teachers? (Message 192)

As mentioned before, because of the shortage of accredited computer teachers, persons with different backgrounds such as engineers, programmers, classroom teachers, or statisticians can be employed as computer teachers in the Turkish school system. Among them, only the accredited computer teachers have proper, higher-level education related to teaching computing. The literature suggests that a higher level of education usually leads to higher career aspirations (Friedman, 1991). Owing to their higher career aspirations, accredited computer teachers might expect more respect for their expertise, and consequently might be relatively more sensitive to, or intolerant of, the employment of out-of-field persons as computer teachers at public schools. Instead of employing out-of-field persons as computer teachers, existing computer teachers should be utilized more effectively; namely, computer teachers should not be employed at schools which have no computers.

A computer teacher wrote the following about the “lack of required technological infrastructure and technical support” preventing effective and efficient computer education in schools:

...in my computer lab, seven of the computers have Windows 3.1 and the other six have Windows 95. Although it is the year 2005, we use the first versions of the Windows operating system. …The keyboards and mice are out of order. I have to explain MS Word, MS Excel and other subjects by writing on the blackboard. In some schools, some of our colleagues work with 486 DX computers. The system configurations of the computers are so out of date that it is impossible to update to new operating systems and software. Moreover, in some schools there are computer teachers but no computers. (Message 319)

Similar complaints could be found in the following quote by a computer teacher about “the status of the computer subject in the school curriculum” and the “lack of required technological infrastructure and technical support” in schools:

Look friends, let’s raise our voices against the move toward lowering computer courses to one hour per week. I give computer courses at two different schools, one of which does not have a computer lab. I am trying to teach computers to students, most of whom have never set eyes on one. In the other school I work for, there are really outdated machines that only come to life 15 minutes after switching on. So the result is I spend half of the one-hour class starting the computers and restoring classroom discipline. How much can the students benefit from this? Can anybody have a guess? (Message 470)

As stated earlier, the status of the computer subject in the Turkish elementary school curriculum is elective, and the total teaching time is merely one hour per week. Accordingly, the computer teachers frequently considered the status of this subject to be a restrictive factor decreasing the effectiveness and efficiency of their teaching practices. They consistently expressed that, because the computer subject was one hour per week, they were not able to provide enough hands-on practice for each student. Furthermore, because the computer subject was elective, the majority of students considered this subject to be unimportant. Similarly, Hendley, Stables, and Stables (1996) suggested that subjects which occupy little time in the curriculum may be regarded as of low status by students. Therefore, the weekly course hours should be increased to provide enough hands-on practice for each student. Increasing the total teaching time may also help to improve students’ perceptions of the significance of this subject.

Furthermore, without well-equipped computer labs and technical support it is impossible for computer teachers to conduct their classes properly. Studies suggest that poor working conditions, including a lack of educational supplies and inadequate resources for teaching, may lead to job-stress in teachers (Abel & Sewell, 1999; Kyriacou, 2001). Moreover, when the lack of required technological infrastructure and technical support merges with the limited teaching time, teacher stress and ineffective computer education seem unavoidable. For effective and efficient computer education in schools, technological infrastructure, hardware and software investments should not be one-shot investments; instead, continuous technological renovation and technical support services should be provided.

The following excerpt is a typical complaint from a computer teacher about the “lack of appreciation and positive feedback from his/her colleagues”:

It is my third year in teaching. I willingly chose to become a computer teacher, full of ideas in the beginning. However, all my enthusiasm went astray after the first year... For one thing, you are extremely overloaded with work. I have given computer courses to the teachers at every seminar session in my school. While the computer coordinators are paid a hell of a lot of money for doing such courses, I did not even get a simple “thank you.” I designed the school’s web site… which I believe could not be done for less than $500 by somebody from the outside… I was criticized for not updating it regularly. I don’t want to be misunderstood on money matters, but I certainly think that one needs to be at least appreciated for the work one does. As a result, now my feet drag when I go to my classes. Does a working person look for his/her retirement in the third year? Well, I do... I do love teaching, but not under these conditions. It is high time that our complaints were heard and acknowledged by the authorities. Reading similar complaints from our colleagues, we feel we are not alone, but these should not fall on deaf ears. (Message 118)

Teachers usually need to receive positive feedback, praise or approval from their colleagues, and support from administrators, to cope with stressful job demands. Otherwise, they may feel that their work is not important enough to justify someone else’s attention and that they are alone in trying to do their work (see Pines, 2002). Lack of feedback and support have been identified as causing additional stress in teachers (see Brown & Nagel, 2004; Kyriacou, 2001). However, job-stress can be reduced by positive communication and supportive relationships among colleagues. In this context, CMC seems to open new doors for teachers who need to set up supportive communication networks with their colleagues.

Furthermore, unsupportive administrators, the rapidly changing nature of content knowledge in the field of computer education, the lack of a cohesive computer curriculum, poor university preparation, large class sizes, indifferent students, and inadequate supervision and inspection were relatively less frequently mentioned problems of the Turkish computer teachers. Due to the dynamic nature of their field of study, computer teachers need to continuously update their content knowledge. According to them, the main barrier was insufficient in-service training opportunities. Therefore, computer teachers should be provided with rich and continuous in-service training. The teachers also expressed that, in addition to the lack of well-equipped computer labs, the insufficiency of pre-service teacher training programs, out-of-date computer curricula and large class sizes increased students’ indifference and inattentiveness. These problems also negatively affected the teachers’ effectiveness in computer classes. Thus, pre-service computer teacher training programs should be reformed, and the elementary and secondary schools’ computer curricula should be revised frequently.

One computer teacher noted the following about his/her experience of “inadequate supervision and inspection”:

Last year a supervisor visited one of my classes and, just because one of the computers was not loaded with MS Word (the students had removed it), he graded me the lowest. But the fact is that now three of the computers in the computer lab are out of order, and the school administration refuses to have them repaired, claiming they don’t have money to spare for it. Now I ask you, who is going to be responsible in this case? Is it my fault? (Message 286)

According to contemporary supervision approaches, supervisors should act as guides for teachers. However, supervisors’ lack of technological knowledge and skills may hinder their ability to change their supervision practices. A possible solution could be to train supervisors both in computer technology and in the proper criteria for evaluating the effectiveness of computer education.

Social Support Analysis

The content of the computer teachers’ online discussion forum messages was also analyzed in terms of the types of social support provided. These categories are reported in Table 3. The majority of the messages (87.9%) were identified as providing “emotional” support. Studies have revealed that emotional concerns and support are widespread in CMC. In a study by Anderson and Lee (1995), it was found that beginning teachers offered emotional and moral support (personal concerns) rather than curricular and instructional advice (technical concerns) in their e-mail messages. Similarly, Nicholson and Bond (2003) found that pre-service teachers used the electronic discussion board to express many thoughts, experiences, and emotions. Over time it became a place of professional support and community.

Table 3. The types of social support provided by messages

Social Support Categories        f    %
Emotional Support
  Feedback                       410  75.5
  Approval/Praise                51   9.4
  Humor                          16   3.0
  Subtotal                       477  87.9
Instrumental Support
  Informational                  17   3.1
No Social Support                49   9.0
Total                            543  100

In the present study, the most frequent subtype of emotional support was feedback (75.5%) (e.g., teachers brought relevant personal experiences or thoughts into the discussion in response to another message). Sharing personal experiences and thoughts has been considered useful for establishing and maintaining “group cohesion,” a key feature of supportive online groups. Babinski, Jones, and DeWert (2001) showed that teachers were prone to express their immediate personal experiences and ask for advice in online discussion forums. The importance of knowing peers’ rich resources of practical knowledge in the professional development process is often emphasized. When professionals search for similarities from across the profession, it can “yield a fresh exchange of ideas, practices, and solutions to common problems” (Cervero, 1988, p.15, as cited in Anderson & Kanuka, 1997). Besides, Beehr and Glazer (2001) suggest that by conversing with others and learning about their experiences, people might learn to cope with their own stress factors. Therefore, the examined online discussion forum can be described as a very rich professional platform on which teachers intensively exchanged their personal professional experiences and thoughts in order to draw attention to, and find solution alternatives for, their job-related problems.

Other subtypes of emotional support were approval/praise (9.4%) (e.g., teachers expressed their emotional appreciation or praise in response to another message) and humor (3%) (e.g., teachers told a joke or drew attention to an irony related to the topic being discussed). On the other hand, merely 3.1% of the messages were identified as providing “instrumental” (informational) support (e.g., teachers provided specific information to colleagues who asked for it), while 9% of the messages did not include any type of social support.

In an ironic response to another teacher’s message expressing resentment against lowering the total teaching time of computer courses in the elementary schools, one computer teacher wrote:

…I think one hour for computer courses is enough... Yes, that’s right... You have heard it all right. What I am simply saying is that it takes at least a half hour to start the computer in our lab and another half hour to shut it down... (Message 406)

Humor has been considered a good coping strategy (Austin, Shah, & Muncer, 2005). Coping, however, refers to the stressed person’s own behavior, actions or intentions (Beehr & Glazer, 2001). When humor comes from other people, it can function as emotional support that might make stressed people feel emotionally better. Furthermore, humor is one of the verbal immediacy behaviors that can lessen the psychological distance among users in an online discussion forum (Swan, 2002). The computer teachers used humor occasionally, yet effectively, to make their colleagues feel emotionally better and to air their problems.

Because the computer teachers’ job-related problems mostly resulted from educational policies and organizational factors rather than from the teachers’ personal deficiencies, both individual information seeking and informational support were extremely rare. However, when a teacher asked for a specific type of information, his/her colleagues immediately provided it. We observed that the teachers mainly asked their veteran peers for information about their rights and responsibilities in schools and about potential solution alternatives for technological problems in computer labs. These kinds of information exchanges among teachers are instances of informational support. Here is an example of a reciprocal correspondence concerning this kind of information exchange:

...The students delete the program files on the computers in order to avoid class work and instead engage in games. Could anybody help me to put an end to this? (Message 229)

...Currently I use “Deep Freeze.” I highly recommend it. Also I think “Ghost”, “NetOp School” or “NetSupport” could be useful... Here’s how you can set up and use the above-mentioned software... (Message 248)

Lastly, one computer teacher wrote the following to express his/her approval/praise of the content of the ongoing discussions:

...Thanks ever so much for the information you’ve passed on. I rather think the deletion of program files by the students is as important as the other problems cited in the forum. Discussing in detail and explaining which software could be used to put an end to such a problem has certainly been useful in solving at least one of the problems we face. (Message 237)

Providing a person with approval or praise for a particular behavior may boost his/her self-confidence, self-esteem, and enthusiasm, increasing the probability that the behavior will be performed again. Approval and praise are also described as useful strategies to foster social interaction among participants in online settings (see Hara et al., 2000). By providing such online emotional support, the teachers created positive communication and supportive relations among colleagues, and helped them to feel emotionally better.

Discussion and Conclusion

This study examined the content of messages posted by Turkish computer teachers in an online discussion forum about their job-related problems. Findings suggest that computer teachers face a number of problems mostly resulting from educational policies and organizational factors, such as role conflict, inadequate teacher induction policies, and lack of technological infrastructure and technical support. The frequency and variety of these problems indicate the need for comprehensive national technology planning. According to Anderson (1999), an important step in technology planning is to face the honest reality of a condition and then to work together to build a strategy for success. Therefore, determining computer teachers’ job-related problems could provide the necessary data for the first step of technology planning. In this context, CMC can be an excellent channel for information exchange. Indeed, the value of CMC lies in its ability to facilitate professional collaboration between teachers and to encourage critical reflection on educational policy and practice (Hawkes & Romiszowski, 2001). Moreover, because of its anonymity, teachers can be more open and honest in CMC environments, especially in online forums. Therefore, educational policy and decision makers might benefit from CMC as a feedback mechanism to analyze the context, determine the needs and specify the goals for successful implementations. System developers should consider designing an online system that provides rapid information flow from school teachers to central and/or local policy centers.

Another noteworthy finding of this study is that the computer teachers mostly shared their personal-professional experiences and thoughts via the online discussions. These personal experiences and thoughts, and emotional support, were very common in the reciprocal online discussions. It can therefore be concluded that the online social interaction among the computer teachers was quite reflective. Reflectivity is considered to be at the heart of professional development. Reflection is a continual process that engages teachers in framing and re-framing problems while designing and evaluating solutions (Hawkes & Romiszowski, 2001). Reflective teachers tend to examine and re-examine their personal-professional experiences to improve their teaching practices and working conditions. The present study confirmed that the Turkish computer teachers used the online discussion forum as a social-professional platform for sharing their job-related problems, suggesting potential solutions, and providing and/or receiving social support. Professional development for teachers comprises formal and informal processes of knowledge and skill building (Hawkes & Romiszowski, 2001). In this context, CMC tools have the potential to become rich, flexible, formal or informal, personal learning environments (Attwell, 2006; Downes, 2006). For example, case libraries, online libraries, video-cases, online technical services, special interest groups, access to written regulations, social support groups, and open-access curriculum materials are applications that can be made available to teachers in CMC environments.

Finally, the content analysis technique provides an invaluable tool for understanding the nature of communication and social interaction patterns among users in online environments. It is hoped that the findings of this study will stimulate educational researchers to pay attention to alternative data sources, such as transcripts of teachers’ discussion forum messages, to better understand what types of problems teachers have and what types of support they need. Further studies are needed to uncover several points, including what the essential roles of computer teachers are, and to what extent they feel supported or relieved as a result of online discussions. Lastly, the variables that can affect the functionality and viability of online groups, such as group size, group composition, and the degree of user participation, are issues needing further exploration.

References

Abel, M. H., & Sewell, J. (1999). Stress and burnout in rural and urban secondary school teachers. The Journal of Educational Research, 92 (5), 287-293.

Anderson, L. S. (1999). Technology planning: It's more than computers, retrieved October 15, 2007, from http://www.nctp.com/articles/tpmore.pdf.

Anderson, J., & Lee, A. (1995). Literacy teachers learning a new literacy: A study of the use of e-mail in a reading instruction class. Reading Research and Instruction, 34, 222-238.

Anderson, T., & Kanuka, H. (1997). On-line forums: New platforms for professional development and group collaboration, ERIC Document Reproduction Service No. ED418693.

Attwell, G. (2006). Personal learning environments, retrieved October 15, 2007, from http://www.knownet.com/writing/weblogs/Graham_Attwell/entries/6521819364.

Austin, V., Shah, S., & Muncer, S. (2005). Teacher stress and coping strategies used to reduce stress. Occupational Therapy International, 12 (2), 63-80.

Babinski, L. M., Jones, B. D., & DeWert, M. H. (2001). The roles of facilitators and peers in an online support community for first-year teachers. Journal of Educational and Psychological Consultation, 12 (2), 151-169.

Beehr, T. A., & Glazer, S. (2001). A cultural perspective of social support in relation to occupational stress. In P. Perrewé & D. C. Ganster (Eds.), Research in occupational stress and well-being, Greenwich, CO: JAI Press, 97-142.

Berelson, B. (1952). Content analysis in communication research, Glencoe, IL: Free Press.

Black, S. (2003). Stressed out in the classroom. American School Board Journal, 190 (10), 36-38.

Brissie, J. S., Hoover-Dempsey, K. V., & Bassler, O. C. (1988). Individual, situational contributors to teacher burnout. Journal of Educational Research, 82 (2), 106-112.

Brown, S., & Nagel, L. (2004). Preparing future teachers to respond to stress: Sources and solutions. Action in Teacher Education, 26 (1), 34-42.

Burke, R. J., & Greenglass, E. R. (1993). Work stress, role conflict, social support and psychological burnout among teachers. Psychological Reports, 73, 371-380.

Burleson, B. R., & Goldsmith, D. J. (1998). How the comforting process works: Alleviating emotional distress through conversationally induced reappraisal. In P. A. Anderson & L. K. Guerrero (Eds.), Handbook of communication and emotion: Theory, research, application, and contexts, San Diego, CA: Academic Press, 245-280.

Caplan, S. E., & Turner, J. S. (2007). Bringing theory to research on computer-mediated comforting communication. Computers in Human Behavior, 23 (2), 985-998.

Cervero, R. (1988). Effective continuing education for professionals, San Francisco, CA: Jossey-Bass.

Cohen, S., & Wills, T. A. (1985). Stress, social support, and the buffering hypothesis. Psychological Bulletin, 98, 310-357.

Cooper, C. L., & Marshall, J. (1978). Understanding executive stress, London: Macmillan.

Deryakulu, D. (2005). Bilgisayar öğretmenlerinin tükenmişlik düzeylerinin incelenmesi [An examination of computer teachers' burnout levels]. Eğitim Araştırmaları, 19, 35-53.

Deryakulu, D. (2006). Burnout in Turkish computer teachers: Problems and predictors. International Journal of Educational Reform, 15 (3), 370-385.

De Wever, B., Schellens, T., Valcke, M., & Van Keer, H. (2006). Content analysis schemes to analyze transcripts of online asynchronous discussion groups: A review. Computers & Education, 46, 6-28.

Downes, S. (2006). Learning networks and connective knowledge, retrieved October 15, 2007, from http://it.coe.uga.edu/itforum/paper92/paper92.html.

Eastin, M. S., & LaRose, R. (2005). Alt.support: Modeling social support online. Computers in Human Behavior, 21, 977-992.

Evers, W. J. G., Brouwers, A., & Tomic, W. (2002). Burnout and self-efficacy: A study on teachers' beliefs when implementing an innovative educational system in the Netherlands. British Journal of Educational Psychology, 72, 227-243.

Farber, B. A. (1984). Stress and burnout in suburban teachers. The Journal of Educational Research, 77, 325-331.

Friedman, I. A. (1991). High-and-low-burnout schools: School culture aspects of teacher burnout. Journal of Educational Research, 84 (6), 325-333.

Guglielmi, R. S., & Tatrow, K. (1998). Occupational stress, burnout, and health in teachers: A methodological and theoretical analysis. Review of Educational Research, 68, 61-99.

Hara, N., Bonk, C. J., & Angeli, C. (2000). Content analysis of online discussion in an applied educational psychology course. Instructional Science, 28, 115-152.

Hawkes, M., & Romiszowski, A. (2001). Examining the reflective outcomes of asynchronous computer-mediated communication on inservice teacher development. Journal of Technology and Teacher Education, 9 (2), 285-308.

Hendley, D., Stables, S., & Stables, A. (1996). Pupils' subject preferences at key stage 3 in South Wales. Educational Studies, 22, 177-186.

Kyriacou, C. (2001). Teacher stress: Directions for future research. Educational Review, 53 (1), 27-35.

MEB. (2004). Temel eğitim ikinci faz başlangıç semineri [Basic education second phase start-up seminar], Ankara: Imaj.

MEB. (2005a). Bilgisayarlı eğitime destek kampanyası başladı [The support campaign for computer-assisted education has begun], retrieved October 15, 2007, from http://www.meb.gov.tr/.

MEB. (2005b). Milli eğitim istatistikleri 2004-2005 [National education statistics 2004-2005], Ankara: Devlet Kitapları Basımevi.

Naidu, S., & Järvelä, S. (2006). Analyzing CMC content for what? Computers & Education, 46, 96-103.

Nicholson, S. A., & Bond, N. (2003). Collaborative reflection and professional community building: An analysis of preservice teachers' use of an electronic discussion board. Journal of Technology and Teacher Education, 11 (2), 259-279.

Olkun, S., Altun, A., & Smith, G. (2005). Computers and 2D geometric learning of Turkish fourth and fifth graders. British Journal of Educational Technology, 36 (2), 317-326.

Pines, A. M. (2002). Teacher burnout: A psychodynamic existential perspective. Teachers and Teaching: Theory and Practice, 8 (2), 121-140.

Rourke, L., & Anderson, T. (2004). Validity in quantitative content analysis. Educational Technology Research & Development, 52 (1), 5-18.

Rourke, L., Anderson, T., Garrison, D. R., & Archer, W. (2001). Methodological issues in the content analysis of computer conference transcripts. International Journal of Artificial Intelligence in Education, 12, 8-22.

Russell, D. W., Altmaier, E., & Van Velzen, D. (1987). Job-related stress, social support, and burnout among classroom teachers. Journal of Applied Psychology, 72 (2), 269-274.

Schwab, R. L., & Iwanicki, E. F. (1982). Perceived role conflict, role ambiguity, and teacher burnout. Educational Administration Quarterly, 18, 60-74.

Schwab, R. L., Jackson, S. E., & Schuler, R. S. (1986). Educator burnout: Sources and consequences. Educational Research Quarterly, 10 (3), 14-30.

Swan, K. (2002). Building learning communities in online courses: The importance of interaction. Education, Communication and Information, 2, 23-49.

van Dick, R., & Wagner, U. (2001). Stress and strain in teaching: A structural equation approach. British Journal of Educational Psychology, 71, 243-259.

Walther, J. B., & Parks, M. R. (2002). Cues filtered out, cues filtered in: Computer-mediated communication and relationships. In M. L. Knapp, J. A. Daly, & G. R. Miller (Eds.), The handbook of interpersonal communication (3rd Ed.), Thousand Oaks, CA: Sage, 529-563.

Weisskirch, R. S., & Milburn, S. S. (2003). Virtual discussion: Understanding college students' electronic bulletin board use. Internet and Higher Education, 6, 215-225.

Wolpin, J., Burke, R. J., & Greenglass, E. R. (1991). Is job satisfaction an antecedent or a consequence of psychological burnout? Human Relations, 44 (2), 193-209.

Wright, K. (2000). Computer-mediated social support, older adults and coping. Journal of Communication, 50, 100-119.



Johnson, R., Kemp, E., Kemp, R., & Blakey, P. (2007). The learning computer: Low bandwidth tool that bridges digital divide. Educational Technology & Society, 10 (4), 143-155.

The learning computer: low bandwidth tool that bridges digital divide

Russell Johnson, Elizabeth Kemp, Ray Kemp and Peter Blakey
Massey University, Palmerston North, New Zealand // R.S.Johnson@massey.ac.nz // E.Kemp@massey.ac.nz // R.Kemp@massey.ac.nz // P.Blakey@massey.ac.nz

ABSTRACT
This article reports on a project that explores strategies for narrowing the digital divide by providing a practicable e-learning option for the millions living outside the ambit of high-performance computing and communication technology. The concept of a learning computer is introduced: a low bandwidth tool that provides a simplified, specialised e-learning environment which works with or without an internet connection. This concept is contrasted with the Learning Management System model widely adopted by universities, in which teaching material is accessed as web pages from a central repository. The development of an initial prototype and its field testing under realistic conditions are reviewed, and plans for future work are outlined.

Keywords
Distance learning, Digital divide, Low bandwidth, Accessibility, Ubiquitous computing

Motivation for the project

Over the past decade, mechanisation has had a major impact upon distance learning, with computer-based courses delivered over the World Wide Web increasingly replacing traditional correspondence courses. As rapid advances in computer and communication technology made real-time video-conferencing and virtual classrooms via the PC possible, many began to question the future of “bricks-and-mortar” learning institutions, promoting e-learning as the future of education in an internet-linked global village.

Despite this potential, it is widely acknowledged that the boom in computing and communication resources is actually deepening the digital divide between people who have access to and know how to use technology and those who do not (Bork, 2001; Bindé, 2005; Marine & Blanchard, 2004). This divide exists between developed and developing countries, but also between urban and rural regions within every country. As one communication studies researcher observed as recently as 2002, online information services such as e-learning were still focussed upon a few hundred “wired cities”, while ninety-seven percent of the world’s population had no Internet access and almost two-thirds of households had no telephone, let alone the broadband delivery and multimedia computer required for virtual classrooms (Hope, 2002).

The learning computer project began as a search for ways in which information technology could be applied to distance learning so as to reach beyond the “wired cities”, to narrow this digital divide in education, and to enhance all distance students’ learning experiences. In harmony with distance education’s goal of providing learning opportunities to those unable to attend conventional learning institutions by reason of geographic location, job, disability or age, we were seeking a path towards anybody, anywhere, anytime e-learning. Special factors that had to be considered included poor internet service, older computers, and isolated users struggling to complete computing tasks unaided.

Anecdotal evidence and personal experience suggested that the commercially available web-based Learning Management Systems (LMS) were too reliant on higher-end technologies, too complex for a relative novice to negotiate unaided, and too pedagogically limited to meet these requirements. Using LMS’s to replace correspondence courses may actually perpetuate and even widen educational inequalities by creating a two-tier system based on access to, and ability to use, the technology.

This article reports the results of the first three stages of our project. First, we investigated the problem: from a review of prior research we identified some key criteria for designing an effective computer-supported distance learning system. Second, we conceptualised an e-learning system that met these criteria. Third, we tested this conceptualisation through an initial prototype, which we evaluated with users. After reporting on these stages, the article draws some initial conclusions and identifies further steps in evaluating our approach.



Investigating the problem

An extensive review of conference papers, journal articles, books, web sites, industry publications, news reports and software was conducted, highlighting the rich research record in this field. The main results of our review have been published previously (Johnson et al., 2002). Here we briefly summarise the issues most relevant to our conclusions.

“Distance learning” covers a broad range of scenarios with very different requirements – from specialised in-house job training, through radio schools for children in isolated rural areas, to open university adult education courses. For this project, it was necessary to focus upon a specific sector: the computer-based delivery of university-level home study programmes for students unable to attend normal lectures.

Computer systems, through their ability to rapidly store, manipulate and communicate multimedia information, have the potential to provide multi-dimensional distance learning environments which incorporate passive (teacher-centred) or active (student-centred) learning styles, group or individual work, interaction, and simulation. However, such environments have not been fully realised at the university level. Moreover, e-learning systems often assume levels of computer literacy and access to reliable, high-speed technology more in line with the resources of public or private institutions than with students studying at home or in developing countries.

All the systems in wide use at the university level were built upon the LMS model. This is a “one-size-fits-all” approach in which teaching material is stored in a central repository and delivered page by page over the Internet to a web browser on the learner’s computer. The learner must establish a live connection to the university server and then wait until the next page is downloaded before he/she can proceed to study.

The LMS’s originated as practical tools to help university staff author and administer their internal and external courses, e.g. WebCT (Goldberg, 1997) and Blackboard (Kubarek, 1999). A major distinction is now drawn between LMS’s and Learning Content Management Systems (LCMS). The Brandon Hall website, for example, explains that: "The primary objective of a learning management system is to manage learners, keeping track of their progress and performance across all types of training activities. By contrast, a learning content management system manages content or learning objects that are served up to the right learner at the right time" (Brandon Hall, 2005, para. 2).

From this perspective, the primary target users of an LMS are "training managers, instructors, administrators", while for an LCMS they are "content developers, instructional designers, project managers" (ibid, para. 5). However, from the standpoint of the usability and accessibility of the system to distance students, there is no essential difference between these models. To access learning material, the student must correctly navigate the operating system, the web browser and the multi-page web application. None of these tasks is trivial, and needing to learn them can inhibit learning the course content (Smulders, 2003).

Some LMS’s have been prototyped that address the accessibility issue by acting as a standalone system when an internet connection is unavailable, including the TILE system developed at Massey University (Jesshope et al., 2000). This hybrid model does not, however, address the usability issues.

There are some good examples of one-off adaptive tutoring systems at the university level. While offering improved functionality over an LMS, they have proven too complex to implement and so have not come into general use. More recent research efforts have focussed upon incorporating adaptive elements into LMS’s and LCMS’s (e.g. Jong et al., 2003).

It was also noted that e-learning has been most effective when integrated with on-campus teaching or research. Attempts to replace teacher and textbook altogether and create entirely electronic virtual universities have largely failed (Education Review, 2004).

Mobile e-learning, or “m-learning”, is a growing research area. M-learning explores ways of using mobile devices such as PDA’s and cellular phones to support and/or deliver some elements of the teaching and learning process, especially with a view to drawing young people into learning (Attewell, 2004). While a cellular phone is a powerful networked computing device, its very short transmission range and restricted interface limit its potential for extending the boundaries of e-learning into rural areas.



From our search for technologies that could be generalised and implemented as a practical alternative to the LMS model, we identified eight key criteria that our system would have to meet. These were:

1. Target the distance learner as primary user. Systems with a focus on course authoring or administration, supporting on-campus study, or providing in-house job training have different requirements from those oriented towards the distance university student.
2. Prioritise the student view. Designing as a learning environment means focussing upon the content, interactions, presentation styles and user support at the student interface. A learning environment integrates all the functions and features that support the student's learning tasks. It treats the student view as the front end of a learning system rather than as the back end of a teaching application.
3. Recognise the university itself as the most important learning community. The learning environment extends the university’s reach to the distance student. This means facilitating a range of learning features and modes which complement rather than replace the roles of the teacher and the textbook, encourage classmates to work together, and draw students into the university learning community.
4. Provide for the environment to adapt or be adapted to the individual learning task or student. This may include how and what learning material is presented when, help in the use of one or other learning feature, querying of subject matter, and assistance in locating supporting material. However, it may not be desirable or feasible to individualise all learning features, and sometimes the individual learner should adapt to facilitate collaborative work and discussion between students.
5. Design as an information system. The learning environment is best conceived of as an information system incorporating hardware, software and human factors, in which automation is not always assumed to be better. A decision is made on what to computerise and what not to, based upon what works best for the distance learner. Some aspects of a course may be better implemented using alternative technologies to the PC, or a human teacher.
6. Design for reusability on three levels – the programmer, the course author, and the student. For an extramural e-learning system to provide a practical alternative, it must not only be good for student learning, it must also efficiently support authoring, managing and delivering multiple courses of study. In addition, it should be able to readily integrate new and improved learning technologies.
7. Consider alternative delivery technologies. The Internet, as the backbone of any networked e-learning system, embraces synchronous and asynchronous alternatives to the ubiquitous web-server/browser-client model. A specialised interface may be more effective for learning than a general-purpose web browser. One-way satellite broadcasting has a much wider geographical reach than cellular or landline telecommunications.
8. Evaluate for functionality, usability and accessibility. In evaluating the system's reliability and performance, a lowest-common-denominator approach is required. All the key functions and features should not only work with top-of-the-range computers and high-speed urban communication networks, but should be tested for usability and accessibility in more challenging circumstances like those of isolated rural students. Interactive and multimedia features delivered over the Internet may not be available to the remote student in a timely manner.

Conceptualisation and specification of a learning computer

The most fundamental conclusion we drew from our review was that for an e-learning system to successfully support university distance education, the focus had to shift from the education provider to the distance learner as the primary user. In particular, some research results suggested to us that our objectives of maximising the learning functionality available to the distance student, while minimising the usability and accessibility problems of web-browser-based systems, could be better met through user-centred strategies of specialisation, customisation, and localisation. By this we mean:

• That special-purpose computer tools (interfaces, programming languages, hardware...) offer a way of providing a simpler and more usable learning environment than general-purpose ones (web browser, HTML, PC…) by helping to render the computer invisible to the learning process (Bork, 2001; Hoppe et al., 2000);
• That user-oriented interface design, incorporating customisable and collaborative strategies, can more easily and simply achieve many of the individualisation and interactivity goals of adaptive systems, while maintaining the locus of control with the student (Murray et al., 2000); and
• That by placing more of the system's functionality on the student’s computer and then using the internet or alternative media to update the student’s computer, the greater functionality associated with standalone systems and the collaborative benefits of networked computers can be incorporated into an e-learning system that better meets the anywhere, anytime requirements of distance study than server-centred systems (Dietinger & Maurer, 1998; Jesshope et al., 2000).

By considering these strategies together, the concept of a learning computer crystallised – a special-purpose computational device which integrates all the features a student needs for learning into a simplified, specialised, individualised, networked machine. The distinctive elements of the learning computer model include:
• Learner-centred – all system features are designed from the student end back, and retain the locus of control with the student.
• Asynchronous – no essential communications (student-teacher, student-student, system-system, etc.) require completion in real time.
• Integrated – seamless conflation of operating system, browser and application functionality through a specialised interface.
• Decentralised – functionality and content are distributed to the learner end, requiring only periodic updating.
• Modular – system functionality and content are built up from interchangeable modules to support course customisation and reuse.

The learning computer provides alternative study modes, each defined by a set of elements, each of which corresponds to a basic constituent of university study. Learning by lecture, for example, may involve a presentation by the lecturer, a set of slides provided by the lecturer, and notes made by the individual student. Learning by tutorial, on the other hand, may involve a one-to-one interaction with a tutor as well as notes made by the individual student.
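To make this structure concrete, a study mode can be represented simply as a named collection of learning elements. The following minimal sketch (in Python, purely for illustration; the prototype itself was built in Delphi) uses the lecture and tutorial examples above, and the element names are our own, not identifiers from the actual system.

```python
# Illustrative sketch: study modes as named sets of learning elements.
# Mode and element names follow the lecture/tutorial examples in the
# text; they are hypothetical, not the prototype's actual identifiers.
STUDY_MODES = {
    "lecture":  ["lecturer_presentation", "lecture_slides", "student_notes"],
    "tutorial": ["tutor_interaction", "student_notes"],
}

def elements_for(mode: str) -> list[str]:
    """Return the learning elements that constitute a study mode."""
    return STUDY_MODES.get(mode, [])

print(elements_for("lecture"))
# ['lecturer_presentation', 'lecture_slides', 'student_notes']
```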

Having visualised the new system in this way, it was then necessary to specify a prototype in more detail. The key decisions made at this stage were:

• That the learning computer would be implemented as the client in a client/server network in which most functionality is distributed to the client side. Communications and learning content would be updated by periodically linking to a central repository of messages and learning resources via the Internet or one-way satellite download (IHug, 2004) using the FTP protocol, or via the post (i.e. CDs), rather than as a browser client downloading pages from a web server. Updates could then proceed in the background while the learning computer is in use.
• That the student interface would be implemented on the Windows platform as a specialised graphical Learning Shell replacing the operating system GUI, rather than by adding functionality to a general-purpose web browser. This Shell would provide "just-enough" of the underlying Windows functionality "just-in-time" to support the required learning task. It would be assembled on a modular basis using an object-oriented rapid application development environment that supported reusable software components. Delphi was chosen for this.
• That navigation and other user interactions with the Shell would be simplified by managing them through an intermediate system layer between the interface and the operating system, which models the modular, hierarchical structure of a distance learning course and maps the user’s learning tasks to the appropriate computer operations. This layer would also model the student sufficiently to provide the reference point for individualising the Shell to a specific user.
• That learning assistance would be provided to the student through an integrated system of electronic mail, collaborative work groups and online help built around a replicated SQL-compliant database. The online help system would be queried by the student via a user-friendly interface to obtain more information on an issue or guidance on where to find more information (a sketch of such a keyword lookup follows this list). In this way the learner obtains assistance through discussion with other students, by contacting the course tutor, or by directly querying the database. Student queries and their outcomes could be logged to facilitate dynamically improving the support system.
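As a rough illustration of the keyword-driven help lookup described in the last point, the sketch below queries a small keyword-indexed help store. It is written in Python with SQLite purely for illustration; the table layout, column names and sample row are assumptions, not the prototype's actual SQL schema.

```python
import sqlite3

# Hypothetical keyword-indexed help store: definitions, elaborations,
# references and URLs linked to keywords for each topic (see text).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE help (topic TEXT, keyword TEXT, kind TEXT, content TEXT)")
conn.execute("INSERT INTO help VALUES ('networking', 'ftp', 'definition', "
             "'File Transfer Protocol: a standard way of moving files between hosts.')")

def query_help(keyword: str, topic: str | None = None) -> list[tuple[str, str]]:
    """Return (kind, content) help entries for a keyword, optionally per topic."""
    sql, args = "SELECT kind, content FROM help WHERE keyword = ?", [keyword]
    if topic is not None:
        sql += " AND topic = ?"
        args.append(topic)
    return conn.execute(sql, args).fetchall()

print(query_help("ftp"))
```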

To implement these decisions and evaluate the overall concept, a course authoring and management application and a communications management component needed to be built and integrated with the learning computer into a network. This networked prototype we have called IMMEDIATE (Integrating MultiMEdia in a DIstAnce learning and TEaching environment).



Evaluation through prototyping and user testing

Incremental prototyping was used to evaluate and refine the main elements of the design specification and to develop IMMEDIATE into a fully operational network. The approach taken was to focus upon meeting the requirements of the Learning Shell as the front end of the system, and then to address networking, authoring and administration issues in the context of supporting those requirements.

The Learning Shell user interface has been assembled in Delphi from reusable “drag and drop” software components. Each of these encapsulates a particular learning element, such as a student notebook or a lecture presentation. They are built to a template to ensure consistency across the system and are self-contained in terms of learning functionality, only needing to call the Shell API to access or update learning content. An algorithm maps each learning content file to a particular learning component and topic in the course, enabling the system to synchronise components by topic and to provide context-sensitive help. The learning support system is built upon a database of definitions, elaborations, references and URLs linked to keywords associated with each topic in a course. This components-based architecture provides a robust, modular structure capable of incorporating more advanced technologies as appropriate.
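The mapping algorithm itself is not spelled out in the article, but its role can be illustrated with a convention-based sketch. The Python fragment below (illustrative only; the prototype was written in Delphi) assumes a hypothetical file-naming convention of the form component_topic.ext and groups files by topic so that components can be synchronised per topic.

```python
from pathlib import Path

# Illustrative mapping of content files to (component, topic) pairs,
# assuming a hypothetical naming convention <component>_<topic>.<ext>,
# e.g. "slides_sorting.ppt" -> component "slides", topic "sorting".
def map_content_file(path: Path) -> tuple[str, str]:
    component, _, topic = path.stem.partition("_")
    if not topic:
        raise ValueError(f"{path.name}: expected <component>_<topic> name")
    return component, topic

def index_by_topic(files: list[Path]) -> dict[str, list[tuple[str, Path]]]:
    """Group content files by topic so components can be synchronised per topic."""
    index: dict[str, list[tuple[str, Path]]] = {}
    for f in files:
        component, topic = map_content_file(f)
        index.setdefault(topic, []).append((component, f))
    return index
```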

As a network user interface in an internet-based client/server network, the Learning Shell has some functional and architectural similarities to a platform-specific web browser, such as Microsoft's Internet Explorer, that is tightly linked to the underlying operating system. It is built around a Controller module providing an API through which interface components interact with each other and with the reference model layer (Figure 1). The model layer is a set of data structures and related operations, each of which models a different aspect of the learning computer and its current state. The System Model, for example, defines the different study modes available in a course and maps individual learning components to each mode.
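The mediator role of the Controller can be sketched as follows. This is an illustrative Python skeleton of the pattern just described (the Shell itself is a Delphi application); the class and method names are our own, not the Shell's API.

```python
# Mediator-style sketch of the Controller: interface components never
# talk to each other directly, only through the Controller, which
# consults the model layer. All names here are illustrative.
class SystemModel:
    """Maps each study mode to the learning components available in it."""
    def __init__(self, modes: dict[str, list[str]]):
        self.modes = modes

class Controller:
    def __init__(self, system_model: SystemModel):
        self.model = system_model
        self.components: dict[str, object] = {}  # registered interface components
        self.mode: str | None = None

    def register(self, name: str, component: object) -> None:
        self.components[name] = component

    def change_mode(self, mode: str) -> list[str]:
        """Switch study mode; return the components that should now be active."""
        self.mode = mode
        return self.model.modes.get(mode, [])

controller = Controller(SystemModel({"lecture": ["LectureNotes", "StudentNotes"]}))
print(controller.change_mode("lecture"))  # ['LectureNotes', 'StudentNotes']
```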

Figure 1. Learning Shell architecture

[Diagram. The Controller sits at the centre: all interactions between components are channelled through it, and it manages the interface components (mode changes, etc.) and interacts with the system components via their respective managers. The user-facing parts comprise the Desktop (lets the user change modes and launch interface components), User Info collection (logon, backup drive, etc.), generic learning components available in any mode and managed by the user (e.g. Student Notes), and mode-specific learning components accessible only by changing mode and managed by the system (e.g. Lecture Notes); components call the Operating System API for their internal functionality. The model layer comprises the Student Model (student information not stored elsewhere, e.g. current topic and mode, password), the System Model (maps components to modes and holds other static system information), the System Tree (maps the course structure and stores dynamic information such as topics visited or completed), the Resources Model (maps stored files to learning components) and the System Database (integrates messages and learning help), together with the Learning Shell directory system.]

The Controller plays an analogous role to a browser’s controller, which manages the other browser components and calls on them to perform operations specified by the user (Comer, 1999, p. 427). However, the Learning Shell Controller, through reference to the model layer, has additional capacities for managing the interface so as to simplify and minimise housekeeping tasks and free the user for learning. In tandem with the learning components architecture, it provides a direct-manipulation interface which seamlessly integrates all necessary functionality into a single, simplified, adaptable framework. In contrast to a cross-platform browser environment, learning components can call the Windows API directly, taking full advantage of its rich interactive functionality.

Network architecture

The IMMEDIATE network architecture is shown in Figure 2. The university end of IMMEDIATE uses an FTP server as its gateway, controlling access to the Course Repository. Because most functionality resides at the student end, the only materials that need to be transferred across the network are messages and updated learning or system management files. Most updates are short text files (messages and help updates) which can be readily transferred even over slow Internet connections. Where land-based Internet service is too slow or unreliable, larger files can be updated by alternative media such as CD or DVD disks or satellite. FTP offers a simple basis for meeting these requirements and enables updates to proceed behind the scenes without the user having to wait for downloads to complete.
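A minimal sketch of such a background update is shown below, using Python's standard ftplib on a worker thread (for illustration only; the actual prototype is a Delphi application, and the host, credentials and file names here are placeholders).

```python
import ftplib
import threading

# Fetch small text updates (messages, help entries) over FTP on a
# worker thread, so the Learning Shell stays responsive meanwhile.
# Host, credentials and file names are placeholders, not the real ones.
def fetch_updates(host: str, user: str, password: str, names: list[str]) -> None:
    with ftplib.FTP(host) as ftp:
        ftp.login(user, password)
        for name in names:
            with open(name, "wb") as out:
                ftp.retrbinary(f"RETR {name}", out.write)

worker = threading.Thread(
    target=fetch_updates,
    args=("ftp.example.edu", "student", "secret", ["messages.txt", "help.txt"]),
    daemon=True,  # updates proceed behind the scenes while the Shell is in use
)
worker.start()
```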

To support a communications model based on a replicated database, each user's view of the database is copied to that user's machine from a central master copy. A set of protocols has been devised and implemented for updating and synchronising these views and the master copy over FTP. All updates are SQL result sets, which are transferred as plain text files.
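How such a plain-text update might be applied at the student end can be sketched as follows. This Python/SQLite fragment is illustrative only: the article does not publish the protocol, so the one-SQL-statement-per-line file format assumed here is hypothetical.

```python
import sqlite3

# Apply a downloaded update file to the local replica of the database.
# Assumed (hypothetical) format: one SQL statement per line of the file.
def apply_update(db_path: str, update_file: str) -> None:
    conn = sqlite3.connect(db_path)
    try:
        with conn:  # one transaction: either all statements apply, or none do
            with open(update_file, encoding="utf-8") as f:
                for line in f:
                    statement = line.strip()
                    if statement:
                        conn.execute(statement)
    finally:
        conn.close()

# Example: apply_update("student_view.db", "update_2007-10-15.sql")
```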

The course authoring, tutoring and administration application was built to prototype a method for assembling and managing courses that does not require the tutor to understand IMMEDIATE's inner workings, and to help evaluate the communication and learning support capabilities of the learning computer. It uses the same mapping algorithms and data structures as the Learning Shell to enable a course to be outlined, study modes to be defined and learning content to be inserted correctly via a direct-manipulation graphical interface. This mechanism enables course content to be dynamically improved, updated, and re-used after it has been deployed.

Figure 2. IMMEDIATE network architecture

[Diagram. On campus, the Course Management & Authoring System (tutor user) connects over a LAN to the Repository (database and update files), which is managed by the Repository Manager behind an FTP server providing security. Off campus, the Learning Shell (student user) reaches the FTP server via the Internet, with CD-ROM as an alternative delivery medium.]


The final incremental prototyping step was to install IMMEDIATE's component applications on a LAN to ensure that they would work correctly together to support the key learning computer requirements. Once we had achieved this, our next task was to evaluate these features under field conditions.

Evaluation with users

IMMEDIATE was installed to run over the rural telephone network and then tested with volunteer users from a remote farming and fishing community over a period of two days. This involved setting up the Learning Shell on a PC, the selection of four volunteers, a pilot study with one user and an in-depth study with the other three. If IMMEDIATE were to have universal application, it was vital to test it under some of the more difficult field conditions that might be faced by distance students. It was necessary to ensure that the essential functionality of the system was accessible with an older model PC and operating system, a slow and unreliable rural Internet connection, and a relatively inexperienced computer user working alone.

The goals of this evaluation were to determine that IMMEDIATE performed reliably under these conditions, that all of the learning computer's functionality was accessible to the user, and to assess the usability of the Learning Shell. Developers of any computer-based learning system must consider students both as learners and as users; that is, they must design both for form and content (Smulders, 2003). Poor usability inhibits learning. Inaccessibility prevents it altogether. This is a critical issue where the student is studying at home alone without access to user support.

In this phase of the project, the evaluation was focussed on the student as user, that is, on the technical question of whether the users could carry out the main tasks, without considering the pedagogical effectiveness of the learning computer. This involved evaluating the functionality of all three subsystems: the Repository Manager, the Course Authoring and Management System and the Learning Shell. For students to carry out the specified tasks, all three subsystems had to work together effectively.

The usability of the student end also had to be evaluated. Could the participants complete a set of typical learning tasks unaided? How efficiently could they do this?

Finally, the following questions needed to be addressed with regard to accessibility. Was the system runnable where the Internet was slow and unreliable? Could key functions, such as updating resources, communicating with other students and getting help from the tutor, work over a rural network?

The Learning Shell was installed on a PC running Windows 95 at a small rural school. Four members of the community serviced by the school volunteered to assist in evaluating the usability of the prototype. All four volunteers had studied at the tertiary level and three were current or very recent tertiary distance students. One agreed to make herself available for a pilot study, which was used to refine the evaluation to ensure that all major aspects of the system would be tested.

The three remaining volunteers participated in the in-depth study. They were selected on the basis that they lived in a rural area, had distance learning experience and had some knowledge of computers (the relevant information was obtained by questionnaire). This is an example of purposive sampling (Yin, 1984; Patton, 1990), where appropriate individuals who meet the specified requirements are selected. It was necessary to find individuals who had the potential to successfully complete the exercises under conditions similar to those faced by a distance student working unassisted from a remote location.

The three volunteers were asked to complete seven scenarios covering setting up the student Learning Shell and exploring all aspects of its functionality, including accessing learning material in six different study modes: lecture, group work, tutorial, textbook, collaboration, and practice. They were expected to complete these in two one-hour sessions spread over two days. The first exercises were quite detailed in their instructions, the later ones progressively less so. To ensure that no user was advantaged by knowledge of the course content, the material was taken from a third-year university paper on a topic that none of them had studied. This was appropriate given the nature of the evaluation. The sequence of scenarios was as follows:

1. Initialise Learning Shell to own preferences
2. Log onto course
3. Join Group Discussion
4. Attend Lectures, Seek Help, and Complete Self-Assessment
5. Monitor the Assignment Discussion
6. Explore On-line resources
7. Complete Individual Tutorial

One issue that had to be faced was that of training. When the system was fully developed, student users would be supplied with a CD or DVD containing the software and course materials, a demonstration video and a hardcopy guide. Since at this stage there was no demonstration video, some training had to be provided to the users. Each user was given a demonstration of the student software as it would be shown in the video and then completed the first two exercises under supervision. They were then left to complete the scenarios unassisted, except for the provision of a sheet of Handy Hints that would form part of the hardcopy guide.

Three forms of data collection were used during the experiment: logging of user activities (Dix et al., 1993), observation, and interviews (Patton, 1990; Scott et al., 1991). Where, when and what each user did during their sessions was tracked by the system and saved in a log file, primarily to analyse users' navigation paths and completion times, and to record where they ran into difficulties. Because the student software is designed to support user-centred exploration and multiple navigation paths, a user should be able to complete a task even if they diverge from the shortest path one would expect an experienced user to follow.
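To make the log analysis concrete, here is a minimal sketch of how completion times per scenario could be derived; the log format (ISO timestamp, user, scenario, event) is an assumption, not IMMEDIATE's actual format.

```python
# Minimal sketch of completion-time analysis from an activity log.
# The comma-separated log format (timestamp, user, scenario, event)
# is assumed for illustration.
from datetime import datetime

def completion_times(log_lines: list[str]) -> dict[tuple[str, str], float]:
    """Return minutes from 'start' to 'end' for each (user, scenario)."""
    starts: dict[tuple[str, str], datetime] = {}
    durations: dict[tuple[str, str], float] = {}
    for line in log_lines:
        stamp, user, scenario, event = line.strip().split(",")
        t = datetime.fromisoformat(stamp)
        key = (user, scenario)
        if event == "start":
            starts[key] = t
        elif event == "end" and key in starts:
            durations[key] = (t - starts[key]).total_seconds() / 60
    return durations

log = [
    "2004-06-01T10:00:00,user2,scenario3,start",
    "2004-06-01T10:24:30,user2,scenario3,end",
]
print(completion_times(log))  # {('user2', 'scenario3'): 24.5}
```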

Each participant was interviewed individually, immediately after they had completed each session (i.e. twice). The interviews were semi-structured: a set of questions was prepared as the starting point for exploring the person's experience with the prototype. Questions were asked about such matters as the difficulties faced, what aspects of the system they liked, and what they would want changed. These interviews were recorded and transcribed.

Results of the Evaluation

IMMEDIATE ran successfully and without major incident under quite challenging conditions. The results of this evaluation were very positive, with all volunteers able to complete the exercises within the two one-hour sessions. All three stated that by the end of the exercises they were confident that they were able to use the system unaided.

The users were only observed working through the first two scenarios. The data from this observation were of somewhat secondary value because these two exercises were part of the supervised demonstration. However, they did confirm the data from the profile questionnaires indicating that the users represented a range of computing experience from confident to unsure. Two of the users were inhibited by previous negative experiences with Windows, including the fear of crashing the system if they did something wrong.

An analysis of the log files revealed that whilst each of the volunteers got held up at least once, they were able to complete all their scenarios. User 2, the least experienced user, took significantly longer to complete several scenarios (Figure 3).

The most fruitful form of data collection proved to be the interviews, which were undertaken with each user individually at the end of each one-hour session. Each volunteer was asked the same set of questions as the basis for exploring their experience with the prototype. The interviews lasted between 20 and 30 minutes. No major usability problems were identified during the interviews.

The complete interviews were transcribed and then analysed for significant, common themes relating to the goals of the evaluation. Major themes identified were:

• The complexity of the general-purpose PC environment.
• The problem of poor Internet service in rural areas.
• The negative effects of personal isolation.
• The importance of training and help.
• Simplifying searches to avoid information overload.
• Support for the Learning Shell.



Figure 3. Comparison of scenario completion times

Many of the interviewees' comments alluded to the frustrations they faced using their computers at home. These comments strongly support the argument that modern personal computing systems have become too complex for learning (“there's a lot of stuff that you just don't need. And to get around to doing the task that you want to do, it's harder to get there because you've got obstacles in the way.”); that poor rural Internet service renders server-centred systems unworkable (“waiting, waiting”; "You can dialup and get online. But you can't get anything."); and that without anyone more experienced on hand to help, the home student may not be able to complete their learning tasks (“It's easier to do the basic stuff, [easier to drive the 70 kilometres into town to] go to the library and things like that and photocopy pages out of a library book [than find the item on the web]. But it should be easier on a computer but I find that it is not.”).

Overall the interviewees embraced the learning computer as an easier and more effective vehicle for achieving tasks related to e-learning. "Half the time I get frustrated when I have to work with my computer... If I was to be studying at Massey and there was a program like that, that would probably be one of the main reasons why I would study online… there were things that I got confused on but overall it was very clear what I was doing to the point where I thought: this is too clear, and too easy.” Some of the features they singled out were: simplicity, ease of use, integration, speed, timeliness, relevance, robustness, friendliness, transparency, consistency, and clear instructions. One user liked the way the Shell was simplified and standardised across all screens. Another noted that the advantages of this simplicity are that “you don't have to remember a lot of things” and that “you don't have to worry about all the bells and whistles and everything else going on because it is all just there”.

One of the benefits of the Learning Shell's simplified, consistent, modular architecture is that help screens, emphasising a "Handy Hints" approach, can be easily linked to each component in the system to provide just-in-time, just-enough, context-sensitive help to the user (Figure 4). For one user, “if I click on Help I have found every time what I need to do and it's been able to solve it for me,” while accessing the help system in Windows applications didn't solve his problem because “quite often it's not the scenario where you have got yourself”. Another noted that “the content is relevant to what you are doing” whereas on her home computer it was often “irrelevant drivel”.



Figure 4. Learning Shell showing Help screen for Lecture component

The volunteers also commented favourably on how the Learning Support system helped them rapidly locate supporting material compared to using a general web search engine, where you "get 300 sites of which 275 of them are useless". "It was easy and it was fast. And that is my two main things at home… Because you didn't have to waste all that time in sorting through the information that you are given as to what's relevant and what's not."

Comments were also solicited on how the system might be improved. One proposal was to add an “Undo” function by which the user could step back from their last action.

Whilst there were only three users in the in-depth evaluation, they were representative of distance learners in remote environments. The number of users was sufficient to demonstrate that the concept was workable, that the Repository Manager could handle communication between the university and student ends, that the Course Authoring and Management System could assist a remote student, and that the system remained operational over a two-day period. The system proved accessible to users, providing speedy access in contrast to the long delays that occur when using the Internet. Whilst the users took varying lengths of time to accomplish the tasks, all were able to complete them even though the course material was unfamiliar. In their comments the participants all acknowledged the benefits of a system of this kind. They were able to run this distributed system over a telephone network and demonstrate that the main functions could all be used.

Conclusion and discussion

The main result of this project so far has been to demonstrate the feasibility of the learning computer concept as a means of combining the benefits and power of standalone computing with the collaborative potential of the network in an easy-to-use distance learning environment. This ease of use and integrated functionality offers advantages to all distance students. IMMEDIATE's ability to support students in remote locations beyond the reach of web-server-based systems suggests the approach has real potential as a path towards bridging the digital divide in university-level distance education that merits further exploration.



Technologically, the learning computer is distinguishable from either an LMS or an LCMS by its distributed functionality, asynchronous communications and integrated interface. But its most significant difference is that the distance students themselves, rather than the education providers, are the primary target users. An integrated, holistic approach is taken to e-learning, in which the usability of the student's overall computing environment is considered as important to successful learning outcomes as the learning content itself.

The IMMEDIATE prototype is first and foremost a specialised learning environment in which network and education provider features are designed as support services rather than as authoring, teaching or course administration productivity tools. It demonstrates the potential for specialised computing environments to address usability concerns resulting from the increasing complexity of general-purpose systems as they encompass more and more functions and features.

IMMEDIATE models the forms in which learning will be delivered, while leaving most content to be authored separately or picked up from reusable learning repositories. The learning interface is assembled from reusable learning components, an extension of the drag-and-drop components of visual programming tools like Delphi or Visual Studio. Learning components differ from reusable software components (controls) in that they encapsulate a specific learning domain task rather than a more generally applicable software feature. They differ from reusable learning objects in that they implement the form in which the content will be displayed rather than the content of a learning module. The learning components are the centrepiece of a modular construction which supports reuse on three levels: the student (multiple courses), the teacher (reusable learning materials), and the software engineer (reusable code).
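As a rough illustration of the distinction drawn above (the actual components were Delphi controls; everything below, including the class and method names, is hypothetical), a learning component wraps a learning-domain task rather than a generic widget, and loads separately authored content into that form:

```python
# Hypothetical sketch: a learning component encapsulates one study mode
# (here, a lecture viewer), unlike a generic reusable control.
from abc import ABC, abstractmethod

class LearningComponent(ABC):
    """Base class: every component presents one study mode and loads
    content that was authored separately from the component itself."""

    @abstractmethod
    def load_content(self, resource_path: str) -> None: ...

    @abstractmethod
    def render(self) -> str: ...

class LectureComponent(LearningComponent):
    def __init__(self) -> None:
        self.notes: list[str] = []

    def load_content(self, resource_path: str) -> None:
        # Content is merely inserted into the form the component defines.
        with open(resource_path, encoding="utf-8") as f:
            self.notes = f.read().splitlines()

    def render(self) -> str:
        return "\n".join(self.notes)
```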

By combining an emphasis on interface design (especially user-centred, adaptable and collaborative strategies) with an emphasis on e-learning as an extension of the human teacher and the university, the framework for an integrated, individualised, multi-dimensional, easy-to-use learning environment has been laid. This supports the findings of Murray et al. (2000) that “good interface design and passive but powerful user features can sometimes provide the benefits that are ascribed to more sophisticated or intelligent features” (para. 41).

The learning computer concept is also distinct in that it draws upon the ubiquitous (embedded) computing approach to e-learning (Bork, 2001; Hoppe et al., 2000). The object is to embed the computer functionality so seamlessly into the learning domain, by taking into account the total context including hardware, software and human factors, that the computer will vanish for the learner. We are trying to enhance and extend the university learning environment rather than create an alternative virtual one.

The Learning Shell uses specialised software to embed the computer in the learning environment in the form of a graphical user shell over the most widely used family of operating systems, Microsoft Windows. In this way we implement a “what” interface centred upon the task in the user's domain rather than a “how” interface focussed upon the computing mechanisms for achieving that task. This addresses a long-recognised challenge for simplifying and improving the usability of computer systems, as discussed in Gentner and Grudin (1990). The next challenge is to extend this approach to support special-purpose hardware, cross-platform functionality and mobile computing.

Ongoing Work

Research is proceeding along two axes: evaluating IMMEDIATE in an actual distance learning course, and re-engineering the prototype to support mobile learning and low-cost hardware.

Preparations are underway to trial a further-refined learning computer in university-level distance teaching of Maori and English as second languages. The trials provide opportunities to evaluate its pedagogical effectiveness, with a particular emphasis on assessing the learning support and search facilities. They also provide opportunities for refining and testing IMMEDIATE's course authoring and management tool.

IMMEDIATE implements the learning computer as an application that is installed on top of the Windows operating system on a specific computer. However, its anybody, anywhere, anytime goal would be better met if students were not confined to using their own computer but could use any available one. As a first step in this direction we are re-engineering the Learning Shell so it can be installed on and run from a plug-in flash-memory device as a mobile learning computer.

Another possibility is to re-engineer the Learning Shell to run as a specialised user shell on top of a Linux kernel. While a Linux version offers the advantages of a cleaner and leaner implementation with lower system speed, memory and storage requirements, it carries a much greater programming overhead. All the rich interface functionality of the learning computer would have to be built from much more basic building blocks than in a Windows development environment like Delphi. As a medium-term goal, we are exploring the feasibility of implementing a Linux version along two lines:

• As a low-cost single-purpose computer. Potentially, a Linux-based learning computer could be built from recycled PCs and components and distributed cheaply to students. Around the world millions of computers are scrapped as obsolete each year, many of which could be reclaimed for this purpose (ERG, 2006).
• As a self-booting, cross-platform, portable device that can be plugged into any PC to temporarily convert it into a specialised learning computer. This offers a way for a learner to take full advantage of the learning computer approach while sharing a computer with other household members and other tasks, or while studying from different locations. Black Dog (LinuxDevices, 2005) is an example of a portable self-booting device for running Linux-based software in a Windows environment.

Reaching out to the many millions of potential e-learners beyond the ambit of current web technology is one of the biggest challenges and opportunities in computing today.

References

Attewell, J. (2004). Mobile technologies and learning: A technology update and m-learning project summary, London: Learning and Skills Development Agency.

Bindé, J. (2005). Towards Knowledge Societies, Paris: UNESCO Publishing.

Bork, A. (2001). Tutorial Learning for the New Century. Journal of Science Education and Technology, 10 (1), 57-71.

Brandon Hall (2005). LMSs and LCMSs Demystified, retrieved October 15, 2007, from http://www.cedmaeurope.org/newsletter%20articles/Brandon%20Hall/LMSs%20and%20LCMSs%20Demystified.pdf.

Comer, D. E. (1999). Computer Networks and Internets (2nd Ed.), New Jersey: Prentice Hall.

Dietinger, T., & Maurer, H. (1998). GENTLE (GEneral Networked Training and Learning Environment). In Ottmann, T. & Tomek, I. (Eds.), ED-Media/ED-Telecom '98, Charlottesville, VA: AACE, 358-364.

Dix, A., Finlay, J., Abowd, G., & Beale, R. (1993). Human Computer Interaction, Prentice Hall.

Education Review (2004). UK e-university "fiasco". Education Review, March 31-April 6, 2004, reprinted in EXMSS Off Campus, June 2004.

ERG (2006). Electronic Waste, E-waste Research Group, Griffith University, Queensland, retrieved October 15, 2007, from http://www.griffith.edu.au/engineering-information-technology/electronic-waste.

Gentner, D. R., & Grudin, J. (1990). Why Engineers (Sometimes) Create Bad Interfaces. In Carrasco Chew, J. & Whiteside, J. (Eds.), CHI'90 Proceedings, New York: ACM, 277-282.

Goldberg, M. (1997). Communication and Collaboration Tools in WebCT. Paper presented at the Conference on Enabling Network-Based Learning, May 28-30, 1997, Espoo, Finland.

Hope, W. (2002). Evidence of Unequal Access. NZ Infotech, 530.

Hoppe, U., Lingnau, A., Machado, I., Paiva, A., Prada, R., & Tewissen, F. (2000). Supporting Collaborative Activities in Computer Integrated Classrooms – the NIMIS Approach. Paper presented at the 6th International Workshop on Groupware, 18-20 October, 2000, Madeira, Portugal.

IHug (2004). High Speed Satellite Internet, retrieved October 15, 2007, from http://www.ihug.co.nz/.

Jesshope, C., Heinrich, E., & Kinshuk (2000). Technology Integrated Learning Environments for Education at a Distance. Paper presented at the DEANZ 2000 Conference, April 26-29, 2000, Dunedin, New Zealand, retrieved October 15, 2007, from http://www.deanz.org.nz/docs/jesshope.doc.

Johnson, R., Kemp, E., Kemp, R., & Blakey, P. (2002). From electronic textbook to multidimensional learning environment: overcoming the loneliness of the distance learner. In Kinshuk, Lewis, R., Akahori, K., Kemp, R., Okamoto, T., Henderson, L. & Lee, C. H. (Eds.), Proceedings of 2002 International Conference on Computers in Education, Los Alamitos, CA: IEEE CS Press, 632-636.

Jong, B. S., Lin, T. W., Chan, T. Y., & Wu, Y. L. (2003). Using VR Technology to support the Formation of Cooperative Learning Groups. In Devedzic, V., Spector, J. M., Sampson, D. & Kinshuk (Eds.), Proceedings of the 3rd IEEE International Conference on Advanced Learning Technologies, Los Alamitos, CA: IEEE CS Press, 37-41.

Kubarek, D. (1999). Introducing and Supporting a Web Course Management Tool. Syllabus magazine, June, retrieved October 15, 2007, from http://www.cit.cornell.edu/atc/cst/SyllabusWeb/syllabus.pdf.

LinuxDevices (2005). Pocketable Linux server creates plug-and-go Linux desktop, retrieved October 15, 2007, from http://www.linuxdevices.com/news/NS8562564746.html.

Marine, S., & Blanchard, J-M. (2004). Bridging The Digital Divide: An Opportunity For Growth In The 21st Century, retrieved October 15, 2007, from http://www1.alcatel-lucent.com/doctypes/articlepaperlibrary/pdf/ATR2004Q3/S0408-Bridging_opportunity-EN.pdf.

Murray, T., Condit, C., Piemonte, J., Shen, T., & Khan, S. (2000). Evaluating the Need for Intelligence in an Adaptive Hypermedia System. Lecture Notes in Computer Science, 1839, 373-382.

Patton, M. (1990). Qualitative Evaluation Methods, California: Sage Publications.

Scott, C., Clayton, J., & Gibson, E. (1991). A Practical Guide to Knowledge Acquisition, Reading: Addison-Wesley.

Smulders, D. (2003). Designing for Learners, Designing for Users. eLearn Magazine, February 3, retrieved October 15, 2007, from http://www.elearnmag.org/subpage.cfm?section=best_practices&article=11-1.

Yin, R. (1984). Case Study Research: Design & Methods (1st Ed.), London: Sage Publications.



Olfos, R., & Zulantay, H. (2007). Reliability and Validity of Authentic Assessment in a Web Based Course. Educational Technology & Society, 10 (4), 156-173.

Reliability and Validity of Authentic Assessment in a Web Based Course

Raimundo Olfos
Pontificia Universidad Católica de Valparaíso, Mathematics Institute, Valparaíso, Chile // Raimundo.olfos@userena.cl

Hildaura Zulantay
Universidad de La Serena, Mathematics Department, La Serena, Chile // hzulantay@yahoo.com

ABSTRACT

Web-based courses are promising in that they are effective and their instructional design can be improved over time. However, the assessments of such courses are criticized in terms of their validity. This paper is an exploratory case study regarding the validity of the assessment system used in a semi-presential (blended) web-based course. The course uses an authentic assessment system that includes online forums, online tests, self-evaluations, and the assessment of e-learner processes and products by a tutor, peers, and an expert. The validity of the system was checked using internal and external criteria. The results show that the authentic assessment system faces technical problems, especially regarding the reliability of its instruments. Some suggestions are proposed to strengthen authentic assessment in web-based courses and to test it.

Keywords
Authentic assessment, Reliability and validity, Web based instruction, Evaluation

Introduction

Web-based instruction offers flexibility, and it is becoming increasingly popular (Walles, 2002). Some experiences show that Web-based instruction is as effective as classroom instruction (White, 1999; Ryan, 2001; Tucker, 2000). Moreover, Wilkerson and Elkins (2000) concluded that students felt that web-based instruction was as effective as that in traditional courses. The use of the Web for instruction is in an early stage of development, and its effectiveness may not yet be fully known (Downs et al., 1999). In an extensive study including 47 assessments of web-based courses, Olson and Wisher (2002) assert that web-based instruction appears to be an improvement over conventional classroom instruction.

To Phipps and Merisotis (1999), the quality of the research on distance learning courses is questionable: much of the research does not control for extraneous variables; the validity and reliability of the instruments used to measure student outcomes and attitudes are questionable; and many studies do not adequately control for the feelings and attitudes of students. Attitude is a learned predisposition to consistently respond to a given social object (Segarra et al., 1997), for instance, technology tools.

Authentic Assessment

The use of new technology raises issues related to pedagogy, content, and interaction. As these issues are addressed, there needs to be a subsequent alteration in the type of assessment used in such courses and the associated procedures (Walles, 2002).

A new approach to evaluation is authentic assessment. This modality connects teaching to realistic and complex situations and contexts. Also called performance assessment, appropriate assessment, alternative assessment, or direct assessment, authentic assessment includes a variety of techniques such as written products, portfolios, check lists, teacher observations, and group projects.

According to Herrington and Herrington (1998), authentic assessment occurs within the context of an authentic activity with complex challenges, centers on an active learner who produces refined results or products, and is associated with multiple learning indicators. It includes the development of tests and projects (Condemarín, 2000). Authentic assessment not only evaluates the products, but also the processes involved. It is a process that monitors the learner's progress using a variety of methods, such as observation records, interviews, and evidence gathering. It is consistent with Vygotsky's dynamic assessment in that it is a mediated process in which social interaction stimulates the learning process. It also allows for collaborative learning.

Authentic assessment of educational achievement directly measures actual performance in the subject area. It was developed as a result of criticism of multiple-choice tests, which usually only provide a superficial idea of what a student has learned and do not indicate what a student can do with what was acquired (Aiken, 1996). Authentic assessment can provide genuine accountability. All forms of authentic assessment can be summarized numerically, or put on a scale, to make it possible to combine individual results and to meet state and federal requirements for comparable quantitative data (National Center for Fair and Open Testing, 1992). One method to this end, according to Wiggins (1998), is the use of rubrics, which are sets of criteria that evaluate performance. Points are assigned according to how well each criterion is fulfilled, and are then used to provide the quantitative values.
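For instance, a minimal sketch of rubric-based scoring of the kind Wiggins describes might look as follows; the criterion names, weights and levels below are invented purely for illustration:

```python
# Minimal sketch of rubric scoring: each criterion has a weight and a
# fulfilment level; the weighted sum yields a quantitative value.
# Criterion names, weights and levels are invented for illustration.

rubric = {
    # criterion: (weight, max_level)
    "clarity of learning goals": (2, 4),
    "suitability of activities": (3, 4),
    "quality of evaluation plan": (2, 4),
}

def score(ratings: dict[str, int]) -> float:
    """Return a 0-100 score from per-criterion ratings (0..max_level)."""
    earned = sum(w * ratings[c] for c, (w, _) in rubric.items())
    possible = sum(w * m for w, m in rubric.values())
    return 100 * earned / possible

print(score({"clarity of learning goals": 3,
             "suitability of activities": 4,
             "quality of evaluation plan": 2}))  # approximately 78.6
```

Summing weighted criterion levels in this way is what makes rubric results comparable and combinable across students, as the National Center for Fair and Open Testing argument above requires.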

The validity of Authentic Assessment

Aiken (1996) plainly indicates the difficulty in establishing the validity and reliability of any authentic assessment. Schurr (1999) states that a disadvantage of authentic assessment is the difficulty, maybe even the impossibility, of obtaining consistency, objectivity, and/or standardization in its results. Wolf (cited by Herrington and Herrington, 1998) sees the problem as a tradeoff between validity and reliability. In the same vein, Linn, Baker and Dunbar (1991) hold that performance-based assessments are valid in terms of consequence, impartiality, transference, content coverage, cognitive complexity, significance, judgment, cost, and efficiency; consequently, reliability, understood as the stability of the results, is difficult to obtain.

Linn et al. state that “a greater dependence of critical judgments on the performance of tasks” is inevitable. This is a problem for large-scale assessments, but not for the smaller and more specific contexts to be found in higher education (Gipps, 1995), or in those cases in which the student is evaluated through class activities, where learning and evaluation are essentially one and the same (Reeves and Okey, 1996).

There are also problems with reliability in standardized tests. Gipps (1995) mentions national tests in the UK with reliability rates in post-tests that are lower than those obtained in performance tests. Reeves and Okey (1996) state that authentic assessment takes place within a real-world context, where generalizations are of little value and, therefore, reproducibility should not be a concern. Young (1995) adds that assessment needs can be perceived in a more functional way, and assessments can be validated in terms of their value in the real world rather than for their stability as instruments (Herrington and Herrington, 1998).

Authentic assessment in web based courses

Computer-mediated distance education introduces extraneous factors that could affect the validity of the course assessment system. One of these factors is the usability of the web site. Usability deals with how well a system satisfies user needs and requirements. It applies to all the aspects of a system with which a user might interact, including installation and maintenance procedures. It is usually associated with aspects such as ease of learning, user friendliness, ease of memorizing, minimal errors, and user satisfaction (Sánchez, 1999, 2000).

Another factor is attitude. According to Ceballos et al. (1998), only when there is a favorable attitude toward ICTs can an e-learner effectively face learning tasks; therefore, a web-based assessment system requires a positive attitude from its users to show its full potential. In other words, the degree of effectiveness of the assessment system could be affected by a negative attitude towards ICTs on the part of some of the e-learners.

The literature provides scarce information about authentic assessment in web-based courses; it refers only to case studies and similar experiences. Weller (2002) examines technical barriers in the assessment process of a web-based course, and points out the tension between individuality and robustness in submissions and the detection of plagiarism. Clarke et al. (2004) state that feedback to students on their assignments is an essential activity. They point out that tutorial support must be a crucial part of a good distance course, where emails should be considered as a non-intrusive means of communication. Orde (2001) offers some suggestions for developing online courses, for instance: to consider a description of learners; to provide readily available technical support; to eliminate group activities easily done face-to-face; and to record and grade interactions such as e-mail and group discussion contributions. To Orde, the testing portion of CourseInfo requires that each quiz item be individually entered and submitted; if this feature of the software is used, the ID students advise allowing multiple attempts. To Orde, formative evaluation is an essential component and necessary to online course development.

Gatlin and Jacob (2002) discuss the advantages of digital portfolios as part of one university's authentic pre-service teacher assessment. Chang (2002) asserts that web-based learning portfolios are useful for students to obtain feedback from other students. Collins et al. (2001) state that the design of assessment for web-based courses should measure student participation and the development of skills.

The literature recommends centering authentic assessment on individual tasks, but also connecting it to real life, including interactive work among peers. Fenwick and Parsons (1997) assert that effective assessment must be intricately woven throughout the teaching-learning process. Collaborative learning activities enable subjects to share their abilities and limitations, providing a better quality product than one that is the mere sum of individual contributions. Group interactions, following the indications for collaborative work given in a course, facilitate vicarious learning, which is hard for some subjects to experience if they do not interact with their peers (Bandura and Walters, 1963; Vygotsky, 1985).

The purpose of this study

The purpose of this study is to analyze the validity of the Authentic Assessment System (AAS) used in a web-based online course. The leading questions of this study are: Is there consistency between AAS results and the effectiveness of products in a real-life context? How much do AAS results correlate with external individual learning criteria? What evidence can be gathered about the reliability and validity of the AAS's different phases? Do external factors such as web usability and participants' attitude invalidate the AAS design?

Method

Subjects

Twenty-eight teachers participated: 13 primary school teachers and 15 secondary school math teachers, who taught different levels of math, from 5th to 10th grade. All except one of the participants worked in schools in two nearby cities or their surroundings. The cities had a population of approximately 120,000 inhabitants each. None of the students had previous experience with distance learning.

Setting

A. The course. This study took place within the context of a twelve-week course called "Didactics of Elementary Algebra" offered to mathematics teachers. Twenty-five of the 28 teachers (hereinafter called students) passed the course and met the AAS requirements, an effectiveness of 89%. It was a semi-presential (blended) course: the students did most of the activities by means of distance learning. To do so, the students had access to a web site with the course contents and computer resources such as e-mail, forums, FTP, and online tests in order to interact with the rest of the students and with the tutor and to meet the course's evaluation requirements. The course included three classroom activities and an authentic assessment system made up of eight internal examinations.

The course was managed by a coordinator and led by a tutor, both of whom were academics with experience in distance learning. The students used computers as a means of communication with their peers as well as with the tutor. In addition, computers were used to do the course assignments and to facilitate the search for and creation of didactic material.

The course objectives were: a) to develop the capacity to design, implement, and evaluate didactic units regarding elementary algebra; b) to strengthen the capacity to work collaboratively and to design and evaluate the use of didactic units centered on elementary algebra; and c) to develop Internet skills with the aim of strengthening professional skills and efficiency in teaching elementary algebra.



The first unit, "Internet tools" (1 week), included a classroom session in which surfing the Internet, using e-mail, and using the website's communications services were reviewed. The second unit, "Didactic Design Concept" (2 weeks), showed the students how to characterize the unit's terms (didactic design, lessons and activities), establishing a basic language and conceptual framework upon which to develop the didactic designs. An individual evaluation called "Design forum" (I-1), referring to units 1 and 2, was included. The third unit, "Professional Knowledge" (2 weeks), introduced specific concepts on types of learning, curricular conceptions, and learning evaluation. The unit included an individual examination called "On-line test" (I-2), referring to course units 1, 2, and 3.

The fourth unit, "Creating a Didactic Design" (4 weeks), guided the student in creating a didactic design. In this unit, collaborative work in small groups was carried out, in which decisions were negotiated and roles were assigned to the participants with the aim of achieving common objectives. Three evaluations were done: an assignment of "individual contributions" (I-3), the group creation of a "preliminary didactic design" (I-4), and the creation of a "didactic design" (I-5). During this period, in the fifth week, a classroom session took place to get feedback from the collaborative work; material was shared and the work was checked ahead of time. The fifth unit, "Validation of the Designs" (2 weeks), provided the guidelines for testing the didactic designs with schoolchildren. In addition, it provided the guidelines for creating a final report that included a commented version of the didactic design subsequent to its application. This unit included three evaluations: the "final report" (I-6), an evaluation of the group's own design through the "Own Design Forum" (I-7), and a self-evaluation, the "personal evaluation guideline" (I-8). The course ended after 12 weeks with the third classroom session, in which the designs were shared.

B. Authentic Assessment System. The assessment system regarding the students' learning and group production included the eight examinations mentioned (I-1 to I-8), which were structured following the four criteria summarized by Herrington and Herrington (1998):

• The authentic assessment took place in a real context. It was connected with the students' daily professional performance (Meyer, 1992; Wiggins, 1993; Reeves & Okey, 1996).
• In the authentic assessment, the students assumed a leading role in collaboration with their peers (Linn et al., 1991; Kroll, Masinglia & Mau, 1992), displaying their performance and presenting their products (Wiggins, 1989, 1990, 1993).
• In the authentic assessment, the evaluation activities were authentic. In other words, they were inextricably integrated with course learning activities and the student's daily professional activity (Young, 1995; Reeves & Okey, 1996). The evaluation activities corresponded to unstructured, complex challenges which demand students' own opinions (Wiggins, 1993; Linn et al., 1991; Torrance, 1995).
• In the authentic assessment, multiple indicators (Lajoie, 1991; Linn et al., 1991) and several criteria were used to grade the variety of requested products (Wiggins, 1993; Lajoie, 1991; Resnick & Resnick, 1992).

Table 1. Structure and content of each instrument used in the AAS

Instrument: Procedure
I-1 Forum Design: Participation in the forum; dichotomous (does / does not).
I-2 On-Line Test: 30 graded items covering 6 content dimensions and 3 levels of complexity (resent options were admitted).
I-3 Individual Contributions: 2 records referring to e-mails sent; dichotomous (sent pertinent messages or not).
I-4 Preliminary Design: 5 graded records referring to pre-design attributes; rubric-based.
I-5 Didactic Design: 11 graded items referring to the design; rubric-based.
I-6 Final Report: 9 graded items referring to the design application; rubric-based.
I-7 Own Design Forum: 1 numeric record summarizing opinion of the didactic design's quality.
I-8 Personal Evaluation: 20 Likert-scale items on knowledge, abilities and feelings acquired.

Individuals were basically tested on the knowledge domain. The first part of the course provided instrumental knowledge about how to build a didactic design, how to work with a collaborative approach, and how to use e-mail and the web site. Individuals were required to answer various tests and were also asked to report their perception of learning during the process. The second part of the course focused on a productive approach: participants formed small groups to elaborate didactic designs. Accordingly, evaluation procedures were applied to partial products of the didactic design building.



Table 2. Assignment of evaluation responsibilities and the weight of each procedure in relation to the assessment system

Instrument: Responsible for assignment (Evaluated)
I-1: Automatic assignment (Individual)
I-2: Automatic assignment (Individual)
I-3: Automatic assignment; pertinence was decided by the tutor (Individual)
I-4: Tutor does assignments (Group)
I-5: Peers: two or more colleagues do assignments (Group)
I-6: Expert does assignments (Group)
I-7: Assignment by participants, self-evaluation (Group)
I-8: Self-evaluation of learning (Individual)

Research Design

This single case study focused on multiple sources, as Yin (1984), Stake (1995) and Weiss (1997) have recommended. Tests, archival records, and participants' perspectives were recorded. According to Yin (1994), this case study design was used to describe the AAS implemented in a specific web-based course, and to analyze several variables related to a small number of subjects.

The study considered individuals and small groups as units of analysis, according to an embedded case study design (Yin, 1994). Data were connected to indicators by means of correlations and triangulation analysis. Traditional alpha levels of 0.05 and 0.01 were used.

Both the evidence of validity, based on the criteria of judges, parallel instruments, and unobtrusive data, and the evidence of reliability, obtained from indices of association between equivalent items and reiterated measurements, rest on correlations. In effect, Spearman's rho coefficient for nonparametric data, Pearson's r for variables of normal distribution, and Cronbach's alpha for groups of variables all express a correlation or degree of association between the measured variables. These univariate measurements are used in exploratory studies because they are simple and they aid in elaborating integrated models. The 5% and 1% critical values considered in this study are those habitually used in education with quasi-experimental designs, unlike values such as 0.1%, or lower, used in experimental designs or in the area of medicine.
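As a concrete illustration of these three indices (using made-up scores, not the study's data), Spearman's rho and Pearson's r are available in `scipy`, and Cronbach's alpha follows directly from item variances:

```python
# Illustration with invented data: Spearman's rho, Pearson's r, and
# Cronbach's alpha, the three association indices used in this study.
import numpy as np
from scipy.stats import spearmanr, pearsonr

a = np.array([3, 5, 2, 4, 4, 1, 5, 3], dtype=float)
b = np.array([2, 5, 3, 4, 5, 1, 4, 3], dtype=float)

rho, p_rho = spearmanr(a, b)   # rank-based, for nonparametric data
r, p_r = pearsonr(a, b)        # for normally distributed variables

def cronbach_alpha(items: np.ndarray) -> float:
    """items: subjects x items matrix. alpha = k/(k-1) * (1 - sum(var_i)/var_total)."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

scores = np.column_stack([a, b, (a + b) / 2])  # three "items" per subject
print(rho, r, cronbach_alpha(scores))
```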

Instruments

This study used records of the eight AAS procedures, and three additional data sources to validate the AAS.

The eight AAS instruments

The AAS was made up of 8 procedures associated with their respective instruments. When using the Cronbach coefficient as an indicator of the AAS's reliability, we got a positive but not significant value (α(26) = .22, p > .05), which was expected because the AAS measurements did not refer to a one-dimensional variable.

Internal Instruments

Table 3. Correlation of each instrument with the rest of the AAS (one-tailed)

I-1 Forum Design contribution: rho(26) = 0.37
I-3 Individual Contributions mailed: rho(28) = 0.50
I-7 Own Design Forum opinion: rho(26) = 0.47
[remaining rows and p-values of Table 3 not recoverable from the source]


The eight AAS procedures measured different types of individual learning and group work. The eight correlations obtained between each instrument and the rest of the assessment system, as internal consistency indicators, were low, making it evident that the measurements were heterogeneous (see Table 3).

The online test (I-2) reached Cronbach α(26) = .88, which was used as a reliability measure. The Likert scale (I-8) reached Cronbach α(13) = .96.


E-3c. Likert scale on usability: The scale was implemented as an indicator of the effect of the web site's usability on the course results. An adaptation of the Sanchez (2000) usability test was carried out, focusing on measuring the usability of the website in relation to the course's authentic assessment system. The author's five proposed dimensions were used: learning, satisfaction, error, efficiency and memory factors. Twenty-three items were created and assigned a score on a Likert scale of 1 to 5 points, one being the lowest grade and five the highest. The test was subjected to the remote criticism of educational research specialists for the validation of its content. The instrument's internal consistency was measured with Cronbach's alpha, obtaining for each one of the factors, respectively, the coefficients 0.68, 0.77, 0.84, 0.54, 0.89, and for the complete instrument, α = 0.74 (n = 22).

Procedures

The first two procedures

These refer to global indicators of the AAS's validation: the relation with real-context consequences, and with individual learning.

Consistency between AAS results and product effectiveness in a real context. After the designs were elaborated, materials for schoolchildren were reproduced for use in one or more classes. Then the lessons were implemented and the schoolchildren were evaluated. The averages of these evaluations were compared with the students' AAS results by means of Spearman's rank correlation coefficient.

Measurement of the degree of association between AAS results and individual learning criteria. A post-test was considered as an indicator of the students' learning. The post-test (E-2) was applied at the end of the course without prior warning; the students had not prepared for the test, and the results did not have any effect on the class grades. For this concurrent validity analysis, the students' AAS average was correlated with the post-test average.

What evidence can be gathered about AAS validity in either individual or group phases?

Analysis of the validity of the AAS's Individual Component

The individual phase focused on instrumental learning. Three criteria were considered:

The student's capacity to recognize the components of a didactic design. This capacity was measured in one of the AAS's instruments as well as in the external post-test. The analysis compared the percentages of correct answers on the items referring to this capacity included on the post-test (E-2) as well as on the online test (I-2).

The student's capacity to surf the web site and send e-mails. This analysis of the students' capacity to navigate the web site and use e-mail was based on information gathered from the evaluations applied to the students and from unobtrusive information from the website. Specifically, two indicators of the students' capacity to surf the website and use e-mail were created. The first indicator was the sum of four AAS data, namely: a publication in "forum 1" (I-1), the answer to an item in the "on-line test" (I-2), and the answers to two items in the "self-evaluation" (I-8). The second indicator was the sum of the data obtained from the instruments that were external to the AAS, namely: the number of e-mails sent, with a maximum of three (E-1b), an item from the "external questionnaire" (E-3a), three items referring to website usability (E-3c), and three items from the post-test (E-2). To create these indicators, each variable was adjusted to a scale of 0 to 1, without considering the values assigned to the AAS instruments. The AAS indicator was correlated with the indicator associated with external data.
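A minimal sketch of this indicator construction follows (all values are invented): each component variable is min-max rescaled to 0-1, the rescaled columns are summed per student, and the two resulting indicators are correlated.

```python
# Sketch of a composite indicator: rescale each variable to [0, 1]
# (min-max), sum per student, then correlate the two indicators.
# All values are invented for illustration.
import numpy as np
from scipy.stats import spearmanr

def rescale(x: np.ndarray) -> np.ndarray:
    return (x - x.min()) / (x.max() - x.min())

# Rows = students; columns = the component variables of one indicator.
aas_vars = np.array([[1, 25, 4, 5],
                     [0, 18, 3, 4],
                     [1, 30, 5, 5],
                     [1, 22, 2, 3]], dtype=float)
external_vars = np.array([[3, 4, 12, 2],
                          [1, 3,  9, 1],
                          [3, 5, 14, 3],
                          [2, 3, 10, 2]], dtype=float)

aas_indicator = np.apply_along_axis(rescale, 0, aas_vars).sum(axis=1)
external_indicator = np.apply_along_axis(rescale, 0, external_vars).sum(axis=1)
print(spearmanr(aas_indicator, external_indicator))
```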

Students' opinion on learning about didactic designs and IT. During the course, there were two opinion instruments that referred to the level of learning attained during the course: one was part of the AAS (I-8) and the other was not (E-3a). Although the instruments were administered at different times, one in the middle and the other at the end of the course, the degree of association between the answers to groups of equivalent items was used as an indicator of criterion validity. One indicator compared "the opinions on learning with didactic designs" given on the external test as well as on the internal tests which were part of the AAS. Another indicator compared "the opinions given on both questionnaires regarding the acquisition of computer skills."

Analysis of the validity of the AAS's Group Component

The collaborative phase focused on group production. In this section, the validation procedures relating the AAS to collective information are described.

Consistency among peer opinions on didactic designs. Once the groups created their didactic designs, the designs were submitted to two or three students who were not part of the group. Each one of these students independently evaluated his or her peers' work according to guidelines based on rubrics. The correlations between the evaluations of different peers on the same design were used as a peer evaluation consistency index (I-5). The average of the indexes associated with the evaluations of each of the 8 designs created by the students was considered as the internal peer evaluation consistency index.
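Under one reading of that procedure, the index can be sketched as the mean pairwise correlation between raters of the same design, averaged over designs; the scores below are hypothetical:

```python
# Sketch: peer-evaluation consistency as the mean pairwise correlation
# between raters of one design, averaged over designs. Scores invented.
from itertools import combinations
import numpy as np
from scipy.stats import spearmanr

# design -> {rater: rubric scores over the items of I-5}
peer_scores = {
    "Gr1": {"p1": [3, 4, 2, 4, 3], "p2": [3, 3, 2, 4, 4]},
    "Gr2": {"p1": [4, 4, 4, 3, 4], "p2": [4, 3, 4, 3, 4], "p3": [3, 4, 4, 2, 4]},
}

def design_consistency(raters: dict[str, list[int]]) -> float:
    pairs = combinations(raters.values(), 2)
    return float(np.mean([spearmanr(a, b).correlation for a, b in pairs]))

overall = np.mean([design_consistency(r) for r in peer_scores.values()])
print(overall)
```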

Consistency between the AAS evaluations on the creation and validation of the didactic designs. The analysis, in this case, refers to the four evaluations regarding the creation and validation of the didactic designs (I-4 to I-7).

- Consistency between the tutor, peer, expert, and student evaluations of the didactic designs. The first three evaluations (I-4, I-5 and I-6) were structured as a series of criteria defined by rubrics. Only the final evaluation, the self-evaluation (I-7), consisted of a grade, on a scale of 1 to 7, which corresponded to the student's overall appreciation of the design created by his or her work group.

The four evaluations, although they refer to the same object, i.e. the didactic design, measured its diverse aspects at different times. Such measurements were made with different emphases, which were indicated by assigning different values to the items. As a criterion of consistency, the four evaluations were correlated in pairs.

- Consistency between the peer and the expert evaluations regarding the suitability of the activities for the learning expected to be achieved by means of the designs. The didactic designs were assessed by the peers (I-5) before they were applied in the classroom. Then, after they were applied, they were assessed by an academic didactics expert (I-6). The peers as well as the expert gave their opinions regarding the "suitability of the activities proposed in the designs to achieve the purpose of the designs themselves". The opinions refer both to the learning activities and to the evaluation activities. The opinions were given within the context of a scale associated with rubrics, and they formed part of the AAS's instruments; in other words, they formed part of the guidelines applied by the peers (I-5) and the guidelines applied by the academic didactics expert (I-6).

Web usability and participants' attitude as possible invalidation factors

A significant positive correlation between AAS results and factors such as web usability and participant attitude was understood as an invalidation factor of the assessment system.

Participants' attitude. The Attitude towards Mathematics test (Aiken, 1996) was adapted to measure student attitude toward the course's AAS. Its 24 items were organized on a Likert scale. The test reliability was Cronbach α=0.62.

Usability. The Sanchez (2000) usability test was adapted to measure the usability of the AAS component of the web site. Its 21 items, referring to five factors called learning, satisfaction, error, efficiency, and memory, were organized on a Likert scale. The test reliability was Cronbach α=0.74.
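Cronbach's α for such Likert instruments is computed from the item-response matrix with the standard formula α = k/(k-1) · (1 - Σ s²_item / s²_total). Below is a minimal sketch of that computation; the response matrix is invented for illustration, since the actual item data are not reported in the paper.

```python
import numpy as np

def cronbach_alpha(responses):
    """responses: 2-D array, rows = respondents, columns = items.
    Implements alpha = k/(k-1) * (1 - sum(item variances) / total variance)."""
    responses = np.asarray(responses, dtype=float)
    k = responses.shape[1]
    item_vars = responses.var(axis=0, ddof=1)       # one variance per item
    total_var = responses.sum(axis=1).var(ddof=1)   # variance of total scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Invented 5-point Likert responses: 6 students x 4 items.
data = [[4, 5, 4, 4],
        [3, 3, 4, 3],
        [5, 5, 5, 4],
        [2, 3, 2, 3],
        [4, 4, 5, 4],
        [3, 2, 3, 3]]
print(round(cronbach_alpha(data), 2))
```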

Results

This section provides the test results used to characterize the AAS's validity and reliability. These tests mainly consisted of correlations as indicators of concurrent validity and of coefficients of internal consistency. They are organized in four groups: the tests that compared the AAS results with the students' learning and with the group productions in a real context; the isolated tests as indicators of validity of the AAS's individual component; the tests related to the validity of the AAS's group component; and, finally, the tests related to two possible invalidation factors.

AAS's relation with both real-context products and individual learning

Consistency between AAS results and product effectiveness in real context. The averages of the children's results were compared with the teachers' AAS results. Two of the teachers' reports about design application did not provide enough information to be considered in the correlation analysis. As shown in Table 5, some designs were implemented in more than one class. The Spearman correlation was positive, rho(6)=.30, p=.2, but not significant.

Table 5. Results of implemented designs

Groups:                  Gr1   Gr2   Gr3   Gr4   Gr5   Gr6     Gr7   Gr8
Number of classes:       3     2     2     1     4     m.v.*   3     m.v.*
Schoolchildren results:  66%   80%   80%   98%   93%   m.v.*   74%   m.v.*
AAS average results:     70    74    78    77    92    53      79    80
* missing value

Measurement of the degree of association between AAS results and individual learning criteria. The Spearman correlation, used as a concurrent validity test, gave a value of rho=.24, p>.05, which was not significant.
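Both correlations above can be reproduced in outline with scipy. The sketch below uses the six complete columns of Table 5 (Gr6 and Gr8 are excluded because of missing values); small discrepancies from the reported rho(6)=.30 may arise from rounding or from how ties are handled.

```python
from scipy.stats import spearmanr

# The six groups from Table 5 with complete data (Gr1-Gr5 and Gr7).
schoolchildren = [66, 80, 80, 98, 93, 74]   # schoolchildren results (%)
aas_average    = [70, 74, 78, 77, 92, 79]   # AAS average results

rho, p = spearmanr(schoolchildren, aas_average)
print(f"rho = {rho:.2f}, p = {p:.2f}")  # positive but not significant
```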

Validity of AAS's individual component

The individual component was instrumental instruction, meaning that, in this part of the course, students were given conceptual tools for elaborating didactic designs in order to progress to a collaborative approach during the next part of the course.

The students' ability to recognize the components of a didactic design. The item "What are the main phases of a didactic design?", included on the pre-test as well as on the online test (I-2) and on the post-test (E-3), showed an unexpected variation in the number of correct answers across the applications: Mean_pretest = 5/20 (25%), Mean_online_test = 19/25 (76%), and Mean_posttest = 12/21 (52%).

The online test and post-test results were expected to be consistent, particularly for those items that were the same. Nevertheless, the correlations were not significant; they were low and even negative. The correlation was rho(19)=-.23, p>.05 (two-tailed).

The students' capacity to explore the website and send e-mails. The estimated level of association between external information and that gathered through the AAS, according to a Spearman correlation, was rho(23)=0.44, p<.05.


Validity of AAS's productive evaluation component

The group component focused on a productive approach, in which students were asked to build and apply a design.

Peer evaluation (I-5). The peer evaluations of the same design were largely coincident. A high average internal consistency across the 8 groups was attained: α(6)=0.96, p<.05.

The correlations were not high enough to be significant. They were rho(5)=0.11, p>.05 and rho(6)=0.46, p>.05, respectively. As one can observe, both the expert and the peers assigned high scores in their evaluations. The slight differences between them led to null correlations.

Web usability and participants' attitude as possible factors of invalidation

Attitude of participants. Students' attitudes toward the AAS aspects of the web site were positive, and the association of this variable with AAS results was null, rho(20)=0.09.

Web usability. Students considered the usability of the AAS aspects of the web site to be adequate. The correlation between students' evaluation of the usability of these aspects and their results in the AAS was positive, but not significant; the Spearman correlation was rho(15)=0.27. Thus there was no significant relation from which to affirm that the variables were directly related.

Discussion

This section summarizes the study results, analyzes the technical problems of the AAS, and offers suggestions for elaborating an Authentic Assessment System for a web-based course.

Study results: Validity of AAS

This study considered three perspectives to analyze the Authentic Assessment System's validity. The first perspective, using concurrent validation techniques, analyzed the AAS's validity as a whole, as a one-dimensional variable. The second perspective, also using correlation techniques, analyzed local aspects of the AAS, focusing on performance in real contexts and on unobtrusive data. The third perspective was a "reciprocal" modality; instead of validating the AAS according to its degree of association with other criteria, the objective was to demonstrate a null association between the AAS and possible invalidation factors of the system.

One-dimensional perspective. From this view, the AAS results were considered as whole values, which were correlated with two external variables. One was the degree of effectiveness of the didactic designs elaborated by the students, and the other was the students' results on a posttest. The two correlation indexes were positive, but neither was significant, thereby confirming the difficulty of validating authentic assessment procedures, as several researchers have warned (Linn et al., 1991; Aiken, 1996; Herrington & Herrington, 1998; Schurr, 1999). The question remains whether other indexes indicate some degree of validity of the AAS used in the course.

Disaggregated perspective. Considering that the AAS was multidimensional and that diverse factors affected the results, this second approach analyzed the AAS's group component separately from the individual component. These analyses were limited to selected data, some of it gathered unobtrusively. Of the five indicators used to analyze the individual component, four correlated significantly: the ability to search the web, the ability to use e-mail, the opinion about ICT learning, and the ability to build didactic designs. On the other hand, only three indicators used to analyze the AAS's group component were favorable, which can be partially explained by the small number of student groups; in fact, the third indicator, despite reaching a correlation of rho=0.46, was not significant. This second analysis perspective was fruitful, offering some insight into the characteristics of an authentic assessment system that evidences validity.

Reciprocal perspective. The third perspective of analysis of the AAS's validity confirmed that we cannot attribute the students' success to either of the two invalidation factors considered, the attitude towards the evaluation system and its usability. Under this perspective, the AAS offered additional partial evidence of validity.

The results of this exploratory correlational case study thus indicated some evidence of validity, but also several technical problems with the authentic assessment system. We may conclude that the authentic assessment system implemented with two components, one individual and the other in groups, revealed both problems and insights.


The following sections offer interpretations based on in-depth analysis of the available data. These interpretations give rise to suggestions for elaborating authentic assessment systems and for formulating hypotheses in subsequent studies.

Reliability and validity of each AAS component

AAS's individual component. It should be mentioned that the correlation between the AAS's results and posttest achievement was not significant. Although the AAS's results were good, with 89% effectiveness, they cannot be understood as learning in the manner in which the posttest measured it, even though the post-test versus pre-test differences were significant according to the t-test (p<.05).


The data illustrate some reliability and validity problems that are typical of authentic assessment. Because the effectiveness of the products created through the course referred to variables measured in context, it was difficult to obtain a good correlation with either the process or the results of the course. Similarly, since authentic assessment is related to complex learning in multiple dimensions, it was also difficult to obtain a good correlation with a single multiple-choice test.

We should mention that the AAS instruments were technically designed to measure knowledge, abilities, competencies, and attitudes in accordance with the main principles of authentic assessment. In addition, the AAS evolved within a highly effective course. So, in spite of the evidence of a lack of reliability, this assessment system shows evidence of cognitive complexity, significance, and efficiency; that is, of authentic validity.

Norm-based and criteria-based evaluation. According to an "edumetric" assessment model, almost all people are expected to learn; according to the psychometric measurement model, a normal distribution is expected to describe individual variability in learning. In this second view, learning differences between higher and lower groups are expected to be stable and significant, so reliability is expected as a necessary condition of validity. According to the former view, significant differences are not expected between learners, only slight differences arising from uncontrolled variables. Therefore, the stability of differences in results should not be considered the central point of the validation of an effective course.

Straetmans et al. (2003) assert that the principles of classical test theory may be applied to qualitative assessment, and that measurements of quality control such as validity and reliability should be applied to forms of competency assessment in a similar way as they are used for other tests. We should keep in mind that psychometrics refers to maximizing differences between individuals, whereas edumetrics refers to measuring within-individual growth, without reference to other individuals. Frederiksen & Collins (1989) propose specific criteria, other than validity, for newer forms of assessment.

Valid measures with low reliability. Several correlations used in this study refer to the consistency of association between variables, that is, to reliability. Stability, consistency, and concurrent validity were not attained in the data. How much stability should be expected as part of the validation process? Can valid measures with low reliability be obtained, as is often the case in authentic assessment systems, and was this the case in this study? We understand that both reliability and validity are expected in an assessment process. But, following the traditional path, validity is sacrificed in order to obtain reliability, whereas according to an authentic assessment approach, soundness of measurement is privileged over stability of results.

Four major problems that affect AAS validation

Online tests, rubric-based measurement and self-evaluation techniques, student dropout, and restrictions in the timing of tasks, all of which are common in distance education, presented some technical problems. In effect, an online test that allows answers to be retried is questionable; the self-evaluations should be adjusted to more specific referents; the use of rubrics looks promising but needs to be improved; and the contribution of the various participants to the evaluation process, which proved effective and widely accepted, still needs to be adjusted.

Validity of the online test with multiple attempts. The opportunity granted to the students to answer the online test over several attempts did not favor the AAS's reliability. The online test was part of the authentic assessment system's individual phase. In order to complete the test, the computer system allowed the students to make several attempts before recording their final answers. Even though this favored AAS performance, the online test was not confirmed as a contributing factor to the improvement of post-test scores. Students seem to have only partially retained the knowledge that was required of them on the posttest, which they took two months after taking the online test.

The opportunity provided by the online test proved attractive to the students. Once the test was completed, the computer system automatically sent them the test results, with percentages for each group of questions. Since these percentages referred to groups of items and not to each individual item, and since the computer system assigned partial credit to partially correct answers, it was not easy for the students to know which items they had gotten wrong. They needed to analyze each item of a group in order to improve their test scores.


This opportunity may have had a positive short-term effect without requiring long-term retention of the material. The knowledge tested on the online test was part of the theoretical knowledge that the students would use when creating their didactic designs. Therefore, we expected this theoretical knowledge to be meaningful to them and thus to remain in their long-term memory until they took the posttest. Apparently, the thought processes engaged during the online test were not sufficient to consolidate such knowledge.
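The paper does not specify the scoring algorithm; purely as an illustration of the feedback behaviour described above, the sketch below aggregates per-item partial-credit scores (each between 0 and 1) into one percentage per group of questions, which is all the student saw. The group names and scores are hypothetical.

```python
def group_feedback(item_scores):
    """item_scores: dict mapping a question-group name to a list of
    per-item partial-credit scores in [0, 1]. Returns one percentage
    per group, hiding which individual items were wrong."""
    return {group: round(100 * sum(scores) / len(scores))
            for group, scores in item_scores.items()}

# Hypothetical attempt: from "75%" the student cannot tell whether
# one item was fully wrong or two items earned partial credit.
attempt = {
    "phases of a didactic design": [1.0, 0.5, 1.0, 0.5],
    "evaluation concepts":         [1.0, 1.0, 0.0, 1.0],
}
print(group_feedback(attempt))
# {'phases of a didactic design': 75, 'evaluation concepts': 75}
```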

Validity of AAS's group component. This component included tutor, peer, expert, and student evaluations. Only partial evidence was obtained of the consistency between the tutor, peer, expert, and student evaluations regarding the creation and validation of the didactic designs. The peer and expert evaluations agreed that the didactic designs were good; however, they did not coincide closely enough when establishing a hierarchy with regard to that quality. Although the evaluations carried out by the different participants were positive and the didactic designs in question were effective, the evaluations were not consistent enough among themselves. Why was there not enough consistency between the peer evaluations and the expert evaluation in the cases where they assessed the same aspect? The dissonance could have its origins in deficiencies of the assessment instruments and in an insufficiency of the research design, which did not allow for the fact that the students' behavior varied over time.

The one group component evaluation that was not carried out on the basis of rubrics was the students' self-evaluation, which consisted of grading the didactic design on which they had worked. Apparently, it is necessary to check the effect of subjectivity on the self-evaluations. We suggest this because the students evaluated their designs quite positively, assigning themselves, on average, a score of 97%; this differed from the expert's evaluations of the reports on the didactic designs, which averaged only 75%. Objectivity might be increased if the students also assigned themselves a grade on the basis of rubrics rather than on holistic perception.

Dropouts. Another element to consider is the number of groups about which data were gathered. The low number of only eight work groups constituted a limitation on the analysis of the AAS's group component; statistical significance is difficult to obtain with a limited number of cases. In addition to following up on the evaluation procedures and the methodological design adopted to examine AAS validity, an analysis of the students' behavior should be conducted. Not all of the students finished the course as expected. Only 52% (13 of 25) of the students submitted their self-evaluations, despite the fact that not doing so had a detrimental effect on their grade. Only five of the eight groups submitted a completed final report. In spite of these deficiencies, the students were able to pass the course with a good grade. The students would have liked to finish the course by submitting all of their work; however, the due dates expired and, faced with high demands at their jobs, they did not meet the final requirement.

Mortality, referring to subjects who were not included in the analysis because they did not provide certain responses, had an important effect on the study. In fact, if the omitted answers had been considered incorrect and their corresponding instruments had received a score of zero, not only would we have obtained positive correlations, but the majority of them would also have been significant. This would have corroborated the validity of the AAS.

Time factor limits production. During the creation of the didactic designs, the evaluators indicated the weak aspects of each design, which gave the students the opportunity to correct their designs before the next evaluator's revision. Within a highly limited time frame, this left margins for some groups of students to improve their weakest aspects, so the designs tended to improve. However, the groups' work could not always be completed within the established period of time. This was noticeable at the end of the course, when the expert conducted the final evaluation of the didactic designs subsequent to their application in the classroom. The moment arrived when the students had to deliver their reports in whatever state they were in, and in some cases this implied an interruption of their production.

Final suggestions to improve evaluation systems in web-based courses

We expect to improve the assessment instruments and the assessment system design of a web-based course by means of successive approximation processes: multidimensional authentic assessment demands integrated techniques, refined instruments, and an extensive sample in order to achieve acceptable reliability.

Refining and integrating evaluation instruments. Using integrated evaluation procedures that pervade the course's main concept seems appropriate. Considering that an authentic assessment system is multidimensional by nature, some specific core objectives should be identified and the number of variables to be assessed reduced. The evaluation process would thus remain permanently focused on those objectives, with the aim of monitoring progress.

Bearing in mind that the results of the self-evaluations and of the evaluations through rubrics were very high, the question arises whether those results stemmed from low requirement levels. In order to supply a satisfactory answer, standardized evaluation instruments would be vital, although these unfortunately stray from the authentic assessment method.

Self-regulation techniques such as self- and co-evaluation proved to be attractive, because they fostered course effectiveness. However, they need to be refined in order to attain better reliability indices and evidence of validity. Refining self-regulation techniques seems to be a high priority, with experts establishing more rigorous criteria for the quality of the products to be created by students, thereby safeguarding the AAS's ecological validity. In a course for teachers, the classroom effectiveness of the didactic designs should be supported by testing the schoolchildren with whom the designs are applied, with validity recognized by the peers as well as by the expert. The AAS should guarantee that classroom achievements match the anticipated learning according to the official attainment targets.

Even though the pertinence and effectiveness of the didactic designs are essential requirements for favoring the validity of the assessment system, they are difficult to meet.

In this study, the validation of the AAS's group component provided evidence that the procedures and instruments employed require greater integration, as the literature recommends (Chong, 1998; Roschelle, 1992; Lave, 1991).

Improving rubrics. Rubrics need to be further developed; their use requires a great deal of specificity in their operation. It may be appropriate for evaluators to limit themselves to areas of specific expertise. For example, the same evaluator could judge the same aspect across several didactic designs instead of judging all of the dimensions of a single design.
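One way to operationalize this suggestion is to treat the rubric itself as data, so that an evaluator can be bound to a single criterion across all designs. The sketch below is only illustrative: the criterion names and the four-level scale are hypothetical, not taken from the course's actual rubrics.

```python
# A rubric as data: each criterion has a described four-level scale.
RUBRIC = {
    "suitability of learning activities":   ["poor", "fair", "good", "very good"],
    "suitability of evaluation activities": ["poor", "fair", "good", "very good"],
    "clarity of objectives":                ["poor", "fair", "good", "very good"],
}

def score_criterion(evaluator, criterion, levels):
    """One evaluator judges one criterion across several designs
    (rather than all criteria of a single design).
    `levels` maps design name -> chosen level index (0-3)."""
    scale = RUBRIC[criterion]
    return {design: {"evaluator": evaluator,
                     "criterion": criterion,
                     "level": scale[index]}
            for design, index in levels.items()}

print(score_criterion("expert A", "clarity of objectives",
                      {"Gr1 design": 2, "Gr2 design": 3, "Gr3 design": 2}))
```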

Improving the authentic assessment design. When the assessment design includes both individual and group components, a differentiated evaluation could be instituted within each group on the basis of individual roles. The students could assume differentiated tasks that are assessed individually even though the final product is a shared responsibility. For example, a student can act as graphic designer and learn more than his or her peers about the use of the related software. For each subject, individual growth can be verified according to previous experience, expectations, and the variety of interactions generated within the group. In this manner, we assume a priori that the types of learning are different; at the same time, however, they support each other and enrich the productivity of the collaborative work.

Even though this approach would be far from the traditional approach of measuring all students with the same yardstick, it would be consistent with the principles of authentic assessment and would favor the AAS's ecological validity.

A research design that takes the recommended variations into account should be assimilated into a time series analysis. Data should be gathered gradually throughout the process so that progress may be observed and consistency between measurements appreciated. Within that context, information should be gathered about those aspects of the students and their work in which change is expected, which at the same time provides feedback to the individual learning and group work processes.

Qualitative data support

Qualitative data were gathered during the AAS application process. The content of students' e-mails, an open-ended questionnaire, two focus group discussions, and notes from participant observation strengthened our understanding of the AAS's limitations. A qualitative analysis is not developed in these pages for reasons of length; however, some commentaries based on it appear in the discussion section, specifically regarding multiple attempts on the online test and the reasons why some students did not meet the final course requirements. Overall, the qualitative data enrich our understanding and improve the AAS information available to subsequent researchers for new studies.


Summary

This section summarizes the study's results and discusses the weaknesses in the validity and reliability of the AAS, which emerge from the contextual and multidimensional characteristics of evaluation in a non-experimental design. It then discusses the importance of achieving a balance between the reliability and validity criteria in an Authentic Assessment System.

The weaknesses of the AAS were associated with the online test modality, with the use of rubrics and of self-evaluation and co-evaluation techniques, with the dropout rate, and with the time restrictions of an evaluation in context. Reflection on these weaknesses gave rise to suggestions for improving assessment systems in web-based courses.

Finally, suggestions were given for improving the validity and reliability of the AAS in a gradual process, using a time series design and maintaining the measurements in real contexts. Improvement focused on the evaluation instruments and on the system's design. The suggestions regarding the instruments were: to use integrated instruments, refined rubrics, and rigorous quality criteria. The suggestions for improving the design of the authentic assessment were: to reduce the number of specific objectives; to harmonize the individual component with the group component, where applicable; to include self-regulation procedures; and to apply differentiated evaluation in accordance with the roles in the collaborative work and with the potentialities of the peer evaluators and/or experts, in order to favor a context of effectiveness and ecological validation.

References

Aiken, L. R. (1996). Tests Psicológicos y Evaluación [Psychological Tests and Assessment], México: Prentice Hall.

Bandura, A., & Walters, R. H. (1963). Social Learning and Personality Development, New York: Holt, Rinehart and Winston.

Benzie, D. (1999). Formative evaluation: Can models help us to shape innovative programmes? Education and Information Technologies, 4 (3), 251-262.

Chang, C.-C. (2002). Assessing and Analyzing the Effects of WBLP on Learning Processes and Achievements: Using the Electronic Portfolio for Authentic Assessment on University Students' Learning. Paper presented at the EdMedia 2002 Conference, June 24-29, 2002, Denver, CO, USA.

Chong, S. M. (1998). Models of Asynchronous Computer Conferencing for Collaborative Learning in Large College Classes. In C. J. Bonk & K. S. King (Eds.), Electronic collaborators: learner-centered technologies for literacy, apprenticeship, and discourse, Mahwah, NJ: Lawrence Erlbaum Associates, 157-182.

Clarke, M., Butler, C., & Schmidt-Hansen, P. (2004). Quality assurance for distance learning: A case study at Brunel University. British Journal of Educational Technology, 35 (1), 5-11.

Collis, B., De Boer, W., & Slotman, K. (2001). Feedback for web-based assignments. Journal of Computer Assisted Learning, 17 (3), 306-313.

Condemarin, M., & Medina, A. (2000). Evaluación Auténtica de los Aprendizajes: Un medio para mejorar las competencias en lenguaje y comunicación [Authentic assessment of learning: A means for improving language and communication competencies], Santiago de Chile: Editorial Andrés Bello.

Downs, E., Carlson, R. D., Repman, J., & Clark, K. (1999). Web-Based Instruction: Focus on Learning. Georgia Southern University. Paper presented at the SITE Conference, February 28-March 4, 1999, San Antonio, TX, USA.

Duart, J., & Sangrà, A. (2000). La formación en web: del mito al análisis de la realidad [Web-based training: From myth to the analysis of reality], retrieved October 15, 2007, from http://cvc.cervantes.es/obref/formacion_virtual/campus_virtual/sangra2.htm.

Fenwick, T., & Parsons, J. (1998). Starting with our stories: Towards more authentic assessment in adult education. Adult Learning, 26 (4), 25-30.

Frederiksen, J. R., & Collins, A. (1989). A systems approach to educational testing. Educational Researcher, 18, 27-32.

Gatlin, L., & Jacob, S. (2002). Standards-Based Digital Portfolios: A Component of Authentic Assessment for Preservice Teachers. Action in Teacher Education, 23 (4), 28-34.

Gipps, C. (1994). Beyond Testing: Towards a Theory of Educational Assessment, London: The Falmer Press.

Herman, J., Aschbacher, P., & Winters, L. (1997). A practical guide to alternative assessment, CA: CRESST.

Herrington, J., & Herrington, A. (1998). How university students respond to a model of authentic assessment. Higher Education Research and Development, 17 (3), 305-322.

Kroll, D., Masingila, O., & Mau, S. (1992). Grading Cooperative Problem Solving. The Mathematics Teacher, 85 (8), 617-626.

Lajoie, S. (1991). A framework for authentic assessment in mathematics. NCRMSE Research Review, 1 (1), 6-12.

Lave, J. (1991). Situating Learning in Communities of Practice. In L. B. Resnick, J. M. Levine & S. D. Teasley (Eds.), Perspectives on socially shared cognition, Washington, DC: American Psychological Association, 63-82.

Linn, R. L., Baker, E., & Dunbar, S. (1991). Complex, performance-based assessment: Expectations and validation criteria. Educational Researcher, 20 (8), 15-21.

Meyer, C. A. (1992). What's the difference between authentic and performance assessment? Educational Leadership, 49 (8), 39-40.

National Center for Fair and Open Testing (1992). What Is Authentic Assessment? Cambridge, MA: NCFOT.

Newmann, F., & Wehlage, G. (1993). Five Standards of Authentic Instruction. Educational Leadership, 50 (7), 8-12.

Olson, T., & Wisher, R. (2002). The Effectiveness of Web-Based Instruction: An Initial Inquiry. International Review of Research in Open and Distance Learning, 3 (2), retrieved October 15, 2007, from http://www.irrodl.org/index.php/irrodl/article/view/103/182.

Orde, B. (2001). Online course development: summative reflections. International Journal of Instructional Media, 28 (4), 397-403.

Peterson, M. (2000). Electronic Delivery of Career Development University Courses. In J. W. Bloom & G. R. Walz (Eds.), Cybercounseling and Cyberlearning: Strategies and Resources for the Millennium, Alexandria, VA: American Counseling Association, 143-159.

Phipps, R., & Merisotis, J. (1999). What's the difference? A review of contemporary research on the effectiveness of distance learning in higher education, Washington, DC: The Institute for Higher Education Policy.

Reeves, T. C., & Okey, J. (1996). Alternative assessment for constructivist learning environments. In B. Wilson (Ed.), Constructivist learning environments, Englewood Cliffs, NJ: Educational Technology, 191-202.

Resnick, L. B., & Resnick, D. P. (1992). Assessing the thinking curriculum: New tools for educational reform. In B. R. Gilford & M. C. O'Connor (Eds.), Changing assessment: Alternative views of aptitude, achievement and instruction, Boston: Kluwer, 37-75.

Roschelle, J. (1992). What Should Collaborative Technology Be? A Perspective from Dewey and Situated Learning. ACM SIGCUE Outlook, 21 (3), 39-42.

Ryan, W. J. (2001). Comparison of Student Performance and Attitude in a Lecture Class to Student Performance and Attitude in a Telecourse and a Web-Based Class, Ph.D. Dissertation No. ED467394, Nova Southeastern University.

Salomon, G. (1992). What Does the Design of Effective CSCL Require and How Do We Study Its Effects? ACM SIGCUE Outlook, 21 (3), 62-68.

Scardamalia, M., & Bereiter, C. (1996). Computer Support for Knowledge-Building Communities. In T. Koschmann (Ed.), CSCL, theory and practice of an emerging paradigm, Mahwah, NJ: Lawrence Erlbaum Associates, 249-268.

Schurr, S. (1999). Authentic Assessment From A to Z, USA: National Middle School Association.

Stake, R. (1995). The art of case research, Thousand Oaks, CA: Sage.

Straetmans, G., Sluijsmans, D., Bolhuis, B., & Van Merrienboer, J. (2003). Integratie van instructie en assessment in competentiegericht onderwijs [Integration of instruction and assessment in competency-based education]. Tijdschrift voor Hoger Onderwijs, 21, 171-198.

Torrance, H. (1995). Evaluating authentic assessment: Problems and possibilities in new approaches to assessment, Buckingham: Open University Press.

Tucker, S. (2000). Assessing the Effectiveness of Distance Education versus Traditional On-Campus Education. Paper presented at the Annual Meeting of the AERA, April 24-27, 2000, New Orleans, LA, USA.

Vygotsky, L. S. (1985). Thought and Language, Cambridge, MA: MIT Press.

Weiss, C. (1997). Investigación Evaluativa. Métodos para determinar la eficiencia de los programas de acción [Evaluative research: Methods for determining the efficiency of action programs] (4th Ed.), Mexico: Editorial Trillas.

Weller, M. (2002). Assessment Issues on a Web-based Course. Assessment & Evaluation in Higher Education, 27 (2), 109-116.

White, S. (1999). The effectiveness of web-based instruction: A case study. Paper presented at the Annual Meeting of the Central States Communication Association, April, St. Louis, MO, USA.

Wiggins, G. (1989). A True Test: Toward More Authentic and Equitable Assessment. Phi Delta Kappan, 70 (9), 703-713.

Wiggins, G. (1990). The case for authentic assessment. Practical Assessment, Research & Evaluation, 2 (2), retrieved October 15, 2007, from http://pareonline.net/getvn.asp?v=2&n=2.

Wiggins, G. (1993). Assessment: Authenticity, context, and validity. Phi Delta Kappan, 75 (3), 200-208.

Wiggins, G. (1998). Educative Assessment. Designing Assessments to Inform and Improve Student Performance, San Francisco: Jossey-Bass.

Wilkerson, J., & Elkins, S. (2000). CAD/CAM at a Distance: Assessing the Effectiveness of Web-Based Instruction to Meet Workforce Development Needs. Paper presented at the Annual Forum of the Association for Institutional Research, May 21-24, 2000, Cincinnati, OH, USA.

Woolf, B. P., & Regian, J. W. (2000). Knowledge-based training systems and the engineering of instruction. In S. Tobias & J. Fletcher (Eds.), Training and retraining: A handbook for business, industry, government, and the military, New York: Macmillan Reference, 339-356.

Yin, R. (1984). Case study research: Design and methods (1st Ed.), Beverly Hills, CA: Sage.

Yin, R. (1994). Case study research: Design and methods (2nd Ed.), Beverly Hills, CA: Sage.

Young, M. F. (1995). Assessment of situated learning using computer environments. Journal of Science Education and Technology, 4 (19), 89-96.


Bottino, R. M., & Robotti, E. (2007). Transforming classroom teaching & learning through technology: Analysis of a case study. Educational Technology & Society, 10 (4), 174-186.

Transforming classroom teaching & learning through technology: Analysis of a case study

Rosa Maria Bottino and Elisabetta Robotti
Consiglio Nazionale delle Ricerche, Istituto Tecnologie Didattiche, Genova, Italy // Tel. +39 01 06 47 56 76 // Fax +39 01 06 47 53 00 // bottino@itd.cnr.it // robotti@itd.cnr.it

ABSTRACT

The paper discusses the results of a research project based on the field testing of a course aimed at developing arithmetic problem solving skills in primary school pupils. The course was designed to incorporate e-learning techniques, including the use of the ARI@ITALES authoring tools. These tools allowed the integration in the course of constructivist activities based on interaction with a set of different microworlds. The aims of the project were twofold: to analyse how the adopted approach and tools could help the teacher design and manage classroom activities integrating technology; and to evaluate the effectiveness of the ARI@ITALES tools in supporting pupils' acquisition of mathematical skills.

Keywords

E-learning, Authoring software, Primary education, Numeracy, Problem solving

Introduction

Despite the positive results obtained in a number of experimental settings and the large investments made by many governments in equipping schools with hardware and software, the integration of computer technologies in the classroom still remains a limited phenomenon at primary school level (see, for example, Venezky & Davis, 2002; Sutherland, 2004). This is true also for disciplines like mathematics, which from the beginning has been one of the school subjects attracting considerable educational research attention concerning the development and use of ICT tools (Artigue, 2000; Lagrange, Artigue, Laborde & Trouche, 2001). One of the main reasons for this low impact is that technology has often been introduced as an addition to an existing, unchanged classroom setting (De Corte, 1996). Even though it now appears necessary to adopt a more integrated vision in which ICT is considered in conjunction with educational strategies, contents and activities (Bottino, 2004), this change of perspective is unlikely to be straightforward and means taking different perspectives into account. The perspective considered in this paper is that of teachers and of the difficulties they encounter in integrating ICT tools into their classroom practice. In order to bring meaningful innovation to teaching and learning processes, changes are required in the content, organization and management of classroom activity; changes that cannot be accomplished effectively by a teacher operating alone.

In this paper the use of an e-learning approach, supported by specifically designed authoring tools, is examined with the objective of understanding how effectively it can support the teacher in the development of classroom activities integrating technology. In particular, the paper analyses the design, implementation and field testing of an online course aimed at promoting the development of arithmetic problem solving skills in primary school pupils. This course was built using a set of authoring tools (the ARI@ITALES tools) that were developed for implementing interactive constructivist activities in mathematics.

In the following, the ARI@ITALES tools are briefly examined, as well as the approach used to build the course with these tools. Selected findings from the field testing of the course with a fourth-grade primary school class (age 9-10) are then analysed. This analysis focuses on understanding whether the approach followed in designing the course and the authoring tools adopted effectively supported the teachers' efforts to integrate new technologies in their classes. Furthermore, the paper considers whether this integration had a positive impact on students' learning of mathematics content.

Background

Our research group has been involved in the design and experimentation of mathematics educational software for several years. In particular, the ARI-LAB system was developed to promote arithmetic problem solving skills in pupils of primary and lower secondary school.


ARI-LAB is a multi-environment open system based on constructivist principles that has been extensively tested and used in a variety of class situations (Bottino & Chiappini, 2002). These experiments showed its remarkable cognitive potential, but they also showed that its effective use in classroom practice (like the use of any other open interactive environment) requires considerable work from the teacher to integrate it into class activities that effectively impact students' learning. Such activities are often difficult for a single teacher to design and manage without proper support.

These considerations led us to design and implement tools and methodologies able to sustain teachers in carrying out ICT-based activities in their classes. We performed this work within ITALES, a European Commission co-funded project (IST-2000-26356) with several objectives, among them the development of a set of authoring tools enabling teachers to prepare digital content for personalised learning, and the development of a new learning management system that teachers can use to plan and build their own online courses.

Within ITALES, our research group designed and implemented the ARI@ITALES tools based on ARI-LAB2, the current version of the ARI-LAB system. Information on the ARI-LAB2 system and on the ARI@ITALES tools can be found at http://www.itd.cnr.it/arilab/.

ARI@ITALES is a set of authoring tools for building and implementing e-learning activities in mathematics. Using the ARI@ITALES tools, e-learning courses have been designed and tested in real class situations. One such course is analysed in this paper.

It is worth noting that the term e-learning is not used here to indicate a particular delivery strategy but rather an evolution in educational organization aimed at supporting teaching and learning processes through the use of social and technological resources (Alvino & Sarti, 2004). The ARI@ITALES tools allow an innovative shift towards e-learning since they make it possible to design courses that integrate a constructivist learning approach (activities in microworlds) with the learning object philosophy (production of digital resources for online learning).

Field experiments involving the use of the ARI-LAB and ARI@ITALES tools have been carried out not only at the local level but also from a wider perspective. For example, within TELMA (Technology Enhanced Learning in MAthematics), a joint research activity of the Network of Excellence Kaleidoscope (http://www.noe-kaleidoscope.org - accessed January 2007), a cross-experimentation project was organised to compare the different approaches to the study of ICT-based teaching and learning environments adopted by the European teams participating in TELMA. The key idea was that each team designed and implemented a teaching experiment in a real classroom making use of an ICT-based tool developed by another team. ARI-LAB, in particular, was tested by the French and Greek teams. First results of this cross-experimentation project are reported by Artigue et al. (2006; 2007). One of the findings is that it is necessary to support teachers not only with knowledge about how to use a software tool but also with its key theoretical and educational principles, making explicit the settings and pedagogical practices that can foster its educational potential. This suggests that the tools and the approach reported here can be useful for supporting teachers not only in designing specific learning activities integrating technology but also in building the whole pedagogical itineraries into which such activities are to be inserted in order to suit the stated learning objectives.

The ARI@ITALES tools

ARI@ITALES is a set of authoring tools for building and implementing mathematics e-learning activities at primary and lower secondary school levels (grades 2 to 8).

The ARI@ITALES tools were designed to support the teacher in the preparation of mathematics learning activities based on interaction with a number of microworlds. In these activities students can approach abstract and formal concepts (e.g. the number concept) by working with concrete representations (e.g. money, calendar, abacus, etc.). The tools of ARI@ITALES are: the Text Editor, the Microworlds, the Solution Sheet, the Simulator, and the Player. These are currently available in three languages: Italian, English and Spanish.


The Text Editor allows the editing and saving of problem texts as objects to be included in a course. When a text is saved, the Text Editor automatically inserts a command that allows the course participant to solve the problem by accessing the Solution Sheet and then the Microworlds. The Solution Sheet and Microworlds are automatically instantiated with the text of the problem at hand.

Microworlds are mediating tools for building the solution to a problem. Within microworlds, the user, through the creation and manipulation of computational objects, can visually represent problem situations in a variety of concrete contexts, which are also meaningful from the mathematical point of view. The Microworlds currently available in ARI@ITALES are: "Euro", "Calendar", "Abacus", "Number Building", "Number Line", "Graphs", "Spreadsheet", "Arithmetic Operations", "Fractions", and "Arithmetic Manipulator".

Some microworlds (such as Euro or Calendar) were designed to model common everyday situations such as 'buying and selling' or 'time' problems. For example, to solve a problem involving counting days, students can enter the Calendar microworld and visualise a month, mark intervals of days, pass from one month to another, etc. Similarly, to solve a problem involving a money transaction, students can enter the Euro microworld and generate Euros to represent a given amount, move coins on the screen, change them with other Euro coins or banknotes of an equivalent value, etc. Figure 1 shows the interface of the Euro microworld with an example of interaction.

Figure 1. Interface of the Euro microworld with an example of interaction

Other microworlds are designed to offer different ways of representing and manipulating numbers (Number Line, Abacus, Graphs), of performing calculations (Spreadsheet, Arithmetic Operations), and of dealing with more abstract mathematical concepts (Fractions, Arithmetic Manipulator). During problem solving, microworlds allow users to manipulate computational objects and interact with them using operational tools specific to each microworld. While interacting with microworlds, users receive various kinds of feedback that may foster the emergence of goals for the problem solution and the construction of meaning for the strategies developed. For example, in the Euro or in the Abacus microworld, it is possible to select a coin or group of coins (or a configuration of balls in the abacus) and to hear the corresponding amount pronounced orally by means of a voice synthesizer incorporated in the system. Moreover, in each microworld, some tasks performed by the users are controlled by a set of rules integrated in the system that prevent them from taking specific incorrect steps. For example, if the users try to perform an incorrect change of coins, or of balls in the Abacus, the system prevents them from continuing and displays a specific error message.
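The paper does not describe how these control rules are implemented. Purely as an illustration, the following minimal Python sketch shows one way such a rule could work for the Euro microworld, checking that a proposed exchange of coins preserves the total monetary value; the function name and structure are hypothetical.

```python
# Euro denominations in cents (coins and banknotes).
DENOMINATIONS = {1, 2, 5, 10, 20, 50, 100, 200, 500, 1000, 2000, 5000}

def validate_change(removed, added):
    """Check a proposed exchange of coins/banknotes.

    `removed` and `added` are lists of denominations in cents.
    Returns (True, "") if the exchange preserves the total value,
    otherwise (False, <error message>), mimicking the kind of
    feedback the microworld gives on an incorrect change.
    """
    for value in list(removed) + list(added):
        if value not in DENOMINATIONS:
            return False, f"{value} cents is not a euro denomination"
    if sum(removed) != sum(added):
        return False, (f"Incorrect change: you removed {sum(removed)} cents "
                       f"but put back {sum(added)} cents")
    return True, ""

# Changing a 50-cent coin into 20+20+10 is accepted;
# changing it into 20+20 is rejected with an error message.
print(validate_change([50], [20, 20, 10]))
print(validate_change([50], [20, 20]))
```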



In the Solution Sheet it is possible to elaborate the solution process enacted within the microworlds, transforming it into a product to reflect on and to share with others. The underlying metaphor is that of the maths workbook in which students usually do exercises and build solutions to problems. In the Solution Sheet, users build up a solution to the problem at hand by copying into this space the visual representations produced in the microworlds that they consider meaningful for working towards the solution. Students can employ verbal language and arithmetic symbolism to comment on the graphical representations copied and thus to explain their solution. This can be done by means of the "post-it" function, which allows the editing of a short note, a comment, or even a mathematical expression that can be added, removed, corrected, and moved about in the solution space. Figure 2 shows the interface of the Solution Sheet with a solution produced by a student tackling a "purchase and sale" problem.

Figure 2. Interface of the Solution Sheet with an example solution

The Simulator allows the building of a 'simulation': it automatically records a sequence of actions performed by the user (in one or more microworlds and/or in the Solution Sheet). These can then be viewed as a sort of movie in the Player environment, possibly with an accompanying audio commentary. A simulation can be used to illustrate a concept, to describe how a specific problem can be solved, or to explain the functioning of a microworld.
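The internal format of these simulations is not described in the paper. Purely as an illustration of the record-and-replay idea behind the Simulator and Player, the sketch below logs timestamped user actions and plays them back in order with the original delays; all class and field names are hypothetical.

```python
import time
from dataclasses import dataclass, field

@dataclass
class Action:
    timestamp: float      # seconds since the recording started
    environment: str      # e.g. "Euro", "Solution Sheet"
    description: str      # e.g. "generate 50-cent coin"

@dataclass
class Recorder:
    start: float = field(default_factory=time.monotonic)
    actions: list = field(default_factory=list)

    def record(self, environment, description):
        self.actions.append(Action(time.monotonic() - self.start,
                                   environment, description))

    def replay(self):
        # Play actions back in order, preserving the recorded delays.
        previous = 0.0
        for action in self.actions:
            time.sleep(action.timestamp - previous)
            print(f"[{action.environment}] {action.description}")
            previous = action.timestamp

rec = Recorder()
rec.record("Euro", "generate a 50-cent coin")
rec.record("Euro", "change 50 cents into 20+20+10")
rec.record("Solution Sheet", "copy representation and add a post-it")
rec.replay()
```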

The course on arithmetic problem solving

This course was designed to introduce 9-10 year-old students to the solution of arithmetic problems with additive and multiplicative structure. The course was designed to be carried out in the classroom under teacher supervision, not for individual distance learning outside this context.

Objectives

The main research objectives underlying the design of the course on arithmetic problem solving were twofold: firstly, to study whether and to what extent the ARI@ITALES tools could support teachers in designing and managing classroom activities integrating technology and assist them in overcoming some of the difficulties they usually encounter in this process; and secondly, to understand whether the designed activities could enhance students' learning of arithmetic concepts and problem solving strategies.


Mathematical content and structure

The course consisted of three modules comprising activities built with the ARI@ITALES tools: explanations, simulations, examples, problems to be solved, solutions to be completed by pupils, tests, and consolidation exercises. All the modules were designed in collaboration with the two teachers who then ran the course in their class.

Simulations were prepared to explain how to use a microworld or a specific function therein (e.g. the change function in the Euro microworld) as well as to introduce concepts like monetary equivalence. Examples of solved problems were included to present a new solution strategy or to introduce a new mathematics concept. Tests were developed to assess the learning of specific concepts and procedures and to provide diagnostic feedback, with suggestions for further activities. Problems (mainly verbal problems) were formulated with questions of increasing difficulty. Sometimes different versions of the same problem were proposed to different groups of students according to their level of ability. The course was initially assembled using the LRN Editor of the Microsoft LRN Toolkit 3.0, and later using the ITALES course assembly tool.

As to the mathematics learning objectives, the course mainly focused on the development of arithmetic abilities at increasing levels of formalization: counting, reading and writing numbers and making calculations; building strategies in microworlds for solving concrete arithmetic problems; converting the solution produced in a microworld into written verbal language and then into arithmetical relations; and recognizing invariant properties in the structure of a problem and in its solution (e.g. reduction to the unit for proportionality problems).

More specifically, the activities proposed in Module 1 were mainly dedicated to developing counting capabilities and to solving additive and multiplicative purchase and sale problems by applying different strategies (e.g. total/part/remainder, completion, containment, partition). These activities entailed the use of the Euro and Abacus microworlds. Module 2 activities were aimed at reinforcing counting strategies in various situations (e.g. counting days and intervals of days) and at introducing concepts such as multiple, sub-multiple, and lowest common denominator; they mainly involved the use of the Calendar microworld. Module 3 was designed to introduce concepts pertaining to data representation by means of tables, histograms and graphs and involved the use of the Graphs microworld. The calculation of parameters such as mode, arithmetic mean and median was introduced using the Spreadsheet microworld. Exercises were also included to foster reflection on simple formulas involving variables so as to make pupils aware of possible generalizations (e.g. direct and inverse proportion problems).

The field testing

Methodology

The course was tested in 2004 in a fourth-grade class composed of 20 pupils. The teaching experiment took place in the school computer laboratory, equipped with 12 computers with web connections. For the lab sessions, the students were divided into two groups so that each pupil had a computer at his or her disposal. Each group attended the lab two hours per week for four months.

The experiment took place during normal school hours and was carried out by the two class teachers in alternation, one taking a group in the lab while the other followed the other group in class. Elisabetta Robotti participated in all the laboratory sessions as observing researcher. During class work, when all pupils were present, the teachers always summed up the activities performed in the laboratory and discussed them with the whole class.

By the end of the field experiment all the pupils had completed the first two modules of the course; time limitations prevented all but a few from completing Module 3. Consequently, data from this last module are not presented in the following section.

Data gathering

In order to collect data for evaluating the teaching experiment, two different types of observational assessment sheet were prepared. These sheets were filled in for each pupil and for each laboratory session.


The first type of sheet (‘course sheet’) was designed to measure the efficacy of the course organization and components (simulations, problems, explanations, tests, etc.) as a support for the teacher. The second type (‘tools sheet’) was designed to measure the efficacy of the ARI@ITALES tools as a support for students in carrying out arithmetic problem solving activities.

In both types of sheet, the observations made during the experiment were clustered around three different issues: ‘ease of use’, ‘impact’, and ‘effectiveness’. The observing researcher attributed each aspect a numerical score from 1 (poor) to 4 (very good); space was also provided for comments. The issues considered in the sheets are derived from the Technology Acceptance Model (Davis, 1989), which describes how users come to accept and use a technology and states that acceptance and use are determined by two factors: perceived usefulness and perceived ease of use. In the analysis reported here, the meaning of these crucial factors was adapted to take into account the specific educational and pedagogical objectives of the study under examination. More specifically, in the course sheets ‘ease of use’ indicated the degree of difficulty encountered by a pupil in dealing with an activity in the course, e.g. downloading a simulation on the computer and using it. ‘Impact’ gave a general evaluation of the pupils’ reaction to a course component, especially when they used it for the first time. For example, the impact of a simulation was evaluated in terms of the level of acceptance (whether the pupils liked it), possible impeding factors like length (overly long simulations might be boring, overly short ones vague), and the clarity of the explanation. ‘Effectiveness’ estimated whether a component fulfilled the aim for which it was designed in a given context. For example, simulation effectiveness was measured in terms of its success in explaining a concept and in helping the pupil understand errors.

The tools sheets evaluated how the ARI@ITALES tools supported students in the problem solving process. Particular attention was dedicated to the main functions of the Solution Sheet and of the microworlds. This evaluation aimed at analysing the characteristics of the computer transposition of arithmetic knowledge and at studying whether the design of the learning activities successfully exploited those characteristics.

In the tools sheets, ‘ease of use’ and ‘impact’ indicated respectively the degree of ease with which a function was used and the way in which pupils perceived it. ‘Effectiveness’ measured the extent to which the functions incorporated in the Solution Sheet and in the microworlds were useful in promoting the development of specific arithmetic competencies and in helping students validate their actions. For example, it emerged that some functions in the Euro microworld, like generating and moving coins on the screen, fostered the development of counting strategies (e.g. making groups of the same value, completing an amount to obtain a whole, etc.). Similarly, functions in the Solution Sheet like the post-it function promoted pupils’ abilities in explaining and verbalizing their solution strategies.

Tables 1 and 2 show an elaboration of the evaluation data collected with the course and tools sheets. Table 1 shows the evaluations given to the considered issues at three different moments in the experiment: the beginning and the conclusion of Module 1 and the end of Module 2. Table 2 shows the development of the pupils’ arithmetic competencies at the beginning and at the end of Module 1.

Table 1. Results concerning the course components

[Scores from 1 (poor) to 4 (very good) for the ease of use, impact and effectiveness of each course component (simulations, tests, consolidation exercises, solution completion exercises, explanations) at the start of Module 1, the end of Module 1 and the end of Module 2, together with percentages for specific exercise types (e.g. intersection of time intervals, number composition and factorisation); the cell-level layout is not recoverable from the source.]

Table 2. Results concerning ARI@ITALES tools

[Scores from 1 (poor) to 4 (very good) for the ease of use, impact and effectiveness of the main ARI@ITALES functions (the Solution Sheet’s cut & paste, post-it and microworld-access functions; the Euro microworld’s voice synthesizer, coin-moving and coin-changing functions; the Abacus microworld’s ball-changing function) at the start and end of Modules 1 and 2, together with the percentages of pupils enacting specific counting and solution strategies; the cell-level layout is not recoverable from the source.]

Discussion of findings

In the following, a brief qualitative and quantitative analysis of some findings from the experiment is provided, linking them with the objectives of the project. This analysis provides indications on the course and its components that may suggest directions for future work in the field.

Supporting teachers in the design and management of classroom activity

First of all, the teachers appreciated the opportunity that the course offered for better following the learning rhythm of each pupil and for monitoring pupils who had occasionally been absent. Although the students participated in class activities, the course allowed them to follow the proposed learning itinerary on an individual basis. In this way students who missed classes could resume their learning itinerary without gaps, and those who experienced difficulties with some concepts could go back to the pertinent explanations, simulations and examples.

The teachers also appreciated the possibility of designing activities to meet the needs of different students. For example, they found it useful to submit different versions of the same problem to pupils with different ability levels. Moreover, they considered it very useful to be able to save on their computers the work completed by each student during a session. In this way they could check and compare the different solutions and thus better personalize the activities during subsequent lessons.



As to course components, the activities entailing completion of a partially implemented solution proved to be effective for gradually consolidating pupils’ acquisition of mathematical concepts. These activities were autonomously and correctly completed by 75% of the pupils in the initial sessions of the course and by 99% of them in the final sessions. Moreover, while at the beginning of Module 1 only 5% of pupils obtained a very good evaluation (score 4), at the end of this module the percentage rose to 47%.

Simulations proved to be quite useful for helping the teacher introduce a concept or a particular function of a microworld. Table 1 shows that, at the beginning of Module 1, simulations were perceived as effective by 30% of pupils; at the end of Module 2 this percentage increased to 55%. Nevertheless, a significant percentage of pupils (45%) did not use simulations autonomously, after a first launch attempt, to recall a concept or a procedure, preferring to ask their classmates or the teacher instead. This was probably because launching and watching a simulation was not straightforward, so many pupils avoided repeating the process.

The tests helped to verify the acquisition of specific procedures or concepts (e.g. ‘changing’ coins in the Euro microworld or balls in the Abacus). The tests were designed not only to provide evaluation feedback but also to introduce the pupil to specific remedial activities. This approach proved effective for the management of the activity, since the teacher did not have to rush from one pupil to another to check and offer suggestions. The data show that, at the beginning of Module 1, 55% of pupils completed the tests correctly; at the end of this module the percentage rose to 79%.

The ARI@ITALES microworlds have specific functions for validating crucial arithmetic competencies. For example, as mentioned earlier, in the Euro microworld it is possible to select a coin or group of coins and to hear the corresponding amount pronounced by means of a voice synthesizer incorporated in the system. During the experiment, this function took on a crucial role in helping pupils learn how to count money. The voice synthesizer allowed pupils to compare autonomously what they thought they had done (e.g. generating 1.50 euro worth of coins) with what they had actually done (generating a sum of 1.05 euro). This allowed pupils to correct their work autonomously by practising the rules governing the counting of coins. At the same time, the teacher was freed from the need to check the students’ work for errors. The voice synthesizer was mainly used at the beginning of the course, when pupils had to strengthen counting and representation skills. As work progressed, they resorted to it less and less, although some pupils continued to use it as part of a trial and error strategy so as to avoid the effort of counting. This brought to light the need to provide the teachers with a function for disabling the voice synthesis as they saw fit.

During the course, the voice synthesis was also used for reading problem texts aloud. This proved useful for pupils who had difficulties due to reading or sight problems and who would otherwise have repeatedly asked the teacher to read the problem text aloud during the solution activity. So in this case too the voice synthesizer made it easier for the teacher to manage class activities.

Other feedback functions incorporated in the ARI@ITALES microworlds also proved to be effective, for example the feedback provided by the system when students change coins in the Euro microworld or balls in the Abacus. Since this feedback is formulated to help students identify the error committed, if any, and correct it, it was frequently exploited during the experiment. For example, the teachers designed and proposed additional practice exercises for students with difficulties and asked them to perform these autonomously, relying on the system’s feedback to check their answers.

Development of arithmetic problem solving skills

Arithmetic problem solving is a field in which primary school students often experience difficulties, as indicated by many research studies and reported by many teachers (see, for example, Mullis, Martin & Foy, 2005). These difficulties have strong repercussions on students’ self-esteem and future mathematics performance. Even at primary school level, students frequently perceive ‘doing maths’ as the execution of repetitive exercises according to formal rules whose meaning they often do not understand, or master only at the syntactic level. For many pupils problem solving is limited to ‘guessing’ the right arithmetic operation and carrying out the written calculations, since the semantics they associate with arithmetic symbols is poor and frequently limited to what the result of a computation denotes. These considerations highlight the importance of developing new methodologies and tools to better support the development of meanings for arithmetic operators and for problem solving strategies.

One of the objectives of the work reported here was to study how ARI@ITALES tools could be used to assist pupils in visually representing and solving arithmetic problems. Visual representation systems play a central role in mathematics education as a way of linking the symbolic approach to mathematics concepts to perceptive experience. The ARI@ITALES microworlds were designed to offer pupils a variety of computational objects for representing and manipulating problem situations and resolution steps.

Consider, for example, the development of counting strategies, which are crucial for arithmetic problem solving (Adetula, 1996). The Euro microworld allowed pupils to represent amounts concretely by means of coins. At the beginning of the course, 75% of the pupils counted coins in sequence, according to their value, and often made errors when dealing with an increasing number of coins or with higher amounts. Becoming familiar with the Euro microworld and taking advantage of the movement and change opportunities it offers, the pupils gradually learnt strategies better suited to facilitating the counting process. For instance, by moving coins on the screen, they composed groups of coins whose amount corresponded to an integer value, e.g. 1 euro, first using coins of the same value (e.g. groups of five 20-cent coins) and then coins of different values. The acquisition of progressively structured counting strategies was assisted by inserting examples of solved problems in the course. Looking at these examples, pupils were exposed to new ways of arranging and counting coins and banknotes. These skills were then consolidated by proposing partially solved problems that pupils had to complete. At the end of Module 1, 57% of pupils were able to count mentally without moving coins on the screen, since they had internalised the process of moving and grouping coins; they only used the mouse pointer to support the counting process and the voice synthesizer to validate their results.

As to the development of solution strategies, the possibility of moving coins on the screen helped the pupils to enact total/part/remainder strategies and supported the attribution of a concrete meaning to the obtained groups of coins (the part, the remainder).

The generation of coins facilitated pupils in mastering completion strategies in additive problems. These strategies, which are crucial for mental calculation, are usually rather difficult for pupils to control. With the support of the Euro microworld, they were able to control all the phases of the process. For example, starting from a given quantity, they first generated the cents necessary to reach the next ten, then the tens needed to reach the whole euro, and so on until they reached the target amount.
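
As an illustration, the following sketch (a hypothetical reconstruction in Python, not code from the ARI@ITALES system; it works in integer cents to avoid rounding issues) mimics the completion strategy just described: cents up to the next ten, tens up to the next whole euro, then whole euros up to the target.

def completion_steps(amount_cents, target_cents):
    """Amounts generated to complete amount_cents up to target_cents."""
    steps, current = [], amount_cents
    if current % 10:                                  # cents needed to reach the next ten
        steps.append(10 - current % 10)
        current += steps[-1]
    if current % 100 and current < target_cents:      # tens needed to reach the whole euro
        steps.append(100 - current % 100)
        current += steps[-1]
    if current < target_cents:                        # whole euros up to the target
        steps.append(target_cents - current)
    return steps

print(completion_steps(135, 300))   # completing 1.35 euro to 3.00 euro: [5, 60, 100]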

The change function of the Euro microworld supported the learning of the additive composition and decomposition of numbers by means of monetary equivalence. Results showed that a good percentage of pupils (82%) acquired these competencies from the early sessions of the course. Almost half of the pupils also changed coins correctly under given constraints (e.g. changing a given amount using the smallest number of coins); this demonstrated that they had achieved a firm grasp of the manipulation of coins and numbers. The change function was used by a considerable number of pupils (47%) to partition a given amount into groups of a given value. It is worth noting that the direct intervention of the teacher helped to avoid some possible misuses of the microworld functions (e.g. the opportunistic use of the change function or of the voice synthesizer).

The activities in the Abacus microworld were designed to support both the exploration of the rules involved in the decimal positional writing of numbers and the acquisition of concepts involved in adding and subtracting decimal numbers (e.g. the carry-over concept). At the beginning of Module 2, about 90% of pupils correctly represented in the Abacus a monetary value previously built up with coins, and 40% were able to perform operations (additions and subtractions) by changing balls properly. At the end of Module 2, the percentage of students performing correct operations in the Abacus reached 80%.
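
The carry-over rule that the Abacus microworld makes tangible (ten balls in one column are exchanged for one ball in the next) can be sketched as follows; this is an illustrative reconstruction in Python, not code from the system.

def add_on_abacus(a, b):
    """Add two numbers given as digit lists, least significant column first
    (e.g. 47 -> [7, 4]), applying the carry-over ball exchange."""
    columns = [0] * (max(len(a), len(b)) + 1)
    for i, d in enumerate(a):
        columns[i] += d
    for i, d in enumerate(b):
        columns[i] += d
    for i in range(len(columns) - 1):
        carry, columns[i] = divmod(columns[i], 10)   # ten balls -> one ball carried left
        columns[i + 1] += carry
    return columns

print(add_on_abacus([7, 4], [5, 8]))   # 47 + 85 = 132 -> [2, 3, 1]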

The ARI@ITALES Solution Sheet supported the elaboration of the solution process enacted within the microworlds by allowing pupils to describe the strategy they had adopted both by means of the visual representations obtained in the microworlds and by means of verbal and symbolic language (using the ‘post-it’ function). The possibility of comparing different representations of the same value (for instance copying into the Solution Sheet the representations obtained in different microworlds, e.g. the Euro, the Abacus, and the Number Line microworlds) was used in the course to induce pupils to think over the symbolic representation of decimal numbers and to develop the capacity to handle more formal representations and their related properties. At the end of the course, almost 90% of the pupils were able to shift correctly from the verbal representation of a number to its decimal writing and to describe verbally in the Solution Sheet a procedure previously accomplished in a microworld; they subsequently managed to formalize this description using arithmetic symbols.

The activities in the Calendar microworld proved effective for developing meanings for some mathematical concepts, such as those of multiple, sub-multiple, and lowest common multiple (seen, for example, as the multiple interval of two or more time intervals). Then problems concerning the integer partition of a given amount were proposed. Eventually, more formal ways to tackle the concepts were gradually introduced. Multiple problems were correctly solved by 80% of pupils, while 55% managed to solve problems related to the concepts of lowest common multiple and integer partition.

Conclusions

Many research studies reveal that it is pointless from a pedagogical point of view to make computers and educational digital media available in schools if their use is not properly embedded in suitably articulated educational itineraries in which the whole learning context is taken into account, including the pedagogical and curriculum objectives, the tools and the way in which they are used, the teaching/learning paths, the different actors and their social relationships, etc. (Dias De Figueiredo & Afonso, 2006). Proper contextualization therefore becomes decisive in making educational software effective; otherwise, the potential of even the best program will remain largely unexploited. The design of effective contexts of use for ICT-based tools is a complex process that also requires changes in the content, organization and management of classroom activity, innovations that are difficult for a teacher to accomplish effectively. One of the objectives of the project discussed here was to analyse whether an e-learning approach, supported by specifically designed authoring tools, can help the teacher face such changes.

The analysis of the teaching-learning activity carried out during the class experiment pointed out elements of the course and of the ARI@ITALES tools that proved effective in supporting the management of classroom activities and the development of students’ cognitive processes in arithmetic problem solving.

For example, the experiment highlighted the crucial supporting role of the feedback provided by the ARI@ITALES tools. Direct diagnostic feedback proved useful in the acquisition of specific skills or procedures and in preventing students from making further incorrect steps. Indirect diagnostic feedback, such as that provided by the voice synthesizer, was helpful in supporting pupils in validating their work, thus fostering the development of crucial competencies such as counting. Moreover, some course components, such as the tests, were designed to provide feedback in a way that would help students understand the errors made and guide them in correcting them.

Backtracking and the possibility of revising work previously done proved useful for supporting the ability to verbalize and explain the actions performed. This ability was also supported by the Solution Sheet, which allowed elaboration of the solution process enacted within the microworlds, transforming it into a product to reflect on and to share with others.

Some of the activities proposed stimulated students’ disposition to mentally anticipate hypotheses and problem solutions. For example, exercises that involved completing a partially implemented solution, letting students concentrate on single steps, facilitated the acquisition of gradually articulated solution strategies.

A decisive role was played by the pupils’ interaction: they often discussed and exchanged opinions and advice on the strategies to be used, and compared results. This important aspect of the activity was only partially supported by the technology used, since at the time the experiment was conducted the ITALES platform was not fully implemented and the online communication function could not yet be used. This was certainly a limitation that prevented the design of course activities exploiting computer-mediated communication to promote mathematics learning. In previous work, the rich learning opportunities offered by such activities were analysed on the basis of a small-scale experiment carried out using a simple local network connection and an early version of the ARI-LAB system (Bottino, 2000). In future, a rethink of course design will be needed in order to include computer-mediated communication and collaboration activities. The flexibility of the approach followed in course design should make it easier than in a traditional setting to modify and reorganise learning activities and to change their content.



Finally, it is worth noting that, even though this paper mainly focuses on analysing aspects related to teachers’ management of ICT-based activities and on evaluating students’ learning of arithmetic concepts, future work in the field will examine how the adopted approach can mediate the growth of communities of researchers and teachers who collaboratively develop, share and discuss the design of classroom activities based on the use of technology. This analysis will be carried out within the REMATH European project (EC-IST-4-26751-STP), which is currently under way.

Acknowledgements

Special thanks to Roberto Carpaneto and Anna Macchello, teachers at the M. Mazzini Primary School in Genova. Their valuable collaboration and support were crucial in the design and testing of the course under examination.

References

Adetula, L. (1996). Effects of counting and thinking strategies in teaching addition and subtraction problems. Educational Research, 38, 183-198.

Alvino, S., & Sarti, L. (2004). Learning Objects e Costruttivismo. In Consorzio Omniacom (Eds.), Atti Didamatica, Ferrara, Italy, 761-772.

Artigue, M. (2000). Instrumentation issues and the integration of computer technologies into secondary mathematics teaching. Developments in Mathematics Education in German-speaking Countries: Selected Papers from the Annual Conference on Didactics of Mathematics, retrieved October 15, 2007, from http://webdoc.sub.gwdg.de/ebook/e/gdm/2000/artigue_2000.pdf.

Artigue, M., Bottino, R.M., Cerulli, M., Mariotti, M.A., & Morgan, C. (2006). Developing a joint methodology for comparing the influence of different theoretical frameworks in technology enhanced learning in mathematics: the TELMA approach. 17th ICMI Study: Technology Revisited, retrieved October 15, 2007, from http://www.math.msu.edu/~nathsinc/ICMI/.

Artigue, M., Bottino, R.M., Cerulli, M., Georget, J.P., Maffei, L., Maracci, M., Mariotti, M.A., Pedemonte, B., Robotti, E., & Trgalova, J. (2007). Technology enhanced learning in mathematics: the cross-experimentation approach adopted by the TELMA European Research Team. La Matematica e la sua Didattica, 21 (1), 67-74.

Bottino, R.M. (2000). Computer-based communication in the classroom: defining a social context. In Watson, D.M., & Downes, T. (Eds.), Communications and Networking in Education: Learning in a Networked Society, Dordrecht, The Netherlands: Kluwer Academic Publishers, 343-354.

Bottino, R.M. (2004). The evolution of ICT-based learning environments: which perspectives for the school of the future? British Journal of Educational Technology, 35 (5), 553-567.

Bottino, R.M., & Chiappini, G. (2002). Technological advances and learning environments. In English, L. (Ed.), Handbook of International Research in Mathematics Education, Mahwah, NJ: Lawrence Erlbaum, 757-786.

Davis, F.D. (1989). Perceived usefulness, perceived ease of use and user acceptance of information technology. MIS Quarterly, 13, 319-340.

De Corte, E. (1996). Changing views of computer supported learning environments for the acquisition of knowledge and thinking skills. In Vosniadou, S., De Corte, E., Glaser, R., & Mandl, H. (Eds.), International Perspectives on the Design of Technology-Supported Learning Environments, Mahwah, NJ: Lawrence Erlbaum, 129-145.

Dias De Figueiredo, A., & Afonso, A.P. (2006). Managing Learning in Virtual Settings: The Role of Context, Hershey, PA: Information Science Publishing.

Lagrange, J.B., Artigue, M., Laborde, C., & Trouche, L. (2001). Meta study on IC technologies in education: towards a multidimensional framework to tackle their integration into the teaching of mathematics. In van den Heuvel-Panhuizen, M. (Ed.), Proceedings of the 25th Conference of the International Group for the Psychology of Mathematics Education, Utrecht, The Netherlands: Freudenthal Institute, Utrecht University, 1, 111-122.

Mullis, I.V.S., Martin, M.O., & Foy, P. (2005). Findings from a Developmental Project, Chestnut Hill, MA: TIMSS & PIRLS International Study Center, Boston College.

Salomon, G. (1996). Computers as a trigger for change. In Vosniadou, S., De Corte, E., Glaser, R., & Mandl, H. (Eds.), International Perspectives on the Design of Technology-Supported Learning Environments, Hillsdale, NJ: Lawrence Erlbaum, 363-377.

Sutherland, R. (2004). Designs for learning: ICT and knowledge in the classroom. Computers & Education, 43, 5-16.

Venezky, R.L., & Davis, C. (2002). Quo Vademus? The Transformation of Schooling in a Networked World, OECD/CERI, retrieved October 15, 2007, from http://www.oecd.org/dataoecd/48/20/2073054.pdf.



Lu, H., Jia, L., Gong, S.H., & Clark, B. (2007). The Relationship of Kolb Learning Styles, Online Learning Behaviors and Learning Outcomes. Educational Technology & Society, 10 (4), 187-196.

The Relationship of Kolb Learning Styles, Online Learning Behaviors and Learning Outcomes

Hong Lu
Department of Educational Technology, Shandong Normal University, China // Tel: +86-531-86188575 // luhong_1968@yahoo.com

Lei Jia
Department of English, Shandong Normal University, China // Tel: +86-531-86181218 // krowbat@gmail.com

Shu-hong Gong
Department of Educational Technology, Shandong Normal University, China // Tel: +86-531-86182530 // gongshuhong@21cn.com

Bruce Clark
Faculty of Education, University of Calgary, Canada // Tel: +1-403-220-7363 // bclark@ucalgary.ca

ABSTRACT

This study focused on the relationship between Kolb learning styles and the enduring time of online learning behaviors, the relationship between Kolb learning styles and learning outcomes, and the relationship between learning outcomes and the enduring time of a variety of different online learning behaviors. Prior to the experiment, 104 students majoring in Educational Technology completed Kolb’s Learning Style Inventory (KLSI). Forty students were chosen to be subjects in an online learning experiment. Results indicated that there was a significant effect of Kolb learning style on the total reading time and total discussion time of the subjects. Although there was no significant effect of Kolb learning styles on learning outcomes, data from the experiment showed that the mean learning outcome of Convergers and Assimilators was higher than that of Divergers and Accommodators. There were two models of linear regression between learning outcomes and the enduring time of different online learning behaviors. Both were significant at the 0.001 level, and they accounted for 54.9% and 60.8% of the variance of the dependent variable respectively. The findings of this study are instructive for instructors and moderators of online courses. First, instructors using online courses should seriously consider the diversity of learning styles when designing and developing online learning modules for different students. Second, they should provide a large number of electronic documents for students and give them enough time to absorb knowledge by online reading. These could be effective methods to improve the quality of online courses.

Keywords
Kolb learning styles, Online learning behaviors, Learning outcomes

Introduction

Although online learning was growing rapidly, its effect was not yet satisfactory. For example, some students often complained that they could not find sufficient online learning resources to support their online courses, whereas other students were restricted by what they felt was a lack of opportunities to communicate with their instructors (Huang, 2003). Liegle & Janicki (2006) offered solutions to these problems, arguing that by customizing learning modules for differing student types, the learning outcome would be increased. Based upon this solution, the present study focused on the relationship of Kolb learning styles, online learning behaviors and learning outcomes. It was hoped that this study could help instructors understand the function of learning style in an online learning environment and thus develop corresponding online learning modules for different students.

Kolb Learning Style Model

The Kolb learning style model was based on Kolb’s experiential learning theory. In this model, Kolb defined learning style on a two-dimensional scale based on how a person perceived and processed information. How a person perceived information was classified as concrete experience or abstract conceptualization, and how a person processed information was classified as active experimentation or reflective observation (Simpson & Du, 2004). Accordingly, Kolb (1985) described the process of experiential learning as a four-stage cycle involving four adaptive learning modes: Concrete Experience (CE), Reflective Observation (RO), Abstract Conceptualization (AC), and Active Experimentation (AE). CE tended towards peer orientation and benefited most from discussion with fellow CE learners. AC tended to be oriented more towards symbols and learned best in authority-directed, impersonal learning situations that emphasized theory and systematic analysis. AE tended towards an active, “doing” orientation to learning that relied heavily on experimentation and learned best while engaging in projects. RO relied heavily on careful observation in making judgments. Kolb (1985) also identified four learning style groups based on the four learning modes: Divergers favored CE and RO, Assimilators favored AC and RO, Convergers favored AC and AE, and Accommodators favored CE and AE.
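
The mapping from dominant modes to styles can be summarized in a small Python sketch (illustrative only; the actual KLSI scoring uses continuous scale scores rather than labels):

def kolb_style(perceiving, processing):
    """perceiving: 'CE' or 'AC'; processing: 'AE' or 'RO'."""
    return {("CE", "RO"): "Diverger",
            ("AC", "RO"): "Assimilator",
            ("AC", "AE"): "Converger",
            ("CE", "AE"): "Accommodator"}[(perceiving, processing)]

print(kolb_style("AC", "AE"))   # Converger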

Kolb Learning Style Inventory

There were many different learning style models. Many of them were derived from a common ancestry and measured similar dimensions (Brown, 2005). Accompanying the vast collection of learning models there was also a wealth of confusing assessment tools, amongst which the Kolb Learning Style Inventory (KLSI) remained one of the most influential and widely distributed instruments used to measure individual learning preference (Kayes, 2005). The original KLSI drew serious criticism because of its low test-retest reliability and limited construct validity (John et al., 1991). In 1985, Kolb and his associates revised the KLSI to improve and refine its psychometric properties (Smith & Kolb, 1986).

Some researchers had examined and found support for the revised KLSI. Veres et al. (1991) examined the revised KLSI and found increased stability; they argued that the revised KLSI might well be useful to researchers, educators and practitioners. Raschick, Maypole & Day’s (1998) research found the revised KLSI a useful tool for optimizing the relationship between supervisors and their students. The tool enabled both groups to view learning as a four-step process that involved experiencing, reflecting, conceptualizing and creatively experimenting. Based on these favorable results, the KLSI was adopted as the instrument for this experiment.

Review of Literature

Some recent studies analyzed the online learning behaviors of the Kolb learning style groups. Simpson & Du (2004) explored the effect of Kolb learning styles on students’ online participation and self-reported enjoyment levels in distributed learning environments. Multiple regression analysis found that learning style had a significant impact on the students’ participation and enjoyment level. Fahy (2005) conducted a study of the relations between Kolb learning style and online communication behavior, and found that Convergers demonstrated a willingness to spend more time and energy on the network itself. Liegle & Janicki (2006) found that learners classified as “Explorers” (active experimenters) tended to create their own path of learning (learner control), while subjects classified as “Observers” (reflective observers) tended to follow the suggested path by clicking on the “Next” button (system control) in a web-based training program. However, there were studies that reached opposite conclusions. For example, in a hypermedia learning environment, Reed et al. (2000) argued that Kolb learning styles had no effect on the number of linear steps, which determined whether the steps were the next logical, sequentially forward movement, nor on the number of nonlinear steps, which determined whether the steps were branches or sidetracks.

Why did inconsistent results occur in the above-mentioned studies? One answer is that the lack of effect of Kolb learning styles could be due to those studies involving research variables, such as linear steps vs. nonlinear steps, that the Kolb instrument did not seem to measure (Miller, 2005). According to this view, if the research variables of an experiment could be measured by the KLSI, the effect of Kolb learning style would appear. To test this, the present study chose online learning behaviors related to the KLSI as its research variables.

Research investigating the learning outcome in an online or a hypermedia environment also reached confusing conclusions. For instance, Melara (1996) examined the effect of Kolb learning styles on learner performance within two different hypertext structures, and showed no significant difference in achievement for learners of different learning styles using either hypertext structure. Davidson-Shivers et al. (2002) investigated the effect of Kolb learning styles on undergraduate writing performance in a multimedia lesson; no statistically significant difference in writing performance among the learning styles was found. Howard and colleagues (2004) argued that, even though significant learning occurred, no significant difference in achievement was observed within any of Kolb’s classifications. Miller (2005) found no effect of Kolb learning styles on performance when using a computer-based instruction system to teach introductory probability and statistics. Some experiments, however, showed positive effects of Kolb learning styles on students’ performance. Oughton & Reed (2000) tested 21 graduate students enrolled in a graduate hypermedia education class who were told to construct concept maps on the term hypermedia; findings indicated that Assimilators and Divergers were the most productive on their concept maps. Terrell (2002) indicated that, in a web-based learning environment, students whose learning styles belonged to Convergers and Assimilators were more likely to succeed than students whose learning styles belonged to Divergers and Accommodators. These confusing conclusions might be produced by various factors, such as the topic of the course and how grades are given. Therefore, practitioners of online courses have to take these factors into consideration when they hope to make use of relevant research conclusions.

Most of the previous research focused on investigating either the online learning behavior or the learning outcome of the Kolb learning style groups. Little research dealt with the relationship between online learning behavior and learning outcome. The purpose of this study was to determine whether there were differences in online learning behavior between Kolb learning style groups, and if so, whether these differences would lead to differences in learning outcome.

Research Variables

Kolb Learning Style Groups

The KLSI could identify subjects’ preferences for perceiving and processing information. Subjects responded to the 12-item Kolb instrument and were categorized as Convergers, Divergers, Assimilators or Accommodators.

Online Learning Behavior

Corresponding with the four learning modes in the Kolb learning style model, four different online learning behaviors were identified as research variables in this study. The enduring time of each was measured while the subjects designed Flash animations in an online learning environment. These types of behavior were online discussion, preferred by CE; online reading of electronic documents, preferred by AC; Flash animation designing, preferred by AE; and online observation of the onscreen activities of other subjects, preferred by RO.

Learning Outcome

The subjects’ task was to design one animation using the Flash software. The animation included ten different text effects. Each effect counted as one point, for a total of ten points. The learning outcome was measured according to the number of text effects completed by the subjects.

Research Questions

The authors of this experiment hypothesized that subjects with different learning styles tended to choose different online learning behaviors, which would subsequently result in different learning outcomes.

The research questions guiding this experiment were as follows: (1) What was the relationship between learning styles and the enduring time of online learning behaviors? (2) What was the relationship between learning styles and learning outcomes? (3) What was the relationship between learning outcomes and the enduring time of different online learning behaviors?



Method

Participants

The participants were third-year undergraduate students in the Department of Educational Technology at Shandong Normal University in China. 104 students took part in the KLSI test; 40 of them, demonstrating evident learning style preferences, were chosen as subjects, with ten subjects in each learning style category. Table 1 shows the gender distribution of the learning style categories.

Table 1. Gender distribution of Kolb categories

                 Male   Female
Convergers         5       5
Divergers          4       6
Assimilators       5       5
Accommodators      4       6

These subjects had grasped some basic computer knowledge, such as the use of the Internet, communication software, drawing software and word processing software. They had also acquired basic knowledge of Flash as freshmen.

Procedure

The experiment was performed in the university computer laboratory. The subjects were divided into ten groups, each containing four subjects: one Converger, one Diverger, one Assimilator and one Accommodator. They were given 120 minutes to perform the designated task. During the 120 minutes, these four subjects worked individually and each one met with one experimenter, who observed and recorded their behaviors. Four graduate students who were familiar with the application of Flash were arranged to communicate with the subjects through the instant messaging software QQ. A website containing an electronic document on how to design the animation using Flash was also provided; this electronic document was a detailed guide to designing the animation in the task.

Initially, the subjects were given 20 minutes to respond to the task. They were required to do so individually, without the help of online consultations, observations or references. After this pretest, they were given a 10-minute break. Each subject was then administered a posttest lasting 90 minutes, responding to the task again. This time they could discuss with the graduate students through QQ, observe the designing process of other subjects (the subjects were authorized to access the onscreen operations of others from their own computers), read the electronic document on the Internet or design Flash animations by themselves. During the posttest, the experimenter observed and recorded each participant’s enduring time in online discussion with the graduate students (total discussion time), the enduring time observing the onscreen activities of others (total observation time), the enduring time actively reading the electronic document (total reading time) and the enduring time designing Flash animations (total designing time).

Data analyses were performed using the Statistical Package for the Social Sciences for Windows (SPSS, ver. 13.0).

Results

The relationship between learning styles and the enduring time of online learning behaviors

The relationship between learning styles and the enduring time of online learning behaviors was analyzed with one-way ANOVA. The analysis found that learning styles had no significant effect on total observation time or total designing time. In fact, all subjects spent more than 45 minutes on designing; only five subjects spent one or two minutes observing the onscreen activities of others, and the rest spent no time on observation. However, subjects with different learning styles demonstrated significant differences in total discussion time and total reading time.

Table 2. ANOVA for different learning styles and the enduring time of online learning behaviors

                          Convergers       Divergers       Assimilators     Accommodators
                          M (min)  SD      M (min)  SD     M (min)  SD      M (min)  SD        F       Prob.
Total discussion time       7.8   3.2931    14.1   4.4083    6.4   3.3400    12.6   3.2042   10.617    0.000
Total observation time      0.2   0.6325     0.1   0.3162    0.3   0.6749     0.1   0.3162    0.347    0.791
Total reading time         20.9   3.9847    15.9   3.3813   21.2   2.7809    14.1   3.6652   10.525    0.000
Total designing time       55.1   6.7569    53.8   4.8717   56.9   5.3427    57.7   5.9451    0.929    0.437

Table 3. Scheffé post hoc comparison of total discussion time (Prob.)

                 Convergers   Divergers   Assimilators   Accommodators
Convergers           –          0.005        0.859          0.045
Divergers          0.005          –          0.000          0.832
Assimilators       0.859        0.000          –            0.006
Accommodators      0.045        0.832        0.006            –

Table 4. Scheffé post hoc comparison of total reading time (Prob.)

                 Convergers   Divergers   Assimilators   Accommodators
Convergers           –          0.027        0.998          0.001
Divergers          0.027          –          0.017          0.722
Assimilators       0.998        0.017          –            0.001
Accommodators      0.001        0.727        0.001            –

Table 3 and Table 4 (the Scheffé post hoc comparisons) show that the significant effects appeared between the subjects who favored abstract conceptualization (Convergers and Assimilators) and those who favored concrete experience (Divergers and Accommodators). That is, on the one hand, subjects identified as Convergers and Assimilators spent more time on online reading than those identified as Divergers and Accommodators; on the other hand, subjects identified as Divergers and Accommodators spent more time on online discussion than those identified as Convergers and Assimilators. Furthermore, no significant differences were found between genders in total discussion time (F(1,38)=0.041, p=0.841), total observation time (F(1,38)=0.009, p=0.926), total reading time (F(1,38)=0.432, p=0.515) or total designing time (F(1,38)=0.009, p=0.925).
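
For readers who wish to reproduce this kind of analysis, the one-way ANOVA reported in Table 2 corresponds to the following Python sketch (the study used SPSS 13.0; the values below are hypothetical stand-ins for the forty observed enduring times, ten per style):

from scipy.stats import f_oneway

convergers    = [8, 5, 10, 7, 4, 12, 9, 6, 8, 9]         # hypothetical minutes
divergers     = [15, 12, 18, 14, 9, 16, 13, 17, 11, 16]
assimilators  = [6, 4, 9, 5, 3, 11, 7, 6, 8, 5]
accommodators = [13, 10, 16, 12, 9, 15, 11, 14, 12, 14]

f, p = f_oneway(convergers, divergers, assimilators, accommodators)
print(f"F(3,36) = {f:.3f}, p = {p:.3f}")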

The relationship between learning styles and learning outcomes

The subjects had learned some introductory knowledge of Flash in their first year of university. Owing to scarce practice during the following two years, only seven subjects finished one text effect in the pretest: two Convergers, one Diverger, one Assimilator and three Accommodators. No subject completed all ten effects in the posttest. The task seemed challenging to all subjects.



Table 5. Learning outcomes of different learning styles

                     Convergers     Divergers     Assimilators   Accommodators
                      M     SD       M     SD       M     SD       M     SD
Pretest              0.2   0.422    0.1   0.316    0.1   0.316    0.3   0.483
Posttest             5.3   2.163    4.4   1.776    4.9   2.183    4.8   2.658
Learning outcomes    5.1   1.912    4.3   1.567    4.8   2.044    4.5   2.224

To analyze the relationship between learning styles and learning outcomes, each subject was categorized as demonstrating either a high or a low learning outcome. A high learning outcome was defined as a learning outcome equal to or greater than five points (higher than the mean learning outcome of the subjects, M=4.68). The result of the chi-square test showed that there was no significant association between learning styles and learning outcomes (χ²(3, N=40)=2.707, p=0.538), and no significant association between gender and learning outcomes (χ²(1, N=40)=0.123, p=0.726).
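
The reported chi-square test of independence can be sketched as follows; the 4x2 cell counts below are hypothetical (the paper reports only the test statistics), although the row totals of ten per style match the design:

from scipy.stats import chi2_contingency

#        high, low learning outcome (>= 5 points vs < 5)
table = [[6, 4],   # Convergers (hypothetical counts)
         [4, 6],   # Divergers
         [5, 5],   # Assimilators
         [5, 5]]   # Accommodators

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2({dof}, N=40) = {chi2:.3f}, p = {p:.3f}")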

The relationship between learning outcomes and the enduring time of different online learning behaviors

To answer the research question “What was the relationship between learning outcomes and the enduring time of different online learning behaviors?”, a multiple linear regression was conducted, regressing the learning outcomes on the four predictor variables (total discussion time, total observation time, total reading time and total designing time).
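
The two-step entry described above and tabulated below (Model 1 with three predictors, Model 2 adding total observation time) can be sketched in Python as follows; the data are randomly generated placeholders, since the study’s raw data are not reported, and SPSS rather than Python was actually used:

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
X = rng.normal(size=(40, 4))        # columns: X1 discussion, X2 observation, X3 reading, X4 designing
y = 0.2 * X[:, 2] - 0.4 * X[:, 3] + rng.normal(size=40)   # placeholder outcome

model1 = sm.OLS(y, sm.add_constant(X[:, [0, 2, 3]])).fit()   # Model 1: X1, X3, X4
model2 = sm.OLS(y, sm.add_constant(X)).fit()                 # Model 2: adds X2
print(model1.rsquared, model2.rsquared)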

Table 6. Correlations between learning outcome and independent variables

                                       Intercorrelations
Variables                      X1      X2      X3      X4       Y         M        SD
Total discussion time (X1)      1    -0.082  -0.226  -0.624   0.319*    10.225   4.7420
Total observation time (X2)            1      0.155  -0.143  -0.127      0.175   0.5006
Total reading time (X3)                        1     -0.535   0.566**   18.025   4.5825
Total designing time (X4)                              1     -0.702**   55.875   5.7565
Learning outcomes (Y)                                           1        4.675   1.8999

** Correlation is significant at the 0.01 level (2-tailed).
*  Correlation is significant at the 0.05 level (2-tailed).

The correlations between the dependent variable (learning outcome) and the independent variables (total discussion time, total observation time, total reading time and total designing time) are shown in Table 6. The dependent variable was significantly correlated with total designing time (r=-0.702, p<0.01), with total reading time (r=0.566, p<0.01) and with total discussion time (r=0.319, p<0.05).

Table 7. Results of Multiple Regression Analysis (Model Summary)

Model    R Square
1          0.549
2          0.608

Table 8. Results of Multiple Regression Analysis (ANOVA)

Model                Sum of Squares   df   Mean Square      F       Prob.
1 a   Total              140.775      39
2 b   Regression          85.594       4     21.398       13.572    0.000
      Residual            55.181      35      1.577
      Total              140.775      39

a. Predictors: (Constant), total discussion time, total reading time, total designing time. Dependent variable: learning outcome.
b. Predictors: (Constant), total observation time, total discussion time, total reading time, total designing time. Dependent variable: learning outcome.

Table 9. Results of Multiple Regression Analysis (Coefficients)

                               Unstandardized Coefficients   Standardized Coefficients
Model a                           B         Std. Error               Beta                 t       Prob.
1   (Constant)                  7.953         8.461                                     0.940     0.354
    Total discussion time       0.069         0.108                 0.173               0.641     0.525
    Total reading time          0.167         0.103                 0.402               1.612     0.116
    Total designing time       -0.125         0.103                -0.379              -1.219     0.231
2   (Constant)                 13.092         8.312                                     1.575     0.124
    Total discussion time       0.004         0.106                 0.009               0.035     0.972
    Total observing time       -0.969         0.423                -0.255              -2.289     0.028
    Total reading time          0.125         0.100                 0.302               1.257     0.217
    Total designing time       -0.189         0.101                -0.572              -1.868     0.070

a. Dependent variable: learning outcome.

Findings from the multiple regression analysis are summarized in Tables 7, 8 and 9. The linear regression analysis encompassed the individual-level variables of each subject's learning outcome, total discussion time, total observation time, total reading time and total designing time. Both models shared a precondition: all subjects spent more than half of the total time on designing. In the first step of the analysis (Model 1), simultaneous entry was specified for total discussion time, total reading time and total designing time. Table 7 shows that Model 1 accounted for 54.9% (R² = 0.549) of the variance, which was significant at the 0.001 level. In the second step (Model 2), total observation time was added, increasing R² by 5.9%. In Table 8, the F value is the mean square for regression divided by the mean square for residual. The probability of the F values in the two models shows that the likelihood of the given correlation occurring by chance was less than 1 in 10,000, meaning that both linear regression equations were significant. In Table 9, the B values are the coefficients and constant of the linear regression equation, and Beta is the B value for standardized scores of the independent variables. The Beta values indicate the relative influence of the independent variables on the dependent variable. Table 9 shows that, in Model 1, total reading time and total discussion time had a positive influence, while total designing time had a negative influence. In Model 2, total reading time and total discussion time had a positive influence, while total designing time and total observation time had a negative influence.
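As a concrete illustration, the unstandardized coefficients in Table 9 give the fitted equation for Model 2 as Y = 13.092 + 0.004*X1 - 0.969*X2 + 0.125*X3 - 0.189*X4. The following minimal sketch (not the authors' analysis; it uses only the published coefficients, since the raw data are not reproduced here) restates that equation in Python and checks the reported F value against the mean squares in Table 8.

# Model 2 unstandardized coefficients (constant and B values) from Table 9.
B0, B1, B2, B3, B4 = 13.092, 0.004, -0.969, 0.125, -0.189

def predict_outcome(x1: float, x2: float, x3: float, x4: float) -> float:
    """Predicted learning outcome for Model 2:
    Y = 13.092 + 0.004*X1 - 0.969*X2 + 0.125*X3 - 0.189*X4."""
    return B0 + B1 * x1 + B2 * x2 + B3 * x3 + B4 * x4

# Evaluating at the sample means from Table 6 recovers, up to rounding of the
# published coefficients, the mean learning outcome of 4.675, as a
# least-squares fit with an intercept must.
print(round(predict_outcome(10.225, 0.175, 18.025, 55.875), 3))  # 4.656

# F in Table 8 is the mean square for regression over the mean square for
# residual: 21.398 / 1.577, matching the reported 13.572 up to rounding.
print(round(21.398 / 1.577, 3))  # 13.569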

Discussion

This study explored a new and important issue: the relationship between Kolb learning styles, online learning behaviors and learning outcomes. It highlighted emergent themes in the following areas. Firstly, there was a significant effect of learning styles on total reading time and total discussion time. Convergers and Assimilators spent more time on online reading than Divergers and Accommodators, while Divergers and Accommodators spent more time on online discussion than Convergers and Assimilators. These findings are theoretically consistent with the predictions of the Kolb learning style model. Convergers and Assimilators possess the characteristic of Abstract Conceptualization (AC). A high score in AC indicates that a learner is more oriented towards symbols and learns best in authority-directed, impersonal learning situations (Kolb, 1985). Such learners therefore tended to read the electronic documents on how to design the animation in the experiment. Divergers and Accommodators possess the characteristic of Concrete Experience (CE). A high score in CE indicates that a learner is more oriented towards peers and benefits most from discussions. Such learners therefore tended to discuss with the graduate students who acted as online consultants in the experiment.

Secondly, learning styles had no significant effect on learning outcomes. This result was not anticipated by the researchers. However, Table 5 showed that the mean learning outcome of Convergers and Assimilators was higher than that of Divergers and Accommodators, which was in accordance with some previous conclusions. For example, Terrell (1995) predicted that students taking computer-mediated coursework would primarily be Convergers and Assimilators. Henke (2001) postulated that Assimilators and Convergers might be more successful in computer-based training than students with other learning styles. In fact, the same conclusion could also be drawn from the linear regression between learning outcome and the time spent on different online learning behaviors. In the full model, total designing time, total reading time and total discussion time were significantly related to learning outcomes. Table 9 showed that, in both Model 1 and Model 2, the standardized regression coefficient of total reading time was larger than that of total discussion time. This means that the influence of total reading time on learning outcomes was larger than the influence of total discussion time; students who spent more time on online reading achieved better learning outcomes than students who spent more time on online discussions. This explains why the mean learning outcome of Convergers and Assimilators was higher than that of Divergers and Accommodators in this experiment.

Thirdly, some previous studies reported a relation between gender and online learning patterns (Herring, 1992; Fahy, 2002), but none was found in this experiment. This finding might result from the fact that computers and Internet access have become relatively inexpensive and readily available in recent years, so that using computers and the Internet is no longer seen as an exclusively or even predominantly male activity. At the university where this study was conducted, many study programs had a computer literacy requirement, and a degree of familiarity with standard computer software packages was a basic requirement for both male and female students. Students of both genders were accustomed to the online learning environment; thus no significant difference was found between male and female students in online learning behaviors and learning outcomes.

Finally, this study found no significant effect of learning style on total observation time and total designing time. The design of the experimental environment might have contributed to this result. The authors found that the arrangement for “online observation” was rather artificial. Subjects who were asked to “observe” the on-screen operation of others from their own computers might have found it unhelpful for learning animation design and become reluctant to carry out online observation. It is likely that, in a more natural learning environment, subjects would consult others in person and observe what they were doing beside them, and consequently a significant effect of different learning styles might be found. The authors also suspect that the specific nature of the task is the reason why no significant effect of learning style was found on total designing time: in order to achieve the animation effects, all subjects had to spend a large amount of time (more than half of the total time) working with the Flash software.

Conclusion

The results of this study have potential value for instructors of online courses. As discussed earlier, students oriented towards Abstract Conceptualization might find that abundant electronic documents satisfy their online learning requirements, whereas students oriented towards Concrete Experience might find that communicative learning environments, such as the BBS, meet their online learning demands. Based on these results, instructors should seriously consider diverse learning styles when designing and developing online learning modules for different students. Many scholars have offered suggestions to this end, including designing course modules to meet the requirements of observing, participating, thinking and summarizing in the learning cycle so as to accommodate different learning styles (Simpson & Du, 2004), or offering students a learning environment that provides a variety of ways to access course information (Ruokamo & Pohjolainen, 2000).

In addition, maximizing students’ learning outcomes is one goal of using online courses. In order to achieve better learning outcomes, instructors of online courses have been inclined to encourage students to participate in online discussion activities. This study reached a different conclusion: online reading played an important role in students’ learning outcomes, so providing a large number of electronic documents and giving students enough time to absorb knowledge through online reading might also be an effective method of improving the quality of online courses.

Future research

Data analysis of this experiment showed that students of different learning style types tended to exhibit different online learning behaviors. It also raised a debatable question: should we design online learning modules to meet the needs of students with different learning style types? The answers might be arguable. For example, Miller (2005) claimed that understanding the compatibility of CBI (Computer-Based Instruction) formats with different styles allows us to create instructional systems that are effective for all types of students, and that CBI designers should put effort into designing systems that meet the needs of all styles of learning/thinking. However, Robotham (1995) argued that a truly proficient learner is someone who can switch between styles, take advantage of all educational offerings and direct their own education. He believed that course design should focus on teaching students to self-direct their learning rather than forcing students into a specific learning style. Taking these two views into consideration, instructors or moderators of online courses should provide a variety of learning modules and help students learn how to switch between learning styles in order to take advantage of these choices. This is undoubtedly a challenging task, and it will be a key issue for future research in distance education.

Acknowledgments

The authors wish to thank Ling-Ling Guo for her helpful assistance in the collection of the data. In addition, we would like to thank the students and staff who participated in this study.

References

Brown, E., Cristea, A., Stewart, C., & Brailsford, T. (2005). Patterns in authoring of adaptive educational hypermedia: a taxonomy of learning styles. Educational Technology & Society, 8 (3), 77-90.

Davidson-Shivers, V., Nowlin, B., & Lanouette, M. (2002). Do multimedia lesson structure and learning styles influence undergraduate writing performance? College Student Journal, 36 (1), 20-31.

Fahy, P. J. (2005). Student learning style and asynchronous computer-mediated conferencing (CMC) interaction. The American Journal of Distance Education, 19 (1), 5-22.

Fahy, P. J. (2002). Epistolary and expository interaction patterns in a computer conference transcript. Journal of Distance Education, 17 (1), 20-35.

Henke, H. (2001). Learning theory: applying Kolb’s learning style inventory with computer based training, retrieved October 15, 2007, from http://www.chartula.com/learningtheory.pdf.

Herring, S. C. (1992). Gender and participation in computer-mediated linguistic discourse. Paper presented at the Annual Meeting of the Linguistic Society of America, January 9-12, 1992, Philadelphia, USA.

Howard, W. G., Ellis, H. H., & Rasmussen, K. (2004). From the arcade to the classroom: capitalizing on students’ sensory rich media preferences in disciplined-based learning. College Student Journal, 38 (3), 431-440.

Huang, R. H. (2003). The Theories and Methods of Computer-Supported Cooperative Learning, Beijing: People’s Education Press.

John, M. C., Pamela, A. M., & William, P. D. (1991). Factor analysis of the 1985 revision of Kolb’s Learning Style Inventory. Educational and Psychological Measurement, 51 (2), 455-462.

Kayes, D. C. (2005). Internal validity and reliability of Kolb’s Learning Style Inventory version 3 (1999). Journal of Business and Psychology, 20 (2), 249-257.

Kolb, D. A. (1985). Learning-Style Inventory: Self-scoring inventory and interpretation booklet, Boston: McBer and Company.

Liegle, J. O., & Janicki, T. N. (2006). The effect of learning styles on the navigation needs of web-based learners. Computers in Human Behavior, 22 (5), 885-898.

Melara, G. E. (1996). Investigating learning styles on different hypertext environments: hierarchical-like and network-like structures. Journal of Research on Computing in Education, 14 (4), 313-328.

Miller, L. M. (2005). Using learning styles to evaluate computer-based instruction. Computers in Human Behavior, 21 (2), 287-306.

Oughton, J. M., & Reed, W. M. (2000). The effect of hypermedia knowledge and learning style on student-centered concept maps about hypermedia. Journal of Research on Computing in Education, 32 (3), 366-384.

Raschick, M., Maypole, D., & Day, P. (1998). Improving field instruction through Kolb learning theory. Journal of Social Work Education, 34 (1), 31-42.

Reed, W. M., Oughton, J. M., Ayersman, D. J., Ervin, J. R., & Giessler, S. F. (2000). Computer experience, learning style, and hypermedia navigation. Computers in Human Behavior, 16 (6), 609-628.

Robotham, D. (1995). Self-directed learning: the ultimate learning style? Journal of European Industrial Training, 19 (7), 3-7.

Ruokamo, H., & Pohjolainen, S. (2000). Distance learning in a multimedia networks project: main results. British Journal of Educational Technology, 31 (2), 117-125.

Simpson, C., & Du, Y. (2004). Effects of learning styles and class participation on students’ enjoyment level in distributed learning environments. Journal of Education for Library & Information Science, 45 (2), 123-136.

Smith, D. M., & Kolb, D. A. (1986). Learning Style Inventory: User’s guide, Boston: McBer & Company.

Terrell, S. (1995). Predicting success in computer-mediated coursework. Paper presented at the 6th International Conference on Technology and Distance Education, October, San Jose, Costa Rica.

Terrell, S. (2002). The effect of learning style on doctoral course completion in a web-based learning environment. Internet and Higher Education, 5 (4), 345-352.

Veres, J. G. (1991). Improving the reliability of Kolb’s revised Learning Style Inventory. Educational and Psychological Measurement, 51 (1), 143-150.



Linsey, T., & Tompsett, C. (2007). In an Economy for Reusable Learning Objects, Who Pulls the Strings? Educational Technology & Society, 10 (4), 197-208.

In an Economy for Reusable Learning Objects, Who Pulls the Strings?

Tim Linsey
ADC (Educational Technology), London, UK // Tel: +44 20 8547 7779 // t.linsey@kingston.ac.uk

Christopher Tompsett
Learning Technology Research Centre, Kingston University, London, UK // Tel: +44 20 8547 7520 // c.tompsett@kingston.ac.uk

ABSTRACT

It seems a foregone conclusion that repositories for reusable learning objects (RLOs), based on common standards and supported by suitable search facilities, will foster a global economic market in the production of RLOs. Actual reuse will support producers of high-quality RLOs, and other producers will be unable to compete, i.e. competition within the market will implicitly define the qualities that are needed. This paper challenges the suggestion that this will occur. If the market is defined as cost versus value, then the set of qualities that distinguishes RLOs from other educational software prohibits the development of scalable search engines to search the repositories. At a more sophisticated level of market analysis, it is the needs of the producers, rather than the purchasers, that will define quality in the market. Any attempt to limit this imbalance will, paradoxically, require acceptance of alternative constraints that many may find hard to accept.

Keywords

Reusable learning object, Reconfigurability, Course design, Complexity, Learning Object Economy, Repositories

Introduction

Reusable learning objects (e.g. Polsani, 2003; Downes, 2003; Liber, 2005) allow the cost of initial development to be offset by subsequent reuse in a wide range of different contexts (Nitto et al., 2006). Conformance to a standard should ensure that the same RLO will function consistently in any virtual learning environment (VLE) that supports the same standard. As RLOs can be subdivided and recomposed in different combinations, the potential for reuse in a global market is significantly enhanced (Hodgins, 2002). The development of this market depends on three components: a technical and legal infrastructure to support the market, the ability to decompose RLOs to allow for reuse within a different learning context, and a market that supports a community of producers (Liber, 2005) and a community of purchasers. Purchasers should be those reusing RLOs ‘at the third level’ (Koper et al., 2004, p. 16), where there is no connection between the context of the original design and the context of reuse, other than the RLO itself.

Tompsett (2005) argues that creating complex learning objects is inherently more difficult than decomposing one RLO into sub-components. He argues that two core problems in educational design, consistency within any set of RLOs and sequencing a set of RLOs, are computationally complex. In each case, small examples appear trivial to solve, moderately sized examples require unreasonable computing resources, and larger-scale examples are impossible to solve. Thus, as the number of potentially useful RLOs increases and the size of the possible RLOs decreases, it becomes impossible to search for useable sets of RLOs, irrespective of the technological infrastructure or computing power that is available.

This paper considers whether the establishment of a market in RLOs can overcome these restrictions. Educational designers may expect sophistication in teaching students, whereas course delivery, from an institutional perspective, may be more concerned with average effectiveness and economic efficiency (cf. Fletcher & Sackett, 1979). If the delivery of a course using RLOs is cost-effective, then finding a ‘good’ educational design may be unnecessary. This requires, of course, that a market for RLOs exists and that a suitable set of RLOs can be discovered to match the economic requirements. If the conventional view (e.g. Leeder et al., 2004) holds true, then competition will drive up the underlying standards, to the mutual benefit of the institutions, the students and those that produce the best RLOs.



This paper considers the potential for this market to exist. Firstly, taking Porter’s basic definition of a market (1985), the paper shows that the searches needed to exploit the market are computationally complex. Secondly, simplifying these assumptions in order to remove complexity, it shows that doing so undermines what is novel in the RLO model. Finally, bringing greater sophistication to the analysis by integrating Porter’s model of competitive strategy (ibid.), the paper suggests either that the market fragments, or that the producers, not the purchasers, will control quality.

Background

This argument is directed towards the design of courses that are suitably complex and contingent on the integration of RLOs for their design.

A first proviso is that the courses are to be delivered within an education system in which advanced knowledge is delivered through institutions. The emphasis on advanced knowledge ensures that we are not dealing with relatively simple levels of understanding, which can be constrained by additional factors (e.g. a national curriculum or standard assessments). The requirement that knowledge is to be delivered through institutions (e.g. Eraut, 1994, p. 118) avoids an outright rejection of the delivery of codified knowledge on socio-constructivist principles (e.g. Lave & Wenger, 1991; Brown & Duguid, 1991; Wenger, 1998).

A second proviso is that reuse will only be considered at the third level (Koper et al., 2004) ‘at a price’. In a mature market, it cannot be presumed that existing objects will be reused simply because they are placed in a repository (e.g. Walsh, 2006).

A third proviso is that successful course design depends explicitly on the novel, assured integration of RLOs rather than the linking of hypermedia resources over the Internet. Without this proviso, the discussion on RLOs would add little to previous work that has been achieved using Internet protocols (e.g. XML: Clark, 1997; Clark & Deach, 1998) and hypermedia/hypertext models over 15 years, or earlier (Nelson, 1965; Bush, 1945).

Problems that are specific to the current market (see, for example, Christiansen & Anderson, 2004) help us to understand why design using RLOs may be difficult now, but such problems could be considered too detailed, or too specific, to remain unsolved in the longer term. The discussion presumes that any transitional problems that occur in establishing a global market have been resolved. If the market is to provide a consistent force to increase quality, then the market must have acquired a degree of consistency, scale and stability for transitional effects to have disappeared.

Reusable Learning Objects and Metadata

The discussion that follows is deliberately generic, so that the interrelationships between some critical aspects are identified. We start the discussion with four principles, which are presented as a technical framework for RLOs in order to avoid arguments over particular standards.

The first principle concerns the relationship between the technical environment and the final ‘standard’ that is established for RLOs.

1. Any ‘final’ standard for RLOs will be platform neutral/independent.

This is unlikely to be controversial, since all the current standards accept this principle (cf. Johnson, 1998; Jesukiewicz, 2006). It provides assurance that an individual RLO will function with technical integrity on any platform (even if additional equipment may be required). How this will be achieved remains uncertain (Smythe, 2004; CETIS, 2004).

2. Integration of content is assured by conformance to standards at the metadata level.



This principle ensures that ‘equivalent’ RLOs can be exchanged without losing any educational value that is covered by the standard. This should also be uncontroversial. Although these standards are expected to evolve, there is no indication that this is infeasible (CETIS, 2004; CanCore: Friesen et al., 2002; EML: Manderveld & Koper, 2004; SCORM: Blackmon et al., 2004, etc.).

With these two principles in place, any set of RLOs should be interoperable and function correctly in any new context, and on any technical platform, independently of the context or platform for which each of them was developed (IEEE, 2002). Failure to achieve this fragments the global market and limits the set of useful RLOs to those that are consistent with each institution’s VLE.

The next two principles ensure that RLOs are essential to the design of a course.

3. The standard must ensure that any RLO can be broken down into a number of smaller RLOs, or else is one for which any further decomposition fails to make sense (either technically or educationally).

This principle distinguishes RLOs from other existing educational resources. It is assured by the use of XML, although, for this analysis, there is no requirement that the decomposition is hierarchical. This principle establishes a key property: each fragment is less tightly bound to the context of its original development (Hodgins, 2002) and, as a consequence, the market will contain a large number of RLOs for which the context of reuse is under-specified.

Finally we require that:

4. It must be possible to identify relevant RLOs to integrate into a course on the basis of searchable meta-data.

This is inherent in any of the current proposals, as meta-data is included within each RLO, but a wider range of solutions could be envisaged. The critical issue is that the potential value of any RLO can be represented as static information. If a full assessment requires that an RLO be inspected in detail, then it cannot be claimed that integration between RLOs is achieved simply through the use of RLOs, rather than through some additional factor that is intricately ‘designed in’ to individual RLOs but which is not coded in the meta-data.
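To make principle 4 concrete, a repository search of this kind amounts to filtering static metadata records without ever opening the RLOs themselves. The following minimal sketch is illustrative only: the field names are simplified stand-ins for LOM-style metadata rather than any particular standard, and the catalogue entries are invented. The point is that everything the search consults is static; nothing requires executing or inspecting the object itself.

# Illustrative only: a toy metadata record and search, standing in for a
# LOM-style repository query. Field names are simplified assumptions.
from dataclasses import dataclass

@dataclass
class RLOMetadata:
    identifier: str
    title: str
    subject: str
    level: str     # e.g. "undergraduate"
    cost: float    # price to the purchaser

def search(repository, subject, level, max_cost):
    """Return every RLO whose static metadata matches the criteria.
    Principle 4 requires that this is all the information a purchaser
    can use: the RLO itself is never inspected."""
    return [r for r in repository
            if r.subject == subject and r.level == level and r.cost <= max_cost]

repository = [
    RLOMetadata("rlo-001", "Intro to limits", "calculus", "undergraduate", 12.0),
    RLOMetadata("rlo-002", "Chain rule drill", "calculus", "undergraduate", 8.0),
    RLOMetadata("rlo-003", "Anatomy of the hand", "anatomy", "undergraduate", 20.0),
]
print([r.identifier for r in search(repository, "calculus", "undergraduate", 15.0)])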

These four principles should be sufficient to ensure, given a suitable technical and legal infrastructure (e.g. Downes, 2003; Koper et al., 2004), that a global market for RLOs could exist in which:

• a search on meta-data across repositories (e.g. van Assche et al., 2006), or meta-repositories, would identify any possible RLOs that could be used in constructing a course from RLOs.
• individual RLOs that have been ‘purchased’ can be integrated into more complex RLOs without the need to test that the combination will function as expected.

At this point the concept of ‘course design’ is deliberately left unrefined. Although many argue that current models are limited (e.g. Kassahun et al., 2006), a more critical issue is whether the market will be of sufficient size to be self-supporting. Four assumptions are suggested as essential. It will only be necessary to consider what is meant by course design using RLOs if such a market is viable.

A market for RLOs?

Few authors have considered that a market for RLOs might not exist. To clarify this, four critical assumptions are identified, on which the rest of the argument is structured. These should be uncontroversial.

The first two ensure that the market is of a sufficient size to allow a choice for most courses to be designed using RLOs.

1. Almost all searches for a single RLO, with some search criteria unspecified, will produce multiple solutions.

2. Almost all searches for a single, fully specified RLO, which could be used as a self-sufficient component of a course, will fail.



The first assumption ensures that creating a course from existing RLOs will not fail because specific components do not yet exist. The second reflects the fact that the majority of components are under-specified. It also excludes the utopian scenario in which any course could be constructed by selecting a handful of learning objects; if this were allowed, then it would become impossible to argue that the success of any course depended on design by RLO, rather than on detailed interrelationships within each RLO.

Once RLOs are available on this scale, then the simplest ‘market’ principles must also apply. As Porter notes: “Buyers must be willing to pay a price for a product that exceeds its cost of production, or an industry will not survive in the long run” (1985, p. 8). Since the market must reward those that produce ‘good’ RLOs, two further assumptions are introduced.

The first is:

3. Purchasers must be willing to pay a price for each RLO that allows the producers of good quality RLOs to make a reasonable profit in the long run.

The ‘cost’ to the institution of using an RLO will need to include this price, together with predicted estimates for technical, teaching and administrative support. The final assumption reflects the reciprocal nature of the market. If a purchaser buys any RLO from the market, then they must be able to associate an ‘educational value’ (EV) with each possible purchase without detailed inspection of the RLO. This leads to the final assumption:

4. Every purchaser must be able to give an estimated EV for each RLO that is calculated from the meta-data available.

For simplicity, EV is assumed to be numerical. Within the argument that follows, there is no need for EV to be expressed in financial terms, nor is there a need to be more precise about how it is calculated. More complicated evaluations of individual RLOs will only exacerbate the decision problem that follows.

The set of principles and assumptions listed above should be sufficient to allow each purchaser to select a set of RLOs with which to construct a course from the global market of RLOs, in which technical and educational effectiveness is assured.

If the market exists, then the market will genetically develop a set of qualities that are, de facto, those that ensure competitive success. The critical question is whether these qualities will be consistent with ‘fitness for course delivery’ from an institutional perspective, or whether other forces in the market will generate an alternative set of qualities.

Collecting sets of RLOs and the knapsack problem

To assess this argument the economic decision must be analysed in more detail. The fundamental question is: can a cost-effective set of RLOs be found whilst only considering the cost and EV of each RLO?

This is implicitly a question of reconfigurability: “the potential of a collection of existing RLOs to be re-configured as a larger, educationally effective part of a course and to integrate with that course” (Tompsett, 2005, p. 446). In the original paper, two educational properties of a set of RLOs were identified, each of which required a search through repository-based properties of individual RLOs in order to establish a subset that meets a given property of the complete set. Both of these problems were shown to belong to a wider set of decision problems termed ‘Non-Polynomial Complete’ (NPC; Garey & Johnson, 1990). All NPC problems are equally complex to solve, and none are possible to solve in practical terms, irrespective of the computing power that is available (or the speed of any network). The first critical point is to show that this market problem is indeed an NPC problem.

The search that is needed can be modelled as a ‘knapsack problem’, which belongs to the same set. A knapsack problem, with minor rephrasing from Garey & Johnson (1990, p. 65), requires that:

“A number of objects can be taken on a journey. Each of them has a certain value and occupies a certain size in the rucksack - we do not worry, for now, whether they would actually fit together. The problem is to decide if a subset of these can be found that will:

be below a maximum size (of the knapsack), but allow you to take at least a certain amount of value.” (italics as original)

If we draw a parallel between value and EV, and between size and cost, then the course designer is faced with the following problem:

A number of RLOs can be purchased to use on a course. Each of them has a certain EV and can be purchased at a given cost - we do not worry, for now, about other criteria. The problem is to decide if any subset of the RLOs can be found that will:

have a total cost within the maximum budget for the course but offer at least a certain amount of EV.
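As a sketch of why this decision is hard (illustrative only, with made-up costs and EVs; not the authors' formulation beyond the parallel drawn above), a direct search must in the worst case examine every subset of the candidate RLOs, so the work grows as 2^n in the number of RLOs n:

# Illustrative brute-force check of the RLO knapsack decision problem.
# Costs and EVs are invented; with n candidate RLOs there are 2**n
# subsets, which is what makes a global search infeasible at scale.
from itertools import combinations

def feasible_set_exists(rlos, max_budget, min_ev):
    """rlos is a list of (cost, ev) pairs. Decide whether some subset
    stays within max_budget while reaching at least min_ev."""
    for k in range(1, len(rlos) + 1):
        for subset in combinations(rlos, k):
            total_cost = sum(cost for cost, _ in subset)
            total_ev = sum(ev for _, ev in subset)
            if total_cost <= max_budget and total_ev >= min_ev:
                return True
    return False

catalogue = [(12.0, 3.5), (8.0, 2.0), (20.0, 5.0), (5.0, 1.0)]
print(feasible_set_exists(catalogue, max_budget=25.0, min_ev=5.5))  # True

The classical dynamic-programming solution to knapsack is only pseudo-polynomial in the budget, and the standard approximation schemes assume exactly the single cost/EV dimension used here; this is the sense in which, as argued below, the ‘best’ algorithms depend on ignoring other criteria.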

From an institutional perspective, it is impossible for the course designer to search across the global market in order to select an effective choice from the full set of RLOs. As the number of useable RLOs increases, and the size of the possible RLOs decreases, it becomes infeasible to collect economically useful sets of components, irrespective of the technological infrastructure, computing power, or standards that are established. State-of-the-art algorithms (see Tompsett, 2005) fail; ignoring ‘other criteria’ invalidates the approximations that are applied in the ‘best’ algorithms (see Appendix A). The only alternative is to place an upper limit on the number of components that can be selected - contradicting principle 3.

At this point we reach a central point of the paper. If the market for RLOs conforms to the principles and assumptions that have been outlined above, then a global search cannot be supported by software, either locally or centrally within repositories. The ‘mathematical’ complexity of reconfigurability will persist irrespective of changes in coding, such as those envisaged by the introduction of semantic coding (Tompsett, 1991), the semantic net (Dzbor et al., 2007) or the introduction of peer-to-peer technology (Brito & Moura, 2005).

Contrary to expectations, fragmenting RLOs into an increasing number of smaller components will reduce the possibility that the purchaser can make an effective decision.

In order to circumvent this problem, we will either need to remove the ‘mathematical’ complexity from the decision problem or add more detail to the analysis of the market in the hope that additional detail will simplify the search (cf. Waltz, 1975).

Simplifying the market

Although scale is the key factor that controls the size of the search, the inherent complexity of this problem is created by the ratio between the EV and the cost of each RLO. Simplifying the EV-to-cost ratio (EVCR) will remove complexity from the problem. Two classes of simplification, minimal-cost and cost-related-to-value, are considered below.

Minimal-cost covers scenarios in which the cost of course design using RLOs is so low that any other approach to course delivery would be dismissed. A universally low-cost model covers any scenario in which the price of every RLO is extremely low. This must be dismissed as irrelevant to improving quality: if the market is large, this scenario treats the actual decisions as irrelevant and removes any possible influence that these decisions could have on improving the quality of the products that are available. Altruism, an alternative to universally low cost, would suggest that a suitably large number of RLOs are made available to the market at an artificially low cost, as the development is benevolently funded from external sources. At the first level of analysis this sounds ideal for both the purchaser, who has little financial risk, and the producer, who has assured development costs. However, it is not the consumer that is directing the market in this case: the critical choices are made by those who fund the development. Any developer that attempts to compete on the ‘open market’, without initial funding, is immediately placed at an economic disadvantage, and so competition is no longer ‘open’.

This problem exists even if funding is restricted to the early stages of market development. Assured funding allows a small number of producers to establish a number of RLOs within the market of ‘early adopters’. These will provide a funding stream to finance future work - an advantage that is even more critical if the price of the existing RLOs is allowed to rise when external funding is removed. With established teams and procedures, these producers will be able to provide new RLOs to the market with lower overheads for production and financing. These producers can then choose their own strategy to limit any ‘new entrant’ producers (Porter, 1985, p. 131). This could vary from full exclusion, offering new RLOs at costs that cannot be matched by new competitors, to strategic selection of new competitors in order to provide a ‘cost umbrella’ that increases profitability (‘competitor selection’: Porter, 1985, p. 201 ff.). Such strategies are only vulnerable if a significant number of purchasers collaborate to widen the range of producers and accept the current cost disadvantage as part of a longer-term strategy to broaden the market. Even if the costs of these funded RLOs are not allowed to rise, their market penetration, almost certainly in the most re-useable RLOs, will give the funded producers significant opportunities to select one of a wide range of market strategies (see below) to control the sections of the market with the highest profitability.

Cost-related-to-value characterizes a range of solutions in which the cost is constrained (or calculated by methods that do not introduce further complexity) to fall within specific limits of the EV, or vice versa. In such cases, only one of the factors needs to be considered as the control factor in selecting a set of RLOs. In the simplest version, the cost of RLOs with equivalent meta-data must be the same, and it is tempting to view this as a scenario that sets uniformly high standards. Unfortunately, it is equally possible that all RLOs are as ineffective as each other. In the more general case, the cost of any particular RLO must lie between a maximum and a minimum value, both of which can be calculated from the EV. However, the producers will then lose the ability to differentiate between their own RLOs. With a maximum and minimum possible cost for each RLO, competition between producers will produce RLOs at either one or both of these costs (e.g. as is currently the case with ‘top-up fees’ in UK universities). Competitiveness between producers is then based on minimizing the cost of production to achieve fixed standards. This divides the market and, unless some other strategy is in place, forces each producer to adopt a ‘cost leadership’ strategy (minimizing production costs: Porter, 1985, p. 11; economies of scale: Shapiro & Varian, 1999, p. 25).

Whilst this is apparently attractive to the purchaser, the scenario decreases quality overall. The producer must set out to achieve as small a reduction in the cost to the purchaser as necessary, whilst minimizing their own production costs compared with other producers. The benefits to the purchaser are minimal in the short term and contradictory in the long term. As Porter notes, when more than one producer adopts this strategy, the market becomes unstable. The producers undercut each other, and the overall quality of what is produced is pushed down in order to maintain a viable level of profit. The only positive outcome for the purchaser is the development of a new market for cheap, low-end products of unreliable quality. Alternatively, both quality and margins continue to be cut until a single producer remains. At this point the cost of the low-end product can increase without any competition to ensure that quality will also increase. Cost-related-to-value does not improve quality from the purchaser’s perspective (cf. Tesco Supermarket in the UK).

Neither of these simplifications, minimal-cost or cost-related-to-value, produces an economic market in which the purchaser controls quality. Both, however, raise questions about the nature of the relationships between producers, and between purchasers and producers.

Improving the market model

It still remains plausible to suggest that a more detailed analysis of the market, providing more economic reality, will remove complexity without the purchaser losing control over quality.

In order to do so we must move to a model in which each producer adopts a strategy for competition within the market. Porter’s model identifies four strategies that can be used by a producer of RLOs to compete with others in order to gain, or protect, market share. Despite the changes that a networked world might be thought to have brought, the economic principles of these markets remain remarkably similar (Shapiro & Varian, 1999, p. 2). Two strategies act globally: lowest-cost and brand-name. The other two, focus and differentiation, segment the market. Cost leadership has already been discussed and shown to undermine, rather than improve, quality. A brand-name strategy is considered next.

A brand-name strategy builds an image of high quality, in direct contrast to the lowest-cost strategy. The strategy is based on trading at a higher-than-market price, on the basis of an apparent value, even if there may be no actual increase in quality. In the most successful cases, the brand name can become a de facto justification of a good decision (cf. ‘no-one ever got sacked for buying IBM’). Put in its simplest form, this makes clear that the purchaser is paying a cost for the illusion of quality, rather than for quality itself. More realistically, this approach exploits the difference between indicators of quality and quality itself. If it is more efficient to create the impression of higher value than to build higher quality into the RLOs, then a higher level of profit can be maintained. The producer may choose to invest this profit in promoting the brand name, or in improving quality, but the customer cannot affect this decision. Unless the reputation is established, in the first place, by a de facto higher quality relative to cost for the purchaser, and the higher level of profit is used to maintain this ratio relative to other competitors, the effect on the market is to reduce quality relative to cost. This strategy is naturally restricted to a small number of producers, and the value of any margin is expected to be less marked for information, as opposed to material, products (Shapiro & Varian, 1999, p. 31).

A strategy based on focus or differentiation, or any combination of the two, responds to differences between purchasers. Focus responds to characteristics that are inherent in the market (Porter, 1985, p. 131) and concentrates on the delivery of products to that sector. This allows internal processes to be optimized to produce a narrower range of products where optimization would not be cost-effective within a global market. For any purchaser within that sector, the scale of the search is proportionately reduced, but these linear effects have no more impact on the effectiveness of the search process than increasing the speed of the underlying computers. Such changes have negligible effect on complexity unless the actual number will always remain low, i.e. the customer has fewer options and less choice. The effect on other sectors is minimal. If purchasers are to control quality, then this can only come from the one remaining approach: differentiation. This final strategy is the most complex and is considered in more detail.

Differentiation requires that “a firm be uniquely able to create competitive advantage for its buyer in ways besides selling to them at a lower price” (Porter, 1985, p. 131). This takes into account factors that are specific to the competitive strategy of each purchaser, rather than those that are common across a sector. If the producer of an RLO can design their product to meet the needs of a particular purchaser and increase the profit margin of that purchaser’s own product, then the producer can demand a higher price for the RLO relative to competitors’ RLOs. This depends on the ‘use criteria’ for each purchaser, i.e. the qualities that allow each one to differentiate themselves within their own market (Porter, 1985, p. 137). If an institution can increase its profit margin on a particular course by selecting one producer of RLOs rather than another, then it is cost-effective to pay more than the lowest-cost price for an equivalent RLO. This is a symbiotic relationship: both the producer of the RLO and the course provider must increase their profits, and protect this increase, a form of economic lock-in (Shapiro & Varian, 1999, p. 110 ff.). We shall return to this point later.

The most obvious examples occur at the lower end of the purchaser’s value-chain, for example by customizing each RLO to the specific needs of the purchaser at a cost that is lower than would be incurred by the purchaser. Less obvious examples occur where the value is added directly to the upper end of the purchaser’s value-chain (cf. ‘Intel Inside’). Porter discusses a number of standard approaches to differentiation, of which three (summarised from p. 135) could be seen as relevant to the market for RLOs. Expressed in terms of how they affect the value-chain for the delivery of a course, these are: lowering the direct cost of teaching time, lowering the cost of support staff, and lowering the risk of failure. Although each of these seems indicative of an increase in quality, a more detailed analysis is needed to understand the impact of this effect on the global market.

A reduction in teaching time can occur either through reducing the time that is spent on integrating an RLO into a course, or by directly reducing the time that is needed, e.g. with automated assessment. Reducing the cost of integration with a part of the course that is not supported by an RLO suggests that a ‘gap’ exists within the market (contradicting assumption 1). Suggesting that there is a reduction in the cost of integration between RLOs indicates that cost-effective design depends on more than metadata (contradicting principle 4). Reducing the cost associated with running an RLO is evidently advantageous for that particular institution and, as the relationship is designed to be of mutual benefit, differentiation circumvents the search process for that institution. However, within the global market, the effect is minimal. The cost of an RLO may be reduced, but the RLO must still be discovered through the search process by other institutions. The changes will only have an effect on the wider market if the reduced cost reduces the complexity of the search - which is not the case. Since there is no reduction in complexity within the global market, there is no advantage, unless the same improvements can be copied by all the producers (see below).



The second option, lowering the cost of support staff (and facilities), is unlikely to be significant. These costs are almost certain to depend on the underlying technology, and not the RLO itself (principles 1, 2 and 3). Switching costs should be minimal and reduce the risk of hardware/software lock-in (Shapiro & Varian, 1999, p. 116). In specialised fields, some differentiation may occur. For simple anatomy, the low-cost, ubiquitous ‘Anatomy Colouring Book’ (Kapit & Elson, 2001) may require far less technical support than a virtual reality model of the same material. If the technical infrastructure is available to support a virtual model, however, then the same technology can be used in a wider set of courses (Székely & Satava, 1999). Such cases are limited and will have little impact on the overall standards that are established.

The final consideration, lowering the risk of failure, might appear to act as a discriminating factor at the institutional level but is implicitly limited by the design of RLOs. If RLOs conform to the standards, there should be no risk of technical failure (principle 1), or lack of integration with other RLOs (principle 2). Neither should there be any possibility of failure in educational terms, as long as the meta-data is valid (principle 4). The only scope for increased differentiation in this category is that some RLOs will lower the probability of failure in educational terms by factors that cannot be determined from the meta-data. We suspect that this may be the most interesting aspect - but that contradicts principle 4.

However, before we close the discussion on differentiation, we must note that the symbiotic relationship places an additional limit on the effect that differentiation can have on a global market. As noted above, an increase in profit must be protected for both the producer of the RLOs and the particular institution that is involved. If other producers of RLOs can imitate the same changes in their products, then competition is reintroduced and the added value of the specialization for the producer decreases, or disappears. The same effect applies to the purchaser’s market. If every course provider can achieve the same benefit, then the increase at the upper end of the value-chain is lost and the higher price for each RLO cannot be financed. This implies that both the producer and the purchaser must prevent imitation, through an effective rights management strategy (copyright and patent protection, etc.). Actual ‘bitlegging’, as Shapiro and Varian comment, is naturally limited; the more effective a ‘bitlegger’ is at advertising, the more easily they are discovered (1999, p. 92).

Power to the purchaser?

The introduction of a more sophisticated model of the global market suggests that, in almost all cases, the institutions, when acting as independent purchasers, will have only minimal effect on the standards and underlying quality of the RLOs. Only differentiation, as a competitive strategy for the producer, suggests that competition leads to improvements from an institutional perspective. Even then, the institutions and producers involved must act to prevent, rather than support, a wider increase in quality.

Conclusions

This paper reviewed the proposition that good quality would become defined and controlled by the purchasers within a global market for RLOs. On the basis of a minimal set of principles and assumptions, it was argued that the economic decision to select an economically efficient set of RLOs creates a search problem that is impossible to support in software.

This model of the market was then reviewed in two ways. Simplifying the model, in order to reduce complexity, either lowers standards or allows external agencies, rather than educational institutions, to define the qualities that are needed. Providing a more detailed analysis of the market cannot reduce the complexity of the search process and, if anything, suggests that the institutions will have even less control over quality. Even though quality might be improved in a limited number of instances, market forces will act to retain distinctions between producers, rather than to promote consistent improvement across the market. Whilst standards in RLOs emphasize uniformity, the economics of the market place react to foster diversity between producers and to fragment the market.

Two issues offer scope for a deeper analysis than is possible here. The first is that both ‘cost’ and ‘educational value’ have been left under-specified. This was chosen to emphasize that the argument holds true irrespective of whether these are defined on a global or a local basis. Contextualising these definitions for each institution will, almost certainly, increase complexity but allow the possibility that quality may still be achieved, from a local perspective, even if the global market cannot be controlled in this way. However, that would suggest that the concept of course design based on RLOs is contingent on local, non-generic effects. The inter-relationship between market structure, market strategy and context-dependent value will need much more analysis if quality is to imply anything more useful to course designers than conformance to technical standards.

The second issue to explore is the importance of signalling criteria (Porter, 1985, p. 142), the indirect indicators used by a purchaser to identify ‘good’ producers (cf. institutional reputation in filtering a literature search). Signalling criteria cover the additional information, beyond product specification, that is used to build trust between purchasers and producers that each RLO will perform ‘as specified’. Assumption 4 may therefore need to reflect this more directly. However, although the collection and management of this information is well understood within recommender systems (Resnick & Varian, 1997), as in MERLOT (2003), the parallel management and control of such information from the producers’ perspective would also need to be considered.

Even so, it should be clear that a global market provides no quick solution to defining quality from a purchaser’s perspective. This is a new facet of reconfigurability (Tompsett, 2005): integrating individual RLOs into a more complex RLO is categorically harder than taking a complex RLO and fragmenting it into a set of simpler RLOs.

Acknowledgements

The authors are grateful for general editorial comment from Dr. Linda Burke and Hilary Tompsett and for a critical appraisal of the economic arguments from Dr. R. Roberts of Kingston University. We are also grateful for the critical comments from both reviewers of the original submission. These have offered the opportunity for some of the arguments to be elaborated and others to be made more precise.

References<br />

Blackmon, W., Brooks, J., Roberts, E., & Rehak, D. (2004). The Overlap and Barriers between SCORM, IMS Simple<br />

Sequencing, and adaptive Sequencing, retrieved <strong>October</strong> 15, <strong>2007</strong> from http://141.225.40.64/lsal/expertise/papers/<br />

conference/ah2004/wit20040329.pdf.<br />

Brito, G. A. D. D., & Moura, A. M. d. C. (2005). ROSA - P2P: a Peer-to-Peer System for Learning Objects<br />

Integration on the Web. In R. P. M. Fortes (Ed.), Proceedings of the 11 th Brazilian Symposium on Multimedia and<br />

the web, New York: ACM Press, 1-9.<br />

Brown, J. S., & Duguid, P. (1991). Organizational learning and communities of practice: towards a unified view of<br />

working, learning and innovating. Organization Science, 2 (1), 40-58.<br />

Bush, J. V. (1945). As We May Think. The Atlantic Monthly, 176 (1), <strong>10</strong>1-<strong>10</strong>8.<br />

CETIS (2004). The e-learning Framework Summary June 2004, retrieved <strong>October</strong> 15, <strong>2007</strong>, from<br />

http://www.jisc.ac.uk/uploaded_documents/elf-summary7-04.doc.<br />

Christiansen, J.-A., & Anderson, T. (2004) .Feasibility of Course Development Based on Learning Objects: Research<br />

Analysis of Three Case Studies. International Journal of Instructional <strong>Technology</strong> and Distance Learning, March,<br />

retrieved <strong>October</strong> 15, <strong>2007</strong>, from http://www.itdl.org/Journal/Mar_04/article02.htm.<br />

Clark, J. (1997). Comparison of SGML and XML, retrieved <strong>October</strong> 15, <strong>2007</strong>, from http://www.w3.org/TR/NOTEsgml-xml-971215.<br />

Clark, J., & Deach, S. (1998). XSL version 1.0, retrieved <strong>October</strong> 15, <strong>2007</strong>, from http://www.w3.org/TR/1998/WDxsl-19980818.<br />

205


Downes, S. (2003). Design and Reusability of Learning Objects in an Academic Context: A New Economy of Education? United States Distance Learning Association Journal, 17 (1), 3-22.

Dzbor, M., Stutt, A., Motta, E., & Collins, T. (2007). Representations for semantic learning webs. Journal of Computer Assisted Learning, 23 (1), 69-82.

Eraut, M. (1994). Developing professional knowledge and competence, London: Falmer.

Fletcher, S., & Sackett, D. (1979). The periodic health examination: Canadian Task Force on the Periodic Health Examination. Canadian Medical Association Journal, 121, 1193-1254.

Friesen, N., Roberts, A., & Fisher, S. (2002). CanCore: Metadata for Learning Objects. Canadian Journal of Learning and Technology, 28 (3), retrieved October 15, 2007, from http://www.cjlt.ca/content/vol28.3/friesen_etal.html.

Garey, M. R., & Johnson, D. S. (1990). Computers and Intractability: A Guide to the Theory of NP-Completeness, New York: W. H. Freeman & Co.

Hodgins, H. W. (2002). The future of learning objects, retrieved October 15, 2007, from http://www.reusability.org/read/chapters/hodgins.doc.

Horowitz, E., & Sahni, S. (1974). Computing Partitions with Applications to the Knapsack Problem. Journal of the ACM, 21 (2), 277-292.

IEEE (2002). LOM Data Model Standard (1484.12.1), retrieved October 15, 2007, from http://ltsc.ieee.org/wg12/.

Jesukiewicz, P. (2006). Sharable Content Object Reference Model (SCORM): An Overview and Update for HPT Professionals, retrieved October 15, 2007, from http://www.adlnet.gov/downloads/DownloadPage.aspx?ID=238.

Johnson, D. (1998). ADL Initiative Overview, Tasking and Direction Briefing, retrieved October 15, 2007, from http://adlnet.gov/downloads/downloadpage.aspx?ID=15.

Kapit, W., & Elson, L. M. (2001). The Anatomy Colouring Book, Wokingham: Addison Wesley.

Kassahun, A., Beulens, A., & Hartog, R. (2006). Providing Author-Defined State Data Storage to Learning Objects. Educational Technology & Society, 9 (2), 9-32.

Koper, R., Pannekeet, K., Hendriks, M., & Hummel, H. (2004). Building communities for the exchange of learning objects: theoretical foundations and requirements. ALT-J, 12 (1), 21-35.

Lave, J., & Wenger, E. (1991). Situated Learning: Legitimate peripheral participation, Cambridge: Cambridge University Press.

Leeder, D., Boyle, T., Morales, R., Wharrad, H., & Garrud, P. (2004). To boldly GLO - towards the next generation of Learning Object. Paper presented at the E-Learn 2004 Conference, 1-5 November 2004, Washington, DC, USA.

Liber, O. (2005). Learning Objects: conditions for viability. Journal of Computer Assisted Learning, 21 (5), 366-373.

Manderveld, J., & Koper, E. J. R. (2004). Educational modelling language: modelling reusable, interoperable, rich and personalised units of learning. British Journal of Educational Technology, 35 (5), 537-551.

MERLOT (2003). Multimedia Educational Resource for Learning and Online Teaching, retrieved October 15, 2007, from http://www.merlot.org/.
Mitchell, G. G., O'Donoghue, D., & Trenaman, A. (2000). A New Operator for Efficient Evolutionary Solutions to the Travelling Salesman Problem. Applied Informatics 2000, Innsbruck, Austria, retrieved October 15, 2007, from http://www.cs.may.ie/~georgem/papers/TSP_AI_2000.ps.

Nelson, T. H. (1965). A File Structure for the Complex, the Changing and the Indeterminate. Proceedings of the 1965 20th National Conference, New York: ACM Press, 84-100.

Nitto, E. D., Mainetti, L., Monga, M., Sbattella, L., & Tedesco, R. (2006). Supporting Interoperability and Reusability of Learning Objects: The Virtual Campus Approach. Educational Technology & Society, 9 (2), 33-50.

Polsani, P. R. (2003). Use and Abuse of Reusable Learning Objects. Journal of Digital Information, 3 (4), retrieved October 15, 2007, from http://jodi.tamu.edu/Articles/v03/i04/Polsani/.

Porter, M. E. (1985). Competitive Advantage, New York: The Free Press.

Resnick, P., & Varian, H. R. (1997). Recommender Systems. Communications of the ACM, 40 (3), 56-58.

Shapiro, C., & Varian, H. R. (1999). Information Rules: A strategic guide to the networked economy, Boston, MA, USA: Harvard Business School Press.

Smythe, C. (2004). State-of-the-Art Report on Technologies and Techniques for Testing, retrieved October 15, 2007, from http://www.opengroup.org/telcert/documents/TELCERT_State_of_the_Art_Report.pdf.

Székely, G., & Satava, R. M. (1999). Virtual Reality in Medicine. BMJ, 319 (13 November), 1305, retrieved October 15, 2007, from http://www.bmj.com/cgi/content/full/319/7220/1305.

Tompsett, C. (1991). Contextual Browsing in a Hypermedia Environment. In Giardina, M. (Ed.), NATO Invited International Workshop on Interactive Multimedia, New York: Springer Verlag, 95-110.

Tompsett, C. (2005). Reconfigurability: creating new courses from existing learning objects will always be difficult! Journal of Computer Assisted Learning, 21 (6), 440-448.

van Assche, F., Duval, E., Massart, D., Olmedilla, D., Simon, B., Sobernig, S., Ternier, S., & Wild, F. (2006). Spinning Interoperable Applications for Teaching & Learning using the Simple Query Interface. Educational Technology & Society, 9 (2), 51-67.

Walsh, K. (2006). Reusable Learning Objects. BMJ, 332 (20 May), 1193, retrieved October 15, 2007, from http://www.bmj.com/cgi/content/full/332/7551/1193-a.

Waltz, D. (1975). Understanding line drawings of scenes with shadows. In Winston, P. H. (Ed.), The Psychology of Computer Vision, New York: McGraw-Hill, 19-91.

Wenger, E. (1998). Communities of Practice: Learning, Meaning and Identity, Cambridge: Cambridge University Press.

Appendix A. Good-enough approaches

The standard method for solving knapsack problems follows the approach developed by Horowitz & Sahni (1974; see also Mitchell et al., 2000). In purchasing a set of RLOs, the organisation is assumed to be able to estimate a target ratio (EVCR) between the overall cost to run the course and the overall EV (whatever the mode of delivery). This ratio can then be used to remove from initial consideration any individual RLO with a ratio much worse than this target value. This reduces the set of possible RLOs to search through, though it does not alter the number of RLOs that could be included in a solution. Once a set of potential solutions has been found, a check can be conducted to test whether swapping any of the current ‘best set’ with some of those that were ignored would produce a better solution overall.
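A minimal sketch of this filter-and-improve heuristic might look as follows. It is illustrative only: the function and parameter names are our own, the data format is hypothetical, and the EV of a set is naively taken as the sum of the individual EVs, which is exactly the assumption the example below calls into question.

```python
from itertools import combinations

def select_rlos(rlos, budget, target_evcr, slack=0.8):
    """Filter-and-improve heuristic in the spirit of Horowitz & Sahni (1974).

    rlos: list of (name, ev, cost) tuples. Note the naive assumption
    that the EV of a set is just the sum of the individual EVs.
    """
    # 1. Prune any RLO whose own EV/cost ratio is much worse than the target.
    keep = [r for r in rlos if r[1] / r[2] >= slack * target_evcr]
    pruned = [r for r in rlos if r not in keep]

    # 2. Search the (smaller) candidate set for the best affordable combination.
    best, best_ev = (), 0.0
    for k in range(1, len(keep) + 1):
        for combo in combinations(keep, k):
            if sum(r[2] for r in combo) <= budget:
                ev = sum(r[1] for r in combo)
                if ev > best_ev:
                    best, best_ev = combo, ev

    # 3. Check whether swapping a chosen RLO for a pruned one improves the set.
    improved = True
    while improved:
        improved = False
        for out in best:
            for sub in pruned:
                trial = tuple(r for r in best if r != out) + (sub,)
                if sum(r[2] for r in trial) <= budget:
                    ev = sum(r[1] for r in trial)
                    if ev > best_ev:
                        best, best_ev, improved = trial, ev, True
                        break
            if improved:
                break
    return best, best_ev
```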

The validity of this approach depends on the sets of RLOs that are collected on the basis of the EVCR value. A simple example shows that estimating the EV of a combined set is far harder than this. The example is ‘trivial’ in scale but should suffice to illustrate why the standard algorithm would never work.

Table 1. Selecting resources

Resource   Coverage     EV    Cost   EVCR
A          a, b, c, d   8     8      1
B          a, b         3.6   4      0.9
C          e, f         2.8   4      0.7

Table 1, above, shows the coverage, EV and cost for three resources. The current ‘best-approach’ model would select the first two resources (A and B) to produce the best set within a budget of 12. Swapping C for B could never be considered, as C has the lowest nominal EVCR. Ignoring the other information, i.e. the coverage of each resource, misses the duplication of material that has occurred.
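A few lines make the failure concrete. This is again a sketch: the set-based encoding of coverage, and the decision to report the naive summed EV alongside the topics actually covered, are our own illustrative choices.

```python
# Resources from Table 1: name -> (topics covered, EV, cost).
resources = {
    "A": ({"a", "b", "c", "d"}, 8.0, 8),
    "B": ({"a", "b"}, 3.6, 4),
    "C": ({"e", "f"}, 2.8, 4),
}

def naive_ev(names):
    """EV as the greedy model sees it: a plain sum that ignores overlap."""
    return sum(resources[n][1] for n in names)

def coverage(names):
    """Topics actually covered by the chosen set (duplicates collapse)."""
    return set().union(*(resources[n][0] for n in names))

# Greedy selection by EVCR with a budget of 12 picks A (1.0), then B (0.9):
print(naive_ev(["A", "B"]), sorted(coverage(["A", "B"])))
# 11.6 ['a', 'b', 'c', 'd']   <- topics a and b are paid for twice

# The rejected combination A + C costs the same but covers two more topics:
print(naive_ev(["A", "C"]), sorted(coverage(["A", "C"])))
# 10.8 ['a', 'b', 'c', 'd', 'e', 'f']
```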

Knowlton, D. S. (2007). I Design; Therefore I Research: Revealing DBR through Personal Narrative. Educational Technology & Society, 10 (4), 209-223.

I Design; Therefore I Research: Revealing DBR through Personal Narrative

Dave S. Knowlton
Department of Educational Leadership, Southern Illinois University Edwardsville, USA // Tel: +1 618.650.3948 // Fax: +1 618.650.3808 // dknowlt@siue.edu

ABSTRACT
Design-Based Research (DBR) is a new and still-emerging approach to research about design, learning, and allied areas. This article reports one designer’s experiences within a DBR project. Whereas most reports of DBR focus on the outcomes of the design itself, the current paper offers a hermeneutical perspective by focusing on the personal narrative of the designer. Using Edelson’s (2002) categories of learning as a basis for the discussion, the author reports the development of his own domain theories, design frameworks, and design methodology. Implications are offered for other designers who would consider using personal narrative and hermeneutics.

Keywords
Design-based research, Rapid prototyping, Personal narrative, Computer-mediated bulletin boards, Theory development

Introduction

Because design-based research (DBR) is a relatively new and still-emerging methodology used by learning scientists, we have much to learn about its processes and resulting products. Most scholarship about design research has focused on outcomes, such as the ontological innovations that emerged from a DBR experiment (cf., diSessa & Cobb, 2004). This is as it should be, since one of the main points of DBR is to improve educational practice (Collins, Joseph, & Bielaczyc, 2004). Design, however, can be a valuable learning experience in its own right (Edelson, 2002; Knowlton, 2004a; Nelson, 2003; Nelson & Knowlton, 2005; Wiggins & McTighe, 2005), and even when designers are not acting as researchers, per se, their own reflections on their efforts within a design project can be useful toward a more complete understanding of design-based research as a phenomenon of engagement.

Therefore, designers should examine their experiences as designers and describe the ways that they find themselves situated within design scenarios. One way for designers to illustrate their own situatedness within various scenarios is through personal narratives. Personal narratives can capture the experiences of designers within specific contexts, and thus create a shift from curriculum and pedagogical development towards DBR, a shift described and justified by Edelson (2002). But this type of DBR focuses on the interactions between a designer and a design, rather than the outcomes of design. After all, as noted by Barab and Squire (2004), the way that designers are situated within a context can impact the research itself.

The purpose of this paper is to offer my personal narrative as a designer charged with prototyping strategies to support the use of a computer-mediated bulletin board among university students. In the first part of this paper, brief support is provided for rapid prototyping as an appropriate design methodology within higher education. In the second part, I describe the particular design efforts on which this paper is based. These first two parts serve as a contextual frame for the third part, in which I offer a personal account of my design experiences. In the fourth part, my experiences are generalized in an effort to offer implications for other would-be designers who may find value in considering design scenarios through a personal narrative approach.

In its totality, this paper is based in the exploratory research tradition (Marshall & Rossman, 1995), in that it serves “to investigate little understood phenomena to identify/discover important variables to generate hypotheses for further research” (p. 41). It is also framed by hermeneutics, in that it "establishes context and meaning” for human action (Edelson, 1988; Patton, 1990, p. 85) and attempts to provide an understanding of the contextualized “lived experience" (O'Grady, Righy, & Van den Hengel, 1987, p. 94) of a designer.

Rapid Prototyping and Design-Based Research

Formal instructional design is often too expensive a process to be viable in the higher education classroom on a day-to-day basis, and carefully designed learning environments require such detailed planning that it is unrealistic to expect college professors to enter each semester with a solid design that fully supports an inquiry-based or exploratory environment. Sometimes, then, the best a faculty member can do is to prototype an assignment—to design “something” as part of a new course preparation and tweak it over time through iterative cycles of implementation and revision. Reiser (2001) notes that rapid prototyping has become a widely used process because it allows a designer to arrive at instructional products more quickly. In this respect, rapid prototyping is also appropriate as a basis for design research, in that design research is based on “progressive refinement”—“putting a first version of a design into the world to see how it works” and then revising the design “until all the bugs are worked out” (Collins, Joseph, & Bielaczyc, 2004, p. 18). Thus, rapid prototyping has dual potential: it can serve both as a practical approach to curriculum and pedagogical development and as a basis for scholarly inquiry.

In spite of this dual potential for productivity, the use of rapid prototyping in higher education is not well documented in the academic literature. One explanation for this deficiency is that rapid prototyping is a process more closely associated with meeting goals that are indicative of business and industry (Stokes & Richey, 2000). Thus, higher education faculty members simply might overlook rapid prototyping as a viable design approach that is worthy of being documented. Even when those in higher education recognize rapid prototyping as a viable design approach, documentation efforts may be hindered because models of rapid prototyping are surprisingly complex (Stokes & Richey, 2000), and the sound execution of rapid prototyping often requires a support team, such as graduate assistants and experienced designers (cf., Lohr, Javeri, Mahoney, Gall, Li, & Strongin, 2003). The complexity of the process and the lack of available human capital to support it may cause faculty members to shy away from documenting their efforts.

Furthermore, within most disciplines, scholarship that deals with pedagogy, curriculum development, and allied areas is not valued as highly as discipline-based research. Even where it is valued, reporting ideas from implementing a design approach like rapid prototyping requires a reconsideration of research paradigms. As Barab and Squire (2004) note, DBR has a “pragmatic philosophical,” not positivist, underpinning (p. 6), whereas most professors are trained in a positivist research tradition. This view seems supported by Dede (2004), who notes that scholars trained in traditional research are likely to view a DBR approach as not very promising.

As I have pointed out, rapid prototyping may be a useful design approach within higher education, and rapid prototyping as a design methodology is congruent with DBR; yet little literature documents the experiences of professors as they prototype materials and processes. This paper begins to fill this void in the literature.

The Design Context and Events

In this section, I provide an overview of one experience in which I used rapid prototyping as a methodology for developing instructional strategies in the context of higher education. My purpose here is not to offer a full treatment of the processes and products of this design, as this information has been reported elsewhere (e.g., Knowlton, 2006; Knowlton, 2004b; Knowlton, 2004c). Rather, my purpose is to provide a context for the personal narrative that is offered in the next section of this paper. In this section, I describe the context in which my design activities occurred, and I describe the prototyping of the assignment across three phases of implementation. Within the discussion of each phase, I describe the ways that the evolving context influenced design decisions and how each phase was evaluated.

Context in which Rapid Prototyping Design Occurred

Barab and Squire (2004) note the importance of considering the broader context in which a design is developed and implemented. Furthermore, Collins, Joseph, and Bielaczyc (2004) note that “setting” is a critical variable to characterize when reporting DBR efforts. In this case, the macro context in which the design was implemented was a large factor in the progression of the designs and, more to the exact point of this paper, in the ways that I found myself situated within the design process.

Rapid prototyping was used in the context of a two-year, field-based teacher-certification program that was designed to prepare undergraduate students for careers as public school teachers. These preservice teachers were assigned to K-12 classrooms in partnership schools. During the first year of the two-year program, the preservice teachers often served as paraprofessionals or aides. During the second year, though, the preservice teachers became more centrally involved in teaching and learning activity.

Throughout the two years, a team of university faculty supervised weekly content seminars, which largely resembled “traditional” college classrooms. I was responsible for teaching educational psychology within the seminar. In principle, though, “courses” were non-existent. Instead, each course’s content was integrated into seminar activities and discussions. Rapid prototyping was used to design strategies to support the effective use of computer-mediated bulletin board (CMBB) discussion. The rapid prototyping of strategies as a means of promoting learning is common (Stokes & Richey, 2000).

Phases of Design and Implementation

Table 1 provides an overview of the factors that influenced the initial, intermediate, and refined versions of the design and the characteristics of each design. Each version corresponds to a school semester. Also shown in Table 1 is a summary of the evaluation findings that provided a source on which to base the subsequent iterations of design.

Table 1. Overview of three prototypes of CMBB discussion guidelines

Initial Version
Factors influencing design:
• Need for a flexible and efficient communication tool
• Emerging nature of the field experience
• Lack of information about the participants’ knowledge and skills
• Need for basic content principles
Design characteristics:
• Laissez-faire
• Preservice teachers were simply made aware that the discussion board existed
Evaluation:
• Ineffectual and rare use
• Preservice teachers reported that they didn’t see the practical value of using CMC

Intermediate Version
Factors influencing redesign:
• Initial version was ineffectual
• Shifting responsibilities of preservice teachers
• Changes to the use of seminar time
Design characteristics:
• Preservice teachers assigned to two groups
• Discussion based on three-week cycles
• Discussion centered on student-initiated problems and proposals for practical solutions
Evaluation:
• Problems were narrow in scope
• Interaction among the preservice teachers was limited
• Grading was cumbersome
• Preservice teachers noted the workload was heavy and contrived

Refined Version
Factors influencing redesign:
• Evaluation of intermediate version
• Elimination of seminar time for educational psychology
Design characteristics:
• Addition of a privacy statement and a job aid of CMC conventions
• Additional direction to govern discussion contributions (e.g., focus on “instructional problems” only; more scaffolding of what constitutes a “good” contribution)
• Addition of the self-report form
Evaluation:
• Reducing the number of required contributions was helpful in terms of the usability of the assignment
• Scaffolding questions were useful in helping the preservice teachers think more broadly about applications of educational psychology

Initial Design

Because the field-based program was new, the context of the field experience emerged as implementation progressed. This symbiosis between context and implementation influenced the ways the bulletin board might be used. Beyond the newness of the field program, two other contextual factors influenced design:
• I had no knowledge of the computer skills of the learners for whom I was designing. Had they used a CMBB before? Did they even have the skills to find the bulletin board and log on?
• The preservice teachers had never before taken educational psychology. Certification tests that the preservice teachers would need to pass suggested the need for them to obtain a basic understanding of educational psychology concepts and principles.

Because of these contextual factors, no specific strategies were designed to support the CMBB discussion. The faculty team simply made the preservice teachers aware that WebCT (the university’s course management tool) had a discussion board for asynchronous sharing of ideas. This laissez-faire approach resulted in virtually no use of the bulletin board. Some preservice teachers suggested that it was nice to know the bulletin board was available, but they did not see how sharing ideas on it would help them prepare for working in their classrooms.

Intermediate Design

Beyond the obvious inadequacies of the initial use of the electronic bulletin board, a conflict between the emerging role of the preservice teachers and a decision made by university personnel necessitated formalized guidelines to support the use of the bulletin board as an educational tool. The preservice teachers were moving from serving as paraprofessionals who assisted the teacher to professionals who were responsible for designing and implementing lesson plans. As they grew into these professional roles, they needed to experience a shift from knowing theory as described in textbooks to using theory as a basis for their problem-solving efforts. CMBBs are appropriate tools for supporting problem solving within field experiences (Beckett & Grant, 2003). In spite of the preservice teachers becoming more authentically situated, the team of faculty members who supervised the weekly seminars decided that seminar time should be divided among content areas—“Today is an educational psychology seminar; next week will be a reading methods seminar.” Such a decision militates against the authenticity of a field experience. Designing strategies to support the use of the CMBB could facilitate continued integrated connections, even though seminar time was less integrated.

I designed instructional strategies in the form of formal assignment guidelines that were similar to those already existing in the literature (cf., Knowlton, 2002). Participants were divided into two groups, and the discussion was based on a three-week cycle of sharing and response. At the end of each cycle, roles were reversed so that preservice teachers in group one performed the responsibilities of the preservice teachers in group two and vice versa.

During the first week of the discussion cycle, preservice teachers in group one were responsible for defining a professional problem that they were experiencing within their partnership school. During the second week, the preservice teachers in group two were responsible for using the textbook as a learning-on-demand resource to theoretically frame the problems that their colleagues had shared during week one. During the third week, all the preservice teachers were responsible for three contributions to the bulletin board discussion. To build in reflection time, the assignment guidelines dictated that not all three contributions should be posted on the same day of the week. Because I wanted the preservice teachers focusing on dialogue, not on earning a grade, I loosely structured the assessment criteria, allowing them to receive most of the credit simply by participating in the discussion.

My assessment of students’ efforts served as one basis for determining additional changes that could improve the efficacy of the CMBB assignment: “[O]nly the integration of assessment [with] evaluation can produce a clear picture of an online discussion’s educational viability” (Knowlton, 2001, p. 164). Through the synthesis of evaluation and assessment, several findings emerged:
• The problems shared by the preservice teachers were extremely narrow in scope, with over 90% focusing on classroom discipline.
• Most contributions during week three of the discussion were replies to the original problem posted during week one. In other words, the preservice teachers were not discussing the problems by interacting; they merely continued to offer solutions to the original problem. Furthermore, the ideas across solutions were highly redundant.
• Grading overshadowed other activities that are related to assessment but more productive toward creating continued learning among students, such as my reacting to their discussion contributions, highlighting common themes among their interactions, and offering contributions to the discussion as an authentic participant. Certainly, grading is within the instructor’s purview, but it should not dominate assessment processes (Bauer & Anderson, 2000).

Also, through formal written and oral feedback from the preservice teachers, I determined that numerous aspects of the discussion assignment should be modified:
• The number of required contributions in both weeks two and three needed to be reduced. The preservice teachers indicated that the workload was simply too demanding.
• Criteria that specified on what days of the week participation could occur needed to be eliminated. Several preservice teachers noted that they were printing out discussion contributions, and sometimes even entire threads of discussion, and reading them. So, while their actual contributions might come on a single day of the week, they were considering the discussion across time.
• A “privacy policy” needed to be added. Some of the preservice teachers were concerned that the content of the online discussion might somehow “get back to” their mentor teachers, administrators, or even students and parents. Given the nature of some of the problems that were being shared, this could be embarrassing.

Refined Design

The evaluation of the intermediate implementation of the strategies contributed to the development of the refined design, but a change to the weekly seminars contributed as well. It was determined that certain content areas—educational psychology being one such area—would not be given any formal emphasis during seminars. I was still accountable for assessing the preservice teachers and giving an educational psychology grade to each of them at semester’s end, yet I was afforded no formal seminar time in which to assess them. Continuing to use the CMBB seemed to be a choice that could help me overcome this dilemma.

Three minor adjustments were made to the assignment guidelines in an effort to overcome some of the administrative problems discovered during the evaluation of the intermediate version. First, in the assignment guidelines I suggested the need to respect the privacy of all discussion participants by not sharing conversations from the bulletin board with others, such as school personnel. Second, in an effort to help the preservice teachers better consider the conventions of bulletin board discussion, I created a job aid that discussed some of these conventions, such as double spacing between paragraphs and using meaningful subject lines. Third, I developed a self-report form, which allowed the preservice teachers to report factual information to me about the frequency and scope of their participation. This form was not a self-assessment as much as it was a productivity report; it provided me with a list of threads in which I could find their contributions. This made the process of “grading” less time consuming.

Beyond these administrative adjustments, though, four larger changes were made to the refined version of the assignment:
• All problems contributed during week one of the discussion cycle had to be “instructional problems”—as opposed to the type of behavior and discipline problems that dominated the intermediate version of the assignment.
• The number of required contributions during week two of each cycle was reduced from three to two.
• Week three contributions had to be replies to week two contributions, not replies to the original problem discussed during week one of each cycle. This change was designed to promote deeper analysis of the issues embedded within the problems, not just continued (and often redundant) “solutions” to the original problem.
• A list of possible strategies that students might use as they offered a week three contribution was developed. See Table 2 for a list of these strategies.

The evaluation of the refined design was based on an open-ended survey completed by the preservice teachers. The survey was designed to capture the preservice teachers’ views of the strategies used to create participation. I focus here on feedback from the preservice teachers that directly relates to changes that I made in designing the refined version. The preservice teachers reported that
• reducing the number of required contributions to the discussion was helpful in making the discussion more manageable;
• focusing on instructional problems, as opposed to behavioral problems, was difficult, but the refocusing of their thinking did help them see broader applications of educational psychology;
• the scaffolding questions shown in Table 2 were somewhat useful in promoting discussion, but those same scaffolding questions seemed to limit the direction of the conversations too much.

Table 2. Discussion prompts for week three contributions
• Pick two replies to the same problem and discuss why you think one would work better than the other.
• Pick a reply to a problem and discuss the strengths and weaknesses of the proposed solution.
• Pick a theory that someone mentioned in week two as a help to understanding, and apply that theory differently (or more thoroughly).
• Discuss your experiences with how a solution has or has not worked in the classroom.
• Write a summary of the responses to your own problem and describe the biggest things that you are taking away from it.

A Personal Account of Theory Development through Design

Edelson (2002) has suggested that design research is different from design, and he points to the notion that “design research explicitly exploits the design process as an opportunity to advance the researchers (sic) understanding of teaching, learning, and educational systems” (p. 107). In the remainder of this paper, this exploitation comes in the form of a personal narrative—an exploitation of “the phenomenon of experience” (Clandinin & Connelly, 2000, p. 128). Indeed, the personal self does not operate apart from the professional self (Knowlton, 1995), and including a synthesis of the two can broaden the perspectives from which DBR is considered. In this respect, the remainder of this paper takes the “hermeneutical turn” (O'Grady, Righy, & Van den Hengel, 1987, p. 94), whereby I offer not a definitive truth about design but an interpretation of my design experiences. Edelson notes that several types of theories can be created through participation in design research; this section is organized around each type of theory, with personal narrative integrated into each part. Admittedly, there is overlap among these three parts. Table 3 offers an overview of the coming discussion.

Table 3. Learning through design within the context of the current project

Domain Theories
• The ill-structured nature of a field experience may hinder learning unless designers account for the lack of structure through their design decisions
• A context for learning may not influence desirable outcomes as much as the learner’s perceptions of the context influence outcomes

Design Frameworks
• Balance between instructional prescription and the natural benefits of social learning is difficult to achieve
• Tension exists between a designer attempting to guide learners through prescriptive processes and teaching those processes as a generalized cognitive tool
• Balance must be struck between the role of designers and the role of facilitators

Design Methodologies
• Structural revision is often necessary in order to bring a design to full fruition
• Rapid prototyping allows for a stronger co-dependence between theory and practice

Domain Theories

Domain theories, Edelson (2002) notes, are “theor[ies] about the world, not [theories] about design per se.” One type of domain theory is the context theory, which deals with “the challenges and opportunities” presented by the context in which the educational intervention was applied (p. 113). Because my point is to consider the way that I encountered this design project as a situated phenomenon (i.e., design within context), I emphasize context theories; but from these context theories, I make connections to outcome domain theories, which are both the “desired” and “undesirable” outcomes that are “associated with some intervention” (p. 113). Specifically, I examine the influence of the field-based context on my ability to promote the sound use of the CMBB. Then, I consider the CMBB itself as a context for problem solving.

Field-based Context

Field-based programs can be educationally valuable, but this was only the second implementation of this field-based certification program, and much of the context supporting the program was still developing. Administration included no clear plan—what Edelson (2002) calls a “sequence of design” (p. 114)—that would help fulfill the conceptual purpose of the field-based program. Furthermore, the various faculty members who were working in the field-based program each had individual visions of the program’s intended conceptual outcomes. Both the lack of administrative leadership and the absence of a shared vision among faculty members made it difficult to teach and facilitate learning in such a way as to support the very point of a field-based program—that content should be integrated and directly based on the preservice teachers’ field experiences (cf., Bell, 1995; Scanlon & Ford, 1998; Weber, 1996). My design activities within this nebulous field-based context led me to a domain theory, which can be stated as follows: In theory, the authenticity of a field-based context provides a unique environment that can allow learning to flourish in ways not possible in artificial environments; in practice, however, people may unknowingly undermine the benefits of being in an authentic context by attempting to bring a sense of familiar and artificial classroom structure to alleviate the dissonance created by the authentic and ill-structured field-based environment.

In this case, it was not the learners who wanted the type of clear-cut processes and direct answers that would be indicative of an artificial environment. Rather, it was the faculty team that tried to bring too much structure—to enclose and box the phenomenon of experience. For learning to flourish within a field experience, designers must aim for an approach that accounts for the unpredictable and nebulous milieu, which would require them to accept the act of design as a complex phenomenon. Instead, the design team of which I was a part viewed the emerging context of the field-based program as a barrier to learning and imposed an artificial structure on learning processes.

How did this artificial structure manifest itself? The faculty team regressed toward the expedient, familiar, and comfortable, not toward the educationally sound. Why did the faculty team, for example, keep changing the purpose, intent, and function of the weekly seminars? Was it in an effort to improve learning within the authentic context, or was it a type of “giving up” on the true benefits of a field-based context? I may seem to be arguing toward an indictment of the faculty team on which I served, but as I reflect on my own design decisions, I see a similar pattern of craving the familiarity of a traditional classroom environment and thus undermining the natural benefits of authenticity within a field-based context. What was my purpose in adding the self-report form in the refined design? Did such a form add to the preservice teachers’ educational experience? It did not, in my view. I was more concerned with administrative ease than with focusing on the preservice teachers’ learning. To put this in the language of an outcomes theory, the “sequence of design and implementation [of] cycles” (Edelson, 2002, p. 114) must be congruent with the learning context; without such congruence, “desirable outcomes” (p. 114) will not be realized.

CMBB as Context for Problem Solving

As I have noted, Edelson (2002) points to two types of domain theories, a context domain theory and an outcome domain theory. What is the relationship between a CMBB as context and the intended outcome of problem solving? Jonassen (2001) points out that different types of problems require different types of representations; thus, a question arises about the CMBB as an appropriate medium (i.e., context) for solving problems (i.e., the outcomes): Did the CMBB support representations that allowed the preservice teachers to move toward solutions?

Contextually, a CMBB may seem to remove cues that contribute to a communicative environment. As Weiss (2000) notes, some of these cues—gestures, facial expressions, and other physical elements, for example—can “contribute subtle (and sometimes not so subtle) meanings or attitudes” (p. 48). So, an argument can be made that a CMBB as a context for problem solving can, in fact, impede communication, and thus hinder the potential for learning through problem solving. During all phases of implementation, the preservice teachers did communicate to me informally that they felt this hindrance; I felt it, too. For example, recall that in the intermediate design I struggled to see a developing conversation. I observed static contributions to the discussion that offered solutions to the original problem without considering other contributions to the same thread. Yet, during the face-to-face seminars, conversations about the problems shared on the bulletin board were lively and highly interactive.

Through my role in this project, however, I did come to see an opposing view of CMBBs as a context for problem solving. The absence of visual and audible cues, in fact, removed various elements that traditionally have led to bias and disenfranchisement. Because a CMBB is a text-based environment, oral communication idiosyncrasies, such as regional accents, speech impediments, or a general lack of verbal eloquence, were hidden. I, myself, came to this Midwestern university from my native Mississippi, and students do occasionally point to my distinct southern accent, which, I begrudgingly admit, influences my face-to-face interactions. As another example, one preservice teacher in particular had some interesting ideas but struggled mightily to formulate those ideas and articulate them when called upon to do so during the face-to-face seminars. His struggles manifested themselves in stream-of-consciousness soliloquies that often seemed off topic; his body language seemed to indicate that he was aware of his inability to join the flow of the discussion. His struggles and the physical manifestations of those struggles were clearly recognized by the other preservice teachers, many of whom hesitated to respond to his ideas during the seminar for fear of exacerbating his communication difficulties. This same preservice teacher, though, was better able to take advantage of the CMBB context and more carefully articulate his viewpoints in writing. As a result, this preservice teacher and others who responded to him were less “put on the spot” to immediately interact in productive ways.

More substantively, race and sometimes gender were removed as influences that may have tainted the way the preservice teachers received messages from each other. The “pseudo-anonymity” of CMBBs (Kemp, 1998, p. 140) in some cases, then, promoted more comfortable interaction among the preservice teachers, which allowed the online experience to become more fully humanized. Paradoxically, by removing the familiarity of interacting through simplistic elements of voice, demeanor, and appearance, the preservice teachers’ very existence became inherently intertwined with their ideas.

Above, I have described computer-mediated bulletin boards in terms of my realization that they are a paradoxical context for communication. This context influences outcomes. These preservice teachers were in a context that was not familiar to them; thus, the way that the context held potential for mediating their learning was uncomfortable for them. For positive outcomes to occur, then, the preservice teachers would need to see this paradox and come to shift their perceptions of the CMBB context. This point can be stated directly as a domain theory: Perhaps it is not a learning context as much as it is learners’ perceptions of (and comfort with) that context that determine whether desirable outcomes can be achieved.

Particularly during the last semester of this project, the preservice teachers did informally note an understanding of this shift, though, as a practical matter, little evidence supported such an understanding. In terms of evidence, the resituating of the self as a result of the context may actually force a stronger awareness of sensory reactions. Facial expressions, for example, can only be represented as metalinguistic cues, such as emoticons. When traditional sensory cues that we often take for granted become abstract representations, participants must consciously search for opportunities to insert them into CMBB discussions. Admittedly, I did not see such a use of emoticons or other metalinguistics. This section does serve to suggest, though, a paradox regarding CMBBs as context. Furthermore, it offers a broad consideration of somewhat non-indigenous discussants within that context.

Design Frameworks

The domain-based considerations described in the previous section influenced my perceptions of both design frameworks, which are discussed in this section, and design methodologies, which are discussed in the next. Edelson (2002) notes that a design framework is a “generalized design solution.” Design frameworks “describe the characteristics that a designed artifact must have [in order] to achieve a particular set of goals [with]in a particular context” (p. 114). The artifact was my articulation of the assignment guidelines, to the extent that I shaped the assignment into a problem-based learning (PBL) and asynchronous discussion assignment. In this section, I discuss the shaping of the guidelines as both a problem-based learning artifact and an asynchronous discussion artifact. In considering both, I have come to learn that a balance between two extremes is difficult to find. On one extreme is the careful design of strategies to promote learning; on the other is the designer’s trust in students’ initiative and curiosity as a motivating impetus towards learning.

Problem-based learning

Designers must strike an important balance between allowing autonomy for students as problem-solvers and providing needed scaffolding to the same students, who can only be described as nascent in their problem-solving abilities. Within this project, a shift from complete autonomy (the initial design) towards strong scaffolding (the refined design) can be seen. At each extreme of this shift, I was taxed by questions about the educational viability of my design. As I embraced the laissez-faire approach of the initial design, I recognized that the preservice teachers might not know how to solve problems. Would they understand, for example, that articulating the problem is more important than offering solutions, a commonly accepted principle of problem solving (cf., Abel, 2003)? Would they intuitively understand the inefficiency of rushing toward a solution without collaboratively exploring and analyzing alternatives based on their individual experiences (cf., Beckett & Grant, 2003)? Based on their use (or, better said, their lack of use) of the CMBB, I saw no evidence of positive answers to such questions—thus the need for the intermediate design.

As I began adjusting the assignment guidelines to better allow the preservice teachers to methodically solve problems, I felt that I perhaps was not doing justice to the potential of problem-based learning as a design framework. By articulating strategies, for example, was I not forcing a narrow view—my view—of how to articulate the problem space? Was I allowing the power of social interaction as a natural phenomenon to emerge, or was I creating something contrived that forced an artificial codependence among the preservice teachers?

This tension can be framed more theoretically. Some constructivists claim that learning is “internally controlled and mediated by the learner” (Jonassen, 1991, p. 12). Yet, pedagogically, constructivists seem to focus on tools, environments, and interaction with others—all of which are external. To what extent within a PBL framework can a designer dictate the external requirements that will result in internal learning? Throughout this project, I felt dissonance regarding the balance between activity that was based on a teacher-centered design and activity motivated by the preservice teachers’ true cognitive dissonance, which compelled them to pursue a consistent understanding of content.

Have I set up a false dichotomy here by pointing to, on the one hand, a teacher-centered design and, on the other hand, student initiative? Perhaps I should have taught the preservice teachers a problem-solving methodology. Perhaps it was not an unwillingness to alleviate their own cognitive dissonance as much as it was a lack of understanding about how to achieve this alleviation. As a design framework, problem solving does seem to be robust, but designers would do well to consider whether the robustness lies in guiding learners through the design process or in teaching the problem-solving process as a generalized cognitive tool.

Asynchronous discussion

As one of my most prominent co-authors lives some five states away, I understand the purposeful use of computer-based communication and the power that it has in representing ideas. Designing an artifact to help others see that power can be difficult, however. The power lies in one’s desire to engage in a collaborative dialogue; true “dialogical participation” is a higher-order type of learning through asynchronous discussion. It transcends the type of “generative participation” that simply shapes one’s individual thinking; dialogical participation moves asynchronous discussion participants toward principles of distributed learning (Knowlton, 2005).

The initial design depended on the tool simply being available, which did not produce viable results. It was only when instructional strategies were added (in the intermediate design) and revised (in the refined design) that more participation and evidence of engagement began to appear. My experiences here seem to confirm the assumption of Clark (1983, 1994a, 1994b) and others that computers do not create learning. Rather, it is the careful design of strategy that creates learning.

There was a contradiction within asynchronous discussion as a design framework that I needed to reconcile. Designing strategies to “force” dialogue seemed necessary for promoting learning, yet I was committed to the idea that there must be some commitment on the part of the preservice teachers to want to engage in an academic discussion. Does the design of strategies militate against helping students see the need for dialogue? As I added the necessary characteristics that would allow asynchronous discussion as a framework to “achieve a particular set of goals” (Edelson, 2002, p. 114), was I not further usurping students’ authority? Consider the discussion prompts described earlier and shown in Table 2. Does such a list of strategies send the message to the preservice teachers that week three contributions should involve close-ended and narrow responses, not attempts to contribute to an authentic conversation?

I infer from the experiences that I had within this design project that one necessary characteristic of asynchronous discussion as a framework is an impetus to promote both initial contributions and replies. Thus, the designed artifact required both. The problem with these strategies is that they were discrete and perhaps aimed in the wrong direction. Perhaps appropriate design would have occurred more in my role as facilitator of a discussion, as opposed to designer of the guidelines. If I had facilitated the preservice teachers’ efforts to engage in dialogue, as opposed to “demanding” dialogue through the written assignment guidelines, then perhaps the preservice teachers would better have come to understand the benefits of peer-to-peer sharing of ideas.

Design Methodologies

Design methodologies refer to the “process for achieving a class of designs” (Edelson, 2002, p. 115). Edelson points to instructional systems design and human-computer interaction as examples of classes of design. Earlier in this paper, rapid prototyping was defined through the existing literature, but the differences between encountering rapid prototyping through literature and personally experiencing it through an act of design are vast. Indeed, numerous variables specific to the project described in this paper have led to my understanding of rapid prototyping as a “system” of contextual inputs and student outputs. These inputs and outputs include contextual influences on my design decisions and the evaluation of each implementation, as summarized in Table 1. Also, though, my domain theories as they emerged throughout this project and the design frameworks described in the previous section shaped my understanding of rapid prototyping. If design—even the design of instructional strategies—is problem solving, then the representation of inputs, outputs, and change in designer thinking is consequential; as Jonassen (2003) notes, problem solving necessitates representation. This seems congruent with Edelson’s (2002) point that design can sometimes only be understood reflectively, after design has occurred. In the abstract, the system of inputs and outputs is shown in Figure 1.

[Figure 1. Representation of rapid prototyping. The figure shows the designer’s domain theories, design frameworks, design methodologies, and understanding of project variables feeding into implementation, which in turn yields usability, learning viability, and practicality.]

As Figure 1 suggests, a designer’s existing domain theories, design frameworks, and understanding of design methodologies, as well as the designer’s existing understanding of project variables, may contribute to the prototyping and implementation of a design. Emerging from this implementation are at least three types of outputs: usability, as defined by the learner; learning viability, as determined by a course instructor or evaluation personnel; and practicality given the constraints of a design scenario, as determined by administration or other stakeholders on the design team or within a context. Strikingly, only one of these three is directly concerned with learning, as neither usability nor practicality is necessarily related to learning processes. These outputs from the implementation are considered by a designer (or a design team) and become a type of input that shapes the designer’s continued thinking about theories, frameworks, methodologies, and project variables.

When presented in this way, rapid prototyping can be seen as cyclical and can allow for infinite revisions across<br />

time. Such a representation begs two questions that should be explored to construct a fuller understanding of this<br />

conception of rapid prototyping methodology. The first relates to developing a true understanding of notions of<br />

“revision.” The second relates to the codependent interactions between theory and practice.<br />

What is the Nature of Revision?

Design as process is not something to be tolerated; rather, it is something to be embraced. Largely absent from Figure 1 and the description is a more nuanced consideration of design's subphases. I note, for example, that I drafted assignment guidelines in preparation for implementing the intermediate design. But the creation of those assignment guidelines required multiple drafts characterized by structural revision. For example, in one draft of the intermediate guidelines, the preservice teachers were going to be divided into three groups for sharing problems and responding, not the two groups that I ultimately implemented. As I drafted a particular approach to a strategy, I felt tension among learner usability, practicality, and my primary directive of promoting learning among the preservice teachers. As a result of this tension, both the intermediate and refined drafts of the assignment guidelines were drafted into many different permutations. When a certain permutation didn't "feel right" to me as the designer, I kept drafting to see where the design process would "take me." This point may, in some ways, seem to be a statement of the obvious, but have those who engage in design-based research and curriculum development typically been trained in what it means to do more than "tolerate" the need for revision of a framework in an attempt to fully execute the methodology? Notions of "learning by design" as described by Edelson (2002) and others (e.g., Knowlton, 2004; Nelson, 2003; Nelson & Knowlton, 2005) require an understanding of documenting design efforts as a means of coming to understand the design that is attempting to emerge.

How Do Theory and Practice Depend on Each Other?

Through this project, I discovered a non-linear and codependent relationship between theory and practice within a rapid prototyping methodology. This non-linear relationship is particularly pronounced when rapid prototyping is compared with more traditional instructional design. In traditional instructional systems design, instructional theory guides and shapes design decisions (Morrison, Ross, & Kemp, 2004). The flow of thinking runs from instructional theory into a functional model (e.g., Morrison, Ross, & Kemp, 2004; Dick & Carey, 1990), which helps a designer make decisions. By the time implementation occurs, a theoretical frame supporting practice has been solidified. In rapid prototyping, however, iterations and cycles of design and action occur more rapidly; this rapidity is the very essence of rapid prototyping. Thus, not only does theory shape design, but learner activity itself also shapes future design decisions. Sometimes learner activity changes as a direct result of the implementation of a design. But sometimes learner activity changes as the result of an evolution of the context in which learners are active. Regardless of the impetus for change, subsequent design decisions are affected by the change. I am not arguing that such a symbiotic relationship between theory and practice does not exist within traditional design; I merely am arguing that the relationship is more pronounced in rapid prototyping and that designers need to be more aware of it.

Two examples from the current project might be illustrative. The first example illustrates changes as a result of a design decision, but the design decision was a response to learner activity. Consider the influence of strategies for enhancing social interaction that I added in the intermediate design. The design decisions to "force" interaction created a shift from individual cognition to a distributed view of cognition. This design decision was a direct response to the preservice teachers' unwillingness (or inability) to collaborate without a sense of being "forced" to interact. This forced interaction, though, changed my thinking as a designer. Thus, in the refined design, I tweaked the interaction through additional design decisions. Namely, as described earlier, I placed criteria for success on their week-three contributions to the discussion.

A second example illustrates how design decisions can be a response to the ever-shifting context in which learners are engaged in learning activities. As I described earlier in this paper, as the field experience progressed, the preservice teachers became more authentically situated within the context of their classrooms; that is, they moved more and more from the periphery of classroom activity to the center of teaching and learning. In authentic cases, this is a natural progression that is justifiable (Lave & Wenger, 1991), but the point is that design decisions needed to be responsive to this shift. Specifying that problems worthy of discussion were to be instructional problems, not classroom discipline problems, was a design decision that I made in the refined version of the assignment. This design decision was purposeful toward the goal of broadening the preservice teachers' thinking regarding what constitutes a classroom problem that is worthy of analysis. It was necessary to broaden the preservice teachers' thinking because their shift in responsibilities was a broadening of the scope of things that fell under their purview. The strategies had to respond to the ever-evolving context in which the preservice teachers were engaged.

Design decisions, then, serve not only to rectify design problems through a reconsideration of theoretically derived prescriptions but also to realign learner practices within an ever-shifting context in which learning occurs and, as a direct result of that shifting context, to realign the theoretical thinking of the designer. I have described some of these shifts in theoretical thinking (whether to err on the side of student initiative or to place more faith in strategies, for example) in the previous part of this paper. This conflict regarding which side I should err on was a direct result of observing my designs as they influenced the preservice teachers operating within an ever-evolving learning context.

Implications

In this paper, I have offered my experiences as a designer within a specific scenario. I have tried to offer this perspective from a "personal" viewpoint, but I have tried to shape the narrative provided here around what Edelson (2002) says a designer should "learn" (i.e., what theories a designer should develop) as design is occurring. The narrative, in its own right, provides insights about DBR as a mode of inquiry and about notions of rapid prototyping. In one respect, I am simply pointing to a practical operationalization of Edelson's (2002) view that the problem or possibility that leads to design often develops hand-in-hand with the design itself. In a larger respect, I am pointing to the idea that there is value, simply, in designers coming to understand the multi-faceted influences on design decisions. In what follows, I provide heuristic questions for would-be designers. These questions might be useful in the confines of faculty development initiatives in which faculty are engaged in design (e.g., Nelson & Knowlton, 2005). Through these questions, I am not suggesting the need for quantitative empiricism; rather, I am advocating a deeper excursion into hermeneutic approaches for understanding DBR.

What is the relationship between designer training and perceptions of a design scenario?

I am trained in the neo-classical tradition of instructional systems design. Would someone trained in a different tradition, say, user-engineering or constructivist environment design, have seen this design scenario from a different perspective? Certainly they would have, but research needs to be done on how a designer's formal or experiential training influences that designer's understanding of a design scenario. Designers who report their efforts through personal narratives should share their biases and assumptions about design in order to unveil the perspective from which the designer is coming.

How does context itself influence designers' perceptions?

Like many scenarios described in the literature, I was operating here in the context of teacher education. What if my experiences were based in a similar task (i.e., designing strategies to support the educational use of CMBB) but in a different context (e.g., in corporate management training)? Certainly there are cultural differences among teacher education, corporate education, and other contexts in which formal learning activities must be designed. How do those macro-context cultures influence designers' perceptions of their tasks? Consider, for example, the stark differences between a designer in a corporate setting and a college professor acting as designer. Likely, the corporate designer will not also be facilitating the implementation of the design. Yet, in the context of higher education, the designer is often the same person who implements the designs. Designers in a variety of settings should offer their personal perceptions of design tasks and scenarios. Through these perceptions, a variety of contexts for personal narratives would be present in the literature.


How would one of the preservice teachers' narratives align with the narrative presented here?

Often in considering a learning scenario, the perceptions of the professor (acting as both designer and facilitator of learning) may be markedly different from the perceptions of students who experienced the learning side of design (cf. Knowlton, Eschmann, Fish, Heffren, & Voss, 2004). Admittedly, this paper has focused on the hermeneutical perspective of a single individual. The perspective provided by personal narrative is powerful, but accompanying perspectives would add research robustness. It would be interesting to see a parallel discussion from one of the preservice teachers involved in this project. Designers might consider structuring their personal narratives and setting them against a frame of learner narratives. Such an approach would better allow multiple data sources (albeit still ones based in hermeneutics and personal narrative) to be considered.

References

Abel, C. F. (2003). Heuristics and problem solving. In D. S. Knowlton & D. C. Sharp (Eds.), Problem-based learning in the information age, San Francisco: Jossey-Bass, 53-58.

Barab, S., & Squire, K. (2004). Design-based research: Putting a stake in the ground. The Journal of the Learning Sciences, 13 (1), 1-14.

Bauer, J. F., & Anderson, R. S. (2000). Evaluating students' written performance in the online classroom. In R. E. Weiss, D. S. Knowlton, & B. W. Speck (Eds.), Principles of effective teaching in the online classroom, San Francisco: Jossey-Bass, 65-72.

Beckett, J., & Grant, N. K. (2003). Guiding students toward solutions in field experiences. In D. S. Knowlton & D. C. Sharp (Eds.), Problem-based learning in the information age, San Francisco: Jossey-Bass, 67-72.

Bell, N. (1995). Professional development sites: Creating authentic experiences for preservice middle school teachers. Current Issues in Middle Level Education, 4 (1), 9-19.

Clark, R. E. (1983). Reconsidering research on learning from media. Review of Educational Research, 53 (4), 445-459.

Clark, R. E. (1994a). Media will never influence learning. Educational Technology Research and Development, 42 (2), 21-29.

Clark, R. E. (1994b). Media and method. Educational Technology Research and Development, 42 (3), 7-10.

Clandinin, D. J., & Connelly, F. M. (2000). Narrative inquiry: Experience and story in qualitative research, San Francisco: Jossey-Bass.

Collins, A., Joseph, D., & Bielaczyc, K. (2004). Design research: Theoretical and methodological issues. The Journal of the Learning Sciences, 13 (1), 15-42.

Dede, C. (2004). If design-based research is the answer, what is the question? A commentary on Collins, Joseph, and Bielaczyc; diSessa and Cobb; and Fishman, Marx, Blumenthal, Krajcik, and Soloway in the JLS special issue on design-based research. The Journal of the Learning Sciences, 13 (1), 105-114.

Dick, W., & Carey, L. (1990). The systematic design of instruction (3rd Ed.), New York: Harper Collins.

diSessa, A. A., & Cobb, P. (2004). Ontological innovations and the role of theory in design experiments. The Journal of the Learning Sciences, 13 (1), 77-103.

Edelson, D. C. (2002). Design research: What we learn when we engage in design. The Journal of the Learning Sciences, 11 (1), 105-121.

Edelson, M. (1988). The hermeneutic turn and the single case study in psychoanalysis. In D. N. Berg & K. K. Smith (Eds.), The self in social inquiry: Researching methods, Newbury Park: Sage, 21-34.

Jonassen, D. H. (2001). Toward a design theory of problem solving. Educational Technology Research & Development, 48 (4), 63-86.

Jonassen, D. H. (2003). Using cognitive tools to represent problems. Journal of Research on Technology in Education, 35 (3), 362-381.

Jonassen, D. H. (1991). Objectivism versus constructivism: Do we need a new philosophical paradigm? Journal of Educational Research, 39 (3), 5-14.

Kemp, F. (1998). Computer-mediated communication: Making nets work for writing instruction. In J. R. Galin & J. Latchaw (Eds.), The dialogic classroom: Teachers integrating computer technology, pedagogy, and research, Urbana: National Council of Teachers of English, 133-150.

Knowlton, D. S. (2006). Rapid prototyping as method for developing instructional strategies for supporting computer-mediated communication among university students. The Journal for the Scholarship of Teaching & Learning, 6 (1), 75-87.

Knowlton, D. S. (2005). A taxonomy of learning through asynchronous discussion. Journal of Interactive Learning Research, 16 (2), 155-177.

Knowlton, D. S. (2004a). Never mind the prescriptions, bring on the descriptions: Students' representations of inquiry-driven design. In M. Simonson & M. Crawford (Eds.), The Proceedings of the 27th Annual Convention of the Association for Educational Communications and Technology, Bloomington, IN: AECT, 369-374.

Knowlton, D. S. (2004b). Electronic bulletin boards as medium for asynchronous problem solving in field experiences. The International Journal of Instructional Technology & Distance Education, 1 (5), 43-52.

Knowlton, D. S. (2004c). Using asynchronous discussion to promote collaborative problem solving among preservice teachers in field experiences: Lessons learned from implementation. In M. Simonson & M. Crawford (Eds.), The Proceedings of the 27th Annual Convention of the Association for Educational Communications and Technology, Bloomington, IN: AECT, 375-381.

Knowlton, D. S. (2002). Promoting liberal arts thinking through online discussion: A practical application and its theoretical basis. Educational Technology & Society, 5 (3), 189-194.

Knowlton, D. S. (2001). Determining educational viability in online discussions: A student-centered approach. Academic Exchange Quarterly, 5 (4), 162-168.

Knowlton, D. S. (1995, November). Personal narrative and graduate-level education: How does gender influence thinking and writing about curriculum? Paper presented at the annual meeting of the Mid-South Educational Research Association, November 8-10, Biloxi, Mississippi.

Knowlton, D. S., Eschmann, A., Fish, N., Heffren, B., & Voss, H. (2004). Processes and impact of journal writing in a graduate-level theory course: Students' experiences and reflections. Teaching & Learning: The Journal of Natural Inquiry and Reflective Practice, 18 (2), 40-53.

Lave, J., & Wenger, E. (1991). Situated learning: Legitimate peripheral participation, Cambridge: Cambridge University Press.

Lohr, L., Javeri, M., Mahoney, C., Gall, J., Li, K., & Strongin, D. (2003). Using rapid application development to improve usability of a preservice teacher technology course. Educational Technology Research & Development, 51 (2), 41-55.

Marshall, C., & Rossman, G. (1995). Designing qualitative research, Newbury Park, CA: Sage.

Morrison, G. R., Ross, S. M., & Kemp, J. E. (2004). Designing effective instruction (4th Ed.), New York: John Wiley.

Nelson, W. (2003). Learning by design. In D. S. Knowlton & D. C. Sharp (Eds.), Problem-based learning in the information age, San Francisco: Jossey-Bass, 39-44.

Nelson, W., & Knowlton, D. S. (2005). "Learning through design" as a strategy for faculty development: Lessons learned in a teacher education setting. In M. Orey, J. McClendon, & R. M. Branch (Eds.), Educational media and technology yearbook (Vol. 30), Westport: Libraries Unlimited, 29-36.

O'Grady, P. J., Righy, P., & Van den Hengel, J. W. (1987). Hermeneutics and the method of social science. American Psychologist, 42 (2), 194.

Patton, M. Q. (1990). Qualitative evaluations and research methods (Rev. Ed.), Newbury Park, CA: Sage.

Reiser, R. A. (2001). A history of instructional design and technology: Part 1: A history of instructional media. Educational Technology Research and Development, 49 (1), 53-64.

Scanlon, P. A., & Ford, M. P. (1998). Grading student performance in real-world settings. In R. S. Anderson & B. W. Speck (Eds.), Changing the way we grade student performance: Classroom assessment and the new learning paradigm, San Francisco: Jossey-Bass, 97-105.

Stokes, T., & Richey, R. C. (2000). Rapid prototyping methodology in action: A developmental study. Educational Technology Research and Development, 48 (2), 63-80.

Weber, A. (1996). Professional development schools and university laboratory schools: Is there a difference? The Professional Educator, 18 (2), 59-65.

Weiss, R. E. (2000). Humanizing the online classroom. In R. E. Weiss, D. S. Knowlton, & B. W. Speck (Eds.), Principles of effective teaching in the online classroom, San Francisco: Jossey-Bass, 47-52.

Wiggins, G., & McTighe, J. (2005). Understanding by design (2nd Ed.), Alexandria: Association for Supervision and Curriculum Development.


Wang, H.-Y., & Chen, S. M. (2007). Artificial Intelligence Approach to Evaluate Students' Answerscripts Based on the Similarity Measure between Vague Sets. Educational Technology & Society, 10 (4), 224-241.

Artificial Intelligence Approach to Evaluate Students' Answerscripts Based on the Similarity Measure between Vague Sets

Hui-Yu Wang
Department of Education, National Chengchi University, Taiwan // 94152514@nccu.edu.tw

Shyi-Ming Chen
Department of Computer Science and Information Engineering, National Taiwan University of Science and Technology, Taiwan // Tel: +886-2-27376417 // smchen@mail.ntust.edu.tw

ABSTRACT

In this paper, we present two new methods for evaluating students' answerscripts based on the similarity measure between vague sets. The vague marks awarded to the answers in the students' answerscripts are represented by vague sets, where each element ui in the universe of discourse U belonging to a vague set is represented by a vague value. The grade of membership of ui in the vague set Ã is bounded by a subinterval [tÃ(ui), 1 − fÃ(ui)] of [0, 1], which indicates that the exact grade of membership μÃ(ui) of ui belonging to the vague set Ã is bounded by tÃ(ui) ≤ μÃ(ui) ≤ 1 − fÃ(ui), where tÃ(ui) is a lower bound on the grade of membership of ui derived from the evidence for ui, fÃ(ui) is a lower bound on the negation of ui derived from the evidence against ui, tÃ(ui) + fÃ(ui) ≤ 1, and ui ∈ U. An index of optimism λ determined by the evaluator is used to indicate the degree of optimism of the evaluator, where λ ∈ [0, 1]. Because the proposed methods use vague sets rather than fuzzy sets to evaluate students' answerscripts, they can evaluate students' answerscripts in a more flexible and more intelligent manner; in particular, they are useful when the assessment involves subjective evaluation. The proposed methods also evaluate students' answerscripts in a more stable manner than Biswas's (1995) methods.

Keywords

Similarity functions, Students' answerscripts, Vague grade sheets, Vague membership values, Vague sets, Index of optimism

Introduction

In recent years, several methods have been presented for students' evaluation (Biswas, 1995; Chang & Sun, 1993; Chen & Lee, 1999; Cheng & Yang, 1998; Chiang & Lin, 1994; Frair, 1995; Echauz & Vachtsevanos, 1995; Hwang, Lin, & Lin, 2006; Kaburlasos, Marinagi, & Tsoukalas, 2004; Law, 1996; Ma & Zhou, 2000; Liu, 2005; McMartin, Mckenna, & Youssefi, 2000; Nykanen, 2006; Pears, Daniels, Berglund, & Erickson, 2001; Wang & Chen, 2006a; Wang & Chen, 2006b; Wang & Chen, 2006c; Wang & Chen, 2006d; Weon & Kim, 2001; Wu, 2003). Chang and Sun (1993) presented a method for fuzzy assessment of the learning performance of junior high school students. Chen and Lee (1999) presented two methods for evaluating students' answerscripts using fuzzy sets. Cheng and Yang (1998) presented a method for using fuzzy sets in education grading systems. Chiang and Lin (1994) presented a method for applying fuzzy set theory to teaching assessment. Frair (1995) presented a method for student peer evaluations using the analytic hierarchy process method. Echauz and Vachtsevanos (1995) presented a fuzzy grading system to translate a set of scores into letter grades. Hwang, Lin and Lin (2006) presented an approach for test-sheet composition with large-scale item banks. Kaburlasos, Marinagi, and Tsoukalas (2004) presented a software tool, called PARES, for computer-based testing and evaluation used in the Greek higher education system. Law (1996) presented a method for applying fuzzy numbers in education grading systems. Liu (2005) presented a method for using mutual information for adaptive item comparison and student assessment. Ma and Zhou (2000) presented a fuzzy set approach for the assessment of student-centered learning. McMartin, Mckenna and Youssefi (2000) used scenario assignments as assessment tools for undergraduate engineering education. Nykanen (2006) presented a method for inducing fuzzy models for student classification. Pears, Daniels, Berglund, and Erickson (2001) presented a method for student evaluation in an international collaborative project course. Wang and Chen (2006a) presented two methods for students' answerscripts evaluation using fuzzy sets. Wang and Chen (2006b) presented two methods for evaluating students' answerscripts using fuzzy numbers associated with degrees of confidence. Wang and Chen (2006c) presented two methods for students' answerscripts evaluation using vague sets. Weon and Kim (2001) presented a learning achievement evaluation strategy for students' learning procedures using fuzzy membership functions. Wu (2003) presented a method for applying fuzzy set theory and item response theory to evaluate learning performance.

Biswas (1995) pointed out that a chief aim of education institutions is to provide students with evaluation reports on their tests/examinations that are as informative as possible, with the unavoidable error kept as small as possible. Therefore, Biswas (1995) presented a fuzzy evaluation method (fem) for applying fuzzy sets to students' answerscripts evaluation. He also generalized the fuzzy evaluation method to propose a generalized fuzzy evaluation method (gfem) for students' answerscripts evaluation. In (Biswas, 1995), the fuzzy marks awarded to answers in the students' answerscripts are represented by fuzzy sets (Zadeh, 1965). In a fuzzy set, the grade of membership of an element ui in the universe of discourse U belonging to a fuzzy set is represented by a single real value between zero and one. However, Gau and Buehrer (1993) pointed out that this single value combines the evidence for ui ∈ U and the evidence against ui ∈ U; it does not indicate the evidence for ui ∈ U and the evidence against ui ∈ U separately, nor how much there is of each. Gau and Buehrer (1993) also pointed out that the single value between zero and one tells us nothing about its accuracy. Thus, they proposed the theory of vague sets, where each element in the universe of discourse belonging to a vague set is represented by a vague value. Therefore, if we allow the marks awarded to the questions of the students' answerscripts to be represented by vague sets, then there is room for more flexibility.

In this paper, we present two new methods for evaluating students' answerscripts based on the similarity measure between vague sets. The vague marks awarded to the answers in the students' answerscripts are represented by vague sets, where each element belonging to a vague set is represented by a vague value. An index of optimism λ (Cheng & Yang, 1998) determined by the evaluator is used to indicate the degree of optimism of the evaluator, where λ ∈ [0, 1]. If 0 ≤ λ < 0.5, then the evaluator is a pessimistic evaluator. If λ = 0.5, then the evaluator is a normal evaluator. If 0.5 < λ ≤ 1.0, then the evaluator is an optimistic evaluator.


If the universe of discourse U is an infinite set, then a vague set Ã of the universe of discourse U can be represented as

Ã = ∫U [tÃ(ui), 1 − fÃ(ui)] / ui, ui ∈ U, (2)

where the symbol ∫ denotes the union operator.

[Figure 1 plots the truth-membership curve tÃ(u) and the curve 1 − fÃ(u) over the universe of discourse U, with the membership of an element ui bounded between tÃ(ui) and 1 − fÃ(ui).]

Figure 1. A vague set

Definition 1: Let Ã be a vague set of the universe of discourse U with the truth-membership function tÃ and the false-membership function fÃ, respectively. The vague set Ã is convex if and only if for all u1, u2 in U,

tÃ(λu1 + (1 − λ)u2) ≥ Min(tÃ(u1), tÃ(u2)), (3)
1 − fÃ(λu1 + (1 − λ)u2) ≥ Min(1 − fÃ(u1), 1 − fÃ(u2)), (4)

where λ ∈ [0, 1].

Definition 2: A vague set Ã of the universe of discourse U is called a normal vague set if ∃ui ∈ U such that 1 − fÃ(ui) = 1. That is, fÃ(ui) = 0.

Definition 3: A vague number is a vague subset of the universe of discourse U that is both convex and normal.

Chen (1995b) presented a similarity measure between vague values. Let X = [tx, 1 − fx] be a vague value, where tx ∈ [0, 1], fx ∈ [0, 1] and tx + fx ≤ 1. The score of the vague value X can be evaluated by the score function S shown as follows:

S(X) = tx − fx, (5)

where S(X) ∈ [−1, 1]. Let X and Y be two vague values, where X = [tx, 1 − fx], Y = [ty, 1 − fy], tx ∈ [0, 1], fx ∈ [0, 1], tx + fx ≤ 1, ty ∈ [0, 1], fy ∈ [0, 1], and ty + fy ≤ 1. The degree of similarity M(X, Y) between the vague values X and Y can be evaluated by the function M:

M(X, Y) = 1 − |S(X) − S(Y)| / 2, (6)

where S(X) = tx − fx and S(Y) = ty − fy. The larger the value of M(X, Y), the higher the degree of similarity between the vague values X and Y. It is obvious that if X and Y are identical vague values (i.e., X = Y), then S(X) = S(Y). By applying Eq. (6), we can see that M(X, Y) = 1, i.e., the degree of similarity between the vague values X and Y is equal to 1.

Table 1 shows some examples of the degree of similarity M(X, Y) between X and Y.

Table 1. Some examples of the degree of similarity M(X, Y) between the vague values X and Y

X        Y        M(X, Y)
[1, 1]   [0, 0]   0
[1, 1]   [1, 0]   1/2
[1, 0]   [1, 1]   1/2
[0, 1]   [0, 1]   1
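To make the score and similarity functions concrete, the following is a minimal Python sketch (an illustration, not part of the original paper) of Eqs. (5) and (6); encoding a vague value X = [tx, 1 − fx] as the pair (tx, 1 − fx) is our own representational choice.

# Sketch of the score function S (Eq. (5)) and the similarity measure M
# (Eq. (6)) for vague values, encoded here as pairs (t, 1 - f).

def score(x):
    # S(X) = tx - fx for a vague value X = [tx, 1 - fx]
    t, one_minus_f = x
    return t - (1 - one_minus_f)

def similarity(x, y):
    # M(X, Y) = 1 - |S(X) - S(Y)| / 2, which lies in [0, 1]
    return 1 - abs(score(x) - score(y)) / 2

# Reproducing three rows of Table 1:
print(similarity((1, 1), (0, 0)))  # 0.0
print(similarity((1, 1), (1, 0)))  # 0.5
print(similarity((0, 1), (0, 1)))  # 1.0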

Let X and Y be two vague values, where X = [tx, 1 − fx], Y = [ty, 1 − fy], tx ∈ [0, 1], fx ∈ [0, 1], tx + fx ≤ 1, ty ∈ [0, 1], fy ∈ [0, 1], and ty + fy ≤ 1. The proposed similarity measure between vague values has the following properties:

Property 1: Two vague values X and Y are identical if and only if M(X, Y) = 1.

Proof:
(i) If X and Y are identical, then tx = ty and 1 − fx = 1 − fy (i.e., fx = fy). Because S(X) = tx − fx and S(Y) = ty − fy = tx − fx, the degree of similarity between the vague values X and Y is calculated as follows:

M(X, Y) = 1 − |S(X) − S(Y)| / 2 = 1 − |(tx − fx) − (ty − fy)| / 2 = 1 − |(tx − fx) − (tx − fx)| / 2 = 1.

(ii) If M(X, Y) = 1, then

M(X, Y) = 1 − |S(X) − S(Y)| / 2 = 1 − |(tx − fx) − (ty − fy)| / 2 = 1.

It implies that tx = ty and fx = fy (i.e., 1 − fx = 1 − fy). Therefore, the vague values X and Y are identical. Q. E. D.

Property 2: M(X, Y) = M(Y, X).

Proof: Because

M(X, Y) = 1 − |S(X) − S(Y)| / 2

and

M(Y, X) = 1 − |S(Y) − S(X)| / 2,

and because |S(X) − S(Y)| = |S(Y) − S(X)|, we can see that M(X, Y) = M(Y, X). Q. E. D.

Let Ã and B̃ be two vague sets in the universe of discourse U, U = {u1, u2, …, un}, where

Ã = [tÃ(u1), 1 − fÃ(u1)]/u1 + [tÃ(u2), 1 − fÃ(u2)]/u2 + … + [tÃ(un), 1 − fÃ(un)]/un,

and

B̃ = [tB̃(u1), 1 − fB̃(u1)]/u1 + [tB̃(u2), 1 − fB̃(u2)]/u2 + … + [tB̃(un), 1 − fB̃(un)]/un.

Let VÃ(ui) = [tÃ(ui), 1 − fÃ(ui)] be the vague membership value of ui in the vague set Ã, and let VB̃(ui) = [tB̃(ui), 1 − fB̃(ui)] be the vague membership value of ui in the vague set B̃. By applying Eq. (5), the score S(VÃ(ui)) of the vague membership value VÃ(ui) can be evaluated as follows:

S(VÃ(ui)) = tÃ(ui) − fÃ(ui),

and the score S(VB̃(ui)) of the vague membership value VB̃(ui) can be evaluated as follows:

S(VB̃(ui)) = tB̃(ui) − fB̃(ui),

where 1 ≤ i ≤ n. Then, the degree of similarity H(Ã, B̃) between the vague sets Ã and B̃ can be evaluated by the function H:

H(Ã, B̃) = (1/n) Σ_{i=1}^{n} M(VÃ(ui), VB̃(ui)) = (1/n) Σ_{i=1}^{n} [1 − |S(VÃ(ui)) − S(VB̃(ui))| / 2], (7)

where H(Ã, B̃) ∈ [0, 1]. The larger the value of H(Ã, B̃), the higher the similarity between the vague sets Ã and B̃.
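Continuing the earlier sketch (again our illustration, not the authors' code), Eq. (7) simply averages the value-level similarities over the elements of the universe of discourse; it reuses similarity() defined above and takes the two vague sets as equal-length lists of (t, 1 − f) pairs.

def set_similarity(A, B):
    # Eq. (7): H(A, B) = (1/n) * sum over i of M(V_A(ui), V_B(ui))
    assert len(A) == len(B) and len(A) > 0
    return sum(similarity(a, b) for a, b in zip(A, B)) / len(A)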

Let Ã and B̃ be two vague sets in the universe of discourse U, U = {u1, u2, …, un}, defined as above. The proposed similarity measure between vague sets has the following properties:

Property 3: Two vague sets Ã and B̃ are identical if and only if H(Ã, B̃) = 1.

Proof:
(i) If Ã and B̃ are identical, then [tÃ(ui), 1 − fÃ(ui)] = [tB̃(ui), 1 − fB̃(ui)], where 1 ≤ i ≤ n. That is, tÃ(ui) = tB̃(ui) and fÃ(ui) = fB̃(ui), where 1 ≤ i ≤ n. Because

S(VÃ(ui)) = tÃ(ui) − fÃ(ui)

and

S(VB̃(ui)) = tB̃(ui) − fB̃(ui) = tÃ(ui) − fÃ(ui) = S(VÃ(ui)),

we can see that

H(Ã, B̃) = (1/n) Σ_{i=1}^{n} [1 − |S(VÃ(ui)) − S(VB̃(ui))| / 2] = (1/n) Σ_{i=1}^{n} [1 − |S(VÃ(ui)) − S(VÃ(ui))| / 2] = 1.

(ii) If H(Ã, B̃) = 1, then

H(Ã, B̃) = (1/n) Σ_{i=1}^{n} [1 − |S(VÃ(ui)) − S(VB̃(ui))| / 2] = 1.

It implies that S(VÃ(ui)) = S(VB̃(ui)), where 1 ≤ i ≤ n. Because S(VÃ(ui)) = tÃ(ui) − fÃ(ui) and S(VB̃(ui)) = tB̃(ui) − fB̃(ui), where 1 ≤ i ≤ n, we can see that tÃ(ui) = tB̃(ui) and fÃ(ui) = fB̃(ui) (i.e., 1 − fÃ(ui) = 1 − fB̃(ui)), where 1 ≤ i ≤ n. Therefore, the vague sets Ã and B̃ are identical. Q. E. D.

Property 4: H(Ã, B̃) = H(B̃, Ã).

Proof: Because

H(Ã, B̃) = (1/n) Σ_{i=1}^{n} [1 − |S(VÃ(ui)) − S(VB̃(ui))| / 2]

and

H(B̃, Ã) = (1/n) Σ_{i=1}^{n} [1 − |S(VB̃(ui)) − S(VÃ(ui))| / 2],

and because

(1/n) Σ_{i=1}^{n} [1 − |S(VÃ(ui)) − S(VB̃(ui))| / 2] = (1/n) Σ_{i=1}^{n} [1 − |S(VB̃(ui)) − S(VÃ(ui))| / 2],

we can see that H(Ã, B̃) = H(B̃, Ã). Q. E. D.


Example 1: Let Ã and B̃ be two vague sets of the universe of discourse U, U = {u1, u2, u3, u4, u5},

Ã = [0.2, 0.4]/u1 + [0.3, 0.5]/u2 + [0.5, 0.7]/u3 + [0.7, 0.9]/u4 + [0.8, 1]/u5,
B̃ = [0.3, 0.5]/u1 + [0.4, 0.6]/u2 + [0.6, 0.8]/u3 + [0.7, 0.9]/u4 + [0.8, 1]/u5,

where

VÃ(u1) = [0.2, 0.4], VB̃(u1) = [0.3, 0.5],
VÃ(u2) = [0.3, 0.5], VB̃(u2) = [0.4, 0.6],
VÃ(u3) = [0.5, 0.7], VB̃(u3) = [0.6, 0.8],
VÃ(u4) = [0.7, 0.9], VB̃(u4) = [0.7, 0.9],
VÃ(u5) = [0.8, 1], VB̃(u5) = [0.8, 1].

By applying Eq. (5), we can get

S(VÃ(u1)) = 0.2 − 0.6 = −0.4,   S(VB̃(u1)) = 0.3 − 0.5 = −0.2,
S(VÃ(u2)) = 0.3 − 0.5 = −0.2,   S(VB̃(u2)) = 0.4 − 0.4 = 0,
S(VÃ(u3)) = 0.5 − 0.3 = 0.2,    S(VB̃(u3)) = 0.6 − 0.2 = 0.4,
S(VÃ(u4)) = 0.7 − 0.1 = 0.6,    S(VB̃(u4)) = 0.7 − 0.1 = 0.6,
S(VÃ(u5)) = 0.8 − 0 = 0.8,      S(VB̃(u5)) = 0.8 − 0 = 0.8.

By applying Eq. (7), the degree of similarity H(Ã, B̃) between the vague sets Ã and B̃ can be evaluated as follows:

H(Ã, B̃) = (1/5) Σ_{i=1}^{5} [1 − |S(VÃ(ui)) − S(VB̃(ui))| / 2]
        = (1/5)[(1 − |−0.4 − (−0.2)| / 2) + (1 − |−0.2 − 0| / 2) + (1 − |0.2 − 0.4| / 2) + (1 − |0.6 − 0.6| / 2) + (1 − |0.8 − 0.8| / 2)]
        = (1/5)(0.9 + 0.9 + 0.9 + 1 + 1)
        = 0.94.

It indicates that the degree of similarity between the vague sets Ã and B̃ is equal to 0.94.
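As a quick check of Example 1 with the sketch above (our illustration, assuming the same (t, 1 − f) pair encoding):

A = [(0.2, 0.4), (0.3, 0.5), (0.5, 0.7), (0.7, 0.9), (0.8, 1)]
B = [(0.3, 0.5), (0.4, 0.6), (0.6, 0.8), (0.7, 0.9), (0.8, 1)]
print(round(set_similarity(A, B), 2))  # 0.94, matching Example 1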



A Review of Biswas' Methods for Students' Answerscripts Evaluation

Biswas (1995) used the matching function S to measure the degree of similarity between two fuzzy sets (Zadeh, 1965). Let A and B also denote the vector representations of the fuzzy sets A and B, respectively. Then, the degree of similarity S(A, B) between the fuzzy sets A and B can be calculated as follows (Chen, 1988):

S(A, B) = (A · B) / Max(A · A, B · B), (8)

where S(A, B) ∈ [0, 1]. The larger the value of S(A, B), the higher the similarity between the fuzzy sets A and B.
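For comparison with the vague-set measure, here is a minimal sketch of this matching function (our illustration; the vector arguments are assumed to be plain tuples of membership grades):

def matching(A, B):
    # Eq. (8): S(A, B) = (A . B) / max(A . A, B . B) for fuzzy-set vectors
    dot = lambda u, v: sum(ui * vi for ui, vi in zip(u, v))
    return dot(A, B) / max(dot(A, A), dot(B, B))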

Biswas (1995) presented a "fuzzy evaluation method" (fem) for evaluating students' answerscripts based on the matching function S. He used five fuzzy linguistic hedges, called Standard Fuzzy Sets (SFS), for students' answerscripts evaluation, i.e., E (excellent), V (very good), G (good), S (satisfactory) and U (unsatisfactory), where

X = {0%, 20%, 40%, 60%, 80%, 100%},
E = {(0%, 0), (20%, 0), (40%, 0.8), (60%, 0.9), (80%, 1), (100%, 1)},
V = {(0%, 0), (20%, 0), (40%, 0.8), (60%, 0.9), (80%, 0.9), (100%, 0.8)},
G = {(0%, 0), (20%, 0.1), (40%, 0.8), (60%, 0.9), (80%, 0.4), (100%, 0.2)},
S = {(0%, 0.4), (20%, 0.4), (40%, 0.9), (60%, 0.6), (80%, 0.2), (100%, 0)},
U = {(0%, 1), (20%, 1), (40%, 0.4), (60%, 0.2), (80%, 0), (100%, 0)}.

He used the vector representation method to represent the fuzzy sets E, V, G, S and U by their vectors of membership grades, i.e.,

E = (0, 0, 0.8, 0.9, 1, 1),
V = (0, 0, 0.8, 0.9, 0.9, 0.8),
G = (0, 0.1, 0.8, 0.9, 0.4, 0.2),
S = (0.4, 0.4, 0.9, 0.6, 0.2, 0),
U = (1, 1, 0.4, 0.2, 0, 0).

Biswas pointed out that "A", "B", "C", "D" and "E" are letter grades, where 0 ≤ E < 30, 30 ≤ D < 50, 50 ≤ C < 70, 70 ≤ B < 90 and 90 ≤ A ≤ 100. Furthermore, he presented the concept of "mid-grade-points", where the mid-grade-points of the letter grades A, B, C, D and E are P(A), P(B), P(C), P(D) and P(E), respectively, with P(A) = 95, P(B) = 80, P(C) = 60, P(D) = 40 and P(E) = 15. Assume that an evaluator evaluates the first question (i.e., Q.1) of the answerscript of a student using a fuzzy grade sheet as shown in Table 2.

Table 2. A fuzzy grade sheet (Biswas, 1995)

Question No.   Fuzzy mark                          Grade
Q.1            0.1  0.2  0.3  0.6  0.8  0.9
Q.2
Q.3
…              …                                   …
Total mark =

In the second row of Table 2, the fuzzy marks 0.1, 0.2, 0.3, 0.6, 0.8 and 0.9, awarded to the answer of question Q.1, indicate the degrees of the evaluator's satisfaction with that answer at the standard levels 0%, 20%, 40%, 60%, 80% and 100%, respectively.

In the following, we briefly review Biswas' (1995) method for students' answerscript evaluation:

Step 1: For each question in the answerscript, repeatedly perform the following tasks:
(1) The evaluator awards a fuzzy mark Fi to each question Q.i and fills up each cell of the ith row for the first seven columns shown in Table 2, where 1 ≤ i ≤ n. Let Fi also denote the vector representation of the fuzzy mark Fi, where 1 ≤ i ≤ n.
(2) Based on Eq. (8), calculate the degrees of similarity S(E, Fi), S(V, Fi), S(G, Fi), S(S, Fi) and S(U, Fi), respectively, where E, V, G, S and U are the vector representations of the standard fuzzy sets E (excellent), V (very good), G (good), S (satisfactory) and U (unsatisfactory), respectively, and 1 ≤ i ≤ n.
(3) Find the maximum value among the values of S(E, Fi), S(V, Fi), S(G, Fi), S(S, Fi) and S(U, Fi). Assume that S(V, Fi) is the maximum value among them; then award the letter grade "B" to the question Q.i due to the fact that the letter grade "B" corresponds to the standard fuzzy set V (very good). If S(E, Fi) = S(V, Fi) is the maximum value among the values of S(E, Fi), S(V, Fi), S(G, Fi), S(S, Fi) and S(U, Fi), then award the letter grade "A" to the question Q.i due to the fact that the letter grade "A" corresponds to the standard fuzzy set E (excellent).

Step 2: Calculate the total mark of the student as follows:

Total Mark = (1/100) × Σ_{i=1}^{n} [T(Q.i) × P(gi)], (9)

where T(Q.i) denotes the mark allotted to Q.i in the question paper, gi denotes the grade awarded to Q.i by Step 1 of the algorithm, P(gi) denotes the mid-grade-point of gi, and 1 ≤ i ≤ n. Put this total score in the appropriate box at the bottom of the fuzzy grade sheet.
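A short sketch of this step (ours, not Biswas's code; the mid-grade-points are those quoted above, and each question is assumed to be given as a (T(Q.i), gi) pair):

P = {'A': 95, 'B': 80, 'C': 60, 'D': 40, 'E': 15}  # mid-grade-points

def fem_total_mark(questions):
    # Eq. (9): Total Mark = (1/100) * sum over i of T(Q.i) * P(gi)
    return sum(t * P[g] for t, g in questions) / 100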

Biswas (1995) also presented a generalized fuzzy evaluation method (gfem) for students' answerscripts evaluation, where a generalized fuzzy grade sheet shown in Table 3 is used to evaluate the students' answerscripts.

Table 3. A generalized fuzzy grade sheet (Biswas, 1995)

Question No.   Fuzzy mark            Derived letter grade   Mark
Q.1            F11 F12 F13 F14       g11 g12 g13 g14        m1
Q.2            F21 F22 F23 F24       g21 g22 g23 g24        m2
…              …                     …                      …
Total mark =

In the generalized fuzzy grade sheet shown in Table 3, for all j = 1, 2, 3, 4 and for all i, gij denotes the letter grade derived by the fuzzy evaluation method fem for the awarded fuzzy mark Fij, and mi denotes the derived mark awarded to the question Q.i, where

mi = (1/400) × T(Q.i) × Σ_{j=1}^{4} P(gij), (10)

and the Total Mark = Σ_{i=1}^{n} mi.


A New Method for Evaluating Students' Answerscripts Based on the Similarity Measure between Vague Sets

In this section, we present a new method for evaluating students' answerscripts based on the similarity measure between vague sets. Let X be the universe of discourse. We use five fuzzy linguistic hedges, called Standard Vague Sets (SVS), for students' answerscripts evaluation, i.e., Ẽ (excellent), Ṽ (very good), G̃ (good), S̃ (satisfactory) and Ũ (unsatisfactory), where

X = {0%, 20%, 40%, 60%, 80%, 100%},
Ẽ = [0, 0]/0% + [0, 0]/20% + [0, 0]/40% + [0.4, 0.5]/60% + [0.8, 0.9]/80% + [1, 1]/100%,
Ṽ = [0, 0]/0% + [0, 0]/20% + [0, 0]/40% + [0.4, 0.5]/60% + [1, 1]/80% + [0.7, 0.8]/100%,
G̃ = [0, 0]/0% + [0, 0]/20% + [0.4, 0.5]/40% + [1, 1]/60% + [0.8, 0.9]/80% + [0.4, 0.5]/100%,
S̃ = [0, 0]/0% + [0.4, 0.5]/20% + [1, 1]/40% + [0.8, 0.9]/60% + [0.4, 0.5]/80% + [0, 0]/100%,
Ũ = [1, 1]/0% + [1, 1]/20% + [0.4, 0.5]/40% + [0.2, 0.3]/60% + [0, 0]/80% + [0, 0]/100%.

Assume that "A", "B", "C", "D" and "E" are letter grades, where 0 ≤ E < 30, 30 ≤ D < 50, 50 ≤ C < 70, 70 ≤ B < 90 and 90 ≤ A ≤ 100. Assume that an evaluator evaluates the first question (i.e., Q.1) of a student's answerscript using a vague grade sheet as shown in Table 4.

Table 4. A vague grade sheet

Question No.   Vague mark                                                  Derived letter grade
Q.1            [0, 0] [0.1, 0.2] [0.3, 0.4] [0.6, 0.7] [0.7, 0.8] [1, 1]
Q.2
Q.3
…              …                                                           …
Q.n
Total mark =

In the second row of the vague grade sheet shown in Table 4, the vague marks [0, 0], [0.1, 0.2], [0.3, 0.4], [0.6, 0.7], [0.7, 0.8] and [1, 1], awarded to the answer of question Q.1, indicate the degrees of the evaluator's satisfaction with that answer at the levels 0%, 20%, 40%, 60%, 80% and 100%, respectively. Let the vague mark of the answer of question Q.1 be denoted by F̃1. Then, we can see that F̃1 is a vague set of the universe of discourse X, where

X = {0%, 20%, 40%, 60%, 80%, 100%},
F̃1 = [0, 0]/0% + [0.1, 0.2]/20% + [0.3, 0.4]/40% + [0.6, 0.7]/60% + [0.7, 0.8]/80% + [1, 1]/100%.

The proposed vague evaluation method (VEM) for students' answerscripts evaluation is presented as follows:

Step 1: For each question in the answerscript, repeatedly perform the following tasks:
(1) The evaluator awards a vague mark F̃i, represented by a vague set, to each question Q.i by his/her judgment and fills up each cell of the ith row for the first seven columns shown in Table 4, where 1 ≤ i ≤ n.
(2) Based on Eq. (7), calculate the degrees of similarity H(Ẽ, F̃i), H(Ṽ, F̃i), H(G̃, F̃i), H(S̃, F̃i) and H(Ũ, F̃i), respectively, where Ẽ (excellent), Ṽ (very good), G̃ (good), S̃ (satisfactory) and Ũ (unsatisfactory) are standard vague sets.
(3) Find the maximum value among the values of H(Ẽ, F̃i), H(Ṽ, F̃i), H(G̃, F̃i), H(S̃, F̃i) and H(Ũ, F̃i). If H(W̃, F̃i) is the largest value among them, where W̃ ∈ {Ẽ, Ṽ, G̃, S̃, Ũ}, then translate the standard vague set W̃ into the corresponding letter grade, where the standard vague set Ẽ is translated into the letter grade "A", the standard vague set Ṽ is translated into the letter grade "B", the standard vague set G̃ is translated into the letter grade "C", the standard vague set S̃ is translated into the letter grade "D", and the standard vague set Ũ is translated into the letter grade "E". For example, assume that H(Ṽ, F̃i) is the maximum value among the values of H(Ẽ, F̃i), H(Ṽ, F̃i), H(G̃, F̃i), H(S̃, F̃i) and H(Ũ, F̃i); then award grade "B" to the question Q.i due to the fact that the letter grade "B" corresponds to the standard vague set Ṽ (very good). If H(Ẽ, F̃i) = H(Ṽ, F̃i) is the maximum value among the values of H(Ẽ, F̃i), H(Ṽ, F̃i), H(G̃, F̃i), H(S̃, F̃i) and H(Ũ, F̃i), then award the letter grade "A" to the question Q.i due to the fact that the letter grade "A" corresponds to the standard vague set Ẽ (excellent).

Step 2: Calculate the total mark of the student as follows:

Total Mark = (1/100) × Σ_{i=1}^{n} [T(Q.i) × K(gi) × H(W̃, F̃i)], (11)

where T(Q.i) denotes the mark allotted to the question Q.i in the question paper, gi denotes the letter grade awarded to Q.i by Step 1, K(gi) denotes the derived grade-point of the letter grade gi based on the index of optimism λ determined by the evaluator, where λ ∈ [0, 1], H(W̃, F̃i) is the maximum value among the values of H(Ẽ, F̃i), H(Ṽ, F̃i), H(G̃, F̃i), H(S̃, F̃i) and H(Ũ, F̃i), W̃ ∈ {Ẽ, Ṽ, G̃, S̃, Ũ}, such that the derived letter grade awarded to the question Q.i is gi, and 1 ≤ i ≤ n. If 0 ≤ λ < 0.5, then the evaluator is a pessimistic evaluator. If λ = 0.5, then the evaluator is a normal evaluator. If 0.5 < λ ≤ 1.0, then the evaluator is an optimistic evaluator. Assume that the derived letter grade obtained in Step 1 with respect to the question Q.i is gi, where gi ∈ {A, B, C, D, E} corresponds to the grade interval [y1, y2] with 0 ≤ y1 ≤ y2 ≤ 100; then the derived grade-point K(gi) shown in Eq. (11) is calculated as follows:

K(gi) = (1 − λ) × y1 + λ × y2, (12)

where λ is the index of optimism determined by the evaluator, λ ∈ [0, 1], and 0 ≤ y1 ≤ K(gi) ≤ y2 ≤ 100. Put the derived total mark in the appropriate box at the bottom of the vague grade sheet.
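The following sketch (ours, not the paper's) puts Eqs. (11) and (12) together; the grade intervals are those stated above, and each question is assumed to be given as a (T(Q.i), gi, H(W̃, F̃i)) triple:

INTERVALS = {'A': (90, 100), 'B': (70, 90), 'C': (50, 70),
             'D': (30, 50), 'E': (0, 30)}  # letter-grade intervals [y1, y2]

def grade_point(g, lam):
    # Eq. (12): K(g) = (1 - lam) * y1 + lam * y2
    y1, y2 = INTERVALS[g]
    return (1 - lam) * y1 + lam * y2

def vem_total_mark(questions, lam):
    # Eq. (11): Total Mark = (1/100) * sum over i of T(Q.i) * K(gi) * H(W, Fi)
    return sum(t * grade_point(g, lam) * h for t, g, h in questions) / 100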

Example 2: Consider a student's answerscript to an examination of 100 marks. Assume that in total there are four questions to be answered:

TOTAL MARKS = 100,
Q.1 carries 30 marks,
Q.2 carries 30 marks,
Q.3 carries 20 marks,
Q.4 carries 20 marks.

Assume that an evaluator awards the student's answerscript using the vague grade sheet shown in Table 5, where the index of optimism λ determined by the evaluator is 0.60, i.e., λ = 0.60. Assume that "A", "B", "C", "D" and "E" are letter grades, where 0 ≤ E < 30, 30 ≤ D < 50, 50 ≤ C < 70, 70 ≤ B < 90 and 90 ≤ A ≤ 100.

Table 5. Vague grade sheet of Example 2

Question No.   Vague mark                                               Derived letter grade
Q.1            [0, 0] [0, 0] [0, 0] [0.4, 0.5] [1, 1] [0.5, 0.6]
Q.2            [0, 0] [0, 0] [0, 0] [0.4, 0.5] [0.8, 0.9] [1, 1]
Q.3            [0, 0] [0.4, 0.5] [1, 1] [0.6, 0.7] [0.4, 0.5] [0, 0]
Q.4            [0.8, 0.9] [0.5, 0.6] [0.2, 0.3] [0, 0] [0, 0] [0, 0]
Total mark =


From Table 5, we can see that the vague marks of the questions Q.1, Q.2, Q.3 and Q.4 are represented by the vague sets F̃1, F̃2, F̃3 and F̃4, respectively, where

F̃1 = [0, 0]/0% + [0, 0]/20% + [0, 0]/40% + [0.4, 0.5]/60% + [1, 1]/80% + [0.5, 0.6]/100%,
F̃2 = [0, 0]/0% + [0, 0]/20% + [0, 0]/40% + [0.4, 0.5]/60% + [0.8, 0.9]/80% + [1, 1]/100%,
F̃3 = [0, 0]/0% + [0.4, 0.5]/20% + [1, 1]/40% + [0.6, 0.7]/60% + [0.4, 0.5]/80% + [0, 0]/100%,
F̃4 = [0.8, 0.9]/0% + [0.5, 0.6]/20% + [0.2, 0.3]/40% + [0, 0]/60% + [0, 0]/80% + [0, 0]/100%.

[Step 1] According to the standard vague sets Ẽ, Ṽ, G̃, S̃, Ũ and the vague marks F̃1, F̃2, F̃3, F̃4, we can get the vague values shown in Table 6.

Table 6. Vague values of Example 2

t      VẼ(t)       VṼ(t)       VG̃(t)       VS̃(t)       VŨ(t)       VF̃1(t)      VF̃2(t)      VF̃3(t)      VF̃4(t)
0%     [0, 0]      [0, 0]      [0, 0]      [0, 0]      [1, 1]      [0, 0]      [0, 0]      [0, 0]      [0.8, 0.9]
20%    [0, 0]      [0, 0]      [0, 0]      [0.4, 0.5]  [1, 1]      [0, 0]      [0, 0]      [0.4, 0.5]  [0.5, 0.6]
40%    [0, 0]      [0, 0]      [0.4, 0.5]  [1, 1]      [0.4, 0.5]  [0, 0]      [0, 0]      [1, 1]      [0.2, 0.3]
60%    [0.4, 0.5]  [0.4, 0.5]  [1, 1]      [0.8, 0.9]  [0.2, 0.3]  [0.4, 0.5]  [0.4, 0.5]  [0.6, 0.7]  [0, 0]
80%    [0.8, 0.9]  [1, 1]      [0.8, 0.9]  [0.4, 0.5]  [0, 0]      [1, 1]      [0.8, 0.9]  [0.4, 0.5]  [0, 0]
100%   [1, 1]      [0.7, 0.8]  [0.4, 0.5]  [0, 0]      [0, 0]      [0.5, 0.6]  [1, 1]      [0, 0]      [0, 0]

By applying Eq. (5), we can get the scores of the vague values, as shown in Table 7.

Table 7. Scores of the vague values of Example 2

t      S(VẼ(t))   S(VṼ(t))   S(VG̃(t))   S(VS̃(t))   S(VŨ(t))   S(VF̃1(t))   S(VF̃2(t))   S(VF̃3(t))   S(VF̃4(t))
0%     -1         -1         -1         -1         1          -1           -1           -1           0.7
20%    -1         -1         -1         -0.1       1          -1           -1           -0.1         0.1
40%    -1         -1         -0.1       1          -0.1       -1           -1           1            -0.5
60%    -0.1       -0.1       1          0.7        -0.5       -0.1         -0.1         0.3          -1
80%    0.7        1          0.7        -0.1       -1         1            0.7          -0.1         -1
100%   1          0.5        -0.1       -1         -1         0.1          1            -1           -1

By applying Eq. (7), we can get the degree of similarity H(X, Y) between the vague sets X and Y, where X ∈ {Ẽ, Ṽ, G̃, S̃, Ũ} and Y ∈ {F̃1, F̃2, F̃3, F̃4}, as shown in Table 8.

Table 8. The degrees of similarity between the vague sets

H(X, Y)   F̃1      F̃2      F̃3      F̃4
Ẽ         0.900   1.000   0.492   0.342
Ṽ         0.967   0.942   0.508   0.358
G̃         0.792   0.742   0.633   0.350
S̃         0.508   0.458   0.967   0.425
Ũ         0.300   0.250   0.508   0.825


By applying Eq. (7), we can get the degree of similarity H(X, Y) between the vague sets X and Y, where X ∈ {E~, V~, G~, S~, U~} and Y ∈ {F~1, F~2, F~3, F~4}, as shown in Table 8.
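The entries of Table 8 are consistent with reading Eq. (7) as the level-wise average H(X, Y) = (1/6) Σ_t [1 - |S(V_X(t)) - S(V_Y(t))| / 2], i.e., Chen's (1995b) similarity measure between vague sets. A sketch under that assumption, continuing the code above, with the standard vague sets read off Table 6:

    # Standard vague sets (columns E~, V~, G~, S~, U~ of Table 6).
    E = [(0, 0), (0, 0), (0, 0), (0.4, 0.5), (0.8, 0.9), (1, 1)]
    V = [(0, 0), (0, 0), (0, 0), (0.4, 0.5), (1, 1), (0.7, 0.8)]
    G = [(0, 0), (0, 0), (0.4, 0.5), (1, 1), (0.8, 0.9), (0.4, 0.5)]
    S = [(0, 0), (0.4, 0.5), (1, 1), (0.8, 0.9), (0.4, 0.5), (0, 0)]
    U = [(1, 1), (1, 1), (0.4, 0.5), (0.2, 0.3), (0, 0), (0, 0)]

    def similarity(X, Y):
        # Mean, over the six levels, of 1 - |S(x) - S(y)| / 2.
        return sum(1 - abs(score(x) - score(y)) / 2 for x, y in zip(X, Y)) / len(X)

    # similarity(V, F1) -> 0.967, the largest entry in the F~1 column of Table 8.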

Because H(V~, F~1) is the maximum value among H(E~, F~1), H(V~, F~1), H(G~, F~1), H(S~, F~1) and H(U~, F~1), we award grade "B" to the question Q.1, because the letter grade "B" corresponds to the standard vague set V~ (very good).

Because H(E~, F~2) is the maximum value among H(E~, F~2), H(V~, F~2), H(G~, F~2), H(S~, F~2) and H(U~, F~2), we award grade "A" to the question Q.2, because the letter grade "A" corresponds to the standard vague set E~ (excellent).

Because H(S~, F~3) is the maximum value among H(E~, F~3), H(V~, F~3), H(G~, F~3), H(S~, F~3) and H(U~, F~3), we award grade "D" to the question Q.3, because the letter grade "D" corresponds to the standard vague set S~ (satisfactory).

Because H(U~, F~4) is the maximum value among H(E~, F~4), H(V~, F~4), H(G~, F~4), H(S~, F~4) and H(U~, F~4), we award grade "E" to the question Q.4, because the letter grade "E" corresponds to the standard vague set U~ (unsatisfactory).
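The grade-awarding step is thus an argmax over the standard vague sets. A compact sketch, continuing the code above (the pairing of G~ with "C" is our inference by analogy with the four pairings stated in the text):

    STANDARD = {"E": E, "V": V, "G": G, "S": S, "U": U}
    GRADE_OF = {"E": "A", "V": "B", "G": "C", "S": "D", "U": "E"}  # standard set -> letter grade

    def derive_grade(F):
        # Award the letter grade of the standard vague set most similar to F.
        name = max(STANDARD, key=lambda n: similarity(STANDARD[n], F))
        return GRADE_OF[name], round(similarity(STANDARD[name], F), 3)

    # derive_grade(F1) -> ("B", 0.967); derive_grade(F4) -> ("E", 0.825)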

[Step 2] Because 90 ≤ A ≤ 100, 70 ≤ B < 90, 30 ≤ D < 50 and 0 ≤ E < 30, where "A", "B", "D" and "E" are letter grades, and the index of optimism λ determined by the evaluator is 0.60 (i.e., λ = 0.60), based on Eq. (12), we can get the following results:

K(A) = (1 - 0.60) × 90 + 0.60 × 100 = 96,
K(B) = (1 - 0.60) × 70 + 0.60 × 90 = 82,
K(D) = (1 - 0.60) × 30 + 0.60 × 50 = 42,
K(E) = (1 - 0.60) × 0 + 0.60 × 30 = 18.
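In code, Eq. (12) is a single convex combination of the lower and upper bounds of a grade's range; a sketch using the ranges stated above:

    LAMBDA = 0.60  # the evaluator's index of optimism

    # Grade ranges: 0 <= E < 30, 30 <= D < 50, 50 <= C < 70, 70 <= B < 90, 90 <= A <= 100.
    RANGES = {"A": (90, 100), "B": (70, 90), "C": (50, 70), "D": (30, 50), "E": (0, 30)}

    def grade_point(g, lam=LAMBDA):
        # K(g) per Eq. (12): (1 - lam) * lower bound + lam * upper bound.
        lo, hi = RANGES[g]
        return (1 - lam) * lo + lam * hi

    # grade_point("B") -> 82.0, matching K(B) above.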

Because the questions Q.1, Q.2, Q.3 and Q.4 carry 30 marks, 30 marks, 20 marks and 20 marks, respectively, and because H(V~, F~1) = 0.967, H(E~, F~2) = 1.000, H(S~, F~3) = 0.967 and H(U~, F~4) = 0.825, based on Eq. (11), the total mark of the student is evaluated as follows:

(1/100) × (30 × 82 × 0.967 + 30 × 96 × 1.000 + 20 × 42 × 0.967 + 20 × 18 × 0.825)
= (1/100) × (2378.82 + 2880 + 812.28 + 297)
= 63.681
= 64 (assuming that no half mark is given in the total mark).
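Putting the pieces together, Eq. (11) weights each question's grade point by the marks the question carries and by the similarity that selected its grade. A sketch that reproduces the total above, continuing the code so far:

    marks = {"Q.1": 30, "Q.2": 30, "Q.3": 20, "Q.4": 20}
    # (derived letter grade, maximal similarity H) per question, from Step 1 and Table 8.
    results = {"Q.1": ("B", 0.967), "Q.2": ("A", 1.000), "Q.3": ("D", 0.967), "Q.4": ("E", 0.825)}

    total = sum(marks[q] * grade_point(g) * h for q, (g, h) in results.items()) / 100
    print(round(total, 3), round(total))  # 63.681 64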

A Generalized Method for Evaluating Students' Answerscripts Based on the Similarity Measure between Vague Sets

In this section, we present a generalized vague evaluation method (GVEM) for evaluating students' answerscripts based on the similarity measure between vague sets, where the generalized vague grade sheet shown in Table 9 is used to evaluate the students' answerscripts.

Table 9. A generalized vague grade sheet
Question No. | Sub-questions | Vague mark | Derived letter grade | Mark
Q.1 | Q.11 | F~11 | g11 | m1
    | Q.12 | F~12 | g12 |
    | Q.13 | F~13 | g13 |
    | Q.14 | F~14 | g14 |
Q.2 | Q.21 | F~21 | g21 | m2
    | Q.22 | F~22 | g22 |
    | Q.23 | F~23 | g23 |
    | Q.24 | F~24 | g24 |
… | … | … | … | …
Q.n | Q.n1 | F~n1 | gn1 | mn
    | Q.n2 | F~n2 | gn2 |
    | Q.n3 | F~n3 | gn3 |
    | Q.n4 | F~n4 | gn4 |
Total mark =

In the generalized vague grade sheet shown in Table 9, each question Q.i consists of four sub-questions, i.e., Q.i1, Q.i2, Q.i3 and Q.i4. For all j = 1, 2, 3, 4 and for all i, gij is the letter grade derived by the proposed vague evaluation method VEM from the vague mark F~ij awarded to the sub-question Q.ij, and mi is the derived mark awarded to the question Q.i,

mi = (1/400) × T(Q.i) × Σ_{j=1..4} [K(gij) × H(W~, F~ij)],  (13)

and

Total Mark = Σ_{i=1..n} mi,

where T(Q.i) denotes the mark allotted to Q.i in the question paper, gij denotes the derived letter grade awarded to Q.ij, and K(gij) denotes the derived grade-point of the letter grade gij based on the index of optimism λ determined by the evaluator, where λ ∈ [0, 1]. H(W~, F~ij) is the maximum value among H(E~, F~ij), H(V~, F~ij), H(G~, F~ij), H(S~, F~ij) and H(U~, F~ij), W~ ∈ {E~, V~, G~, S~, U~}, such that the derived letter grade awarded to the sub-question Q.ij is gij, where 1 ≤ j ≤ 4 and 1 ≤ i ≤ n. If 0 ≤ λ < 0.5, then the evaluator is a pessimistic evaluator. If λ = 0.5, then the evaluator is a normal evaluator. If 0.5 < λ ≤ 1, then the evaluator is an optimistic evaluator.
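Continuing the sketch above, Eq. (13) extends Eq. (11) from questions to sub-questions; the factor 1/400 normalizes over the four sub-questions, each of whose grade point can reach 100. The per-question data below is illustrative, not taken from the paper:

    def question_mark(T, sub_results, lam=LAMBDA):
        # m_i per Eq. (13): T(Q.i)/400 times the sum of K(g_ij) * H(W~, F~ij).
        return T / 400 * sum(grade_point(g, lam) * h for g, h in sub_results)

    # Hypothetical question worth 20 marks, with (grade, similarity) per sub-question:
    m = question_mark(20, [("A", 0.95), ("B", 0.90), ("C", 1.00), ("B", 0.85)])
    # The total mark is then the sum of the m_i over all questions.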


Q.2 carries 25 marks,
Q.3 carries 25 marks,
Q.4 carries 30 marks.

Assume that the index of optimism λ of the evaluator is 0.60 (i.e., λ = 0.60). The evaluator uses Biswas' method (1995) and the proposed method to evaluate the student's answerscript on different days. The results are shown in Fig. 2 and Fig. 3, respectively. A comparison of the evaluation results of the student's answerscript is shown in Table 10. From Table 10, we can see that the proposed method is more stable in evaluating students' answerscripts than Biswas' method (1995), and it evaluates them in a more flexible and more intelligent manner.

July 1, 2006
Question No. | Satisfaction Levels | Grade
Q.1 | 0 0 0 0.6 0.9 0.8 |
Q.2 | 0 0 0.6 0.9 0.8 0 |
Q.3 | 0 0 0 0.6 0.8 0.9 |
Q.4 | 0 0.6 0.9 0.8 0.2 0 |
Total Mark =

July 2, 2006
Question No. | Satisfaction Levels | Grade
Q.1 | 0 0 0 0.8 0.9 1 |
Q.2 | 0 0 0.7 0.8 0.9 0 |
Q.3 | 0 0 0 0.7 0.9 0.8 |
Q.4 | 0 0.5 0.8 0.7 0 0 |
Total Mark =

July 3, 2006
Question No. | Satisfaction Levels | Grade
Q.1 | 0 0 0 0.6 0.9 0.7 |
Q.2 | 0 0 0.6 0.8 0.7 0 |
Q.3 | 0 0 0 0.5 0.7 0.9 |
Q.4 | 0 0.5 0.8 0.6 0 0 |
Total Mark =

July 4, 2006
Question No. | Satisfaction Levels | Grade
Q.1 | 0 0 0 0.6 0.8 0.7 |
Q.2 | 0 0 0.5 0.9 0.7 0 |
Q.3 | 0 0 0 0.7 0.9 0.8 |
Q.4 | 0 0.6 0.9 0.7 0 0 |
Total Mark =

Figure 2. Evaluating the student's answerscript on different days using Biswas' method (1995)

July 1, 2006
Question No. | Vague marks | Grade
Q.1 | [0, 0] [0, 0] [0, 0] [0.6, 0.7] [0.8, 0.9] [0.8, 0.9] |
Q.2 | [0, 0] [0, 0] [0.6, 0.7] [0.8, 0.9] [0.8, 0.9] [0, 0] |
Q.3 | [0, 0] [0, 0] [0, 0] [0.6, 0.7] [0.8, 0.9] [0.8, 0.9] |
Q.4 | [0, 0] [0.5, 0.6] [0.8, 0.9] [0.7, 0.8] [0.1, 0.2] [0, 0] |
Total mark =

July 2, 2006
Question No. | Vague marks | Grade
Q.1 | [0, 0] [0, 0] [0, 0] [0.7, 0.8] [0.8, 0.9] [0.9, 1.0] |
Q.2 | [0, 0] [0, 0] [0.6, 0.7] [0.8, 0.9] [0.8, 0.9] [0, 0] |
Q.3 | [0, 0] [0, 0] [0, 0] [0.7, 0.8] [0.8, 0.9] [0.8, 0.9] |
Q.4 | [0, 0] [0.5, 0.6] [0.8, 0.9] [0.7, 0.8] [0, 0] [0, 0] |
Total mark =

July 3, 2006
Question No. | Vague marks | Grade
Q.1 | [0, 0] [0, 0] [0, 0] [0.6, 0.7] [0.8, 0.9] [0.7, 0.8] |
Q.2 | [0, 0] [0, 0] [0.6, 0.7] [0.8, 0.9] [0.7, 0.8] [0, 0] |
Q.3 | [0, 0] [0, 0] [0, 0] [0.5, 0.6] [0.7, 0.8] [0.8, 0.9] |
Q.4 | [0, 0] [0.5, 0.6] [0.8, 0.9] [0.6, 0.7] [0, 0] [0, 0] |
Total mark =

July 4, 2006
Question No. | Vague marks | Grade
Q.1 | [0, 0] [0, 0] [0, 0] [0.6, 0.7] [0.8, 0.9] [0.8, 0.9] |
Q.2 | [0, 0] [0, 0] [0.5, 0.6] [0.8, 0.9] [0.7, 0.8] [0, 0] |
Q.3 | [0, 0] [0, 0] [0, 0] [0.7, 0.8] [0.8, 0.9] [0.8, 0.9] |
Q.4 | [0, 0] [0.6, 0.7] [0.8, 0.9] [0.7, 0.8] [0, 0] [0, 0] |
Total mark =

Figure 3. Evaluating the student's answerscript on different days using the proposed method

Table 10. A comparison of the evaluation results for different methods
Days | Total mark, Biswas' method (1995) | Total mark, the proposed method
July 1, 2006 | 69 | 68
July 2, 2006 | 72 | 68
July 3, 2006 | 55 | 68
July 4, 2006 | 55 | 68

The Merits of the Proposed Methods

The proposed methods have the following advantages:
(1) The proposed methods are more flexible and more intelligent than Biswas' methods (1995), because we use vague sets rather than fuzzy sets to represent the vague mark of each question; the evaluator can use vague values to indicate the degree of his or her satisfaction with each answer. In particular, the proposed methods are useful when the assessment involves subjective evaluation.
(2) The proposed methods are more stable in evaluating students' answerscripts than Biswas' methods (1995). They can evaluate students' answerscripts in a more flexible and more intelligent manner.

Conclusions

In this paper, we have presented two new methods for evaluating students' answerscripts based on the similarity measure between vague sets. The vague marks awarded to the answers in the students' answerscripts are represented by vague sets, where each element belonging to a vague set is represented by a vague value. An index of optimism λ determined by the evaluator is used to indicate the degree of optimism of the evaluator, where λ ∈ [0, 1]. Because the proposed methods use vague sets rather than fuzzy sets to evaluate students' answerscripts, they can evaluate them in a more flexible and more intelligent manner. The experimental results show that the proposed methods evaluate students' answerscripts more stably than Biswas' methods (1995).

Acknowledgements

The authors would like to thank Professor Jason Chiyu Chan, Department of Education, National Chengchi University, Taipei, Taiwan, Republic of China, for providing very helpful comments and suggestions. This work was supported in part by the National Science Council, Republic of China, under Grant NSC 95-2221-E-011-117-MY2.

References

Biswas, R. (1995). An application of fuzzy sets in students' evaluation. Fuzzy Sets and Systems, 74 (2), 187-194.

Chang, D. F., & Sun, C. M. (1993). Fuzzy assessment of learning performance of junior high school students. Paper presented at the First National Symposium on Fuzzy Theory and Applications, June 25-26, 1993, Hsinchu, Taiwan.

Chen, S. M. (1988). A new approach to handling fuzzy decision-making problems. IEEE Transactions on Systems, Man, and Cybernetics, 18 (6), 1012-1016.

Chen, S. M. (1995a). Arithmetic operations between vague sets. Paper presented at the International Joint Conference of CFSA/IFIS/SOFT'95 on Fuzzy Theory and Applications, December 7-9, 1995, Taipei, Taiwan.

Chen, S. M. (1995b). Measures of similarity between vague sets. Fuzzy Sets and Systems, 74 (2), 217-223.

Chen, S. M. (1997). Similarity measures between vague sets and between elements. IEEE Transactions on Systems, Man, and Cybernetics-Part B: Cybernetics, 27 (1), 153-158.

Chen, S. M. (1999). Evaluating the rate of aggregative risk in software development using fuzzy set theory. Cybernetics and Systems, 30 (1), 57-75.

Chen, S. M., & Lee, C. H. (1999). New methods for students' evaluation using fuzzy sets. Fuzzy Sets and Systems, 104 (2), 209-218.

Chen, S. M., & Wang, J. Y. (1995). Document retrieval using knowledge-based fuzzy information retrieval techniques. IEEE Transactions on Systems, Man, and Cybernetics, 25 (5), 793-803.

Cheng, C. H., & Yang, K. L. (1998). Using fuzzy sets in education grading system. Journal of Chinese Fuzzy Systems Association, 4 (2), 81-89.

Chiang, T. T., & Lin, C. M. (1994). Application of fuzzy theory to teaching assessment. Paper presented at the 1994 Second National Conference on Fuzzy Theory and Applications, September 15-17, 1994, Taipei, Taiwan.

Echauz, J. R., & Vachtsevanos, G. J. (1995). Fuzzy grading system. IEEE Transactions on Education, 38 (2), 158-165.

Frair, L. (1995). Student peer evaluations using the analytic hierarchy process method. Paper presented at the Frontiers in Education Conference, November 1-4, 1995, Atlanta, GA, USA.

Gau, W. L., & Buehrer, D. J. (1993). Vague sets. IEEE Transactions on Systems, Man, and Cybernetics, 23 (2), 610-614.

Hwang, G. J., Lin, B. M. T., & Lin, T. L. (2006). An effective approach for test-sheet composition with large-scale item banks. Computers & Education, 46 (2), 122-139.

Kaburlasos, V. G., Marinagi, C. C., & Tsoukalas, V. T. (2004). PARES: A software tool for computer-based testing and evaluation used in the Greek higher education system. Paper presented at the 2004 IEEE International Conference on Advanced Learning Technologies, August 30 - September 1, 2004, Joensuu, Finland.

Law, C. K. (1996). Using fuzzy numbers in education grading system. Fuzzy Sets and Systems, 83 (3), 311-323.

Liu, C. L. (2005). Using mutual information for adaptive item comparison and student assessment. Educational Technology & Society, 8 (4), 100-119.

Ma, J., & Zhou, D. (2000). Fuzzy set approach to the assessment of student-centered learning. IEEE Transactions on Education, 43 (2), 237-241.

McMartin, F., Mckenna, A., & Youssefi, K. (2000). Scenario assignments as assessment tools for undergraduate engineering education. IEEE Transactions on Education, 43 (2), 111-119.

Nykanen, O. (2006). Inducing fuzzy models for student classification. Educational Technology & Society, 9 (2), 223-234.

Pears, A., Daniels, M., Berglund, A., & Erickson, C. (2001). Student evaluation in an international collaborative project course. Paper presented at the First International Workshop on Internet-Supported Education, January 8-12, 2001, San Diego, CA, USA.

Wang, H. Y., & Chen, S. M. (2006a). New methods for evaluating the answerscripts of students using fuzzy sets. Paper presented at the 19th International Conference on Industrial, Engineering & Other Applications of Applied Intelligent Systems, June 27-30, 2006, Annecy, France.

Wang, H. Y., & Chen, S. M. (2006b). New methods for evaluating students' answerscripts using fuzzy numbers associated with degrees of confidence. Paper presented at the 2006 IEEE International Conference on Fuzzy Systems, July 16-21, 2006, Vancouver, BC, Canada.

Wang, H. Y., & Chen, S. M. (2006c). New methods for evaluating students' answerscripts using vague values. Paper presented at the 9th Joint Conference on Information Sciences, October 8-11, 2006, Kaohsiung, Taiwan.

Wang, H. Y., & Chen, S. M. (2006d). Evaluating students' answerscripts based on the similarity measure between vague sets. Paper presented at the 11th Conference on Artificial Intelligence and Applications, December 15-16, 2006, Kaohsiung, Taiwan.

Weon, S., & Kim, J. (2001). Learning achievement evaluation strategy using fuzzy membership function. Paper presented at the 31st ASEE/IEEE Frontiers in Education Conference, October 10-13, 2001, Reno, NV, USA.

Wu, M. H. (2003). A research on applying fuzzy set theory and item response theory to evaluate learning performance. Master's thesis, Department of Information Management, Chaoyang University of Technology, Taiwan.

Zadeh, L. A. (1965). Fuzzy sets. Information and Control, 8, 338-353.


Gogoulou, A., Gouli, E., Grigoriadou, M., Samarakou, M., & Chinou, D. (2007). A Web-based Educational Setting Supporting Individualized Learning, Collaborative Learning and Assessment. Educational Technology & Society, 10 (4), 242-256.

A Web-based Educational Setting Supporting Individualized Learning, Collaborative Learning and Assessment

Agoritsa Gogoulou 1, Evangelia Gouli 1, Maria Grigoriadou 1, Maria Samarakou 2 and Dionisia Chinou 1
1 Department of Informatics & Telecommunications, University of Athens, Greece // {rgog, lilag, gregor}@di.uoa.gr, dionisiagr@gmail.com
2 Department of Energy Technology, Technological Educational Institution of Athens, Greece // marsam@teiath.gr

ABSTRACT
In this paper, we present a web-based educational setting, referred to as SCALE (Supporting Collaboration and Adaptation in a Learning Environment), which aims to serve learning and assessment. SCALE enables learners to (i) work on individual and collaborative activities proposed by the environment with respect to learners' knowledge level, (ii) participate actively in the assessment process in the context of self-, peer- or collaborative-assessment activities, (iii) work with educational environments, embedded or integrated in SCALE, that facilitate the elaboration of the activities and stimulate learners' active involvement, (iv) use tools that support synchronous and asynchronous collaboration/communication and promote learners' interaction and reflection, and (v) have access to feedback components tailored to their own preferences. Also, learners have control over the navigation route through the provided activities and feedback components, personalizing the learning process in this way. The results of the formative evaluation of the environment are positive and encouraging regarding the usefulness of the supported capabilities and tools.

Keywords
Learning, Collaboration, Assessment, Feedback, Adaptation

Introduction

An emerging trend in education worldwide is a movement of focus from teaching to learning, and from an individualistic and objectivist view of learning to a social constructivist view (Palinscar, 1998; Reigeluth, 1999; Vosniadou, 2001). The underlying principles of the social constructivist view of learning are that knowledge is constructed through the learner's active interaction with the environment and that the construction of knowledge is socially mediated. It is claimed that effective collaboration is a successful and powerful learning method (Soller, 2001). Collaborative learning activities immerse students in challenging tasks or questions, and enable them to become immediate practitioners and develop higher-order reasoning and problem-solving skills. In this context, collaborative learning is used increasingly, and the advent of communication technologies has made computer-mediated collaboration possible.

Along with the learning process, assessment is considered an important component of an educational setting. Assessment plays a significant role in helping learners learn when it is interwoven with learning and instruction instead of being postponed until the end of instruction (Shepard, 2000). Moreover, assessment helps students identify what they have already learned, observe their personal learning progress and decide how to further direct their learning process. As knowledge construction necessitates higher-order thinking, new forms of assessment are required. Assessment methods such as self-, peer- and collaborative-assessment have been introduced in recent years, aiming to enhance/promote learning and integrate assessment with instruction. Self-assessment refers to the involvement of learners in making judgments about their own work/performance and aims at fostering reflection on one's own learning and work (Sluijsmans et al., 1999). Peer-assessment refers to those activities of learners in which they judge and evaluate the work and/or the performance of their peers, while in collaborative-assessment, learners and instructor collaborate in order to clarify objectives and standards/criteria, negotiate details of the assessment and discuss any misunderstandings that exist (Sluijsmans et al., 1999). Feedback is considered an integral part of the assessment process and a key aspect of learning and instruction (Mory, 1996). Feedback should guide and tutor learners towards the achievement of the underlying learning goals as well as stimulate and cultivate processes like self-explanation, self-regulation and self-evaluation (Chi et al., 1994).

Cognitive researchers view each individual learner as paramount in mediating learning. The learner becomes the focus of the learner-instruction transaction, and instructional sequence decisions and options are adapted to the individual



learner's characteristics. Also, various motivational theories emphasize the importance of learner control. Control gives individuals the possibility to make choices, to affect outcomes and to feel more competent, and it provokes sustained and intense effort (Lepper, 1985). Merrill (1980) asserted that control of learning needs to be given to learners, since in that way they have the possibility to learn better how to learn. Educational environments that attempt to combine technological learning tools with personalization that caters for individual characteristics and learning preferences have the potential to radically alter the landscape of learning.

In this context, various research efforts and projects focus on the development of web-based learning environments that support either (i) individualized learning (Papanikolaou et al., 2003; Stern & Woolf, 2000; Weber & Brusilovsky, 2001), by making adjustments in the educational environment in order to accommodate a diversity of learner needs and abilities, or (ii) collaborative learning (Rosatelli & Self, 2004; Scardamalia & Bereiter, 1994; Vizcaino et al., 2000), by providing various means of dialogue and actions, facilities for students' self-regulation/guidance, etc., to support learners in their communication and in the accomplishment of collaborative activities, or (iii) assessment (Conejo et al., 2004; Sung et al., 2005), by offering learners opportunities to identify what they have already learned and what they are able to do, and by offering teachers the means to administer the assessment process.

In line with the above efforts, we developed a web-based educational setting, referred to as SCALE (Supporting Collaboration and Adaptation in a Learning Environment) (available at http://hermes.di.uoa.gr:8080/scale), aiming to integrate learning and assessment by offering capabilities for individualized and collaborative learning as well as assessment. More specifically, SCALE enables learners to
• work on individual and collaborative activities which are developed on the basis of contemporary theories of learning and proposed by the environment with respect to learners' knowledge level,
• participate actively in the assessment process in the context of self-, peer- or collaborative-assessment activities,
• work with educational environments, embedded or integrated in SCALE, that facilitate the elaboration of the activities and stimulate learners' active involvement,
• use tools that support and promote synchronous and asynchronous collaboration/communication,
• have access to feedback tailored to their own preferences, and
• have control over the navigation route through the provided activities and feedback components.

The paper is structured as follows: In the next section, we give an outline of the theoretical foundations that guided the development of SCALE. We then describe how the learning setting of SCALE is modeled in terms of (i) the learning activities, (ii) the supported feedback components and (iii) the learner and group models. The tools and environments supporting learning, collaboration and assessment are briefly presented, followed by a description of the adaptive capabilities of SCALE. The main functionalities of SCALE are outlined through exemplary screen shots. Finally, the paper discusses the results of three empirical studies that were conducted in the context of the formative evaluation of the environment. The paper ends with the main points of our work and our near-future plans.

Theoretical Foundations

The design principles of SCALE rest on (i) Activity Theory, which is used as a framework for modeling learning situations where individualized learning is interwoven with collaborative learning and the concept of activity serves as a unit of analysis (Hill et al., 2003), (ii) researchers' suggestions that assessment should be treated as a tool for learning and that powerful learning environments should encompass both instruction and assessment (Dochy & McDowell, 1997; Shepard, 2000), and (iii) the view that instruction and feedback should be aligned, as much as possible, with each individual learner's characteristics (Jonassen & Grabowski, 1993).

Central to Activity Theory is the notion of activity; an activity is seen as a system of human "doing" whereby a subject works on an object, employing mediational tools, in order to attain a desired outcome. Engeström (1987) developed an extended model of an activity, which adds the component of community; it then adds rules to mediate between subject and community, and the division of labour to mediate between object and community. That is, rules cover both explicit and implicit norms, conventions, and social relations within a community, while division of labour refers to the explicit and implicit organisation of the community as related to the transformation process of the object into the outcome (Kuutti, 1995). In the framework of the SCALE environment, individualized learning is realized by


enabling the learner (subject) to work on individual activities with a specific context (object), which results in a specific outcome, utilizing various tools (mediational tools) that are considered necessary for the accomplishment of the activity (Figure 1a). Collaborative learning takes place through collaborative activities where learners (subject) collaborate, in groups of up to four members (community), in the context of a specific collaborative learning activity (object), utilizing various tools (mediational tools) and undertaking specific roles which determine the responsibilities and duties of each learner (division of labour) as well as the rules of the collaboration (rules) (Figure 1b).

Figure 1. Application of the Activity Model in SCALE

While traditional assessment focuses on grading and ranking and emphasizes the need to find out whether the student knows, understands, or is able to do, the new role of assessment emphasizes the need to find out what the student knows, understands or is able to do. Many researchers suggest that students will learn more if instruction and assessment are integrally related, and that the provision of information about the quality of students' work, as well as about what they can do to improve, is crucial for maximizing learning (Pellegrino et al., 2001). To this end, assessment should be integrated with feedback to permit learning to become a logical outcome (Taras, 2002), as learners need to know what they are trying to accomplish and how close they are coming to the goal, and need to be guided/supported towards the achievement of the underlying goal. Moreover, feedback should be aligned, as much as possible, with each individual learner's characteristics, since individuals differ in their general skills, aptitudes and preferences for processing information, constructing meaning from it and/or applying it to new situations (Jonassen & Grabowski, 1993). Furthermore, self-, peer- and collaborative-assessment are alternatives in assessment that have recently received great attention, as they are considered part of the learning process through which skills are developed (Sluijsmans et al., 1999). In this direction, SCALE supports the automatic assessment of the activities; self-, peer- and collaborative-assessment; and the provision of informative and tutoring feedback components tailored to the learner's individual characteristics.

In the following, we present the models adopted and developed for (i) the representation of the SCALE learning setting in terms of the learning activities, (ii) the supported feedback components and (iii) the learner and the group model as a whole.

Modelling Learning Setting in SCALE

The SCALE learning setting aims to serve learning and assessment by supporting an educational framework, which determines the educational function and the educational/didactical approach followed. The educational function concerns either learning (knowledge construction) or assessment (ascertainment of learners' prior knowledge, formative assessment or summative assessment) (Figure 2). For the accomplishment of the educational function, the learning setting may exploit an educational/didactical approach that best supports and facilitates the educational function under consideration (e.g. the educational approach of concept mapping may effectively serve the ascertainment of learners' prior knowledge) and may require the use of a specific educational tool that facilitates the realization of the educational/didactical approach (e.g. in the case of concept mapping, the COMPASS environment is used; see section "Tools Supporting Learning and Assessment"). The learner is engaged actively in the learning setting by working out activities which have been developed to address and serve the underlying educational functions and have been designed on the principles of the underlying educational/didactical approaches. SCALE attempts to support and guide learners by providing a framework that includes the feedback components (informative and tutoring), the notebooks (described analytically in the following) and the indicators which provide information about the elaboration of the activities (e.g. the number of learners that have worked out an activity, the times that learners asked for feedback and the type of feedback provided, the number of groups that have worked out a collaborative activity).

Figure 2. The model of the learning setting in SCALE

Modelling Learning Activities

An activity in SCALE serves a specific learning goal, which corresponds to fundamental concept(s) of the subject matter (Figure 2). The learning goal is further analysed into learning outcomes that may be classified at the Comprehension level (Remember + Understand), the Application level (Apply), the Checking-Criticizing level (Evaluate), and the Creation level (Analyse + Create) (Gogoulou et al., 2005a). The activity has a so-called action framework, which determines the sub-activities that address and realize the outcomes of the activity. The sub-activities may be individual or collaborative. Each sub-activity addresses learning outcomes that are classified at the abovementioned levels. The activities/sub-activities may have different difficulty levels and different degrees of importance for the accomplishment of the underlying goal, with respect to the educational function and the addressed learning outcomes. In the case of a collaborative activity/sub-activity, the action framework also determines the collaboration model that learners follow; the collaboration model specifies the number of group members, the role of each member and the moderator of the group, who is responsible for the submission of the common work and the coordination of the collaborative process. Depending on the educational function that the activity serves and the underlying outcomes, the assessment may take one of the following forms:
• Automatic assessment: In the case of activities including closed questions (i.e. multiple choice, true-false, fill-the-blank), SCALE can automatically assess the learner's answer. The automatic assessment of concept mapping activities is also supported, as these are accomplished by means of the COMPASS environment (see section "Tools Supporting Learning and Assessment").
• Self-, peer- and collaborative-assessment: These are three forms of assessment that enable learners to participate actively in the assessment process, to get inspiration from their peers' work, and to develop skills such as critical thinking, teamwork, self-monitoring and regulation. These forms of assessment are accomplished by means of the PECASSE environment (see section "Tools Supporting Learning and Assessment").
• Assessment by the teacher: In case none of the above forms is supported, the teacher is responsible for assessing the activity, informing the learner about his/her performance and guiding/tutoring him/her appropriately.

Modelling Feedback Components

Feedback is considered a key aspect of learning and instruction. Characteristics that influence the effectiveness of feedback concern the type of feedback, the amount of information provided, and the adaptation to learners' individual differences. In this context, multiple informative and tutoring feedback components are provided during the elaboration of the activities in SCALE. The informative feedback components (i.e. correctness/incorrectness of response and performance feedback) inform learners about their current state; this information is included in the learner model, which is maintained by the environment during the interaction. The tutoring feedback components aim to tutor/guide learners and are structured at two levels, the activity level and the sub-activity level. The feedback components of the sub-activity level refer to the concepts of the sub-activity under consideration, while at the activity level, feedback components are more general and address concepts/topics of the activity. The tutoring feedback components are associated with various types of knowledge modules (feedback types) and are distinguished into two categories: explanatory and exploratory. The explanatory feedback may include knowledge modules such as a description or a definition of the concept/topic and the correct response, whilst the exploratory feedback may include (i) an image, (ii) an example, (iii) an advice or an instruction on how to proceed, (iv) a question giving students a hint on what to think about, (v) a case study, (vi) a similar activity followed by its answer, and (vii) any answers given to the specific activity by other learners.

The different categories and types of knowledge modules aim to serve learners' individual preferences and to cultivate skills such as critical and analytical thinking, the ability to compare and combine alternative solutions, etc. In any case, the teacher is responsible for designing and developing the appropriate knowledge modules of each level, taking into account several factors such as the content of the activity/sub-activity under consideration, the difficulty level of the specific activity and the addressed learning outcomes.

Modelling Learner and Group of Learners

The Learner Model (LM) reflects specific characteristics of the learner and hence is used as the main source of the adaptive behaviour of SCALE. The information held is divided into domain-dependent and domain-independent information. As far as the domain-dependent information is concerned, the LM keeps information about (i) the learner's knowledge level (qualitative and quantitative estimation) with respect to the learning goals and activities that s/he has worked on, and (ii) the learner's behaviour during his/her interaction with the environment, in terms of the number of times that feedback was asked for, the type of feedback proposed/selected, the time spent on an activity, etc. As far as the domain-independent information is concerned, the LM keeps general information about the learner, such as username, profession, the learner's preferences regarding feedback types, and the last time/date the learner logged on/off. The LM is dynamically updated during the learner's interaction in order to keep track of the learner's "current state". During the interaction, learners may access their model and see the information held concerning their progress and interaction behaviour. They also have the possibility to modify their initially declared preferences regarding the supported types of feedback components. The externalisation of the learner model aims to support the self-regulation and reflection processes and to enable the learner to modify the domain-independent information kept in the LM. The Group Model (GM) holds information for the group as a whole: the activities that the group has elaborated on, the learners constituting the group, the model of collaboration followed during the elaboration of the activities, and the date/time the group spent on each activity.
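As a rough illustration only (the field names below are ours, inferred from the description above; they are not SCALE's actual schema), the two models could be represented as follows:

    from dataclasses import dataclass, field

    @dataclass
    class LearnerModel:
        # Domain-independent information.
        username: str
        profession: str
        feedback_preferences: list[str] = field(default_factory=list)
        last_login: str = ""
        # Domain-dependent information, keyed by learning goal or activity.
        knowledge_level: dict[str, float] = field(default_factory=dict)  # quantitative estimate
        knowledge_quality: dict[str, str] = field(default_factory=dict)  # qualitative estimate
        interaction: dict[str, dict] = field(default_factory=dict)       # feedback asked, time spent, ...

    @dataclass
    class GroupModel:
        members: list[str]
        activities: list[str]
        collaboration_model: str
        time_spent: dict[str, str] = field(default_factory=dict)  # date/time per activity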



Tools Supporting Learning and Assessment

For the elaboration of an activity, as well as for the promotion of the learner's interaction and reflection, SCALE offers various tools, either embedded or integrated in the environment. In the following, an outline of these tools is given.

Concept Mapping Environment

In case the activity/sub-activity concerns a concept mapping task, the COMPASS environment is used (Gouli et al., 2006b). COMPASS (COncept MaP ASSessment & learning environment) (available at http://hermes.di.uoa.gr/compass) is a web-enabled concept mapping learning environment which aims to assess the learner's understanding as well as to support the learning process by employing a variety of concept mapping activities, applying a scheme for the qualitative and quantitative estimation of the learner's knowledge, and providing different informative, tutoring and reflective feedback components tailored to the learner's individual characteristics and needs.

Depending on the outcomes, the activities may employ different concept mapping tasks, such as the construction of a map, or the evaluation/correction, extension or completion of a given map. The learners may have at their disposal a list of concepts and/or a list of relationships to use in the task, and/or may be free to add the desired concepts/relationships. The provided lists may contain not only the required concepts/relationships but also concepts/relationships that play the role of distracters.

The learner's concept map may be assessed automatically by COMPASS. The analysis of the map is based on (i) the qualitative characterization of the errors, aiming to contribute to the qualitative diagnosis of the learner's knowledge (i.e. the learner's incomplete understanding/beliefs and false beliefs), and (ii) the quantitative analysis, aiming to evaluate the learner's knowledge level on the central concept of the map (Gouli et al., 2005). The results derived from the map analysis are presented to learners in an appropriate form during the feedback process. The feedback provided in COMPASS aims to serve the processes of assessment and learning by (i) informing learners about their performance, (ii) guiding and tutoring learners in order to identify their false beliefs, focus on specific errors, reconstruct their knowledge and achieve specific learning outcomes addressed by the activity/task, and (iii) supporting reflection, in terms of encouraging learners to "stop and think" and giving them hints on what to think about (Gouli et al., 2006b). The adaptive functionality of the feedback process is based on the learner's knowledge level, preferences and interaction behaviour, and is implemented through (i) the technology of adaptive presentation, which supports the provision of alternative forms of feedback and feedback components, and (ii) the stepwise presentation of the feedback components in the dialogue-based form of feedback. Moreover, COMPASS gives learners the possibility to control the feedback presentation process by making the desired selections.

Synchronous and Asynchronous Communication Tools

In the framework of a collaborative activity, learners communicate in order to exchange their ideas and decide on their common answer. They communicate following a collaboration model, either having the same duties or undertaking specific roles. All collaboration/communication is carried out in written form through synchronous or asynchronous means. In the case of synchronous communication, learners use the ACT (Adaptive Communication Tool) (Gogoulou et al., 2005a), which aims to promote the cultivation of cognitive and communication skills and to guide learners appropriately during their communication. In particular, ACT:
(i) Adapts the communication with respect to the collaborative learning setting: ACT supports both the free and the structured form of dialogue; the structured dialogue is implemented either through sentence openers or communication acts. Depending on the learning outcomes addressed by the collaborative activity and the model of collaboration followed by the group members, the tool proposes the most suitable form of dialogue and type of scaffolding sentence templates (i.e. sentence openers or communication acts) and provides the most meaningful and complete set of scaffolding sentence templates, adapted with respect to the collaborative learning setting.
(ii) Enables learners to personalize the communication: the tool offers learners the possibility to control the adaptation by enabling them to negotiate on and select the form of dialogue (i.e. structured versus free dialogue) and the type of scaffolding sentence templates they prefer to use, and to enrich the provided set of scaffolding sentence templates with their own in order to cover their own "communication" needs.
(iii) Regulates the communication: ACT monitors and analyses the interaction at various levels, provides alternative and complementary representations of the interaction analysis results, and proposes remedial actions to guide learners (Gogoulou et al., 2005b).

In the case of asynchronous communication, learners use an asynchronous communication tool which supports the labeling of messages (e.g. a message may be a proposal, a question, a clarification) and the exchange of work.

Peer- and Collaborative-Assessment Environment

PECASSE (Peer- and Collaborative-ASSessment Environment) (available at http://hermes.di.uoa.gr:8080/pecasse) is a web-based environment that supports self-, peer- and collaborative-assessment (Gouli et al., 2006a). Learners may act as
• "authors", being able to submit an activity which has been elaborated either individually or collaboratively,
• "assessors", being responsible for evaluating (i) their own activity, in a brief way or according to specific criteria (self-assessment), and/or (ii) the activities submitted by their peers, on their own or in collaboration with other learners (peer-assessment) or in collaboration with other learners and the instructor (collaborative-assessment),
• "feedback evaluators", being able to evaluate the quality of the work/feedback provided by their assessors.

The assessment process may be carried out in at most three consecutive rounds. Each round involves the following steps: (i) activity submission and brief self-assessment, (ii) review of the assigned activities and provision of feedback, and (iii) collaboration of authors and assessors, evaluation of assessors and revision of the activity submitted in the first step. In the PECASSE environment, the review process may emphasize the grading of the activities and/or the provision of useful feedback. The provided review/feedback may be structured and recorded either in an assessment form or in an assessment letter.

Notebooks

The notebooks give learners the possibility to write down their ideas/comments, to characterize them and, if they wish, to publish their notes; a note may be characterized as general information, proposal/answer, question, clarification, reasoning, comment or guideline. In this way, the notebooks aim to serve learners' indirect collaboration, by enabling them to read and answer the published notes, and also to foster processes of reflection and cultivate metacognitive skills such as self-regulation and self-control.

SCALE supports two types of notebooks, at two different levels. At the level of the subject matter, learners have access to the Notebook of the Subject Matter, in which they maintain personal notes and access, reply to and comment on notes published by others concerning the specific subject matter and the concepts within it. At the activity level, learners have at their disposal the Notebook of the Activity, in which they can maintain personal notes and access published notes for the specific activity. This notebook acts as an asynchronous means of communication among learners in the context of individual activities, aiming to encourage the externalization of personal thoughts and argumentation on learners' beliefs.

Adaptation in SCALE

In SCALE, a navigation route through the provided activities and feedback is proposed, based on the learner's knowledge level and preferences, respectively. Learners' navigation is supported by using a graphical icon to point out the recommended activities and feedback components. Such personalization aims to support the learner in achieving the underlying learning goals following his/her own progress. The learner has the possibility to ignore the system's recommendations and follow his/her own navigation route.

The technology of adaptive link annotation is used in order to generate a sequence of activities and feedback components that gradually guide learners to accomplish specific activity-related learning outcomes and finally meet the underlying learning goal. In particular, SCALE plans the delivery of the activities for a particular learner (in the context of a learning goal) based on his/her progress, with respect to the educational function served by the activity and its difficulty level. For example, if there is an activity aiming to ascertain/assess students' prior knowledge, then it is the first one the environment recommends (see Figure 3). Once the learner completes such an activity, and his/her knowledge level is determined both quantitatively and qualitatively, the adaptation mechanism determines the next proposed activity in the sequence, with respect to the learner's knowledge level and the difficulty levels of the provided activities. This rule is bypassed if there is an activity that has been defined as proposed by the teacher. The last proposed activity within a learning goal is the one (if any) that aims to draw conclusions about the degree of achievement of the expected learning outcomes (i.e. summative assessment).

For the delivery of the supported tutoring feedback components, SCALE takes into account the learner's preferences and the delivery sequence defined by the teacher. More specifically, the adaptation mechanism first checks for feedback components compatible with the learner's preferences (i.e. whether the types of feedback that the learner prefers coincide with the types of the available feedback). For a specific feedback type, the sequence of the proposed feedback components is determined by the delivery sequence proposed by the teacher (e.g. if three examples are available, they are proposed in the teacher's defined order). Once the learner's preferences have been fulfilled, the remaining feedback components are recommended according to the delivery sequence for the remaining available feedback types (e.g. first the definition, then the examples and third the correct answer).
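A plausible reading of this ordering rule in code (our reconstruction for illustration, not SCALE's actual implementation) is a sort on three keys: preferred types first, then the teacher's ordering of types, then the teacher's sequence within each type:

    def order_feedback(components, preferred_types, teacher_type_order):
        # Recommend preferred-type components first; within a type, keep the
        # teacher-defined delivery sequence; append the remaining types afterwards.
        def key(c):
            preferred_rank = 0 if c["type"] in preferred_types else 1
            return (preferred_rank, teacher_type_order.index(c["type"]), c["sequence"])
        return sorted(components, key=key)

    # Hypothetical feedback components for one sub-activity:
    components = [
        {"type": "example", "sequence": 2},
        {"type": "definition", "sequence": 1},
        {"type": "example", "sequence": 1},
        {"type": "correct_answer", "sequence": 1},
    ]
    ordered = order_feedback(components, {"example"},
                             ["definition", "example", "correct_answer"])
    # -> both examples first (in the teacher's sequence), then the definition,
    #    then the correct answer.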

As it is considered essential to allow learners to play an active role and take control over their own learning in order to meet their needs and preferences, SCALE gives learners the possibility to have control over the activities and feedback components presented, by selecting the preferred activity to work out as well as the desired feedback component.

Working with SCALE

Based on the learning goal that the learner selects (i.e. the learning goal corresponds to fundamental concepts of the underlying subject matter), SCALE provides various activities. Figure 3 shows the main screen of the SCALE environment with information on the Subject Matter that the learner has chosen. More specifically, the Subject Matter "Informatics for Secondary Education" consists of two learning goals: learning goal A (Computer Architecture) and learning goal B (Internet). Learning goal A includes two activities, A1 and A2. Both activities are based on the concept mapping approach (i.e. didactical approach), are individual (i.e. type of activity), are assessed automatically by the system, and have not yet been submitted by the learner under consideration (i.e. status). Learning goal B includes five activities. Activities B1 and B2 are individual, are assessed automatically by the system, include one sub-activity consisting of various questions, and have not yet been worked out by the learner; activity B3 is individual, includes two sub-activities also consisting of questions, is assessed by the teacher and the system (i.e. part(s) of the questions are automatically assessed by the system while other parts need to be assessed by the teacher), and has not yet been worked out; activities B4 and B5 are collaborative, include only one sub-activity, assessed by the teacher, and have not yet been worked out. According to the adaptation framework, activity B1 is the one proposed to the learner (i.e. it is denoted by an icon), as it is an activity aiming to ascertain the learner's prior knowledge on the specific learning goal.

Once the learner selects an activity to work on, the corresponding sub-activities are presented. Figure 4 presents one of the sub-activities of activity B3 (Figure 3). The difficulty level of the specific sub-activity is 2 (out of 5); it is individual, and it consists of a question asking the learner to answer a multiple-choice question and give reasons for his/her answer. The answer given to the multiple-choice question is assessed automatically, while the reasoning has to be assessed by the teacher. While working on the activity, the learner may have access to the Learner Model, the Educational Tools required for the elaboration of the activity, the Notebook of the activity (in order to record personal notes or to "communicate" with other learners) and the Activity Indicators. Support is provided to the learner through the Learner Assistant, which presents the feedback available at the activity level. Once the learner submits his/her answer to the sub-activity, the feedback available at the sub-activity level becomes accessible (i.e. an icon similar to the Learner Assistant icon appears in the corresponding Feedback column of Figure 4). Figure 5 presents the available feedback for the sub-activity depicted in Figure 4. Three types of feedback components are provided: an instruction/hint, a case study and a similar problem. The feedback components are proposed according to the learner's preferences and the sequence defined by the teacher. In this instance, the learner has ignored the system's recommended feedback component (i.e. the case study) and has selected the first feedback component, which provides an instruction/hint.

Figure 3. A screen shot of the SCALE environment showing two learning goals for the Subject Matter "Informatics for Secondary Education"

Figure 4. A screen shot of SCALE showing a sub-activity of the activity B3 presented in Figure 3

Figure 5. A screen shot of the feedback window showing the available feedback components for the sub-activity of Figure 4

Formative Evaluation

In the context of the formative evaluation of the environment, three empirical studies were conducted at the Department of Informatics and Telecommunications of the University of Athens:
• The 1st empirical study was conducted during the spring semester of the academic year 2004-2005 in the context of the postgraduate course "Distance Education and Learning". The study focused on usability issues of the interface and on the provided facilities and tools.
• The 2nd empirical study was conducted during the winter semester of the academic year 2005-2006 in the context of the undergraduate course "Didactics of Informatics". The study focused on usability issues regarding the PECASSE environment and on students' attitude towards the peer-assessment process.
• The 3rd empirical study was conducted during the spring semester of the academic year 2005-2006 in the context of the undergraduate course "Informatics in Education" and the postgraduate course "Distance Education and Learning". The study focused on the structure and presentation of the activities, the provision of feedback and the adaptive capabilities of the environment.

All three studies were qualitative, aiming to elicit students' points of view on the various functionalities supported by SCALE and on the tools embedded or integrated in the environment.

Process

1st study: The 1st study was carried out through questionnaires including closed and open questions, asking students to comment on and justify their point of view. Thirty-eight students participated in the study, coming from a range of backgrounds and having different expertise in the use of web-based learning environments. The study took place in the main laboratory of the department and lasted 4 hours. Each student worked on his/her own computer on different scenarios. In particular, the working sheet was activity-oriented, aiming to involve students in the different functions supported by SCALE: the purpose of the first and the second scenario was to enable students to explore the presentation/structure of the activities and the way of working out an activity; the third scenario focused on the elaboration of a collaborative activity using the ACT tool; the fourth scenario attempted to investigate the usefulness of the facilities provided by the COMPASS environment and thus engaged students in a concept mapping task.



2nd study: Thirty-five students participated in the study, which lasted nine weeks in total. The students had to work on a self- and peer-assessment activity provided in SCALE. Learners were asked to design a lesson plan for a specific topic (half of them worked on the topic "Internet and search engines" and the rest worked on the topic "The concept of variable in programming"). The accomplishment of the activity was supported by the PECASSE environment. Initially, students submitted their work and self-assessed and marked it. Subsequently, they were assigned two activities to assess: one addressing the same topic as their own and the second one addressing the alternative topic. The review process was carried out through an assessment form. As a last step, the students received two anonymous reviews of their activity and evaluated their assessors. Upon completion of the whole activity, students were asked to fill in and submit a questionnaire concerning the evaluation of the PECASSE environment. The students were also asked to comment on the interface of SCALE and on the capability of the environment to support both the learning and assessment processes and to provide various tools serving this purpose.

3rd study: In the framework of the undergraduate course "Informatics in Education" and the postgraduate course "Distance Education and Learning", eighteen students were asked to act as designers developing educational material for SCALE. In particular, the students had to explore the environment (for which an indicative set of activities had been developed) and to design and develop material, following the principles of the environment, for (i) the topic "Looping constructs in programming" for secondary education (the undergraduate students) and (ii) the main concepts (e.g. open education, distance education, the role of the teacher) of the "Distance Education and Learning" course (the postgraduate students). The students had to submit a set of individual and collaborative activities accompanied by appropriate feedback components. They also had to comment on the SCALE environment regarding the structure and presentation of the activities, the capability of providing alternative feedback types and the adaptation of the environment.

Results

The three empirical studies revealed positive and interesting results, which are presented below in terms of the issues investigated.

Support of learning and assessment

The students who participated in the three studies found the capability of the environment to support both learning and assessment interesting. The variety of activities/sub-activities that students may work on, as well as their active participation in the learning and assessment process, was rated highly by most students. They rated positively the capability of automatic assessment and the provision of immediate informative feedback (i.e. knowledge of the correctness/incorrectness of their response). It is worth mentioning that, in their opinion, the SCALE environment can effectively support the instruction process in higher education.

Despite the positive comments, the students of the 3rd study, acting as designers of educational material for the environment, had difficulties in organizing/structuring their activities according to the design principles of SCALE. In particular, they found it hard to decompose an activity into a logical structure of sub-activities and to specify characteristics such as difficulty level and outcome level. Their difficulty can mainly be attributed to their inexperience in acting as authors of educational material.

Provision of feedback

As mentioned above, the students considered the provision of informative feedback essential. Furthermore, they found the provision of different types of feedback components (e.g. examples, case studies, hints) very useful. The students of the 3rd study, especially, fully explored the alternative feedback components that were available and tried to design feedback material covering all types. They also found the structuring and provision of feedback at activity and sub-activity level useful and quite instructive. However, some students claimed that they should have access to the feedback of the sub-activity level while working on the specific sub-activity; in the current version of the system, the feedback at sub-activity level becomes accessible only once they submit their answer for the first time.

Support of adaptation

The students of the 3rd study took advantage of the capability of the environment to adapt the delivery sequence of the available feedback components to their preferences, as kept in the learner model. They asserted that the use of an indicative icon to point out the recommended feedback component facilitates the learner's navigation while simultaneously enabling the learner to retain control of the navigation route and select the desired component. The adaptivity supported for the recommendation of the most appropriate activity with respect to the student's progress needs to be investigated in a future study, as it was not included in the studies presented here.

Presentation/Structure/Accessibility of the activities & sub-activities

Most of the students who participated in the three studies expressed their satisfaction with the organization/structure of the activities. They also marked the characteristics presented for the activities and sub-activities (Figures 3 and 4) as adequate, and believed that these characteristics convey a reasonable amount of information and facilitate their interaction. Students' suggestions concerned the difficulty level of the sub-activities; most of them asked for more details on this characteristic. Regarding the way of accessing and working on the sub-activities, the students who participated in the 1st study reported that they should have access to any sub-activity included in an activity instead of following the sequential order imposed by the environment; that version of the system restricted students to a sequential order, while the current version enables students to select and work on the sub-activities in their preferred order. The students of the 2nd and the 3rd study commented positively on this change, mentioning that it is quite helpful to have a look at the content of the sub-activities and subsequently decide which one to work on. Moreover, the possibility of submitting their answer as many times as they wish was rated highly by most students; the teacher is responsible for specifying, for every activity, the maximum number of times that students are allowed to access the activity and re-submit their answer.

Facilities and tools supported

Notebooks: Most students (85%) of the 1st study who used the notebooks in a systematic way believed that this facility can help in the elaboration of the activities, as they have the chance to "collaborate", exchange ideas, ask questions, externalize their thoughts and share their expertise. Most of them appreciated the participation of the teacher in the notebook of the Subject Matter; during the 1st study, the teacher asked students to express their point of view on the variety of activities provided, using the corresponding notebook of the Subject Matter, and she kept track of the students' notes and participated in the "conversation". Despite the students' positive attitude towards this facility, many of them (62%) rated the way of working with notebooks as moderate, as they considered the space provided for writing a note and for presenting the list of submitted notes limited (these comments were taken into account in the development of the current version of the environment).

ACT tool: In the context of the 1st study, the students worked on a collaborative activity. For the elaboration of this activity, the students had to use the ACT tool in order to collaborate and communicate with their partner synchronously. All of the students asserted that they had no difficulties in accessing ACT. As far as the evaluation of the ACT tool is concerned, the analysis of the students' answers revealed that a considerable number of students (83%) characterized the way of working with the provided scaffolding sentence templates as easy. The majority of the students (83%) considered the capability of the ACT tool to group messages into sub-trees and to represent the dialogue in a visual graphical form (Dialogue Tree) very useful, because it enabled them to monitor the dialogue in an organized and comprehensible manner, to evaluate the collaboration process more easily and to make interventions in order to improve their participation. Only a small number of students (17%) mentioned that there was no need to consult the Dialogue Tree during the elaboration of the activities. As far as the adaptation of the scaffolding sentence templates is concerned, the majority of the students (80%) considered the provided type of scaffolding sentence templates (i.e. sentence openers) appropriate for the corresponding context of the activity, and most of them (66%) characterized the facility of enriching the predefined sets of sentence openers with their own as useful (50% of them took advantage of this facility during the elaboration of the activities).

COMPASS environment: COMPASS was used for the elaboration of a concept mapping task in the context of the 1st study. The students accessed the environment through SCALE and used it for the construction of a concept map. While working, they used facilities for the analysis of the map, the provision of feedback, and the quantitative and qualitative estimation of the learner's knowledge level. Most of the facilities were characterized as useful: 68% for the analysis of the map, 81% for the provision of feedback, 31% for the quantitative estimation of the learner's knowledge level and 56% for the diagnosis of students' false beliefs and incomplete understanding. A considerable number of students (69%) characterized the facility concerning the quantitative estimation of the learner's knowledge level as neutral, as they believed that the added value of such a tool is the provision of feedback, which helps learners to identify their weaknesses and errors and improve their concept maps.

PECASSE environment: The students of the 2nd study who were involved in the self- and peer-assessment process through the PECASSE environment believed that the peer-assessment process promotes and enhances learning, but the majority of them characterized it as time- and effort-consuming. A considerable number of students considered that PECASSE fulfils the aims of the peer-assessment process, facilitates the execution of its steps and contributes positively to the realization of the process in a useful/easy way. They found most of the provided facilities useful and usable, and they suggested improvements for the management and the completion of the assessment form. Also, 76% of the students believed that PECASSE can be incorporated effectively as an assessment tool in the instruction process, and about 60% of the students were willing to work out activities through PECASSE in the future. As far as the review process is concerned, a considerable number of students (89%) were satisfied and considered that the feedback they received was useful and helped them to revise their initial activity.

Conclusions and Outlook

The educational setting presented in this paper attempts to interweave individualized learning with collaborative learning as well as assessment. SCALE supports learning and assessment by (i) enabling learners to select the desired learning goal and the activities serving this goal, (ii) providing multiple informative and tutoring feedback components both at the activity and the sub-activity level, (iii) supporting various tools, which facilitate the elaboration of the activities and support the learner's synchronous and asynchronous communication/collaboration and the processes of reflection and self-regulation, and (iv) serving various forms of assessment, such as the automatic assessment of the activities and self-, peer- and collaborative-assessment. Moreover, SCALE supports the individual learner in achieving the underlying learning goals by proposing a navigation route through the provided activities and feedback, based on the learner's knowledge level and preferences respectively. So far, the results of the three studies that were carried out revealed that the provided facilities and tools can facilitate and support learning and assessment, and that they were rated highly by most students. However, the use of the SCALE learning setting under real classroom conditions over long periods of time is considered necessary.

Acknowledgments

The above research work was partly co-funded by the European Social Fund and National Resources (EPEAEK II) - ARXIMHDHS.



Roberts, T. S., & McInnerney, J. M. (2007). Seven Problems of Online Group Learning (and Their Solutions). Educational Technology & Society, 10 (4), 257-268.

Seven Problems of Online Group Learning (and Their Solutions)

Tim S. Roberts and Joanne M. McInnerney
Faculty of Business and Informatics, Central Queensland University, Australia // t.roberts@cqu.edu.au // cowlrick@optusnet.com.au

ABSTRACT

The benefits of online collaborative learning, sometimes referred to as CSCL (computer-supported collaborative learning), are compelling, but many instructors are loath to experiment with non-conventional methods of teaching and learning because of the perceived problems. This paper reviews the existing literature to present the seven most commonly reported such problems of online group learning, as identified by both researchers and practitioners, and offers practical solutions to each, in the hope that educators may be encouraged to "take the risk".

Keywords<br />

Online collaborative learning, CSCL, Group learning, Group work, Free riders<br />

Introduction

The importance and relevance of social interaction to an effective learning process has been stressed by many theorists, from Vygotsky (1978), through advocates of situated learning such as Lave and Wenger (1991), to many other recent researchers and practitioners. Indeed, the academic, social, and psychological benefits of group learning in a face-to-face environment are well documented (see, for example, Johnson & Johnson, 1977, 1984; Slavin, 1987; Tinzmann et al., 1990; Bonwell & Eison, 1991; Felder & Brent, 1994; Panitz & Panitz, 1998; Burdett, 2003; Graham & Misanchuk, 2004; Roberts, 2004, 2005).

Online group learning, sometimes referred to as computer-supported collaborative learning (CSCL), can, if implemented appropriately, provide an ideal environment in which interaction among students plays a central role in the learning process (see, for example, Koschmann, 1999, 2001; Lipponen, 2002; Lipponen et al., 2002). Why then is online group learning not more widely practiced, particularly within higher education? There are a variety of possible reasons that could be supported with some justification. Certainly one reason that would be prominent in any list is educators' fear of veering away from the well-established "sage on the stage" mentality (characterised by the traditional lecture / seminar / tutorial format, with notes and other resources provided on the Web) to the increasingly common "guide on the side" mentality (characterised by various forms of group and peer learning). These fears can, however, be readily allayed by prior knowledge of the problems likely to be encountered, and of the appropriate solutions that can be applied.

The Seven Problems<br />

Amongst the problems that are thought to be inherent to this method of teaching, the seven most commonly found in<br />

the literature are the following:<br />

Problem #1: student antipathy towards group work<br />

Problem #2: the selection of the groups<br />

Problem #3: a lack of essential group-work skills<br />

Problem #4: the free-rider<br />

Problem #5: possible inequalities of student abilities<br />

Problem #6: the withdrawal of group members, and<br />

Problem #7: the assessment of individuals within the groups.<br />

Many of these problems of online group learning are inter-related. For example, student antipathy (#1) may lead to<br />

free-riders within groups (#4), and even the withdrawal of some group members (#6), and this in turn may cause<br />

problems for the assessment of individuals within the groups (#7). Indeed, it will be seen that problem #7 in<br />

particular is central. Nevertheless, it is perhaps advantageous to examine each of these problems independently,<br />



making reference to the others where appropriate. All of the listed problems are relatively easy to overcome, and the<br />

possible solutions undemanding when measured in terms of time and resources.<br />

Problem #1: student antipathy towards group work

Some students do not care for the idea of group work, and can be apathetic or even on occasion actively hostile towards it. Why should this be so? Given that it is relatively commonplace for students to congregate voluntarily in groups to discuss assignment problems and solutions outside of class times, such antipathy may seem especially surprising. Commonly expressed student views against involvement in group work include:

• I study best on my own
• I have no need to work in a group
• I can't spare the time to meet and communicate with others
• Others in the group are less capable

Although some students may be genuinely concerned, experience shows that if their initial antipathy can be overcome, many will come to appreciate the advantages that a group learning environment can provide. So how can the antipathy best be overcome?

Solution #1.1: Tell the students the benefits!<br />

Among the potential benefits which educators should stress to students are the social, psychological, and learning<br />

benefits, the much greater chance of being received appreciatively by potential employers, and the fact that much of<br />

their future careers will almost certainly involve working in groups with a diverse range of people who will have a<br />

wide variety of skills and abilities. Experience of working with others of differing backgrounds and capabilities is<br />

therefore likely to be highly beneficial.<br />

The Teaching & Assessment Network (1999: p5) members highlighted the point that<br />

‘…. departments needed to communicate the purpose of group activity so that students understand the<br />

associated benefits (and limitations). Such explicitness was seen as an important stage in helping<br />

students better understand and develop their group work skills.’<br />

A minimal list of generic skills honed by the use of group work would include the abilities to cooperate with others;<br />

to communicate effectively; to lead, and work effectively in, teams; and to organise, delegate, and negotiate.<br />

Students should also be made aware that any number of surveys point to employers regarding generic skills (perhaps<br />

most particularly the ability to work effectively in teams) as being of prime importance when selecting graduates.<br />

Indeed, in many areas, such skills are seen as being equally as important as content knowledge.<br />

Levin (2002a: p5) indicated that, when it came time for them to partake in real-world employment, students involved in group learning would have developed the skills of:

• developing rapport with others
• negotiating a framework for working with others
• generating and sustaining motivation and commitment to working together
• standing back from the hurly-burly of teamwork and making sense of what is going on in one's team
• coping with stressful situations that arise
• evaluating the working of one's team
• recognising and making the most of individuals' dispositions to prefer particular team roles
• building up one's teamwork expertise.

But the development of generic skills is not the only benefit of group learning. Research shows that many students, particularly weaker students and those from some minority groups, often learn more effectively in such an environment. And, in an online setting, the use of group work can greatly reduce the feeling of isolation experienced by many students, even the most successful ones.

Most students are naturally unaware of the benefits of group learning. They may find out for themselves during the learning process, but initially at least, they need to be told!

Solution #1.2: Make the assessment criteria explicit.

Students are naturally wary of any system which might judge them based on the merits (or shortcomings) of others. It is not sufficient to have a fair assessment system in place: it must, in addition, be made absolutely explicit, right from the outset of the course, so that students are fully aware of the basis upon which their individual mark will be determined. In an online course, it is usually convenient to have this information available on a prominent web page, though it could also be communicated via discussion lists, or by individual email.

The exact nature of how students can be assessed in an online group setting is dealt with in detail under problem #7. The important point is that when appropriate assessment criteria have been decided, they should be justified, and both the criteria themselves and their justification should be made available to all students. Once students are assured that group work is beneficial, and that they will be judged according to their individual efforts, much of the initial antipathy will dissipate.

Problem #2: the selection of the groups<br />

Selection of groups tends to be easier in an online environment. Two problems common in the face-to-face<br />

environment are either non-existent, or greatly reduced: the tendency for students to want to be in a group with<br />

friends (and to feel aggrieved if they are not), and the difficulty of arranging suitable times when all group members<br />

can meet outside of scheduled sessions.<br />

How large should each group be? There is no standard answer here that fits all circumstances. Johnson and Johnson<br />

(1987) and Kagan (1998) suggest that teams of four work well in a face-to-face setting, while Bean (1996, p160)<br />

suggests groups of five or six work best. However, arguments based on group dynamics are less applicable in an<br />

asynchronous online environment, where both small and large groups can work well, depending upon the context,<br />

and the size and complexity of the group task.<br />

How should the membership of each group be determined? There are several solutions here, but letting students choose their own groups is not usually one of them: that method presents too many complications in the online environment.

Solution #2.1: Select at random.

Since the students are online, and can, presumably, communicate via email, problems of geographic location shrink into unimportance, and, if communication is asynchronous, so too does the problem of arranging meeting times. Group membership based on a random selection is therefore likely to be prone to far fewer difficulties than would be the case in a face-to-face environment. And a pseudo-random selection (free from even unconscious instructor influence), based on a digit in the student number, or the month of birth, or some other criterion, is easy to implement; a minimal sketch is shown below.
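As an illustration (the student-number format and group count are invented assumptions, not taken from the paper):

```python
# A minimal sketch of pseudo-random group formation, free from instructor
# influence: the group is derived from a stable attribute of the student.
# The student-number format here is invented for illustration.

def assign_group(student_number, num_groups):
    """Deterministically map a student to a group in 0..num_groups-1
    using the last two digits of the student number."""
    return int(student_number[-2:]) % num_groups

students = ["S2004113", "S2004127", "S2004138", "S2004142"]
print({s: assign_group(s, num_groups=2) for s in students})
# -> {'S2004113': 1, 'S2004127': 1, 'S2004138': 0, 'S2004142': 0}
```

Because the mapping is deterministic, the same assignment can be reproduced (and audited) at any time, which is one practical advantage over an ad-hoc draw.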

The down side to such a solution is obvious: as Burdett (2003: p178) comments:<br />

‘…it is likely that groups will be formed with little consideration given to personality, life experience,<br />

ability or aptitude, so that a successful mixture of individuals is more likely to be achieved by happy<br />

accident rather than design.’<br />

In many cases, however, a random selection may suffice, and may indeed prove to be as effective as some more<br />

contrived method.<br />



Solution 2.2: Deliberately select heterogeneous groups.<br />

Left to select on their own, students will naturally choose to be with friends, who, more likely than not, will be from<br />

similar backgrounds. It can be very beneficial for the instructor choosing groups to attempt to do exactly the<br />

opposite – that is, to mix students from a range of ages, genders, and cultural backgrounds. This can improve a range<br />

of generic skills, including the ability to communicate effectively, to understand others’ points of view, and to be<br />

understanding of other cultures and backgrounds.<br />

There is some evidence that heterogeneous groups can be advantageous, because of the different perspectives<br />

brought to the group (Kagan, 1997; Johnson & Johnson, 1989). Thus, where possible, it would seem advisable to<br />

consider each student’s previous academic background and work experience as important factors. In fact, the<br />

authors’ own experiences would suggest that it is often considerably easier to form diverse groups when the majority<br />

of students are studying online, rather than face-to-face.<br />

Problem #3: a lack of essential group-work skills<br />

‘Simply placing students in groups and telling them to work together does not in and of itself result in cooperative<br />

efforts. There are many ways in which group efforts can go wrong.’ So say Johnson and Johnson (1994: p57), and<br />

of course they are quite right. Educators need ‘to foster students’ group skills over time, building more complex<br />

group activities as students become more familiar with the group context.’ (Teaching & Assessment Network, 1999:<br />

p5). Burdett (2003: p179) stresses the point that: ‘group work can be hard work emotionally and intellectually; and<br />

that this fact is sometimes overlooked by group work advocates and practitioners’.<br />

Where students have not previously been introduced to group work and lack the necessary skills, any instructor who uses group work as a major component, and does not prepare the students appropriately, is almost inevitably condemning the students to a traumatic and probably unproductive experience. This is certainly one of the major reasons why some instructors choose to revert to more traditional methods.

Solution #3.1: introduce new courses.<br />

Potentially the most powerful way to ensure that students are both enthusiastic and appropriately prepared for group<br />

learning is the introduction of a core course to cover the requisite skills. This has the dual effect of instilling in<br />

students the idea that the university regards such skills as of significant importance, and of providing a sound<br />

preparation for future courses. Of all of the solutions presented in this paper, this is the only one that may lie outside<br />

the purview of the instructor, since it requires a change to the overall program, rather than to the individual course.<br />

As such, it may only be possible to implement this solution with the agreement of program administrators and other<br />

educators.<br />

The Teaching & Assessment Network (1999: p4) states the ‘…need to ease the introduction of group work into an<br />

otherwise “traditional” degree framework, fit group work into the semester system and to train academic staff in<br />

their understanding of group skills and theory'. The importance of these three aspects cannot be overstated – in

particular, the need to provide not only the students, but also the instructors, with the necessary group work skills.<br />

Ideally, a short course for staff should be run on a regular basis, and an introductory core course for students, in the<br />

first year of the program, should be introduced. Both staff and student courses should cover key generic skills, of<br />

which group work, team-building, and effective communication would form an integral part (other parts of the<br />

course could cover topics such as computer literacy, presentation skills, email etiquette, proper referencing, etc).<br />

This solution enables any courses within the program to utilise group learning, confident that the key requirements<br />

and skills will already be familiar to the students prior to the commencement of the course. Similarly, students will<br />

be aware that group work may form part of the standard learning process, and are likely to approach such courses<br />

with far less trepidation than might otherwise be the case.<br />



Solution #3.2: cover the skills required at the beginning of the course.<br />

In cases where students participate in group work without any prior formal training in group skills, a minimum of<br />

two weeks at the start of the course should be devoted entirely to the core advantages and benefits of group learning,<br />

and the skills required. This may seem difficult – perhaps impossible – to many educators convinced that they have<br />

to “cover the content”, and that they “cannot afford to waste two weeks”. However, preparation of students in this<br />

way is an essential prerequisite for successful group work, and “covering the content” somehow assumes less<br />

importance when one steps out of the more usual lecture / seminar / tutorial mode.<br />

Amongst the skills that should be stressed in these sessions are group facilitation, effective online communication,<br />

‘netiquette’, and responsibilities to other group members. Excellent discussions of the processes involved in a<br />

successful group formation phase can be found in Kagan (1997) and Daradoumis and Xhafa (2005).<br />

Problem #4: the free-rider

The free-rider effect (Kerr and Bruun, 1983) is probably the most commonly cited disadvantage of group work; it occurs when one or more students in the group do little or no work, thereby contributing almost nothing to the well-being of the group, and consequently decreasing the group's ability to perform to its potential. In many cases, this may multiply into additional unwanted effects: first, of gaining unwarranted marks for the free-rider; second, of damaging the morale of the other members of the group; and third, of lowering the reputation of the educator and the institution for fair dealing and justice in assessment.

An example of this attitude is illustrated by Burdett (2003: p178), quoting a University of South Australia graduate<br />

student:<br />

‘I acknowledge the reasons for including group work as a component of a university course; however<br />

due to the nature of groups, it usually falls to one or two individuals to do the bulk of the work. As a<br />

student motivated to achieve the best results of which I am capable, I find it frustrating that not only<br />

do other students get a free ride so to speak, but that through being forced to work in groups, the task<br />

becomes more difficult than it would have been if done alone.’ (University of South Australia, 2001)<br />

Levin (2002b: p3) states that the educator 'may be the last person to know that there are students who consider that there is a free-rider' hiding in their group. Students are 'likely to feel that the issue is one that they should deal with themselves, and … be reluctant to tell tales on a fellow student'. It is appropriate, then, that educators provide an environment in which students involved in group work indicate the responsibilities they will be undertaking within the group, as a means of maintaining the integrity of the group and as a way to lessen the free-rider effect.

Solution #4.1: use pressure from the instructor.<br />

The instructor should make potential free-riders within the group aware that they will lose marks – and indeed run<br />

the risk of failing the course – if they do not contribute. It is not sufficient – nor indeed fair – to impose rulings to<br />

this effect only at the end. It is essential that assessment rules be made explicit prior to the commencement of all<br />

group work.<br />

Unfortunately, if this is the only method used, constant monitoring by the instructor is essential. This can be time-consuming and stressful at best, particularly in an online environment, and other solutions are generally preferable.

Solution #4.2: use peer pressure openly and unashamedly.<br />

The best monitors of the quantity and quality of any single student’s contributions are the other students within the<br />

group. As such, students who seem to be free-riding should be encouraged by the other members of their group to<br />

‘pull their weight’. Such reminders should occur on a regular basis throughout the entire group learning process – not<br />



just near a deadline, when it may be too late. It is therefore a primary responsibility of the instructor to ensure that all<br />

group members are aware right from the start of their own responsibilities in this area.<br />

Many educators are already doing this to good effect. For example, Chin and Overton (2005: pp. 2-3) state in their primer that:

‘Depending on the assessment tools employed students can both receive and provide feedback to their<br />

peers. This process helps students gain a better appreciation of the skills being developed and how to<br />

work effectively as a group. For example, peer assessment of a presentation can improve student<br />

understanding if they have to assess their peers on the same criteria with which they will be assessed.’<br />

Such assessment does not have to include any element of marking. However, the best solutions do just that.<br />

Solution #4.3: employ a marking scheme that penalises free-riders<br />

The most appropriate and effective antidote to the free-rider problem is to ensure that all students are well<br />

acquainted, at the commencement of group work, with the marking scheme to be employed, and that such a scheme<br />

is clearly seen to penalise free-riders.<br />

The unfortunately common scheme of giving equal marks to all group members is not recommended, for such a scheme almost invariably invites free-riders to take advantage of the other members of the group. Why would the savvy and more industrious student spend time on this subject, when the work done by other students will earn the marks, and their time can be more profitably spent on other subjects? A marking scheme therefore needs to be employed that awards different marks to group members based upon their individual contributions.

One such method is to build into the assessment process an element of peer and/or self-assessment, thereby giving<br />

students the ability to demonstrate independent value within the group confines. Watkins and Daly (2003: p.9) have<br />

suggested a seemingly effective assessment method whereby the group participants are awarded bonus points by<br />

their peers, in the hope that this<br />

‘…may reduce social loafing through the ‘group evaluation’ effect and they may reduce free riding by<br />

enabling some equitable allocation of outcomes to individual input.’<br />

Relying as it does on differential individual assessment, this solution is covered more fully in problem #7.<br />

Problem #5: possible inequalities of student abilities<br />

Many researchers have expressed the hope that the online environment would, of itself, produce more equal levels of<br />

group participation than might be expected in a face-to-face environment (Harasim, 1993; Harasim et al, 1995;<br />

Sproull & Kiesler, 1991). This hope has not always been well-founded, however, with certain individuals or groups<br />

often dominating discussion (eg Herring, 1993).<br />

Winkworth and Maloney (2002) state that a fundamental dilemma in groups can be the need to temper the individual<br />

students’ needs with those of all the students in the group. In successful groups, individual students may need to<br />

sacrifice some aspects of their individuality for the benefits of learning in a group (Roschelle & Teasley, 1995). One<br />

of the supervisor's tasks is to monitor the groups to try to ensure that the strengths (and not the weaknesses) of<br />

individual students’ abilities are activated, while trying to ensure the success of the group as a whole.<br />

There is always the possibility that the most able student(s) within a group may fall victim to what has become<br />

known as the sucker effect (Kerr, 1983), which in many ways may be the reverse of the free-rider effect. The sucker<br />

in the group is the student who is perceived by other members of the group to be the most capable, and is therefore<br />

left to carry the bulk of the workload.<br />



It should be noted at the outset that the sucker effect is not all bad. It can result in weaker students learning more<br />

effectively, and perhaps going on to be suckers themselves in later groups. The situation that must be guarded against<br />

is one where the sucker does all the work, and is not rewarded appropriately for it. Also, of course, it is possible that<br />

the other students within the group rely so much on the sucker to do the work that they fail to learn anything at all.<br />

In certain circumstances, the free-rider and sucker effects can feed on each other. Ruël et al. (2003: p.3) aptly stated that '…due to a feeling of being exploited by free-riders, one also reduces one's own effort, because he or she does not want to be seen as a sucker who does all the work for his or her co-students', and noted that there are several conditions which will create the sucker effect, including '…the type of task to be performed, the number of students within a team (group size), the type of performance and reward (on an individual or a group basis), the identifiability of the individual contribution and certain group characteristics.' (Ruël et al., 2003: p.3)

The good news is that the sucker effect is fairly easy to overcome.<br />

Solution #5.1: identify potential “suckers” in advance.<br />

It may be possible to identify potential “suckers” in advance – they will generally have proved themselves via past<br />

work to be very able students. If so, some confidential correspondence between instructor and “sucker” may be all<br />

that is required, stressing the benefits of group work, and that perhaps all within the group may benefit from<br />

participation, rather than being supplied with finished work by the “sucker”.<br />

Solution #5.2: employ an appropriate reward scheme.<br />

Just as there should be potential penalties for the free-rider, so there should be potential rewards for the sucker. If the rewards are sufficiently high, for example better marks, every group member will want to be a sucker, and the group may then out-perform expectations. If, despite best efforts, a clear "sucker" still emerges, the potential rewards should be arranged so as to be commensurate with effort (eg Webb, 1994: p13).

For example, Watkins and Daly (2003: pp. 10-12) discuss the use of bonus points for group members that combats both the free-rider effect and the sucker effect. Perhaps the fairest and most easily defensible method is to use an appropriate combination of self, peer, and group assessment techniques, where assigned marks are correlated with individual efforts, as judged by the members of the group themselves. The sucker will likely not object to being a sucker if he or she is adequately recognised by other members of the group and their efforts appropriately rewarded, and the other members are more often than not happy with such an outcome. Successful methods such as these are described in detail by many chapter authors in (Roberts, 2005).

Solution #5.3: use subgroups within groups where feasible<br />

Apart from suggesting the use of bonus points, Watkins and Daly (2003: pp. 10-12) also put forward the idea of using small subgroups as a method of creating a more equitable working situation. Such groups-within-groups generally

make the free-rider work harder, and ensure that the sucker does not have to carry the entire group. Alternatively,<br />

instructors may find that it is easier to begin with a smaller group size; this small size will not automatically mean<br />

that there will be no free-rider or sucker effect, but it does mean it is easier to observe and intervene if necessary<br />

when it occurs.<br />

Problem #6: the withdrawal of group members<br />

Courses conducted online or by distance education notoriously suffer from higher than average attrition rates (eg<br />

Simonson, 2000), often because of feelings of isolation (Hara and Kling, 2000). In a more conventional learning<br />

environment, one where group work is not being used, the withdrawal of a student normally has little or no direct<br />

effect on the work or grade of other students. Of course, this may not be strictly true, since the student may be in an<br />

informal study group, or have developed a friendship with other students, etc; but students learn and are officially<br />



assessed on an individual basis. With group work, it is common for those students who remain in the group to feel<br />

disadvantaged if one or more of their members officially withdraws, or disappears from the group for whatever<br />

reason.<br />

Although probably the least-cited of the seven problems listed in the paper, this has the potential to be the most<br />

serious, since the student concerned may have been assigned some component vital to the success of the group as a<br />

whole. While the withdrawn student can be awarded zero, what should happen to the other members of the group?<br />

Solution #6.1: take no action.<br />

In circumstances where the group member drops out very early in the course, or does not play a vital role within the<br />

group, it may be appropriate to take little action other than a minor reassignment of roles, which could be managed<br />

by the instructor, or by the group members themselves, or a combination of both. In some instances, constant<br />

monitoring of the group may be enough to alert others to the possibility of such an occurrence, which can then lead<br />

to early intervention before the matter becomes crucial.<br />

Solution #6.2: use a multiplier on the group work.

Despite effective monitoring, it often happens that a student withdraws at a vital stage of a group's work, and sometimes this may be completely unanticipated. This can happen for any number of reasons, some quite beyond the control of the student concerned: personal circumstances change, accidents happen, and so on. How can the other students within the group be treated fairly in such cases? Taking contributions so far into account, it may be possible for the instructor to grade each member of the group in the normal way, and then apply a multiplier to make up for the disruption. The size of the multiplier will of course depend upon the particular circumstances, how late the withdrawal occurred, and the degree to which the missed contribution played a role in the outcome of the group; a small sketch follows.
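For instance (all figures are invented, and the multiplier value is purely a matter of instructor judgement):

```python
# Toy illustration of Solution #6.2: scale each remaining member's
# group-work mark by a multiplier chosen to offset the disruption,
# capped at the maximum obtainable mark. All figures are invented.

def adjusted_mark(raw_mark, multiplier, max_mark=100.0):
    return round(min(raw_mark * multiplier, max_mark), 1)

# A withdrawal close to the deadline might warrant, say, a 10% uplift:
print(adjusted_mark(raw_mark=62.0, multiplier=1.10))  # -> 68.2
```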

Solution #6.3: use a multiplier on the remaining course work.<br />

In the most problematic cases, the instructor may decide that the effect of the withdrawal is such that the group<br />

cannot be assessed. In such cases, a multiplier can be used on the remaining assessment tasks, such as other group<br />

tasks, or individual components such as the end-of-semester examination. The imposition of extra assignment items<br />

to “make up” for the missed group work is generally not a recommended course of action, since this can be viewed<br />

as an added imposition on students who have had to cope with circumstances beyond their control, and have<br />

otherwise fulfilled all of the necessary requirements.<br />

Problem #7: the assessment of individuals within the groups<br />

The traditional view of assessment has always been something along the following lines: assessment is about<br />

grading. One or more instructors assess the work of the students, with the primary – and perhaps sole – aim of

assigning fair and appropriate grades to each of the students at the end of the course. An alternative view, and one<br />

that has claimed a large number of adherents in recent years, is that assessment can and should play a vital part in the<br />

learning process itself (Bain, 2004; Roberts, 2005). No matter where one stands on this issue, however, at the end of<br />

the day, individual students must be assessed. How can this be done fairly if group work is used?<br />

Assigning group grades without attempting to distinguish between individual members of the group is both unfair<br />

and deleterious to the learning process, for many reasons which should be apparent from earlier discussion, and may<br />

in some circumstances even be illegal (Kagan, 1997; Millis and Cottell, 1998).<br />

Specifically talking about group work, Webb (1994) stated that the<br />

‘… purpose of assessment is to measure group productivity…’<br />



but then went on to stress that another purpose of assessment is

‘… to measure students’ ability to interact, work, and collaborate with others and to function<br />

effectively as members of a team. Team effectiveness involves many dynamic processes including, for<br />

example, coordination, communication, conflict resolution, decision-making, problem solving, and<br />

negotiation.’<br />

Several effective solutions may be employed to do exactly as Webb suggests, that is, to measure group productivity<br />

and to measure the individual students’ abilities within the group. Exactly which of the solutions is the most<br />

appropriate will depend upon the circumstances.<br />

Solution #7.1: use individual assessment.<br />

While the learning may take place in groups, it may still be appropriate to assess individually. For example, while<br />

skills may be built up by a series of group projects throughout the semester, the assessment of the student learning<br />

process may take place through individual tests or assignments placed throughout the semester, via an end-of-semester examination, or via a combination of these.

This may be a perfectly valid solution in many cases. The down side is that some students may see little value in<br />

participating in the groups, if such work is not directly assessed. It is therefore up to the instructor to ensure that the<br />

assessment items, while being individual, nevertheless test the learning that has occurred within a group setting.<br />

This may not be easy.<br />

Solution #7.2: assess individual contributions.

If the group work is to form the bulk of the assessment, the instructor may be in a position to assess the contributions of individual students throughout. This method may be employed in either the face-to-face or the online situation, but is perhaps most effective in the latter, since individual contributions can be stored and reflected upon before final grading takes place.

Instructors may in addition require students to record their own contributions and reflect upon them, in the form of a diary or journal, or perhaps in the form of a more structured portfolio, to be submitted to the instructor at the end of the course.

Solution #7.3: use self, peer, and group assessment techniques.

Self, peer, and group assessment techniques can be extremely beneficial for both students and instructors in all forms of online collaborative learning. Students who learn in groups are generally very aware of their own, and others’, relative contributions to the group. This knowledge can be usefully employed during assessment.

A number of strategies to determine individual grades based on peer reviews by other students are possible – see, for example, Li (2001). One technique is to have students within each group anonymously rate their fellow group members. For example, one scheme has the students log on at the end of each piece of group work and anonymously rate their fellow members on a scale ranging from -1 to +3. A rating of -1 indicates the group member was actually deleterious to the group (that is, the group would have performed better had the student not been in the group); 0 indicates that the group member’s contributions were negligible or non-existent; +1 indicates a below average contribution; +2 indicates an average contribution; and +3 indicates an above average contribution. The mark for the performance of the group as a whole is then divided appropriately among the group members, as in the sketch below. A number of different formulae can be used for this, depending upon the requirements of the instructor.
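As a concrete illustration, the sketch below distributes a group mark in proportion to the mean peer rating each member received on the -1 to +3 scale just described. The proportional-share formula, the function name, and the marks are illustrative assumptions only; any formula meeting the instructor’s requirements could be substituted.

    def individual_marks(group_mark, ratings):
        # Mean rating received by each member, floored at zero so that a
        # deleterious member (-1) cannot earn a negative share.
        means = {m: max(sum(r) / len(r), 0.0) for m, r in ratings.items()}
        total = sum(means.values())
        if total == 0:  # nobody contributed: split the mark evenly
            return {m: group_mark / len(ratings) for m in ratings}
        n = len(ratings)
        # Scale the group mark by each member's rating relative to the
        # group average, capping the result at 100.
        return {m: round(min(group_mark * means[m] * n / total, 100.0), 1)
                for m in means}

    print(individual_marks(80, {
        "ana":   [3, 3, 2],   # rated an above-average contributor
        "ben":   [2, 2, 2],   # rated an average contributor
        "carol": [0, 1, 0],   # rated negligible to below average
    }))
    # {'ana': 100.0, 'ben': 96.0, 'carol': 16.0}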

An alternative scheme uses a pie chart. Students are advised to divide up the pie according to their relative contributions to the group. Since this is done by all students anonymously and online, there is little fear of repercussions from aggrieved students. Experience has shown that this method generally works well, and is accepted – and even appreciated – by students. Once again, the instructor retains responsibility for final grades, but utilises the students’ recommendations when deciding how to reward individual contributions.
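A brief sketch of the pie chart scheme follows. The simple averaging of the anonymous slices is an illustrative assumption; in practice the instructor would moderate the averaged shares before translating them into marks.

    def average_shares(allocations):
        """Average the anonymous pie slices submitted by the group members.

        allocations -- one dict per student, mapping each member of the
                       group to that student's estimate of the member's
                       share of the work (each dict sums to 1.0).
        """
        members = allocations[0].keys()
        return {m: sum(a[m] for a in allocations) / len(allocations)
                for m in members}

    print(average_shares([
        {"ana": 0.50, "ben": 0.35, "carol": 0.15},   # ana's pie
        {"ana": 0.45, "ben": 0.40, "carol": 0.15},   # ben's pie
        {"ana": 0.40, "ben": 0.40, "carol": 0.20},   # carol's pie
    ]))
    # {'ana': 0.45, 'ben': 0.383..., 'carol': 0.166...}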

A variety of other techniques utilising self, peer, or group assessment in an online learning environment can be found in Roberts (2005). Sample forms that may be used as rubrics for self, peer, and group assessment can be found in Barkley et al. (2005).

Conclusion

This paper has attempted to help those considering the introduction of online group learning into their courses by listing seven of the most common problems and describing solutions for each. It may of course be argued that this is far from an exhaustive list, and that there are other potential problems of online group learning that have not been dealt with here. While the authors agree that this is undoubtedly true, they are also of the belief that the benefits of online group learning are compelling, and they hope the solutions presented here will be sufficient to encourage other educators to take the risks, discover the benefits for themselves, and report the results, so that others may be likewise enthused.

References

Bain, K. (2004). What the Best College Teachers Do, Cambridge, MA: Harvard University Press.

Barkley, E. F., Cross, K. P., & Major, C. H. (2005). Collaborative learning techniques, San Francisco: Jossey-Bass.

Bean, J. C. (1996). The professor’s guide to integrating writing, critical thinking, and active learning in the classroom, San Francisco: Jossey-Bass.

Bonwell, C., & Eison, J. (1991). Active Learning: Creating Excitement in the Classroom, retrieved October 15, 2007, from http://www.ntlf.com/html/lib/bib/91-9dig.htm.

Burdett, J. (2003). Making Groups Work: University Students’ Perceptions. International Education Journal, 4 (3), retrieved October 15, 2007, from http://ehlt.flinders.edu.au/education/iej/articles/v4n3/Burdett/paper.pdf.

Chin, P., & Overton, T. (2005). Assessing group work, retrieved October 15, 2007, from http://www.heacademy.ac.uk/assets/ps/documents/primers/primers/ps0083_assessing_group_work_mar_2005_1.pdf.

Daradoumis, T., & Xhafa, F. (2005). Problems and Opportunities of Learning Together in a Virtual Learning Environment. In Roberts, T. S. (Ed.), Computer-Supported Collaborative Learning in Higher Education, Hershey, PA: Idea Group Publishing, 218-233.

Felder, R. M., & Brent, R. (1994). Cooperative learning in technical courses: procedures, pitfalls, and payoffs, retrieved October 15, 2007, from http://www.ncsu.edu/felder-public/Papers/Coopreport.html.

Graham, C. R., & Misanchuk, M. (2004). Computer-Mediated Learning Groups: Benefits and Challenges to Using Groupwork in Online Learning Environments. In Roberts, T. (Ed.), Online Collaborative Learning: Theory and Practice, Hershey, PA: Information Science Publishing.

Hara, N., & Kling, R. (2000). Students’ distress with a web-based distance education course. Information, Communication and Society, 3 (4), 557-579.

Harasim, L. (1993). Collaborating in Cyberspace: Using computer conferences as a group learning environment. Interactive Learning Environments, 3 (2), 119-130.

Harasim, L., Hiltz, S. R., Teles, L., & Turoff, M. (1995). Learning Networks: A Field Guide to Teaching and Learning Online, Cambridge, MA: The MIT Press.
Herring, S. (1993). Gender and democracy in computer-mediated communication. Electronic Journal of Communication, 3 (2), 1-17.

Johnson, D. W., & Johnson, R. T. (1977). Learning Together and Alone: Cooperation, Competition and Individualization, Englewood Cliffs, NJ: Prentice-Hall.

Johnson, D. W., & Johnson, R. T. (1984). Circles of Learning, Washington, DC: Association for Supervision and Curriculum Development. As quoted by The Foundation Coalition (2005). Positive Interdependence, Individual Accountability, Promotive Interaction: Three Pillars of Cooperative Learning, retrieved October 15, 2007, from http://www.foundationcoalition.org/publications/brochures/acl_piiapi.pdf.

Johnson, D. W., & Johnson, R. T. (1987). Creative conflict, Edina, MN: Interaction Book Company.

Johnson, D. W., & Johnson, R. T. (1989). Cooperation and competition: Theory and research, Edina, MN: Interaction Book Company.

Johnson, D. W., & Johnson, R. (1994). Leading the cooperative school (2nd Ed.), Edina, MN: Interaction Book Company.

Kagan, S. (1997). Cooperative learning, San Juan Capistrano, CA: Kagan Cooperative Learning.

Kagan, S. (1998). Teams of four are magic! Cooperative Learning and College Teaching, 9 (1), 9.

Kerr, N. L. (1983). Motivation losses in small groups: A social dilemma analysis. Journal of Personality and Social Psychology, 45, 819-828.

Kerr, N. L., & Bruun, S. E. (1983). Dispensability of member effort and group motivation losses: free rider effects. Journal of Personality and Social Psychology, 44, 78-94.

Koschmann, T. D. (1999). Computer support for collaboration and learning. Journal of the Learning Sciences, 8, 495-498.

Koschmann, T. (2001). Revisiting the paradigms of instructional technology. In G. Kennedy, M. Keppell, C. McNaught & T. Petrovic (Eds.), Proceedings of the 18th Annual Conference of the Australian Society for Computers in Learning in Tertiary Education, Melbourne: The University of Melbourne, 15-22.

Lave, J., & Wenger, E. (1991). Situated Learning: Legitimate Peripheral Participation, Cambridge, UK: Cambridge University Press.

Levin, P. (2002a). Teamwork Tutoring: Helping Students Working On Group Projects To Develop Teamwork, retrieved October 15, 2007, from http://www.teamwork.ac.uk/MGS_teamwork_tutoring.PDF.

Levin, P. (2002b). Running group projects: dealing with the free-rider problem. Planet, 5, 7-8.

Li, L. (2001). Some Refinements on Peer Assessment of Group Projects. Assessment and Evaluation in Higher Education, 26 (1), 5-18.

Lipponen, L. (2002). Exploring foundations for computer-supported collaborative learning. In G. Stahl (Ed.), Proceedings of the Computer-Supported Collaborative Learning 2002 Conference, Hillsdale, NJ: Erlbaum, 72-81.

Lipponen, L., Rahikainen, M., Hakkarainen, K., & Palonen, T. (2002). Effective participation and discourse through a computer network: Investigating elementary students’ computer-supported interaction. Journal of Educational Computing Research, 27, 353-382.

Millis, B. J., & Cottell, P. G., Jr. (1998). Cooperative learning for higher education faculty, Phoenix, AZ: The Oryx Press.
Panitz, T., & Panitz, P. (1998). Encouraging the Use of Collaborative Teaching In Higher Education, retrieved October 15, 2007, from http://home.capecod.net/~tpanitz/tedsarticles/encouragingcl.htm.

Roberts, T. S. (2004). Online Collaborative Learning in Higher Education, Hershey, PA: Information Science Publishing.

Roberts, T. S. (2005). Self, Peer, and Group Assessment in E-Learning, Hershey, PA: Idea Group Publishing.

Roschelle, J., & Teasley, S. (1995). The construction of shared knowledge in collaborative problem solving. In O’Malley, C. E. (Ed.), Computer Supported Collaborative Learning, Heidelberg: Springer, 69-97.

Ruël, G. C., Bastiaans, N., & Nauta, A. (2003). Free-riding and team performance in project education, retrieved October 15, 2007, from http://som.eldoc.ub.rug.nl/FILES/reports/themeA/2003/03A42/03a42.pdf.

Simonson, M. (2000). Equivalency Theory and Distance Education. Tech Trends, 43 (5), 5-8.

Slavin, R. (1987). Cooperative Learning: Student Teams, Washington: National Education Association.

Sproull, L., & Kiesler, S. (1991). Connections: New ways of working in the networked organization, Cambridge, MA: MIT Press.

Teaching & Assessment Network (1999). Developing and assessing students’ group work skills: Notes from the meeting of the Teaching & Assessment Network, 16th December 1999, retrieved October 15, 2007, from http://www.le.ac.uk/teaching/tan/pdf/groups.pdf.

Tinzmann, M. B., Jones, B. F., Fennimore, T. F., Bakker, J., Fine, C., & Pierce, J. (1990). What is the collaborative classroom, retrieved October 15, 2007, from http://www.arp.sprnet.org/Admin/supt/collab2.htm.

University of South Australia (2001). Student experience survey 2000, Unpublished report, Adelaide: University of South Australia. As quoted in Burdett, J. (2003). Making Groups Work: University Students’ Perceptions. International Education Journal, 4 (3), retrieved October 15, 2007, from http://ehlt.flinders.edu.au/education/iej/articles/v4n3/Burdett/paper.pdf.

Vygotsky, L. S. (1978). Mind and Society: The Development of Higher Mental Processes, Cambridge, MA: Harvard University Press.

Watkins, R., & Daly, V. (2003). Issues raised by an Approach to Group Work for Large Numbers. Paper presented at the BEST Conference, April 9-11, 2003, Brighton, UK.

Webb, N. (1994). Group Collaboration in Assessment: Competing Objectives, Processes, and Outcomes, retrieved October 15, 2007, from http://www.eric.ed.gov/ERICDocs/data/ericdocs2sql/content_storage_01/0000019b/80/13/69/9f.pdf.

Winkworth, A., & Maloney, D. (2002). An exploration of apathy and enthusiasm in task-focused groups: implications for task design and supervisor intervention, retrieved October 15, 2007, from http://www.tedi.uq.edu.au/conferences/teach_conference00/papers/winkworth-maloney.html.


Kwon, S. Y., & Cifuentes, L. (2007). Using Computers to Individually-generate vs. Collaboratively-generate Concept Maps. Educational Technology & Society, 10 (4), 269-280.

Using Computers to Individually-generate vs. Collaboratively-generate Concept Maps

So Young Kwon
The University of Texas Health Science Center at Houston School of Nursing, USA // Tel: +1 713 500 2069 // Fax: +1 713 500 0272 // soyoungkwon@gmail.com

Lauren Cifuentes
Department of Educational Psychology, Texas A&M University, USA // Tel: +1 979 845 7806 // laurenc@tam.edu

ABSTRACT

Five eighth grade science classes of students at a middle school were assigned to three treatment groups: those who individually concept mapped, those who collaboratively concept mapped, and those who independently used their study time. The findings revealed that individually generating concept maps on computers during study time positively influenced science concept learning above and beyond independent use of study time, but that collaboratively generating concept maps did not. Students in both the individual and collaborative concept mapping groups had positive attitudes toward concept mapping using Inspiration software. However, students in the collaborative concept mapping group did not like working in a group. This study contributes to the limited body of knowledge concerning the comparative effectiveness of individually and collaboratively generating concept maps on computers for learning.

Keywords

Collaborative learning, Computer-based learning, Concept mapping, Meaningful learning, Science learning

Introduction

Within a constructivist framework, learning takes place as learners progressively differentiate concepts into more complex understandings and also reconcile abstract understanding with concepts acquired from experience. New knowledge is constructed when learners establish connections among knowledge learned, previous experiences, and the context in which they find themselves (Bransford, 2000; Daley, 2002; Jonassen, 1994). Chang, Sung, and Chen (2001) propose that concept mapping, a form of visualization, is a powerful learning strategy consistent with constructivist learning theory in that it is a study strategy that helps learners visualize interrelationships among concepts (Duffy, Lowyer, & Jonassen, 1991).

Scientists agree with educators that visualization facilitates thought. J. H. Clark states that, “Visualization has been the cornerstone of scientific progress throughout history. Much of modern physics is the result of the superior abstract visualization abilities of a few brilliant men. …Virtually all comprehension in science, technology and even art calls on our ability to visualize. In fact, the ability to visualize is almost synonymous with understanding. We have all used the expression ‘I see’ to mean ‘I understand’” (as cited in Earnshaw & Wiseman, 1992, p. v). The authors propose that because visualization is a factor in scientific understanding, visualization of concepts should be a focus of the science curriculum.

That concept mapping facilitates learning has been demonstrated in bodies of research on visual aids and visualization processes. Research on visual displays or visual representations as adjunct aids to text has demonstrated that they facilitate both recall and comprehension (Gobert & Clement, 1999; Mayer, 1989; Mayer & Gallini, 1990). Conceptual understanding can also be facilitated by requiring students to build meaningful and appropriate mental representations and concrete, visual representations of concepts being taught (Gobert & Clement, 1999, p. 40). While visually representing concepts, learners construct knowledge rather than absorb others’ representations of knowledge. Making visual representations is a natural cognitive activity which facilitates active synthesis of concepts and phenomena (Ajose, 1999).



Concept Mapping

Concept mapping by students is a common visualization method assigned by instructors in elementary and secondary schools as well as adult learning environments to provide students with access to their own visual representations of knowledge structures (Gaines & Shaw, 1995). According to Novak (1998), concept mapping is a process of organizing and representing concepts and their relationships in visual form. Concept mapping is one tool that can overtly engage students in meaningful learning processes. Further, concept mapping promotes meaningful learning and long-term retention of knowledge and helps students negotiate meaning (Hyerle, 2000; Novak, 1990). A concept map is a schematic device for representing interrelationships among a set of concept meanings embedded in a framework of propositions: a two-dimensional, hierarchical, node-link diagram that represents verbal, conceptual, or declarative knowledge in visual or graphic form (Quinn, Mintzes, & Laws, 2003).

The effectiveness of concept mapping for learning has been tested in several contexts across many content areas. For instance, using a Biology Achievement Test and the Zucherman Affect Adjective Checklist as measures, Jegede, Alaiyemola, and Okebukola (1990) found that concept mapping raised mean achievement scores in biology (p = .00) and decreased male students’ anxiety levels (p = .01) among fifty-one tenth grade students. In 1990 Novak, the originator of the term “concept maps,” reviewed investigations of the effect and impact of concept mapping. His review revealed that the process promotes novel problem solving abilities and increases students’ positive attitudes toward the content being studied.

In a meta-analysis of 18 concept mapping studies, Horton, MacConney, Gallo, Woods, Senn, and Hamelin (1993) concluded that concept mapping on paper has positive effects on both knowledge attainment and attitudes for students at the elementary, middle school, high school, and college levels. Concept mapping raised individual student achievement in the average study by .46 standard deviations (from the 50th to the 68th percentile). Concept mapping also improved student attitudes toward the content being studied.

Concept mapping is quickly learned and easily understood by students; the minimal use of text makes it easy to scan for a word, phrase, or general idea; and visual representation allows for development of a holistic understanding that words alone cannot convey (Plotnick, 1997). Concept mapping also provides a useful organizational cue for retrieving information and concepts from memory by representing the interrelationships among that information and those concepts. In teaching, concept maps can be assessed as representations of declarative and procedural knowledge in the science classroom (Rice, Ryan, & Samson, 1998; Ruiz-Primo & Shavelson, 1996).

In a controlled study by Cifuentes and Hsieh (2003a, 2003b), college students who (a) visualized interrelationships among science concepts, (b) mapped relationships between new concepts learned and prior learning, and (c) created connections both visually and verbally in their handwritten study notes performed significantly higher on a test than students who did not show such interrelationships, relationships, and connections in their study notes (p = .02). The researchers concluded that visualization is effective as a metacognitive strategy for college level students. Given the theoretical propositions and evidence reported above, theorists and researchers agree that students should be encouraged to generate concept maps during study time.

Computer-supported Visual Learning

Incorporating visual thinking tools into the teaching and learning process opens up new avenues for constructivist learning (Anderson-Inman, 1996). Concept mapping can be directly and easily supported by personal computers and computer software (Anderson-Inman & Ditson, 1999; Anderson-Inman & Zeitz, 1993; Fischer, 1990; Royer & Royer, 2004). Computer-based visualization tools such as Inspiration and Visio enable learners to interrelate the ideas that they are studying in multidimensional networks of concepts, and to label and describe the relationships between those concepts (Jonassen, Carr, & Yueh, 1998; Jonassen, 2000). Anderson-Inman (1996) indicated that computer-based visualization makes the learning process more accessible to students and alleviates the frustration students feel when constructing and revising concept maps using paper and pencil.

Cifuentes and Hsieh (2004) explored the comparative effects of computer-based and paper-based visualization as study strategies for middle-schoolers’ science concept learning. In their quasi-experimental study, although visualization on paper improved test scores for middle schoolers, scores did not improve as a result of computer-based visualization during study. Qualitative findings indicated that students were quite unskilled at visualization and required training in both basic computer skills and computer-based visualization to successfully apply a computer-based visualization strategy. Students in that study also required more training in development of computer graphics to be effective visualizers on computers.

Because of the finding that students require training in computer-based visualization, Hsieh and Cifuentes (2006) developed a seven-and-a-half hour visualization and computer skills workshop to prepare students to represent interrelationships among science concepts. In their controlled study, eighth grade students who used computer visualization as a study strategy outscored students who constructed visualizations on paper and those who did not construct visualizations at all during study time (p = .00). Hsieh and Cifuentes concluded that, given training in computer visualization, middle school students can generate visual representations that show interrelationships among concepts to both build and demonstrate their understanding. These findings provided evidence of the effectiveness of student-generated visualization on computers for the improvement of concept understanding.

Royer and Royer (2004) also investigated the difference between hand drawn and computer generated concept mapping using Inspiration software on desktop computers with 9th and 10th grade biology classes. The group using the computer created more complex maps than the group that used paper and pencil. Also, students preferred using Inspiration to paper and pencil for concept mapping. Royer and Royer theorize that “computers enabled students to communicate more clearly, to add and revise concept maps more easily, and to discover relationships between sub-concepts more readily” (p. 79).

Collaboratively-generating Concept Maps

According to Fischer, Bruhn, Grasel, and Mandl (2002), collaborative processes can support learners’ scientific knowledge construction more effectively than independent processes. Lumpe and Staver (1995) demonstrated that collaboratively creating propositions using paper and pencil in small groups can have positive effects on student achievement. They compared collaborative conceptualizing of photosynthesis with individual conceptualizing of photosynthesis and found that high school students who collaborated out-performed those who worked independently on a comprehension test (p = .00).

A visual representation technique such as concept mapping can be integrated into collaborative learning activities (Chiu, Wu, & Huang, 2000). During the process of collaboratively developing visualizations, the role of the student can evolve from being a passive learner to becoming an active, social learner. Students’ perceptions and representations of those perceptions are challenged during collaboration, and learning builds on what learners have already constructed in other contexts (Fischer, Bruhn, Grasel, & Mandl, 2002; Brandon & Holingshead, 1999). For instance, Novak and Gowin (1984) found that concept mapping provided for meaningful knowledge construction by providing a means for learners to communicate representations of their cognitive structures with other learners. Roth (1994) suggests that when students generate concept maps on paper in small groups, they are able to demonstrate what they know about a subject while listening, observing, and learning from others, resulting in the modification of their own meaningful understandings.

In the only research study found investigating the comparative effects of individually vs. collaboratively generated concept maps, Brown (2003) compared test scores among students who collaboratively generated concept maps or individually generated concept maps on paper. A comparison of student comprehension of concepts showed that students who collaboratively generated concept maps on paper (M = 2.34, SE = 0.56) outperformed students who individually generated concept maps on paper (M = 0.46, SE = 0.52) in high school biology.

In summary, studies have investigated the effects of concept mapping on paper, concept mapping on computers, and concept mapping individually and collaboratively on paper. Such studies have shown that concept mapping positively affects students’ concept learning. When students have computer skills, computer-generated concept mapping also positively affects students’ learning of concepts beyond concept mapping on paper. In addition, the literature indicates that collaboratively generating concept maps on paper positively affects learning beyond individually generating concept maps. Therefore, the next logical step in the body of research on visualization is a comparison between computer-based individually-generated concept maps and computer-based collaboratively-generated concept maps to determine which strategy is more appropriate.


Research Questions

In order to determine the comparative effects of individually vs. collaboratively generating computer-based concept maps on eighth grade middle school science learning, the researchers administered treatments and compared scores on a subsequent comprehension test. The research questions were: (1) Did middle school students who collaboratively or individually generated computer-based concept maps perform better on a comprehension test than those who studied independently and did not generate computer-based concept maps? (2) Did middle school students who collaboratively generated computer-based concept maps perform better on a comprehension test than those who individually generated computer-based concept maps? (3) How did students’ attitudes toward generating concept maps during study time differ between those who individually generated concept maps and those who collaboratively generated concept maps? And (4) What specific learning strategies were used in each group to prepare for the comprehension test, and did they differ according to group?

The researchers hypothesized that students who individually and collaboratively generated concept maps on computers would outscore those who did not, that students who collaboratively generated concept maps would outscore students who individually generated concept maps, that attitudes toward concept mapping would be positive across groups, and that students would apply different strategies across groups.

Methodology

Mixed methods were applied to answer the research questions. Using a quasi-experimental posttest-only control group design, the relative effects on science concept learning of a computer-based individual concept mapping strategy, a computer-based collaborative concept mapping strategy, and a self-selected learning strategy were investigated. Posttest scores were compared across the three treatment groups. Qualitative data were analyzed to describe attitudes toward concept mapping and study strategies employed across groups, as well as to explain quantitative findings.

Participants

The potential participants were the entire eighth grade student body of a rural middle school in Texas (N = 89). However, eleven of the students did not turn in consent forms, one was absent for part of the treatment, and three others were absent for testing. Therefore, 74 students (32 boys and 42 girls) in five eighth grade science classes participated in this study. All eighth grade students and science classes were assigned to the same teacher, so all participants took science from one teacher. The study was conducted in the natural school setting as part of the science curriculum, with treatments randomly assigned to the five classrooms taught by that teacher. This approach meant that student participants were not randomly selected and that treatment group sizes differed, two weaknesses of the study.

The control group consisted of one class of 12 students; the individual group consisted of 31 students from two classes; and the collaborative group consisted of 31 students from two classes. The ethnic distribution of the classes combined was 62% African American, 34% Hispanic, and 7% white. Over 84% of students were economically disadvantaged. Ethnicities and socio-economic status of students were equally distributed across the classes.

Using mathematics and reading performance scores on the Texas Assessment of Knowledge and Skills to assure equivalence of student achievement across groups, the researchers randomly assigned the teacher’s classes to one of three experimental conditions: control, computer-based individual concept mapping strategy, and computer-based collaborative concept mapping strategy. Chi-square results indicated that no significant difference existed among the control, individual, and collaborative groups in their prior math and reading performance scores on the standardized Texas Assessment of Knowledge and Skills (χ² = 1.13, p = .57 and χ² = .30, p = .86, respectively).

To determine whether the groups differed in their knowledge of the four science topics to be used in the experiment (“Tools of Modern Astronomy,” “Characteristics of Stars,” “Lives of Stars,” and “Star Systems and Galaxies”), students were asked to report the extent to which they had previously been exposed to the information presented in the science essays that they studied during the four day experiment. Four Pearson Chi-square tests were conducted to compare the three groups. The Chi-square results indicated no significant difference in knowledge of the topics among the students across the four topics (χ² = 4.82, p = .57; χ² = 5.92, p = .43; χ² = 3.93, p = .69; and χ² = 4.09, p = .67).

In addition, Chi-square tests were used to investigate whether the three groups differed from each other in their frequency of accessing computers at school and at home, in the number of computer courses taken in the past, in the amount of time spent per session using a computer at school and at home, and in the frequency of using computer tools to support various learning tasks, such as word processing, e-mail, Internet, games, spreadsheets, presentations, graphics, and webpage development. According to their self-reports, students in the three groups did not differ in previous experience using computers (see Table 1).

Table 1. Pearson Chi-Square Group Differences in Computer Use Survey

Topic                                    Pearson Chi-Square   df   Significance (2-sided)
Use computers at school                  2.22                 4    .70
Time spent at school computers           3.85                 6    .70
Number of computer courses               3.09                 4    .54
Frequency – use of computer at home      5.47                 4    .24
Time spent at home computers             2.78                 4    .60
Frequency – Create computer graphics     4.61                 8    .80
Frequency – Word                         2.19                 4    .70
Frequency – Internet                     2.72                 4    .61
Frequency – E-mail                       6.13                 8    .63
Frequency – Chatting                     5.89                 6    .44
Frequency – Games                        3.47                 6    .75
Frequency – Spreadsheets                 2.14                 6    .91
Frequency – Presentations                5.20                 8    .74
Frequency – Programming                  .65                  2    .72
Frequency – Webpage development          2.34                 2    .31

*p < .05.
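For readers who wish to reproduce this kind of equivalence check, the sketch below runs one such Pearson chi-square test in Python with SciPy. The contingency table (treatment group by self-reported frequency of school computer use) is invented for illustration; only the procedure mirrors that used in the study.

    from scipy.stats import chi2_contingency

    # Rows: control, individual, collaborative groups.
    # Columns: hypothetical self-report categories (never / sometimes / often).
    observed = [
        [2,  6,  4],
        [5, 15, 11],
        [4, 16, 11],
    ]
    chi2, p, df, expected = chi2_contingency(observed)
    print(f"chi-square = {chi2:.2f}, df = {df}, p = {p:.2f}")
    # A p value above .05 indicates no detectable group difference, which
    # is the pattern reported for every row of Table 1.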

Design

The independent variable, group, had three levels: a control group consisting of one class of twelve students who were not trained in concept mapping, an experimental group consisting of two classes totaling thirty-one students trained to individually generate concept maps on computers, and an experimental group consisting of two classes totaling thirty-one students trained to collaboratively generate concept maps on computers. The dependent variable was science concept learning as demonstrated by comprehension test scores.

Computer-based Concept-Mapping Workshop

Prior to studying science concepts, both groups that created concept maps on computers attended a three day workshop on computer-based concept mapping, lasting fifty minutes on each of the three days. The workshop had the same content, materials, and processes for each experimental group, except that students in the collaborative group were informed in the last five minutes of the last day that they would be concept mapping collaboratively the following day. Science topics explored during the computer-based concept mapping workshops were “The eye – An organ system,” “Light waves and lenses,” and “Wave behavior.” The science content of the workshops was carefully selected to assure that it did not include concepts to be covered in the experimental study materials. On the first day of the workshop, the teacher trained students to identify and visualize expository text with sequential structures using Inspiration. On the second day, the teacher trained students to identify and visualize expository text with categorical structures. On the third day, the same teacher trained students to identify and visualize expository text with compare-contrast structures. The control group spent the same amount of time as the experimental groups with their teacher, but rather than learning how to create concept maps, they watched a video about the upcoming science fair.

Procedures

Prior to conducting the study, one of the researchers spent approximately one hour training the teacher in how to use Inspiration to create concept maps, and then another hour training the teacher in how to deliver the Computer-based Concept-Mapping Workshop to the students. Led by their teacher, students in the individual and collaborative experimental groups first spent three days in the workshop learning how to develop concept maps on computers using Inspiration. Every student participant had a computer account and logon ID for the school computer laboratory, so the teacher was able to trace student work and outcomes on an administrator’s server.

After the three days of the workshop, the control, individual, and collaborative groups were given the same science essays to study in the classroom for four days. The only difference among the groups was the study strategy: the control group followed their own learning strategies to study the concepts, and computers were not available to them. In the individual group, students worked independently, using their learned computer-based concept mapping skills to show interrelationships among concepts during their study time. Each student worked alone at a computer.

In the collaborative group, however, students were required to study together in groups of three per computer, using their learned computer-based concept mapping skills to collaboratively create concept maps that showed interrelationships among concepts during their study time in the classroom. Each of the three group members was assigned to be the group’s leader, reporter, or monitor. The role of the leader was to encourage group input and use the mouse to create the concept maps according to that input. The role of the monitor was to provide input regarding creation of bubbles and links. The role of the reporter was to print out group concept maps and summarize group work during study time. Roles were rotated daily so that each group member could fulfill each role. Students had not had prior training in how to work in such groups, nor did they receive such training as part of the experiment. The same teacher implemented instructional procedures for all groups during the four day experimental period. When students from any group asked for help and information, the teacher gave feedback equally to the students.

The experimental procedure for all three groups followed three steps on each of the four experimental days. First, as was typical in the classroom when students studied concepts in their text, the teacher gave ten minutes of instruction to each group. Second, after the teacher’s instruction, the students in the control group studied individually for thirty minutes to prepare for the comprehension test, following their own learning strategies such as highlighting, memorization, or taking notes. Students in the two experimental groups created concept maps using computers for thirty minutes.

Students in the individual group created concept maps and studied independently using computers. Students in the collaborative group, however, created concept maps together using computers. The experimental groups’ students saved their files on the computer server and printed out their concept maps to use during their study time. Finally, all three groups of students turned in their study notes and concept maps prior to taking a test.

Materials and Instruments

The study essays were selected by the classroom teacher from the Prentice Hall Science textbook for eighth grade that had been adopted by the school district (Padilla, Miaoulis, & Cyr, 2002). The contents of the four essays to be studied were validated by a subject matter expert, and the teacher established that they met the state curriculum and that the students had not been exposed to those essays or their topics in school prior to the study. The four short essays, “Tools of Modern Astronomy,” “Characteristics of Stars,” “Lives of Stars,” and “Star Systems and Galaxies,” were each one page long and consisted of expository text without illustrations or graphics. Students were given 50 minutes to study each essay.

The comprehension test consisted of 40 computer-based multiple-choice items from the Prentice Hall test bank provided with the eighth grade textbook adopted by the participating school district. The multiple-choice comprehension test items were selected and validated by both the teacher and the researchers as appropriate for this study. Items were criterion referenced to concepts in the essays that students studied during their experimental study time. A ten-item multiple-choice comprehension test was administered for ten minutes after treatment on each of the four days. Scores were totaled for each student to provide the 40 item total. In scoring students’ responses, one point was given for a correct answer, and no credit was given for incorrect or unanswered questions. Internal consistency for the comprehension test was established at .82 (coefficient alpha).
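Coefficient (Cronbach’s) alpha can be computed directly from the item-score matrix, as in the minimal sketch below. The 74 x 40 matrix of 0/1 item scores is simulated here, since the study’s raw data are not available; only the formula is standard.

    import numpy as np

    def cronbach_alpha(items):
        """Coefficient alpha for a (students x items) matrix of 0/1 scores."""
        items = np.asarray(items, dtype=float)
        k = items.shape[1]                           # number of test items
        item_vars = items.var(axis=0, ddof=1).sum()  # summed item variances
        total_var = items.sum(axis=1).var(ddof=1)    # variance of total scores
        return (k / (k - 1)) * (1 - item_vars / total_var)

    # Simulated scores: a latent ability drives all 40 items, so the items
    # correlate and alpha comes out high, as it did (.82) for the real test.
    rng = np.random.default_rng(0)
    ability = rng.normal(size=(74, 1))
    items = (ability + rng.normal(size=(74, 40)) > 0).astype(int)
    print(round(cronbach_alpha(items), 2))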

All participants were asked to fill out a Learning Strategy Questionnaire in the last minutes of the fourth day. The Learning Strategy Questionnaire was a student self-report instrument developed by the researchers. Students in all three groups were asked to describe the steps that they took to prepare for the test. Students in both experimental groups were asked to explain how they felt about making concept maps that showed interrelationships among concepts during study time, and to discuss how making concept maps helped them learn content. The individually-generated concept mapping group also answered the following questions: When you created concept maps on a computer during study time, do you think that working by yourself helped you learn the content better than if you had worked with others? Why or why not? The collaboratively-generated concept mapping group answered the following questions: When you created concept maps on a computer during study time, do you think that working with others helped you learn the content better than if you had worked by yourself? Why or why not?

Data Sources and Analyses

The four data sources included: (a) comprehension test scores, (b) student responses on the Learning Strategy Questionnaire, (c) students’ study notes, and (d) the video recording of classroom activities. Comprehension test scores were analyzed quantitatively. Students’ study notes, the video recording, and responses on the Learning Strategy Questionnaire were analyzed qualitatively to explain quantitative results and to provide insight into participants’ attitudes and study strategies.

A one-way analysis of variance (ANOVA), using treatment as the independent variable and comprehension test scores as the dependent variable, was conducted across the control, individual, and collaborative groups. The researchers summarized student responses to the Learning Strategy Questionnaire and triangulated those self-reports with researchers’ and teachers’ observations and students’ study notes. The researchers examined students’ study notes and identified the strategies employed. Content analysis approaches as described by Emerson, Fretz, and Shaw (1995) were applied to the researcher’s journal entries, the video recording made during study time, and students’ responses to the Learning Strategy Questionnaire. For focused coding analyses, the researcher independently compiled and numbered the contents according to the categories that emerged (Merriam, 1998). To analyze the video recording, the researcher watched the video several times and independently identified categories of student behaviors.

Results

Results of data analyses provided answers to the research questions regarding the effects of individually generated computer-based concept mapping and collaboratively generated computer-based concept mapping. ANOVA revealed that means differed across groups. A .05 level was used for determining significance. Levene’s Test of Equality of Error Variances was applied, and group variances were found to be homogeneous. Descriptive statistics for the groups were: individual group, n = 31, mean = 26.29, SD = 4.49; collaborative group, n = 31, mean = 23.19, SD = 6.20; and control group, n = 12, mean = 19.67, SD = 7.11. The one-way ANOVA results indicated that a significant difference existed among the individual, collaborative, and control groups on the mean scores of the comprehension posttest, F = 6.25 (p < .05), as seen in Table 2.

Table 2. One-Way ANOVA Summary Table

Source   df   Mean Square   F      Significance
Group    2    203.81        6.25   .00*
Error    71   2315.89

*p < .05.



Tukey’s HSD post hoc test revealed that the group that generated concept maps individually significantly outscored the group that did not generate concept maps (the control group), while the group that generated concept maps collaboratively did not differ significantly in its performance from the other groups (see Table 3). The effect size between the control and individual groups was 1.11; between the control and collaborative groups, 0.53; and between the individual and collaborative groups, 0.57.

Table 3. Tukey HSD Post Hoc Test Results

(I) Group    (J) Group       Mean Difference   Significance   Effect Size/Cohen’s d
Control      Individual      6.62*             .00*           1.11
Control      Collaborative   3.53              .17            0.53
Individual   Collaborative   3.10              .09            0.57

*p < .05.
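The analysis pipeline behind Tables 2 and 3 (one-way ANOVA, Tukey’s HSD, and Cohen’s d) can be sketched in Python as follows. The three score arrays are simulated to match the reported group sizes, means, and standard deviations, so the printed statistics will only approximate the published values.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)
    control       = rng.normal(19.67, 7.11, 12)   # n = 12
    individual    = rng.normal(26.29, 4.49, 31)   # n = 31
    collaborative = rng.normal(23.19, 6.20, 31)   # n = 31

    # One-way ANOVA across the three treatment groups (Table 2).
    f, p = stats.f_oneway(control, individual, collaborative)
    print(f"F = {f:.2f}, p = {p:.3f}")

    # Pairwise post hoc comparisons (Table 3); tukey_hsd needs a recent SciPy.
    print(stats.tukey_hsd(control, individual, collaborative))

    def cohens_d(a, b):
        """Effect size: mean difference over the pooled standard deviation."""
        na, nb = len(a), len(b)
        pooled = np.sqrt(((na - 1) * a.var(ddof=1) + (nb - 1) * b.var(ddof=1))
                         / (na + nb - 2))
        return abs(a.mean() - b.mean()) / pooled

    print(round(cohens_d(control, individual), 2))  # reported as 1.11 in the study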

Therefore, our first research hypothesis, that middle school students who collaboratively or individually generate computer-based concept maps perform better on a comprehension test than those who do not generate computer-based concept maps, is only partially accepted. Although students who individually generated concept maps scored higher than the control group, students who collaboratively generated concept maps did not score significantly higher than the control group on the comprehension test. Also, there was no significant difference between the groups of students who individually and collaboratively generated concept maps. Therefore, the second hypothesis, that middle school students who collaboratively generated computer-based concept maps would perform better on a comprehension test than those who individually generated computer-based concept maps, is rejected. Qualitative data provided insight regarding quantitative findings, students’ attitudes toward concept mapping, and the strategies that students chose to employ while studying.

Table 4. Survey Results on Learning Strategies during Study Time

Feeling about CM*
  Control group: none.
  Individual group: 85% said helpful and fun; some students thought it better than a worksheet on paper or a regular class activity; 15% no response.
  Collaborative group: 87% said helpful and fun; some students thought it better than a worksheet on paper or a regular class activity; 13% no response.

Sense that CM helps with content learning
  Control group: none.
  Individual group: 87% thought concept maps helped with learning the science content; 13% no response.
  Collaborative group: 97% thought concept maps helped with learning the science content; 3% no response.

Opinion of working with a group
  Control group: none.
  Individual group: none.
  Collaborative group: 50% thought working with a group was helpful and useful; 40% did not like group work and preferred to work alone; 10% no response.

Opinion of working individually
  Control group: none.
  Individual group: 70% thought studying by themselves was helpful and useful; 19% preferred to work in a group; 11% no response.
  Collaborative group: none.

Steps taken to prepare for test
  Control group: read the whole handout; some students took notes and underlined parts.
  Individual group: generated and studied their concept maps and tried to understand relationships among the bubbles.
  Collaborative group: generated and studied their concept maps and tried to understand relationships among the bubbles.

Previous exposure to concept mapping
  Control group: none.
  Individual group: 12% had created concept maps before; 100% did not know the Inspiration program before.
  Collaborative group: 9% had created concept maps before; 100% did not know the Inspiration program before.

*CM = concept mapping


Attitudes toward concept mapping for science concept learning were positive whether the students worked individually or collaboratively (see Table 4). Generating concept maps using the computer program Inspiration provided students with a useful learning strategy and a positive experience. An interesting observation by both the teacher and the researchers was that the students in the individual group were more positively engaged in their studying than were the students who studied collaboratively. The teacher and researchers observed students in the collaborative groups spending excessive time competing for control of the mouse, and students complained vocally about having to share the keyboard and work collaboratively. Only 50% of students in the collaborative group thought that working with peers was helpful for studying and learning science concepts. Students’ negative attitudes toward collaboration may have influenced their comprehension of the concepts being studied and might explain why the students in the individual group performed better than the students in the collaborative group.

The study strategy chosen by students in the control group when preparing for the test was simply to read the handout, while the students in the two experimental groups created and studied the relationships between the bubbles and links on the concept maps that they created during study time. Students who created concept maps expressed that creating those maps and studying the relationships between bubbles and links was quite helpful and fun for learning science.

Discussion and Conclusions

The findings provide further evidence that individually generating concept maps during study time positively influences science concept learning and that computer-based concept mapping can be facilitative. But the findings do not support the assumption that collaborative learning is more effective than learning individually. Students enjoyed the Inspiration software, which supported their construction of concept maps for science learning and helped them capture their quickly evolving ideas and organize them for meaningful learning. These findings provide evidence that constructivist learning theory is correct regarding learners’ needs to organize and represent concepts visually and explore interrelationships among concepts. However, in this case, social construction of meaning using concept maps was no more effective than application of a self-selected study strategy.

The Positive Effect of Concept Mapping on Learning

These findings replicate previous research results (Cifuentes & Hsieh, 2003a, 2003b; Cifuentes & Hsieh, 2004; Hsieh & Cifuentes, 2003; Hsieh & Cifuentes, 2006). The study extends that research by providing evidence that individually generating concept maps on computers is more effective than either independent, unguided study or collaboratively generating concept maps. However, the findings do not support Fischer et al.’s (2002) assumption that collaborative knowledge construction is more effective than individual knowledge construction. Qualitative findings suggest that the reason students in the collaborative group did not score significantly higher than the control group on achievement might have been the lack of a disciplined, supportive collaborative working environment.

Cifuentes and Hsieh (2004) previously demonstrated that distraction by computers and software and the difficulty of visualization can undermine the effectiveness of computer-based visualization. Students in the school setting of this study were not motivated to collaborate with each other and were distracted by each other, the computers, and the software. Most of the participants did not have computers at home, and the school district had limited technical facilities. The participating students learned concept mapping using computers for the first time in the context of this study. In addition, according to their own self-reports, most students had had few opportunities to develop collaborative learning skills in their young school careers. With computer skills, concept mapping, and collaboration all new to the students, the combined tasks challenged them. To be effective, all three components of the experiment required sophistication on the part of learners.

Perhaps more experienced learners would produce a different result. Findings indicate that teachers should train their students in computer-based concept mapping and facilitate adoption of concept mapping as an independent study strategy. Deciding whether to adopt a computer-based individual concept mapping strategy or a computer-based collaborative concept mapping strategy might be based upon characteristics of the learners and the learning context. For example, a teacher should ask the following questions prior to implementing concept mapping: Do students feel comfortable and competent working on computers during class time? Do students already know how to work collaboratively during study time? Do students know how to work collaboratively on computers? If the answer to any of these questions is “no,” as was the case in this study, then teachers should only recommend a collaborative strategy after students have been sufficiently trained on computers, on collaboration, and on computer-based collaboration. Until that time, students should be encouraged to develop concept maps individually. If the answer to these questions is “yes,” then the teacher might consider encouraging students to generate concept maps collaboratively.

Recommendations for Future Study

A limitation of the study is that generalizations to populations beyond its sample should be made conservatively, because a nonrandomized, quasi-experimental design was used, a small number of students participated, and groups were of unequal size. It is therefore recommended that these findings be replicated in a study with a larger group of students who are randomly selected and placed in equal-sized groups. However, we acknowledge that this is quite difficult in the naturalistic classroom environment.

Chiu, Wu, & Huang, (2000) have found that when students have computer skills and collaboration skills, they can<br />

work together effectively on computers. Therefore, the researchers suggest that this study be replicated in another<br />

school context, where students have technical skills and support, the atmosphere is conducive to collaboration, and<br />

students have a history of collaborative experience in school. They predict that under these circumstances, the<br />

outcomes of collaboratively-generated concept mapping may be more positive. Researchers might investigate<br />

whether individually or collaboratively computer-generated concept maps differentially affect learners with specific<br />

characteristics such as those listed above.<br />

Given that participants in this study were distracted by members of their group, both the classroom teacher of those<br />

students and the researchers think that an investigation comparing collaborative groups versus collaborative pairs is<br />

of interest. Collaborative pairs might be less distracting than collaborative groups. Further studies might be<br />

conducted to compare the effect of individually and collaboratively generating concept maps on the quality of those<br />

maps.<br />

Possible qualitative factors might include— propositions, hierarchical relationships among sub-concepts, cross links,<br />

and examples (Novak & Gowin, 1984). In order to conduct a study using quality of concept maps as a dependent<br />

variable, training in future studies should specifically prepare study participants to generate concept maps that<br />

include quality factors identified in concept mapping literature.<br />
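
To make such a dependent variable concrete, the four quality factors can be combined into a single rubric score. The sketch below (in Python) is illustrative only: the weights follow the scoring scheme widely attributed to Novak and Gowin (1984), and the function name and example counts are our own assumptions rather than an instrument used in this study.

    def score_concept_map(propositions, hierarchy_levels, cross_links, examples):
        """Illustrative rubric score for one concept map.

        propositions     -- valid concept-link-concept statements
        hierarchy_levels -- valid levels of hierarchy among sub-concepts
        cross_links      -- valid links across branches of the hierarchy
        examples         -- specific instances attached to concepts
        """
        return (1 * propositions        # 1 point per valid proposition
                + 5 * hierarchy_levels  # 5 points per level (assumed weight)
                + 10 * cross_links      # 10 points per cross link (assumed weight)
                + 1 * examples)         # 1 point per example

    # A map with 12 propositions, 3 hierarchy levels, 2 cross links and
    # 4 examples scores 12 + 15 + 20 + 4 = 51.
    print(score_concept_map(12, 3, 2, 4))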

References

Ajose, S.A. (1999). Discussant’s comments: On the role of visual representations in the learning of mathematics. Paper presented at the annual meeting of the North American Chapter of the International Group for the Psychology of Mathematics Education, October 23-26, 1999, Morelos, Mexico.

Anderson-Inman, L. (1996). Computer-assisted outlining: Information organization made easy. Journal of Adolescent and Adult Literacy, 39 (4), 316-320.

Anderson-Inman, L., & Ditson, L. (1999). Computer-based concept mapping: A tool for negotiating meaning. Learning and Leading with Technology, 26 (8), 6-13.

Anderson-Inman, L., & Zeitz, L. (1993). Computer-based concept mapping: Active studying for active learners. The Computing Teacher, 21 (1), 6-8, 10-11.

Brandon, D. P., & Hollingshead, A. B. (1999). Collaborative learning and computer-supported groups. Communication Education, 48 (2), 109-126.

Bransford, J. (2000). How people learn: Brain, mind, experience, and school, Washington, DC: National Academy of Sciences.

Brown, D.S. (2003). High school biology: A group approach to concept mapping. The American Biology Teacher, 65 (3), 192-197.

Chang, K.E., Sung, Y.T., & Chen, I.D. (2001). Learning through computer-based concept mapping with scaffolding aid. The Journal of Computer Assisted Learning, 17, 21-33.

Chiu, C.H., Wu, W.S., & Huang, C.C. (2000). Collaborative concept mapping processes mediated by computer. Institute of Computer and Information Education, National Taiwan Teachers College, 33 (2), 95-100.

Cifuentes, L., & Hsieh, Y. C. (2003a). Visualization for construction of meaning during study time: A quantitative analysis. International Journal of Instructional Media, 30 (3), 263-273.

Cifuentes, L., & Hsieh, Y. C. (2003b). Visualization for construction of meaning during study time: A qualitative analysis. International Journal of Instructional Media, 30 (4), 407-417.

Cifuentes, L., & Hsieh, Y. C. (2004). Visualization for middle school students’ engagement in science learning. Journal of Computers in Mathematics and Science Teaching, 23 (2), 109-137.

Daley, B. J. (2002). Facilitating learning with adult students through concept mapping. The Journal of Continuing Higher Education, 50 (1), 21-31.

Duffy, T.M., Lowyck, J., & Jonassen, D.H. (1991). Designing environments for constructive learning, New York: Springer.

Earnshaw, R.A., & Wiseman, N. (1992). An introductory guide to scientific visualization, New York: Springer.

Emerson, R. M., Fretz, R. I., & Shaw, L. L. (1995). Writing ethnographic fieldnotes, Chicago, IL: The University of Chicago Press.

Fischer, F., Bruhn, J., Gräsel, C., & Mandl, H. (2002). Fostering collaborative knowledge construction with visualization tools. Learning and Instruction, 12, 213-232.

Fischer, K.M. (1990). Semantic networking: The new kid on the block. Journal of Research in Science Teaching, 27, 1002-1018.

Gaines, B.R., & Shaw, M.L.G. (1995). Concept maps as hypermedia components. International Journal of Human-Computer Studies, 43 (3), 323-361.

Gobert, J.D., & Clement, J.J. (1999). Effect of student-generated diagrams versus student-generated summaries on conceptual understanding of causal and dynamic knowledge in plate tectonics. Journal of Research in Science Teaching, 36 (1), 39-53.

Horton, P.B., MacConney, A.A., Gallo, M., Woods, A. L., Senn, G.J., & Hamelin, D. (1993). An investigation of the effectiveness of concept mapping as an instructional tool. Science Education, 77, 95-111.

Hsieh, Y. C., & Cifuentes, L. (2003). A cross-cultural study of the effect of student-generated visualization on middle school students’ science concept learning in Texas and Taiwan. Educational Technology Research and Development, 51 (3), 90-95.

Hsieh, Y.C., & Cifuentes, L. (2006). Student-generated visualization as a study strategy for science concept learning. Educational Technology & Society, 9 (3), 137-148.

Hyerle, D. (2000). A field guide to using visual tools, Alexandria, VA: Association for Supervision and Curriculum Development.

Jegede, O.J., Alaiyemola, F.F., & Okebukola, P.A.O. (1990). The effect of concept mapping on students’ anxiety and achievement in biology. Journal of Research in Science Teaching, 27, 951-960.

Jonassen, D. (1994). Thinking technology. Educational Technology, 34 (4), 34-37.

Jonassen, D.H. (2000). Computers as mindtools for schools: Engaging critical thinking, Upper Saddle River, NJ: Prentice Hall.

Jonassen, D. H., Carr, C., & Yueh, H.P. (1998). Computers as mindtools for engaging learners in critical thinking. TechTrends, 43 (2), 24-32.

Lumpe, A. T., & Staver, J.R. (1995). Peer collaboration and concept development: Learning about photosynthesis. Journal of Research in Science Teaching, 32 (1), 71-98.

Mayer, R. (1989). Systematic thinking fostered by illustrations in scientific text. Journal of Educational Psychology, 81, 240-246.

Mayer, R., & Gallini, J. (1990). When is an illustration worth ten thousand words? Journal of Educational Psychology, 82, 715-726.

Merriam, S. B. (1998). Qualitative research and case study applications in education, San Francisco: Jossey-Bass.

Novak, J. D. (1990). Concept mapping: A useful tool for science education. Journal of Research in Science Teaching, 27 (10), 937-949.

Novak, J.D. (1998). Learning, creating and using knowledge: Concept maps as facilitative tools in schools and corporations, Mahwah, NJ: Lawrence Erlbaum.

Novak, J.D., & Gowin, D. B. (1984). Learning how to learn, Cambridge, UK: Cambridge University Press.

Padilla, M. J., Miaoulis, I., & Cyr, M. (2002). Prentice Hall science explorer, Upper Saddle River, NJ: Prentice Hall.

Plotnick, E. (1997). Concept mapping: A graphical system for understanding the relationship between concepts, retrieved October 15, 2007, from http://www.ericdigests.org/1998-1/concept.htm.

Quinn, H. J., Mintzes, J. J., & Laws, R. A. (2003). Successive concept mapping. Journal of College Science Teaching, 33 (3), 12-16.

Rice, D. C., Ryan, J. M., & Samson, S. M. (1998). Using concept maps to assess student learning in the science classroom: Must different methods compete? Journal of Research in Science Teaching, 35 (10), 1103-1127.

Roth, W.M. (1994). Students’ views of collaborative concept mapping: An emancipatory research project. Science Education, 78 (1), 1-34.

Royer, R., & Royer, J. (2004). Comparing hand drawn and computer generated concept mapping. Journal of Computers in Mathematics and Science Teaching, 23 (1), 67-81.

Ruiz-Primo, M. A., & Shavelson, R. (1996). Problems and issues in the use of concept maps in science assessment. Journal of Research in Science Teaching, 33, 569-600.



Hastie, M., Chen, N.-S., & Kuo, Y.-H. (2007). Instructional Design for Best Practice in the Synchronous Cyber Classroom. Educational Technology & Society, 10 (4), 281-294.

Instructional Design for Best Practice in the Synchronous Cyber Classroom

Megan Hastie
Brisbane School of Distance Education, Australia // Tel: +07 3214 8265 // mhast5@eq.edu.au

Nian-Shing Chen and Yen-Hung Kuo
Department of Information Management, National Sun Yat-sen University, Taiwan // nschen@cc.nsysu.edu.tw // d934020001@student.nsysu.edu.tw

ABSTRACT

This paper investigates the correlation between the quality of instructional design and learning outcomes for early childhood students in the online synchronous cyber classroom. Today’s generation of e-learners has access to highly engaging and well-designed multimedia synchronous classrooms. However, little data exists on what constitutes ‘good practice’ in instructional design for online synchronous cyber lessons. The synchronous cyber classroom outperforms all other modes of instruction in enabling students to simultaneously integrate visual, auditory and kinaesthetic processes. The online synchronous cyber classroom provides learners with more authentic and engaging learning activities, enabling higher levels of learning compared to purely asynchronous modes of self-paced learning. During 2001-2007 a group of students aged 5 to 8 years collaborated with their teacher at Brisbane School of Distance Education, Australia in a trial of online synchronous learning. The trial identified ‘best practice’ in the instructional design of synchronous lessons delivered through the Collaborative Cyber Community (3C) learning platform at the National Sun Yat-sen University, Taiwan. A guideline for ‘best practice’ in the instructional design of online synchronous cyber lessons for early childhood students is developed and discussed.

Keywords

Instructional design, Online synchronous learning, Cyber classroom, Early childhood students

Introduction

This paper describes the evolution of ‘best practice’ in instructional design in synchronous cyber classrooms for students aged 5 to 8 years. The students were enrolled at Brisbane School of Distance Education (BSDE) and worked synchronously with their teacher, Ms Megan Hastie, during a six-year trial. The trial became an international collaboration between BSDE, Australia and the National Sun Yat-sen University (NSYSU), Taiwan during 2005-2007. Synchronous teaching and learning over the Internet was embraced by the teacher and students as a means of overcoming the tyranny of distance and isolation experienced by most of the students.

The paper attempts to define ‘best practice’ in instructional design for maximum learning gain in early childhood students in the synchronous cyber classroom. We examine instructional design within the context of the practical purpose of learning. We describe new instructional design elements that we have developed to maximize online learning for very young students. These elements acknowledge the prior learning of each student and provide the practical means for achieving expected learning outcomes.

Technological innovation has provided educators with hardware and software but has not necessarily provided innovative instruction and pedagogy. To use an analogy, we have the machine but we are still waiting for the teaching manuals to be written. In particular, a paucity of data exists on instructional design for synchronous cyber learning with early childhood students.

BSDE is a public school which operates under the governance of Education Queensland (EQ). Seven schools of distance education are located throughout Queensland. The students in this study were aged 5 to 8 years and were enrolled in Megan Hastie’s class at BSDE between 2001 and 2007. The students and their parents live in various locations throughout Australia and overseas. Students enrolled at BSDE access a range of asynchronous course materials including print, audio and multimedia. Students work off-campus, usually at home and under the supervision of a parent who is their Home Tutor. Students complete the course work and return it to their teacher for evaluation. Communication between the student and the teacher has traditionally been by mail and telephone. With the advent of the Internet, communication is occurring increasingly through email and web-based resources. The use of synchronous teaching and learning is being explored as a viable alternative and adjunct to traditional modes of delivery for students enrolled in Education Queensland schools of distance education.

Nowadays, people live in a global village, and children may move with their parents to a foreign country because of changes in work locations. With the global Internet environment, children can use Information Communication Technologies (ICT) to keep in touch with their family and friends. Furthermore, e-learning can also help individual students continue their education. Interaction is one of the key factors in gaining access to information and in effective learning (Keegan, 1990). Instructors are therefore expected to take more responsibility for instructional design to improve and encourage learners in online learning activities (Hannafin, 1992).

Given that ICT enables the instantaneous transfer of information, educators have the added challenge of designing instruction for high-speed interaction. In ICT-enabled educational environments, thoughts can be transferred from one person to another in nanoseconds, enabling brain-to-brain information exchange (Hastie & Chen, 2006). The synchronous component of ICT-enabled learning environments is gaining more prominence. For instance, Chen et al. (2005) state that the synchronous cyber classroom provides a learning environment that can outperform both asynchronous online instruction and traditional face-to-face instruction. Little attention, however, has been given to the use of synchronous cyber learning with younger students and to the development of ‘best practice’ in instructional design in the synchronous cyber classroom. This paper attempts to redress the situation.

Literature Review

Whatever the current behaviorist-oriented or constructivist-oriented arguments, instructional design leads instructors to take a systematic approach to creating a learning environment that promotes effective learning outcomes. In this paper we define instructional design as the process through which an educator determines the best teaching methods for specific learners in a specific context, attempting to obtain a specific goal (Dick, Carey, & Carey, 2001).

The challenge for today’s educators is to develop ‘best practice’ in instructional design for lessons in the synchronous cyber classroom. Chickering and Ehrmann (1996) say that simply incorporating technology into instruction is not enough. Instructors need to focus on seven ‘best practice’ strategies:

1. Increase interaction between instructors and students.
2. Increase cooperation among students.
3. Increase students’ active learning.
4. Give prompt feedback to students.
5. Facilitate students’ time on task.
6. Communicate expectations.
7. Adapt to students with diverse talents and ways of learning.

These strategies are highly relevant to the pedagogy of the synchronous cyber classroom. The challenge for educators is to ensure that ‘best practice’ is translated into ‘real’ practice in synchronous cyber lessons.

Worley (2000) cited Ehrmann in arguing that the learner should be the focal point of research on web-based learning. In particular, she says that the relationship between faculty and student deserves more investigation, and that greater prominence should be given to specific learning strategies rather than to the impact of the technology in isolation. Witt and Wheeless (2001) say that research must consider other variables, particularly those related to instructional strategies, to determine the effectiveness of web-based learning.

A high correlation has been identified between the immediacy of verbal and nonverbal responses and cognitive learning (Rodriguez et al., 1996). The immediacy of the teacher’s verbal and nonverbal behaviours in face-to-face situations has been linked both directly and indirectly to enhanced cognitive and affective learning. Immediacy is therefore equally relevant, and possibly more relevant, in web-based lessons because participants are required to create a social presence through their communications. The synchronous cyber classroom lends itself to high-speed verbal and nonverbal interaction between teacher and student. When this is translated into practice in the synchronous cyber classroom, the teacher and student use the technology to establish their communication link, to create a social presence and to negotiate the lesson content. They then embark on the ‘real’ synchronous interactive component of the lesson using the system tools. The teacher gives the student ‘live’ feedback and evaluation. Recordings of the lesson can be used by both the teacher and student for evaluation of the learning process. Storage of recorded lessons means instructional design issues can be identified and resolved.

With feedback gained from the instructional design process, refinements can be made to the adopted learning theory and to the instructional process itself. In computer-mediated instructional design (including e-learning, e-tutoring and course development), instructional design plays a critical role in enabling immediacy of response and in creating social presence. The immediacy behaviors of participants in the synchronous cyber classroom become more prominent. The role of the teacher shifts from discussion leader to discussion facilitator as the student assumes more responsibility. This facilitates technology-based learning which enables students to solve their specific learning problems (Nichols & Anderson, 2005).

Cognitive gain is one of the important goals of instructional design (Chial, 2004; Phillips, 1994). Teachers therefore need to focus on how to design learning activities which better engage students in active learning, because this results in deeper learning and promotes cognitive gain. Topping (2005) has pointed out that cooperative learning and peer tutoring facilitated by ICT tools can increase students’ self-organizing opportunities and improve cognitive gain (Topping et al., 2004). Other literature has also reported on ways to use different ICT tools to promote self-organizing opportunities for learners (Vogel et al., 2006; Corpus & Eisbach, 2005).

Synchronous learning methodology has been acknowledged as an important strategy for collaborative learning (Marjanovic, 1999). Chen, Ko, Kinshuk, and Lin (2005) demonstrate that the most promising advantage of the synchronous cyber classroom is the provision of immediate feedback, such that learners in the cyber classroom are able to correct themselves immediately and thereby strengthen their learning. The synchronous process enhances student motivation through the students’ obligation to be present and to participate in the face-to-face cyber environment. Wang and Chen (2007) describe the results of a pilot study assessing the value of a synchronous learning management system to support online live tutorial sessions in second language learning. They found that the range of tools supported by the platform, including chat, whiteboard, and videoconferencing technology, can provide a resilient, supportive learning environment for distance learning students. Hastie and Palmer (2003) found that the synchronous cyber classroom demands higher levels of concentration from learners during learning activities and thereby maximizes memorization and learning outcomes; their paper compares results recorded in the synchronous cyber classroom with those of its asynchronous counterpart and concludes that educators need to give greater prominence to synchronous cyber lessons for learners aged 5-8 years instead of following the current pedagogy of starting from asynchronous learning activities.

Methodology

The trial provided individualised instruction to students aged 5-8 years using online synchronous cyber classrooms created by a digital school called the Collaborative Cyber Community (3C). The digital school server is located at NSYSU and is a Synchronous Learning Management System (SLMS) developed and supported by Professor Nian-Shing Chen to facilitate international team-teaching for young children. The teacher, Megan Hastie, was based in Brisbane. The students were located within Australia and throughout the world, including Asia and the Pacific, Europe and the United States of America.

The synchronous cyber lessons were an adjunct to the regular program provided at BSDE. The students volunteered to participate in the trial. Students worked at home using a personal computer and linked up to the Collaborative Cyber Community digital school platform.

The early phase: communication with isolated learners

In the early phase of the trial the synchronous cyber classrooms were used to enable live communication between the teacher and students. The cost of telephone calls to isolated and international students meant that these students seldom had the opportunity to speak to their teacher. The majority of their communication was asynchronous and paper based, relying on traditional mail services.



The use of digital communication modes via the Internet became a feature at BSDE in 2000. This enabled email contact and, more significantly, provided broadband capacity that supported an interactive whiteboard and Voice over Internet Protocol (VoIP), enabling the teacher and students to talk and interact with each other in ‘real time’. The use of the Internet for communication set a precedent in itself because many of the students were geographically, socially and educationally isolated. It meant that the students went from practically no direct contact and communication with their teacher to ‘real time’ interaction and discussion.

The teacher and student entered the synchronous cyber classroom by logging on to the Collaborative Cyber Community platform. The platform supports both asynchronous and synchronous functionalities, allowing teachers to design and conduct various teaching and learning activities in either an asynchronous or a synchronous cyber classroom. The student and their home tutor, usually a parent, logged on to the Collaborative Cyber Community platform at a pre-arranged time. Allowances were made for time zone differences for students living overseas.
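
The time-zone allowance amounts to simple clock arithmetic: fix the lesson time in the teacher’s zone and convert it to each student’s local zone. The following minimal sketch uses only the Python standard library; the student locations are hypothetical, not drawn from the trial.

    from datetime import datetime
    from zoneinfo import ZoneInfo

    # Lesson fixed at 9:00 a.m. in the teacher's zone (Brisbane).
    lesson_start = datetime(2007, 10, 15, 9, 0,
                            tzinfo=ZoneInfo("Australia/Brisbane"))

    # Convert to each student's local zone (illustrative locations only).
    for zone in ("Asia/Taipei", "Europe/London", "America/New_York"):
        local = lesson_start.astimezone(ZoneInfo(zone))
        print(zone, local.strftime("%H:%M on %d %b %Y"))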

Both teacher and student wore a headset with a built-in microphone. The headsets limited extraneous noise and audio feedback. The interactive whiteboard was used, along with a webcam (where available), for the sharing of digital information. VoIP enabled audio discussion. The student used a mouse or a graphic tablet to draw and write on the synchronous interactive whiteboard. The teacher provided the student with digital and auditory feedback in the form of written and verbal comments. Where a webcam was used by the teacher and student, the teacher was able to make direct live observations of the student at work.

The interaction between the teacher and student in the initial phase of the trial can best be described as teacher-directed learning. The teacher prepared multiple whiteboard screens prior to the lesson. The screens featured activities designed to meet the specific learning needs of the individual learner. This type of activity resembled a ‘worksheet’ similar to the paper activities found in traditional classrooms. The teacher used commercially produced graphics as picture clues. The student completed the activity and received immediate feedback from the teacher in the form of a ‘tick’ and written praise for a correct answer. Negative feedback was avoided for incorrect answers; rather, the student was encouraged to have another attempt at the correct answer. This allowed the student to perform tightly scripted tasks in a teacher-directed and highly supportive learning environment. An example of this type of activity is shown in Figure 1.

Figure 1. Worksheet format

The Second Phase: student and teacher ‘real’ collaborative learning

The communication capabilities of the synchronous cyber classrooms were established in the early phase of the trial. The focus of the trial then changed to the pedagogical issues related to synchronous teaching and learning. This change can be largely attributed to a student named Madeline.



Madeline was eight years of age and worked online in a synchronous cyber classroom with Megan Hastie during 2001-2002. A gifted student, Madeline brought a new dimension to cyber lessons. Her written responses on the interactive whiteboard in the synchronous cyber classroom provided irrefutable evidence of abstract thinking and higher levels of cognitive functioning. Madeline created ‘mind maps’ on the interactive whiteboard to record and plan her learning. This is a form of ‘metacognition’, in which Madeline demonstrated her ability to think about her own thinking. She recorded her thoughts on the interactive whiteboard whilst simultaneously typing the text and discussing the content with her teacher. Figure 2 shows Madeline planning a research project on birds nesting on her parents’ farm.

Figure 2. High level cognitive function

Figure 3. High level cognitive tasks

From this point the trial sought to maximise the interaction between the student and teacher during synchronous cyber lessons. A less didactic approach was adopted. Strategies were trialled to support instruction that demanded demonstrated evidence of visual, auditory and kinaesthetic processing by the student during the lesson. We expected the integration of these three sensory functions to result in higher levels of thinking and learning. This posed instructional design challenges. During 2002-2007 the design of the whiteboard screens evolved such that the student contributed more information and engaged in ‘chat’, both verbal and written, relating to the activity. In contrast to the worksheet format of Figure 1, Figure 3 shows how instructional design enabled the student to interact in a more spontaneous way. The student’s responses were visual, auditory and kinaesthetic. This demanded a higher level of thinking and contributed to cognitive gain.

We found a direct correlation between the instructional design features illustrated in Figure 3 and higher levels of interactivity by the student. The student provided more information and demonstrated greater levels of understanding of the concepts.

We attribute the teacher’s ability to adopt a less didactic and teacher-directed approach to the design of synchronous cyber lessons to the teacher’s growing confidence with the technology of the synchronous cyber classroom. This included greater competence in establishing the link with the student, using the interactive whiteboard and its tools, and designing synchronous cyber lessons. Growing confidence with the technology allowed the focus to shift to the pedagogy.

This resulted in higher levels of interaction and collaboration between the student and teacher during the lesson. It meant the teacher and student shared the learning space on the interactive whiteboard. The whiteboard became a colourful, creative and dynamic playground for learning. Input from the student was maximised, which also reduced teacher preparation time. The teaching and learning became ‘real’. The evolution of the lesson format and its associated instructional design features was guided by the students themselves.

We decided to keep the instructional design for the synchronous lessons simple and ‘minimalist’. This focus on clarity of communication between the student and teacher was deemed the most suitable approach for working with younger students. It provided the students with the freedom to use concrete technological tools, especially the writing tools on the interactive whiteboard, to encode abstract thought. An interesting dichotomy evolved between simplicity in the instructional design and complexity in the cognitive functioning of students. In effect, the teacher and students collaborated in the learning process and in the design of lessons. This approach had two major advantages: it was a good way to engage online students, and it was also a good strategy to encourage teachers to take a graduated approach to the adoption of online synchronous teaching.

The simplicity of the approach, with its ‘minimalist’ design features, was applied to the lesson format and also to each individual screen of the synchronous interactive whiteboard. The increased interaction by the student in the form of visual, auditory and kinaesthetic responses during synchronous cyber lessons was equated with higher cognitive function.

The lesson format usually consisted of three or four screens prepared by the teacher prior to the lesson. Extra screens were added to the lesson as required. Some screens contained a simple one-sentence instruction or idea for the student to read; the remaining space on the screen was purposely left empty and was used for drawing and writing by the student and teacher. Other screens were completely empty and were added as the lesson progressed to provide extra writing space for the student.

As a result, we developed a ten-point guideline for ‘best practice’ in instructional design to maximise learning outcomes in the students in our trial:

1. We kept the design simple (minimalist) when we planned the format of the synchronous lesson and each screen of the interactive whiteboard.
2. We used the first screen of the synchronous whiteboard, along with the webcam (where available), to start the lesson and welcome the student.
3. We used the second screen for the lesson plan.
4. We provided teacher-directed activities during the lesson.
5. We used the third and subsequent screens as a working space for activities based on the lesson plan.
6. We gave greater prominence to freehand drawing and keyboard writing on the interactive whiteboard.
7. We balanced interactivity and spontaneity with teacher ‘wait time’.
8. We used the final screen to plan the next lesson.
9. We also used the final screen to praise the student, end the lesson and say farewell.
10. We used email as an adjunct to synchronous cyber lessons.



We applied the ten points to all synchronous cyber lessons, with points 1, 2, 3, 8, 9 and 10 forming a standard template. This provided students with a predictable format for the lessons and minimised teacher preparation time. The students formed an expectation that the lesson would start with a greeting and a discussion of the lesson plan (the advance organiser). Points 4-7 formed the working space for collaborative learning between the student and teacher. From this stage of the lesson until its conclusion, the expectation of the students was that they would work collaboratively with their teacher on self-selected and teacher-directed tasks. The teacher was able to provide direct instruction while facilitating shared ownership of the working space with the students. The lesson conclusion corresponds to points 8-10, with acknowledgement of students’ effort and learning outcomes and collaborative planning for the next lesson. We describe the guideline in more detail in the following section; a minimal sketch of the lesson skeleton follows below.
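
As a minimal sketch, the standard template can be modelled in Python as fixed opening screens (points 2-3) and fixed closing screens (points 8-9) wrapped around a variable number of working screens (points 4-7), with email (point 10) sitting outside the lesson itself. All names below are illustrative assumptions; the 3C platform exposes no such API.

    from dataclasses import dataclass, field

    @dataclass
    class Screen:
        purpose: str           # e.g. "greeting", "lesson plan", "working"
        instruction: str = ""  # one simple sentence placed at the top

    @dataclass
    class SynchronousLesson:
        student_name: str
        working_screens: list = field(default_factory=list)  # points 4-7

        def screens(self):
            # Points 2-3: fixed opening screens.
            opening = [Screen("greeting", f"Hello, {self.student_name}!"),
                       Screen("lesson plan", "Today we will ...")]
            # Points 8-9: fixed closing screens.
            closing = [Screen("next lesson plan", "Next time we will ..."),
                       Screen("farewell", "Bye for now!")]
            return opening + self.working_screens + closing

    lesson = SynchronousLesson("Madeline",
                               [Screen("working", "Draw the birds on your farm.")])
    print([s.purpose for s in lesson.screens()])
    # ['greeting', 'lesson plan', 'working', 'next lesson plan', 'farewell']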

Findings & Discussions<br />

We have identified ten basic instructional design elements for synchronous cyber lessons with students aged 5-8<br />

years. These will now be described in detail:<br />

1. Keep the design simple (minimalist) for the lesson format and for each screen of the interactive cyber<br />

whiteboard<br />

• Use the student’s name.<br />

• Encourage the student to respond using your name.<br />

• Give thought to your ‘persona’ as a synchronous cyber teacher and try to simply be yourself.<br />

• Keep verbal and written communication simple for clarity of communication.<br />

2. Start the lesson and welcome the student<br />

• Use the first screen of the interactive whiteboard to greet the student and start the lesson.<br />

• Use webcam, if available, to greet the student and their Home Tutor.<br />

• Use the student’s name in both the written and verbal greeting.<br />

• Continue to use the student’s name throughout.<br />

• Use one colour for the greeting (blue is good). Write the greeting at the top of the screen.<br />

• Continue to use the same colour throughout the lesson.<br />

Figure 4. An example of element 2<br />



3. Use the second screen for the lesson plan
• Provide the student with an advance organizer based on negotiated content.
• Invite the student and their Home Tutor to nominate content and specific learning for the lesson.
• Confirm the lesson plan with the student and Home Tutor at the start of the lesson, or prior to the lesson via email.

Figure 5. An example of element 3

Figure 6. An example of element 4

4. Provide teacher-directed activities during the lesson
• Provide pedagogical input for decisions about content based on observations of the student’s progress, child development theory and curriculum requirements.
• Use simple written instructions throughout the lesson, preferably one sentence, placed at the top of the screen.
• Use graphics and photographs as picture clues.
• Share ownership of the empty space on the whiteboard with the student.
• Draw a box on the whiteboard screen to define the working space for more structured activities.
• Use a different colour for feedback and comments to the student (pink is good).
• Write feedback and comments at the bottom or side of the screen, or wherever a space can be found that does not overlap the student’s written work.

5. Use the third and subsequent screens as a working space for activities based on the lesson plan
• Use the whiteboard as a learning space and ‘playground’.
• Add screens as required.

Figure 7. An example of element 5

Figure 8. An example of element 6

6. Give greater prominence to freehand drawing and keyboard writing on the whiteboard
• Encourage the student to draw pictures on the whiteboard as a way of developing confidence.
• Use drawing to develop writing skills.
• Encourage the student to use the tools in the toolbar.
• Encourage the student to experiment with colour using the palette.
• Use writing to develop reading skills.
• Encourage the student to develop keyboarding skills.
• Encourage the student to use the graphic tablet.

7. Balance interactivity and spontaneity with teacher ‘wait times’
• Allow the student time to respond verbally.
• Keep the teacher’s verbal comments simple to accommodate audio lag.
• Write comments and feedback on the whiteboard to compensate for audio lag.
• Repeat verbal comments in writing to restate the message if necessary.
• Use the chat room to communicate with the Home Tutor.

Figure 9. An example of element 7

Figure 10. An example of element 8



8. Use the final screen to plan the next lesson
• Negotiate the content for the next lesson with the student and their Home Tutor.
• Provide pedagogical input for decisions about content based on observations of the student’s progress, child development theory and curriculum requirements.

9. Use the final screen to praise the student, end the lesson and say farewell
• Give positive feedback to the student. For example, ‘You did clever writing today.’
• Tell the student that the lesson is ending.
• Say farewell. For example, ‘Bye for now. I’ll talk to you again soon.’

Figure 11. An example of element 9

10. Use email as an adjunct to synchronous cyber lessons
• Use email to confirm the lesson.
• Use email to give feedback to the student and Home Tutor after the lesson.

Throughout our trial of synchronous cyber teaching and learning with students aged 5-8 years we found an increased level of interaction between teacher and students. We negotiated the lesson content with the students and their Home Tutors. We used an advance organiser at the start of each lesson to communicate expectations. We were able to cater for students with diverse talents and ways of learning through a negotiated curriculum and lessons ‘tailor-made’ to suit individual needs. We found that the synchronous interactive whiteboard compelled the student and teacher to encode information in a high-speed exchange of written responses. Students used the mouse, graphic tablet and keyboard to write on the synchronous interactive whiteboard. Students acquired sophisticated keyboarding skills from an early age, generally by age seven. Keyboarding enabled the students to participate in the learning process during synchronous cyber lessons in a highly interactive manner.

We believe that the level of interactivity demonstrated by the students in our trial during synchronous cyber lessons was superior to that of any other learning environment, including face-to-face. The teacher was able to respond immediately to the student, using the mouse and keyboard to provide written instructions and feedback on the synchronous interactive whiteboard. The multi-layered and colourful written interactions created by the student and teacher on the synchronous interactive whiteboard became the ‘face’ of the communication and contributed to the evolution of a social presence that we believe is unique to synchronous cyber lessons. We found that the immediacy of the teacher’s verbal and nonverbal behaviours during intense, high-speed interaction between the student and teacher resulted in accelerated learning by the students. The tools of the synchronous cyber classroom were observed to contribute to a higher level of efficiency in the mastery of concepts and the completion of tasks by the students. In particular, the students demonstrated higher levels of concentration and increased work rates. We found that the quality and quantity of the work completed by the students in, for example, twenty-minute online synchronous lessons could be compared to that of longer time allocations in traditional classroom settings. The students’ time on task was maximised and resulted in enhanced cognitive and affective learning. In terms of engaging students, maintaining high levels of concentration, capitalising on their individual interests and learning styles, and simply ‘getting the job done’, we found synchronous lessons surpassed all other modes of instruction.

Although we worked individually with our students, we found that the students, who were geographically isolated from their teacher and from one another, were beginning to form a learning community within the Youth Knowledge Network. We anticipate that the students’ technologically networked community will continue to grow and that this will result in increased cooperation among our students in the future.

Our guideline for ‘best practice’ in instructional design in synchronous cyber classrooms is a practical application of the seven strategies identified by Chickering and Ehrmann (1996), who suggested educators use technology in instruction to increase interaction between instructors and students, to increase cooperation among students, to increase students’ active learning, to provide prompt feedback to students, to facilitate students’ time on task, to communicate expectations, and to provide benefits for students with diverse talents and ways of learning. The synchronous cyber classroom provided the technological tools to apply the strategies as described by Chickering and Ehrmann (1996). But we found there was no manual for teachers to use during synchronous cyber lessons that gave a practical and pedagogically sound guide to the instructional design of such lessons. The guideline was developed, therefore, as a ‘user friendly’ checklist for teachers who are embarking on synchronous cyber teaching.

In summary, our major finding was the enhanced learning outcomes that we observed in all the students in our trial. We attribute this to the ideal learning environment provided in the synchronous cyber classroom. Students developed higher levels of concentration and memorization as a direct result of the uniquely individual learning process that occurs in synchronous cyber lessons. Students were able to integrate visual, auditory and kinaesthetic processes free from the distractions that plague the traditional classroom. We were able to record and quantify the students’ learning outcomes as evidenced in their written and verbal responses. Based on our findings, we believe an urgent need exists for further research into brain function in early childhood students during lessons in the synchronous cyber classroom. This leads us to the conclusion that synchronous cyber teaching and learning that is informed by best practice in instructional design poses a serious challenge to traditional classroom practices in early childhood education.

Conclusion

A paucity of literature exists on best practice in instructional design for online synchronous cyber lessons with students aged 5-8 years. This paper has attempted to identify ‘best practice’ in instructional design in the online synchronous cyber classroom. We equated increased interactivity by students, in the form of verbal and written responses during synchronous cyber lessons, with higher learning outcomes. We developed a guideline for the instructional design of synchronous cyber lessons that would maximise student interaction and result in enhanced learning. The guideline is, therefore, a practical application of best practice strategies and a survival manual for teachers embarking on synchronous cyber teaching.

Essentially, we found that when the teacher adopted a simplified and ‘minimalist’ approach to instructional design, the students contributed significantly more information and demonstrated higher levels of learning. We regard this as ‘real’ collaborative learning. The students’ rate of response was faster and involved an integration of visual, auditory and kinaesthetic processes. We attribute this to the unique and ideal learning environment that is created in the synchronous cyber classroom.

We believe the simplified ‘minimalist’ approach can be used to encourage more teachers to embark on synchronous online teaching, as it helps build confidence in what may be perceived as a highly innovative yet challenging application of technology. It is a carefully considered pedagogical approach to working synchronously with early childhood learners. As such, it has the potential to influence best practice in synchronous cyber teaching and learning. In conclusion, we say the synchronous cyber classroom outperforms all other modes of instruction, and we urge educators to give greater prominence to the synchronous component of online teaching and learning.

Acknowledgement

This study was supported by the National Science Council, Taiwan, under grant NSC95-2520-S-110-001-MY2.

References

Chen, N.S., Ko, H. C., Kinshuk, & Lin, T. (2005). A model for synchronous learning using the Internet. Innovations in Education and Teaching International, 42 (2), 181-194.

Chial, M. R. (2004). A brief guide to instructional development, retrieved October 15, 2007, from http://www.comdis.wisc.edu/staff/mrchial/InstDevSite/index.htm.

Chickering, A. W., & Ehrmann, S. C. (1996). Implementing the seven principles: Technology as lever. AAHE Bulletin, 49 (2), 3-6.

Corpus, J. H., & Eisbach, A. O. (2005). A live demonstration to enhance interest and understanding in child development. Journal of Instructional Psychology, 32 (1), 35-43.

Dick, W., Carey, L., & Carey, J. O. (2001). The systematic design of instruction, NY: Addison-Wesley.

Hannafin, M. J. (1992). Emerging technologies, ISD and learning environments: Critical perspectives. Educational Technology Research & Development, 40 (1), 49-63.

Hastie, M., & Chen, N.S. (2006). Working brain-to-brain: ‘real learning’ - teacher-directed online live lessons using a synchronous cyber classroom. Paper presented at the Australian Computers in Education Conference 2006 (ACEC 2006), October 2-4, 2006, Cairns, Australia.

Hastie, M., & Palmer, A. (2003). ‘Real time, real young, real smart’ - the use of the Internet for real time teaching with 5 to 8 year olds. Paper presented at the Open Distance Learning Association of Australia Conference (ODLAA 2003), October 1-4, 2003, Canberra, Australia.

Keegan, D. (1990). The foundations of distance education, London: Routledge.

Marjanovic, O. (1999). Learning and teaching in a synchronous collaborative environment. Journal of Computer Assisted Learning, 15 (2), 129-138.

Nichols, M., & Anderson, B. (2005). Strategic e-learning implementation. Discussion paper of the International Forum of Educational Technology & Society, retrieved October 15, 2007, from http://ifets.ieee.org/discussions/discuss_july2005.html.

Phillips, L. (1994). The continuing education guide – the CEU and other professional development criteria, Dubuque, IA: Kendall/Hunt.

Rodriguez, J. L., Plax, T. G., & Kearney, P. (1996). Clarifying the relationship between teacher nonverbal immediacy and student cognitive learning: Affective learning as the central causal mediator. Communication Education, 45, 293-305.

Topping, K. J., Peter, C., Stephen, P., & Whale, M. (2004). Cross-age peer tutoring of science in the primary school: Influence on scientific language and thinking. Educational Psychology, 24 (1), 57-76.

Vogel, J. J., Vogel, D. S., Cannon-Bowers, J., Bowers, C. A., Muse, K., & Wright, M. (2006). Computer gaming and interactive simulations for learning: A meta-analysis. Journal of Educational Computing Research, 34 (3), 229-243.

Wang, Y., & Chen, N.S. (2007). Online synchronous language learning: SLMS over the Internet. Innovate, 3 (3), retrieved October 15, 2007, from http://innovateonline.info/index.php?view=article&id=337.

Witt, P. L., & Wheeless, L. R. (2001). An experimental study of teachers’ verbal and nonverbal immediacy and students’ affective and cognitive learning. Communication Education, 50 (4), 327-342.

Worley, R. B. (2000). The medium is not the message. Business Communication Quarterly, 63, 93-10.



Malinski, R. (<strong>2007</strong>). Book review: Cases on global e-learning practices: successes and pitfalls (Eds. R.C. Sharma and S. Mishra).<br />

<strong>Educational</strong> <strong>Technology</strong> & Society, <strong>10</strong> (4), 295-297.<br />

Reviewer:<br />

Richard Malinski<br />

Ryerson University, Canada<br />

richard@ryerson.ca<br />

Cases on global e-learning practices: successes and pitfalls<br />

(Book Review)<br />

Textbook Details:<br />

Cases on global e-learning practices: successes and pitfalls<br />

<strong>2007</strong>, R.C. Sharma and S. Mishra (Eds.)<br />

Information Science Publishing, Hershey, PA 17033, USA<br />

ISBN - 1-59904-340-8, 356 pages<br />

The table of contents is available on the website<br />

http://www.igi-pub.com/books/additional.asp?id=6164&title=Table+Of+Contents&col=contents<br />

The preface is available online at<br />

http://www.igi-pub.com/books/additional.asp?id=6164&title=Preface&col=preface<br />

The first chapter can be downloaded from<br />

http://www.igi-pub.com/books/additional.asp?id=6164&title=Book+Excerpt&col=book_excerpt<br />

Introduction

The editors state that the object of their book is ‘to provide learning opportunities through a set of case studies on implementation of e-learning’ (p. vii). They accomplish this by providing 23 case studies divided into three sections: completely online learning systems (10 cases), blended online learning systems (8), and resources-based online learning systems (5). Around these cases the editors provide a brief introduction to e-learning and an even briefer summary of findings as a conclusion.

The introductory essay by the editors points out the lack of sound pedagogic practices in e-learning courses today and therefore the need for clarity and appropriate strategies. The editors miss a step towards clarity by not detailing the significance of having the three sections. They do, however, include a list of the many definitions of e-learning and its synonyms, and then define e-learning as ‘teaching and learning in a networked environment with or without blending of face-to-face contact and other digital media’ (p. 4). Once again the editors miss an opportunity for clarity by not going into detail and example to help specify what they really mean and how this fits with the categories of case studies.

They do continue by listing some of the many benefits of e-learning and the numerous factors one should consider in the design and development of e-learning. Most significant is the restatement of the ERIC (Experience-Reflect-Interact-Construct) framework outlined in an earlier work by Mishra (Mishra, 2002), which integrates elements of behaviorism (content), cognitivism (learner support), and constructivism (learning activities). It may not be a novel framework, but it is useful in organizing three significant elements of e-learning. Most of the case study authors mention these key elements but tend to focus on the importance of constructivism and the provision of knowledge-building learning environments for students.

The editors attempted to have the case study authors cover specific topics so that there would be some standardization of content and ease of comparison or assessment of case study information. Their case study framework included topics such as academic and administrative issues; program evaluation; networking and collaboration; policy implications; and sustainability and conclusion, along with lessons learned and best practices. The editors allowed some leeway in both the coverage of these topics and in writing styles, so readers need to be patient while reading some of the case studies.



Cases

The first set of ten case studies covers examples of completely online learning systems. Within these cases there tends to be a restatement of the benefits of e-learning, along with acceptance of the authentic learning and knowledge construction possibilities in e-learning. Most, but not all, cover the case study elements outlined by the editors; nevertheless, there is much here for those interested in or involved with e-learning. Several issues are of particular note in this first set of cases:

Assessment -
• Assessment tools, not only for the e-learning courses but also for the instructors, are very important.
• Positive student assessments of the courses tend to be directly proportional to the participation of the instructor.

Networking -
• For those interested in program development networks and their issues, the German WINFOline discussion can serve as a sound model and an outline of some of the hurdles that crop up when working with other institutions.

Professional development -
• For those just moving to e-learning, the New Mexico State University professional development program is a realistic and useful case. The lessons learned here are exemplary, e.g. the learning curve is steep, capitalize on established models, and be prepared for ‘Plan B.’ This is complemented by the later case on educational reform, which outlines some very useful lessons and practices, such as preparing teachers with problem-solving and technology-awareness sessions, as well as developing communication structures to support feedback and discussion.
• The New Jersey-Namibia teacher training collaboration case brings out cross-cultural and time-zone issues. Specifically, there is a need for flexibility when networks are unstable or less advantaged students must rely on Internet cafés for access.

Student support -
• The pilot program to introduce student e-portfolios illustrates the use of this learning tool in promoting reflection and self-understanding in a concrete, practical way.
• The Bridging Online (BOL) Program in the Co-op unit at Simon Fraser University in British Columbia provides excellent examples of products that help co-op students become more confident and self-directed in understanding the pertinence and value of their skills for employment.

The second set of eight cases covers those learning systems that the editors consider blended online learning systems. The editors do not make it clear how these cases differ from the first set; as a result, the distinction is more a curiosity than a help. This section brings out the difficulty of trying to pigeonhole these e-learning case studies into distinct groups. Readers should not expect to see clearly discontinuous categories in this book, because some of the cases could be placed in more than one section. Such cases are really examples along a continuum from less to more use of online resources and activities. Important here are the stories that the authors tell, the lessons that they learned, and the best practices that they suggest. Significant issues brought up in these eight cases are:

Planning -
• Timelines always seem to be too short, so do leave sufficient time to complete tasks.
• Funding also requires much thought: e-learning is expensive, so it is important to realize the limits of the e-learning program and make everyone aware of them.
• Once again, be ready for surprises and unexpected results; remain flexible.
• Understanding institutional policies and being ready to explain the e-learning program is essential.

Student engagement -
• Provide students with learning environments that bring together both theoretical and real-world scenarios.
• Seek out assessments of e-learning directly from the students in order to improve the e-learning offerings.
• Students are themselves a resource for learning; use them to assist their fellow students.

Instructor support -
• There are many fundamental challenges facing instructors: changing roles with online activity, especially the need for online support and participation; increased technical requirements; combining face-to-face with online contacts; and perhaps increased student load or development requirements.



• Assistance in designing and working with authentic learning environments or ‘deep learning’ can provide instructors with insights into the potential of technology and into their own pedagogy.

The third set of five cases focuses on resource-based online learning systems. The editors do not define exactly what they mean by ‘resource-based online learning systems’, so do not be misled by the section title. While these five cases note many of the same lessons learned and best practices as the other cases, several aspects are noteworthy:

Costing -
• The multimedia instructional product for medical school students mentions the need for experts in storyboarding and scripting, as well as for technically skilled people to transform the content into online materials. The planning and development process outlined would be extremely useful. This case is the only one to mention costs: $75,000 per hour of multimedia course.

Planning -
• The Hard Fun case illustrates a hardcore constructivist approach (with much jargon) to developing a learning resource. The components that go into this resource and the evaluation rubric outlined are excellent examples and will be of use to those wanting to explore constructivism.
• The ESPORT case brings out the importance of systematic and consultative evaluation of pilot projects and the significance of understanding the difficulties of adopting innovations within organizations.
• The EBS E-learning chapter reinforces the importance of orchestrating many factors in order for e-learning to be effective.
• The last chapter discusses a multifaceted ideology describing an e-learning ecosystem that encompasses such elements as the conceptualization of courseware, standardizing interoperable content, and personalizing learning experiences. The two projects, one on student support and the other on faculty development, illustrate how useful such a framework is for integrating products into a unified program.

The 23 cases give the reader different stories of how researchers around the globe are facing the challenges of e-learning. While the stories differ, many of the challenges faced are similar: planning ahead is critical, instructor development crucial, and student support vital.

Conclusion

This book is a valuable resource for practitioners but at the same time a difficult read! Editing a multi-chapter publication is a daunting venture, and the editors should be commended for bringing these cases together. The editors did try to bring some standardization to the content formats, but the variety of writing styles requires close reading to recognize what some of the authors really mean. Here is where the editors could have done more work in clarifying meaning, catching typographic errors, questioning vague terminology or unsupported conclusions, and improving illegible graphics.

Nevertheless, the case authors’ contributions are valuable for the lessons learned and the best practices they suggest. Readers interested in what others around the world are doing should find the cases useful for confirmation as well as for new insights. In addition, the experiences from around the world show that, while some are ahead of others, instructors and students everywhere are facing the same challenges in e-learning.

Reference

Mishra, S. (2002). A design framework for online learning environments. British Journal of Educational Technology, 33 (4), 493-496.



Kılıçkaya, F. (2007). Website review: WordChamp: Learn Language Faster. Educational Technology & Society, 10 (4), 298-299.

Reviewer:
Ferit Kılıçkaya
Middle East Technical University, Faculty of Education
Department of Foreign Language Education
06531 Ankara, Turkey
kilickay@metu.edu.tr
http://www.metu.edu.tr/~kilickay
http://www.technologycallsyou.com

Site URL:
http://www.wordchamp.com

Site title:
WordChamp: Learn Language Faster

WordChamp: Learn Language Faster
(Website Review)

Objective of the site:
The basic objective is to help teachers, students, and organizations learn new vocabulary in the target language they are studying.

Intended audience:
The site seems very useful to teachers and students of any language, as well as to organizations aiming to help their employees and clients.

Domain related aspects:
It is an educational site focused on teaching vocabulary, and it works with all languages. The site provides the audience with different types of vocabulary drills, including translation, listening comprehension, dictation, and language-specific drills.

The content of the site is original, and membership, including access to all features and content, is free for everyone.

Structure of the Site:
“WordChamp: Learn Language Faster” is basically divided into four sections:
Web reader
Learn vocabulary
Course management
Browse languages

Under the section headed “Web reader”, there is a page which helps the audience read authentic texts. It takes any webpage or text and shows popup definitions for all the words it recognizes, eliminating the frequent, annoying, time-consuming trips to the dictionary. Moreover, if a word has audio, the audience has the opportunity to hear a native speaker pronouncing it. “Learn vocabulary” hosts vocabulary drills including translation, listening comprehension, dictation, and language-specific drills; samples of the different drills can be found in this section. The audience can also create their own vocabulary lists with audio, which can be downloaded as mp3 files and as flashcards. The database currently holds 2,494,798 flashcards in 112 languages. “Course management” provides teachers with tools to help their students learn vocabulary outside of class. With the tools provided in this section, teachers can create custom lists specific to their classes, set up classes and homework assignments, and track their students’ performance.



In the “Browse Languages” section, the audience can go over the lists that have been prepared by others or browse the available languages.

Usefulness and richness of each topic:
Learning new vocabulary is one of the most time-consuming tasks in language study, and one that most learners have difficulty with. Vocabulary is inescapable, since anything related to any language starts with learning new words. This site provides learners of any language with vocabulary items (audio, definitions, drills, and flashcards) and makes it easy for students to practice the vocabulary they need. It is rich as regards the materials provided.

Connectivity:
I did not notice any problem accessing the site through ADSL or a standard modem connection, although downloading vocabulary lists with audio is a problem on a slow connection. No special software is needed; however, in order to listen to the audio files on this site and the vocabulary lists with audio, a flash player and an mp3 player are required, both of which are freely available on the net.

Interface related aspects:
The layout of the website
The layout is good and the links to the pages are clearly identifiable. There are no distracting animations.

Site structure
The audience can easily find what they are looking for via clear titles and links. There are separate sections for different aims (for students, teachers, and organizations). The fonts are readable and the background color is not distracting.

Navigation
The audience can easily navigate the website.

Search facilities
The search facility helps the audience search vocabulary lists, users, and words.

Overall issues:
The site is supported by a commercial firm, and all the contact details of the site owners are provided. It has no distracting banners, animations, or advertisements.

The site is regularly updated, and news is provided through RSS. The external links to other language resources were valid at the time of writing this review.

Other comments:
“WordChamp: Learn Language Faster” is rich in the content, materials, vocabulary lists, and facilities it provides to learners and teachers of any language. The access to over 127,000 recordings of native speakers, the database currently holding 2,494,798 flashcards in 112 languages, and the “Web Reader” are really fascinating. Downloading lists and flashcards to mp3 players and mobile phones, pre-made conjugation charts, and audio files make a huge difference while studying a foreign language. The tools provided by this site are invaluable. Better still, the site is currently open to access without any payment. However, audio is currently available only in Arabic, Bulgarian, Chinese (Mandarin), English, Farsi, French, German, Hungarian, Italian, Japanese, Norwegian, Portuguese, Spanish, Swahili, Turkish, and Tagalog.
