received human mediation based on classroom discussion, while the second group received a computer-mediated program. The comparison of pre- and post-tests demonstrated that mediation in both groups was effective in enhancing the level of proportional reasoning. The effect size of computer mediation was 1.4; the effect size of human mediation was 1.2. Though the students' proportional reasoning level was much lower than predicted by classical Piagetian theory, considerable reasoning potential lay within the students' Zone of Proximal Development, as revealed during a short (2-hour) mediation session. Computer mediation turned out to be more effective than human mediation. At the same time, different learning patterns were identified in the two forms of mediated learning. Students with identical pre-test performance demonstrated different learning potential, thus confirming the educational value of dynamic assessment as compared to the static assessment paradigm.

Measuring the Sense of Presence in Mediated Environments: Utility of IRT
Sean Early, University of Southern California, USA

Item Response Theory (IRT) holds great promise for the creation and validation of scales measuring latent constructs. This paper describes the creation and preliminary validation of a scale measuring the sense of presence in mediated environments. The field of presence research has struggled to develop a stable and valid measure of the construct in spite of intensive efforts. The results of the IRT model, based on a sample of 102 respondents, demonstrate good coverage over the expected range of values of the latent construct and good internal reliability estimates (Cronbach's alpha = .90; Person Separation Index = .89). Promising preliminary evidence of construct validity is also described.

Using the SOLO taxonomy to assess assignments of novice university teachers
Jelle Geyskens, University of Antwerp, Belgium
Ann Stes, University of Antwerp, Belgium
Peter Van Petegem, University of Antwerp, Belgium

The present study investigates whether the instrument we developed is capable of differentiating and classifying various assignments, and what these differences in scores actually indicate. The instrument is based on the Structure of Observed Learning Outcome (SOLO) taxonomy, which gives a measure of the quality of a learning outcome (i.e., an answer to a question) in terms of the progressive structural complexity of that outcome. Two research questions are formulated. First, is the instrument capable of categorizing the assignments? Second, does the categorization (i.e., differences in scores of the assignments) reflect quality? To answer these questions we carried out a study assessing the assignments of 22 training participants. The training was developed for novice university teachers at the University of Antwerp. Two researchers used the instrument to categorize the assignments of the 22 subjects independently. The study shows that applying our instrument to the assignments results in a continuum of scores from high through medium to low. That is, the instrument is capable of differentiating and categorizing the different assignments. A comparison is made between the scores given by the SOLO-based criteria list and the feedback on the assignment given to the participants (i.e., holistic judgement by two trainers). This comparison indicates that the two classifications show similarities, which suggests that the classification made by the SOLO-based criteria list reflects the quality of the assignments.
