January 2012 Volume 15 Number 1 - Educational Technology ...


In the nickname group, the student's created identity was shown at the top of the field containing his or her generated questions and comments (see Fig. 2). Students were free to change their nicknames to reflect their current state of mind each time they constructed or assessed a new question. In the anonymity group, no information on the question author or assessor was shown; only the word "anonymous" appeared at the top of the question and comment field (see Fig. 3).

Experimental procedures

Three intact classes were randomly assigned to different treatment conditions. Because true/false, fill-in-the-blank, and multiple-choice questions are among the question types most frequently encountered in middle schools, these three types were adopted for the study.

To ensure that participants possessed the fundamental skills needed to generate and assess these types of questions, a training session with hands-on activities was held at the beginning of the study, in addition to a lesson on the operational procedures of QuARKS. A pamphlet containing (a) learning objectives, (b) key QuARKS features and functions, (c) question-generation criteria and sample questions, and (d) peer-assessment criteria and sample feedback/comments was distributed for individual reference.

During each weekly online learning activity, students were first directed by the instructor to individually compose at least one question of each type based on the instructional content covered. Each student then individually assessed at least one question of each type from the pool of peer-generated questions.

To establish a baseline for students' perceptions of the different aspects of the activity, a real-identity mode was used in all conditions during the first two sessions; afterwards, students in each condition used their respective identity revelation modes. Students' performance on the first biology exam was collected, and a questionnaire on the examined variables was distributed for individual completion before the different treatment conditions were implemented in the different groups, starting in the third week. After six weeks of exposure to the activity, students completed the same questionnaire, and their performance on the second biology exam was then also collected.

Measurements

The effects of the different identity revelation modes on students' academic performance were assessed using the first and second biology exams of the participating school. In Taiwan, high schools generally arrange three exams spaced evenly across a five-month semester (i.e., approximately six weeks between consecutive exams), administered at the same time for all students in all major subjects. Item analyses confirmed that the test items correctly discriminated between high and low scorers (average discrimination of 0.61 and 0.44 for the first and second exams, respectively) and that the items as a whole were of moderate difficulty (average difficulty of 0.66 and 0.63 for the first and second exams, respectively).
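The paper reports average discrimination and difficulty indices but does not state how they were computed. A common classical approach, sketched below under the assumption of the upper-lower group method with dichotomously scored items (1 = correct, 0 = incorrect), computes difficulty as the proportion of all students answering an item correctly and discrimination as the difference between the upper- and lower-group proportions. The data and group fraction here are illustrative, not the study's.

```python
def item_analysis(responses, group_fraction=0.27):
    """Return (difficulty, discrimination) lists, one entry per item.

    `responses` is a list of student response vectors, each scored
    1 (correct) or 0 (incorrect) per item.
    """
    n_students = len(responses)
    n_items = len(responses[0])
    # Rank students by total score and take the top and bottom fractions
    # (27% is a conventional choice for the upper-lower group method).
    ranked = sorted(responses, key=sum, reverse=True)
    k = max(1, round(n_students * group_fraction))
    upper, lower = ranked[:k], ranked[-k:]

    difficulty, discrimination = [], []
    for i in range(n_items):
        # Difficulty p: proportion of all students answering item i
        # correctly (a higher p means an easier item).
        p = sum(s[i] for s in responses) / n_students
        # Discrimination D: proportion correct in the upper group minus
        # proportion correct in the lower group.
        d = (sum(s[i] for s in upper) - sum(s[i] for s in lower)) / k
        difficulty.append(p)
        discrimination.append(d)
    return difficulty, discrimination

# Toy data: 6 students x 3 items (illustrative only).
responses = [
    [1, 1, 1],
    [1, 1, 0],
    [1, 0, 1],
    [1, 0, 0],
    [0, 1, 0],
    [0, 0, 0],
]
p, d = item_analysis(responses)
```

An item with difficulty near 0.65 and discrimination above 0.4, as reported for these exams, would conventionally be considered moderately difficult and highly discriminating.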

The effects of the different treatment conditions on students' perceptions of various aspects of the activity were assessed using the same pre- and post-questionnaire, which consists of four 5-point Likert scales (5 = strongly agree, 4 = agree, 3 = no opinion, 2 = disagree, 1 = strongly disagree). Existing instruments in related areas (i.e., peer assessment, perceptions toward interacting parties, perceptions toward the communication process, and the perceived learning environment) were consulted, and items were adapted to fit the targeted experimental context. Specifically, for the "Attitudes toward Peer-Assessment Scale," the "Peer Assessment Questionnaire" (Brindley & Scoffield, 1998), the "Peer Assessment Questionnaire" (Wen & Tsai, 2006), and the "Fairness of Grading Scale" (Ghaith, 2003) were consulted. When constructing the "Perception toward Assessors Scale," the "Peer Assessment Rating Form" (Lurie, Nofziger, Meldrum, Mooney & Epstein, 2006), "The Questionnaire on Teacher Interaction" (Wubbels & Brekelmans, 2005), "Student Perceptions of their Own Dyad" (Yu, 2001), "Student Perceptions of Other Dyads" (Yu, 2001), and the "Cooperative Learning Scale" (Ghaith, 2003) were used as references. For the construction of the "Perception toward the Interaction Process with Assessors Scale," "Student Evaluations of their Experience of Assessment" (Stanier, 1997), "Student Perceptions of the Communication Process within the Dyad" (Yu, 2001), "Student Perceptions of the Communication Process among the Dyads" (Yu, 2001), the "Cooperative Learning Scale" (Ghaith, 2003), and the "Peer Assessment Rating Form" (Lurie et al., 2006) were consulted. Finally, "Learning
