
Designing Video for Massive Open Online-Education: Conceptual Challenges from a Learner-Centered Perspective

Carmen Zahn, Karsten Krauskopf, Jonas Kiener, Friedrich W. Hesse

derable panels (see Figure 1), each of which contains the extracted clip and a text field for annotation, comment, or other interpretation. Specific parts of the source video can be extracted, which enables a user to direct the attention of other users to what he or she is referring to. This process has been termed 'guided noticing' (Pea, 2006). Each panel with its comments constitutes a permanent external representation of specific information within the dive, to which users can resort whenever they decide to.

Figure 1. Screenshot of the online learning environment WebDIVER™ (Pea et al., 2004).

The test materials consisted of a factual knowledge test and a picture recognition test. The pre-test and post-test of participants' factual knowledge of the historical context were created from information taken from secondary school history textbooks. These tests were given in a multiple-choice format, where either one (pre-test) or multiple options (post-test) per item were correct. The picture test consisted of 28 pictures, half of which were scenes taken from the original newsreel and half of which were distractors from a different newsreel (same genre and period).

Measures. To assess learning outcomes in terms of general content knowledge acquisition, we administered the factual knowledge tests (pre- and post-test) and the picture recognition test after the collaboration phase. Total test scores were computed, resulting in a theoretical maximum of 12 points in the pre-test and 45 in the post-test. To assess task performance, we analyzed the participants' contributions from the saved panels (WebDIVER™ protocols, see below), including their selections from the video, annotations, and comments. Specifically, our analyses were based on the overall number of created panels, the number of panels in which details were selected using WebDIVER's selection frame, and the number of comments and their length in words. Additionally, we analyzed the quality of the dyads' comments by coding (a) aspects of content covered in relation to the learning goal and (b) aspects of collaboration quality. For coding the comments, we developed two coding schemes. Coding scheme I (see below) was developed to assess the quality of the panel comments; coding scheme II (see below) was developed to assess the overall quality of interactions within dyads. For the latter, screen videos were viewed in addition to the comments in order to determine which comment was written by which collaboration partner, to count panels created jointly by both participants of a dyad, and to categorize different kinds of social interaction in the comments. All comments were coded by two observers.
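As a reading aid, the surface measures just listed (number of panels, panels with a detail selection, number of comments, and comment length in words) could be tallied as in the following sketch. The record structure is hypothetical; this excerpt does not specify the actual format of the WebDIVER protocols.

```python
# Hypothetical tally of the surface measures described above.
# The panel/record structure below is illustrative only; WebDIVER's
# real protocol format is not given in this excerpt.
panels = [
    {"has_selection": True,  "comments": ["Soldiers march past.", "Note the camera angle."]},
    {"has_selection": False, "comments": ["Propaganda voice-over here."]},
]

n_panels = len(panels)                                   # overall number of created panels
n_panels_with_selection = sum(p["has_selection"] for p in panels)
all_comments = [c for p in panels for c in p["comments"]]
n_comments = len(all_comments)
total_comment_length = sum(len(c.split()) for c in all_comments)  # length in words

print(n_panels, n_panels_with_selection, n_comments, total_comment_length)  # → 2 1 3 10
```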

Coding schemes. Coding scheme I, for the quality of the comments, consisted of the following categories: utterances addressing the historical content of the newsreel, utterances addressing the filmic style of the newsreel, and utterances integrating aspects of historical content and filmic style. Units for the utterances were defined as sentences or sentence fragments. On the basis of these categories, the WebDIVER™ protocols were coded by two independent, trained raters. Interrater reliability ranged between Cronbach's α = .80 and .98. Coding scheme II, rating the comments exchanged within dyads, was developed in two steps. First, two observers analyzed and discussed the WebDIVER™ protocols and considered relevant literature (e.g., Stahl, Koschmann, & Suthers, 2006) in order to derive indicators for establishing different categories of collaboration, including coordinating and communicating activities. Second, a collaboration index was calculated. The categories applied in the analysis were: 1) double references, as an indicator of collaboration in general; 2) proposals for work structuring, as an indicator of coordination activities; and 3) referencing the partner's utterances or directly addressing the partner, as an indicator of communication. The coding results were then integrated by weighting the number of utterances in category 1) by a factor of three, because they were considered the strongest indicator of collaboration; this was then added to the number of utterances in categories 2) and 3) to form the collaboration index. This collaboration index and the number of panels created in partnership were used for further analyses. Again, two independent raters performed the analysis. Interrater reliability ranged from Cronbach's α = .92 to 1.0.
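The weighting of the collaboration index can be sketched as follows. This is a minimal illustration of the arithmetic described above; the function and parameter names are ours, not part of the original study materials.

```python
def collaboration_index(double_references: int,
                        structuring_proposals: int,
                        partner_references: int) -> int:
    """Collaboration index as described in the text:
    category 1 (double references) is weighted by a factor of three
    as the strongest indicator of collaboration; the counts for
    category 2 (work-structuring proposals) and category 3
    (referencing/addressing the partner) are added unweighted.
    """
    return 3 * double_references + structuring_proposals + partner_references

# Example: a dyad with 2 double references, 3 structuring proposals,
# and 4 partner references scores 3*2 + 3 + 4 = 13.
print(collaboration_index(2, 3, 4))  # → 13
```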

Data analyses. For the data analyses, we grouped the dependent variables into three levels (see Table 1). First, a cognitive level with regard to knowledge acquisition, ensuring the effectiveness of online learning in the two conditions (factual knowledge and picture recognition performance). Second, a surface level of effects on collaboration and learning, where we compared the two conditions with respect to the variables describing overall collaborative activities (number of comments, length of comments, number of panels created in partnership, and collaboration index). Third, pointing at deeper-level effects on collaboration and learning, we looked at quantitative and qualitative indicators of more knowledge-intensive collaborative activities (panels referring to details, utterances addressing either the historical content or the filmic style of the newsreel, and utterances integrating aspects of historical content and filmic style).

Research Track |163
