
Experimental Tools

This study employed the following tools: 1) evaluation of students’ prior knowledge (pretest); 2) assessment of students’ levels of cognitive development (HW1 and HW2); 3) evaluation of students’ learning achievement (post-test); and 4) a questionnaire survey and one-on-one semi-structured interviews regarding students’ perceptions and behavioral intentions toward using STR.

The pretest featured 20 multiple-choice questions and took place during the first class. The post-test had 10 true-or-false questions and 10 multiple-choice questions, and it took place during the last class. The content of both the pretest and the post-test related to the Information Security course. Both tests were scored on a 100-point scale (with 100 as the highest score), yet the tests differed in content. To assess homework and thereby determine students’ level of cognitive development, the study employed the Taxonomy for Information Security Education (van Niekerk & Thomson, 2010), adapted from Bloom (1956). The taxonomy (see Appendix 2) includes six levels, each increasing in complexity as the learner moves through them. The study used the concept as the coding unit and a six-point scale for homework assessment. A score of “1” represented the lowest level of cognitive development, and a score of “6” represented the highest. The final homework score was the score corresponding to the highest level of cognitive development found in the homework; for example, if the highest cognitive level identified in a homework was “4” (Analyze), the homework was scored “4.” The assessments were created by a teacher with more than 10 years of teaching experience in the Information Security domain, which supported the validity of the assessments.
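As a concrete illustration of this scoring rule, the following minimal Python sketch takes the taxonomy levels already coded for each concept in a piece of homework and returns the highest one; the function name and the handling of homework with no codable concepts are assumptions made for illustration, not details from the original study.

    def score_homework(concept_levels):
        """Return the homework score: the highest cognitive level among the coded concepts."""
        if not concept_levels:
            return 0  # assumption: homework with no codable concepts is left unscored (0)
        return max(concept_levels)

    # Example: concepts coded at levels 2, 3, and 4 ("Analyze") -> homework score of 4
    print(score_homework([2, 3, 4]))  # 4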

The questionnaire was designed based on the Technology Acceptance Model (Davis, 1986). Four dimensions were covered in the questionnaire: perceived ease of use (of STR) (PEU); perceived usefulness (of STR) for learning (PUL); perceived usefulness (of STR) during online one-way lectures (PUOWL); and behavioral intention (BI) to use STR for learning in the future. According to Davis (1986), PEU is the degree to which a student believes that using STR would be free of physical and mental effort. PUL is the degree to which a student believes that using STR for learning would enhance his or her learning performance. PUOWL is the degree to which a student believes that using STR during online one-way lectures would enhance his or her learning performance. BI is hypothesized to be a major determinant of whether or not a student actually uses STR. Responses to the questionnaire items were scored on a five-point Likert scale, anchored by the end-points “strongly disagree” (1) and “strongly agree” (5). Twenty-four valid answer sheets were obtained from the twenty-five students in the experimental group.
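The article does not specify how the dimension scores were aggregated across items; the sketch below assumes, for illustration only, that each dimension is summarized as the mean of its Likert items, and the item-to-dimension mapping and responses shown are hypothetical.

    from statistics import mean

    def dimension_scores(responses, item_map):
        """responses: {item_id: Likert value 1..5}; item_map: {dimension: [item_ids]}."""
        return {dim: mean(responses[i] for i in items) for dim, items in item_map.items()}

    # Hypothetical two-item-per-dimension mapping and one student's answers.
    item_map = {"PEU": ["q1", "q2"], "PUL": ["q3", "q4"],
                "PUOWL": ["q5", "q6"], "BI": ["q7", "q8"]}
    one_student = {"q1": 4, "q2": 5, "q3": 4, "q4": 4,
                   "q5": 3, "q6": 4, "q7": 5, "q8": 4}
    print(dimension_scores(one_student, item_map))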

One-on-one semi-structured interviews, with subsequent data analysis, followed the general recommendations of Creswell (2008). Five students were randomly selected for the interviews. The interviews contained open-ended questions in which students were asked about the following: 1) their experience using the STR application during the experiment; and 2) their opinions about the impact of STR-generated texts on learning. Each interview took approximately 30 minutes; all interviews were audio-recorded with the permission of the interviewee and then fully transcribed for analysis. The text segments that provided the most useful research information were highlighted and coded. Next, codes were sorted to form categories; codes with similar meanings were aggregated. The resulting categories formed a framework for reporting findings in relation to the research questions.
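As an illustration of this coding step, the sketch below groups coded text segments into broader categories; the segments, codes, and category mapping are hypothetical examples, not data from the study.

    from collections import defaultdict

    # Hypothetical coded segments: (transcribed text segment, assigned code).
    segments = [
        ("the STR text helped me follow the lecture", "helpful_text"),
        ("recognition errors were sometimes distracting", "recognition_errors"),
        ("I reread the STR text after class", "review_after_class"),
    ]

    # Hypothetical mapping that aggregates codes with similar meanings into categories.
    code_to_category = {
        "helpful_text": "perceived usefulness",
        "review_after_class": "perceived usefulness",
        "recognition_errors": "accuracy concerns",
    }

    categories = defaultdict(list)
    for text, code in segments:
        categories[code_to_category[code]].append(text)

    for category, quotes in categories.items():
        print(category, quotes)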

Statistical Analysis Methods

The study adopted the following methods of statistical analysis; a brief computational sketch follows the list:

1. Cohen’s kappa – to evaluate the inter-rater reliability of the assessments (Creswell, 2008; Punch, 2009), i.e., pretest, HW1, HW2, and post-test. The analysis results exceeded 0.72, indicating high reliability.

2. Cronbach’s α – to assess the internal consistency of the survey (Creswell, 2008). The values were PEU = 0.89, PUL = 0.94, PUOWL = 0.97, and BI = 0.84, indicating satisfactory reliability of the items.

3. Independent-samples t-test – to compare the difference in learning performance between the control and experimental groups (Creswell, 2008) on the pretest, homework, and post-test.

4. The standardized mean difference statistic (referred to as d) – Creswell (2008) suggested quantifying the practical strength of the difference between variables through effect size; this is important in a quantitative study, especially when a small sample is used to judge the significance of a statistical test. He suggested that an effect size of .20 is small, .50 is medium (or moderate), and .80 is large.
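To make these analyses concrete, the following is a minimal computational sketch in Python using standard NumPy, SciPy, and scikit-learn routines. All data values are hypothetical placeholders rather than the study’s scores; only the reliability thresholds and effect-size benchmarks in the comments follow the description above.

    import numpy as np
    from scipy.stats import ttest_ind
    from sklearn.metrics import cohen_kappa_score

    # 1. Cohen's kappa: agreement between two raters coding the same homework.
    rater_a = [4, 3, 5, 2, 4, 4]            # hypothetical taxonomy codes from rater A
    rater_b = [4, 3, 5, 3, 4, 4]            # hypothetical taxonomy codes from rater B
    kappa = cohen_kappa_score(rater_a, rater_b)

    # 2. Cronbach's alpha: internal consistency of one questionnaire dimension.
    def cronbach_alpha(items):
        """items: 2-D array, rows = respondents, columns = Likert items of one dimension."""
        items = np.asarray(items, dtype=float)
        k = items.shape[1]
        item_variances = items.var(axis=0, ddof=1).sum()
        total_variance = items.sum(axis=1).var(ddof=1)
        return k / (k - 1) * (1 - item_variances / total_variance)

    peu_items = [[4, 5, 4], [3, 3, 4], [5, 5, 5], [4, 4, 3]]   # hypothetical responses
    alpha = cronbach_alpha(peu_items)

    # 3. Independent-samples t-test: experimental vs. control scores (pooled variances).
    control = [62, 70, 75, 68, 72]           # hypothetical post-test scores
    experimental = [78, 85, 80, 74, 88]      # hypothetical post-test scores
    t_stat, p_value = ttest_ind(experimental, control)

    # 4. Effect size d: standardized mean difference over a pooled standard deviation
    #    (.20 small, .50 medium, .80 large, following the benchmarks cited above).
    def cohens_d(x, y):
        x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
        pooled_sd = np.sqrt(((len(x) - 1) * x.var(ddof=1) + (len(y) - 1) * y.var(ddof=1))
                            / (len(x) + len(y) - 2))
        return (x.mean() - y.mean()) / pooled_sd

    print(kappa, alpha, t_stat, p_value, cohens_d(experimental, control))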

