
CONSTRUCTING A TEST 421

Select the contents of the test

Here the test is subject to item analysis. Gronlund and Linn (1990) suggest that an item analysis will need to consider:

- the suitability of the format of each item for the (learning) objective (appropriateness)
- the ability of each item to enable students to demonstrate their performance of the (learning) objective (relevance)
- the clarity of the task for each item
- the straightforwardness of the task
- the unambiguity of the outcome of each item, and agreement on what that outcome should be
- the cultural fairness of each item
- the independence of each item (i.e. where the influence of other items of the test is minimal and where successful completion of one item is not dependent on successful completion of another)
- the adequacy of coverage of each (learning) objective by the items of the test.

In moving to test construction the researcher will need to consider how each element to be tested will be operationalized:

- what indicators and kinds of evidence of achievement of the objective will be required
- what indicators of high, moderate and low achievement there will be
- what the students will be doing when they are working on each element of the test
- what the outcome of the test will be (e.g. a written response, a tick in a box of multiple choice items, an essay, a diagram, a computation).
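The operationalization decisions above can be recorded as a simple test-specification record before any items are written. The sketch below is only one way of doing this; the class and field names (`TestElement`, `evidence`, and so on) are hypothetical illustrations, not part of the authors' scheme.

```python
from dataclasses import dataclass

@dataclass
class TestElement:
    """One element of a test blueprint (illustrative; all names are hypothetical)."""
    objective: str          # the (learning) objective being tested
    evidence: str           # indicators/kinds of evidence of achievement required
    achievement_levels: dict  # indicators of high, moderate and low achievement
    student_activity: str   # what students will be doing on this element
    outcome_mode: str       # e.g. written response, multiple choice tick, essay, diagram

# Example: one element of a blueprint for a simple arithmetic test
element = TestElement(
    objective="add two two-digit numbers",
    evidence="correct written answers to ten addition items",
    achievement_levels={"high": "9-10 correct", "moderate": "6-8", "low": "0-5"},
    student_activity="mental computation, written answers",
    outcome_mode="short written response",
)
print(element.outcome_mode)
```

A record of this kind makes it straightforward to check, before piloting, that every objective has indicators, an activity and an outcome mode assigned to it.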

Indeed the Task Group on Assessment and Testing (1988) in the UK suggest that attention will have to be given to the presentation, operation and response modes of a test:

- how the task will be introduced (e.g. oral, written, pictorial, computer, practical demonstration)
- what the students will be doing when they are working on the test (e.g. mental computation, practical work, oral work, written)
- what the outcome will be – how they will show achievement and present the outcomes (e.g. choosing one item from a multiple choice question, writing a short response, open-ended writing, oral, practical outcome, computer output).

Operationalizing a test from objectives can proceed by stages:

1. Identify the objectives/outcomes/elements to be covered.
2. Break down the objectives/outcomes/elements into constituent components or elements.
3. Select the components that will feature in the test, such that, if possible, they will represent the larger field (i.e. domain referencing, if required).
4. Recast the components in terms of specific, practical, observable behaviours, activities and practices that fairly represent and cover that component.
5. Specify the kinds of data required to provide information on the achievement of the criteria.
6. Specify the success criteria (performance indicators) in practical terms, working out marks and grades to be awarded and how weightings will be addressed.
7. Write each item of the test.
8. Conduct a pilot to refine the language/readability and presentation of the items, to gauge item discriminability, item difficulty and distractors (discussed below), and to address validity and reliability.
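The pilot statistics mentioned in the final stage can be computed from a simple matrix of 0/1 item scores. The sketch below uses two conventional measures: the facility index (the proportion of students answering an item correctly) and an upper-minus-lower discrimination index. The function names and the choice of a 27 per cent upper/lower grouping are common psychometric conventions used here for illustration, not procedures prescribed by the authors.

```python
def item_difficulty(scores):
    """Facility index: proportion of students answering the item correctly (0/1 scores)."""
    return sum(scores) / len(scores)

def item_discrimination(item_scores, total_scores, fraction=0.27):
    """Upper-minus-lower discrimination index.

    Students are ranked by total test score; the index is the item's
    facility in the top group minus its facility in the bottom group,
    so values near +1 mean the item separates strong from weak students.
    """
    n = max(1, round(len(total_scores) * fraction))
    ranked = sorted(range(len(total_scores)), key=lambda i: total_scores[i])
    lower, upper = ranked[:n], ranked[-n:]
    p_upper = sum(item_scores[i] for i in upper) / n
    p_lower = sum(item_scores[i] for i in lower) / n
    return p_upper - p_lower

# Ten pilot students' 0/1 scores on one item, and their total test scores
item = [1, 1, 1, 0, 1, 0, 0, 1, 0, 0]
totals = [48, 45, 44, 30, 41, 22, 25, 39, 28, 20]
print(item_difficulty(item))             # 0.5: half the group answered correctly
print(item_discrimination(item, totals)) # 1.0: answered by the top group only
```

In a pilot, items with very high or very low facility, or with low (or negative) discrimination, are candidates for revision or removal before the final test is assembled.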

Item analysis, Gronlund and Linn (1990: 255) suggest, is designed to ensure that the items function as they are intended: for example, that criterion-referenced items fairly cover the fields and criteria and that norm-referenced items demonstrate item discriminability (discussed below); that the level of difficulty of the items is appropriate (see below: item difficulty); and that the test is reliable (free of distractors – unnecessary information and irrelevant cues, see below: distractors) (see Millman and Greene 1993). An item analysis will consider the accuracy levels available in the answer, the item difficulty, the

Chapter 19
