
TRUE EXPERIMENTAL DESIGNS 277

2 Subtract the pretest score from the post-test score for the control group to yield score 2.
3 Subtract score 2 from score 1.

Using Campbell and Stanley's terminology, the effect of the experimental intervention is:

(O₂ − O₁) − (O₄ − O₃)

If the result is negative then the causal effect was negative.
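The steps above can be sketched as a short calculation; all scores below are hypothetical, purely for illustration:

```python
# Effect calculation for a pretest-post-test control group design.
# O1/O2: experimental group pretest/post-test; O3/O4: control group.
pretest_experimental = 52   # O1
posttest_experimental = 68  # O2
pretest_control = 51        # O3
posttest_control = 55       # O4

score_1 = posttest_experimental - pretest_experimental  # gain, experimental group
score_2 = posttest_control - pretest_control            # gain, control group
effect = score_1 - score_2                               # (O2 - O1) - (O4 - O3)

print(effect)  # 12: a positive value suggests a positive causal effect
```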

One problem that has been identified with this particular experimental design is the interaction effect of testing. Good (1963) explains that whereas the various threats to the validity of the experiments listed in Chapter 6 can be thought of as main effects, manifesting themselves in mean differences independently of the presence of other variables, interaction effects, as their name implies, are joint effects and may occur even when no main effects are present. For example, an interaction effect may occur as a result of the pretest measure sensitizing the subjects to the experimental variable.¹ Interaction effects can be controlled for by adding to the pretest-post-test control group design two more groups that do not experience the pretest measures. The result is a four-group design, as suggested by Solomon (1949) below. Later in the chapter, we describe an educational study which built into a pretest-post-test group design a further control group to take account of the possibility of pretest sensitization.
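The logic of the two extra, unpretested groups can be sketched as a comparison of treatment effects with and without a pretest: if the two estimates diverge, pretest sensitization is implicated. All scores below are hypothetical:

```python
# Solomon four-group sketch: comparing the apparent treatment effect
# in the pretested groups with that in the unpretested groups.
# Pretested groups:
o2 = 68   # experimental group post-test (received pretest)
o4 = 55   # control group post-test (received pretest)
# Unpretested groups:
o5 = 62   # experimental group post-test (no pretest)
o6 = 54   # control group post-test (no pretest)

effect_with_pretest = o2 - o4      # 13
effect_without_pretest = o5 - o6   # 8
sensitization = effect_with_pretest - effect_without_pretest

print(sensitization)  # 5: pretesting appears to inflate the apparent effect
```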

Randomization, Smith (1991: 215) explains, produces equivalence over a whole range of variables, whereas matching produces equivalence over only a few named variables. The use of randomized controlled trials (RCTs), a method used in medicine, is a putative way of establishing causality and generalizability (though, in medicine, the sample sizes for some RCTs are necessarily so small – there being a limited number of sufferers from a particular complaint – that randomization is seriously compromised).
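Smith's contrast can be illustrated with a minimal random-assignment sketch: shuffling the pool balances the two groups, in expectation, on every characteristic, named or not. The participant labels and group sizes below are purely illustrative:

```python
import random

# Hypothetical participant pool of 20.
participants = [f"P{i:02d}" for i in range(1, 21)]

random.seed(42)              # fixed seed so the split is reproducible
random.shuffle(participants) # randomization: every ordering equally likely

experimental = participants[:10]  # receives the intervention
control = participants[10:]       # does not

assert len(experimental) == len(control) == 10
assert set(experimental).isdisjoint(control)
```

Matching, by contrast, would pair participants on a handful of named variables (say, age and prior attainment) and leave all unmeasured characteristics uncontrolled.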

A powerful advocacy of RCTs for planning and evaluation is provided by Boruch (1997). Indeed, he argues that the problem of poor experimental controls has led to highly questionable claims being made about the success of programmes (Boruch 1997: 69). Examples of the use of RCTs can be seen in Maynard and Chalmers (1997).

The randomized controlled trial is the ‘gold standard’ of many educational researchers, as it purports to establish controllability, causality and generalizability (Coe et al. 2000; Curriculum, Evaluation and Management Centre 2000). How far this is true is contested (Morrison 2001b).

For example, complexity theory replaces simple causality with an emphasis on networks, linkages, holism, feedback, relationships and interactivity in context (Cohen and Stewart 1995), emergence, dynamical systems, self-organization and an open system (rather than the closed world of the experimental laboratory). Even if we could conduct an experiment, its applicability to ongoing, emerging, interactive, relational, changing, open situations, in practice, may be limited (Morrison 2001b). It is misconceived to hold variables constant in a dynamical, evolving, fluid, open situation.

Further, the laboratory is a contrived, unreal and artificial world. Schools and classrooms are not the antiseptic, reductionist, analysed-out or analysable-out world of the laboratory. Indeed the successionist conceptualization of causality (Harré 1972), wherein researchers make inferences about causality on the basis of observation, must admit its limitations. One cannot infer causes from effects or multiple causes from multiple effects. Generalizability from the laboratory to the classroom is dangerous, yet with field experiments, with their loss of control of variables, generalizability might be equally dangerous.

Classical experimental methods, abiding by the need for replicability and predictability, may not be particularly fruitful since, in complex phenomena, results are never clearly replicable or predictable: we never step into the same river twice. In linear thinking small causes bring small effects and large causes bring large effects, but in complexity theory small causes can bring huge effects and huge causes may have little or no effect. Further, to atomize phenomena into measurable variables

Chapter 13
