
RESEARCH AND EVALUATION 45

to extend the TVEI project was made without any evaluation reports having been received from evaluation teams in Leeds or the National Foundation for Educational Research. (The Technical and Vocational Education Initiative (TVEI) was a 1980s UK government-funded project frequently targeted at lower-attaining students.) This echoes James (1993), where she writes:

The classic definition of the role of evaluation as providing information for decision-makers ... is a fiction if this is taken to mean that policy-makers who commission evaluations are expected to make rational decisions based on the best (valid and reliable) information available to them.
(James 1993: 119)

Where evaluations are commissioned and have heavily political implications, Stronach and Morris (1994) argue that the response to this is that evaluations become more 'conformative' (see http://www.routledge.com/textbooks/9780415368780 – Chapter 1, file 1.13.ppt), possessing several characteristics:

- Being short-term, taking project goals as given and supporting their realization.
- Ignoring the evaluation of longer-term learning outcomes, or anticipated economic or social consequences of the programme.
- Giving undue weight to the perceptions of programme participants who are responsible for the successful development and implementation of the programme; as a result, tending to 'over-report' change.
- Neglecting and 'under-reporting' the views of classroom practitioners and programme critics.
- Adopting an atheoretical approach, and generally regarding the aggregation of opinion as the determination of overall significance.
- Involving a tight contractual relationship with the programme sponsors that either disbars public reporting, or encourages self-censorship in order to protect future funding prospects.
- Undertaking various forms of implicit advocacy for the programme in its reporting style.
- Creating and reinforcing a professional schizophrenia in the research and evaluation community, whereby individuals come to hold divergent public and private opinions, or offer criticisms in general rather than in particular, or quietly develop 'academic' critiques which are at variance with their contractual evaluation activities, alternating between 'critical' and 'conformative' selves.

The argument so far has been confined to large-scale projects that are influenced by, and may or may not influence, political decision-making. However, the argument need not remain there. Morrison (1993), for example, indicates how evaluations might influence the 'micro-politics of the school'. Hoyle (1986) asks whether evaluation data are used to bring resources into, or take resources out of, a department or faculty. The issue does not relate only to evaluations, for school-based research, far from the emancipatory claims made for it by action researchers (e.g. Carr and Kemmis 1986; Grundy 1987), is often concerned more with finding the most successful ways of organizing, planning, teaching and assessing a given agenda than with setting agendas and following one's own research agendas. This is problem-solving rather than problem-setting. That evaluation and research are being drawn together by politics at both macro-level and micro-level is evidence of a growing interventionism by politics into education, thus reinforcing the hegemony of the government in power. Several points have been made here:

- There is considerable overlap between evaluation and research.
- There are some conceptual differences between evaluation and research, though, in practice, there is considerable blurring of the edges of the differences between the two.
- The funding and control of research and research agendas reflect the persuasions of political decision-makers.
- Evaluative research has increased in response to categorical funding of research projects.
- The attention being given to, and utilization of, evaluation varies according to the consonance between the findings and their political attractiveness to political decision-makers.

