

this is exposed most clearly in data triangulation, as it is presumed that a multiple data source (concurrent validity) is superior to a single data source or instrument. The assumption that a single unit can always be measured more than once violates the interactionist principles of emergence, fluidity, uniqueness and specificity (Denzin 1997: 320). Further, Patton (1980) suggests that even having multiple data sources, particularly of qualitative data, does not ensure consistency or replication. Fielding and Fielding (1986) hold that methodological triangulation does not necessarily increase validity, reduce bias or bring objectivity to research.

With regard to investigator triangulation, Lincoln and Guba (1985: 307) contend that it is erroneous to assume that one investigator will corroborate another, nor is this defensible, particularly in qualitative, reflexive inquiry. They extend their concern to include theory and methodological triangulation, arguing that the search for theory and methodological triangulation is epistemologically incoherent and empirically empty (see also Patton 1980). No two theories, it is argued, will ever yield a sufficiently complete explanation of the phenomenon being researched. These criticisms are trenchant, but they have been answered equally trenchantly by Denzin (1997).

Ensuring validity<br />

It is very easy to slip into invalidity; it is both insidious and pernicious as it can enter at every stage of a piece of research. The attempt to build out invalidity is essential if the researcher is to be able to have confidence in the elements of the research plan, data acquisition, data processing, analysis, interpretation and its ensuing judgement (see http://www.routledge.com/textbooks/9780415368780 – Chapter 6, file 6.3.ppt).

At the design stage, threats to validity can be minimized by:

- choosing an appropriate time scale
- ensuring that there are adequate resources for the required research to be undertaken
- selecting an appropriate methodology for answering the research questions
- selecting appropriate instrumentation for gathering the type of data required
- using an appropriate sample (e.g. one which is representative, not too small or too large)
- demonstrating internal, external, content, concurrent and construct validity and ‘operationalizing’ the constructs fairly
- ensuring reliability in terms of stability (consistency, equivalence, split-half analysis of test material; a short computational sketch of the split-half check follows this list)
- selecting appropriate foci to answer the research questions
- devising and using appropriate instruments: for example, to catch accurate, representative, relevant and comprehensive data (King et al. 1987); ensuring that readability levels are appropriate; avoiding any ambiguity of instructions, terms and questions; using instruments that will catch the complexity of issues; avoiding leading questions; ensuring that the level of test is appropriate – e.g. neither too easy nor too difficult; avoiding test items with little discriminability; avoiding making the instruments too short or too long; avoiding too many or too few items for each issue
- avoiding a biased choice of researcher or research team (e.g. insiders or outsiders as researchers).
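The split-half check mentioned in the reliability item above can be illustrated computationally. The sketch below is not from the text; it assumes a small matrix of hypothetical item scores (respondents by items) and shows one common way of computing the odd-even split-half correlation and applying the Spearman-Brown correction to estimate full-length test reliability.

```python
import numpy as np

# Hypothetical item scores: rows = respondents, columns = test items.
# These values are illustrative only; they are not data from the text.
scores = np.array([
    [1, 0, 1, 1, 0, 1, 1, 0],
    [1, 1, 1, 1, 1, 1, 0, 1],
    [0, 0, 1, 0, 0, 1, 0, 0],
    [1, 1, 1, 1, 1, 1, 1, 1],
    [0, 1, 0, 1, 0, 0, 1, 0],
    [1, 1, 0, 1, 1, 1, 1, 1],
])

# Split the instrument into two halves (odd- and even-numbered items)
# and total each respondent's score on each half.
odd_half = scores[:, 0::2].sum(axis=1)
even_half = scores[:, 1::2].sum(axis=1)

# Correlation between the two half-scores: the reliability of half the test.
r_half = np.corrcoef(odd_half, even_half)[0, 1]

# Spearman-Brown correction estimates reliability at full test length.
r_full = (2 * r_half) / (1 + r_half)

print(f"Half-test correlation: {r_half:.2f}")
print(f"Spearman-Brown estimate for the full test: {r_full:.2f}")
```

Under these assumptions, a corrected coefficient falling well below the commonly cited rule-of-thumb threshold of roughly 0.7 would flag the kind of internal inconsistency that the design-stage checklist asks researchers to build out of their instruments.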

There are several areas where invalidity or bias might creep into the research at the stage of data gathering; these can be minimized by:

- reducing the Hawthorne effect (see the accompanying web site: http://www.routledge.com/textbooks/9780415368780 – Chapter 6, file 6.1.doc)
- minimizing reactivity effects: respondents behaving differently when subjected to scrutiny or being placed in new situations, for example the interview situation – we distort people’s lives in the way we go about studying them (Lave and Kvale 1995: 226)
- trying to avoid dropout rates among respondents
- taking steps to avoid non-return of questionnaires
