
INTERNET-BASED RESEARCH AND COMPUTER USAGE

Reips (2002b: 245–6) also reports that greater variance in results is likely in an Internet-based experiment than in a conventional experiment, owing to technical matters (e.g. network connection speed, computer speed, multiple software applications running in parallel).

On the other hand, Reips (2002b: 247) also reports that Internet-based experiments have attractions over laboratory and conventional experiments:

• They have greater generalizability because of their wider sampling.

• They demonstrate greater ecological validity, as typically they are conducted in settings that are familiar to the participants and at times suitable to the participant (‘the experiment comes to the participant, not vice versa’), though, of course, the obverse of this is that the researcher has no control over the experimental setting (Reips 2002b: 250).

• They have a high degree of voluntariness, such that more authentic behaviours can be observed.

How correct these claims are is an empirical matter. For example, the use of sophisticated software packages (e.g. Java) can reduce experimenter control, as these packages may interact with other programming languages. Indeed, Schwarz and Reips (2001) report that the use of JavaScript led to a 13 per cent higher dropout rate in an experiment compared to an identical experiment that did not use JavaScript. Further, multiple returns by a single participant could confound reliability (discussed above in connection with survey methods); a minimal screening sketch is given below.
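One illustrative way of screening for multiple returns, not a procedure prescribed by Reips, is to compare coarse ‘fingerprints’ of submissions. The field names below (ip, user_agent) are assumptions about what an experiment server might log.

    import hashlib

    def fingerprint(response):
        # Coarse identity proxy: IP address plus browser user-agent string.
        # (Assumed field names; shared machines and proxies make this a
        # heuristic, not proof of a repeat participant.)
        key = (response["ip"] + "|" + response["user_agent"]).encode("utf-8")
        return hashlib.sha256(key).hexdigest()

    def screen_duplicates(responses):
        # Keep the first submission per fingerprint; flag later ones for
        # manual inspection rather than silent deletion.
        seen, kept, flagged = set(), [], []
        for response in responses:
            fp = fingerprint(response)
            if fp in seen:
                flagged.append(response)  # possible multiple return
            else:
                seen.add(fp)
                kept.append(response)
        return kept, flagged

Flagged cases merit inspection rather than automatic removal, since two genuine participants may share a machine and an address.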

Reips (2002a, 2002b) provides a series of ‘dos’ and ‘don’ts’ in Internet experimenting. In terms of ‘dos’ he gives five main points:

• Use dropout as a dependent variable (a minimal analysis sketch follows this list).

• Use dropout to detect motivational confounding (i.e. to identify boredom and motivation levels in experiments).

• Place questions for personal information at the beginning of the Internet study. Reips (2002b) suggests that asking for personal information may assist in keeping participants in an experiment, and that this is part of the ‘high hurdle’ technique, where dropouts self-select out of the study, rather than dropping out during the study.

• Use techniques that help ensure quality in data collection over the Internet (e.g. the ‘high hurdle’ and ‘warm-up’ techniques discussed earlier, subsampling to detect and ensure consistency of results, using single passwords to ensure data integrity, providing contact information, reducing dropout).

• Use Internet-based tools and services to develop and announce your study (using commercially produced software to ensure that technical and presentational problems are overcome). There are also web sites (e.g. the American Psychological Society) that announce experiments.
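To illustrate the first of these points, dropout can be analysed like any other categorical outcome, for instance by testing whether completion differs across conditions. The counts below are invented, and the chi-square test is one generic choice; Reips does not prescribe a particular statistic.

    from scipy.stats import chi2_contingency

    # Rows are experimental conditions; columns are completed vs. dropped out.
    # All counts are invented for the illustration.
    table = [
        [84, 16],  # condition A: 84 completed, 16 dropped out
        [61, 39],  # condition B: 61 completed, 39 dropped out
    ]

    chi2, p, dof, _ = chi2_contingency(table)
    print(f"chi-square = {chi2:.2f}, df = {dof}, p = {p:.4f}")
    # A small p suggests dropout varies by condition, i.e. possible
    # motivational confounding to resolve before comparing outcome measures.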


In terms of ‘don’ts’ Reips gives five main points:

• Do not allow external access to unprotected directories. This can violate ethical and legal requirements, as it provides access to confidential data. It also might allow the participants to have access to the structure of the experiment, thereby contaminating the experiment.

• Do not allow public display of confidential participant data through URLs (uniform resource locators; a problem if respondents use the GET protocol, which is a way of requesting an HTML page, whether or not one uses query parameters), as this, again, violates ethical codes (see the sketch after this list).

• Do not accidentally reveal the experiment’s structure (as this could affect participant behaviour). This might be done through including the experiment’s details in a related file or a file in the same directory.

• Do not ignore the technical variance inherent in the Internet (configuration details, browsers, platforms, bandwidth and software might all distort the experiment, as discussed above).

• Do not bias results through improper use of form elements, such as measurement errors, where omitting particular categories (e.g.
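To make the GET problem above concrete: with GET, form data is appended to the URL as a query string and so ends up in browser histories and server logs, whereas POST carries it in the request body. The sketch below merely constructs both requests so the difference is visible; the endpoint URL and field names are invented.

    from urllib.parse import urlencode
    from urllib.request import Request

    answers = {"participant": "P017", "response": "agree"}  # invented data
    base = "https://example.org/experiment"                 # invented endpoint

    # GET: the answers become part of the URL itself.
    get_req = Request(base + "?" + urlencode(answers), method="GET")
    print(get_req.full_url)
    # https://example.org/experiment?participant=P017&response=agree

    # POST: the same answers travel in the request body; the URL stays clean.
    post_req = Request(base, data=urlencode(answers).encode("utf-8"), method="POST")
    print(post_req.full_url)
    # https://example.org/experiment

Even with POST, responses should be sent over HTTPS; the method only keeps data out of URLs, not out of transit.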
