
lists in alphabetical order. Finally, they found that placing very short guides underneath the write-in box rather than at its side (e.g. dd/mm/yy for ‘day/month/year’, and using ‘yy’ for ‘year’ rather than ‘yyyy’) increased response rates, and that placing instructions very close to the answer box improved response rates.
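To make this concrete, the sketch below is a purely illustrative fragment, not drawn from Dillman et al.: the field name ‘dob’ and the label wording are invented. It shows one way a web questionnaire might place a very short format guide directly beneath a date write-in box rather than at its side.

```python
def date_field(name: str = "dob") -> str:
    """Return an HTML fragment for a date write-in box with a short
    format guide placed directly beneath it rather than at its side."""
    return (
        f'<label for="{name}">Date of birth</label><br>'
        f'<input type="text" id="{name}" name="{name}" size="8"><br>'
        "<small>dd/mm/yy</small>"  # short guide under the box, 'yy' rather than 'yyyy'
    )


if __name__ == "__main__":
    print(date_field())
```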

Dillman et al. (2003: 23) also found that having respondents use a yes/no format (a ‘forced choice’) for responding resulted in increased numbers of affirmative answers, even though this requires more cognitive processing than non-forced-choice questions (e.g. ‘tick [check]-all-that-apply’ questions). This is because respondents may not wish to answer questions in the outright negative (Dillman et al. 2003: 10); even if they do not really have an opinion, or they are neutral, or the item does not really apply to them, they may choose a ‘yes’ rather than a ‘no’ category. They may leave a blank rather than indicating a ‘no’. The percentage of affirmative responses was higher in a paper-based survey than in an Internet-based survey (11.3 per cent and 6.5 per cent respectively) (Dillman et al. 2003: 22).

Similarly, as mentioned earlier, Dillman et al. (2003) report that respondents tend to select items higher up a list of options than lower down (the primacy effect), opting for the ‘satisficing’ principle: they are satisfied with a minimum sufficient response, selecting the first reasonable response in a list and then moving on, rather than working their way down the list to find the optimal response. This suggests that item order is a significant feature, making a difference of over 39 per cent to responses (Dillman et al. 2003: 7). This is particularly so, the authors aver, when respondents are asked for opinions and beliefs rather than for factual information. They also suggest that the more difficult the item is, the more respondents will move towards ‘satisficing’. Dillman et al. (2003: 22) found that ‘satisficing’ and the primacy effect were stronger in Internet surveys than in paper-based surveys, and that changing ‘check-all-that-apply’ to forced responses (yes/no) did not eliminate response order effects.
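The mechanism can be illustrated with a small simulation (the figures below are invented for illustration and are not Dillman et al.’s data): if a proportion of respondents satisfice by accepting the first option they find reasonable, the same option attracts a markedly larger share of choices when it is listed first than when it is listed last.

```python
import random

# Hypothetical simulation (illustrative only; not data from Dillman et al.)
# of how 'satisficing' produces a primacy effect: satisficing respondents
# pick the first option they find acceptable, so an option is chosen more
# often when it appears at the top of the list than at the bottom.

SATISFICING_RATE = 0.5  # assumed share of respondents who satisfice


def respond(options: list[str]) -> str:
    """One simulated respondent: satisficers take the first option,
    others consider the whole list (all options assumed equally acceptable)."""
    if random.random() < SATISFICING_RATE:
        return options[0]
    return random.choice(options)


def share_choosing(target: str, options: list[str], n: int = 10_000) -> float:
    """Proportion of n simulated respondents who select the target option."""
    return sum(respond(options) == target for _ in range(n)) / n


random.seed(1)
print("'A' listed first:", share_choosing("A", ["A", "B", "C", "D"]))  # roughly 0.62
print("'A' listed last: ", share_choosing("A", ["B", "C", "D", "A"]))  # roughly 0.12
```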

Dillman et al. (2003: 6) also report that the order of response items can have an effect on responses, citing as an example a study that found that asking college students whether their male or female teachers were more empathetic was affected by whether the ‘male’ option was placed before or after the ‘female’ option: ‘respondents evaluated their female teachers more positively when they were asked to compare them to their male teachers than when they were asked to compare their male teachers to their female teachers’. Respondents compare the second item in light of the first item in a list rather than considering the items separately.

Internet-based surveys are subject to the same ethical rules as paper-based surveys. These include, for example, informed consent and confidentiality. While the former may be straightforward to ensure, the issue of confidentiality on the Internet is more troublesome for researchers. For example, while an email survey can be quick and uncomplicated, it can also reveal the identity and traceability of the respondent; as Witmer et al. (1999: 147) remark, this could stall a project. Security (e.g. through passwords and PINs) is one possible solution, although this, too, can create problems in that respondents may feel that they are being identified and tracked, and, indeed, some surveys may deposit unwelcome ‘cookies’ onto the respondent’s computer, for future contact.

Sampling in Internet-based surveys<br />

Sampling bias is a major concern for Internet-based surveys (Coomber 1997; Roztocki and Lahri 2002). Hewson et al. (2003: 27) suggest that ‘Internet-mediated research is immediately subject to serious problems concerning sampling representativeness and validity of data’, e.g. that the Internet researcher tends to tap into middle-class and well-educated populations, mainly from the United States, or undergraduate and college students. Survey 2000 (Witte et al. 1999) found that 92.5 per cent of respondents were white. However, the view of over-representation of some and under-representation of others is being increasingly challenged (Smith and Leigh 1997;
