
INTERNET-BASED RESEARCH AND COMPUTER USAGE

non-volunteers in an Internet survey is to contact them by email (assuming that their email addresses are known), e.g. a class of students, a group of teachers. However, email addresses themselves do not give the researcher any indication of the sample characteristics (e.g. age, sex, nationality etc.).

Watt (1997) suggests that there are three types of Internet sample:

- an unrestricted sample: anyone can complete the questionnaire, but it may have limited representativeness
- a screened sample: quotas are placed on the subsample categories and types (e.g. gender, income, job responsibility etc.)
- a recruited sample: respondents complete a preliminary classification questionnaire and then, based on the data they provide, are recruited or not.
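As an illustration, the screening step of a recruited sample can be sketched in code: respondents' preliminary classification answers are checked against subsample quotas before they are admitted. The field names and quota values below are hypothetical, not from the source.

```python
# Hypothetical sketch: screening respondents for a recruited Internet sample.
# Quotas (as in a screened sample) cap how many respondents of each
# subsample type are admitted; all names and numbers are illustrative.

QUOTAS = {"male": 50, "female": 50}   # hypothetical gender quotas
admitted = {"male": 0, "female": 0}   # running count of recruited respondents

def recruit(classification):
    """Decide whether a respondent who completed the preliminary
    classification questionnaire is recruited into the sample."""
    gender = classification.get("gender")
    if gender not in QUOTAS:
        return False                  # unclassifiable: not recruited
    if admitted[gender] >= QUOTAS[gender]:
        return False                  # quota for this subsample is full
    admitted[gender] += 1
    return True

# Usage: each completed preliminary questionnaire feeds the screen in turn.
print(recruit({"gender": "female"}))  # True while the quota has room
```

In a real survey the screening decision would of course involve more variables than one, but the structure is the same: classify first, then recruit against quotas.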

The response rate for an Internet survey is typically lower than for a paper-based survey, as is the rate of completion of the whole survey (Reips 2002a). Witmer et al. (1999: 147) report that for a paper-based survey the response could be as high as 50 per cent and as low as 20 per cent; for an Internet survey it could be as low as 10 per cent or even lower. Dillman et al. (1998b) report a study that found that 84 per cent of a sample completed a particular paper-based survey, while only 68 per cent of a sample completed the same survey online. Solomon (2001) reported that response rates to an Internet-based survey are lower than for equivalent mail surveys. However, this issue is compounded because in an Internet-based survey there is no real knowledge of the population or the sample, unless only specific people have been approached (e.g. through email). In the same study Witmer et al. found that short versions of an Internet-based questionnaire did not produce a significantly higher response rate than the long version (p. 155). Solomon (2001) suggests that response rates can be improved through the use of personalized email, follow-up reminders, the use of simple formats and pre-notification of the intent to survey.
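To make the figures above concrete, a response rate is simply the proportion of invited participants who respond, expressed as a percentage. A minimal calculation, using made-up counts chosen to match the percentages cited:

```python
# Hypothetical counts illustrating the response-rate figures cited above.
invited = 200                # people contacted with the survey

paper_responses = 100        # corresponds to a 50 per cent paper-based rate
internet_responses = 20      # corresponds to a 10 per cent Internet rate

def response_rate(responses, invited):
    """Response rate as a percentage of those invited."""
    return 100.0 * responses / invited

print(response_rate(paper_responses, invited))     # 50.0
print(response_rate(internet_responses, invited))  # 10.0
```

Note that this calculation presumes the number invited is known, which, as the text observes, is often not the case for an open Internet survey.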

Reips (2002a) provides some useful guidelines for increasing response rates on an Internet survey. He suggests that response rates can be increased by utilizing the multiple site entry technique, i.e. having several web sites and postings on several discussion groups that link potential participants or web surfers to the web site containing the questionnaire. Reips (2002a: 249) also suggests utilizing a ‘high hurdle’ technique, where ‘motivationally adverse factors are announced or concentrated as close to the beginning’ as possible, so that any potential dropouts will self-select at the start rather than during the data collection. A ‘high hurdle’ technique, he suggests, comprises:

- Seriousness: inform the participants that the research is serious and rigorous.
- Personalization: ask for an email address or contact details and personal information.
- Impression of control: inform participants that their identity is traceable.
- Patience: loading time: use image files to increase the loading time of the opening web pages.
- Patience: long texts: place most of the text on the first page, and successively reduce the amount on each subsequent page.
- Duration: inform participants how long the survey will take.
- Privacy: inform the participants that some personal information will be sought.
- Preconditions: indicate the requirements for particular software.
- Technical pretests: conduct tests of the compatibility of software.
- Rewards: indicate that any rewards/incentives are contingent on full completion of the survey.
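The ordering principle behind the technique — adverse factors announced up front, data collection afterwards — can be sketched as a page sequence. All page names below are hypothetical:

```python
# Hypothetical page ordering for a 'high hurdle' Internet survey:
# motivationally adverse elements are concentrated at the start,
# so likely dropouts self-select before data collection begins.

hurdle_pages = [
    "seriousness_notice",      # the research is serious and rigorous
    "personal_details_form",   # email address and contact details
    "traceability_notice",     # identity is traceable
    "duration_and_privacy",    # how long it takes, what will be asked
    "software_preconditions",  # required software and technical pretest
]
survey_pages = ["questions_part_1", "questions_part_2", "debriefing"]

page_sequence = hurdle_pages + survey_pages

# Check the defining property: every hurdle precedes every survey page.
assert max(page_sequence.index(p) for p in hurdle_pages) < \
       min(page_sequence.index(p) for p in survey_pages)
```

The sketch only encodes the ordering; the substance of each hurdle (what the notices say, what details are requested) is the methodological decision the list above describes.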

Of course, some of these strategies could backfire on the researcher (e.g. the disclosure of personal and traceable details), but the principle here is that it is better for the participant not to take part in the first place than to drop out during the process. Indeed Frick et al. (1999) found that early dropout was not increased by asking for personal information at the beginning. In relation to online experiments they found that ‘the tendency of leaving the experiment when personal information is
