
Shwery (1994) also questioned aspects of the LSI: ‘The instrument is still plagued by issues related to its construct validity and the lack of an a priori theoretical paradigm for its development.’

Reliability

Curry (1987) judged the internal reliability of the LSI and PEPS to be good, with an average of 0.63 for the LSI and 0.66 for the PEPS. Yet she did not indicate what she regarded as ‘good’ coefficients, and these are normally accepted to be 0.7 or above for a sub-scale. LaMothe et al. (1991) carried out an independent study of the internal consistency reliability of the PEPS with 470 nursing students. They found that only 11 of the 20 scales had alpha coefficients above 0.70, with the environmental variables being the most reliable and the sociological variables the least reliable.
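The ‘alpha coefficients’ at issue here are Cronbach's alpha, the standard internal-consistency statistic, with 0.70 the conventional threshold for a sub-scale. A minimal sketch of how such a coefficient is computed, using made-up item scores rather than any real LSI or PEPS data:

```python
def cronbach_alpha(items):
    """Cronbach's alpha for a sub-scale.

    items: one inner list of item scores per respondent,
    e.g. [[1, 2], [2, 2], [3, 3]] is three respondents on a
    two-item scale. Scores here are illustrative only.
    """
    k = len(items[0])                      # number of items
    totals = [sum(row) for row in items]   # each respondent's scale total

    def sample_variance(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    item_vars = [sample_variance([row[i] for row in items]) for i in range(k)]
    total_var = sample_variance(totals)
    # alpha = k/(k-1) * (1 - sum of item variances / variance of totals)
    return (k / (k - 1)) * (1 - sum(item_vars) / total_var)
```

When items move in lockstep across respondents, alpha reaches 1.0; weakly related items pull it down towards (and below) zero, which is why 13 of LaMothe et al.'s 20 scales falling under 0.70 is a substantive criticism.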

Knapp (1994)⁶ expressed concerns both about the approach to reliability in the design of the LSI and the reporting of reliability data: in particular, he criticised repeating questions in the LSI to improve its reliability. He added:

No items are, in fact, repeated word for word. They are simply reworded … Such items contribute to a consistency check, and are not really concerned with reliability at all … Included in the directions on the separate answer sheet … is the incredible sentence ‘Some of the questions are repeated to help make the inventory more reliable’. If that is the only way the authors could think of to improve the reliability of the inventory, they are in real trouble!

There are also concerns about the Dunns’ claims for internal consistency. For example, Shwery (1994) says:

Scant evidence of reliability for scores from the LSI is provided in the manual. The authors report [that] ‘research in 1988 indicated that 95 percent’ (p.30) of the 22 areas … provided internal consistency estimates of 0.60 or greater. The actual range is 0.55–0.88. Internal consistency of a number of areas … was low. As such, the link between the areas and justifiably making decisions about instruction in these areas is questionable.

Murray-Harvey (1994) reported that the reliability of ‘the majority’ of the PEPS elements was acceptable. However, she considered ‘tactile modality’ and ‘learning in several ways’ to ‘show poor internal consistency’ (1994, 378). In order to obtain retest measures, she administered the PEPS to 251 students in 1991 and again in 1992. Environmental preferences were found to be the most stable, with coefficients of between 0.48 (‘design’) and 0.64 (‘temperature’), while sociological and emotional preferences were less so (0.30 for ‘persistence’ and 0.59 for ‘responsibility’), as might be expected from Rita Dunn’s (2001a) characterisation of these areas as more open to change. However, the physiological traits, which are supposed to be relatively stable, ranged from 0.31 for a specific ‘late morning’ preference to 0.60 for a general ‘time of day’ preference (Price and Dunn 1997). Overall, 13 out of 20 variables exhibited poor test–retest reliability scores of below 0.51.
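The test–retest coefficients Murray-Harvey reports are, in the standard approach, Pearson correlations between the same respondents' scores on the two administrations. A minimal sketch with invented scores (not the 1991/1992 PEPS data):

```python
def pearson_r(xs, ys):
    """Pearson correlation between paired score lists, e.g. each
    respondent's 1991 score in xs and their 1992 score in ys.
    Values near 1.0 indicate a stable trait; the 0.30-0.64 range
    reported above indicates only modest stability."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)
```

A coefficient of 0.31 means the first administration explains only about 10% (0.31²) of the variance in the second, which is why such values undermine the claim that these preferences are stable traits.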

Two separate reviews of the PEPS by Kaiser (1998) and Thaddeus (1998) for the Mental Measurements Yearbook highlighted concerns about the Dunns’ interpretations of reliability. Both reviews noted the reliability coefficients of less than 0.60 for ‘motivation’, ‘authority-oriented learning’, ‘learning in several ways’, ‘tactile learning’ and ‘kinaesthetic learning’. Thaddeus also noted that some data was missing, such as the characteristics of the norm group to whom the test was administered.

Validity

Criticism was directed at a section entitled ‘reliability and validity’ in the LSI manual (Price and Dunn 1997, 10). Knapp (1994) argued that ‘there is actually no mention of validity, much less any validity data’ and Shwery (1994) noted that ‘the reader is referred to other studies to substantiate this claim’. These are the dissertation studies which supporters cite to ‘provide evidence of predictive validity’ (De Bello 1990, 206) and which underpin the meta-analyses (Dunn et al. 1995). There were also problems in obtaining any information about validity in the PEPS (Kaiser 1998; Thaddeus 1998) and a problem with the extensive lists of studies provided by the Dunns, namely that: ‘the authors expect that the validity information for the instrument can be gleaned through a specific examination of these studies’ (Kaiser 1998)⁷. Kaiser also makes the point that ‘just listing the studies in which the PEPS was used does not add to its psychometric properties’.

⁶ Page numbers are not available for online Buros reports from the Mental Measurements Yearbooks. The same applies to Shwery (1994).

⁷ Page numbers are not available for online Buros reports from the Mental Measurements Yearbooks. The same applies to Thaddeus (1998).
