
The BES instrument has elements for learners to self-assess ‘analytic’ versus ‘global’, and ‘reflective’ versus ‘impulsive’ processing. In a survey of 73 trainee teachers using the BES, 71.3% identified themselves as strong to moderately analytic, while 49.4% identified themselves as strong to moderately reflective. These findings were used to support the claim that trainee teachers, who are themselves more likely to be analytic, need to be prepared to teach ‘a relatively high number of global processors amongst youngsters’ (Honigsfeld and Schiering 2003, 292).

Evaluation by authors

Rita Dunn makes strong claims for reliability, validity and impact; for example (1990b, 223):

Research on the Dunn and Dunn model of learning style is more extensive and far more thorough than the research on any other educational movement, bar none. As of 1989, it had been conducted at more than 60 institutions of higher education, at multiple grade levels … and with every level of academic proficiency, including gifted, average, underachieving, at risk, drop-out, special education and vocational/industrial arts populations. Furthermore, the experimental research in learning styles conducted at St John’s University, Jamaica [in] New York has received one regional, twelve national, and two international awards and citations for its quality. No similar claim can be made for any other body of educational knowledge.

By 2003, the number of research studies had increased further, with research being conducted in over 120 higher education institutions (Lovelace 2003).

Reliability

The LSI manual (Price and Dunn 1997) reported research which indicated that the test–retest reliabilities for 21 of the 22 factors were greater than 0.60 (n=817, using the 1996 revised instrument), with only ‘late morning’ preferences failing to achieve this level (0.56). It is important to reiterate here that the number of elements varies between the different inventories because the PEPS omits elements for motivation in the case of adults. For the PEPS, Price (1996) reported that 90% of elements had a test–retest reliability of greater than 0.60 (n=504), the ‘rogue element’ in this case being the ‘tactile modality’ preference (0.33). It is important to note that the 0.60 criterion for acceptable reliability is a lax one: a test–retest correlation of 0.60 means that only 36% of the variance in scores is reproduced from one administration to the next, so misclassification is actually more likely than accuracy. The PEPS was tested with 975 females and 419 males aged 18 to 65 years. Test–retest reliabilities for the 20 sub-scales ranged from 0.39 to 0.87, with 40% of the scales being over 0.8 (Nelson et al. 1993).
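
The arithmetic behind this point can be made explicit. The short Python sketch below (an illustration only, using just the coefficients reported above) squares each test–retest correlation to show how much of the variance in scores is actually reproduced from one administration to the next.

```python
# Illustrative arithmetic only: squaring a test-retest correlation gives the
# proportion of variance that is reproduced from one administration to the next.
def shared_variance(r: float) -> float:
    """Proportion of variance the two administrations have in common (r squared)."""
    return r ** 2

# Coefficients cited above: the 0.60 criterion itself, the LSI 'late morning'
# element (0.56) and the PEPS 'tactile modality' element (0.33).
for r in (0.60, 0.56, 0.33):
    common = shared_variance(r)
    print(f"r = {r:.2f}: {common:.0%} of the variance reproduced on retest, "
          f"{1 - common:.0%} not")
```

Even at the 0.60 criterion, barely more than a third of the variance is common to the two administrations.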

Although at the time of writing there are no academic articles or book chapters dealing with the reliability and validity of the Building Excellence Survey (BES), in 1999 one of Rita Dunn’s doctoral students made a detailed statistical comparison of the PEPS and the BES (Lewthwaite 1999). Lewthwaite used a paper-based version of the BES which contained 150 items and resembled the current electronic version in ‘look and feel’. Both the PEPS and the BES were completed by an opportunity sample of 318 adults, with the PEPS being done first, followed by part of the BES, the rest being completed by most participants at home. Lewthwaite felt the need to preface the questionnaire with a 20–30 minute lecture about the Dunn and Dunn learning styles model and an explanation of how to self-score the BES. There was therefore ample opportunity for participants to revise their choices in response to section-by-section feedback, since they had a fortnight before bringing their completed booklets to a follow-up session. This was hardly an ideal way to study the statistical properties of the BES, since both the lecture and the way in which the BES presents one strand at a time for self-scoring encouraged participants to respond in a consistent manner.

What is of particular interest about Lewthwaite’s study is the almost total lack of agreement between corresponding components of the PEPS and the BES. Rita Dunn was closely involved in the design of both instruments, which are based on the same model and have similarly worded questions. Yet the correlations for the 19 shared components range from –0.14 (for learning in several ways) to 0.45 (for preference for formal or informal design and for temperature), with an average of only 0.19. In other words, the PEPS and the BES measure the same things only to a level of 4%, while 96% of what they measure is inconsistent between one instrument and the other. The only conclusion to be drawn is that these instruments have virtually no concurrent validity, even when administered in circumstances designed to maximise such validity.
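
The 4% figure is simply the square of the average correlation. A minimal sketch of that arithmetic, using only the correlations reported by Lewthwaite as cited above:

```python
# Illustrative arithmetic only: the proportion of variance shared by
# corresponding PEPS and BES components is the square of their correlation.
correlations = {
    "average across the 19 shared components": 0.19,
    "lowest (learning in several ways)": -0.14,
    "highest (formal/informal design; temperature)": 0.45,
}

for label, r in correlations.items():
    print(f"{label}: r = {r:+.2f}, shared variance = {r ** 2:.1%}")
```

Even the strongest of the shared components (0.45) corresponds to only about 20% shared variance.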

The literature supporting the model presents extensive citations of studies that have tested the model in diverse contexts (see Dunn et al. 1995; Dunn and Griggs 2003). The authors claim that age, gender, socio-economic status, academic achievement, race, religion, culture and nationality are important variables in learning preferences, showing multiple patterns of learning styles between and within diverse groups of students (eg Ewing and Yong 1992; Dunn et al. 1995). The existence of differences both between and within groups means that the evidence does not support a clear or simple ‘learning styles prescription’ which differentiates between these groups.
