learning-styles
The long history of public dispute over the reliability of the LSI can be portrayed as the action of two opposing factions of supporters and detractors. But this complex picture is made more complicated still by one of the sharpest groups of critics having a change of heart as a result of research with a modified form of the 1985 version of the LSI. In a number of studies, Veres, Sims and their colleagues had criticised the 1985 version because its minor improvements in test–retest reliability over the 1976 version were not sufficient to support Kolb's theory (Sims et al. 1986; Veres, Sims and Shake 1987; Sims, Veres and Shake 1989). However, when they changed the instrument by randomising the order in which the sentence endings were presented, in order to eliminate a probable response bias, the test–retest reliabilities 'increased dramatically' (Veres, Sims and Locklear 1991, 149). As a result, they now recommend that researchers use the modified version of the LSI to study learning styles.
Their stance is supported by Romero, Tepper and Tetrault (1992) who, likewise in order to avoid problems with scoring the LSI, developed new scales which proved to have adequate levels of reliability and validity. In the technical specifications of the 1999 version of the LSI, Kolb (2000, 69) uses the data produced by Veres, Sims and Locklear (1991) to claim that its reliability has been 'substantially improved as a result of the new randomized self-scoring format'.
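The test–retest reliabilities at issue throughout this dispute are, in essence, correlations between scores obtained from two administrations of the same scale. A minimal sketch of that calculation, using invented respondent scores rather than any LSI data:

```python
# Illustrative only: test-retest reliability computed as the Pearson
# correlation between scores from two administrations of a scale.
# All numbers below are invented for demonstration; they are not LSI data.

from math import sqrt

def pearson_r(x, y):
    """Pearson product-moment correlation between two score lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical scale scores for ten respondents at time 1 and time 2.
time1 = [12, 15, 9, 20, 14, 11, 18, 16, 10, 13]
time2 = [13, 14, 10, 19, 15, 10, 17, 18, 9, 14]

print(round(pearson_r(time1, time2), 2))  # prints 0.94
```

A coefficient this high would count as strong stability; the criticism of the pre-1991 LSI was precisely that such correlations were too low across administrations.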
Validity
The continuing conflict over the reliability of the LSI is replicated with respect to its validity, and shows little sign of the kind of resolution which the theory of experiential learning suggests is necessary for learning to take place. The latest version of the guide to the LSI (Kolb 2000) contains one general paragraph on the topic of validity. This refers the reader to the vast bibliography on the topic, but does not provide any detailed statistics or arguments beyond claiming that in 1991, Hickox reviewed the literature and concluded that '83.3 per cent of the studies she analyzed provided support for the validity of Experiential Learning Theory and the Learning Style Inventory' (Kolb 2000, 70).
In sharp contrast, Freedman and Stumpf (1978, 280) reported that in studies of undergraduates following different courses, 'on average, less than 5 percent of between-group variance … can be accounted for by knowledge of learning style'. While they accepted that the LSI has sufficient face validity to win over students, factor analysis provided only weak support for the theory; furthermore, they claimed that the variance accounted for by the LSI may be simply a function of the scoring system.
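Freedman and Stumpf's figure can be read as an effect size: the share of the total variance in some outcome that lies between the style groups rather than within them. A minimal sketch of that one-way decomposition (eta-squared), using invented course marks rather than their data:

```python
# Illustrative only: "percentage of between-group variance accounted for"
# read as eta-squared from a one-way decomposition:
#   eta^2 = SS_between / SS_total.
# The groups and marks below are invented, not Freedman and Stumpf's data.

def eta_squared(groups):
    """Proportion of total variance lying between the group means."""
    all_scores = [s for g in groups for s in g]
    grand_mean = sum(all_scores) / len(all_scores)
    ss_total = sum((s - grand_mean) ** 2 for s in all_scores)
    ss_between = sum(len(g) * ((sum(g) / len(g)) - grand_mean) ** 2
                     for g in groups)
    return ss_between / ss_total

# Hypothetical course marks for students grouped by learning style.
marks_by_style = [
    [55, 62, 58, 60],   # e.g. 'divergers'
    [61, 57, 59, 63],   # 'assimilators'
    [58, 60, 56, 62],   # 'convergers'
    [60, 59, 61, 57],   # 'accommodators'
]
print(f"{eta_squared(marks_by_style):.3f}")  # prints 0.044
```

A value of around 0.04, as here, is what a "less than 5 percent" finding looks like: knowing a student's style group tells you almost nothing about their mark.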
Further confusion arises because for every negative study, a positive one can be found. For example, Katz (1986) produced a Hebrew version of the LSI and administered it to 739 Israeli students to investigate its construct validity. Factor analysis provided empirical support for the construct validity of the instrument and suggested that 'Kolb's theory may be generalised to another culture and population' (Katz 1986, 1326). Yet in direct contradiction, Newstead's study (1992, 311) of 188 psychology students at the University of Plymouth found that, as well as disappointingly low reliability scores, 'the factor structure emerging from a factor analysis bore only a passing resemblance to that predicted by Kolb, and the scales did not correlate well with academic performance'.
Again, Sims, Veres and Shake (1989) attempted to establish construct validity by examining the LSI and Honey and Mumford's LSQ for convergence. The evidence, based on both instruments being administered to 279 students in two south-eastern US universities, was 'disappointingly sparse' (1989, 232). Goldstein and Bokoros (1992, 710) also compared the two instruments and found a 'modest but significant degree of classification into equivalent styles'.
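Convergence studies of this kind ask how often two instruments place the same respondent in the same style category, ideally correcting for agreement expected by chance. A minimal sketch with invented assignments (the four category names are Kolb's types, but the data come from neither study):

```python
# Illustrative only: agreement between two instruments that each classify
# respondents into one of four style categories. Raw agreement plus
# Cohen's kappa (chance-corrected). The assignments below are invented.

def agreement_rate(labels_a, labels_b):
    """Proportion of respondents given the same category by both instruments."""
    matches = sum(a == b for a, b in zip(labels_a, labels_b))
    return matches / len(labels_a)

def cohens_kappa(labels_a, labels_b):
    """Agreement corrected for the rate expected by chance alone."""
    n = len(labels_a)
    p_o = agreement_rate(labels_a, labels_b)
    cats = set(labels_a) | set(labels_b)
    p_e = sum((labels_a.count(c) / n) * (labels_b.count(c) / n) for c in cats)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical style assignments for ten respondents by two instruments.
lsi = ["diverger", "converger", "assimilator", "diverger", "accommodator",
       "converger", "assimilator", "diverger", "converger", "accommodator"]
lsq = ["diverger", "converger", "converger", "diverger", "accommodator",
       "assimilator", "assimilator", "accommodator", "converger", "accommodator"]

print(agreement_rate(lsi, lsq))           # prints 0.7
print(round(cohens_kappa(lsi, lsq), 2))   # prints 0.6
```

A kappa in this range is what a 'modest but significant' degree of equivalent classification might look like; 'disappointingly sparse' convergence would sit much closer to zero.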
A more serious challenge to Kolb's theory and instrument is provided by De Ciantis and Kirton (1996), whose psychometric analysis revealed two substantial weaknesses. First, they argued (1996, 816) that Kolb is attempting, in the LSI, to measure 'three unrelated aspects of cognition: style, level and process'. By 'process', they mean the four discrete stages of the learning cycle through which learners pass; by 'level', the ability to perform well or poorly at any of the four stages; and by 'style', the manner in which 'each stage in the learning process is approached and operationalised' (1996, 813). So, as they concluded: 'each stage can be accomplished in a range of styles and in a range of levels' (1996, 817). The separation of these three cognitive elements – style, level and process – is a significant advance in precision over Kolb's conflation of styles, abilities and stages, and should help in the selection of an appropriate learning strategy.
De Ciantis and Kirton go further, however, by casting doubt on Kolb's two bipolar dimensions of reflective observation (RO)–active experimentation (AE) and concrete experience (CE)–abstract conceptualisation (AC). Interestingly, the two researchers elected to use Honey and Mumford's LSQ in their study of the learning styles of 185 managers in the UK and the Republic of Ireland, because they considered it more reliable than Kolb's LSI. Kolb's four learning styles emerged from their factor analysis, but in a different configuration, with CE at one pole and RO at the other, and AC at one pole and AE at the other.