LSRC reference Section 9, page 140/141
The unwarranted faith placed in simple inventories
A recurrent criticism we made of the 13 models studied in detail in Sections 3–7 was that too much is being expected of relatively simple self-report tests. Kolb’s LSI, it may be recalled, now consists of no more than 12 sets of four words to choose from. Even if all the difficulties associated with self-report (ie the inability to categorise one’s own behaviour accurately or objectively, giving socially desirable responses, etc; see Riding and Rayner 1998) are put to one side, other problems remain. For example, some of the questionnaires, such as Honey and Mumford’s, force respondents to agree or disagree with 80 items such as ‘People often find me insensitive to their feelings’.
Richardson (2000, 185) has pointed to a number of problems with this approach:

the respondents are highly constrained by the predetermined format of any particular questionnaire and this means that they are unable to calibrate their understanding of the individual items against the meanings that were intended by the person who originally devised the questionnaire or by the person who actually administers it to them
We therefore advise against pedagogical intervention based solely on any of the learning style instruments. One of the strengths of the models developed by Entwistle and Vermunt (see Sections 7.1 and 7.2) is that concern for ecological validity has led them to adopt a broader methodology, where in-depth qualitative studies are used in conjunction with an inventory to capture a more rounded picture of students’ approaches to learning.
As Curry (1987) points out, definitions of learning style and underlying concepts and theories are so disparate between types and cultures (eg US and European) that each model and instrument has to be evaluated in its own terms. One problem is that ‘differences in research approaches continue and make difficult the resolution of acceptable definitions of validity’ (1987, 2). In addition, she argues that a great deal of research and practice has proceeded ‘in the face of significant difficulties in the bewildering confusion of definitions surrounding cognitive style and learning style conceptualisations…’ (1987, 3). Her evaluation, in 1987, was that researchers in the field had not yet established unequivocally the reality, utility, reliability and validity of these concepts. Our 2003 review shows that these problems still bedevil the field.
Curry’s evaluation (1987, 16) also offers another important caveat for policy-makers, researchers and practitioners that is relevant 16 years later:

The poor general quality of available instruments [makes it] unwise to use any one instrument as a true indicator of learning styles … using only one measure assumes [that] that measure is more correct than the others. At this time (1987) the evidence cannot support that assumption.
There is also a marked disparity between the sophisticated, statistical treatment of the scores that emanate from these inventories (and the treatment is becoming ever more sophisticated), and the simplicity – some would say the banality – of many of the questionnaire items. However, it can be argued that the items need to be obvious rather than recondite if they are to be valid.
There is also an inbuilt pressure on all test developers to resist suggestions for change because, if even just a few words are altered in a questionnaire, the situation facing the respondent has been changed and so all the data collected about the test’s reliability and validity is rendered redundant.
No clear implications for pedagogy
There are two separate problems here. First, learning style researchers do not speak with one voice; there is widespread disagreement about the advice that should be offered to teachers, tutors or managers. For instance, should the style of teaching be consonant with the style of learning or not? At present, there is no definitive answer to that question, because – and this brings us to the second problem – there is a dearth of rigorously controlled experiments and of longitudinal studies to test the claims of the main advocates. A move towards more controlled experiments, however, would entail a loss of ecological validity and of the opportunity to study complex learning in authentic, everyday educational settings.
Curry (1990, 52) summarised the situation neatly:

Some learning style theorists have conducted repeated small studies that tend to validate the hypotheses derived from their own conceptualizations. However, in general, these studies have not been designed to disconfirm hypotheses, are open to expectation and participation effects, and do not involve wide enough samples to constitute valid tests in educational settings. Even with these built-in biases, no single learner preference pattern unambiguously indicates a specific instructional design.
An additional problem with such small-scale studies is that they are often carried out by the higher-degree students of the test developers, with all the attendant dangers of the ‘Hawthorne Effect’ – namely, that the enthusiasm of the researchers themselves may be unwittingly influencing the outcomes. The main questions still to be resolved – for example, whether to match or not – will only be settled by large-scale, randomised controlled studies using experimental and control groups.