

Validity

In contrast to claims by Entwistle and his colleagues about the validity of the ASI, there is less agreement in external evaluations. For example, in a review of seven external studies and two by Entwistle and Ramsden, Richardson found problems with construct validity for many of the 16 sub-scales and individual items of the ASI generally. He argued that the ASI provided a convenient way of characterising students’ approaches to learning within different contexts, but that an ongoing problem for researchers had been to retrieve the original constituent structure of the ASI. Although factor analyses in both internal and external studies of the ASI have retrieved the basic distinction between meaning and reproducing orientations, ‘dimensions concerning achieving orientation and styles and pathologies have been much less readily identifiable’ (Richardson 1992, 41). He concluded (1997) that meaning and reproducing orientations constitute a valid typology of approaches to studying and that there is evidence of gender and age differences in orientations.
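As a rough illustration of the factor-analytic check described above, the sketch below uses simulated item responses and item counts (not data from any of the studies cited) to show how a two-factor solution on ASI-style items might be inspected for a meaning/reproducing distinction.

```python
# Minimal sketch, assuming simulated ASI-style item responses: a two-factor
# exploratory factor analysis, with items loading on one of two latent
# orientations standing in for "meaning" and "reproducing".
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)
n_students, n_items = 300, 8  # hypothetical sample and item counts

# Simulate two latent orientations; the first four items load on the first,
# the last four on the second.
latent = rng.normal(size=(n_students, 2))
loadings = np.zeros((2, n_items))
loadings[0, :4] = 0.8
loadings[1, 4:] = 0.8
items = latent @ loadings + rng.normal(scale=0.5, size=(n_students, n_items))

fa = FactorAnalysis(n_components=2, rotation="varimax")
fa.fit(items)

# Items loading strongly on the same factor would be read as one orientation.
print(np.round(fa.components_, 2))
```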

Problems with construct validity in the ASI are confirmed by Sadler-Smith (1999a), while other studies question the construct validity of some items for students of other cultures (see Meyer and Parsons 1989; Kember and Gow 1990). Kember and Gow argue that the test needs to be more culturally specific in terms of construct validity. There has also been disagreement about whether it offers predictive validity in correlating orientations and final assessment among 18–21-year-old undergraduates (Richardson 1992). However, Entwistle argues that the inventories were developed to describe different approaches to studying, not to predict achievement as such. In addition, the absence of standardised assessment criteria in higher education makes predictive validity difficult to demonstrate (Entwistle 2002).

In response to problems with construct and predictive validity, Fogarty and Taylor (1997) tested the ASI with 503 mature, ‘non-traditional’ (ie without entry qualifications) entrants to Australian universities. Their study confirmed problems with internal consistency reliability for seven of the sub-scales, with alpha coefficients in the range 0.31 to 0.60. In a similar vein to other studies that advocate a focus on broad orientations, the authors argued (1997, 328) that it ‘may be better to concentrate on the meaning and reproducing orientations rather than on the various minor scales’. In terms of predictive validity, they found a negligible correlation between reproduction orientation and poor academic performance among their sample, but also a lack of correlation between a deep approach and good performance. This led them to argue that students unfamiliar with study may have appropriate orientations, but lack the appropriate study skills to operationalise them.
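For readers unfamiliar with the reliability statistic being reported, the sketch below shows how Cronbach’s alpha for a single sub-scale can be computed from a matrix of item scores. The data and item counts are hypothetical; the point is simply that alphas of 0.31–0.60 sit well below the conventional 0.7 benchmark for acceptable internal consistency.

```python
# Illustrative only: Cronbach's alpha for one sub-scale, computed from a
# matrix of item scores (rows = respondents, columns = items).
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """alpha = k/(k-1) * (1 - sum of item variances / variance of total score)."""
    k = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1)
    total_var = scores.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical 5-point responses from six students on a 4-item sub-scale.
subscale = np.array([
    [4, 5, 4, 4],
    [2, 3, 2, 3],
    [5, 5, 4, 5],
    [3, 2, 3, 3],
    [1, 2, 2, 1],
    [4, 4, 5, 4],
])
print(round(cronbach_alpha(subscale), 2))
```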

Another study (Kember and Gow 1990) explored relationships between performance and persistence on the one hand, and approaches and orientation as measured by the ASI on the other. In a study of 779 students divided between internal and external courses, discriminant analysis evaluated which of the sub-scales could distinguish between those who persist and those who do not. For both internal and external students, the surface approach was the variable that discriminated between non-persisters and persisters (discriminant coefficients of 0.71 for internal students and 0.94 for external students). The other discriminating variable was fear of failure. Persistence was therefore partly related to fear of failure, while a surface approach was more likely to lead to dropping out.
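A minimal sketch of the kind of discriminant analysis described above is given below; the data, group structure and resulting weights are invented for illustration and are not Kember and Gow’s.

```python
# Minimal sketch with made-up data (not Kember and Gow's): a linear
# discriminant analysis asking whether surface-approach and fear-of-failure
# scores separate persisters (1) from non-persisters (0).
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(1)
n = 200

persisted = rng.integers(0, 2, size=n)  # 1 = persisted, 0 = dropped out
surface = rng.normal(loc=3.0 - 0.6 * persisted, scale=0.8, size=n)
fear_of_failure = rng.normal(loc=3.0 - 0.3 * persisted, scale=0.8, size=n)
X = np.column_stack([surface, fear_of_failure])

lda = LinearDiscriminantAnalysis()
lda.fit(X, persisted)

# The fitted weights indicate how strongly each sub-scale contributes
# to the function separating the two groups.
print(dict(zip(["surface", "fear_of_failure"], lda.coef_[0].round(2))))
```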

In a study of 573 Norwegian undergraduates following an introductory course in the history of philosophy, logic and philosophy of science, Diseth (2001) evaluated the factor structure of the ASSIST. His study found evidence of the deep and surface approaches, but was less positive for items about course perception and assessment demands. In another test with 89 Norwegian psychology students, he found no links between general intelligence measures and approaches to learning. However, he noted (Diseth 2002) that straightforward correlations between achievement and the approaches that students adopt are not sufficient to predict success in assessment. Instead, a surface approach had a statistically significant curvilinear link to examination grade: the highest level of achievement was associated with a low or moderate surface approach, and the more that students used a surface approach beyond that point, the more their achievement declined.
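The curvilinear relationship Diseth describes is the kind of pattern a quadratic regression term can capture. The sketch below uses simulated scores (not Diseth’s data) to show how such a fit identifies the surface-approach level at which predicted grades peak.

```python
# Illustrative sketch (simulated data): fitting a quadratic term captures a
# curvilinear link in which grades peak at low-to-moderate surface-approach
# scores and decline as surface scores rise further.
import numpy as np

rng = np.random.default_rng(2)
surface = rng.uniform(1, 5, size=150)  # hypothetical surface-approach score
grade = (4.0 + 0.6 * surface - 0.25 * surface**2
         + rng.normal(scale=0.3, size=surface.size))  # hypothetical exam grade

# Quadratic (degree-2) fit: grade ~ b0 + b1*surface + b2*surface^2.
b2, b1, b0 = np.polyfit(surface, grade, deg=2)
peak = -b1 / (2 * b2)  # surface score at which predicted grade is highest
print(f"b2={b2:.2f}, b1={b1:.2f}, b0={b0:.2f}; predicted peak at surface={peak:.2f}")
```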

A strategic approach is also associated with high achievement, suggesting a need to differentiate between deep and surface approaches to learning and a strategic approach to studying (Entwistle and McCune 2003). This also suggests the need for lecturers, and students themselves, to be realistic about the importance of strategic approaches in students’ responses to teaching and to curriculum and assessment design. For example, the pressures of ‘credential inflation’ for achieving ever higher grades and levels of qualification are likely to encourage strategic approaches.

There has recently been a large upsurge of interest in describing and measuring the study strategies of students in higher education. This interest arises from both political and pedagogical goals: for example, policy decisions such as the training and certification of teachers in universities demand empirical evidence about appropriate pedagogy (see Entwistle and McCune 2003). In addition, current proposals to use student evaluations of their courses as the basis for league tables of universities derive heavily from the Course Perceptions Questionnaire, developed for quite different purposes in the 1980s.
