
A COMPENDIUM OF SCALES for use in the SCHOLARSHIP OF TEACHING AND LEARNING


The Motivated Strategies for Learning Questionnaire (MSLQ)

Researchers from multiple institutions developed the MSLQ based on social cognitive principles. As Garcia and Pintrich (1995) explained, the MSLQ has two major sections (motivation and learning strategies), with the motivational section subdivided into three components: expectancy (including the self-efficacy scale), value (e.g., extrinsic and intrinsic orientations), and affect (encompassing test anxiety). Because beliefs and strategies likely differ across courses, students respond to all 81 items in terms of a specific course using a 7-point Likert scale with answer options ranging from 1 (not at all true of me) to 7 (very true of me).
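The scoring procedure described above can be sketched in code. This is a minimal illustration of computing one subscale score as the mean of its items on the 7-point scale; the item numbers used here are placeholders, not the actual MSLQ item assignments.

```python
# Hypothetical item indices for illustration only -- not the real MSLQ
# mapping of items to the self-efficacy subscale.
SELF_EFFICACY_ITEMS = [5, 6, 12, 15, 20, 21, 29, 31]

def subscale_score(responses, item_numbers):
    """Mean of the ratings for one subscale.

    responses: dict mapping item number -> rating on the 7-point scale
    (1 = not at all true of me, 7 = very true of me).
    """
    ratings = [responses[i] for i in item_numbers]
    if any(not 1 <= r <= 7 for r in ratings):
        raise ValueError("ratings must be on the 1-7 Likert scale")
    return sum(ratings) / len(ratings)
```

Because students answer with reference to a specific course, a subscale score computed this way characterizes motivation in that course rather than as a general trait.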

The eight items on the self-efficacy scale (α = .93; Pintrich et al., 1991) reflect expectancy for success and self-efficacy. Regarding predictive validity, scores on this scale positively correlated with final grade (r = .41); the correlation was stronger for the self-efficacy scale than for any of the other MSLQ motivation or learning strategy scales. Self-efficacy scores also correlated more strongly with intrinsic motivation (r = .59) than with any other scale. Komarraju and Nadler (2013) reported a significant correlation (r = .50) between self-efficacy and effort regulation and found that students with higher self-efficacy were more likely than those with lower self-efficacy to believe that intelligence can change and to adopt mastery goals.
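The α = .93 reported above is Cronbach's alpha, the standard index of a scale's internal consistency. A minimal sketch of the computation follows; the response matrix in the test is synthetic illustration, not MSLQ data.

```python
def cronbach_alpha(responses):
    """Cronbach's alpha for a set of respondents.

    responses: list of respondents, each a list of k item scores.
    alpha = (k / (k - 1)) * (1 - sum(item variances) / variance(totals))
    """
    k = len(responses[0])

    def variance(xs):
        # Sample variance (n - 1 denominator).
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    item_vars = [variance([r[i] for r in responses]) for i in range(k)]
    total_var = variance([sum(r) for r in responses])
    return (k / (k - 1)) * (1 - sum(item_vars) / total_var)
```

When items covary strongly, the variance of the total score grows faster than the sum of the item variances, pushing alpha toward 1; an alpha of .93 for eight items indicates highly consistent responding across the self-efficacy items.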

Conclusion

A well-developed assessment plan encompasses both classroom and program-level assessments of learning based on specific goals. For example, the measures discussed in this chapter coincide with goals in the APA Guidelines 2.0 (APA, 2013) related to the knowledge base of psychology, communication, and professional development. Both formative and summative assessments play a distinct role in promoting and measuring student learning, although educators may not consistently recognize distinctions between the two (Taras, 2002) or realize that the same types of assessments (e.g., rubrics, quizzes) may be used for both formative and summative purposes.

The tools discussed in this chapter to evaluate actual classroom learning (e.g., CATs and rubrics) and program assessment (e.g., standardized examinations) may not be classified as traditional scales. However, considering the variability in learning goals for both content and skills across courses, it would be challenging to develop scales of actual learning suitable for universal application at the course level. Moreover, multiple possibilities exist for meaningful classroom assessment (e.g., examinations, presentations, problem-solving assignments), which, if constructed and evaluated appropriately, may serve as effective measures of learning. For program evaluation, the use of such standardized assessments can provide a common metric of performance. Given these existing assessments for evaluating course- and program-specific knowledge and skills, it may not be necessary to create generalizable, traditional scales for these purposes.

On the other hand, multiple scales are available to measure students' perceptions of learning and constructs related to learning. As Rovai and colleagues (2009) noted, educational outcomes are influenced by numerous variables, including factors related to course design and

