
o Participants respond to items such as “I prefer class work that is challenging so I can learn new things,” “I think I will be able to learn what I learn in this class in other classes,” “I expect to do very well in this class,” and “I worry a great deal about tests.”

- Depth of processing (Entwistle, 2009)

- Lifelong Learning Scale (Wielkiewicz & Meuwissen, 2014)
- Procrastination (Tuckman, 1991)

o Participants respond to items such as “I postpone starting in on things I don’t like to do,” “I delay making tough decisions,” “I get right to work, even on life’s unpleasant chores,” and “when something is not worth the trouble, I stop.”

- Study Behaviors/Process (Fox, McManus, & Winder, 2001; Gurung, Weidert, & Jeske, 2012)
- Textbook Assessment and Utility Scale (Gurung & Martin, 2011)

Why Should You Use Published Scales?

The easy answer is that it saves you a lot of work. Measurement is not something done casually or quickly. Developing valid and reliable measures is a complex and involved process (Noar, 2003; Sosu, 2013), so the simple reason to use published scales is that the hard work has been done for you. Robust scale development involves multiple studies and iterations. A published scale has been through the peer review process and the associated checks and balances. Furthermore, other researchers will have also used that published scale, providing you with additional information about the construct. Correspondingly, you have the use of a scale that should satisfy two important criteria for a good scale: validity and reliability.

Validity and reliability are essential concepts in measurement. How well have you measured your outcomes and predictors? How likely are your measures to provide the same results when used again? Validity in general refers to the extent to which the scale measures what it is supposed to measure (Anastasi, 1988). There are many different forms of validity (e.g., external, statistical, internal), but when using scales we care most about construct validity. Construct validity refers to the idea that a scale is measuring what we think it is (Morling, 2015). Even when you use published scales, it is prudent to be aware of and comfortable with the main forms of construct validity so you can assess the quality of the scale. Whereas some forms of construct validity are subjective in nature, the majority are objective and easily assessed by statistical rubrics. Subjective forms of construct validity include face validity and content validity. A scale with good face validity looks like it is measuring what it is supposed to (you can see how subjective this is). Are the items plausible ways to get at the underlying concept? Content validity gets at whether a measure encompasses all parts of the underlying concept. Are the different items getting at all the different parts of the concept? To have adequate content validity, a scale should have items that get at all the different parts of a concept. Often, scales have different sub-components or subscales in order to fully capture concepts.
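
For readers who like to check these properties on their own data, a minimal sketch in Python may make the questions above concrete. It uses only NumPy, and the scores, sample, and outcome are invented for illustration; they are not drawn from any scale in this compendium.

import numpy as np

# Hypothetical total scores for the same (fictional) students on a scale
# administered twice, two weeks apart.
time1 = np.array([42, 35, 50, 28, 44, 39, 31, 47])
time2 = np.array([40, 37, 48, 30, 45, 36, 33, 49])

# Reliability question: do repeated administrations give similar results?
retest_r = np.corrcoef(time1, time2)[0, 1]
print(f"Test-retest correlation: {retest_r:.2f}")

# A rough construct-validity check: scores should relate to a conceptually
# linked outcome (here, a fictional count of late assignments).
late_assignments = np.array([5, 2, 7, 1, 6, 3, 2, 6])
validity_r = np.corrcoef(time1, late_assignments)[0, 1]
print(f"Correlation with late assignments: {validity_r:.2f}")

Larger correlations in the expected direction lend support; what counts as an acceptable value depends on the construct and the published literature for that scale.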

Objective forms of construct validity include criterion (concurrent and predictive), divergent, and convergent validity. Criterion validity assesses if the scale is related to the outcome it is
