A COMPENDIUM OF SCALES for use in the SCHOLARSHIP OF TEACHING AND LEARNING


psychometrically sound tool. However, the quality of the outlet in which a scale appears does potentially say something about its psychometric quality.² Second, in addition to the original publication that contains the scale, look to see if subsequent work has somehow improved the original scale. For example, the Teacher Behaviors Checklist (TBC; Buskist, Sikorski, Buckley, & Saville, 2002) has been subjected to subsequent psychometrically focused research (e.g., Keeley, Smith, & Buskist, 2006). The result? A scale that is better, psychometrically speaking, than it otherwise would be.³ Finally, do cited reference searches on the publication that contained the original scale and any subsequent research that amended and improved the original scale. If such cited reference searches reveal few or no citations to these sources, it is an indication, albeit an imperfect one, that perhaps the scale is not well accepted in the scientific community.

With my personality and I/O worldview, I am noticing many scales that seem like they are measuring the same construct.⁴ For example, a recent publication examined work ethic and GRIT as predictors of job satisfaction, turnover intention, and job stress (Meriac, Woehr, & Banister, 2015). Work ethic was defined as "a set of beliefs and attitudes reflecting the fundamental value of work" (p. 316). GRIT is the "perseverance and passion for long-term goals" and entails "working strenuously toward challenges, maintaining effort and interest…despite failure, adversity, and plateaus in progress" (Duckworth, Peterson, Matthews, & Kelly, 2007, pp. 1087-1088). At first read, work ethic and GRIT may sound like closely related constructs, and in fact, they were correlated (r = .44; Meriac et al., 2015). Do we really need both of these measures? Meriac and his colleagues suggested that the two do differentially predict job satisfaction, turnover intentions, and stress. Furthermore, they found that there is incremental validity in using both measures. At least within the context of I/O psychology, then, it does appear that there is value in having both. Researchers need to consider which one is appropriate to answer a given research question, or whether in fact it is worth using both. Ultimately there is no correct or incorrect choice; rather, some justification for the choice of a scale or scales is needed, particularly when there are seemingly overlapping scales to choose from.

Indeed, with so many scales available, look for those that have demonstrated incremental validity in prior research. In my opinion (and my opinion only), incremental validity seems to receive short shrift relative to other forms of validity in establishing the psychometric properties of a scale. The cynical part of me does wonder how many new scales actually predict outcomes above and beyond existing scales. The more erudite part of me believes that new scales are indeed adding to what's already available, and if such evidence has not been offered, it is incumbent on the scientific community to offer evidence that this is the case.
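The incremental-validity logic described above can be sketched in code: fit a regression with the first scale alone, refit with both scales, and examine the change in R². The sketch below uses simulated data whose names (work ethic, grit, satisfaction) and effect sizes are purely illustrative; only the .44 correlation echoes the figure reported in the text, and nothing here reproduces the actual Meriac et al. (2015) analysis.

```python
# Hypothetical sketch of an incremental-validity check: does a second
# scale (e.g., grit) add predictive power for an outcome (e.g., job
# satisfaction) beyond a first scale (e.g., work ethic)?  All variable
# names and effect sizes are simulated, illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
n = 500

work_ethic = rng.normal(size=n)
# Make grit correlate with work ethic at roughly r = .44
grit = 0.44 * work_ethic + np.sqrt(1 - 0.44**2) * rng.normal(size=n)
# Outcome depends (by construction) on both predictors plus noise
satisfaction = 0.3 * work_ethic + 0.2 * grit + rng.normal(size=n)

def r_squared(X, y):
    """R^2 from an ordinary least-squares fit with an intercept."""
    X = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - resid.var() / y.var()

r2_base = r_squared(work_ethic, satisfaction)
r2_full = r_squared(np.column_stack([work_ethic, grit]), satisfaction)
print(f"R^2, work ethic only: {r2_base:.3f}")
print(f"R^2, both scales:     {r2_full:.3f}")
print(f"Incremental validity (delta R^2): {r2_full - r2_base:.3f}")
```

A nonzero ΔR² in-sample is not enough by itself (adding any predictor can only raise in-sample R²), which is why published incremental-validity claims rest on significance tests or cross-validation of that change rather than the raw difference.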

In Chapter 14, Richmond (2015) describes the issue of relying on self-report data and not actual behavior, something inherent with any research area that relies on scales. In personality and I/O, there is great concern about common method bias. Common method bias occurs when the same general methodology (e.g., reliance exclusively on self-report data) is used to answer a research question. The trend in personality and I/O is to avoid common method bias to the
