A COMPENDIUM OF SCALES for use in the SCHOLARSHIP OF TEACHING AND LEARNING

Assessing and Reporting Validity

In addition to forms of reliability, another important component of the psychometric properties of scales is validity. As is noted earlier in this e-book (see Hussey & Lehan, 2015), there are several forms of validity (e.g., face, construct, criterion), which constitute ways in which researchers test whether variables are actually measuring the constructs they are intended to measure. Although reliability was consistently assessed and reported in SoTL research using scales, validity was far less likely to be assessed and reported. Only eight of 47 (17%) published studies using scales to measure SoTL outcomes in the past ten Teaching of Psychology issues (not including scale development-focused articles) assessed any form of validity and reported those results. Two of those eight studies reported previously published validity information rather than validity from the present study (e.g., Wielkiewicz & Meuwissen, 2014).

One example of a study that assessed and reported validity in the current study is Buckelew, Byrd, Key, Thornton, and Merwin's (2013) study of perceptions of luck or effort leading to good grades. The researchers adapted a scale of attributions for good grades from a previous study, noting that no published validity information was available for the original scale. The researchers then established validity through a pilot test in which they correlated scores on their adapted scale with a previously established criterion and found significant positive correlations for the total scale and subscales. Establishing criterion validity through a pilot test in this way is one means of ensuring valid measures before collecting actual study data. However, it should be noted that validity is truly established over time and across many studies, providing a compelling reason why routinely reporting validity is vitally important despite the dearth of studies assessing it.

Validity in its several forms (e.g., content, construct, and criterion) is crucial to researchers' confidence that the data collected measure the constructs they were intended to measure. SoTL researchers are urged to incorporate validity testing and reporting into the work they do.

The Development and Validation of Scales

There have been notable advances in the ways in which scales are developed and validated. The practice of using exploratory analyses (e.g., principal factor analysis) followed by confirmatory factor analysis has greatly enhanced researchers' confidence in the psychometric properties of the scales used to measure outcomes (Noar, 2003). The increasingly common use of structural equation modeling has further enhanced scale development by providing easily accessible ways to conduct confirmatory factor analysis and latent variable modeling (Noar, 2003).

Five of the 141 articles published in the last ten issues of Teaching of Psychology were devoted to the development and validation of a scale. These studies employed some of the most thorough methodology in scale development and validation. One particularly good example of such scale validation is Renken, McMahan, and Nitkova's (2015) initial validation of the Psychology-Specific Epistemological Belief Scale (Psych-SEBS). In this two-part study, the researchers first drafted and then refined an item pool using exploratory factor analysis, followed by confirmatory factor analysis and an assessment of the internal consistency, test-retest reliability, and convergent validity of the Psych-SEBS. In the second study, the researchers assessed criterion validity by comparing the Psych-SEBS with an established criterion and also tested
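The internal-consistency assessment mentioned above is commonly operationalized as Cronbach's alpha. A minimal sketch of the computation on synthetic data follows; the respondent count, item count, and score-generating model are assumptions for illustration, not the Psych-SEBS data.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# Hypothetical responses: 100 respondents, 5 items all driven by
# one common trait plus item-level noise.
rng = np.random.default_rng(1)
trait = rng.normal(size=100)
scores = trait[:, None] + rng.normal(0, 0.8, size=(100, 5))
print(f"alpha = {cronbach_alpha(scores):.2f}")
```

Because the simulated items share a strong common trait, the resulting alpha is high; weakly related items would push it down.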

