A COMPENDIUM OF SCALES for use in the SCHOLARSHIP OF TEACHING AND LEARNING


Content Validity

Although there are some differences in how content validity is defined in the literature, “there is general agreement in these definitions that content validity concerns the degree to which a sample of items, taken together, constitute an adequate operational definition of a construct” (Polit & Beck, 2006, p. 490). However, content validity is often left to the judgment of the researcher and/or a panel of experts to ensure only relevant items related to the construct are included (DeVellis, 2012). One issue with stopping at this step is that the judgment of whether an instrument has adequate content validity remains subjective. Some researchers suggest computing a content validity index (CVI) to better judge the psychometrics of an instrument (Polit & Beck, 2006; Wynd, Schmidt, & Schaefer, 2003). Generically speaking, this involves having a panel of raters review each item in the measure and rate how relevant each item is to the construct being measured; the researcher(s) then assess agreement between raters.
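The rating-and-agreement procedure above can be sketched numerically. The snippet below follows the general I-CVI/S-CVI approach described by Polit and Beck (2006): each expert rates each item on a 4-point relevance scale, the item-level CVI is the proportion of raters scoring an item 3 or 4, and the scale-level CVI (averaging method) is the mean of the item-level values. The ratings matrix is entirely hypothetical, for illustration only.

```python
# Hypothetical expert ratings: rows are items, columns are raters,
# each rating on a 4-point relevance scale (1 = not relevant ... 4 = highly relevant).
ratings = [
    [4, 3, 4, 4, 3],  # item 1
    [4, 4, 3, 4, 4],  # item 2
    [2, 3, 2, 3, 4],  # item 3
    [4, 4, 4, 3, 4],  # item 4
]

def item_cvi(item_ratings):
    """I-CVI: proportion of raters scoring the item as relevant (3 or 4)."""
    relevant = sum(1 for r in item_ratings if r >= 3)
    return relevant / len(item_ratings)

i_cvis = [item_cvi(item) for item in ratings]

# S-CVI/Ave: the mean of the item-level CVIs across the whole scale.
s_cvi_ave = sum(i_cvis) / len(i_cvis)

for i, cvi in enumerate(i_cvis, start=1):
    print(f"Item {i}: I-CVI = {cvi:.2f}")
print(f"S-CVI/Ave = {s_cvi_ave:.2f}")
```

In this invented panel, item 3 would stand out with a low I-CVI, flagging it for revision or removal before the index is recomputed.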

Construct Validity

Construct validity is based on a theoretical framework for how variables should relate. Specifically, “it is the extent to which a measure ‘behaves’ the way that the construct it purports to measure should behave with regard to established measures of other constructs” (DeVellis, 2012, p. 64). Some deem construct validity one of the most important concepts in psychology (John & Benet-Martinez, 2000; Westen & Rosenthal, 2003). However, there is no agreed-upon or simple way to assess construct validity (Bagozzi, Yi, & Phillips, 1991; Westen & Rosenthal, 2003). Instead, many researchers use correlations to show relationships between variables as suggested by the literature, often referred to as convergent and discriminant validity (Messick, 1995). For example, Ryan and Wilson (2014) examined the validity of a brief version of the Professor-Student Rapport Scale by comparing the brief measure to scales that were similar (e.g., the Immediacy Scale, to which it should positively relate) and dissimilar (e.g., the Verbal Aggressiveness Scale, to which it should negatively relate). There are also computations that can be performed to demonstrate levels of construct validity, including confirmatory factor analysis, effect size correlations, and structural equation modeling (Bagozzi et al., 1991; Westen & Rosenthal, 2003).

Criterion-Related Validity

Criterion-related validity is mainly concerned with practicality rather than with theoretical underpinnings, as construct validity is; “it is not concerned with understanding a process but merely with predicting it” (DeVellis, 2012, p. 61). However, many confuse construct validity and criterion-related validity because the same example listed above can be used for either type of validity; the difference is in the intent of the researcher (e.g., to explain or explore variable relationships versus simply identifying and predicting) (DeVellis, 2012). In the case of the latter, there are multiple types of correlations that can be computed to identify specific relationships among variables and, in turn, different types of validity. For example, researchers can examine whether the scores on their instrument predict future behaviors (i.e., predictive validity), correlate with
scores on another instrument shown to measure the same construct (i.e., convergent validity),

