Journal of Research in Innovative Teaching - National University
that there is a consistent positive correlation between student evaluations and student performance as measured by exams, grades, etc. (see Cashin, 1990; Huemer, 1990).
Thus, the evidence suggests that student evaluations of instruction provide valuable information. Nonetheless, Rice (2009) cautions that "student evaluations have their limits. They should never be the only means of evaluating faculty members . . . and faculty members who actually want to become better teachers—and who believe that good teaching skills are not bequeathed to them in perpetuity with the awarding of a Ph.D.—should read them over and over again. We cannot see ourselves as others see us . . . They enable academe to maintain quality instruction in the classroom and, equally important, to sustain a conversation about teaching."
Hoover-Dempsey (2009) goes on to add: "analyze the information . . . look for patterns in students' comments—identify trends, note what you have done well and what needs improvement."
The literature on student evaluations is vast and complex, and many subtle relationships and interactions are embedded in the data. Research has shown that certain disciplines tend to be rated more highly than others. Arts and humanities courses, for instance, are rated higher than social sciences and biological sciences, which are rated higher than business and computer sciences, followed by mathematics, engineering, and physical sciences (Javakhishvili, 2009). Javakhishvili further claims that a student who has no interest in the subject but takes the class because it is required might give a professor a lower score than a student who is very much interested in the class. It is reasonable, therefore, to suggest that the more difficult the subject matter and the less motivated the student, the lower the evaluation an instructor should expect.
A common criticism of student evaluations is that students tend to give higher ratings when they expect higher grades in the course (Cashin, 1990). Thus, student evaluations seem to be as much a measure of an instructor's leniency in grading as they are of teaching effectiveness (Huemer, 1990). Many believe that this is a cause of grade inflation (Goldman, 1985). Another criticism is that student evaluations encourage professors to dumb down courses in an effort to keep students happy at all costs (Ryan, Anderson, & Birchler, 1980), which results in lower academic rigor and decreased learning outcomes. This argument is supported by Ory & Ryan (2001), who suggest that among the unintended consequences of student evaluations are (a) instructors alter their teaching in order to receive high ratings (lower content difficulty, provide less content, give only high grades), and (b) students reward poor teaching by believing they can give high ratings in return for high grades. If this is indeed the typical situation, it compromises academic rigor and instructional quality.
In addition to what has already been discussed, the literature reveals a few other important facts about student course evaluations that are critical to accurate interpretation and use of the information:
• Centra and Creech (1976) found that smaller classes receive higher ratings; in particular, classes with fewer than 15 students tend to be rated higher than larger ones.
• Results with low response rates should be treated with caution. If there are fewer than 10 responses, the data need to "be interpreted with particular