The first question to consider regarding student ratings of teaching effectiveness is their purpose. Historically, student ratings of college teaching were designed to improve teaching practice or performance. Over time, however, student ratings have increasingly been used to inform personnel decisions; in many cases, this has become their exclusive purpose. Evaluation that serves both purposes is possible; however, research suggests that it is preferable to separate formative and summative evaluation – both conceptually and in practice. Furthermore, guidance for those who use student ratings of instruction is important in order to avoid misuse and misinterpretation of the results.

Improving Teaching Effectiveness

Formative evaluation refers to information that is gathered for the purpose of improving teaching. Student ratings provide feedback that instructors can use to make positive changes in their courses or teaching practice. The diagnostic information provided by such feedback can identify strengths and weaknesses as perceived by students in a particular context. Typically, such evaluations have focused on all of the choices an instructor makes while planning and teaching a course – not only in-class “performance.”

When formative evaluation is used for purposes other than teaching improvement, its impact is compromised. Ideally, this type of evaluation allows instructors to experiment with alternative teaching practices without fear of punitive repercussions. Research indicates that instructors benefit most from formative evaluation when they have helped to shape the questions posed, when they understand the feedback that is provided, and when assistance and resources for making improvements are available.

Making Personnel Decisions

Summative evaluation refers to evaluation conducted for the purpose of making personnel decisions, such as hiring, tenure, promotion, awards, and merit raises. Since student ratings provide only one source of feedback on teaching effectiveness, relying on them alone for such decisions can be problematic. When student ratings are used in combination with other evidence of teaching quality, the assessment is more representative of the instructor’s overall teaching effectiveness. Additional evidence of teaching effectiveness includes such sources as alumni ratings, peer ratings, self-assessment, syllabi and other course documents, examples of student work and progress, and teaching portfolios.

If student ratings are used as one component of teaching effectiveness in making personnel decisions, research suggests that global evaluation items (e.g., What is your overall rating of this course/instructor? Would you recommend this course/instructor to a friend with interests similar to yours?) are better measures of teaching effectiveness than responses to specific items.

Questions about Student Ratings of Teaching Effectiveness

Are students qualified to evaluate their instructors and courses?

Research indicates that students are the most qualified source to report on the extent to which the learning experience was productive, informative, satisfying, or worthwhile. While opinions on these matters are not direct measures of instructor or course effectiveness, they are legitimate indicators of student satisfaction, and there is substantial research linking student satisfaction to effective teaching.

Are ratings related to learning? (i.e., Are ratings valid?)

A meta-analysis of 41 research studies provides the strongest evidence for the validity of student ratings, because these studies investigated the relationship between student ratings and student learning. They found consistently high correlations between students’ ratings of the “amount learned” in a course and their overall ratings of the teacher and course.

Are student ratings reliable?

The research indicates that student ratings are remarkably consistent, whether reliability is measured within classes, across classes, over time, or in other ways.

Do students rate instructors on the basis of expected or given grade?

This is currently the most controversial question in student ratings research. Most studies show a weak correlation between student ratings and expected grade. Most researchers conclude that some relationship between ratings and grades is to be expected: good teaching fosters student learning, which in turn produces student achievement and satisfaction, and the ratings simply reflect this connection.

Are ratings based solely on popularity or an instructor’s ability to entertain?

This question assumes that a popular or entertaining instructor is not one who fosters student learning. There is no basis for this assumption and no research to substantiate it. Studies have shown that “expressive” instructors maintain student attention, and attentive students may be more engaged in the learning process and, therefore, may learn more. Expressiveness, however, relates to a range of specific teaching behaviors, not to popularity or entertainment value.

Are ratings affected by situational variables (i.e., possible biases)?

Research indicates that the effects of situational variables are quite small. Such variables include: class size, out-of-major course, time of day the class meets, difficulty level of course within a discipline, gender of instructor or student, student academic ability, teaching style, and personality of instructor or student.

Can students make accurate judgments while still in class or in college?

The research does not support the claim that students can assess teaching only after years of career experience. Studies comparing ratings within the same semester, in the following semester, in the next year, immediately after graduation, and several years later indicate consistent ratings of teachers by students over time. Although students may realize later that a particular subject was more important than they thought, student opinions about instructors change very little over time.

More Information

If you would like more information on this topic, please contact the Office by email or by phone at 443-8700.