Student evaluation of teaching (SET), also known as student ratings, is a method accepted and used by education policy decision-makers to evaluate faculty performance, determine teacher effectiveness, and improve instruction. The technique treats students as competent judges of, among other things, the quality and effectiveness of instruction, their satisfaction with it, their interest in the subject, and the course content.
The American Sociological Association and 17 other professional groups, however, said that student evaluation is not the best measure of teaching effectiveness. ASA is a nonprofit organization dedicated to advancing the profession and discipline of sociology, which is the study of human social relationships and institutions.
Why Universities and Colleges Should Not Rely Too Much on SET
The professional groups also urge universities and colleges not to rely too heavily on SETs in personnel decisions. In a statement released by the sociological association and shared with other scholarly groups, they described student evaluations as instruments that are “cheap” and “easy to implement.” They reiterated that SETs provide an easy way of gathering information to evaluate teachers for merit raises, contract renewal, promotion, tenure, and hiring.
They added that despite the popularity of SETs, evidence shows that using them in personnel decisions can be “problematic.” For example, the ratings can be influenced by factors such as class size, subject, time of day, or whether the course is required, none of which is related to teaching effectiveness. Furthermore, undue weight is often given to small differences in SET scores.
In North American settings, SETs have likewise been found to be biased against women and people of color. For instance, students rate women instructors lower than men even when the instructors exhibit the same teaching behaviors. The design of the rating scale itself can also affect how female instructors are scored relative to their male colleagues, particularly in male-dominated fields.
What the American Sociological Association Suggests
Instead of using SETs in their usual format as the primary measure of teaching effectiveness during faculty reviews, the American Sociological Association and other scholarly societies suggest that educational institutions continue to collect and use student feedback but do so through evidence-based practices. These practices include:
1. Questions on student evaluations should focus on student experiences. The instrument should be treated as a platform for students to give feedback rather than as a formal rating of teaching effectiveness. The University of North Carolina Asheville and Augsburg University have put this into practice by renaming their instruments the “Student Feedback on Instruction Form” and the “University Course Survey,” respectively. The titles emphasize that the documents gather feedback rather than deliver an evaluation.
2. SETs should be considered part of a “holistic assessment” that also includes instructor self-reflections, assessment of the teaching materials used, and peer observations.
3. SETs should be used as an instrument to document the pattern of feedback and not to compare the individual members of the faculty or their department.
The American Sociological Association also cited the approach of the University of California, Irvine, which requires its faculty to present two types of evidence of teaching effectiveness. Aside from student evaluations, professors also hand in “reflective teaching statements.”
ASA’s statement has been endorsed by the Sociologists for Women in Society, the Society of Architectural Historians, the American Folklore Society, the Middle East Studies Association, the Society for Personality and Social Psychology, and the American Dialect Society, among others.
Overuse of SETs
Philip Stark, a professor of statistics at the University of California, Berkeley, who has written several studies on how SETs are a flawed measure of teaching effectiveness, said that student evaluations measure only student satisfaction. Satisfaction, however, is not the same as teaching effectiveness. Stark added that associations play a significant part in educating their members about the limitations of student evaluations, such as in personnel decisions.
A 2016 study titled “Student Evaluations of Teaching (Mostly) Do Not Measure Teaching Effectiveness,” by Anne Boring of Paris Dauphine University in France and her co-authors, found that SETs are more sensitive to grade expectations and gender bias than they are to teaching effectiveness. Their study drew on data from France and analyzed 23,001 SETs. They concluded that SETs put female instructors at a disadvantage and that the bias varies by institution and course. For example, male students give “significantly” higher SET scores to male instructors.
The European Union’s Seventh Framework Program has provided funding for Boring and team’s research.