Rated Unfit: WashU’s course evaluation system

Staff Writer and Contributing Writer

(Jaime Hebel | Head of Illustration)

As the fall semester wraps up, you are swamped with the stress of finals and the anticipation of winter break when you see the email appear in your inbox: “Reminder: Course Evaluations Are Open.”

Begrudgingly, you decide to address them sooner rather than later, going through the motions of rating professors on inconsistent scales and answering the same multiple-choice questions 50 times over. Only if you have a really strong opinion do you leave a comment on the form. After finishing your last evaluation, you shut your computer, never to think of them again.

However, in your haste to complete these evaluations, many serious issues may have passed you by. Some of your professors aren’t tenured, which means their evaluations may have serious implications for their jobs. Or maybe some of your own implicit biases influenced your responses. Perhaps your negative commentary about a professor whose first language isn’t English was rooted in something deeper than the perceived inconvenience it caused you.

While filling out your fourth evaluation, there’s a chance you stopped taking the time to reflect on the true quality of your professors and turned instead to superficial traits that are likely irrelevant to their skill, such as their hair or outfits.

This experience is common among WashU students, and it sheds light on the far-reaching, deep-rooted issues with the course evaluation system. 

How course evaluations work

At WashU, course evaluations are not centralized, meaning that Arts & Sciences, Olin, McKelvey, Sam Fox, and the graduate programs all have different methods of conducting course evaluations. The lack of a standardized system has left the responsibility for course evaluations distributed across several administrative bodies in different schools.

Hillary Elfenbein, Olin professor of Organizational Behavior and former president of the Association for Women Faculty (AWF), said that having a different model for course evaluations for each school may create difficulties in interpreting a centralized data set.

“[The rating scales] are different at every school,” Elfenbein said. “Not only are they different, but you might have the same questionnaire item but it’s one to five in Arts and Sciences and one to 10 in Olin and one to seven in another school.” 

Even though students within a school may be filling out an identical form for different professors, not all course evaluations are weighted equally. For some professors, anxiety around course evaluations largely has to do with the professor’s title and experience.

Elfenbein explained that depending on the seniority of the professor, the importance of course evaluations can vary significantly. 

“[An adjunct’s] course evaluations matter tremendously,” Elfenbein said. “Their teaching evaluations can be the difference between getting hired and getting dropped. For full-time teaching faculty, colleagues may mentor them and give them the benefit of the doubt, so they have a little buffer. For tenured faculty, of course, it does matter, but they’re not literally being hired every class they get based on their course evals [from] last time.”

William Maxwell, a tenured professor of English and African and African American Studies, explained that professors pursuing the research track do not rely as heavily on course evaluations because their research holds more weight than their teaching. 

“Research is much more important at Washington University than teaching,” Maxwell said. “Not to say that if you’re an awful teacher, they’ll let that pass. That’s just to say that those numbers aren’t as important.”

Another issue is that course evaluations rely on self-reported data, leaving room for differences based on factors outside the professors’ control. Dahlheimer mentioned that required classes generally receive lower ratings than electives, as do early-morning classes in comparison to afternoon classes.

These issues may negatively impact all faculty who receive course evaluations, but some are more affected than others. Elfenbein said that for Ph.D. students, adjuncts, lecturers, and tenure-track faculty who don’t get tenure, course evaluations play an important role in future job prospects.

Maxwell emphasized that there is a complicated relationship between professors and course evaluations that evolves as professors become more experienced.

“Younger professors are rightly worried about what these numbers might mean,” Maxwell said. “Early on in your career, the teaching evaluations may mean too much. For many people, later, they mean a bit too little, in that you can essentially ignore them if you want.”

Student Life analyzed a report published by the AWF, which advocates for women’s issues and supports women faculty, and found that some young professors are so dependent on the success of their course evaluations that they modify their teaching.

One anonymous response to the survey question “How does the current evaluation process impact your teaching?” reads: “I am so fearful of negative evaluations that I have trouble enforcing deadlines and other boundaries, and I am hesitant to give grades lower than A-/B+, even when lower grades would be well-deserved.”

Implicit bias

Course evaluations have been marred by implicit and explicit biases since their inception. The Rutgers University School of Law published an article with evidence that students’ implicit biases heavily influence their evaluations of their professors. The studies indicate that much of the variance in student evaluations stems from the students themselves, not the course or the instructor.

Another former president of AWF and Senior Lecturer of Technical Writing, Seema Dahlheimer, said women, people of color, and those who have an accent or aren’t native English speakers receive lower ratings on course evaluations. 

Misogyny, in particular, has proven to be pervasive in course evaluations nationally, as shown in a study released by the Journal of the European Economic Association. Its findings show that gender bias significantly affects women when they teach in male-dominated departments.

Both male and female students evaluate female instructors lower than male instructors. For male students, this difference is 20.7% of a standard deviation, while for female students, it is 7.6% of a standard deviation, according to the Journal of the European Economic Association.

Gender differences in course evaluations are something Elfenbein said has been consistent across the universities where she has worked.

“When I was a professor at UC Berkeley, what [the female faculty] would do was trade envelopes with each other and throw the unprofessional pages out,” Elfenbein said. “I had a colleague who got the comment ‘nice ass.’”

Elfenbein has seen the misogyny of course evaluations firsthand by comparing her comments to those of her husband, who is also a professor — a comparison that reveals glaring discrepancies.

“I’ve gotten ‘you’d be prettier if you wore makeup,’ and to my knowledge, the male professors don’t get anything like that,” Elfenbein said. “In my husband’s whole career, he has once gotten a comment about his appearance, and it was that ‘we liked your ties.’”

Dahlheimer pointed out that certain course evaluations at WashU ask students to give professors a humor rating, which has been singled out as being problematic and perpetuating gender inequality in evaluations.

“The College of Arts and Sciences has a question on some course evaluations about the professor’s sense of humor,” Dahlheimer said. “Typically, in the literature, we’ve found that people think men are funnier, so if those numerical scores are going towards people’s promotion or their pay or decisions like that, [that’s an issue].”

For students, it can be challenging to mitigate the impact of implicit biases in how they fill out course evaluations. Junior Jenny Rong said she attempts to be fair in her evaluations but acknowledges that bias plays a role. 

“Implicit bias is hard to combat,” Rong said. “I probably have [implicit biases] but not any that I’m very conscious of. I think I treat the professors by the way they teach, and the adjectives I would use are ‘disorganized’ or ‘organized’ or if they’re eloquent or not.”

Elfenbein asked students to consider more deeply how these biases may manifest themselves in their course evaluations and ensure that evaluations are focused, to the greatest extent possible, on classroom abilities.

“Faculty who don’t speak English as their first language or faculty that have an accent can get lower course evaluations, and I hope students can reflect on how they can be kind and try to judge people based on the quality of their work rather than the superficial things about them,” Elfenbein said. 

How are the evaluations used?

Vice Dean of Undergraduate Affairs in the College of Arts & Sciences, Erin McGlothlin, said the University’s Promotion and Tenure Committee looks at course evaluation reports to give insight into a professor’s growth.

“Whenever we review the case of a faculty member for promotion, we do look at the aggregate scores,” McGlothlin said. “Especially for a newer faculty member, you want to see the scores going up over time.”

While growth may be important to administrators like McGlothlin, consistently strong course evaluations are still critical to young faculty. This can be especially difficult when, as Dahlheimer said, many students only fill out course evaluations when they either strongly like or dislike a professor. She explained that the middle ground of opinions in course evaluations is often missing, and she finds value in more nuanced feedback.

“[When] we get very few responses, they’re really bimodal,” Dahlheimer said. “It’s the people who hated the class and wanted to give us all ones [out of seven, or] the people who absolutely love the class and gave us the highest score. All of that in between, which could be really useful, isn’t getting captured.”

Junior Katie Zhu said she will take the time to fill out her course evaluations when she has a positive experience in a class.

“If I have something negative [to say,] I probably won’t fill out the course eval,” Zhu said. “But if I thought the class was great, [and] the professor [was] great, I’ll take some time and do it.”

Many professors offer extra credit for giving feedback through course evaluations. Though Dahlheimer sees merit in incentivizing responses, she opposes offering extra credit for filling out course evaluations, because completing them is unrelated to a student’s mastery of the course material.

Instead, she likes to designate time in class for students to fill out the surveys. 

“What I do is give course time in that last week of class,” Dahlheimer said. “I say, ‘grab your laptops, grab your phones, do this course eval for me for 15 minutes, [and] I’ll leave the room.’” 

Navigating the path forward

According to Vice Provost for Educational Initiatives Jennifer Smith, administrators have been deliberating changes to the course-evaluation process for some time, but no significant action has been taken. The problems surrounding course evaluations are systemic and would require collaboration across schools to make any kind of progress.

The AWF report provides recommendations about course evaluations relating to tenure and how to potentially address the low response rate; it also advises the Danforth Campus to adopt a standard set of questions using a seven-point scale.

The report also identifies that students don’t feel inclined to fill out their course evaluations when they don’t understand what they are used for. Zhu related to this issue.

“Usually my professors are like, ‘Please submit the course evaluation if you can.’ I usually do it just because I think it helps them somehow,” Zhu said. “I don’t know exactly how, but professors are always saying to fill it out.”

The results from course evaluations are published by the Office of the University Registrar and are readily available for students to access; however, many students are unaware of this. McGlothlin reiterated the value of these published course evaluations and the impact they can hold not only for students but also for faculty.

“[Some] students read the course evaluations to decide on what courses they want to take. If a student doesn’t have the time to do the comments, they should at least do the ratings, because that’s their chance to have a say and help future students make decisions,” McGlothlin said. 

Even if students were more motivated to fill out their evaluations, the content of the course evaluations still has room for improvement, according to Rick Moore, the Associate Director for Faculty Programming at the Center for Teaching and Learning. Moore holds workshops in which faculty members work on writing the open-ended questions on their course evaluations, paying specific attention to how these questions ought to be phrased to gain the most fair and useful feedback. 

Moore said the standardized questions in course evaluations should be revised to be more effective.

“I think it would be great for WashU to revisit everything and double-check that the questions are the best that they can be,” Moore said. “It’s always going to be imperfect, and there’s always some room for bias in them. That’s kind of the nature of these kinds of questions, but it would be great to continue working towards less-biased [phrasing].”

Senior Advisor to the Chancellor for Leadership Andrew Knight said that adding a second data-collection system could help address the problems inherent in course evaluations.

“For example, we could have trained observers come into the classroom and examine the live classroom using a structured framework, an observational method,” Knight said.

Even before the AWF report was published, the issues with course evaluations had been visible to the Vice Provost’s office for some time.

Smith said she grapples with pinpointing who actually holds responsibility for course evaluations because so many entities have a stake in the process.

“We would have to pull people together from across the schools. Within that group, [we would need to] come to some agreements about changes we wanted to make,” Smith said. “Then, if there are school processes for approval, it would have to go back to those schools. Quite honestly, that is why this hasn’t been something I have tackled yet.”

Smith said that while administrators are keen to fix the issues around course evaluations, their complexity makes starting that process daunting.  

“This is one of the things that absolutely eats away at me that we haven’t done, but I haven’t managed to find the ability or time to make it a priority,” Smith said. “We should really fix this.”
