University offering resources and support — not policies — for AI in the classroom


Illustration by Ryan Davis

This fall semester is shaping up to be one of trial and error for how generative AI, technology that produces content mimicking human work, fits into education. Washington University has not implemented universal policies on the use of AI technology, leaving it to individual professors to decide how much to engage with it or limit its use.

Some professors want to integrate AI into assignments and teaching to help students learn how to use the new technology. Others are telling students not to use the technology at all, out of concern that it will stymie learning or hurt academic integrity.

Administrators have offered guidelines for how faculty can think about AI in pedagogy, but it has been largely up to professors to decide how much to engage with the University’s AI resources.

Provost Beverly Wendland sent an email to professors in early August that included links to AI informational resources and to a University web page titled “Addressing Artificial Intelligence in 2023.” 

The web page focuses primarily on how AI affects academic integrity, but it ends with information for faculty who want to learn more about using the technology in their courses. The page states that “Where AI tools can be used to enhance student learning and/or prepare students for effective use of these tools in their future careers, we encourage incorporation of AI tools in coursework.”

The information included on this page reflects many educators’ focus on the obstacles that AI is creating for education.

“Most of the conversation is about concerns rather than opportunity,” Dr. Dennis Barbour, Chair of the Faculty Senate Council, said. Barbour, who serves as a liaison between faculty across the University and the administration, specified that these concerns most often involve academic integrity. “The thing that might be missing [from this discourse] is about the opportunity.”

One of the key obstacles for higher-education institutions is that it is difficult to mandate that faculty adopt policies or undergo new training, because doing so could be viewed as an infringement of academic freedom.

“The autonomy of an individual faculty member in a classroom is connected to academic freedom; it is fairly sacred,” Dr. Jen Smith, Vice Provost for Educational Initiatives, said. “We have a hard time requiring anything, so ultimately, the policy is at the discretion of the faculty member.”

Dr. Jay Turner, Head of the Division of Engineering Education, has embraced the University’s approach of offering guidance and encouraging transparency about AI usage rather than setting policies. He said that general policies would likely not remain relevant for long.

“I think policies could run the risk of being suboptimal because it’s going to be so case-specific how this plays out,” Turner said. “Instead, the key is to help instructors structure their conversations with their students in a way [where] everyone’s crystal-clear on the opportunities and the guardrails on [using AI].” 

In an email to Student Life, Wendland outlined some of the resources that the IT department and the Center for Teaching and Learning (CTL) offer faculty regarding AI.

These resources include access to the online learning platform LinkedIn Learning and to Gartner research, as well as an ongoing series the IT department plans to publish in The Record to educate faculty about AI.

The Center for Teaching and Learning offered workshops about AI over the spring and summer in which several hundred faculty members participated, Dr. Eric Fournier, the Center’s Director of Educational Development, said. One of the summer workshops was directed toward faculty teaching writing-intensive courses; the other was more general.

Fournier said the workshops encouraged professors to think about the purpose of their assignments and how AI tools can support or undermine those goals.

“I’m certainly concerned about faculty who are ignoring this…and are thinking what worked 10 or 15 years ago will still work in their courses,” Fournier said. 

Smith echoed a similar concern about professors who might not realize how widespread the use of this technology has become. “I think some faculty are really in denial about what’s going on.”

Wendland approved a proposal last Friday to eventually add a staff member to the CTL who will focus on training and supporting faculty in using AI in their teaching. Smith said this staff member will likely hold department-specific meetings about AI so that more faculty can take part in these conversations.

“Of the around 1,600 Danforth faculty that we have, not everybody is listening, and I am hoping that getting into people’s departments will help us raise a little more awareness,” Smith said.

In addition to department-level meetings and opportunities to interact with the CTL in workshops, Smith said that professors can sign up for one-on-one consultations with CTL members to brainstorm ways that AI can be integrated into their courses.

Beyond administrative support, several professors encouraged students to talk with their instructors about how best to use these technologies.

“I think students should ask questions about how we as faculty think they can best use ChatGPT,” Dr. Joseph Loewenstein, Director of the Humanities Digital Workshop and the Interdisciplinary Project in the Humanities, said. “I think they should not simply talk to each other; I think they should talk to us.”

Barbour similarly said he hopes students talk with their professors about how to engage with AI in their courses.

“You might have professors who just can’t imagine how it would play in their classrooms,” he said. “I would encourage students to prompt conversations with their faculty — often, faculty are really interested in having that conversation.”

Aside from specific course-policy questions, professors are contending with how this technology might change what students should get out of their education, and whether AI might eventually become another staple of the educational toolkit.

“Using tools of this sort will be just as important as using a word processor in the future,” Barbour said, echoing a comparison that several professors made between generative AI and now fully accepted technologies like spell-check and typing.

Barbour further likened generative AI tools to Aladdin’s genie, with both granting wishes in the form of generated content.

“What we need to learn is how to get exactly what we want out of that and not what we don’t want,” Barbour said. “Managing that genie is going to be one of the biggest challenges of the creative generation: [the student body’s] generation.”

Dr. Ruopeng An, an associate professor at the Brown School, said he thinks the University sits in the middle of the range in how quickly universities are responding to advances in artificial intelligence. An, whose expertise includes applications of artificial intelligence, said he thinks this middle ground is a good place for the University to be.

“It is safer for us to observe and then make our response based on what has happened in other institutions making really proactive policies,” he said, referencing the University of Hong Kong’s decision to ban and then unban ChatGPT.

“We’re in untapped waters. We all have to have the courage to tap into that water and see what’s going to happen,” An said. “It’s scary, and it’s also exciting.”
