News
Faculty implements range of guidelines to address AI’s effect on academic integrity
This is the first full academic year for professors and students to contend with the academic integrity implications of generative AI.
In the past, students could plagiarize information, but their work could be scanned by highly accurate plagiarism-detection tools that could pinpoint which parts of the content were copied from elsewhere on the internet.
AI detectors, by contrast, base their decisions on machine-generated probabilities and are thus unable to pinpoint exactly why content is flagged for potential plagiarism. That uncertainty makes it nearly impossible for a professor to be sure that a given piece of work was plagiarized using AI.
The academic integrity section of the “Syllabus Resources and Template Language” document that faculty receive states that “In all academic work, the ideas and contributions of others (including generative artificial intelligence) must be appropriately acknowledged, and work that is presented as original must be, in fact, original.”
How professors take it from there is up to them. Syllabi run the spectrum from permitting the technology to banning it outright, with little to no way to enforce those bans.
Jennifer Smith, Vice Provost for Educational Initiatives, said that professors should assume that students are going to be using ChatGPT and that nobody she’s spoken with is “banning generative AI,” though she acknowledged that might not be the case for all professors.
Smith said that aside from prohibiting the direct copying and pasting of a chatbot’s response to a prompt, which professors might be able to flag on their own as plagiarism, it is futile to try to ban the technology.
She said a philosophy colleague argued that enacting an unenforceable ban undermines the rule of law. “When you make rules that are fundamentally unenforceable, you diminish the value of rules in general,” Smith said, summing up his point.
Turnitin is still being used on Canvas to try to catch academic-integrity violations, including AI-related ones, with the caveat that professors should not rely solely on the technology to conclude that someone has used AI.
“I don’t want students to be freaked out that they’re going to get falsely accused of using AI,” Smith said. “That’s why we ended up in this middle place where that’s something that might alert a faculty member to take a closer look at a student’s work to see if it shows any of the other signs of use of AI.”
Some professors are letting their students use AI but requiring them to disclose when they do.
Dr. Joseph Loewenstein, Director of the Humanities Digital Workshop and the Interdisciplinary Project in the Humanities, said he’s asking students to tell him when they’ve used AI, in part to help him pinpoint why something in their work might be inaccurate.
“I’m going to encourage students to tell me if they’re using it, but frequently, I won’t have a way of confirming or disproving [that],” he said.
He added that it may be a waste of time to try to police students’ use of these technologies and that it’s ultimately up to students to take responsibility for their own education. “I’ll explain that ‘It’s your education,’” Loewenstein said. “If you want to cut corners, if you want to use ChatGPT to make paper-writing go faster, you’re not going to learn as much.”
Dr. Wolfram Schmidgen, a Professor of English, is one of many faculty members who have asked students not to use ChatGPT in their courses. His syllabus notes that he “will treat the submission of an essay that has been partially or wholly generated by a computer as a violation of academic integrity.”
Schmidgen wrote in an email to Student Life that his policy acts as “a basic reminder of the values that govern a course such as mine,” which include “individual expression, original analysis and insight, and independent thought.”
He also acknowledged the unreliability of AI detectors, saying that he can’t fully enforce his policy on academic integrity. “But I thought it was important to remind students that the ArtSci policy on academic integrity obviously applies to ChatGPT-generated content, even if we have, as of yet, no very effective way to identify such content.”
Regardless of their policies on AI use, many professors across departments are working to develop assignments that AI can’t answer well.
Dr. Eric Fournier, Director of Educational Development at the Center for Teaching and Learning, said that the faculty AI workshops the Center hosted over the spring and summer encouraged faculty to engage with the technologies to see what their limits are.
“Faculty were really behaving like students,” he said. “They were taking their essay prompts and putting them into ChatGPT and looking at the output and critiquing it. They then spent time refining the prompts to make it harder for AI to complete the assignments well.”
He said that prompts about current events or literature specific to the class are harder for AI to answer comprehensively, so professors can lean into those themes in their assignments.
Fournier also said that he asked faculty to, at a minimum, spend a couple of hours engaging with the chatbots like they did in the workshops. He said this includes “entering the essay prompts from your course, looking at the output, [and] playing with the tools so you understand enough about them to make effective policies for your course.”
One idea that’s been floated by some academics and AI pundits is to shift towards more in-person assessments to counteract increases in undetectable cheating.
Fournier said he’s planning on increasing the amount of in-person testing he conducts in his own class, specifically to test his students on knowledge accumulation. “Recall questions might be best answered in class, on paper, instead of [on] take-home exams. And then keep the more in-depth analyses as papers.”
His line of thinking reflects the commonly held understanding that generative AI is currently stronger at basic regurgitation of facts than at producing thoughtful argumentative writing or literary analysis.
Fournier also said professors could ask students to turn on track changes in their documents so instructors can examine the forensics of how the work was written.
Dr. Aaron Bobick, Dean of the McKelvey School of Engineering, brought up a similar idea to Fournier’s, focusing on checking for academic integrity in coding assignments. He said professors could ask students to show iterations of their code or engage in dialogue with students about parts of their code to test their understanding.
“There are a lot of things that are open,” he said. “Some of this is going to be incredibly frustrating to our students this semester because they’re going to get an awful lot of different directions from different faculty in different situations. I hope they cut us a little slack because this is a new phenomenon.”
Both Bobick and his colleague Dr. Jay Turner, Head of the Division of Engineering Education, brought up how the size of a class shapes the challenge of assessing students.
Turner said he once taught a course with about 15 students and was able to discuss term projects with individual students to see how well they knew the material. “That doesn’t scale well, but [there are] other things we can be doing.”
He focused on the role that Assistant Instructors, undergraduate teaching assistants, can play in this assessment process.
“Our aspirational goal is that, in the very near-term, we’ll be leveraging our Assistant Instructors, our teaching assistants, in ways that help them really understand what the students know, but more importantly, [that] help the students learn how to learn in this new environment,” Turner said.