Why Teachers Should Talk to Students Before Accusing Them of Using AI to Cheat


When schools first became aware that new versions of generative artificial intelligence tools could churn out surprisingly sophisticated essays or lab reports, their first and biggest fear was obvious: cheating.

Initially, some educators even responded by going back to doing things the old-fashioned way, asking students to complete assignments with pencil and paper.

But Michael Rubin, the principal of Uxbridge High School in Massachusetts, doesn’t think that approach will prepare his students to function in a world where the use of AI is expanding in nearly all sectors of the economy.

“We’ve been trying to teach students how to operate knowing that the technology is there,” Rubin said during a recent Education Week K-12 Essentials Forum about big AI questions for schools. “You might be given a car that has the capacity of going 150 miles an hour, but you don’t really drive 150 miles an hour. It’s not about the risk of getting caught, it’s about knowing how to use the technology appropriately.”

While students shouldn’t pass off writing produced by AI tools like ChatGPT or Gemini as their own, generative AI can act as a brainstorming partner or tutor for students, particularly those who don’t have other help completing their assignments, he said.

Rubin recalled that his daughter recently needed his assistance with a history assignment. “She has me to go to, and some kids don’t,” he said. “We do believe that the AI chatbots can sometimes be that great equalizer in terms of academic equity.”

But he added, “I did not do the work for my kid. So I want to make sure the AI chatbot isn’t doing the work for anybody else’s either.”

Rubin’s school uses a tool that helps teachers get a sense of how students composed a document they later turned in for an assignment. It allows teachers to see, for example, if a student did a lot of cutting and pasting—which could indicate that they took chunks of AI writing wholesale and passed it off as their own work.
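The article doesn’t name the tool Rubin’s school uses or describe how it works internally. Purely as a rough illustration of the general idea, an edit-history heuristic that measures how much of a finished document arrived through large paste events, here is a minimal Python sketch. Every name in it (EditEvent, paste_ratio, the 200-character threshold) is hypothetical and not drawn from the article.

```python
from dataclasses import dataclass

@dataclass
class EditEvent:
    kind: str   # "typed" or "pasted" -- hypothetical event labels
    chars: int  # characters this event added to the document

def paste_ratio(events, min_paste_chars=200):
    """Fraction of the document's characters that arrived via large pastes."""
    total = sum(e.chars for e in events)
    pasted = sum(e.chars for e in events
                 if e.kind == "pasted" and e.chars >= min_paste_chars)
    return pasted / total if total else 0.0

# A document assembled mostly from two big pastes scores high -- a prompt
# for a conversation with the student, not proof of cheating.
history = [
    EditEvent("typed", 120),
    EditEvent("pasted", 650),
    EditEvent("typed", 40),
    EditEvent("pasted", 480),
]
print(f"paste ratio: {paste_ratio(history):.0%}")
```

The point of a heuristic like this, as both Rubin and Vance suggest, is that its output is a starting point for a conversation about how the document was composed, not a verdict.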

If a teacher at Rubin’s school suspects one of their students plagiarized content from an AI tool, the teacher doesn’t launch into an accusatory diatribe, he said.

Instead, they’ll use it as a “learning opportunity” to talk about appropriate uses of AI, and perhaps allow the student to redo the assignment.

“It’s not just about giving a zero and moving on,” Rubin said.

Never assume AI-detection tools are right about plagiarism

Those conversations are important, particularly when a teacher suspects a student of cheating because an AI detection tool has flagged work as potentially plagiarized, said Amelia Vance, the president of the Public Interest Privacy Center, a nonprofit organization that aims to help educators safeguard student privacy. Vance also spoke at the same Education Week forum.

Most AI detection tools are wildly inaccurate, she noted. Studies have found that commercially available detection tools tend to erroneously identify the work of students of color and those whose first language is not English as AI-crafted.

Programs that look at whether a student copied and pasted huge swaths of text—like the one Rubin’s school uses—offer a more nuanced picture for educators seeking to detect AI-assisted cheating, Vance said. But even they shouldn’t be taken as the final word on whether a student plagiarized.

“Unfortunately, at this point, there isn’t an AI tool that sufficiently accurately detects when writing is crafted by generative AI,” Vance said. “We know that there have been several examples of companies that say, ‘We do this!’ or even experts in education who have said, ‘This is available as an option to deal with this cheating thing.’ And it doesn’t work.”

The kind of technology that Uxbridge High School relies on gives educators “a better narrative” to work with than other types of detection tools, Vance added. “It’s not just, ‘Is this student cheating or not?’ It’s, ‘How is this student interacting with the document?’”

That’s why Uxbridge’s practice of talking directly with students when AI cheating is suspected is an important first step.

If, in those conversations, a student admits to cheating with AI, “you need to make it clear to the student that is not acceptable,” Vance said. But teachers should never take the word of an AI detector, or even the type of product Rubin described, as gospel.

“Avoid ever assuming the machine is right,” Vance said.