Parents Sue After School Disciplined Student for AI Use: Takeaways for Educators

The parents of a Massachusetts teenager are suing his high school after they say he was unfairly punished for using generative artificial intelligence on an assignment.

The student used a generative AI tool to prepare an outline and conduct research for his project, and when the teacher found out, he was given detention, received a lower grade, and was excluded from the National Honor Society, according to the lawsuit filed in September in U.S. District Court.

But Hingham High School did not have any AI policies in place during the 2023-24 school year, when the incident took place, much less a policy on cheating and plagiarism involving AI tools, the lawsuit said. And neither the teacher nor the assignment materials ever indicated that using AI was prohibited, according to the lawsuit.

On Oct. 22, the court heard the plaintiffs’ request for a preliminary injunction, a temporary measure to maintain the status quo until a trial can be held, said Peter Farrell, the lawyer representing the parents and student in the case. The court is deciding whether to issue that injunction, which, if granted, would restore the student’s grade in social studies and remove any record of discipline related to this incident, so that he can apply to colleges without those “blemishes” on his transcript, Farrell said.

In addition, the parents and student are asking the school to provide training in the use of AI to its staff. The lawsuit had also originally asked for the student to be accepted into the National Honor Society, but the school had already granted that request before the Oct. 22 hearing, Farrell said.

The district declined to comment on the matter, citing ongoing litigation.

The lawsuit is one of the first in the country to highlight the benefits and challenges of generative AI use in the classroom, and it comes as districts and states continue to navigate the complexities of AI implementation and confront questions about the extent to which students can use AI before it’s considered cheating.

“I’m dismayed that this is happening,” said Pat Yongpradit, the chief academic officer for Code.org and a leader of TeachAI, an initiative to support schools in using and teaching about AI. “It’s not good for the district, the school, the family, the kid, but I hope it spawns deeper conversations about AI than just the superficial conversations we’ve been having.”

Conversations about AI in K-12 need to move beyond cheating

Since the release of ChatGPT two years ago, the conversations around generative AI in K-12 education have focused mostly on students’ use of the tools to cheat. Survey results show AI-fueled cheating is a top concern for educators, even though data show students aren’t cheating more now that they have AI tools.

It’s time to move beyond those conversations, according to experts.

“A lot of people in my field—the AI and education field—don’t want us to talk about cheating too much because it almost highlights fear, and it doesn’t get us in the mode of thinking about how to use [AI] to better education,” Yongpradit said.

But because cheating is a top concern for educators, Yongpradit said they should use this moment to talk about the nuances of using AI in education and to have broader discussions about why students cheat in the first place and what educators can do to rethink assignments.

Jamie Nunez, the western regional manager for Common Sense Media, a nonprofit that examines the impact of technology on young people, agreed. This lawsuit “might be a chance for school leaders to address those misconceptions about how AI is being used,” he said.

Policies should evolve with our understanding of AI

The lawsuit underscores the need for districts and schools to provide clear guidelines on acceptable uses of generative AI and educate teachers, students, and families about what the policies are, according to experts.

At least 24 states have released guidance for K-12 districts on creating generative AI policies, according to TeachAI. Massachusetts is among the states that have yet to release guidance.

More than a quarter of teachers (28 percent) say their district hasn’t outlined an AI policy, according to a nationally representative EdWeek Research Center survey of 731 teachers conducted in October.

One of the challenges with creating policies about AI is that the technology and our understanding of it are constantly evolving, Yongpradit said.

“Usually, when people create policies, we know everything we need to know,” he said. With generative AI, “the consequences are so high that people are rightly putting something into place early, even when they don’t fully understand something.”

This school year, Hingham High School’s student handbook states that “cheating consists of … unauthorized use of technology, including Artificial Intelligence (AI),” and that “Plagiarism consists of the unauthorized use or close imitation of the language and thoughts of another author, including Artificial Intelligence.” This language was added after the incident at the center of the lawsuit.

But an outright ban on AI tools is not helpful for students and staff, especially as AI use becomes more prevalent in the workplace, experts say.

Policies need to be more “nuanced,” Yongpradit said. “What exactly can you do and should you not do with AI and in what context? It could even be subject-dependent.”

Another big challenge for schools is the lack of AI expertise among their staff; these are skills every teacher needs to be trained in and comfortable with. That’s why there should also be a strong foundation of AI literacy, Yongpradit said, “so that even in situations that we haven’t thought of before, people have the framework” they need to assess the situation.

One example of a more comprehensive policy is that of the Uxbridge school district in Massachusetts. Its policy says that students can use AI tools as long as it’s not “intrusive” and doesn’t “interfere” with the “educational objectives” of the submitted work. It also says that students and teachers must cite when and how AI was used on an assignment.

The Uxbridge policy acknowledges the need for AI literacy for students and professional development for staff, and it notes that the policy will be reviewed periodically to ensure relevance and effectiveness.

“We believe that if students are given the guardrails and the parameters by which AI can be used, it becomes more of a recognizable tool,” said Mike Rubin, principal of Uxbridge High School. With those clear parameters, educators can “more readily guard against malfeasance, because we provide students the context and the structure by which it can be used.”

Even though AI is moving really fast, “taking things slow is OK,” he said.