ChatGPT Can Make English Teachers Feel Doomed. Here’s How I’m Adapting (Opinion)


I’m in the midst of the most existentially dreadful fall of my 25-year English teaching career. Last year, too many of my high school students ChatGPTed their way through too many of my assignments—and they weren’t alone. According to one 2023 study, 20 percent of students reported using an AI chatbot to prepare the entirety of a paper, project, or assignment; unlike a plagiarized paper, which can become the genesis of a conversation (and consequences) from which students can learn, AI usage is essentially unpoliceable via school policies, and technology experts do not believe reliable AI-detection tools will emerge soon.

Am I now doomed to just go through the motions of class, unsure if my students are really writing anything, or if there is even value any longer in them learning how? Recent Atlantic think-pieces have outright declared the end of high school English. As one writing professor bemoaned to the magazine, “With ChatGPT, everything feels pointless.”

To combat those narratives of doom and pointlessness, I hope to change the “point” of my class this year, in two major ways.

First, I plan to focus more on process than product. Since 2001, schools have operated in No Child Left Behind’s world of “measurable outcomes.” We threw out the student-centered, constructivist methods of the 1970s, which focused on the journey of the learner and, instead, adopted backward design, aiming our teaching toward students meeting performance goals on all-important assessments. Products and scores became the proof of learning.

But today, AI has broken that fundamental equation; it produces products instantly, no learning required.

So I’ve decided to spend more time assessing my students’ process of drafting multiple iterations of their work, with and without AI assistance. I’ll depend increasingly on students’ structured in-class reflections, rather than on their finished essays, to demonstrate learning.

Yes, I could just have my students write everything in class, by hand—but we don’t write essays just to produce essays. Rather, we write them as a means of developing our faculties for analysis, evaluation and presentation of evidence, and clear communication of ideas.

Assessing the more complex aspects of an essay—originality of ideas, incisiveness, sophistication of argument—takes longer, is more nuanced, and raises perhaps legitimate concerns about consistency, equity, and subjectivity in grading. But I don’t think English teachers now have any other choice. I’m considering jettisoning those “objective” but oh-so-limited rubrics for assessing an essay’s basic structural components. Instead, I’m experimenting with letting students have ChatGPT instantly generate those “five-paragraph essays” for them, then helping them examine what’s worth keeping, what they might want to modify, and why, so that the writing becomes more ambitious, more distinctive, and more personal to each of them.

Many of us have always pushed our students toward “big-idea critical thinking,” but making the fuzzier aspects of writing the core of what we assess and grade will present challenges. It may become harder to compare students’ progress against one another, which will alarm those who depend on such comparisons and rankings for the purposes of everything from college admissions to identifying equity concerns.

Then again, by looking at the process of how students develop their thinking, perhaps we can shift the emphasis to comparing each student against their own past progress, which is both more pedagogically useful and, I believe, more humane.

But I also want to reduce the role that writing plays in my classroom in general. This may seem anathema to our profession, but only because the last 30 years have moved English away from exploring what great literature could teach us about the human condition and toward teaching students “job useful” writing skills. Yet my friends with office jobs routinely outsource their memos, annual reports, and grant proposals to ChatGPT.

Since AI has automated much “practical” writing, while simultaneously raising enormous questions about what it means to be human, perhaps it’s time for English teachers to return to the less measurable—but arguably more important—philosophical work we used to do.

For centuries, authors from Plato to Mary Shelley to Aldous Huxley have written about how humans grapple with society-changing technologies; an even wider range of authors have explored love and rejection and loss, what constitutes a meaningful life, how to endure despair and face death.

And we don’t just read these books for prescriptive advice—especially in an isolating age like ours, we read to know we’re not alone.

Additionally, numerous studies have made the link between reading and empathy; immersing students in fictional worlds is vital preparation for navigating the highly diverse, highly polarized communities in which we live.

So, too, is class discussion. Although I still plan to use short, in-class writing assignments as one means to assess student thinking, I am substantially increasing the role of discussions: paired, small group, and full class.

This will require including more support and scaffolding for students whose social-emotional or linguistic needs might create barriers for them, but that only makes the practice more necessary. Learning how to be good speakers and listeners, how to actively engage, how to respectfully disagree—these skills have atrophied in our post-pandemic, digitally mediated and politically divided world.

The English classroom may now be the only place where many students can get practice in real-time social interaction and discourse that is at the heart of a functioning democracy. I’ll be spending less time on grammar and mechanics and more on analyzing and synthesizing competing narratives, current and historical (that several states now try to ban such historical analysis only reaffirms its necessity).

The humanities have spent the last three decades desperate to prove that we’re of “practical use.” Well, these skills are the “new practical.”

I’m hardly alone; many of my colleagues are making similar shifts. But many more are afraid to do so because those all-important state standardized tests reward rote skills more than complex, critical thinking. While the pandemic briefly engaged policymakers’ creativity around alternate assessment methods, their support for those traditional multiple-choice tests has since come roaring back in both K-12 and higher ed. If COVID wasn’t enough to force policymakers to realize the futility of continuing with accountability as we currently know it, maybe AI will be.

Focusing on process, and on “big picture” issues, will make grading messier. But I can get behind a future where the bots take care of anything requiring simple-to-measure skills, leaving teachers and students alike to focus on the importantly messy work of figuring out how to be amazing humans. I admit I don’t have a crystal-clear vision of what this new shape of English class will eventually look like. But then, neither does ChatGPT. That’s the whole darned idea.