A Professor’s Journey Through Grief Over ChatGPT (opinion)


At a recent hands-on workshop on the future of work, hosted by Babson College’s The Generator, I found myself listening to a conversation that perfectly encapsulated our collective reckoning with AI. One group was exploring the concept of an AI therapist—an always-available bot designed to help students facing personal crises outside typical office hours.

The idea sparked debate. Would people willingly talk to bots about their most human problems? Someone cited research showing that, in certain cases, AI outperforms human doctors when it comes to bedside manner. Others wondered if students might find comfort in knowing that an AI, free from judgment or bias, was “listening” and offering advice on how to navigate an argument with a roommate. The discussion left me thinking about how quickly and profoundly AI has inserted itself into roles we once believed required uniquely human skills.

This conversation wasn’t just about therapy. It spoke to the broader challenge of adapting to AI in spaces that rely on trust, creativity and emotional connection. As an educator, I’ve seen these same questions play out in my own field of writing studies. When ChatGPT was released on Nov. 30, 2022, it entered higher education like a tidal wave. Just six days later, The Atlantic published “The College Essay Is Dead,” which sparked widespread alarm about the future of academic integrity.


Professors across the globe scrambled to process what this new AI technology meant for their classrooms, their assignments and their roles as educators. Reflecting on these months of adaptation, I see a pattern familiar to anyone who has experienced loss: the five stages of grief.

While AI doesn’t necessarily mark the death of anything, it has transformed how we think about writing, teaching and learning. For many professors, this journey mirrors the emotional and intellectual upheaval of denial, anger, bargaining, depression and acceptance. Along the way, I’ve found actionable strategies to navigate these changes—lessons that might help us move forward.

Stage 1: Denial

In the early days, denial was rampant. “It’s just a fancy autocomplete,” I heard colleagues say. The dominant narrative was that AI couldn’t possibly match human creativity or critical thought. Many professors dismissed ChatGPT as a gimmick, convinced it wouldn’t affect their students or their assignments.

And then the essays arrived. Flawless grammar, neatly organized ideas and an eerily polished voice. Denial began to crack.

A colleague confidently told me, “No AI could ever replace the depth of my essay questions.” And yet, within weeks, a student in the writing center showed me how ChatGPT had answered one of those very prompts—seemingly flawlessly. At first, it was like watching a magic trick wherein the magician refuses to reveal the secret. Only the student was revealing the secret, and I desperately wanted the rabbit to go back in the hat.

Stage 2: Anger

Once the reality set in, anger followed. Faculty meetings buzzed with frustration over students using AI to bypass the writing process. “How could they do this?” educators asked, forgetting that our students often see themselves more as pragmatic problem-solvers than as traditional academics.

There was anger, too, at the tech industry: Why weren’t educators consulted about the potential fallout of tools like this? How could OpenAI release something with such broad implications without safeguards? As assignments started to collapse under the weight of ChatGPT’s capabilities, the frustration was palpable.

Anger wasn’t just directed at students or tech companies—it was self-directed, too. Had we made our assignments so predictable that a machine could excel at them? Were we, as educators, part of the problem? This frustration underscores a crucial question: How do we design assignments that move beyond what ChatGPT can easily replicate?

Stage 3: Bargaining

In the bargaining stage, we searched for ways to control or coexist with this new technology. I listened to colleagues from other colleges describe how their departments debated policies: Should they ban ChatGPT? Should they create strict rules about disclosure? Faculty brainstormed ways to “AI-proof” assignments by forcing students to compose their essays in Google Docs so that the version history could be checked. “Policing is not pedagogy” echoed in my head.

Bargaining also meant trying to figure out how to use AI constructively. Could it be a brainstorming partner? A research assistant? Professors began asking, if students are going to use this, how do we ensure they learn something from it?

In one faculty meeting, a colleague proposed assigning handwritten essays, while another suggested oral exams. I think it’s good that we rethink assessments based on what we want students to learn, but I feared that abandoning writing and the writing process altogether wasn’t the answer. “Maybe we should teach calligraphy!” a writing studies colleague joked when I asked if there was a good way to remind others that our field exists.

Ultimately, bargaining showed some of us that the solution wasn’t banning AI but engaging with it in meaningful ways.

Stage 4: Depression

This was, for me, the hardest stage. Once it became clear that ChatGPT—and tools like it—were here to stay, it felt overwhelming. How do you teach writing in a world where students can outsource their first drafts? What happens to the painstaking process of revision when AI churns out polished prose in seconds?

It wasn’t just about what this meant for me as an educator—it was about what it meant for student learning. Writing has always been a process of discovery, a way for students to think through ideas, wrestle with nuance and develop their voices. But if they were handing over that process to a machine, what were they learning? There was tension between resisting in the name of tradition and keeping up with the times. For a moment, I felt like I was 100 years old, yelling about kids today and their punk rock music.

Further, this stage wasn’t just about ChatGPT—it was about what it represented. If AI could write essays, what else could it do? Would students lose the ability to grapple with language and meaning in all its messy, beautiful forms? How would we ensure that linguistic diversity and the unique ways people express themselves aren’t flattened by AI’s polished but generic outputs? If ChatGPT can synthesize and summarize ideas faster than any human, what’s lost in the process?

More than anything, I worried about homogenization. Would the ease of relying on AI push us toward a one-size-fits-all way of thinking where nuance and originality are casualties of efficiency? The unspoken fear wasn’t just about my role as an educator; it was about what kind of world we were creating.

Stage 5: Acceptance

And yet, a little more than two years later, many of us are finding ways to adapt. Like many of my writing colleagues, I’ve used ChatGPT to model critical thinking with students, prompting the AI to draft an essay and then critiquing its outputs. This approach has led to rich conversations about bias, creativity and what makes writing “human.”

One of my favorite projects last semester asked students to analyze art in our on-campus gallery. They picked one piece and analyzed its colors, symbolism and so on. The pieces, unknown to them at the time, were co-created with AI (I realize “the big AI reveal” is quite trite by now, but it worked for this assignment in that they had an authentic experience analyzing a text). After their initial reactions, they read Ted Chiang’s “Why A.I. Isn’t Going to Make Art,” and their task was to write an argument—drawing on their experiences in the gallery and in conversation with Chiang—about whether or not AI could make art. My classes were pretty evenly split on the issue, which meant the discussions were lively, insightful and, most importantly, engaged.

In their next assignment, students remixed their previous writing projects into a small portfolio of new texts created with and without AI and then reflected on those outputs. Even the fully copied AI outputs didn’t replace the learning—they enriched it.

Acceptance doesn’t mean we have all the answers—it means we’re open to asking better questions. How do we teach writing as a process of thinking rather than just producing text? How do we prepare students for a world where AI is a tool, not a threat?

Conclusion

I recently read Melanie Dusseau’s passionate call for resistance to AI in writing studies with both recognition and respect. Like Dusseau, I’ve felt the urge to “burn it down”—to reject the drumbeat of inevitability that says we must adopt generative AI in our classrooms. Her call reminds us that critique and resistance are essential to preserving the human spirit of creative and intellectual work.

But as much as I admire the power of her resistance, I’ve found myself moving in a different direction. After two years of wrestling with the implications of AI in my teaching, I can’t reverse the stages of grief I’ve already lived through. What comes after resistance, I’ve found, is the messier, less satisfying work of critical engagement—not embracing AI uncritically but inviting students to think deeply about what it means to write, create and think alongside these tools.

Any therapist—bot or not—or, really, any human who has loved and lost will tell you that grief comes in unpredictable waves. Resistance reminds us what’s worth fighting for. Acceptance reminds us that the fight doesn’t always mean saying no—it can also mean saying yes thoughtfully, critically and creatively. In this way, resistance and acceptance are not opposites but part of the same ongoing conversation about what it means to teach, learn and create in a world forever changed by AI.

Kristi Girdharry is an associate teaching professor of English and director of the writing center at Babson College.