Your AI policy is already obsolete (opinion)



For the past two years, a lot of us have written course, program and university policies about generative artificial intelligence. Maybe you prohibited AI in your first-year composition course. Or perhaps your computer science program takes a friendlier stance toward AI use. And your campus information security and academic integrity offices might have their own guidelines.

Our argument is that the integration of AI technology into existing platforms has rendered these frameworks obsolete.

We all knew this landscape was going to change. Some of us have been writing and speaking about “the switch,” the moment when Gemini and Copilot become embedded in every version of the Google and Microsoft suites: a world where, when you open any new document, you are prompted with “What are we working on today?”


This world is here, sort of, but for the time being we are in a moment of jagged integration. A year ago, Ethan Mollick started referring to the current AI models as a “jagged frontier,” with models being better suited to some tasks while other capabilities remained out of reach. We are intentionally borrowing that language to refer to this moment of jagged integration where the switch has not been flipped, but integration surrounds us in ways it was difficult to anticipate and impossible to build traditional guidance for.

Nearly every policy we have seen, reviewed or heard about imagines a world where a student opens up a browser window, navigates to ChatGPT or Gemini, and initiates a chat. Our own suggested syllabus policies at California State University, Chico, which we helped draft, conceptualize this world with guidance like, “You will be informed as to when, where and how these tools are permitted to be used, along with guidance for attribution.” Even the University of Pennsylvania guidelines, which have been some of our favorites from the start, include language like “AI-generated contributions should be properly cited like any other reference material”—language that assumes the tools are something you intentionally use. That is how AI worked for about a year, but it is not how AI works in an age of jagged integration. Consider, for example, AI’s increasing integration in the following domains:

  • Research. When we open some versions of Adobe Acrobat, there is an embedded “AI assistant” in the upper right-hand corner, ready to help you understand and work with the document. Open a PDF citation and reference application, such as Papers, and you are now greeted with an AI assistant ready to help you understand and summarize your academic papers. A student who reads an article you uploaded, but who cannot remember a key point, uses the AI assistant to summarize it or remind them where they read something. Has this student used AI in a class where it was banned? Even when we are evaluating our colleagues’ tenure and promotion files, do we need to promise not to hit the button while plowing through hundreds of pages of student evaluations of teaching? From an information-security perspective, we understand the problems with using sensitive data within these systems, but how do we avoid AI when it is built into the systems we are already using?

The top hit in many Google searches is now a Gemini summary. How should we tell students to avoid the AI-generated search results? Google at least has the courtesy to identify theirs (probably as a Gemini promotion), but we have no idea how these systems are supplying results or summaries unless search engines tell us. The commonality here and throughout this piece is that these technologies are integrated into the systems we and our students were already using.

  • Development. The new iPhone was purpose-built for the new Apple Intelligence, which will permeate every aspect of the Apple operating system and every text input field, often working in ways that are not visible to the user. Apple Intelligence will help sort notes and ideas. According to CNET, “The idea is that Apple Intelligence is built into your iPhone, iPad and Mac to help you write, get things done and express yourself.” Many students use phones to complete coursework. If they use a compatible iPhone, they will be able to generate and edit text right on the device as part of the system software. What’s more, Apple has partnered with OpenAI to include ChatGPT as a free layer on top of the Apple Intelligence integrated into the operating system, with rumors that Google Gemini will be added later. If a student uses Apple Intelligence to help organize ideas or rewrite their discussion post, have they used AI as part of their project?

One piece of technology gaining traction is Google’s NotebookLM. It is the only non-integrated technology we are discussing, but that is because it is designed to be the technology for writers, researchers and students. This remarkable platform allows the user to upload a large volume of material, like a decade’s worth of notes or PDFs, and then generates summaries in multiple formats and answers questions. Author and developer Steven Johnson is up front that the system could be a sticking point in educational settings, but it is not designed to produce full essays; instead, it generates what we would think of as study materials. Still, is the decision to engage with this platform for organizational and conceptual work the same as copy-pasting from ChatGPT?

  • Production. Have you noticed that the autocomplete features in Google Docs and Word have gotten better in the last 18 months? That is because they are powered by improved machine learning that is adjacent to generative AI. Nearly any content production we do now includes autocomplete features; Google Docs has had them active since 2019. You can use Gemini in Google Docs in Workspace Labs right now. Do we need to include instructions for turning autocomplete off for students or people working with sensitive data?

When you log into Instagram or LinkedIn to publish an update, an AI assistant offers to help. If we are teaching students content production for marketing, public relations or professional skill development, do they need to disclose if the AI embedded in the content platforms helped them generate ideas?

Beyond Policy

We don’t mean to be flippant; these are incredibly difficult questions that undermine the policy foundations we were just starting to build. Instead of reframing policies, which will likely have to be rewritten again and again, we are urging institutions and faculty to take a different approach.

We propose replacing AI policies, especially syllabus policies, with a framework or a disposition. The most seamless approach would be to acknowledge that AI is omnipresent in our lives and in knowledge production, and that we are often engaging with these systems whether we want to or not. It would also recognize that AI is both expected in the workforce and unavoidable. Faculty might also indicate that AI usage will be part of an ongoing dialogue with students and that we welcome new use cases and tools. There may be times when we encourage students to do work without using these tools, but this is a matter of conversation, not policy.

Alternatively, faculty may identify these integrations as a threat to student learning in some fields of study. In these cases, we need to use the syllabus as a place to articulate why students should work independently of AI and how we intend to set them up to do so. Again, framing this as an ongoing conversation about technology integration instead of a policy treats adult learners as adults while acknowledging the complexity of the situation.

There continues to be a mismatch between the pace of technological change and the relatively slow rate of university adaptation. Early policy creation followed the same frameworks and processes we have used for centuries—processes that have served us well. But what we are living through at the moment cannot be solved with Academic Senate resolutions or even the work of relatively agile institutions. There will be a time in the near future when jagged integration is smoothed into complete integration, where AI is at the core of every operating system and piece of software. Until that time, in the classroom, in peer evaluation and in institutional structure, we have to think about this technology differently and move beyond policy.

Zach Justus is director of faculty development and Nik Janos is a professor of sociology, both at California State University, Chico.