Editor’s note: All opinions, columns and letters reflect the views of the individual writer and not necessarily those of the IDS or its staffers.
Much like the winter weather, we’re all soldiering into another semester, hopefully with a renewed sense of curiosity, interest and discipline. Though each of us is entering different phases of school, programs and responsibilities, we all encounter artificial intelligence every day.
Without a doubt, AI has permeated our lives in a matter of a few years and enriched them in multiple capacities. Google searches are optimized with AI summaries for faster and more comprehensive results. The automotive industry continues to manufacture and sell self-driving cars using AI software. Even hospitals are investing in AI to improve diagnostic and treatment procedures. The list of industries where AI is fostering disruptive growth is lengthy.
As a student who frequently uses large language models like ChatGPT, Claude and Perplexity, I question whether the benefits outweigh the potential drawbacks in a demanding educational environment like college. For context, large language models (LLMs) are a type of AI that uses complex algorithms to generate human-like responses by summarizing, generating, calculating and processing information.
Every semester, I learn about a new AI system that can do the same work as me in a better, simpler, faster and more authentic way. And obviously, that’s amazing. Make no mistake, I never use AI to cheat on deliverables, but I’ve gotten into a routine of consulting AI for lots of things. When I apply for jobs, I ask AI to give its impression of my cover letter and resume and point out areas of improvement or confusion. When I’m nervous about emailing professors about specific items, I use AI to draft an email and cross-check my draft. And there are so many other little tasks.
While it may not seem significant, over time this reliance has become a concern. Without limits on how much time I can spend using AI or in what capacities, I feel I’ve become lazy, less creative and underconfident in my abilities. Even for minor tasks, I rely on AI to check my work; I’m crafting longer prompts to minimize future editing and seeking AI-generated ideas at the first sign of writer’s block.
And I’m not the only one who thinks that.
According to a survey conducted by the Digital Education Council, more than 50% of students reported that over-relying on AI would harm their academic performance; respondents also believed over-reliance in teaching would decrease the value of the education they received.
Similarly, The Hechinger Report said researchers at the University of Pennsylvania found AI chatbots substantially inhibited learning, even when designed to emulate tutors, because students use the bots as a “crutch” and make fewer decisions themselves.
However, some professors are embracing AI fluency in their classrooms. Is the solution simply a matter of self-discipline?
I spoke with Kelley King, a self-proclaimed “AI optimist” and marketing lecturer at the Kelley School of Business, to understand her perspective on AI in the academic world.
She said she encourages students in her class to use AI and test different models so long as they’re transparent about how they used it and reflect on its merits for the particular assignment.
“I want them to use it because I believe it’s in the real world and it’s here to stay,” King said. “And I would be remiss if I didn’t teach them how to use this amazing new tool, but with a very critical mindset not only to proof but really think about what it’s saying and how it’s generating responses.”
She admitted that as we approach the singularity, the hypothetical point in time when AI will surpass human intelligence, she is equally fascinated and frightened by the technology. But she has faith we can mitigate such concerns, and our own obsolescence, with rules and a commitment to keeping ideas human.
So, what can we take from this?
While AI can be a powerful tool in our educational journey, it’s imperative to approach it with balance and engagement. As it stands, IU doesn’t have a policy related to the use of generative AI. But to avoid overdependence on AI, consider implementing the following strategies:
1. Set limits on what you’ll consult AI for, how much time you’ll spend using it and at what stage of a project you’ll use it.
2. Refrain from using AI in the beginning phases of a project to preserve originality.
3. Assess your knowledge of the material before using AI to refine that understanding. This could be as simple as quizzing yourself, summarizing the content or checking in with your professor. But the goal is to ensure you have a proper foundation.
4. After each use of AI, reflect on how it improved or hindered your process. What did you learn from AI that you can use elsewhere?
5. Before turning to AI, consult others to discuss your process and challenges organically.
By practicing these suggestions, we can all foster a healthier relationship with AI and ensure it’s a tool for enhancement rather than a crutch.
Meghana Rachamadugu (she/her) is a senior studying marketing and business analytics and pursuing a minor in French.