SAK: Stop personifying AI


I was sitting in a lecture one day when a professor made a comment about generative AI that struck me. In his presentation, he included an AI-generated image to add a visual element to the slide. The image was a mess: people's faces were disfigured, random objects sat in weird places and a jumble of real and fake letters and numbers was scattered incoherently throughout. The professor made a quick remark about how, with work of that quality, AI would likely not be stealing our jobs anytime soon. The class laughed, and he returned to his lecture.

What might have been a throwaway joke to him caught my attention because I saw a lot of meaning in it. His comment spoke to an anxiety many of us feel about the insanely rapid emergence and evolution of generative AI. In only two years, we went from ChatGPT's launch to a point where it feels like AI is everywhere we turn. In that same time, Vanderbilt University went from actively discouraging the use of generative AI to launching its own custom AI model for students and staff to use. Understandably, this rapid change is leading to fear and anxiety about the future, especially for students who are wondering if the careers they're planning to pursue will still exist in a few years.

In addition to the anxiety present in his comment, my professor pinned full responsibility for the quality of this image on the AI model he used to generate it. It’s common to think and talk about AI this way. We personify it and give it credit for what it produces. We think about it in terms of its talents: What is it good at? Bad at? Even its name — artificial intelligence — implies some kind of inherent independence from us.

So, of course, when this professor talked about the quality of this image, he blamed the AI for painting a bad picture — any of us would. But in reality, this is not an accurate representation of what AI is. 

For my HOD Capstone independent learning challenge, I learned about generative AI and how to use it well. After spending hours every week using ChatGPT, DALL-E and a variety of other AI models, I came to have a different perspective on this technology. 

For my final project, I created a coloring book featuring over 20 images that I generated, a project that required serious attention to detail since the placement of every line matters. I took it upon myself to maintain a high-quality standard for everything that was produced. Too many fingers or not enough legs? That was on me to correct. No image started out perfect, and none was ready to go until I had done some major work on it. And, after all of the work that went into that project, I saw myself, not generative AI, as the creator.

This is an AI-generated graphic I created using DALL-E and Canva AI tools. The left half displays the graphic after a single prompt and no editing. The right half displays the graphic after I have re-prompted and made substantial edits with multiple AI tools. (Hustler Staff/Daniel Sak)

This process is why we need to stop thinking about AI as an independent intellect and start to see it exclusively as a tool. We don’t give a hammer credit for the house or C++ credit for the software. We credit the builders and developers using those tools, and we should do the same for those who create with generative AI. 

When we frame AI as a tool, we can see the flaws in our common concerns about its capabilities. It might not be good at certain tasks, but neither is a fork if I’m eating soup. I wouldn’t throw away my silverware because of it; I’d just grab a spoon. 

Generative AI is one of the most powerful tools we've ever seen, but it still has its limits. AI often won't produce anything particularly spectacular without serious human guidance, but we shouldn't expect it to. If AI is the tool you choose, you must actively use it throughout a project. Collectively, we just don't yet know how and when to use it.

The biggest difference between generative AI and other tools is consistency. Most tools are useful because they're consistent: A knife cuts the same way, a guitar plays the same notes and a block of code produces the same output. When that consistency starts to waver, we sharpen the knife and tune the guitar so they work the way they're supposed to.

AI tools are different; inconsistency is their primary feature. Large language models (LLMs) and image generators are built on probability and incorporate randomness into their outputs. A common critique of generative AI is its tendency to hallucinate, but the variability behind that tendency is exactly what makes these models such powerful tools. Their outputs are varied, and even a vague prompt will produce a response. Generative AI might not deliver the consistent output that a block of code can, but it will still respond if you miss a comma. Neither of these tools is inherently superior, but depending on the need, one will be much more useful than the other.
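
To make that contrast concrete, here is a minimal Python sketch. The three-word vocabulary and its weights are invented purely for this illustration; they are not drawn from any real model. A conventional function returns the same output for the same input every time, while a generative tool samples from a probability distribution, so an identical prompt can yield different completions:

    import random

    # A conventional tool: the same input always yields the same output.
    def add(a: int, b: int) -> int:
        return a + b

    assert add(2, 3) == 5  # holds on every single run

    # A toy "generative" tool: the output is sampled from a probability
    # distribution, so one prompt can yield different results each run.
    # (Hypothetical words and weights, chosen only for this example.)
    NEXT_WORD_WEIGHTS = {
        "blue": 0.5,
        "cloudy": 0.3,
        "falling": 0.2,
    }

    def sample_next_word(weights: dict[str, float]) -> str:
        """Pick one word at random, proportionally to its weight."""
        return random.choices(list(weights), weights=list(weights.values()), k=1)[0]

    # Same prompt, varied completions: inconsistency by design.
    for _ in range(3):
        print("The sky is", sample_next_word(NEXT_WORD_WEIGHTS))

Run the sketch a few times: the arithmetic never changes, but the sampled ending often does. That built-in variability is the trait the next paragraphs wrestle with.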

The current challenge with viewing AI as a tool is our inexperience with inconsistent tools. The vast majority of technological innovations have shifted towards standardization: The printing press, industrial machines, automobiles and medicine are all most useful when their output is reliable. We do have some tools — like dice and cards — that create randomness, but they’re the exception, not the rule. Generative AI can be difficult to understand because it’s intentionally designed to be variable, and traditionally, that’s been a bad trait for a tool to have.

To conceptualize a tool of variability, it can be easier to think of it like a person, since humans are the major source of variation in a world of consistent tools. However, this framing leads us to over-credit AI with what we think it can and should do. This misconception is how we end up overly focused on what AI is incapable of, even though we'd never complain that a saw can't glue things together. Compound that fixation on AI's limits with the fear of AI's influence on our future, and it's no wonder we find comfort in AI's failures. Something that can't even tell you how many Rs are in the word “strawberry” can't take our jobs, right?

This approach is dangerous because it closes off innovation and erodes our sense of autonomy. This tool is extremely new and extremely different, so it will take time and experimentation to learn how and where to use it. When a specific method doesn't work, we shouldn't hold it against the AI; we should note it and move on to our next trial. A tool is only useful for what it can do, and we have nothing to gain by holding a grudge against a computer program.

I also think treating AI like a tool instead of a person helps to remove it from its pedestal and reassure ourselves of our own autonomy, creativity and control. If AI is a person, we’re at its mercy, but if AI is a tool, we can use it however we’d like. I can tell you with confidence that it is only a tool, so the choice to build it into something more is entirely your own.

For those who worry about its impact on the arts, DALL-E is the brush, paint and canvas, but not the painter. And for those worried about AI stealing your job, take comfort in the fact that AI cannot be employed. This is not to say that the arts or the job market won't change; new forms of art will emerge and old tasks will be automated. But the decisions about how these tools are implemented and used remain firmly in human hands.

We are living through an incredible time of technological development. Generative AI exists in a language we can all understand, which means anyone can experiment, innovate and discover. This rapid innovation is why I have a profound respect for the professor who sent me down this path. I may take issue with his framing of generative AI, but I admire his willingness to try it out and share what he made with the class. That took guts, and that’s what’s going to move us forward. 

Students, professors and administrators at universities like Vanderbilt are in a prime position to explore this new frontier because that’s what we do in the academy: experiment, learn and share. We must foster an environment conducive to this exploration by setting policies that allow and promote the creative usage of artificial intelligence in and beyond the classroom. 

This reframing can take many different forms. First, we should rethink our approach to “citing” these tools. ChatGPT doesn't do the work of a researcher; it can only compile the work of others. If we want to acknowledge our usage of AI in academic work, that acknowledgment should be included in our methodology section, not the works cited. Treating AI as a tool also means not using its perceived weaknesses to justify banning it; in other words, we shouldn't ban students from using AI because it “won't be useful anyway.” While these policies may be intended to “protect students from themselves,” they stifle both students' and professors' ability to learn how to use these tools.

We have an opportunity unlike any other to build our understanding of an incredible set of new tools. I predict that we'll look back in a few years and laugh about some of the ridiculous ways we attempted to use generative AI, but that journey is how we'll find the good uses. The only way to get there is to fail fast and keep moving forward.