The Fork in the Road: Jobs for AI—or for People? | Psychology Today

In the 1983 movie Local Hero [1], men representing a Texas tycoon try to convince a Scottish beachcomber to sell his beach so the tycoon can build an oil terminal there. You’ll be paid millions, the tycoon’s reps tell the beachcomber, you’ll never have to work again. The beachcomber looks at them in astonishment. “We all have to work,” he admonishes the men. Besides, he adds, if he sells his beach, there will be no one to look after it properly.

I think of that scene when I hear, on an almost daily basis, about how artificial intelligence (AI), and large language models in particular, is taking over tasks that a big tranche of humanity—from computer coders to medical aides, translators to taxi drivers, graphic designers to salespeople—once assumed would always provide them with a living. An MIT study recently concluded that current AI capabilities “extend to cognitive and administrative tasks spanning 11.7% of the (US) labor market [representing] $1.2 trillion in wage value across finance, healthcare and professional services.” The chief executive officer of Anthropic claims AI could eliminate up to half of entry-level white-collar jobs in the next five years, while Microsoft AI’s CEO has predicted it will take only a year for most white-collar work to be taken over by artificial intelligence platforms.

But what happens to people if they don’t work?

The optimistic view is that as AI takes over more and more tasks, it not only allows workers to drop boring clerical or factory work in favor of labor requiring human intelligence and skills (or in favor of hobbies and vacations); it also unlocks demand for work that had previously been pent up behind staffing constraints.

The pessimistic view is that as AI corrects itself, learns, progresses, and becomes capable of doing more and more at ever-lower cost, fewer jobs will be available for humans and we will all have less and less to do. The reality might well lie somewhere in the middle of these predictions, but either way, the ever-increasing influence and pervasiveness of artificial intelligence is certain to sharpen the question posed above: What will happen to those of us who lose our jobs to AI, and how will it affect us?

Work is what humans do

The fact that work is not only something that humans “do,” but something that’s good for us, seems self-evident. Not until AI and robotic tools that do its bidding came along did the idea of not-working even make sense outside of science fiction. “Work” encompasses, of course, exploitative and even destructive jobs, such as sweatshop sewing or mindless assembly-line work, which most of us would be thrilled to hand over to machines. But it also covers the spectrum from pre-agriculture humans hunting and gathering, to philosophers musing over the meaning of life, to bus drivers driving, carpenters hammering, and scientists inventing life-saving vaccines.

Multiple studies have shown that working at a job that requires skill and is not exploitative augments our health and general happiness, whereas the inability to work degrades our well-being. A U.K. government study on the benefits and disadvantages of working found that work “is central to individual identity, social roles and social status” and “meets important psychosocial needs in societies.” The study concluded that “there is a strong evidence base showing that work is generally good for physical and mental health and well-being.” On the other hand, “Worklessness is associated with poorer physical and mental health and well-being.”

However, most of these studies, by treating work as a generic economic activity AI might or might not absorb, miss a crucial point: good, non-exploitative, and especially creative work of whatever nature does more than give pleasure and confer happiness; it also employs and enhances cerebral and bodily systems that are intrinsic to how humans function. To look at “work” in purely economic terms is to ignore the fact that, irrespective of the objective efficiency or commercial value of what we do, it adds layers of internal value to the person doing it. The implication is that at some point humans will have to reserve certain jobs from AI systems that could probably do them more efficiently—because we need those jobs to keep our own systems functional and content.

Here’s one example I know something about from researching a book [2] on the human navigational function. GPS-enabled smartphones have allowed millions of people, in richer societies at least, to trade the “work” of physically finding their own way for the ease and convenience of following directions from Google Maps or other navigational programs, many of them now AI-driven.

The result has been twofold. On the one hand, many of those millions have simply stopped looking at the world around them in favor of staring at a screen, which produces a measurable drop in situational awareness and attention span. On the other, drastically reduced use of the navigational function—which, along with memory retrieval, is centered in the hippocampal system—has been linked to atrophy of the hippocampus, and hippocampal atrophy is in turn associated with neurological pathologies such as dementia.

This kind of trade-off—skills for ease—is something we will see more and more of as AI takes over increasingly complex tasks from the people whose job it was to accomplish them.

Put a sign on certain jobs: ‘Humans Only’

I teach creative writing, and at the start of term I make this pitch to my students: You might use AI to write assigned stories, and those stories could well be cleaner, and even better in narrative terms, than what you craft on your own. But are you here merely to plug prompts into ChatGPT? Or are you here to learn how to write better—to craft stories that please you as well as others—to take pleasure in the skills you are developing as a writer? And don’t the people who know you and appreciate your writing take pleasure from the fact that they are in some way connected to the human who wrote it?

The implication of all the above seems clear: At some future fork in the road, humans will have to choose between our own well-being and the cold efficiency of artificial intelligence—and quite possibly of artificial consciousness. At that point it will become imperative to reserve for ourselves the work we need to do to keep our human systems up and running, as well as the work that gives us pleasure, if those are not one and the same thing. We will have to fence off certain tasks with a sign reading “HUMANS ONLY,” not because AI can’t do them more cheaply and efficiently, but because doing them ourselves is crucial to functioning well as humans in a human society.

In an era when big tech and government are investing massively in agentic AI systems that can work independently, learn exponentially, and make decisions for themselves, common sense would seem to advise making that choice sooner rather than later.
