Why AI might not take all our jobs—if we act quickly

Massachusetts Institute of Technology economist Sendhil Mullainathan, 53, makes the point that AI isn’t a thing that is happening to humans but a thing that humans are making. We have a choice about what kind of technology it becomes.

Mullainathan, recipient of a MacArthur genius grant in 2002, spent much of the first stage of his career working on ways in which the insights from behavioral economics could benefit the poor, culminating in a 2013 book with behavioral psychologist Eldar Shafir, “Scarcity: Why Having Too Little Means So Much.” He then turned his focus to AI.

A touchpoint for Mullainathan is an idea that Apple co-founder Steve Jobs came up with after seeing a 1973 Scientific American graphic. It showed that pound for pound, “Man on Bicycle” was a vastly more efficient traveler than other animals. The computer should be “a bicycle for the mind,” Jobs said, amplifying our inherent abilities.

Mullainathan thinks the idea that computers are tools meant to help us rather than replace us needs to be restored and applied to AI.

Economists have been debating how AI will affect work. Does it replace people? Does it help highly educated people but hurt less-educated people? Does it help everybody? But you’re looking at it differently.

People imagine that AI is going to automate things, but they don’t appreciate that automation is just one path. There’s nothing intrinsic about machine learning or AI that puts us on that path. The other path is really the path of augmentation. For me, “bicycle for the mind” describes that path.

Whether we end up building things that replace us, or things that enhance our capacities, that is something that we can influence. But I am feeling as much urgency as everyone else: If we keep going down the automation path, it’s going to be very hard to walk back and start changing things.

What’s wrong with how AI tools are being developed and deployed?

Every time Anthropic or OpenAI or Google releases a new model, you’ll notice they always say, “Oh, we did better on these benchmarks.” That’s the way they keep score. In many ways those benchmarks dictate what these models are asked to be good at.

We pick an area and then we say, “Can this thing do this as well as people?” So we’re building algorithms with a strong capability for automation. And when we say they’re getting better and better, we mean their capabilities for automation are getting better and better. If you look at the standard benchmarks, there is nothing in them that would make you say, “Oh, here’s a metric for helping a person do something better.”

Are there settings in which AI is playing this augmentation role, either accidentally or on purpose?

One of my favorite examples is a paper by a former student of mine, Lindsey Raymond, with Erik Brynjolfsson and Danielle Li. They go to these call centers. (These are more like chat centers—people are typing—but I’m going to keep calling them call centers.) Queries come in—technical queries, like someone’s stuck on something—and workers answer them.

An AI bot is introduced that gives suggestions to the workers. [The researchers] study the effect of the bot on performance, and they find that when workers get access to the bot they do better. And they find that the worst workers get helped the most.

Then they study what happens to these workers’ performance when the bot goes offline for a day or so. What they find is that early on, without the bot, workers just revert. But after a few months, remove the bot and the worker is just as good as with the bot. So what was happening is that this bot is not actually a helper bot; it’s a teacher bot.

How does AI as a “bicycle for the mind” fit in?

Imagine that you’re looking for a job and you want some help from an algorithm in deciding where you should apply. The where-should-I-apply question is inherently a bicycle-for-the-mind question. It requires combining some things the person knows—what kind of jobs they like, where they’re willing to live, etc.—with some stuff the algorithm is better suited for: Given your résumé, where are you likely to get an interview? Where are you likely to get an offer?

So you’ve got these two different kinds of information. The algorithm understands, given your résumé, what your opportunities may look like. You understand your preferences. If some communication could happen, a lot could get unlocked.

A lot of this seems to come down to the idea that AI is much better than we are at working with the data, but it doesn’t see anything outside of the data.

And there are just so many problems where what’s not in the data is as important as what’s in the data.

What insights from behavioral economics could be used to design better AI for workers?

One of the most useful things augmentation can do is help us with the things we’re not as good at, leaving room for the things we’re excellent at. Behavioral economics has helped identify those blind spots.

Take something like résumé screening. We’re very bad at reading through things really fast. It’d be really interesting if, after I did the résumé screen, there was a product that said, “Hey, here are 10 résumés of the kind you usually don’t pick. But when you do pick them, it looks like you actually hire the person, or they do well in the interview. Why don’t you give these more time?”

Your work shows that scarcity—scarce time, scarce money—basically steals mental capacity. How could AI help us deal with scarcity issues at work?

A product I think would fundamentally transform the nature of work is one that helps me make better decisions about what I take on and don’t take on. It seems a bit mundane: you already have Google Calendar, you have automatic schedulers. But while those things solve the logistics of scheduling, they’re not solving the core time-management problem we all have, which isn’t about whether this meeting can fit in here. The core problem is that we’re not managing bandwidth very well. We’re not thinking, “Oh, man, if I take all these meetings, I’m going to be overwhelmed.”

Notice this has the two elements we’ve mentioned. The algorithm has access to a wealth of understanding: your calendar, your past meetings, known psychological biases. You have a wealth of understanding of what you’re trying to accomplish, what has worked for you, what hasn’t, what has made you nervous. If we could combine these two things, I think we’d have a totally different way to approach time management.

Write to Justin Lahart at Justin.Lahart@wsj.com