Every day, another feature of society is poised for an AI revolution. How will AI change (big inhale) our jobs, relationships, parenting, schools, art, music, sex, war, public safety, public policy, public transport, scientific research, health care, food systems, factories, banks, insurance, retirement…?
We can’t answer these questions about AI today. We don’t know where we’ll end up. But we can get a sense for where we’re headed. To do that, I spoke with Kristen Berman, who has worked at the intersection of behavioral science and technology in Silicon Valley for the past decade and a half.
I’ve known Kristen for about 10 years, and I look to her to help me understand what’s going on in the tech world’s capital. Since we both speak the same language when it comes to behavioral science, she has been an especially helpful translator.
Kristen is the founder and CEO of Irrational Labs, which helps companies build behavioral insights into their products and services. She is also the author of the Product Teardowns newsletter, where she reveals the behavioral principles underpinning today’s most popular apps and products. And she hosts The Irrational Mind podcast, where she interviews experts from a range of fields—including winemakers, authors, designers, and product managers—to learn how they think about human behavior and behavior change in their fields. Before all of this, she worked on financial inclusion at Common Cents Lab and helped Google start its behavioral science team. (Full disclosure: Irrational Labs has donated to Behavioral Scientist.)
So what’s her on-the-ground view of where AI is headed?
In our conversation below, we touch on what she thinks AI developers could learn from behavioral scientists, how OnlyFans might improve your relationship with your doctor, the AI applications she is excited about, the ones that give her pause, and advice on what to do if AI is coming for your job.
Our conversation has been edited for length and clarity.
Evan Nesterak: What is a blind spot that you think people developing AI have about human behavior?
Kristen Berman: One AI promise is a “do it for me” model. “Write my email for me. Tell me the best restaurants in this place for me.” That is actually very aligned with behavioral science, with our methodology of making things easier for people. If we make something easier, more people will do it. It turns out a lot of things in the world are hard, and AI has the promise of making things easier.
I think the blind spot right now for these engineers is that if you say, “Magically write my email for me,” people don’t really want it to send their email for them. People don’t fully trust the tone or the voice it writes in. We call it the Betty Crocker effect—the “add an egg” idea—which is that we value things we contribute to more. We would probably trust AI more if we add our own two cents. My hunch is that this “add an egg” opportunity, where humans contribute and see their own effort, will be very important.
And at some point the tools will get so good, and we will get so familiar with them, that we won’t need to add an egg. We’re just not there yet.
Are there any other blind spots we should be worried about?
I think the biggest blind spot that anyone outside of the Valley could be worried about is the algorithmic bias that we are building and coding into our algorithms—biases that humans have. I’m very much in the camp that AI will be easier to change than humans. Any kind of teaching we do to get people to change their stereotypes or biases is very, very difficult.
I’ll give you one example. In hiring, you can imagine that the AI is trained to accept more men because more men applied in the past and were accepted. If you train on that historical data, you build a bias against women into the hiring algorithm. Well, it turns out people know this, and now you can train the AI to overcome the bias toward men. By understanding those probabilities and biases, you can correct the training model to push it toward an equitable solution.
Human decision-making is tough. It would take a human to figure that out and convince their organization to make a change. We have the answer to how to overcome bias within AI. I am pretty stumped when it comes to changing humans and their own bias.
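To make that correction concrete, here is a minimal sketch of one well-known debiasing technique, Kamiran and Calders’ reweighing: training examples are weighted so that a protected attribute becomes statistically independent of the historical label before the model is fit. The dataset, the “gender” and “hired” columns, and all the numbers below are synthetic and illustrative; this is one possible approach, not the method any particular company uses.

```python
# Illustrative sketch: reweigh a synthetic hiring dataset so the
# protected attribute and the label are independent, then train.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 2000

# Synthetic historical applications with an encoded bias toward men.
df = pd.DataFrame({
    "gender": rng.choice(["m", "f"], size=n, p=[0.7, 0.3]),
    "score": rng.normal(size=n),  # a stand-in qualification score
})
bias = np.where(df["gender"] == "m", 0.8, -0.8)
df["hired"] = (df["score"] + bias + rng.normal(size=n) > 0).astype(int)

# Reweighing: weight each (group, label) cell by expected/observed
# frequency, so group and label become independent under the weights.
p_group = df["gender"].value_counts(normalize=True)
p_label = df["hired"].value_counts(normalize=True)
p_joint = df.groupby(["gender", "hired"]).size() / n
weights = [
    p_group[g] * p_label[y] / p_joint[(g, y)]
    for g, y in zip(df["gender"], df["hired"])
]

# Train on the qualification score only, with corrective weights applied.
model = LogisticRegression().fit(df[["score"]], df["hired"],
                                 sample_weight=weights)
```

Under those weights, the model can no longer lower its training loss by reproducing the historical gender skew, which is the kind of trained-in correction Kristen is pointing at.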
I was speaking with a friend who works in UX at a large financial firm. She said that there is a team working in parallel to hers that is essentially developing an AI that could take her role. She is also tasked with helping that team develop the AI. So she’s stuck with the thought, “I’m helping develop an AI that could put me out of a job.”
This seems like a tension a lot of employees are facing. We might hear company leaders say, “No, no, no, it’s not going to take your job. It’s going to make your job better, easier, more creative.” Yet for some employees, there is a palpable sense that they’ll be replaced. What are you seeing play out in companies?
Two things. One, companies are pretty nervous about telling their employees that their jobs are going to be taken. We’ve worked with a large financial advisor company, with thousands and thousands of advisors, and they publicly came out to their advisors and said, “AI will always be an assistant to you. It will not take your job.” They call this idea human in the loop. A human will always be in the loop. I think this is probably false advertising and causing humans to have a little bit more complacency than is appropriate for this moment in time.
At Irrational Labs, we did a study where we asked a Prolific population of close to 800 people, “How likely is it that AI will take your job?” There are two ways to answer this question. In one world, you could be very fearful and terrified that AI will take your job. This will probably cause you deep unhappiness every day, because you’re worried. In another world, you could not be worried at all and say, “You know what? I’m smart, knowledgeable, empathetic, all of these things that make me human, and it just wouldn’t take my job.”
You probably want a healthy fear, and that’s not what we saw in the results. Only eight percent of respondents said they believed AI would take their job.
Your friend should be pretty nervous about AI taking her job, but not so nervous that she can’t sleep at night. She should be nervous enough to take reasonable steps to differentiate her knowledge, whether that means learning to control an AI well enough to become the prompt engineer driving it, or moving fields.
As a techno-optimist, I think our society, our GDP will be fine. We will not collapse with everyone unemployed because AI took their jobs. But that kind of hides the fact that some people in this mass job swapping will actually lose their jobs. So even though we can be optimistic about society, it will be difficult to be optimistic about yourself if your job is at risk.
The follow-up question in the Irrational Labs study was, “How likely do you think it’ll be that other people’s jobs are taken?” This, people are more worried about. When thinking about their own field, respondents said AI was two times more likely to take somebody else’s job than their own. For somebody outside their field, they said AI was four times more likely to take that person’s job than their own. This represents the idea that we believe we are special, that we have something unique to offer, and this uniqueness, which I’m going to label essentialism, can help us keep our jobs.
I want to pick up on the essentialism idea, but before we do, my friend wanted me to ask: what advice do you have for somebody who finds themselves in this kind of position?
My main recommendation is to use AI. Having an abstract understanding or fear of it may keep you in the status quo. So be the leader within your cohort or your peer group. Be the one to play around with the tools. It’s actually very difficult now to do things well with AI. It still takes a lot of work. You can be one of those people who can figure out how to use it. When something does get displaced, you’ll be the one that’s going to stay at the company because you are one of the thought leaders there.
Coming back to this idea about human essentialism. What do AI developers think is fundamentally human that they wouldn’t develop for? Maybe it’s a moral or philosophical stance, “I am not going to develop for that.” Or maybe it’s something to do with a unique human ability, “Oh, humans are way better at that than what I think I could engineer.” Or is it none of that, and anything is on the table?
I think anything is on the table. I’ll give you an example. OnlyFans is one of the fastest growing sites that’s ever existed. They just released some of their revenue and economics, and a large percentage of the population is on OnlyFans. And you’d think that these are really intimate relationships that people are creating, and yet the new thing is to have a bot be the creator. And so, you’re actually talking to a bot. I don’t know how to say…
Is this like a bot that is a porn bot, basically?
Yeah.
And then you’re developing a human relationship with it?
Well, it’s a real picture of a person that you may be buying, or a video of a person, but the conversation you think you’re having with a person could be, and in the future likely will be, with a bot.
So if I’m the most advanced OnlyFans performer or worker, what I need in order to scale is to chat with more people, but because I’m only one person, there’s a limit to what I can do. So I create a bot that allows me to have chats with many more people around the world at one time, for a certain fee. Now my business as one person has scaled incredibly. Is that kind of how it’s working?
And they’re negotiating. The bot is negotiating for you and doing better than you would do.
What do you mean, negotiating?
How OnlyFans works is you have to buy something from the creator, and that means there’s a back and forth about how much it costs. Negotiating, you may say, is an essentially human characteristic. I can read your body language, I know your history, I understand the context. In reality, the bot is doing this quite well.
So basically, in this case, the person is still real, but these interactions and negotiations for what the real person will do are run by bots?
Yes.
Got it.
And this is actually very exciting for healthcare. I’ll make a switch here.
Whoa. Okay.
Because you can imagine that if you met your doctor once, and you’ve had a great connection, now it’s possible that when they message you—and maybe it’s a bot—to take your medication or to come in for a checkup or ask you about symptoms after you’ve been prescribed a new medication, that you will engage with them.
While OnlyFans seems like a crazy side market, these things will likely be pulled into highly regulated industries in the near future. It’s in these highly regulated industries, like health and finance, where the most good could be done.
For instance, if you’re going to get a loan for a mortgage, right now you should be calling multiple bankers to get quotes. The likelihood that people do that is very low because it takes a long time. You have to find a banker, you have to submit the loan application, usually you have to have a conversation with them, and all of this under the extreme time pressure of your mortgage or housing bid. If more people did that, rates would be more competitive. If these types of interactions were automated, it may actually have a pretty big upside for society.
Can you tell me more about the kinds of applications that you think will have a really positive impact on behavior?
I do think more people will become creators. I’m somebody who doesn’t draw. I don’t sing. I don’t make music. I’m bad at video editing. I don’t think I have as much aesthetic taste as other people do. That’s kind of kept me out of the creative field. That’s probably a shame, because there’s a level of personal exploration you can do when you’re creative, and then there’s a level of output that different people with different views can give, even if they don’t feel like they are technically good in the artistic sense.
I can ask DALL-E to create an image, and then I can do 15 versions of it. My first one doesn’t have to take me 30 hours to draw. I can say, “Now give me feedback on this image, and pretend you’re somebody else giving me feedback.” I think we will have people become doers, and maybe our world will be a little bit more fun because we’ll all be creating. I’m excited for that.
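As a rough illustration of that workflow, here is a minimal sketch using the OpenAI Python SDK: one way, among several, to generate variations of an image and then ask a vision-capable chat model to critique one “as somebody else.” The prompt, model names, critic persona, and loop count are illustrative assumptions, not a prescribed setup, and it assumes an OPENAI_API_KEY in the environment.

```python
# Minimal sketch (assumes OpenAI Python SDK v1+, OPENAI_API_KEY set, and
# access to dall-e-3 and gpt-4o; prompt and counts are made up).
from openai import OpenAI

client = OpenAI()
prompt = "a watercolor poster of a city park at dusk"

# DALL-E 3 returns one image per request, so loop for multiple versions.
urls = []
for _ in range(3):
    result = client.images.generate(model="dall-e-3", prompt=prompt, n=1)
    urls.append(result.data[0].url)

# Ask a vision-capable model for feedback, "pretending to be somebody else."
feedback = client.chat.completions.create(
    model="gpt-4o",
    messages=[{
        "role": "user",
        "content": [
            {"type": "text",
             "text": "Pretend you're a gallery curator. Critique this image."},
            {"type": "image_url", "image_url": {"url": urls[0]}},
        ],
    }],
)
print(feedback.choices[0].message.content)
```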
The second thing, and this is the controversial one, is that we have these words like loneliness epidemic. I actually think it’s a friendship epidemic, where people need more friends, more high-quality friends, and that’s actually a very difficult problem to solve because it’s an offline problem. It takes a lot of vulnerability from people. There is an upside to having an AI as your friend, or just somebody to get feedback from: “Do you like this skirt I’m wearing?” or, “Hey, what do you think of this new video game?” I think it’s actually very nice.
You could say that, well, that will replace all of our real friends, and it’s possible. It’s also possible it will just be additive, and we will actually have more. More is better when it comes to friendships and relationships. Again, I think I have a techno-optimist view on how we solve this loneliness epidemic with AI.
The third thing I’m excited about is self-driving cars. This is the biggest AI app we have ever created. It will likely change the framework of society in the next decade. It already exists in San Francisco. When we think about AI, many times we think about it as a digital presence in our lives, when actually self-driving cars are a physical presence of AI in our daily lives. And we know that commuting causes deep unhappiness. So if we can solve the commuting thing, we can solve some level of this time allocation problem.
This is a nice segue into my next question, because I think my answer to it would have been the loneliness chatbot. What are some of the applications you’re most fearful about? What worries you, even as a techno-optimist?
This is the world in which every app can become a TikTok. I think that’s scary. TikTok uses clever behavioral signals, like how long you stay watching a video, plus AI, to quickly create a map of your preferences and then serve you very, very compelling videos. I challenge anyone to go on TikTok for 10 minutes and not stay on for an hour. Humans, regardless of their intellect or willpower, will likely fall to the algorithm. So we should be nervous about the attention economy.
Again, the techno-optimist might say, “Well, we will have more algorithms to filter things for us, so I don’t want to see all the news, and I want to see summaries of some things and not of others.” Economist Tyler Cowen is a proponent of AI simplifying the attention economy because you get to create your own model of what you want. It’s not somebody else’s model; it’s your model. So there could be problem-solving here.
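For intuition about the loop Kristen describes, here is a toy sketch, nothing like TikTok’s actual system: watch time nudges a per-user preference vector, and that vector then ranks the next batch of candidate videos. The topic names, learning rate, and vector sizes are all invented for illustration.

```python
# Toy feedback loop (illustrative only): watch time updates a preference
# vector, and the vector decides what gets served next.
import numpy as np

rng = np.random.default_rng(1)
TOPICS = ["cooking", "sports", "pets", "diy"]  # made-up topic space
prefs = np.zeros(len(TOPICS))                  # the user's preference map

def watch(video: np.ndarray, watch_fraction: float, lr: float = 0.3) -> None:
    """Nudge preferences toward topics of videos watched more than halfway."""
    global prefs
    prefs += lr * (watch_fraction - 0.5) * video

def rank(candidates: np.ndarray) -> np.ndarray:
    """Order candidate videos by predicted appeal (dot product with prefs)."""
    return candidates[np.argsort(candidates @ prefs)[::-1]]

# One cycle: the user lingers on a pets video and skips a sports one...
watch(np.array([0.0, 0.0, 1.0, 0.0]), watch_fraction=0.95)
watch(np.array([0.0, 1.0, 0.0, 0.0]), watch_fraction=0.05)

# ...so pet-heavy candidates now surface first in the next batch.
candidates = rng.random((5, len(TOPICS)))
print(rank(candidates))
```

The point of the toy is the compounding: every serve produces a signal, and every signal sharpens the next serve, which is how ten minutes becomes an hour.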
What do you wish AI developers would ask behavioral scientists?
My sense is that a lot of AI developers, and I think this goes for any kind of technical product in general, are leading with the technical solution rather than the insight into whether people would use something, how they would change behavior, and what preferences they have. Some of these tools are wild, and that doesn’t mean people will use them, like them, or do anything differently because of them. I think that is the humility that AI developers and technologists should be going into this with: getting me to do something different from yesterday is very difficult.
What do you wish behavioral scientists would ask AI developers?
“What’s possible?” As a techno-optimist, I do think we’re in for real change to our day-to-day. This may not happen this year or next year, but over the next decade, we are likely to change a lot of the workflows in our lives. I think we should be curious about how AI can help with the problems we’re personally working on, whether that’s related to our internal productivity and operations or how we conduct our work or research.
Is there anything we didn’t get to cover that you want to add?
I think folks reading this probably think, “I could probably use AI more.” There are probably a couple of apps you wanted to try and you haven’t done it yet. Silicon Valley doesn’t have this problem. Everyone is using the apps. You’re meeting with people and they’re working on some new AI app, so you’re learning about it without having to read a blog post. Everyone without that exposure risks being left behind. So there is this question: how do we help people bridge the intention-action gap so we’re not leaving people behind?