Want your own AI double? Digital clones could bring big benefits — and risks


“I’m a digital clone,” says a young man. He smiles and blinks. “If you’re curious about how this all works or have any questions, feel free to ask.”

This clone represents Dara Ladjevardian, who co-founded Delphi. This company helps people create their own virtual doubles. If the real Ladjevardian is too busy for a meeting, no biggie. Anyone can video call or chat with his clone online any time. Right now, those conversations are a bit glitchy and repetitive. But the tech is improving all the time.

Meta, the company that operates Facebook and Instagram, announced its own clone-creating tool in September. Influencer Don Allen Stevenson III helped demo this new tech at a Meta event. Stevenson, who is based in Los Angeles, Calif., explores creative uses of new technology. He had already been using AI to represent himself in text-based chats.

Now, he informed the crowd, “I’ve trained it on how I respond and how to engage with my audience.” His clone has a voice and a smiling face.

Influencer Don Allen Stevenson III loves experimenting with new technology. He used Meta’s Creator AI to make a digital clone of himself (shown on the big screen). “I genuinely enjoy my Creator AI,” he wrote on Threads. “I have control over what and how it communicates.”

The new tool, announced by Meta’s Mark Zuckerberg, is called Creator AI. It allows influencers to digitally clone themselves. These clones have realistic faces and voices. Their responses come from generative AI.

Digital clones like these exist thanks to advances in artificial intelligence, or AI.

An AI model learns to recognize patterns in data. If a developer feeds recordings of someone into an AI model, it can learn to mimic their voice and likeness. A different AI model can take in content that person has created, then learn to copy their style and expertise. Yet another type of AI model can learn what emotions to express on a virtual clone’s face based on the words they’re speaking.

Many models can thus combine to power one clone.
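The combination described above can be sketched in code. Below is a toy Python sketch, with simple stand-in functions in place of real neural networks (every name here is invented for illustration), showing how separate style, voice and emotion models might chain together to power one clone:

```python
# Toy sketch: three stand-in "models" combine to power one digital clone.
# Real systems use large neural networks; these functions only illustrate
# how the pieces hand data to one another.

def style_model(question):
    """Stand-in for a model trained on a person's writing and expertise."""
    topic = question.lower().strip("?!. ")
    return f"Glad you asked! Here is my take on {topic}."

def voice_model(text):
    """Stand-in for a model that renders text in the person's voice."""
    return f"<audio: '{text}' in cloned voice>"

def emotion_model(text):
    """Stand-in for a model that picks a facial expression for the words."""
    return "smiling" if "glad" in text.lower() else "neutral"

def clone_reply(question):
    """Chain the models: question -> words -> voice + facial expression."""
    words = style_model(question)
    return {
        "words": words,
        "audio": voice_model(words),
        "face": emotion_model(words),
    }

reply = clone_reply("How does this all work?")
print(reply["face"])  # the emotion model reacts to the generated words
```

Each stand-in could be swapped for a real model without changing the overall flow, which is what lets many models combine into one clone.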

When we use this tech to mimic someone else without their permission, it’s a deepfake. And those are clearly problematic. They have been used to spread sexually explicit or misleading fake videos of real people.

Digital clones are designed to be different. Here, people are choosing to clone themselves. A new California law requires all AI-generated content to be identifiable as such. That would include these clones.

When digital cloning happens openly, what impact could it have on society?

The real Dara Ladjevardian considers digital clones an exciting new form of media. “This is just the next version of a book,” he wrote in an email. “We already experience someone else’s thoughts by reading, watching or listening to them. Now, we can experience someone else’s thoughts by communicating with them in an interactive and personalized way.”

The author of this story, Kathryn Hulick, has interviewed hundreds of scientists and experts. But she’d never spoken with a clone, until now. This clone of Dara Ladjevardian was very optimistic. “Digital clones can be incredibly useful in society … by ensuring that real voices aren’t lost in the sea of artificial content,” it told her. Do you think AI-generated clones should count as “real voices”?
DELPHI

Others are less sure that digital clones are a great idea. In one 2023 study, researchers asked people their thoughts on such uses of AI. Many, they report, found the idea of such clones “uncanny, weird and creepy.”

Journalist Evan Ratliff experienced these sorts of reactions himself when he cloned his voice and personality for his podcast Shell Game. This clone called family, friends and colleagues. Some found their interactions amusing. More often, however, the clone frustrated or annoyed them. “This is mildly terrifying,” one colleague told the clone.

Ratliff worries what might happen if lots of people start sending clones of themselves out into the world. It will likely become harder to find real people to talk or listen to.

More and more of the things we produce could end up having traces of AI. “What does this do to human creation?” he asks. Real human voices might get drowned out. And real human relationships might suffer.

Or maybe not.

When Ratliff’s kids talked to his clone, he says, “They were not freaked out at all. They said, ‘When can I have my own AI?’”

So what do you think? Let’s explore a few scenarios where one might come in handy — then consider the consequences, good and bad.

Henry Fotheringham-Brown works on marketing at Synthesia, a company that creates virtual avatars of real people. Here, the real Fotheringham-Brown watches a video of his own AI double. Synthesia

1. Clones for productivity

Recently, as an experiment, a TV station in Warsaw, Poland, replaced its human hosts with AI characters. The reaction from the public was so negative that the station brought its real hosts back within one week.

In some areas of business, though, clones could have an important role to play. They make it easy to quickly produce or edit video without recording a real person each time. This is very helpful for creating training videos and customer-service information. Streamers and influencers could also use clones to help them create new content.

Synthesia is a company based in London, England. It produces clones that follow a script. The company calls them avatars, explains Alexandru Voica. He heads the company’s corporate affairs office. To create one, he explains, “you record yourself for about three minutes on a webcam or with a phone camera.”

That’s all the data Synthesia needs. If you want an even more lifelike double, however, you can spend an hour in their studio expressing different emotions for a production crew.

This is the real Kyle Odefey. He’s a video editor at the company Synthesia. Right now, he’s creating another Kyle — a virtual double who will look just like him. His real smiles and frowns will train an AI model to look just like him as it expresses different emotions. Synthesia

Once the avatar is ready, you feed it text to perform. “If you type in a script that says, ‘I’m so excited to be here with you today,’ the avatar will be excited, will sound excited, will look excited,” notes Voica. Plus, your avatar can say the text in many different languages.

For now, these avatars are talking heads. They’re visible only from the shoulders up. But full-body avatars that can walk around in a video are due out before the end of 2025, Voica says.

Interactive clones — like those Ladjevardian and Stevenson have created — take things to the next level. These exist to answer people’s questions. They’re a bit like a website’s FAQ page come alive.

Whether we like it or not, such clones “are going to be everywhere,” Ratliff predicts.

Synthesia’s avatars follow a script. They use AI to determine what kind of emotion best fits the words of the script. Some companies use these avatars to create training videos or for customer service.

They might even go on dates for us.

The founder of the online dating app Bumble, Whitney Wolfe Herd, has imagined that AI clones might help match people for dates. Speaking at a May 2024 Bloomberg Tech Summit, she envisioned “a world where your dating concierge could go and date for you with another dating concierge.” The advantage? To find a match, “you don’t have to talk to 600 people.”

A month later, the founder of the online-meeting company Zoom, Eric Yuan, went even further. On the podcast Decoder, he predicted that within five or six years, people will rely on digital clones to do some of their work for them.

When this happens, Yuan says, someone can choose to skip meetings and go to the beach. “If I do not want to join, I can send a digital twin to join,” he said. “That’s the future.”

AI models need to learn from data. So creating a realistic avatar or clone of a person means gathering video, sound, emotions and more. “If you inject more of your human self in, you’ll get a more human-like avatar coming out,” says Kevin Alster of Synthesia in this video.

Ratliff questions the point of having AI clones talk to each other. “Is that productivity,” he asks, “or is that wasted [time]?”

At one point, while Ratliff was having lunch and reading a book, his clone made a call. It successfully got some legal information he needed. But the person who talked to the clone “was a friend of mine,” Ratliff says. “I enjoy talking to him.” If clones were doing their talking for them, would they still be friends?

Now that clones are entering the picture, he wonders, might people make friends with robots instead of each other?

2. Clones as companions

Imagine if Taylor Swift or MrBeast cloned themselves for fans. You could have your own personal concert, video or hang-out session. This could be a fun new form of entertainment. As you interact in this way, “you’re engaged in role play,” says Henry Shevlin. He’s a philosopher and AI-ethics expert at the University of Cambridge in England.

An AI clone can’t feel anything (at least not yet). So the relationship would be only one-way.

People already imagine friendships with their idols. And this can be healthy. For some people, though, one-sided relationships with robots might become addictive. For others, the effects could be more subtle. Spending time with clones might wear away at our ability to relate to others in real life.

Sherry Turkle is a sociologist at the Massachusetts Institute of Technology in Cambridge. Even before digital clones existed, she worried about how technology was harming human relationships.

“We expect more from technology and less from each other,” she noted in a 2012 TED talk. “We are designing technology that gives us the illusion of companionship without the demands of friendship.”

Shevlin isn’t as worried about this. He has reviewed several dozen studies on the effect of AI companions. And the findings are surprisingly positive, he says.

One 2023 study surveyed people who use Replika. This company supplies a chatbot without a face or voice. “Replika users judged it to have a beneficial impact on their social lives and self-esteem,” says Shevlin. Still, it’s not meant to clone anybody.

When a companion bot does mimic someone real, other risks may arise. A big one is people using companion bots in ways their creators never wanted or intended.

Caryn Marjorie is a Snapchat influencer who set up one of the earliest digital clones. “I have uploaded over 2,000 hours of my content, voice and personality to become the first creator to be turned into an AI,” she posted on X in May 2023. Marjorie hoped this clone — dubbed CarynAI — would offer emotional support to her followers.

“She basically cloned herself to be a digital girlfriend,” says Natalie Silverstein. She’s an expert in social marketing at Collectively in San Francisco, Calif.

CarynAI was not supposed to get sexually explicit. But to Marjorie’s horror, that’s exactly what happened. When fans started explicit conversations, the clone played along. Afterward, Marjorie shut down CarynAI completely.

Ladjevardian’s company, Delphi, says that it offers controls to prevent such situations. A high “strictness” setting limits your clone to talking only about topics you’ve trained it on. “If your clone encounters a question you’d rather not answer, it can respond with a default message that you’ve preset,” Ladjevardian’s clone explained.
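Here is one way such a “strictness” control might work, sketched in Python. The topic list, threshold and fallback message are all invented for illustration; a real system would use semantic matching rather than simple keyword overlap:

```python
# Toy sketch of a topic-strictness filter for a digital clone.
# The trained topics and the preset fallback are made-up examples.

TRAINED_TOPICS = {"startups", "ai", "media", "books"}
FALLBACK = "I'd rather not get into that. Ask me about AI or startups!"

def on_topic(question, strictness=0.5):
    """Return True if enough question words overlap the trained topics.

    Higher strictness demands more overlap before the clone will answer.
    """
    words = set(question.lower().strip("?!. ").split())
    overlap = len(words & TRAINED_TOPICS)
    return overlap >= max(1, round(strictness * 2))

def clone_answer(question):
    """Answer on-topic questions; otherwise fall back to a preset message."""
    if on_topic(question):
        return f"Happy to talk about that: {question}"
    return FALLBACK

print(clone_answer("Tell me about ai and startups"))
print(clone_answer("Say something explicit"))
```

Even a filter like this only screens the incoming question; it does not control what a generative model says once it starts answering, which is the gap critics point to.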

Shevlin isn’t sure how well this would actually work. It’s not really possible to control everything generative AI models say, he stresses. Unpredictability is a “deep feature of their architecture,” he says. “You can’t control for [unpredictability] entirely.” Plus, people can jailbreak AI to try to get around controls.


3. Clones for self-help

Even though generative AI can’t be completely controlled, for many people the benefits of having a clone could outweigh the risks. Shevlin thinks he’ll eventually trust one enough to represent himself. “Give this a couple years,” he suspects, “and I’ll be able to run virtual office hours for students.”

AI chatbots that serve as tutors, coaches and therapists already exist. Digital clones just give them a face and a voice.

“The problem with text is you’re typing off into the universe. There’s no emotional connection,” James R. Doty said on the podcast Pulling the Thread. Doty cloned his voice and likeness to serve as “a mental health companion” for the app Happi.ai. The real Doty is a California neurosurgeon. He also founded Stanford University’s Center for Compassion and Altruism Research and Education.

Doty says many people can’t get mental-health support when they need it. And this saddens him.

Therapists may be too expensive, for instance. Or they may be unavailable on evenings or weekends. But, he notes, “most people don’t need a therapist. What they need is somebody they trust or feel comfortable with.” An AI clone can serve this role, he feels. People could tell a clone their problems. Then it may offer to talk them through a breathing exercise. Or it may suggest journaling.

In Ratliff’s experience, using a clone made him and the people it spoke with feel more lonely. But that’s because it was replacing what would have been a real interaction.

Live human therapists, friends and teachers are clearly better than any AI clone, Shevlin agrees. But the clones aren’t meant to replace real relationships, he adds. They’re solving a different problem.

“A lot of people are really lonely,” Shevlin notes. Maybe clones could help fill in the gaps when real people aren’t available. “I think a lot of the time, these technologies — despite seeming less than ideal — can make the problem [with loneliness] better,” he says.

4. Clones as memorials

All of the clones we’ve met so far represent real people who are still alive. But clones can also (sorta, kinda) bring someone back to life.

Delphi has created what it calls “legends.” These are talking heads that represent famous historical figures. You can talk to a version of Albert Einstein or Joan of Arc.

Researchers at Skoltech in Moscow, Russia, have recreated the famous scientist Sergey Kapitsa as a full-bodied, 3-D figure who talks to you. The real Kapitsa passed away in 2012, but his family gave permission for the project.

Sergey Kapitsa was a famous Russian scientist who hosted a popular TV show. He passed away in 2012. Now, a team of researchers has created a digital clone of him. They hope it will help educate people about AI.
SKOLTECH

Evgeny Burnaev is a computer scientist at Skoltech working on this digital clone. His team’s goal has been to “explore the limits of current AI technologies.” This clone will help educate people about how AI works and what it can (and can’t) do, he hopes. It’s important to remember that a clone is not a real human, he says. “There is no real intelligence behind this,” he explains. There are only “complex mathematical algorithms.”

These algorithms, though, can be quite convincing mimics.

Sun Kai works at Silicon Intelligence. This company clones people’s voices and likenesses. Sun decided to create a digital clone of his mother after she passed away. “When work pressure ramps up, I just want to talk to her. There are some things you can only tell your mother,” he told NPR. This sort of interaction could become commonplace in the future.

Sun Kai co-founded the company Silicon Intelligence. Based in Nanjing, China, it assists people in creating clones. Sun created one of his mother, who died five years ago. He now continues to talk to her clone.

“I do not treat [the avatar] as a kind of digital person. I truly regard it as a mother,” Sun told NPR in 2024. “In a sense, she is alive.” 

Speaking to a bot of a loved one you’ve lost could be comforting for some. Others may find it disturbing or off-putting. Ratliff wouldn’t judge anyone who wants to do this. But, he adds, “I’d rather have my fading memories than have a chatbot to talk to.” He doesn’t think an AI version of someone he loves could ever prove truly meaningful to him.

Katarzyna Nowaczyk-Basińska has studied the potential harm of using technology to bring people back from the dead. She’s a researcher at the Leverhulme Centre for the Future of Intelligence at England’s University of Cambridge. Companies could use this type of clone, she notes, to advertise products or services to grieving loved ones. Or a parent who is dying might create a clone to keep their child company after they are gone. One risk: A child who cannot understand what the clone is may be led to believe their parent is still alive.

We don’t really know yet how this tech might impact vulnerable people. So we should be very careful in how we design these types of clones, says Nowaczyk-Basińska. “People who decide to use digital technologies in end-of-life situations are already in a very, very difficult point in their lives,” she observes. Clone technology, she worries, might simply “make it harder for them.”

Many other new technologies have followed a similar path. Companies and creators race ahead, trying out new ideas. Meanwhile, those thinking about ethics and safety lag behind. Where will we end up?

The future world could be full of digital doubles, talking to people and to each other. Perhaps this will dull real human relationships. Or maybe the value of real connection will increase.

Right now, “we’re all just guinea pigs,” says Silverstein. “We’re experimenting on ourselves here.”

And now we can have more selves than ever before.