Nick Meyer said $100,000 would have changed his life.
The 26-year-old actor said it would have “taken a lot of weight” off his shoulders and provided relief for his family. Although he’s been acting professionally for a decade, Meyer said he makes less than $10,000 a year from acting and supplements his income with food service and retail jobs. So why would he turn down a voice-acting gig offering roughly 10 times his annual acting salary for only 20 hours of work?
Because the job entailed recording his voice to train artificial intelligence-powered voice replication models. “I am not going to sacrifice my morality for a paycheck, no matter how big,” Meyer said.
The L.A.-based performer is one of many voice actors reckoning with AI’s industry disruptions. Voice cloning has become much easier, requiring just seconds of audio. This poses a host of challenges for actors who have found their voices replicated online without their consent, knowledge or compensation, reducing paid job opportunities and stripping them of their agency.
When Meyer made it clear to his representatives in February that he was not going to take the gig, he said he was met with ire. He ended up parting ways with his agents after they told him they would not be a good fit going forward if he turned down the job. Meyer declined to name the agency, but The Times reviewed email exchanges between the actor and his former agents that verify the events.
About a year ago, Meyer said his voice was replicated without his permission by users of the popular AI chat platform Character.AI. Users cloned recordings of his voice and created online personas to accompany the voices. There are at least a dozen “Nick Meyer” characters featuring his name and image on the app, and they have collectively engaged in more than 100,000 chats — defined by the number of “human messages” sent to those characters. So Meyer knows what it’s like to not have control over what his voice is saying.
“If this gets any better, if this continues to get trained, if this has more footage or more recording of my voice, how much closer can it get to sounding like me?” Meyer said.
A Character.AI spokesperson said in a statement to The Times that the company takes “swift action to remove reported Characters that violate copyright law and our policies.” Meyer said he has reported the characters as unapproved uses of his name, likeness and voice.
During the course of reporting this story, Meyer’s cloned voice was replaced with generic voices, but the characters that bear his name and image haven’t been taken down.
“Users create hundreds of thousands of new Characters on the platform every day,” the statement continued. “Our dedicated Trust and Safety team moderates these Characters proactively and in response to user reports, including using industry-standard blocklists and custom blocklists that we regularly expand.”
“I am not going to sacrifice my morality for a paycheck,” actor Nick Meyer said about turning down a job for an AI voice-modeling program.
(Emil Ravelo / For The Times)
Nearly a dozen actors interviewed by The Times said they are fearful of what their voices could be used for if they’re cloned without their knowledge. Whether that content is a violation of exclusivity clauses they signed with existing clients or something they morally disagree with, voice cloning could hurt more than just their wallets.
About 80% of working voice actors aren’t represented by a union, so the onus often falls on the individual to protect themselves. Up until a few years ago, worries about voice cloning were virtually nonexistent. Now, they concern thousands in the industry.
“It’s like the Wild West,” said Joe Gaudet, a Connecticut-based voice actor with more than 20 years of experience. Gaudet, 41, voiced more than 30 videos for a company before he says it replicated his voice and cut him out of additional work by using the clone for quick edits to scripts.
Gaudet said he was gutted, especially because he believed the company was working in good faith.
“You feel like you’re useless and you have no value,” he said. “It’s the worst feeling in the world. It’s the worst. And I know it’s not just me. These people in many, many companies are screwing people over.”
The National Assn. of Voice Actors aims to help performers navigate this essentially uncharted territory. The nonprofit, founded in March 2022 with the goal of providing healthcare for freelance voice actors, has become a crucial source of AI information and guidance for many in the industry. The organization crafted a contract rider that addresses many actors’ concerns about their voices being cloned or used to train AI models.
Although several actors said the rider’s language is now a non-negotiable part of new contracts, it doesn’t help those who signed contracts with expansive and vague language before the advent of AI. Agreements commonly include verbiage that actors’ recordings can be used in all “technology known or yet to be developed” or “in perpetuity throughout the universe.” Others have language buried in the fine print that enables companies to sell an actor’s voice to other parties.
The women behind the voices of Siri and TikTok speak out
Atlanta-based voice actor Susan Bennett is among the performers who signed vague contracts decades ago, not anticipating the advances in voice replication technology.
On Oct. 14, 2011, Apple released the iPhone 4s, which introduced the digital voice assistant Siri. Siri was, at the time, novel — she was the first interactive voice that didn’t sound robotic or monotone. And she was even programmed to have a bit of humor and sarcasm (in response to the question “What are you wearing?” Siri would say, “Aluminosilicate glass and stainless steel. Nice, huh?”).
Bennett received an email that day from a friend and fellow voice actor, asking if it was her voice.
“I went, ‘Well, gee, I don’t remember doing that work. I certainly didn’t get paid for that work,’” Bennett recalled. “It was a conflict of feelings, of course. I was very flattered that my voice was chosen, but on the other hand, it’s like, ‘Wow, there’s my voice, it’s just going to be completely ubiquitous, and how is that going to affect my livelihood as a voice actor?’ And, of course, there’s no way to really measure that.”
Six years before Siri’s launch, Bennett worked on a project with software company ScanSoft to create interactive voice recordings. She spent several months recording nonsensical phrases such as “Say bow geeky preface today” and “Say the doesn’t ding again” to capture as many sound variations as possible. After months of tedious voice-over work, she was paid by ScanSoft and sent on her way. She didn’t think about the project again until fall 2011, when her voice was suddenly everywhere.
Bennett, 75, said she knew her voice would be used for interactive text-to-speech technology, but she had no idea about the scale or reach. She said she wasn’t notified that she would be the voice of Siri or compensated by Apple. A representative for Apple did not respond to The Times’ requests for comment.
“I was extremely naive about what I was doing,” Bennett said. “It’s like, ‘Oh yeah, here I am, saying everything that could possibly be said. What could go wrong?’
“They could have thrown me a bone, sent me a few thousand and pat me on the head,” she said.
Years after Bennett’s debacle, Canadian voice actor Bev Standing found herself in a similar situation. TikTok debuted a text-to-speech generator in late 2020 that strongly resembled Standing’s voice.
Standing’s first thought after friends and family sent her videos featuring her voice was, “What’s TikTok?” Standing had done recordings a few years earlier for a different company that said her voice would be used for Chinese translations.
When Standing saw a video that featured foul language in her voice, she knew similar problems would keep cropping up. TikTok’s text-to-speech feature has few content restrictions, so users could use Standing’s voice to say almost anything.
Standing said she wasn’t informed or paid by TikTok ahead of the release of the feature, so she sued its parent company, ByteDance, in 2021.
“You can’t do it to a movie star. They stand up and their lawyers stand up and their agents stand up. But when you’re a little nonunion person that lives in the middle of nowhere, no big deal,” Standing said. “Wrong. It’s a big deal. And because I spoke up, and because people took note, they’re standing up, and there’s a lot to be said in doing things in numbers.”
The complaint was settled out of court about four months after it was filed. Standing cannot discuss the terms of the settlement, and TikTok did not respond to The Times’ requests for comment.
The threat voice cloning poses is not limited to those with hours of high-quality recordings of their voices online. Realistic voice clones can be created with as little as three seconds of audio, said Tim Friedlander, president and co-founder of NAVA.
“If you have a video on social media somewhere that has your voice, image, name and likeness in it, it is in a system somewhere,” Friedlander said. “It has been used to train something, and it will more than likely be used to be sold back to you as a product in some capacity at some point.”
“A violation of our humanity”
There’s a significant financial effect on actors when their voices are replicated and they are left to essentially compete for jobs with a cheaper version of their own voice. It’s often difficult for performers to track where their cloned voices end up or how they’re used, so it’s almost impossible to quantify the monetary impact of unauthorized clones.
Paul Skye Lehrman and Linnea Sage, New York City-based voice artists, discovered that both of their voices were cloned by AI company Lovo in 2022 and 2023. The married couple was listening to a podcast — ironically, about the dangers of AI — while driving when they recognized Lehrman’s voice, or rather, a clone of his voice. They estimate that their voices could have been used for “hundreds of thousands of scripts around the world.” Lehrman’s voice was the default option on Lovo for roughly two years, according to the complaint he filed last year in court. The company’s co-founder Tom Lee confirmed on the podcast “Category Visionaries” in 2023 that the technology had been used to create more than 7 million voice-overs at the time.
Linnea Sage, left, and Paul Skye Lehrman are in a legal battle against the AI company they say cloned their voices. “We are going to continue fighting the Goliath,” Sage said.
(Justin Jun Lee / For The Times)
“Voice is as personal as our fingerprints,” Lehrman said. “It’s just such a violation of our humanity and an invasion of our privacy. It felt like being violated. And then everything — fear, anger, shame — all of this came with it.”
Sage and Lehrman worked with distinct clients on Fiverr, an online marketplace for creative freelancers, in 2019 and 2020, respectively. They now believe those clients were working for Lovo without disclosing their identities or motives. Both actors said they asked the clients — who had the anonymous usernames “User25199087” and “tomlsg” — in advance about the explicit purposes of the recordings they were submitting. They said they were told, unequivocally, that their voices would not be used for commercial purposes — only for research and internal purposes — without any mention of AI.
Lehrman and Sage claim that Lovo, without informing or paying them, cloned their voices and made them available for use on the site under fake names and for promotional materials. They sued Lovo in May 2024, and the case is ongoing. The company did not respond to requests for comment.
“We’re in a unique position to hold our destroyers accountable, and we are going to continue fighting the Goliath for everybody in our industry, to at least set some sort of message that you just cannot do this,” Sage said. “You can’t take advantage of actors and artists.”
Remie Michelle Clarke, an Irish voice actor and writer, came across her voice on the AI-powered narration site Revoicer, a company she’d never worked for. Clarke had booked a text-to-speech gig for Microsoft Azure in 2020, not understanding that the recordings could be used by third parties. She said the job description indicated that the recordings would be “mainly for internal use, and possibly for end use down the line.”
That possibility was likelier than she expected. When Clarke’s voice appeared on Revoicer in January 2023, the mom of two young children said she worried her voice would be used for nefarious purposes.
“My older boy, who’s nearly 3, is starting to hear my voice on the radio and TV and knows it’s Mummy. And I just wonder when he gets a bit older and he comes across things on the internet that might be very unsavory and hears Mummy’s voice — that makes it extremely personal and extremely difficult for me,” she said.
Clarke’s contract with Microsoft gave the company the rights to her voice recordings in perpetuity. A Revoicer representative declined to comment, but a developer confirmed to the Washington Post in 2023 that the company had a licensing agreement with Microsoft, which would have given it access to Clarke’s sample.
“The allusion to ‘The Little Mermaid’ has been used so many times, but this is it. It’s Ursula scraping the bottom of the ocean to try and get absolutely everything that they can at the expense of culture, at the expense of art, at the expense of individuals, families, societies,” Clarke said. “It’s huge, and it’s all taking far too long for it to change for the better.”
Clarke said her voice has since been removed from the site after she spoke about the situation in several interviews.
A glimmer of hope
Some actors are trying to embrace voice cloning to stay ahead of the curve. Bob Carter, a seasoned Atlanta-based voice actor and owner of the recording space and voice-over education center the Neighborhood Studio, worked with AI company ElevenLabs to create a highly realistic clone of his voice. He’s paid every time his voice clone is used and can set parameters for how it’s utilized.
“I knew that there’s no stopping this. This train has already left the building. It is off and running,” Carter said. “I had to protect myself.”
Carter said the voice of his wife — actor and coach September Day Carter — was used without her knowledge, consent or compensation for a slew of projects.
“It’s always better to be proactive than reactive,” said Carter, 52. He’s now paid every eight days by ElevenLabs and said he takes comfort knowing he’s benefiting from how AI is transforming the industry, although he realizes some of his peers are hesitant to embrace the technology. “Change is scary when it happens to us, but it’s a good thing when it comes from us,” he said.
In addition to engaging voice actors directly, ElevenLabs has multiple safeguards in place to prevent users from cloning others’ voices.
“There is no single safety mitigation that is completely effective in preventing misuse on its own,” said Artemis Seaford, head of safety at ElevenLabs. “So what you want to have is essentially a safety stack, which is a series of safeguards that work together in order to provide a robust system against abuse.”
Some of those safeguards include a proprietary voice verification technology and several layers of screening and moderation to ensure users are using the technology only to clone their own voices.
A few states, including California and New York, are enacting legislation to protect against the misuse of unauthorized digital replicas, including video deepfakes and AI voice clones. But performers and creatives outside of those states remain at risk without federal legislation.
The Nurture Originals, Foster Art, and Keep Entertainment Safe Act (NO FAKES Act), introduced by U.S. Sen. Chris Coons (D-Del.), aims to address that gap. Scott Mortman, a lawyer and AI advisor who works with NAVA and teaches a course on AI law at Purdue University, said he’s “not optimistic” the law will pass anytime soon, despite its bipartisan support.
“Lord would hope if the two parties can agree on anything it would be the need to restrict the unlawful use of somebody’s image or voice or likeness, but that is to be determined because this administration overall appears to be quite resistant to any form of regulation and seems to be making a great effort to undo existing regulations,” Mortman said. “So whether or not this particular regulation ultimately gets signed into law very well may depend upon the person who has to sign it into law.”
As actors contend with quickly evolving voice replication technology and the threat of its misuse, many seem more aligned with Meyer, the 26-year-old who turned down a lucrative AI voice-clone job, than with Carter. Whether his voice would be distinguishable or just one of many voices layered to create a new product, Meyer said he didn’t want to be “complicit in the destruction of digital media.”
Meyer said those who deem voice cloning just the latest in a string of technological advancements in Hollywood, like CGI, are not seeing the full picture. CGI, he said, “made it easier to tell stories that were once thought impossible to tell,” solving a problem. To Meyer, voice cloning doesn’t come close to accomplishing that goal.
“It created a problem that didn’t exist.”