Dawid Moczadło is the chief technology officer at Vidoc Security Lab, which uses AI to find and fix errors in source code. He's a tech guy, but twice recently he's been targeted by scams designed to embed bad actors in tech companies.
In late 2024, Vidoc was looking for a remote developer. One candidate passed several phone screenings and the technical evaluation. Next was an on-camera interview. "I don't know how to describe it, but our intuition was that something was up with him," Moczadło says. "His answers were good, but the camera looked really weird." Spooked by the feeling that something was just off, Vidoc ultimately turned him down.
In January, Moczadło interviewed another candidate. Vidoc gave advance notice of the on-camera interview. At meeting time, the candidate said his camera wasn't working and asked to reschedule on a different platform. "When he turned on his camera, I immediately knew. I was like, oh, my god, it happened again."
Lately, some recruiters and hiring managers are finding themselves on video calls with job applicants they suspect are using AI to manipulate their appearance or voice.
Manisha Bavabhai is an Atlanta-based senior technical recruiter at health care payments platform Rialtic. She's interviewed several candidates for full-stack developer jobs who she suspects were manipulating their voice, which was either entirely fabricated or belonged to someone off camera. In London, Conor Larkin, a senior recruiting manager at tech staffing firm Harnham, also believes he's encountered deepfake candidates. Though it's hard to explain exactly why (maybe the voice doesn't quite match the face, or the audio is out of sync with the image), they both say they leave the call with the sense that something is off about the job seeker.
How it works
To pull off the scam, actors invent false identities to apply for jobs. Like Vidoc's candidate, they're often qualified and able to pass screening calls, complete technical interviews, and speak to a hiring manager. When they're asked to give an on-camera interview, they either defer, claiming a bad internet connection, or they manipulate their image or voice to disguise who they are.
Cybersecurity experts say it's getting easier to create synthetic images. In a video call with Inc., a representative from Reality Defender, which makes deepfake detection software, demonstrated, step by step, how easy it is to spin up a deepfake likeness with off-the-shelf tools available to virtually anyone with an internet connection. From start to finish, it took less than two minutes.
This used to take huge amounts of computing power, says Reality Defender CEO Ben Colman. Now, all you need is a high-powered gaming laptop and a good internet connection. "In the last year, not only have the computational costs gone down by a factor of over 100, but a lot of the previous computational load that required you to spin up your own instance on Amazon AWS, Google Cloud, or Azure no longer needs any cloud compute. You could do it 100 percent locally on a computer."
Who's behind the deepfakes
It's not always clear who's on the other side of these deepfake videos, but there are usual suspects.
Bad actors working on behalf of hostile nation states like North Korea, Russia, and China use these tactics to worm their way into companies to collect paychecks, swipe company secrets, and access private data, says Colman. For years, North Korea has used cybercrime to fund its government. In January, the Justice Department indicted a group of North Koreans and one Mexican national for an employment scam that funneled money to the North Korean government.
Small-scale scammers pull these heists too, says Ryan LaSalle, CEO of cyber investigation firm Nisos. They get hired, gain access to internal company systems, and then sell the credentials online. Some are attempting basic corporate espionage to dig up trade secrets or swipe IP. In some cases, these are solo actors trying to get jobs, sometimes several jobs at once, for the money. Long-term employment is not the goal, and many collect just one paycheck.
These aren't new scams, LaSalle explains, but "just a new technique to accelerate the effectiveness." In fact, it's not even limited to employers: deepfake interviewees are also showing up in university interviews.
Unlike some employment scams where employers are collateral damage, employers are the prey here. Tech companies make good targets: They often hire remote workers, have valuable IP, and pay well. Vidoc is an early-stage startup, and by Moczadło's own admission, it's inexperienced at recruiting and not naturally suspicious of its job applicants. "If the candidate said he had some problems with the camera, we just assumed, 'OK, his camera is probably broken.' We didn't think about a situation where he could be an agent from North Korea."
Yet he's one of the few who's caught it on camera. Scammers have been using deepfakes for a while, but it's difficult to capture them in the wild since it's not common practice to record interviews, says LaSalle. He calls Vidoc's video the "smoking gun."
How to spot a deepfake candidate
As soon as he realized the image on screen was a deepfake, Moczadło started recording the call (and later posted it on LinkedIn). Because AI-generated video still struggles with boundary detection (the hazy border that appears around a person using a background image), Moczadło asked the candidate to hold up his hand. If he were real, there would be no border between his face and his hand. "If you don't do it, we'll end the conversation right now," he says in the recording. The interviewee doesn't even try.
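For the technically inclined, that same halo artifact can be probed programmatically. Below is a minimal sketch in Python using OpenCV's bundled Haar face detector; the rim-versus-interior edge comparison, the `halo_score` helper, and any threshold you might apply to it are purely illustrative assumptions, not how Vidoc caught its candidate or how commercial detectors like Reality Defender's actually work.

```python
# Toy halo check: compares edge energy in a thin rim around a detected face
# box against edge energy inside it. A suspiciously smooth rim *may* hint at
# composited video. Heuristic and band width are illustrative assumptions.
import cv2
import numpy as np

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

def halo_score(frame_bgr):
    """Return ratio of edge energy at the face rim to edge energy inside it."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None
    x, y, w, h = faces[0]
    edges = np.abs(cv2.Laplacian(gray, cv2.CV_64F))
    pad = max(4, w // 20)  # width of the rim band around the face box
    inner = edges[y + pad : y + h - pad, x + pad : x + w - pad]
    outer = edges[max(0, y - pad) : y + h + pad, max(0, x - pad) : x + w + pad]
    rim_area = outer.size - inner.size
    if rim_area <= 0 or inner.size == 0:
        return None
    rim_mean = (outer.sum() - inner.sum()) / rim_area  # inner is a subset of outer
    return rim_mean / (inner.mean() + 1e-9)

cap = cv2.VideoCapture(0)  # default webcam
ok, frame = cap.read()
if ok:
    score = halo_score(frame)
    if score is not None:
        # An unusually low ratio (smooth rim) may indicate blending artifacts.
        print(f"rim/interior edge ratio: {score:.2f}")
cap.release()
```

A single frame proves little; a real check would track the ratio across many frames and, like the hand trick, look for moments of occlusion where a face-swap model visibly breaks down.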
If you want to spot an AI-generated interviewee, Moczadło's hand trick works. Some companies, like Reality Defender, are also building plugins to detect deepfakes in video calls. Vidoc has also changed its recruiting policy and requires candidates to be on camera for all interviews. Experts recommend doing a gut check too. If something just feels off about the video, ask if they're willing to use a different platform.
If you suspect you've hired a scammer, you can suss them out. LaSalle recommends checking for productivity fakers, like mouse-jigglers, and making sure their IP address places them where they claim to be working. But proceed with caution, he says. No one wants to work at a company conducting a witch hunt.
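LaSalle's IP check can be roughed out with public geolocation data. Here's a minimal sketch, assuming the free ipinfo.io JSON endpoint (rate-limited without an API token) and the `requests` library; the IP address and claimed country below are hypothetical placeholders, not anything from Nisos's actual tooling.

```python
# Rough location sanity check for a remote worker's IP address.
# Assumes the free ipinfo.io endpoint (rate-limited without a token);
# the IP and claimed country below are hypothetical placeholders.
import requests

def locate_ip(ip: str) -> dict:
    """Fetch coarse geolocation (city, region, country) for an IP address."""
    resp = requests.get(f"https://ipinfo.io/{ip}/json", timeout=10)
    resp.raise_for_status()
    return resp.json()

def matches_claim(ip: str, claimed_country: str) -> bool:
    """True if the IP geolocates to the country the worker claims."""
    info = locate_ip(ip)
    return info.get("country", "").upper() == claimed_country.upper()

if __name__ == "__main__":
    ip = "203.0.113.7"  # placeholder address from the TEST-NET-3 range
    if matches_claim(ip, "US"):
        print("IP is consistent with the claimed location.")
    else:
        print("Mismatch: IP geolocates elsewhere; worth a closer look.")
        # Note: VPNs and corporate proxies trigger false alarms, so treat
        # this as one signal, not proof; hence LaSalle's "proceed with caution."
```

Geolocation is coarse and easily defeated by a VPN, which is exactly why investigators treat it as one signal among several rather than a verdict.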
Despite the deepfakes, recruiters Larkin and Bavabhai are getting better at detecting suspicious candidates early on. LinkedIn profiles with blurry photos and no connections, resumes that look like they've been lifted from ChatGPT, or applicants dodging questions in screening calls are all red flags. Both say they want to give people the benefit of the doubt. Poor video and audio can easily be chalked up to a bad internet connection or an old camera, so they don't want to unduly penalize applicants. Still, Bavabhai admits her tolerance for suspicious incidents is much lower than it used to be.
Deepfakes like these will only get more convincing. "The AI is bad enough that we can detect it," Moczadło says. "But in a year, it will get much better. I don't know if we'll even be able to tell if the person is a person."