How to Tell If Your Job Candidate is an AI Deepfake


Dawid Moczadło is the chief technology officer at Vidoc Security Lab, which uses AI to find and fix errors in source code. He’s a tech guy, yet twice recently he’s been targeted by scams designed to embed bad actors in tech companies.

In late 2024, Vidoc was looking for a remote developer. One candidate passed several phone screenings and the technical evaluation. Next was an on-camera interview. “I don’t know how to describe it, but our intuition was that something was up with him,” Moczadło says. “His answers were good, but the camera looked really weird.” Spooked by the feeling that something was just off, Vidoc ultimately turned him down.

In January, Moczadło interviewed another candidate. Vidoc gave advance notice of the on-camera interview. At meeting time, the candidate said his camera wasn’t working and asked to reschedule on a different platform. “When he turned on his camera, I immediately knew. I was like, oh, my god, it happened again.”

A screenshot from Dawid Moczadło’s call with a job applicant using AI-generated video to disguise his appearance.

Lately, some recruiters and hiring managers are finding themselves on video calls with job applicants they suspect are using AI to manipulate their appearance or voice. 

Manisha Bavabhai is an Atlanta-based senior technical recruiter at health care payments platform Rialtic. She’s interviewed several candidates for full-stack developer jobs who she suspects were manipulating their voices, which were either entirely fabricated or belonged to someone off camera. In London, Conor Larkin, a senior recruiting manager at tech staffing firm Harnham, also believes he’s encountered deepfake candidates. Though it’s hard to explain exactly why (maybe the voice doesn’t quite match the face, or the audio is out of sync with the image), they both say they leave the call with the sense that something is off about the job seeker.

How it works

To pull off the scam, bad actors invent false identities and use them to apply for jobs. Like Vidoc’s candidate, they’re often qualified and able to pass screening calls, complete technical interviews, and speak to a hiring manager. When they’re asked to give an on-camera interview, they either defer, claiming a bad internet connection, or they manipulate their image or voice to disguise who they are.

Cybersecurity experts say it’s getting easier to create synthetic images. In a video call with Inc., a representative from Reality Defender, which makes deepfake detection software, demonstrated, step-by-step, how easy it is to spin up a deepfake likeness with off-the-shelf tools available to virtually anyone with an internet connection. From start to finish, it took less than two minutes.

This used to take huge amounts of computing power, says Reality Defender CEO Ben Colman. Now, all you need is a high-powered gaming laptop and a good internet connection. “In the last year, not only have the computational costs gone down by a factor of over 100, but a lot of the previous computational load that required you to spin up your own instance on Amazon AWS, Google Cloud, or Azure now no longer needs any cloud compute. You could do it 100 percent locally on a computer.”

Who’s behind the deepfakes 

It’s not always clear who’s on the other side of these deepfake videos, but there are usual suspects. 

Bad actors working on behalf of hostile nation states like North Korea, Russia, and China use these tactics to worm their way into companies to collect paychecks, swipe company secrets, and access private data, says Colman. For years, North Korea has used cybercrime to fund its government. In January, the Justice Department indicted a group of North Koreans and one Mexican national for an employment scam that funneled money to the North Korean government.

Small-scale scammers pull these heists too, says Ryan LaSalle, CEO of cyber investigation firm Nisos. They get hired, gain access to internal company systems, and then sell the credentials online. Some are attempting basic corporate espionage to dig up trade secrets or swipe IP. In some cases, these are solo actors trying to get jobs, sometimes several jobs at once, for the money. Long-term employment is not the goal, and many collect just one paycheck.

These aren’t new scams, LaSalle explains, but “just a new technique to accelerate the effectiveness.” In fact, it’s not even limited to employers — deepfake interviewees are also showing up in university interviews.

Unlike some employment scams in which employers are collateral damage, employers are the prey here. Tech companies make good targets: They often hire remote workers, have valuable IP, and pay well. Vidoc is an early-stage startup, and by Moczadło’s own admission it’s inexperienced at recruiting and not naturally suspicious of its job applicants. “If the candidate said he had some problems with the camera, we just assumed, ‘OK, his camera is probably broken.’ We didn’t think about a situation where he could be an agent from North Korea.”

Yet he’s one of the few who’s caught it on camera. Scammers have been using deepfakes for a while, but they’re rarely captured in the wild, since it’s not common practice to record interviews, says LaSalle. He calls Vidoc’s video the “smoking gun.”

How to spot a deepfake candidate

As soon as he realized the image on screen was a deepfake, Moczadło started recording the call (and later posted it on LinkedIn). AI-generated video still struggles with boundary detection, which produces a hazy border around a person, like the fringe around someone using a virtual background. So Moczadło asked the candidate to hold up his hand: a real person’s hand would pass cleanly in front of his face, while a deepfake would glitch at the boundary. “If you don’t do it, we’ll end the conversation right now,” he says in the recording. The interviewee doesn’t even try.

Moczadło asked the candidate to put his hand in front of his face. Some AI-generated video struggles with boundary detection, and this technique can help spot a fake.

If you want to spot an AI-generated interviewee, Moczadło’s hand trick works. Some companies, like Reality Defender, are also building plugins to detect deepfakes in video calls. Vidoc has changed its recruiting policy, too, and now requires candidates to be on camera for all interviews. Experts recommend doing a gut check as well: If something just feels off about the video, ask if the candidate is willing to use a different platform.

If you suspect you’ve hired a scammer, you can suss them out. LaSalle recommends checking for productivity fakers, like mouse-jigglers, and making sure their IP address places them where they claim to be working. But proceed with caution, he says. No one wants to work at a company conducting a witch hunt. 

Despite the deepfakes, recruiters Larkin and Bavabhai are getting better at detecting suspicious candidates early on. LinkedIn profiles with blurry photos and no connections, resumes that look like they’ve been lifted from ChatGPT, and applicants who dodge questions in screening calls are all red flags. Both say they want to give people the benefit of the doubt: Poor video and audio can easily be chalked up to a bad internet connection or an old camera, and they don’t want to unduly penalize applicants. Still, Bavabhai admits her tolerance for suspicious incidents is much lower than it used to be.

Deepfakes like these will only get more convincing. “The AI is bad enough that we can detect it,” Moczadło says. “But in a year, it will get much better. I don’t know if we’ll even be able to tell if the person is a person.”
