AI Job Applicants: The New Fraud Wave Impersonating Candidates in Video Interviews
In the United States, a recruiter conducting a live video interview encountered a bot impersonating a job applicant. The incident highlights how rapidly AI-driven fraud schemes are evolving: no longer limited to generating résumés, these systems can now participate in online interviews using synthetic voices and fabricated video feeds. Experts warn that the risk goes beyond wasted time: North Korean networks are deploying fake identities to secure remote IT positions, gaining access to critical systems and, ultimately, classified information. In response, major corporations are partially reintroducing in-person interviews.
The story began at Nisos, a company hiring engineers and AI specialists for remote roles. Chief Talent Officer Megan Giacinto received an alert from a manager: a candidate's behavior seemed suspicious. During a follow-up interview, troubling signs emerged. The applicant's answers lagged, and he continually glanced to the side, as though reading prompts from another screen. His speech was clipped, marked by unnatural pauses and rehearsed phrasing. And his appearance did not match the seniority on his résumé: he looked far too young for the roles he claimed.
To test her suspicions, Giacinto shifted to behavioral questions requiring detailed accounts of prior responsibilities, results, and team reactions. Here the bot faltered, stumbling over specifics. She then asked a simple grounding question about the local weather, a tactic often used to confirm a candidate's claimed location. The answer was wrong, and the "applicant" unraveled completely under further questioning.
A similar case was described by Vidoc Security co-founder Dawid Moczadło, who suspected that an interviewee was wearing a real-time deepfake mask. To verify, he asked for a simple gesture: cover the face with a hand. The candidate refused, and the call ended abruptly. The principle is straightforward: occluding the face can break the filter, instantly exposing the deception.
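To illustrate why the gesture works, here is a minimal Python sketch of an automated version of the occlusion test, assuming frames can be sampled from the call and that OpenCV is installed. The heuristic, the 0.8 threshold, and the function names are illustrative assumptions, not Vidoc Security's actual method: the sketch flags the telltale failure mode where a naive real-time face swap keeps rendering a clean face even while the candidate is supposedly covering it.

```python
import cv2

# OpenCV's bundled Haar cascade for frontal faces (ships with opencv-python).
face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

def face_visible(frame_bgr) -> bool:
    """Return True if a frontal face is detected in the BGR frame."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    return len(faces) > 0

def occlusion_check(frames_during_gesture) -> str:
    """Crude liveness heuristic: while the candidate is asked to cover
    their face, a genuine feed should lose the face; a naive deepfake
    filter may keep rendering one. The 0.8 threshold is arbitrary."""
    if not frames_during_gesture:
        return "NO DATA: no frames captured during the gesture"
    visible = sum(face_visible(f) for f in frames_during_gesture)
    if visible / len(frames_during_gesture) > 0.8:
        return "SUSPICIOUS: face stayed detectable during the occlusion request"
    return "OK: face detection dropped out as expected"
```

A human reviewer should still make the final call: detection dropout can also be caused by lighting or camera angle, so a check like this is a prompt for scrutiny, not a verdict.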
Why this is dangerous: such impostors typically target engineering and IT roles with access to sensitive infrastructure and data. Security specialists report that North Korean networks actively deploy fake personas to infiltrate companies in the United States, Japan, and beyond — seeking to circumvent sanctions, earn salaries, and, where possible, exfiltrate valuable intelligence.
How the market is responding: major employers are reinstating in-person stages. According to The Wall Street Journal, companies such as Google, Cisco, and McKinsey already conduct face-to-face interviews at select hiring phases. Dallas-based recruiting firm Coda Search/Staffing reports that the share of clients requesting in-person interviews has jumped from 5% last year to 30% this year. A full return to traditional processes remains difficult, especially for firms that hire hundreds of engineers a year in fully remote settings, but live evaluations are being revived, at least for the final rounds.
Meanwhile, a broader issue looms: a flood of auto-generated résumés. LinkedIn reports a 45% year-over-year surge in applications, around 11,000 submissions per minute. Many are easily filtered out due to inconsistent dates or exaggerated roles, as sketched below. As Giacinto explains, a typical red flag might be "a recent graduate suddenly claiming to have led a large team of developers" in their very first job. The paradox is that weak applications from real people may also be discarded if they resemble AI-generated fabrications.
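As a concrete illustration of that filtering step, here is a minimal Python sketch of the two consistency checks named above: impossible date ranges, and leadership claims in a very first post-graduation role. The Role fields, the one-year "recent graduate" window, and the function name are hypothetical choices, not LinkedIn's or any recruiter's actual screening logic.

```python
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class Role:
    title: str
    start: date
    end: date
    led_large_team: bool = False  # claims leadership of a large dev team

def resume_red_flags(roles: list[Role], graduation: date) -> list[str]:
    """Flag the inconsistencies described above. Thresholds are illustrative."""
    flags = [
        f"{r.title}: end date precedes start date"
        for r in roles
        if r.end < r.start
    ]
    if roles:
        first = min(roles, key=lambda r: r.start)
        # "Recent graduate": first job starts within a year of graduation.
        if first.led_large_team and first.start <= graduation + timedelta(days=365):
            flags.append(f"{first.title}: large-team leadership claimed in first post-grad role")
    return flags

# Example: a fresh graduate whose very first role claims team leadership.
roles = [Role("Engineering Lead", date(2024, 7, 1), date(2025, 6, 1), led_large_team=True)]
print(resume_red_flags(roles, graduation=date(2024, 5, 15)))
```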
Key signals of impersonation in interviews, as observed by Nisos and Vidoc Security (a minimal scoring sketch follows the list):
- Noticeable delays before answering and constant sideways glances
- Stilted speech devoid of natural intonation
- Appearance inconsistent with claimed experience
- Inability to answer detailed behavioral questions
- Refusal to perform a simple gesture on camera, such as covering the face (deepfake masks often break under such tests)
- Incorrect answers to grounding questions about everyday life, such as the local weather
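For teams that want to apply these signals consistently rather than ad hoc, here is the minimal scoring sketch referenced above, assuming interviewers record each signal as a boolean after the call. The signal names and the two-signal escalation threshold are illustrative choices, not an established standard.

```python
from dataclasses import dataclass, fields

@dataclass
class InterviewSignals:
    # One flag per checklist item above; all names are illustrative.
    delayed_answers_or_sideways_glances: bool = False
    stilted_unnatural_speech: bool = False
    appearance_experience_mismatch: bool = False
    failed_behavioral_questions: bool = False
    refused_gesture_test: bool = False
    failed_grounding_question: bool = False

def assess(signals: InterviewSignals) -> str:
    """Count recorded signals and suggest a next step.
    Thresholds are arbitrary and should be tuned per process."""
    hits = [f.name for f in fields(signals) if getattr(signals, f.name)]
    if len(hits) >= 2:
        return f"ESCALATE to identity verification ({len(hits)} signals: {', '.join(hits)})"
    if hits:
        return f"PROBE in a live follow-up (signal: {hits[0]})"
    return "PASS: no impersonation signals recorded"
```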
Takeaway for hiring teams: strengthen vetting where data access is at stake; restore in-person interviews at least in final stages; shift away from binary yes/no formats to probing behavioral questions; and incorporate simple "grounding" checks during video calls. Above all, remember that the gap between a sophisticated bot and a genuine candidate is narrowing rapidly. They can still be distinguished, but only through a deliberate, well-structured process.