Tuesday, February 3, 2026

The threat of deepfake-based job applications is rapidly becoming a critical issue in the cybersecurity landscape. Jason Rebholz, CEO of an AI security startup, revealed that he was nearly deceived by a sophisticated scam believed to be linked to North Korean IT operatives. Despite being a seasoned security professional with prior experience as a CISO and years of research into deepfakes, Rebholz almost fell victim to an impostor posing as a candidate for a security researcher role. The attack began with a fake LinkedIn profile and a polished, AI-generated résumé. The incident underscores that any organization, regardless of size, and even its most experienced experts, can be victimized without early skepticism and careful scrutiny.
During a video interview, Rebholz noticed several red flags: the applicant initially tried to keep the camera off, displayed unnaturally smooth, plastic-like facial features, and had eyeglass reflections consistent with a green screen. The most dangerous factor, however, was not just the improving realism of the technology but the interviewer's psychological hesitation: the fear of misjudging and unfairly rejecting a “real” person. That hesitation allowed the deception to nearly succeed, until the video was analyzed with deepfake detection technology and confirmed to be fabricated. The impostor also repeated questions and delivered answers that closely mirrored Rebholz's own public posts, suggesting the use of AI for real-time response generation, a tactic that can unsettle interviewers even when they are 95% confident they are looking at a deepfake.
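The article does not say which detection tool confirmed the fabrication. As a purely illustrative sketch, the Python below shows one way an organization might screen a recorded interview: sample frames from the video and average the scores of a forgery classifier. The functions load_detector and score_frame are hypothetical placeholders for whatever commercial or open-source model is in use; only the OpenCV decoding calls are real APIs.

```python
# Rough sketch of frame-sampled deepfake screening for a recorded interview.
# Only the OpenCV calls are real; load_detector() and score_frame() are
# hypothetical placeholders for whatever forgery-detection model is licensed.
import cv2


def looks_fabricated(video_path: str, threshold: float = 0.8, stride: int = 30) -> bool:
    """Return True if sampled frames score as synthetic on average."""
    detector = load_detector()          # hypothetical: load a face-forgery classifier
    cap = cv2.VideoCapture(video_path)  # OpenCV video decoder (real API)
    scores, frame_idx = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:                      # end of stream
            break
        if frame_idx % stride == 0:     # sample roughly one frame per second at 30 fps
            scores.append(score_frame(detector, frame))  # hypothetical: returns a 0..1 fake score
        frame_idx += 1
    cap.release()
    return bool(scores) and sum(scores) / len(scores) >= threshold
```

Averaging over many sampled frames, rather than trusting any single frame, is a common way to reduce false positives caused by compression artifacts or momentary lighting glitches.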
Data from major technology companies such as Amazon indicates that more than 1,800 North Korean impostor applicants were blocked within a short period, highlighting that the problem has spread across the Fortune 500. The risk extends beyond wasted payroll to potential source code theft and extortion. Experts therefore recommend stricter countermeasures: disabling virtual backgrounds during interviews, asking candidates to interact with physical objects in the room, or requiring new hires to work on-site during their first week to verify identity. Most importantly, organizations are urged to trust their instincts over social politeness when something feels wrong.
Source: https://www.theregister.com/2026/02/01/ai_security_startup_ceo_posts/
