A security‑focused executive nearly hires a synthetic candidate during a video interview, exposing how realistic fraud can evade even expert scrutiny.
An AI‑security startup CEO with years of experience as a CISO and deepfake researcher thought he was safe from the kind of scams he warns others about, until a deepfake job candidate almost slipped through his own hiring process, according to The Register.
Jason Rebholz, CEO of Evoke Security, had recently posted openings for security‑research roles on LinkedIn, and within hours a stranger messaged him claiming to know a strong candidate for the position.
The would‑be applicant’s profile picture was an anime character rather than a real person, an obvious first red flag, but Rebholz still gave the candidate the benefit of the doubt because the résumé looked polished and professional.
The intermediary then claimed the candidate had worked overseas and that the two had previously collaborated at a San Francisco‑based company, a detail that struck Rebholz as odd but not impossible for a remote‑first startup. After an email exchange, the candidate joined a video call using a virtual background; his face looked slightly blurred and “plastic”, with telltale green‑screen reflections in his glasses and dimples that flickered in and out as he moved.
Rebholz later described feeling “95% sure” he was talking to a deepfake, yet he hesitated to call it out, worrying he might unfairly reject a real person who simply looked unusual on camera.
During the interview, the candidate repeatedly echoed Rebholz’s own public statements and even parroted his questions back before answering, creating an eerie sense that Rebholz was essentially talking to himself. He later learned that the applicant was likely part of a broader North Korean‑linked job‑scam network that uses fake IT profiles to infiltrate companies, steal code, and funnel salaries back to fund weapons programs.
In the end, Rebholz did not terminate the interview on the spot, but he later confirmed with deepfake‑detection tools that the video was synthetic, underscoring how even seasoned security leaders can be psychologically disarmed by the fear of misjudging a real human candidate.