Agentic AI is emerging as an abuse multiplier, with threat actors using AI-assisted Python code to target a two-factor authentication (2FA) bypass.
Agentic AI is turning from nascent productivity promise into a force multiplier for cyber abuse, giving threat actors speed, scale, and adaptability that make even familiar attack techniques far more dangerous.
According to a SecurityWeek report on 11 May 2026, a prominent cybercrime group launched what is believed to be the first known use of an AI-generated zero-day exploit, targeting a 2FA bypass in an open-source web admin tool.
The exploit’s Python script showed AI hallmarks such as structured formatting and odd scoring, but it failed to achieve mass exploitation. AI-discovered flaws are not new in themselves: Google’s Big Sleep model found a SQLite zero-day in 2024, albeit as a defensive discovery.
Although the perpetrators have not been named, several cybersecurity firms have weighed in on the incident:
- CSO Online noted that the significance of the incident is not simply that AI wrote code, but that it appears capable of finding high-level trust-assumption flaws in authentication logic. That shifts the concern from basic automation to AI-assisted exploit reasoning.
- Google Threat Intelligence Group stated with high confidence that AI assistance was used in the exploit’s creation, although its Gemini model was ruled out as a factor. Proactive vendor notification stopped the attack, highlighting AI’s growing role in adversary automation beyond phishing.
- Shane Barney, CISO at Keeper Security, said the episode shows AI is no longer just speeding up familiar attacks. “It has become an operational tool,” he said, adding that defenders still rely too heavily on multi-factor authentication (MFA) that is not resistant to phishing or logic-layer abuse.
Finally, researchers from Qualys blogged that AI is shrinking the time between vulnerability discovery and weaponization. They argued that security teams need to focus less on assuming they can prevent every breach and more on limiting blast radius when one happens.
The incident underscores the need for resilient authentication and zero-trust architecture, not just MFA. Earlier defensive AI discoveries predate this exploit, so AI-found vulnerabilities are not new, and this offensive use will certainly not be the last.
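What makes an authentication factor phishing-resistant, as the vendors above urge, is that the relying party's identity is bound into the signed response itself. The following is a minimal, illustrative sketch of that idea only — it uses an HMAC stand-in, not the actual WebAuthn/FIDO2 protocol, and all names (`make_assertion`, `verify`, the domains) are hypothetical:

```python
import hashlib
import hmac

# Illustrative sketch: a phishing-resistant factor binds the relying
# party's identity (here, a hash of its domain) into the signed
# assertion, so a response captured on a look-alike domain fails
# verification on the real site.

def make_assertion(device_key: bytes, rp_id: str, challenge: bytes) -> dict:
    """Simulated authenticator: signs over the RP ID hash plus challenge."""
    rp_id_hash = hashlib.sha256(rp_id.encode()).digest()
    sig = hmac.new(device_key, rp_id_hash + challenge, hashlib.sha256).digest()
    return {"rp_id_hash": rp_id_hash, "sig": sig}

def verify(device_key: bytes, expected_rp_id: str,
           challenge: bytes, assertion: dict) -> bool:
    """Server side: recompute over the *expected* RP ID, never one the
    client claims, then compare signatures in constant time."""
    expected_hash = hashlib.sha256(expected_rp_id.encode()).digest()
    if assertion["rp_id_hash"] != expected_hash:
        return False  # assertion was produced for some other site
    expected_sig = hmac.new(device_key, expected_hash + challenge,
                            hashlib.sha256).digest()
    return hmac.compare_digest(expected_sig, assertion["sig"])

key, challenge = b"device-secret", b"nonce-123"
good = make_assertion(key, "example.com", challenge)
phished = make_assertion(key, "examp1e.com", challenge)  # look-alike domain
print(verify(key, "example.com", challenge, good))     # True
print(verify(key, "example.com", challenge, phished))  # False
```

By contrast, a one-time code (SMS or TOTP) carries no binding to the site that requested it, which is why it can simply be relayed from a phishing page — the logic-layer gap Barney and the Qualys researchers point to.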