Security tests show that AI‑generated passwords follow predictable patterns and can be cracked quickly, prompting calls to audit and rotate them.
Generative AI tools are turning out to be surprisingly bad at creating strong passwords, with security researchers warning that chatbots such as Claude, ChatGPT, and Gemini often produce “strong‑looking” passphrases that are actually predictable and crackable in hours, according to a report by The Register.
The findings come from AI‑security firm Irregular, which tested several large‑language‑model (LLM) systems and concluded that LLM‑generated passwords are “fundamentally weak” and should not be trusted for sensitive accounts.
In the test, Claude, OpenAI’s GPT‑5.2, and Google’s Gemini 3 Flash were prompted to generate 16‑character passwords mixing uppercase and lowercase letters, numbers, and special characters. The resulting strings appeared complex and were often rated “strong” by online password‑strength checkers, some of which estimated cracking times of centuries on standard hardware.
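Strength meters of this kind typically score on length and character‑class coverage alone, which is why patterned output can still rate well. A minimal Python sketch of that scoring logic, using a hypothetical password in the format described (not a string from Irregular's test):

```python
import re

# A minimal sketch of the length-plus-character-class scoring that many
# online strength checkers use. The password below is a hypothetical
# example, not one produced in the test.
def naive_strength(pw: str) -> str:
    classes = sum(bool(re.search(p, pw)) for p in (r"[a-z]", r"[A-Z]", r"\d", r"\W"))
    return "strong" if len(pw) >= 16 and classes == 4 else "weak"

print(naive_strength("Kp9#mX2$vL5@qR7!"))  # "strong", no matter how
                                           # predictably it was generated
```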
In reality, password complexity checkers do not account for the common patterns the models repeatedly use. When Claude Opus 4.6 was asked to generate a password in 50 separate sessions, only 30 of the results were unique; the other 20 were duplicates, 18 of them the exact same string, and most started and ended with the same characters. None of the 50 passwords contained a repeating character, another sign they were not truly random: a genuinely random 16‑character string will usually repeat at least one character. Similar patterns showed up across GPT‑5.2 and Gemini 3 Flash outputs, and even in passwords “written” on a Post‑It note by Google’s Nano Banana Pro image‑generation model.
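The kind of analysis described is straightforward to reproduce. Here is a short Python sketch; the sample strings and duplicate counts are hypothetical stand‑ins, not Irregular's data:

```python
from collections import Counter

# Hypothetical stand-in samples; in the real test each string came from
# a fresh model session, and there were 50 of them.
passwords = [
    "Kp9#mX2$vL5@qR7!",
    "Kp9#mX2$vL5@qR7!",  # duplicated across "sessions"
    "Tz4&nB8*wQ1%jF6^",
    "Kp9#mX2$vL5@qR7!",
    "Hd3!sG7@kM9#pV2$",
]

counts = Counter(passwords)
print(f"distinct: {len(counts)} of {len(passwords)}")
print("duplicates:", {pw: n for pw, n in counts.items() if n > 1})

# Roughly three quarters of genuinely random 16-character strings over a
# ~94-symbol alphabet contain at least one repeated character (a
# birthday-problem effect), so zero repeats across 50 samples stands out.
no_repeats = sum(len(set(pw)) == len(pw) for pw in passwords)
print(f"no repeated characters: {no_repeats} of {len(passwords)}")
```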
The firm then applied Shannon entropy calculations and estimated that 16‑character LLM‑generated passwords carry roughly 20–27 bits of entropy, far below the 98–120 bits expected of truly random strings of the same length. Such low entropy, the team argues, means an attacker who understands the models’ patterns could brute‑force these passwords in a matter of hours, even on older hardware.
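For context, the entropy ceiling is easy to derive: a 16‑character password drawn uniformly from the 94 printable ASCII symbols carries 16 × log2(94) ≈ 105 bits, inside the 98–120‑bit range cited. Below is one way (an assumed method, not necessarily Irregular's exact approach) to estimate entropy from observed samples in Python:

```python
import math
from collections import Counter

# Ceiling for a truly random 16-char password over 94 printable symbols:
print(16 * math.log2(94))  # ~104.9 bits

def shannon_bits(symbols: list[str]) -> float:
    # Shannon entropy, in bits, of the observed symbol distribution.
    counts = Counter(symbols)
    n = len(symbols)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# Hypothetical samples. One crude estimate sums per-position entropy,
# which shrinks fast when outputs share prefixes, suffixes, or whole
# strings, as the patterned model outputs did.
samples = [
    "Kp9#mX2$vL5@qR7!",
    "Kp9#mX2$vL5@qR7!",
    "Tz4&nB8*wQ1%jF6^",
    "Kp3#mX2$vL5@qR7!",
]
estimate = sum(shannon_bits([pw[i] for pw in samples]) for i in range(16))
print(estimate)  # far below the ~105-bit ceiling
```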
Many of the character sequences common in AI‑generated passwords already appear in open‑source code and documentation on platforms such as GitHub, suggesting developers have baked LLM‑generated passwords into real projects. Irregular urges organizations to audit and rotate any passwords created with LLMs, and warns that the problem may extend beyond passwords as AI‑assisted coding and “vibe coding” become more widespread.
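When rotating, the usual remedy (general practice, not a prescription from the report) is to generate replacements with a cryptographic random number generator rather than a model. A minimal Python sketch:

```python
import math
import secrets
import string

# A 70-symbol alphabet: 16 * log2(70) ≈ 98 bits, the low end of the
# range quoted for truly random 16-character strings.
ALPHABET = string.ascii_letters + string.digits + "!@#$%^&*"

def random_password(length: int = 16) -> str:
    # secrets.choice draws from the OS CSPRNG, so outputs are
    # independent across calls and sessions.
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

print(random_password())
print(round(16 * math.log2(len(ALPHABET)), 1))  # ~98.1 bits
```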