AI-Generated Passwords Found Vulnerable to Easy Cracking

The Promise and Pitfall of AI-Generated Passwords

In recent years, artificial intelligence (AI) has transformed numerous aspects of our digital lives, from content creation to cybersecurity. However, a new study from cybersecurity firm Irregular has raised concerns over the actual security of passwords generated by large language models (LLMs) such as ChatGPT, Claude, and Gemini. Despite their apparent complexity and randomness, these AI-generated passwords have been found to be fundamentally insecure and surprisingly easy to crack.

How AI Models Generate Passwords

To evaluate the strength of AI-generated passwords, Irregular tasked Claude, ChatGPT, and Gemini with creating 16-character passwords that included special characters, numbers, and letters, mimicking the requirements of most secure systems. At first glance, the output appeared robust and even scored well on strength estimators such as the one built into the KeePass password manager. However, the research revealed that beneath this facade of complexity, the passwords were highly susceptible to attack because of predictable patterns in how they were generated.

Patterns Undermine Security

One of the key findings was that LLMs struggle with true randomization. When prompted to create 50 unique passwords, Anthropic’s Claude Opus 4.6 model consistently began each password with a letter—often an uppercase ‘G’—followed by the digit ‘7.’ The characters ‘L,’ ‘9,’ ‘m,’ ‘2,’ ‘$,’ and ‘#’ appeared in all 50 passwords, while much of the alphabet was conspicuously absent. This pattern was not an isolated issue; OpenAI’s ChatGPT began nearly every password with ‘v,’ with ‘Q’ frequently appearing as the second character. Gemini, Google’s LLM, started most passwords with ‘K’ or ‘k,’ followed by a handful of predictable symbols or numbers like ‘#,’ ‘P,’ or ‘9.’
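To make that positional bias concrete, the sketch below tallies the most common character at each position across a sample of passwords. The three example strings are invented stand-ins that mimic the reported pattern, not Irregular's actual data.

    from collections import Counter

    # Invented stand-ins (not Irregular's dataset), built to mimic the
    # reported bias: a fixed "G7" opening and a few recurring symbols.
    passwords = [
        "G7Lm9$2#aXpQw4rT",
        "G7mL$92#bYqRw5sU",
        "G7L9m2#$cZrSw6tV",
    ]

    # Truly random output would spread characters evenly at every position;
    # biased output piles up on a few favorites, as the study describes.
    for position in range(16):
        counts = Counter(pw[position] for pw in passwords)
        char, freq = counts.most_common(1)[0]
        print(f"position {position:2d}: '{char}' in {freq}/{len(passwords)} passwords")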

Researchers also noted that the AI-generated passwords never repeated a character. While that might look like a mark of randomness, it is itself a statistical red flag: a truly random 16-character password would contain at least one repeated character most of the time, so fifty repeat-free passwords in a row is vanishingly unlikely. As Irregular explained, “Probabilistically, this would be very unlikely if the passwords were truly random.” The models were producing output that merely looked random to human eyes, without the statistical entropy genuine security requires.
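How unlikely? Assuming a 70-symbol pool (the same assumption behind the 6.13-bits-per-character figure cited below), a short birthday-problem calculation puts a number on it:

    import math

    ALPHABET = 70   # assumed pool size; log2(70) ~= 6.13 bits per character
    LENGTH = 16
    SAMPLE = 50

    # Chance that ONE truly random 16-character password contains no
    # repeated character: a birthday-problem product.
    p_one = math.prod((ALPHABET - i) / ALPHABET for i in range(LENGTH))

    # Chance that ALL 50 passwords come out repeat-free, as observed.
    p_all = p_one ** SAMPLE
    print(f"one password repeat-free: {p_one:.3f}")   # ~0.156
    print(f"all 50 repeat-free:       {p_all:.1e}")   # ~5e-41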

The Importance of Entropy in Password Security

Password security is often measured in bits of entropy, where each additional bit doubles the number of combinations an attacker must search. The higher the entropy, the more difficult the password is to guess or brute-force. For example, a password with just 20 bits of entropy has about a million (2^20) possible combinations, which could be cracked in seconds with modern hardware. In contrast, a password with 100 bits of entropy would take trillions of years to exhaust using current technology.
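A back-of-the-envelope sketch of that arithmetic follows; the guess rate here is an assumed round figure, since real cracking speeds vary enormously with hardware and hash algorithm.

    GUESSES_PER_SECOND = 1e9  # assumed rate; real rigs vary widely

    def worst_case_years(bits: int) -> float:
        """Time to exhaust a keyspace of `bits` bits at the assumed rate."""
        seconds = (2 ** bits) / GUESSES_PER_SECOND
        return seconds / (60 * 60 * 24 * 365)

    for bits in (20, 27, 98, 100):
        print(f"{bits:3d} bits -> {2 ** bits:.2e} combinations, "
              f"~{worst_case_years(bits):.1e} years to exhaust")

At this rate, a 20-bit keyspace falls in about a millisecond, while 100 bits works out to roughly 4e13 (tens of trillions of) years.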

According to Irregular's research, a genuinely secure password should provide around 6.13 bits of entropy per character. The AI-generated passwords lagged far behind at roughly 2.08 bits per character, and by the firm's measurements a standard 16-character LLM password carried only about 27 bits of entropy in total, versus the roughly 98 bits (16 x 6.13) expected of a truly random one. Twenty-seven bits corresponds to about 134 million combinations, a keyspace that modern cracking hardware exhausts in well under a second, leaving LLM-generated passwords alarmingly vulnerable to brute-force attacks.
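For intuition, here is one deliberately rough way to estimate bits per character from a sample. It pools every character into a single distribution, ignoring positional effects, and is an illustrative method rather than Irregular's published one; a truly random baseline should land near the theoretical 6.13 bits.

    import math
    import secrets
    import string
    from collections import Counter

    POOL = string.ascii_letters + string.digits + "!@#$%^&*"  # 70 symbols

    def bits_per_character(passwords: list[str]) -> float:
        """Pooled Shannon-entropy estimate per character over a sample."""
        pooled = "".join(passwords)
        total = len(pooled)
        counts = Counter(pooled)
        return -sum((n / total) * math.log2(n / total) for n in counts.values())

    # Baseline: 50 truly random 16-character passwords should land close
    # to the theoretical maximum of log2(70) ~= 6.13 bits per character.
    sample = ["".join(secrets.choice(POOL) for _ in range(16)) for _ in range(50)]
    print(f"random sample:   {bits_per_character(sample):.2f} bits/char")
    print(f"theoretical max: {math.log2(len(POOL)):.2f} bits/char")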

Real-World Implications

While it may seem simple for users to avoid these weak passwords by not relying on AI tools for generation, the reality is more complex. AI agents are increasingly being tasked with automating coding and other technical work, including the creation of login credentials for applications and services. Researchers discovered that the patterns found in LLM-generated passwords are already present in the wild—surfacing in code repositories on platforms like GitHub and embedded in technical documentation.

Google’s Gemini has even issued warnings, advising users not to use its suggested passwords for sensitive accounts. Nevertheless, as more workflows are automated and delegated to AI, the risk of weak, predictable passwords proliferates across a growing number of systems and applications.

A Problem Without an Easy Fix

Experts at Irregular caution that this is not a flaw that can be remedied by simply tweaking prompts or adjusting model temperature settings. “People and coding agents should not rely on LLMs to generate passwords. Passwords generated through direct LLM output are fundamentally weak, and this is unfixable by prompting or temperature adjustments: LLMs are optimized to produce predictable, plausible outputs, which is incompatible with secure password generation,” the firm stated.

At the time of publication, representatives from Anthropic, OpenAI, and Google had not responded to requests for comment regarding these findings.

Staying Secure in an AI-Driven World

For individuals and organizations alike, the takeaway is clear: do not use LLMs or AI chatbots as password generators for sensitive accounts. Instead, rely on traditional password managers and tools specifically designed to maximize entropy and randomness. As AI becomes more integrated into our daily digital routines, vigilance and a clear understanding of its limitations will be essential for maintaining robust cybersecurity.
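As a minimal sketch of the safer route, Python's standard secrets module, the kind of cryptographic primitive password managers are built on, draws every character with full randomness:

    import secrets
    import string

    # A 16-character draw from this 94-symbol pool carries roughly
    # 16 * log2(94) ~= 105 bits of entropy.
    POOL = string.ascii_letters + string.digits + string.punctuation

    def generate_password(length: int = 16) -> str:
        return "".join(secrets.choice(POOL) for _ in range(length))

    print(generate_password())

Dedicated password managers apply the same principle behind audited, user-friendly interfaces.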


