https://www.irregular.com/publications/vibe-password-generation
To security practitioners, the idea of using LLMs to generate passwords may seem silly. Secure password generation is nuanced, and requires care to implement correctly; the random seed, the source of entropy, the mapping of random output to password characters, and even the random number generation algorithm must be chosen carefully in order to prevent critical password recovery attacks. Moreover, password managers (generators and vaults) have been around for decades, and this is exactly what they’re designed to do.
At the heart of any strong password generator is a cryptographically-secure pseudorandom number generator (CSPRNG), responsible for generating the password characters in such a way that they are very hard to predict and are drawn from a uniform probability distribution over all possible characters.
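For contrast with the LLM sampling described below, here is a minimal sketch (not from the article) of what a CSPRNG-backed generator looks like, using Python's standard `secrets` module:

```python
import secrets
import string

def generate_password(length: int = 20) -> str:
    """Draw each character uniformly at random using the OS CSPRNG."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    # secrets.choice draws from os.urandom and avoids modulo bias, so every
    # character in the alphabet is equally likely -- the uniform-distribution
    # property that LLM token sampling does not provide.
    return "".join(secrets.choice(alphabet) for _ in range(length))
```

The alphabet and length here are illustrative choices; the point is that the entropy source and the mapping to characters are both handled by a vetted cryptographic primitive.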
Conversely, the LLM output token sampling process is designed to do exactly the opposite. Basically, all LLMs do is iteratively predict the next token; the random generation of tokens is, by definition, predictable (with the token probabilities decided by the LLM), and the probability distribution over all possible tokens is very far from uniform.
In spite of this, LLM-generated passwords are likely to be generated and used. First, with the explosive growth and significant improvement in capabilities of AI over the past year (which, at Irregular, we have also seen direct evidence of in the offensive security domain), AI is much more accessible to less technologically-inclined users. Such users may not know secure methods for password generation, may not place importance on them, and may rely on ubiquitous AI tools to generate a password instead of looking for a specialized tool such as a password manager. Moreover, while LLM-generated passwords are insecure, they appear strong and secure to the untrained eye, exacerbating the issue and reducing the likelihood that users will avoid these passwords.
Furthermore, with the recent surge in popularity of coding agents and vibe-coding tools, people are increasingly developing software without looking at the code. We’ve seen that these coding agents are prone to using LLM-generated passwords without the developer’s knowledge or choice. When users don’t review the agent actions or the resulting source code, this “vibe-password-generation” is easy to miss.
TFA presents results obtained from several major LLMs (GPT, Claude, and Gemini, in their latest and most capable variants) and finds that all of them generate weak passwords.
Originally spotted on Schneier on Security.
(Score: 4, Touché) by istartedi on Saturday February 28, @06:35PM (3 children)
You're using LLMs to do WHAT now???
OK, we've all rolled our eyes at the "developers are expensive, hardware is cheap" mentality. It has led to megabytes of JavaScript to display a few lines of text, and many other travesties, but this might take the cake.
Consider all the lines of code, vector processing, and power consumed by an LLM to do what you could easily accomplish far better via /dev/random, which seeds from real entropy, or without any computing power at all by rolling dice and such.
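A sketch of the parent's /dev/random suggestion in Python (reading the nonblocking /dev/urandom instead, which on modern kernels draws from the same pool), with rejection sampling so the byte-to-character mapping stays uniform:

```python
import string

def password_from_urandom(length: int = 20) -> str:
    """Map raw OS entropy bytes to password characters without bias."""
    alphabet = string.ascii_letters + string.digits
    # Largest multiple of len(alphabet) that fits in a byte; bytes at or
    # above this limit are discarded, otherwise the modulo step would make
    # some characters slightly more likely than others.
    limit = 256 - (256 % len(alphabet))
    chars = []
    with open("/dev/urandom", "rb") as rng:
        while len(chars) < length:
            byte = rng.read(1)[0]
            if byte < limit:
                chars.append(alphabet[byte % len(alphabet)])
    return "".join(chars)
```

This is the kind of care the summary alludes to: even with a perfect entropy source, a naive modulo mapping would skew the character distribution.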
Using LLMs to generate passwords? This might be the biggest waste-ratio ever, unless I'm missing something. What could I be missing?
(Score: 3, Insightful) by acid andy on Saturday February 28, @07:34PM
It's tempting to blame the effects of COVID on most people's brains, but people have always been terrible at choosing passwords, and LLMs are so ridiculously hyped up that it doesn't surprise me in the slightest that users would trust them to be competent enough to do something like this.
"rancid randy has a dialogue with herself[...] Somebody help him!" -- Anonymous Coward.
(Score: 2) by darkfeline on Sunday March 01, @05:22AM (1 child)
Alas, you missed reading the article.
This is about passwords getting generated as part of code or some other task. A user may see a "unique API key" in the generated code and assume it's good because it looks sufficiently random, but it's not. Presumably these are the same kinds of devs that don't follow best practices like keeping secrets outside of code. Or, if you'll allow me a jab, the kind of dev that doesn't carefully read articles about security issues like this one.
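The best practice the parent alludes to, keeping secrets out of source code, can be as simple as reading them from the environment at runtime (the variable name here is hypothetical):

```python
import os

def load_api_key(var: str = "MY_API_KEY") -> str:
    """Read a secret from the environment instead of hard-coding it."""
    key = os.environ.get(var)
    if not key:
        # Fail loudly rather than falling back to a baked-in "random" value.
        raise RuntimeError(f"set {var} in the environment before running")
    return key
```

A vibe-coded project, by contrast, may silently embed an LLM-invented key in the source itself, which is exactly the failure mode the article describes.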
Join the SDF Public Access UNIX System today!
(Score: 5, Touché) by istartedi on Sunday March 01, @05:58AM
You want us to read the articles? What do you think this is? Playboy?