Looking forward to all the vulnerable AI-generated code
looking forward to using AI to find vulnerabilities in AI code.

Let the war games begin :flan_hacker:
It's like a GAN but with real world consequences!
Many (most?) classes of bugs involve the developer having the wrong mental image of what’s going on under the abstraction they’re working with. Seems to me like writing code by prompting an AI might be the ultimate invitation to this.
but the “ai” isn’t writing anything. It’s making a statistical pastiche.

It’s the code equivalent of word salad: it gets only the simplest use cases mostly correct, and only then because it’s regurgitating chunks of other people’s code…

We need to stop talking about these LLMs as though there’s any intent or design behind what they do. It’s not helpful, and it isn’t backed up by what these systems actually do; it’s the fetishization of a SciFi trope.
That's the point: having no mental model at all is even worse than having a wrong one