It’s refreshing when a leading AI company states the obvious. In a detailed post on hardening ChatGPT Atlas against prompt injection, OpenAI acknowledged what security practitioners have known for years: “Prompt injection, much like scams and social engineering on the web, is unlikely to ever be fully ‘solved.'”What’s new isn’t the risk — it’s the admission. OpenAI, the company deploying one of the most widely used AI agents, confirmed publicly that agent mode “expands the security threat surface” and that even sophisticated defenses can’t offer deterministic guarantees. For enterprises already running AI in production, this isn’t a revelation. It’s validation — and a signal that the gap between how AI is deployed and how it’s defended is no longer theoretical.None of this surprises anyone running AI in production. What concerns security leaders is the gap between this reality and enterprise readiness. A VentureBeat survey of 100 technical decision-makers found that 34.7% of organizations have deployed dedicated prompt injection defenses. The remaining 65.3% either haven’t purchased these tools or couldn’t confirm they have.The threat is now officially permanent. Most enterprises still aren’t equipped to detect it, let alone stop it.OpenAI’s LLM-based automated attacker found gaps that red teams missed OpenAI’s defensive architecture deserves scrutiny because it represents the current ceiling of what’s possible. Most, if not all, commercial enterprises won’t be able to replicate it, which makes the advances they shared this week all the more relevant to security leaders protecting AI apps and platforms in development.The company built an “LLM-based automated attacker” trained end-to-end with reinforcement learning to discover prompt injection vulnerabilities. Unlike trad …