Enterprise security teams are losing ground to AI-enabled attacks — not because defenses are weak, but because the threat model has shifted. As AI agents move into production, attackers are exploiting runtime weaknesses where breakout times are measured in seconds, patch windows in hours, and traditional security has little visibility or control.CrowdStrike’s 2025 Global Threat Report documents breakout times as fast as 51 seconds. Attackers are moving from initial access to lateral movement before most security teams get their first alert. The same report found 79% of detections were malware-free, with adversaries using hands-on keyboard techniques that bypass traditional endpoint defenses entirely.CISOs’ latest challenge is not getting reverse-engineered in 72 hoursMike Riemer, field CISO at Ivanti, has watched AI collapse the window between patch release and weaponization.”Threat actors are reverse engineering patches within 72 hours,” Riemer told VentureBeat. “If a customer doesn’t patch within 72 hours of release, they’re open to exploit. The speed has been enhanced greatly by AI.”Most enterprises take weeks or months to manually patch, with firefighting and other urgent priorities often taking precedence. Why traditional security is failing at runtimeAn SQL injection typically has a recognizable signature. Security teams are improving their tradecraft, and many are blocking them with near-zero false positives. But “ignore previous instructions” carries payload potential equivalent to a buffer overflow while sharing nothing with known malware. The attack is semantic, not syntactic. Prompt injections are taking adversarial tradecraft and weaponized AI to a new level of threat through semantics that cloak injection attempts.Gartner’s research puts it bluntly: “Businesses will embrace generative AI, regardless of security.” The firm found 89% of business technologists would …