The Patch Window Just Closed
Five million times.
That's how many times automated testing tools hammered the same flaw in FFmpeg — a piece of software that touches nearly every video you've ever streamed — without ever finding it. This week, an AI found it in one pass.
Anthropic announced Project Glasswing on Tuesday. If you're a CISO, CFO, or anyone who signs off on security budgets, the announcement matters less than what it means for every patch SLA currently posted in your SOC.
The gap between "vulnerability discovered" and "vulnerability exploited" — the window your entire patching strategy quietly depends on — just collapsed.
What Anthropic Actually Found
The headline numbers are impressive: thousands of high-severity vulnerabilities across every major operating system and browser. The details are what should keep audit committees up tonight:
- A 27-year-old bug in OpenBSD, one of the most security-hardened operating systems on Earth
- The FFmpeg flaw that conventional fuzzing tools had tested 5 million times without detecting
- Chained Linux kernel vulnerabilities that walk user accounts straight to root access
Found nearly autonomously. By a model Anthropic isn't even making generally available yet.
I've spent two decades watching security teams treat patching cadence like a religion. Thirty days for criticals. Sixty for highs. The religion's fine. The calendar isn't.
The Silent Assumption Underneath Every Patch Policy
Every CISO program I've reviewed — and I've reviewed dozens — quietly assumes a window between disclosure and exploitation. Days. Sometimes weeks. Time to triage the finding. Time to test the patch in dev. Time to schedule the maintenance window. Time to write the customer notification letter if something breaks.
That window is the silent assumption underneath every patch SLA on every wall in every security operations center.
CrowdStrike's own line in Anthropic's announcement captures it: what once took security researchers months now happens in minutes.
Here's the uncomfortable part: if defensive AI can find these vulnerabilities autonomously, offensive AI can too. The model Anthropic used isn't publicly available. The techniques it demonstrated are. The capability gap between "researcher finds bug" and "attacker finds bug" just narrowed from months to minutes — and the attacker doesn't file responsible disclosure reports first.
Pull up your last quarterly vulnerability report right now. Find the criticals you sat on past 30 days because "no public exploit observed yet" or "low likelihood of exploitation." That math changed Tuesday.
This Movie Has Played Before
The pattern is older than the industry. When network scanning got industrialized in the late 1990s, the entire perimeter security model got rewritten. Companies that had built castle-and-moat architectures suddenly faced automated tools that could map every open port on the internet in hours. The security vendors who moved first — building intrusion detection, building better firewalls, building the concept of defense in depth — survived. The ones who insisted their walls were tall enough didn't.
When credential stuffing became industrialized in the early 2010s, fraud teams got rebuilt from the ground up. Stolen passwords that once required manual testing could suddenly be validated at machine speed against millions of accounts. The banks that waited for "evidence of active exploitation" before implementing multi-factor authentication spent the next five years apologizing to customers and regulators.
When an attacker capability gets industrialized, defender economics change overnight — and the teams that move first survive the rewrite.
Nobody gets fired the day the new tool arrives. The breaches just quietly start happening faster.
What Actually Changes on Thursday Morning
I'm not suggesting you throw out your vulnerability management program. I'm suggesting the program was designed for an assumption that no longer holds.
Here's what I'm telling the audit committees I brief: your patch SLA was built on a coin toss. The coin just got faster.
The old model: Security researcher finds vulnerability → Responsible disclosure to vendor → Vendor releases patch → You have 30-60 days to deploy before attackers reverse-engineer the patch and build exploits.
The new model: AI finds vulnerability → If it's your AI, you patch. If it's their AI, you're breached. The window between those two outcomes is measured in hours, not weeks.
This isn't theoretical. The FFmpeg vulnerability Anthropic found had been sitting in production code, tested millions of times by conventional tools, for years. It took one pass with a different kind of intelligence to surface it. Every attacker with access to similar models — and the barrier to entry drops every quarter — now has the same capability.
The Questions Your Security Team Needs to Answer
I don't have clean answers here. I have questions that didn't need asking six months ago and became urgent this week:
What's your actual time-to-patch for critical vulnerabilities right now? Not the SLA on the poster. The real number, including triage time, testing time, change approval time, deployment time. If it's more than 72 hours, you're now operating outside the window where "no public exploit" means anything.
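Measuring that real number is mechanical once you have stage timestamps out of your ticketing system. A minimal sketch, with hypothetical field names (adapt to whatever your tracker actually exports):

```python
from datetime import datetime

# Hypothetical stage timestamps for one critical finding, pulled from a
# ticketing system. Field names are illustrative, not from any specific tool.
stages = {
    "disclosed":       datetime(2025, 6, 2, 9, 0),
    "triaged":         datetime(2025, 6, 3, 14, 0),
    "patch_tested":    datetime(2025, 6, 6, 11, 0),
    "change_approved": datetime(2025, 6, 9, 16, 0),
    "deployed":        datetime(2025, 6, 12, 8, 0),
}

def real_time_to_patch_hours(stages):
    """End-to-end hours from disclosure to deployment: the real number,
    not the SLA on the poster."""
    return (stages["deployed"] - stages["disclosed"]).total_seconds() / 3600

hours = real_time_to_patch_hours(stages)
print(f"Actual time to patch: {hours:.0f} hours")
print("Inside 72-hour window" if hours <= 72 else "Outside 72-hour window")
```

Run this against your last ten criticals rather than one: the median, not the best case, is the number the new threat model cares about.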
Who owns the decision to emergency-patch versus wait for the maintenance window? Because you're about to have that conversation more often, with less information, and higher stakes.
What's your exposure if zero-days stop being rare? The entire concept of a "zero-day" assumed scarcity — vulnerabilities were hard to find, so discoveries were infrequent, so emergency response was sustainable. If vulnerability discovery becomes industrialized, your incident response capacity becomes your limiting factor.
How do you patch systems you didn't know were vulnerable yesterday? The OpenBSD bug Anthropic found was 27 years old. It sat through thousands of security audits, code reviews, and penetration tests. How many others are sitting in your environment right now, invisible to every tool you're currently running?
What I'm Doing About It
I'm advising three clients through this right now. Here's what we're actually changing:
First: Collapsing patch windows for anything internet-facing. If the asset touches the public internet and a critical patch exists, we're moving from 30-day SLAs to 72-hour SLAs. Yes, that requires more staffing. Yes, that requires better testing automation. The alternative is explaining to regulators why you sat on a known critical for three weeks after AI-powered exploit tools became widely available.
Second: Treating "no known exploit" as a coin toss, not a signal. We're removing it from the severity calculation entirely. If the vulnerability is critical and a patch exists, we patch. We're not waiting for evidence that attackers found it first.
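Operationally, that second change is small: the "exploit observed" flag simply drops out of the decision. A sketch of the resulting patch rule, with hypothetical field names (map them to your scanner's schema):

```python
def should_patch_now(finding):
    """Patch decision that ignores 'no known exploit' entirely.

    `finding` is a dict with illustrative keys; note what is NOT
    consulted: finding.get("exploit_observed").
    """
    return finding["severity"] in ("critical", "high") and finding["patch_available"]

# A critical with a patch gets queued even with no exploit in the wild.
finding = {"severity": "critical", "patch_available": True, "exploit_observed": False}
print(should_patch_now(finding))  # True: the old waiting signal no longer matters
```

The design choice is the deletion: the function has no branch for exploitation evidence, so nobody can quietly reintroduce the wait.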
Third: Running our own AI-assisted vulnerability discovery. If these tools can find 27-year-old bugs in hardened systems, they can find bugs in our code. Better we find them first. This isn't optional anymore — it's the new table stakes for "we take security seriously."
But what do I know — I've only watched this particular movie four times.
What to Do Monday Morning
Here's your specific next action: Pull your last quarterly vulnerability report. Identify every critical or high-severity finding that's been open longer than 30 days. For each one, ask: "If an exploit for this dropped on GitHub today, how bad would Thursday be?"
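If your report exports to CSV, that first triage pass is a few lines of scripting. A sketch, assuming columns named `severity` and `days_open` (your scanner's export will differ):

```python
import csv
from io import StringIO

# Stand-in for an exported quarterly report; real exports have more columns.
report_csv = """id,severity,days_open
VULN-101,critical,45
VULN-102,high,12
VULN-103,critical,8
VULN-104,high,61
"""

def stale_findings(csv_text, max_days=30):
    """Return IDs of critical/high findings open longer than the SLA window."""
    rows = csv.DictReader(StringIO(csv_text))
    return [r["id"] for r in rows
            if r["severity"] in ("critical", "high") and int(r["days_open"]) > max_days]

print(stale_findings(report_csv))  # ['VULN-101', 'VULN-104']
```

Every ID that list returns is a finding that needs the "how bad would Thursday be?" question asked of it by name.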
Then ask your security team: Who owns the conversation about what's different now that vulnerability discovery is industrialized?
Because if the answer is "we'll discuss it at the next quarterly review," you're already behind.
The patch window didn't gradually narrow. It closed. The teams that move first this quarter will survive the rewrite. The ones that wait for "more evidence" will spend next year explaining to audit committees why they didn't act when they had the chance.
Your patch SLA assumed time you no longer have. The question isn't whether to change it. The question is whether you change it this week or after the breach.