When AI Gets Blamed for Layoffs, Check Last Year's Hiring Spree
A company I'm watching announced layoffs last week. The press release was elegant: "AI-native restructuring," "player-coach operating model," "productivity leverage at scale." The CEO spent six minutes on the earnings call explaining how AI was fundamentally changing how the business operates.
Then I looked at the 10-K.
They grew headcount 31% in 2025 while revenue dropped 22% year-over-year. The "AI-driven" cut takes them back to roughly where they were in 2024. AI didn't cause this layoff. A hiring decision twelve months ago did.
This isn't an isolated incident. It's becoming the playbook for spring 2026. And if you're a finance leader, auditor, or board member trying to separate signal from narrative, you need a better diagnostic than the press release.
The New Framing, Same Old Math
I've watched this movie before. In 2022-2023, Big Tech shed roughly 250,000 jobs under the "post-COVID correction" banner. The framing wasn't technically wrong — pandemic hiring had been aggressive. But here's what the coverage glossed over: many of those companies had grown headcount 50-90% in under two years while revenue growth was already decelerating.
The cuts weren't tragic miscalculations. They were predictable corrections to unsustainable hiring velocity. "Post-COVID adjustment" was doing a lot of narrative work, covering for workforce planning that had already failed.
Now "AI-native restructuring" is the new load-bearing phrase.
Two things are simultaneously true:
- AI is genuinely changing how teams are structured, how work gets distributed, and what productivity looks like
- Most of the cuts you're reading about this quarter were baked in before anyone deployed an AI agent
The problem is distinguishing between them. When a CFO tells you they're "rightsizing for an AI-enabled operating model," are you witnessing a transformation or watching someone rebrand a headcount problem?
Run the Headcount Diagnostic
I advise clients on technology risk and operational resilience, which means I spend a lot of time translating between what companies say is happening and what the numbers show is actually happening. When an "AI productivity" narrative lands on your desk, here's the four-question diagnostic I run:
1. What was the firm's 2024 headcount?
2. What was 2025 headcount?
3. How did revenue trend during that growth?
4. Where does the post-cut number land relative to 2024?
If the post-cut headcount is still above 2024 levels, the AI story isn't explaining a transformation. It's covering for a hiring decision that was already underwater. That's not innovation — that's a correction with better PR.
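The four questions reduce to three ratios you can pull straight from the filings. Here's a minimal sketch of the arithmetic, using invented figures shaped like the opening example (all numbers are placeholders, not real company data):

```python
# Hypothetical illustration of the four-question headcount diagnostic.
# Plug in the actual figures from the 10-K; these are invented.

def headcount_diagnostic(hc_2024, hc_2025, rev_2024, rev_2025, cut):
    """Return the three ratios the four questions reduce to."""
    hc_growth = hc_2025 / hc_2024 - 1    # Q1/Q2: hiring velocity
    rev_growth = rev_2025 / rev_2024 - 1 # Q3: revenue trend during the spree
    post_cut = hc_2025 - cut
    vs_2024 = post_cut / hc_2024 - 1     # Q4: post-cut level vs. 2024 baseline
    return hc_growth, rev_growth, vs_2024

# Invented numbers mirroring the opening example: +31% headcount,
# -22% revenue, and a cut that still leaves headcount above 2024.
hc_g, rev_g, vs24 = headcount_diagnostic(
    hc_2024=10_000, hc_2025=13_100,
    rev_2024=1_000, rev_2025=780,
    cut=2_300,
)
print(f"headcount growth 2024->2025: {hc_g:+.0%}")   # +31%
print(f"revenue growth 2024->2025:   {rev_g:+.0%}")  # -22%
print(f"post-cut headcount vs 2024:  {vs24:+.0%}")   # +8% -> still above baseline
```

If that last number is positive, the "transformation" story is carrying a correction on its back.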
The firm I opened with? After the "AI-native" cut, they'll still have 8% more employees than they did in 2024, while revenue per employee has cratered. The AI tooling they've deployed may be real. The causal claim connecting that tooling to this specific workforce decision is not.
The Causal Claim Needs Evidence
Here's where this gets uncomfortable for audit and finance teams: the AI productivity claim is a causal claim, and like any causal claim, it needs evidence.
Not vibes. Not "we're seeing efficiency gains across the organization." Not a demo of an internal AI tool that summarizes Slack threads. Actual metrics.
If AI is genuinely driving the restructuring, you should be able to document:
- AI leverage per team (what's getting automated, what's the before/after)
- Output per dollar of compensation (are you producing more with less, or just... less?)
- Control coverage at lower headcount (who's reviewing the AI's work, and how does that scale?)
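The second metric is the one that settles arguments, because it's a before/after comparison with no room for narrative. A hedged sketch of what that evidence looks like, with entirely hypothetical inputs standing in for the real operational data:

```python
# Hedged sketch of before/after productivity evidence.
# All inputs are hypothetical placeholders, not real company figures,
# and "output" is proxied here by revenue for simplicity.

def productivity_metrics(revenue, headcount, comp_cost):
    return {
        "revenue_per_employee": revenue / headcount,
        "output_per_comp_dollar": revenue / comp_cost,
    }

before = productivity_metrics(revenue=780e6, headcount=13_100, comp_cost=1.9e9)
after = productivity_metrics(revenue=780e6, headcount=10_800, comp_cost=1.6e9)

# If AI drove the cut, these deltas should be clearly positive --
# and management should be able to explain each one.
for metric in before:
    delta = after[metric] / before[metric] - 1
    print(f"{metric}: {delta:+.0%}")
```

Note that flat revenue over a smaller base mechanically improves both ratios; the real test is whether output holds or grows after the cut, which is exactly why the before/after window matters.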
I sat in a meeting two weeks ago where an operator walked us through their "AI transformation" — twenty slides on the tools they'd deployed, the team structure changes, the cultural shift to "AI-first thinking." Inspiring stuff. Then someone asked: "What's your revenue per employee compared to last year?"
Silence.
The burden of proof sits with the productivity numbers, not the restructuring story. Show me the metric, not the Medium post. Otherwise this is just Right-Sizing 2026 with a better domain name.
What This Looks Like From the Inside
I'm not arguing AI isn't real or that workforce transformation isn't happening. I've seen teams genuinely restructure around AI tooling — fewer junior analysts because the senior people are using AI to do the work that used to require a pyramid of support. That's real productivity leverage.
But I've also seen companies deploy ChatGPT Enterprise, call it "AI transformation," and then lay people off to hit a margin target that was already on the roadmap. The AI tooling and the layoffs are both happening. One is not causing the other.
The uncomfortable question: how much of what you're seeing is transformation, and how much is narrative management?
If you're a finance leader evaluating an acquisition target that just did an "AI-native restructuring," which story are you underwriting? If you're an auditor reviewing management's going concern assumptions, are the "AI productivity gains" driving the forecast supported by operational evidence or by vibes?
This matters because the two scenarios have very different risk profiles. A company that overhired and is now correcting has execution risk and credibility risk, but the underlying business model is probably fine. A company that actually restructured around AI has technology dependency risk, control risk, and the operational complexity of running a hybrid human-AI workforce at scale.
You need to know which movie you're in.
The Pattern: Narratives Follow Necessity
Here's the part that repeats every cycle: each technology disruption produces a narrative that makes necessary cuts look like visionary transformation.
When factories automated in the 1980s, it was "lean manufacturing" and "just-in-time." When the internet scaled in the early 2000s, it was "digital-first operating models." When mobile ate retail, it was "omnichannel transformation." The technology was real. The operational changes were real. And also, a lot of companies used the narrative to cover for cost structure problems that predated the technology shift.
Nobody gets fired the day the new tool arrives. But six months later, when the workforce is 20% smaller, the tool gets credit for the "efficiency gains."
I'm not cynical about AI. I'm cynical about the stories we tell when cutting costs. AI is doing incredible things. It's also doing a lot of narrative work right now, covering for workforce planning failures that have nothing to do with large language models.
What to Do Monday Morning
If you're a CFO, auditor, or board member fielding "AI productivity" claims from operators or counterparties, here's the one question I'd ask:
"Show me the output-per-employee metric before and after the AI deployment, and walk me through how the cut connects to that number."
If they can show you that, you're looking at a real transformation. If they pivot to talking about "cultural change" or "positioning for the future," you're looking at a rebrand.
The AI leverage is real. The cuts are real. The causal connection between them is often fiction.
What's the diagnostic question your firm runs when an "AI-native" cut hits a portfolio company? I'd love to hear what's working — because this framing is going to get a lot more mileage before the cycle turns.