We Didn't Catch It From AI — AI Caught It From Us
I flagged a resume last month. Strong candidate. Clean prose. Every sentence earned its place. My compliance partner ran it through an AI detector — 87% probability of artificial generation. We moved on to the next applicant.
Three days later, I'm reading The Great Gatsby on a flight and I see it: the em dashes, the rhythm, the "it's not this — it's that" construction. Fitzgerald wrote like ChatGPT. Or more accurately, ChatGPT learned to write like Fitzgerald. And now we're penalizing humans for writing like... humans.
AI detection tools are about to enter their dumbest era.
The Pattern We're Teaching Ourselves to Fear
Saw a post this week that crystallized something I've been circling: AI learned to talk like humans, and now humans are starting to talk like AI. We're absorbing the patterns back from the thing we taught. The em dash. The rhetorical pivot. The clean antithesis. Two years of reading AI output, and it's seeping into how we think.
Here's the problem: AI didn't invent any of those moves.
"It's not X, it's Y" is a rhetorical figure called antithesis. Cicero used it in the Roman Senate. Kennedy's "ask not" line is built on it. The em dash was Emily Dickinson's signature — and Fitzgerald's, and Cormac McCarthy's, and pretty much every literary writer of the last 200 years who wanted to create rhythm and breath in prose.
ChatGPT didn't borrow the em dash from us. Both of us got it from Dickinson.
But detection tools don't know that. They pattern-match. Too many em dashes, too much structural parallelism, too consistent a register — you get flagged as a bot. The heuristic was always shaky. It's about to collapse entirely.
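To make that concrete, here's a toy sketch of the kind of surface-level heuristic I'm describing. Everything in it is invented for illustration: the features, the weights, the thresholds. No vendor publishes their model, but the reported failure modes look a lot like this.

```python
import re

def naive_ai_score(text: str) -> float:
    """Toy stylometric score: higher = more 'AI-like' surface patterns.
    Features and weights are invented for illustration only."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = text.split()
    if not sentences or not words:
        return 0.0

    # Em-dash density: Dickinson and Fitzgerald fail this one immediately.
    em_dash_rate = text.count("\u2014") / len(words)

    # Antithesis ("not X, it's Y"): a figure Cicero was using in the Senate.
    antithesis_hits = len(re.findall(r"\bnot\b[^.!?]{3,60}\bit's\b", text, re.I))

    # Sentence-length variance: polished, even rhythm reads as "too consistent."
    lengths = [len(s.split()) for s in sentences]
    mean = sum(lengths) / len(lengths)
    variance = sum((n - mean) ** 2 for n in lengths) / len(lengths)

    score = min(em_dash_rate * 20, 0.4)        # punctuation habit
    score += min(antithesis_hits * 0.2, 0.3)   # rhetorical structure
    score += 0.3 if variance < 15 else 0.0     # register consistency
    return min(score, 1.0)

print(round(naive_ai_score(
    "It's not this \u2014 it's that. The rhythm is clean. Every clause earns its place."
), 2))
```

Run the opening pages of Gatsby through something like this and every feature fires. That's the whole problem: the features are real, they're just not evidence of a machine.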
When the Mirror Finishes Its Work
I was advising a professional services firm last quarter on their content review process. They'd implemented AI detection across client deliverables, compliance documents, even internal memos. Anything flagged over 60% went back for rewrite. Sounded prudent. Except the false positives kept hitting the same people — their best writers. The senior manager who'd taught legal writing at Northwestern. The audit partner who published in industry journals.
The detector was penalizing people for being good at their jobs.
Here's what's happening: when humans unconsciously absorb the patterns they've been reading in AI output for two years, "writes like AI" becomes "writes like a human in 2026." The feedback loop is tightening. Junior staff read AI-assisted content. They internalize those rhythms. They write their own work — genuinely original work — and it trips the detector. No algorithm survives that circular reference.
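You can watch the circular reference break the tool with a back-of-the-envelope simulation. All the numbers here are made up; the point is the shape of the curve, not the values. Hold the detector's threshold fixed, let the human "style score" drift toward the patterns it flags, and the false-positive rate climbs without anyone touching the model.

```python
import random

random.seed(0)

# Toy model: a "style score" where higher = more AI-like surface patterns.
# The detector flags anything above a fixed threshold.
THRESHOLD = 0.6

def human_style(years_of_exposure: float) -> float:
    """Humans drift toward AI-like patterns the longer they read AI output.
    Baseline distribution and drift rate are invented for illustration."""
    baseline = random.gauss(0.35, 0.15)
    drift = 0.08 * years_of_exposure  # assumed absorption per year
    return baseline + drift

for years in [0, 1, 2, 3]:
    humans = [human_style(years) for _ in range(10_000)]
    false_positive_rate = sum(s > THRESHOLD for s in humans) / len(humans)
    print(f"{years} years of exposure: {false_positive_rate:.0%} of humans flagged")
```

The detector never changes. The population it was calibrated against quietly stops existing underneath it.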
This isn't theoretical. I'm watching it happen in real time. Associates getting coaching for prose that's "too polished." Resumes bounced for sounding "artificial." Marketing teams second-guessing perfectly good copy because some vendor tool threw a red flag.
The vendors will keep promising accuracy. The false positives will skew toward your strongest writers — the ones who read the most, who've internalized good structure, who know what clean prose sounds like.
The Typewriter Detector Paradox
We've been here before. Not exactly here, but close enough to see the pattern.
In the 1980s, document examiners could identify typewriters by their signatures — the slight misalignment of the 'e', the pressure variance in the 't'. Then word processors arrived. Suddenly every document looked machine-perfect. The examiners built new tools. They looked for other markers: sentence length variance, error correction patterns, formatting tells.
Those tools worked until they didn't. Once everyone started using word processors, "typed on a word processor" stopped being a useful signal. It just meant "created a document."
We're doing the same thing with AI detection, except faster and with higher stakes.
The town doesn't empty the day the railroad bypasses it — it empties slowly as the pattern becomes clear. Nobody wakes up one morning and decides "our detection tools are useless now." They just start noticing that every third flagged document turns out to be fine. Then every second one. Then they stop checking.
The Questions Nobody's Asking
Here's what keeps me up: I catch myself thinking in language I now associate with AI, and I hate it. The author of the post that sparked this hates it too. We're both wrong to hate it. We didn't catch it from AI. AI caught it from us.
But that intellectual knowledge doesn't change the visceral reaction. And if I'm feeling it — someone who understands how these models work, who knows the training provenance — what are your junior staff feeling? Your hiring managers? Your compliance reviewers?
Are they self-censoring? Dumbing down their prose to avoid the detector? Writing worse on purpose because good writing now carries a scarlet letter?
I don't have a clean answer to that. I know what I'm seeing: people second-guessing their instincts. Trusting the algorithm over their own judgment. Letting a pattern-matching tool define what "human" writing is allowed to sound like.
What This Means Monday Morning
If your firm runs AI detection over resumes, you're filtering for people who write poorly enough to avoid the flag. If you're using it in compliance review, you're teaching staff that clarity and structure are suspicious. If you're applying it to client deliverables, you're about to have an awkward conversation when a senior partner's work gets bounced back.
The detection tool isn't wrong exactly — it's measuring what it was designed to measure. The problem is that what it's measuring stopped being a useful signal sometime in the last eighteen months, and we haven't updated our mental model yet.
This doesn't mean abandon all review. It means stop outsourcing judgment to a heuristic that's collapsing in real time. If you're worried about actual AI misuse — someone passing off ChatGPT output as original analysis — you need better questions:
Does this analysis contain non-obvious insights? Does it engage with your firm's specific context? Can the author defend it in conversation? Those are harder to assess than running text through a detector, but they're also harder to fake.
The easy metric is becoming useless. The hard questions remain essential.
The Mirror Is Finishing Its Work
But what do I know — I've only watched this pattern play out three times in different domains. Plagiarism detection in academia. Spam filters in email. Fraud detection in finance. Every time, the sophisticated actors adapted faster than the tools did. Every time, the false positives eventually overwhelmed the signal.
The difference this time: we're not detecting bad actors. We're detecting our own reflection.
So here's what I'm asking you to do this week: pull the last five documents your firm flagged as "AI-generated." Read them. Really read them; don't just check the score. Ask yourself: is this actually problematic, or does it just sound like someone who knows how to write?
Then ask your team: what's our policy on "AI-sounding" writing, and how confident are we that the detector is still right?
Because if you can't answer that with specifics — if you're trusting the percentage in the report — you're not detecting AI anymore.
You're just penalizing clarity.