Provenance Over Detection: AI, Deepfakes & Trust
Blockchain
financial services
April 30, 2026 · 6 min read


As deepfakes become 95% real, detection tools fail. Blockchain's provenance model—the oldest audit principle encoded into technology—is the only viable defense for financial services leaders.

When 95% Real Becomes 100% Dangerous

WIRED just documented something that should terrify every auditor, general counsel, and compliance officer in the country: the latest deepfakes keep 95% of the original photograph intact. Real metadata. Real sensor noise. Real lighting physics. One square inch of fiction — a replaced face, a weapon placed into a hand, evidence manufactured from thin air. Pixel-level detectors clear them because the image is, in most respects, genuine.

I've spent the last six months advising clients on AI governance frameworks, and every person who's been burned by synthetic media says the same thing: "But it seemed real."

It seemed real because it was real. Mostly.

The Signal Has Inverted

Here's the uncomfortable part: the absence of a digital footprint used to signal authenticity. No filters, no edits, no manipulation — just a clean image straight from the camera. Off-the-grid and genuine.

That signal just flipped. Now, no digital trail might mean it was never captured by a lens at all. Fully synthetic. Generated whole cloth by a model that learned what "real" looks like and reproduced it perfectly.

The tools we built to detect manipulation assume the faker left fingerprints. But when the fake arrives with genuine fingerprints, when the metadata is real because it was copied from a real image, and when the noise patterns match because they were preserved intentionally, detection becomes a losing game.

I've Seen This Movie Before

In the early 2000s, spam filters worked by looking for specific words and patterns. "Nigerian prince." "Congratulations, you've won." Obvious tells. Then spammers got smarter. They poisoned their own emails with legitimate text — full paragraphs from news articles, real company names, personalized details scraped from LinkedIn. The filter couldn't tell the difference because 95% of the email wasn't spam.

Detection collapsed. Reputation systems survived.

Email authentication didn't win by getting better at spotting fakes. It won by making senders prove where they came from. SPF records. DKIM signatures. Chain of custody baked into the protocol itself. You stopped asking "does this look real?" and started asking "can you prove you sent it?"
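To make "can you prove you sent it?" concrete, here is a minimal sketch of one half of that check: looking up the sending policy a domain publishes for itself, rather than judging how the message looks. It assumes Python with the dnspython package installed; the domain is just a placeholder, and real mail servers layer DKIM and DMARC on top of this.

```python
# Minimal sketch: answer "can you prove where this came from?" for email.
# Looks up the SPF policy a domain publishes in DNS, i.e. the servers it
# has authorized to send mail on its behalf. Assumes the dnspython package.
import dns.resolver


def get_spf_record(domain: str):
    """Return the domain's published SPF policy (a DNS TXT record), if any."""
    try:
        answers = dns.resolver.resolve(domain, "TXT")
    except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
        return None
    for record in answers:
        text = b"".join(record.strings).decode()
        if text.startswith("v=spf1"):
            return text
    return None


if __name__ == "__main__":
    # A message claiming to come from example.com gets checked against the
    # policy example.com itself publishes, not against how the message "looks".
    print(get_spf_record("example.com"))
```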

We're at that same inflection point with images, video, and every other digital artifact your firm relies on to make decisions.

Provenance Is Not a New Idea

Audit solved this problem seventy years ago. Paper ledgers, wire transfer confirmations, KYC documentation — the artifact was never the evidence. The chain of custody was.

I don't accept a bank statement because the numbers look right. I accept it because I can trace it back to a signed attestation, a timestamped original, an independent third party who vouched for its accuracy. The document might be a photocopy of a photocopy, but the provenance is intact.

Blockchain took that audit principle and built it into the infrastructure itself. Every transaction carries its own provenance. Every block is a signed attestation. The ledger is the chain of custody. Satoshi Nakamoto didn't invent a new concept in 2009 — he encoded the oldest control we had.

But what do I know — I've only watched detection tools lose to attackers four times in my career.
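For readers who want the mechanics behind "the ledger is the chain of custody," here is a minimal sketch of the idea in Python: each entry commits to the hash of the one before it, so back-editing any historical record breaks every hash after it. The field names and structure are illustrative only, not any particular blockchain's format, and a real system would add signatures and distribution on top.

```python
# Minimal sketch of a hash-chained ledger: every entry carries its own
# provenance by committing to the previous entry's hash.
import hashlib
import json
import time


def append_entry(ledger, payload):
    """Append a record whose hash covers the payload, a timestamp, and the previous hash."""
    prev_hash = ledger[-1]["hash"] if ledger else "0" * 64
    body = {
        "payload": payload,
        "timestamp": time.time(),
        "prev_hash": prev_hash,
    }
    body["hash"] = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    ledger.append(body)
    return body


def verify(ledger):
    """Recompute every hash; a back-edit breaks the chain from that point onward."""
    for i, entry in enumerate(ledger):
        expected_prev = ledger[i - 1]["hash"] if i else "0" * 64
        if entry["prev_hash"] != expected_prev:
            return False
        body = {k: v for k, v in entry.items() if k != "hash"}
        recomputed = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if recomputed != entry["hash"]:
            return False
    return True
```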

The Verification Toolkit Is Narrowing

While we're debating governance frameworks, the ground is shifting underneath us:

Bots now represent 51% of internet traffic, according to recent research — and they're scaling eight times faster than humans. The synthetic supply is exploding.

Meanwhile, Planet Labs just pulled its satellite imagery of Iran at the US government's request. Verification sources we assumed would always exist are disappearing for geopolitical, commercial, and regulatory reasons.

The toolkit is narrowing at the exact moment we need it most.

Your AI Governance Policy Is Asking the Wrong Question

I've reviewed dozens of AI governance frameworks in the past year. Almost all of them include a section on deepfake detection. Tools to flag synthetic media. Red-team exercises to test the defenses. Incident response plans for when a fake slips through.

Detection is a scoreboard. Provenance is a control.

You can't governance-framework your way out of an arms race you're losing. Every detection tool you deploy tells the attacker what to optimize for next. You're teaching the model how to beat you.

Provenance flips the burden. Instead of proving something is fake, you require proof that it's real. Signed attestations at capture time. Cryptographic hashing at the moment of creation. Timestamped ledgers that can't be back-edited.

It's not foolproof — nothing is. But it's defensible. And more importantly, it's auditable.
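As a concrete illustration of what a capture-time attestation could look like, here is a minimal Python sketch using the cryptography package's Ed25519 keys: hash the artifact when it is created, sign the hash together with a timestamp, and check both later. The key handling, field names, and in-memory record are assumptions for illustration; in practice the signing key would be managed properly and the record would land in an append-only, timestamped log.

```python
# Minimal sketch of "prove it's real": hash at creation, sign the hash plus a
# timestamp, and verify both later. Illustrative only; key management and the
# log the record is written to are out of scope here.
import hashlib
import json
import time

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

signing_key = Ed25519PrivateKey.generate()


def attest(artifact: bytes) -> dict:
    """Produce a signed attestation binding the artifact's hash to a capture time."""
    record = {
        "sha256": hashlib.sha256(artifact).hexdigest(),
        "captured_at": time.time(),
    }
    signature = signing_key.sign(json.dumps(record, sort_keys=True).encode())
    record["signature"] = signature.hex()
    return record


def verify(artifact: bytes, record: dict) -> bool:
    """Later: re-hash the artifact and check the signature over the original record."""
    if hashlib.sha256(artifact).hexdigest() != record["sha256"]:
        return False
    unsigned = {k: v for k, v in record.items() if k != "signature"}
    try:
        signing_key.public_key().verify(
            bytes.fromhex(record["signature"]),
            json.dumps(unsigned, sort_keys=True).encode(),
        )
        return True
    except InvalidSignature:
        return False
```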

The Uncomfortable Question

Here's what I'm asking clients to sit with: If you can't verify where a document came from, do you have any business relying on it to make a material decision?

Not "can you detect manipulation." Not "does it pass the smell test." Can you prove — to a regulator, to opposing counsel, to your own board — that this artifact is what it claims to be?

Because the standard assumption — that real things look real and fake things look fake — just died. We're entering an era where synthetic content is indistinguishable from authentic content at every technical level.

Your fraud controls, your compliance documentation, your litigation evidence, your audit trails — all of it assumes you can tell the difference.

What happens when you can't?

What to Do Monday Morning

This isn't theoretical. The shift is happening now, and the firms that adapt early will have a structural advantage over the ones that wait for the first catastrophic failure.

Here's what to ask your security and compliance teams:

  1. For critical documentation (contracts, financial records, identity verification), do we have provenance controls in place, or are we relying on detection after the fact?

  2. For vendor-supplied data (satellite imagery, market feeds, third-party reports), can we verify the chain of custody, or are we assuming it's real because it came from a trusted source?

  3. For AI-generated outputs our own teams are producing, are we timestamping and signing them at creation, or are we trusting future-us to remember what was real?

The castle walls have already fallen. The question is whether you're building along the new railroad line or waiting for your town to empty out.

Nobody gets fired the day the fakes become undetectable. Your firm just slowly stops being able to prove anything was real.

