
AI Security: Why Operational Hygiene Beats Capability

Anthropic's breached deployment reveals a critical lesson: sophisticated AI means nothing without basic security practices. Real security requires transparent systems, not vendor promises.

The AI Company That Found 26-Year-Old Bugs Got Breached by a Guessed URL

Anthropic's Claude can identify zero-day vulnerabilities in code written before the internet had a homepage. Last month, a researcher accessed a model the company called "too dangerous for public release" by typing a URL on a hunch after stumbling across a data leak from Mercor, a recruiting platform.

The most sophisticated AI detection capability on the planet, protected by a URL someone guessed.

A RUSI researcher called it "a humiliation." I call it the oldest lesson in security, learned again at the frontier of capability.

The Ceiling Doesn't Matter When the Floor Is Missing

I was reviewing an incident response plan last week with a client whose infrastructure includes cutting-edge encryption, multi-factor authentication, and real-time threat monitoring. Impressive stack. Then I asked about their third-party vendor access controls. "We send them a login link."

The capability ceiling of your tech stack means nothing if the floor is a default password.

This is not new. In 2019, Capital One lost over 100 million customer records because a former Amazon Web Services engineer exploited a misconfigured web application firewall. The bank had world-class detection systems. The breach happened because someone left a door unlocked that nobody thought to check. Anthropic's incident follows the same pattern: extraordinary capability, elementary failure.

Claude can parse decades of code and surface vulnerabilities human researchers would miss. And someone accessed a restricted deployment of that same model because the company relied on obscurity instead of access control. The gap between what the technology can do and how the organization deploys it is where every catastrophic failure lives.

Security Through Obscurity Has a 143-Year Losing Streak

In 1883, Auguste Kerckhoffs published a principle that still governs every secure system you depend on: a cipher must remain secure even when everything about it is known except the key.

RSA. AES. SHA-256. Every cryptographic algorithm protecting your bank transactions, your client records, and your government communications is public. Anyone — including adversaries — can read every line of the implementation. They are still secure. That is the only definition of real security that has ever held up.
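A minimal sketch in Python's standard library makes the point concrete. Everything below is public knowledge except the key; the message and account numbers are invented for illustration:

```python
import hashlib
import hmac
import secrets

# The algorithm here (HMAC with SHA-256, RFC 2104) is completely public.
# Anyone, including an adversary, can read its specification and the
# CPython source. The only secret in the entire system is the key.
key = secrets.token_bytes(32)

message = b"wire transfer: $250,000 to account 4471"
tag = hmac.new(key, message, hashlib.sha256).hexdigest()

def verify(key: bytes, message: bytes, tag: str) -> bool:
    """Recompute the tag and compare in constant time."""
    expected = hmac.new(key, message, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tag)

# An adversary who knows everything above except `key` cannot forge a
# valid tag for a tampered message.
assert verify(key, message, tag)
assert not verify(key, b"wire transfer: $250,000 to account 9999", tag)
```

The security lives entirely in the key, which can be rotated, scoped, and revoked. Nothing about the system's design needs to stay hidden.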

Open source cryptography is not a nice-to-have for the security community. It is the foundation of trust in a world where you cannot verify intent, only mathematics. When the NSA contributes to encryption standards, the security community does not take their word for it — they review the code. When vulnerabilities are discovered, they are patched in public, with full disclosure of what failed and why.

The system you can read and still can't break is secure. The system that requires secrecy is just holding a secret it can't afford to have discovered.

Anthropic's breach is the inverse of this principle. The company built a model with extraordinary detection capability, then protected access to it by hoping nobody would find the door. That is not a security posture. That is operational optimism.
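To see the difference in code rather than metaphor, here is a minimal sketch using Python's standard library. The paths, port, and token name are hypothetical; the point is structural. The first route is "protected" only by an obscure URL. The second is protected by a credential checked on every request:

```python
import hmac
import os
from http.server import BaseHTTPRequestHandler, HTTPServer

# Hypothetical names for illustration only.
OBSCURE_PATH = "/models/internal-eval-7f3a9c"  # the "secret" URL
API_TOKEN = os.environ["MODEL_API_TOKEN"]      # a real credential

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == OBSCURE_PATH:
            # Security through obscurity: anyone who guesses or leaks
            # this path is in. There is nothing to rotate or revoke.
            self.send_response(200)
            self.end_headers()
            self.wfile.write(b"restricted model access granted\n")
            return

        if self.path == "/models/eval":
            # Access control: the URL can be printed on a billboard;
            # the token cannot, and it can be rotated when it leaks.
            sent = self.headers.get("Authorization", "")
            if hmac.compare_digest(sent, f"Bearer {API_TOKEN}"):
                self.send_response(200)
                self.end_headers()
                self.wfile.write(b"authenticated model access granted\n")
            else:
                self.send_response(401)
                self.end_headers()
            return

        self.send_response(404)
        self.end_headers()

if __name__ == "__main__":
    HTTPServer(("127.0.0.1", 8080), Handler).serve_forever()
```

The first route fails Kerckhoffs' test the moment its one secret (the path) is discovered. The second survives full knowledge of the system.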

The Last Time Elegance Met Reality

I have watched this movie before. In the 1990s, the cryptography wars centered on export controls — the U.S. government classified strong encryption as a munition and restricted its distribution. The argument was that if adversaries could access the algorithms, they would break them.

Phil Zimmermann released PGP anyway. He published the source code as a book, shipped it overseas, and forced the government to confront the reality that security through obscurity does not scale. You cannot classify mathematics. The only systems that survived were the ones built to withstand public scrutiny.

The pattern repeats in AI safety. Companies building frontier models insist that responsible deployment requires keeping implementations confidential. The theory is elegant: if adversaries do not know how the model works, they cannot exploit it. The practice is that someone guesses the URL.

Nobody gets fired the day the encryption algorithm gets published. The system just becomes more secure because a thousand researchers can test it. Or it fails immediately because it was never secure to begin with. Either outcome is better than the illusion of safety.

What "Responsible AI" Actually Requires

I am not arguing that Anthropic should publish every model variant they build. I am arguing that "we keep that confidential for security reasons" is not a security model when your adversaries are nation-states, organized crime, and researchers with free time.

The AI safety conversation has focused on capability risk — what happens if a model is too powerful, too autonomous, too unpredictable. We are underweighting deployment risk — what happens when a powerful model is protected by a system that assumes nobody will look for it.

Here is the uncomfortable question I keep sitting with: if your safety case depends on the implementation staying secret, what happens when it does not?

Because it will not. Every meaningful AI model will eventually leak, get reverse-engineered, or become accessible to someone you did not intend. The question is whether your safety mechanisms survive that moment.

Cryptography assumes adversaries have the algorithm. Authentication systems assume adversaries will probe for access. Secure architectures assume breach. AI safety frameworks that assume secrecy are designing for a world that has never existed.

The Question Your Vendor Cannot Avoid

When your next AI vendor hands you a safety whitepaper, ask one question: where can I read the implementation?

If the answer is "we keep that confidential for security reasons," that is exactly the wrong answer. It means their safety case depends on something they cannot control.

If the answer is "here is the model card, the evaluation framework, and the access controls we use in production," you have something to verify. You can test their claims. You can assess whether their deployment hygiene matches their capability claims.

Which is worth more to your clients: a vendor who promises safety, or a vendor whose safety you can verify?

I have been through enough technology cycles to know how this ends. The companies that survive are not the ones with the most sophisticated models. They are the ones whose operational hygiene matches their ambition. The ones who understand that the floor matters more than the ceiling.

What to Do Monday Morning

If you are evaluating AI vendors, add this to your diligence checklist:

  • Access controls: Can someone outside your organization reach the system by guessing a URL, reusing credentials, or exploiting a vendor integration you forgot about? (A minimal probe sketch follows this list.)

  • Deployment transparency: Can you verify the claims in the safety documentation, or are you taking their word for it?

  • Breach assumption: Does their architecture assume the implementation will eventually be known, or does it require secrecy to function?
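On the first point, the check can be embarrassingly simple. Here is a hypothetical probe in Python's standard library; the URLs are placeholders for whatever endpoints the vendor claims are restricted:

```python
import urllib.error
import urllib.request

# Hypothetical diligence probe: given endpoints a vendor claims are
# restricted, confirm they refuse unauthenticated requests. Replace
# these placeholder URLs with the vendor's actual endpoints.
CLAIMED_RESTRICTED = [
    "https://vendor.example.com/models/eval",
    "https://vendor.example.com/admin/export",
]

for url in CLAIMED_RESTRICTED:
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            # A 200 with no credentials attached is the failure mode in
            # the Anthropic incident: the only protection was the URL.
            print(f"FAIL  {url} answered {resp.status} unauthenticated")
    except urllib.error.HTTPError as err:
        if err.code in (401, 403):
            print(f"OK    {url} demands credentials ({err.code})")
        else:
            print(f"WARN  {url} returned {err.code}; review manually")
    except urllib.error.URLError as err:
        print(f"WARN  {url} unreachable: {err.reason}")
```

A vendor whose "restricted" endpoint answers an anonymous GET has told you everything their safety whitepaper did not.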

The hardest part of this conversation is that most vendors have not thought through these questions. They have perfected the capability. They have not stress-tested the deployment.

But what do I know — I have only watched the "security through obscurity" movie five times. It has the same ending every time.

The system that requires secrecy is already compromised. It just has not been discovered yet.
