May 15, 2026

AI Credential Risk: The Gap CISOs Are Missing

CISOs focus on prompt security while AI agents quietly accumulate dangerous credentials. Learn why traditional access controls fail with AI and how to build a credential inventory.

Your AI Security Policy Is Protecting the Wrong Thing

Every CISO I've worked with this year has a policy for what employees can paste into ChatGPT. Almost none have a policy for what the AI itself can access.

That gap isn't theoretical anymore.

Last month, Claude Code OAuth tokens were stolen via an MCP hijacking persistence technique. Braintrust separately urged org-wide rotation of AI provider keys after an AWS account compromise. Two named incidents in thirty days, same pattern: the AI didn't break because someone typed the wrong prompt. It broke because nobody knew what credentials the AI was holding.

For the past year, we've been running the wrong playbook. We built guardrails around the input box. We trained employees on prompt hygiene. We worried about intellectual property leaking through conversational interfaces. Those weren't wrong concerns—they just stopped being the primary ones somewhere around October, and most security policies haven't caught up.

The Threat Moved While We Were Writing Policies

Here's what changed: AI stopped being a text prediction engine your employees query and became an agent your systems authenticate.

When your developer adds an MCP server to their Claude setup, that server gets credentials. Not for the session—persistent credentials with scoped access to whatever that integration can read. When your sales team connects an AI assistant to Salesforce via OAuth, you've granted a non-human entity ongoing access to customer records, pipeline data, and contract terms. The authorization doesn't expire when they close the browser tab.
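You can make this concrete on any developer laptop. Here's a minimal sketch that scans a local Claude Desktop configuration for MCP servers holding credentials in their environment blocks. The config path is the macOS default and the secret-name heuristic is a rough assumption; adapt both for your platform and tooling.

```python
import json
from pathlib import Path

# Claude Desktop config location on macOS (assumption; differs on Windows/Linux).
CONFIG = Path.home() / "Library/Application Support/Claude/claude_desktop_config.json"

# Rough heuristic: env var names that usually carry secrets (illustrative, not exhaustive).
SECRET_HINTS = ("TOKEN", "KEY", "SECRET", "PASSWORD", "CREDENTIAL")

def find_credentialed_servers(config_path: Path) -> list[tuple[str, list[str]]]:
    """Return (server_name, secret-looking env vars) for each MCP server holding credentials."""
    servers = json.loads(config_path.read_text()).get("mcpServers", {})
    flagged = []
    for name, spec in servers.items():
        hits = [k for k in spec.get("env", {}) if any(h in k.upper() for h in SECRET_HINTS)]
        if hits:
            flagged.append((name, hits))
    return flagged

if __name__ == "__main__":
    for server, env_vars in find_credentialed_servers(CONFIG):
        print(f"{server}: holds {', '.join(env_vars)}")
```

Run across a handful of machines, even a crude check like this tends to surface integrations nobody registered.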

Your AI risk policy covers prompt injection. But the exposure isn't in the prompt box. It's in your Salesforce.

I watched this exact scenario play out last quarter with a client—mid-market financial services firm, mature security posture, real investment in AI governance. They had a comprehensive acceptable use policy. Clear guidelines on what could and couldn't be entered into generative AI tools. Mandatory training. The works.

Then someone in FP&A connected an AI assistant to their ERP system to help with variance analysis. Legitimate use case. Approved by their manager. Nobody asked what the grant behind that approval looked like under the hood. That OAuth token had read access to three years of transaction-level data across four subsidiaries. The AI could see everything the employee could see, and it retained that access whether the employee was logged in or not.

They discovered it during a routine access review six weeks later. Not because anything broke. Because someone finally asked: "What does our AI credential inventory look like?"

They didn't have one.

We've Watched This Movie Before

This isn't new territory—we just forgot the plot.

In 2011, every enterprise was wrestling with the explosion of SaaS applications. Employees were spinning up Dropbox accounts, connecting third-party tools, granting OAuth permissions to apps that promised to make their jobs easier. Security teams were playing whack-a-mole trying to track what had access to what.

The answer wasn't "ban all SaaS." It was identity governance: SSO, service account registries, periodic access reviews, scope minimization, automated deprovisioning. Nobody gets breached the day Salesforce arrives; the damage comes from slowly losing track of who can see what.

We solved this once. The controls exist. Credential custody. Rotation evidence. Audit trails showing who connected what service to which data. Segregation of duties for high-privilege integrations. Finance and audit leaders reading this have run this playbook dozens of times.

The difference now is what holds the credentials. The entity isn't a SaaS app with a security questionnaire and a SOC 2 report. It's an AI agent with decision-making capability, contextual awareness across previously siloed systems, and (here's the part that makes this interesting) no consistent model for how it stores, processes, or retains what it accesses.

The Inventory Problem Nobody's Talking About

I've asked seventeen CISOs in the past two months the same question: "Do you have a register of AI integrations with the same fidelity as your service account inventory?"

Fourteen said no. Two said "we're working on it." One said yes—and when I asked to see it, it was a spreadsheet tracking ChatGPT Enterprise seats, not the credential grants underneath them.

This is the gap, and it's widening in real time.

Your IAM team can tell you every service account with access to your financial systems. They can show you the last rotation date, the scope, the business owner, the review history. That discipline took years to build, but it's foundational now.

Can you produce the same report for AI agents?

Most organizations can't answer:

  • Which AI tools have persistent credentials to internal systems?

  • What scope do those grants include?

  • When were they last reviewed?

  • Who approved them, and under what authority?

  • What happens when the employee who set up the integration leaves?

The absence of answers isn't negligence—it's that the technology moved faster than the governance architecture designed to contain it. Your access management framework assumed the authenticated entity was human or a documented service. AI agents are neither.
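A register that can answer those five questions doesn't require exotic tooling. Here's a minimal sketch of the fields involved; the names are hypothetical, and your IAM or GRC platform may already have an equivalent object type:

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class AICredentialGrant:
    """One row in an AI credential register; field names are illustrative."""
    ai_tool: str                  # e.g. "Claude Code", "Salesforce AI assistant"
    target_system: str            # what the credential reaches, e.g. "ERP", "HRIS"
    credential_type: str          # "oauth_token", "api_key", "mcp_server_env"
    scopes: list[str] = field(default_factory=list)  # what the grant can actually do
    business_owner: str = ""      # who attests the access is still necessary
    approved_by: str = ""         # who authorized it, and under what authority
    set_up_by: str = ""           # so offboarding can catch orphaned grants
    last_reviewed: date | None = None
    expires: date | None = None   # None means it never expires -- flag it
```

Even a spreadsheet with these columns beats what most firms have today. The point is tracking the grant, not the seat.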

What This Looks Like Monday Morning

The fix isn't complicated. It's just unfamiliar.

Start treating AI integrations like service accounts—because functionally, that's what they are. Non-human entities with persistent credentials and system access.

That means:

Inventory first. You can't govern what you can't see. Build the register. Every AI tool with OAuth tokens, API keys, or MCP server credentials. Don't wait for perfection—start with the obvious ones (Salesforce, ERP, HRIS, code repositories) and expand from there.

Assign ownership. Every credential needs a business owner responsible for justifying its scope and attesting to its continued necessity. "The dev team set it up" isn't governance.

Review on cadence. Quarterly at minimum for high-privilege access, annually for everything else. Same rhythm as service account reviews—because it's the same risk.

Scope minimization. If the AI assistant helping with expense reports doesn't need write access to the general ledger, don't grant it. Least privilege isn't just for humans.

Rotation and expiration. Credentials that never expire are credentials you've lost control of. Set rotation policies. Enforce expiration. Yes, it's friction. That's the point.
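Once the register exists, the review, scope, and rotation checks above reduce to a loop. A minimal sketch, reusing the hypothetical AICredentialGrant row from earlier:

```python
from datetime import date, timedelta

REVIEW_WINDOW = timedelta(days=90)  # quarterly cadence for high-privilege grants

def flag_stale_grants(register: list["AICredentialGrant"], today: date) -> list[str]:
    """Return findings for grants that violate the policies above."""
    findings = []
    for g in register:
        label = f"{g.ai_tool} -> {g.target_system}"
        if g.expires is None:
            findings.append(f"{label}: no expiration set")
        if g.last_reviewed is None or today - g.last_reviewed > REVIEW_WINDOW:
            findings.append(f"{label}: review overdue")
        if not g.scopes:
            findings.append(f"{label}: scope not documented")
        if not g.business_owner:
            findings.append(f"{label}: no business owner on record")
    return findings
```

Wire output like this into the same ticketing flow your service-account reviews already use. The goal is one cadence, not a parallel process.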

For audit and finance leaders, this maps directly to existing control frameworks. SOC 2 Trust Services Criteria CC6.1 (logical access controls). SOX IT general controls around user access. The novelty isn't inventing new controls—it's recognizing that AI credentials fall under existing ones.

The Question That Keeps Me Up

Here's what I don't know yet: what happens when an AI agent's decision-making capability outpaces our ability to audit what it accessed and why?

Right now, we can review logs. We can see what data the agent touched. We can trace the OAuth grant back to a specific approval. That works as long as the volume is manageable and the logic is traceable.
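That traceability holds because each access event can carry the grant that authorized it. A hypothetical event shape, with illustrative field names:

```python
# Hypothetical enriched audit event: links the access back to the grant
# and the grant back to a human approval -- the "what", if not yet the "why".
audit_event = {
    "timestamp": "2026-05-11T14:32:07Z",
    "agent": "fp-and-a-assistant",          # non-human principal (illustrative name)
    "action": "read",
    "resource": "erp.transactions.2026_q1",
    "grant_id": "grant-4821",               # resolves to scope + approver in the register
    "approved_by": "jane.doe",              # the human accountability anchor
}
```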

But what happens when agents are making hundreds of micro-decisions per hour, accessing data across a dozen integrated systems, and operating with contextual reasoning we can't easily reconstruct? When the audit trail says "the agent accessed customer records" but doesn't capture why it determined that access was necessary for the task at hand?

The controls assume we can review what happened. I'm not sure we've stress-tested whether those reviews remain meaningful at AI speed and scale.

I don't have a clean answer. Neither does anyone else I've asked. But the absence of an answer doesn't pause the technology—it just means the gap keeps widening while we figure it out.

Start With What You Can See

The immediate risk isn't theoretical. It's named incidents with public postmortems. Claude Code. Braintrust. Organizations that had security policies, technical competence, and mature risk frameworks—and still got caught because nobody inventoried what the AI could access once authenticated.

Your Monday morning action item isn't a two-year governance transformation. It's asking your IAM lead and your enterprise architecture team to sit down together and answer one question:

What does our firm's AI credential register look like, and is anyone reviewing it on the same cadence as service-account access?

If the answer is "we don't have one," you're not behind. You're in the majority. But that majority is holding a risk nobody's priced yet.

The playbook exists. We built it for SaaS in the 2010s. We just need to run it again—this time for entities that don't send emails, don't attend onboarding, and don't show up in your HRIS when you're trying to figure out who approved what.

But what do I know—I've only watched this movie four times.


Want to pressure-test your firm's AI access governance? Email me ([email protected]) with "AI credential inventory" in the subject line. I'll send you the twenty-question assessment I'm using with clients to map existing IAM controls to AI integration risk.
