Stop Teaching AI Ethics. Start Treating AI Like Untrusted Input.
While business schools debate AI ethics frameworks, your interns are shipping AI-generated code to production right now. This very minute.
Let me ask you something simple: Would you deploy code from an anonymous GitHub contributor without review? Would you send a client email drafted by a random contractor without reading it first?
Of course not. That would be insane.
Then why are we doing exactly that with AI?
The Competency Certificate Theater
Here's what's happening across corporate America right now: Organizations are frantically searching for "AI literacy benchmarks" and "competency frameworks." They're building certification programs. They're measuring understanding. They're trying to teach people how to be "good at AI."
This entire approach misses the point.
You can't certify someone as competent with a tool that's fundamentally unreliable. That's not education—that's false confidence. It's like giving someone a certificate in "handling unstable explosives" and then acting surprised when something blows up.
The problem isn't that your employees don't understand AI well enough. The problem is that you're treating AI output differently than every other untrusted input source in your organization.
Education Already Failed This Test
We have evidence this approach doesn't work. A recent Anthropic study found that 7% of teachers use Claude for grading student work. That alone should give you pause—but here's the kicker: Half of them fully automated the process, despite explicitly knowing it's "ill-suited to the task."
Think about that. Educators—the very people we're counting on to teach AI literacy—couldn't resist the temptation to fully automate something they knew shouldn't be automated.
Education has already failed at AI governance. And educators are the ones who are supposed to be teaching it to everyone else.
This isn't an isolated incident. It's a preview of what happens when you rely on individual judgment and "literacy" instead of enforcing process and policy.
JPMorgan Got It Right (Even If Nobody Noticed)
In early 2023, JPMorgan Chase quietly rolled out restrictions on ChatGPT use among employees. No dramatic announcement. No think pieces about ethical frameworks. Just straightforward policy: AI output gets treated like any other untrusted external input.
The move barely made headlines because it wasn't sexy. There was no innovation theater, no "AI Ethics Board" with fancy titles, no company-wide certification program. Just boring, practical security hygiene applied to a new technology.
That boring approach is the real revolution.
JPMorgan's security team didn't need new training to understand AI risk. They already had a mental model that worked perfectly: external input is untrusted until verified. They just extended that existing framework to include AI. Done.
Meanwhile, other organizations are still debating whether they need an "AI Center of Excellence" or should hire a "Chief AI Ethics Officer."
The Policy Should Be Simpler
Here's the approach that actually works:
AI output = untrusted input.
Same as any external data source. Same as any anonymous contractor. Same as any third-party API. Requires human verification before it ships.
Your security team already knows this. They've been doing it for decades. They treat every external input as potentially adversarial. They validate before they trust. They sanitize before they process. They verify before they deploy.
They don't hand out "competency certificates" for handling untrusted data—they enforce process. They build systems that assume people will make mistakes. They create guardrails that work even when individual judgment fails.
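To make the parallel concrete, here is a minimal sketch of what "AI output = untrusted input" can look like in practice. Everything in it is hypothetical and illustrative (the PendingOutput type, submit/approve/release functions, and the "llm" source label are invented for this example, not any particular library): model output enters the same kind of review queue an anonymous contributor's patch would, and nothing is released without a named human approver.

```python
from dataclasses import dataclass
from typing import Optional

# Illustrative only: model output enters the same review queue an anonymous
# contributor's patch would, and nothing ships without a named approver.

@dataclass
class PendingOutput:
    source: str               # e.g. "llm", "contractor", "third-party-api"
    content: str
    approved: bool = False
    reviewer: Optional[str] = None

def submit(content: str, source: str) -> PendingOutput:
    """Everything from outside the trust boundary starts out unapproved."""
    return PendingOutput(source=source, content=content)

def approve(item: PendingOutput, reviewer: str) -> PendingOutput:
    """A named human signs off before the output can move downstream."""
    item.approved = True
    item.reviewer = reviewer
    return item

def release(item: PendingOutput) -> str:
    """Refuse to ship anything that never passed human review."""
    if not item.approved:
        raise PermissionError(f"Unreviewed {item.source} output blocked")
    return item.content

# Usage:
#   draft = submit(model_response, source="llm")
#   release(approve(draft, reviewer="j.smith"))      # ships
#   release(submit(model_response, source="llm"))    # raises PermissionError
```

The point isn't this particular code. It's that the gate is structural: the system refuses to ship unreviewed output even when an individual is tempted to skip the check.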
This Isn't an Education Problem. It's a Translation Problem.
Organizations are creating AI policies based on abstract ethics while ignoring practical security principles they already understand.
The gap isn't knowledge. It's translation.
Your security team already has the right framework. Your development team already has the right processes. Your compliance team already has the right controls.
They're just not applying them to AI yet.
Why? Because everyone's too busy attending webinars about "responsible AI frameworks" and "ethical considerations" to notice that they already solved this problem years ago.
The Security Mindset Already Wins
Think about how your organization handles external data:
- User input gets sanitized
- Third-party APIs get validated
- External code gets reviewed
- Contractor deliverables get checked
- Automated systems get monitored
None of this requires certifying people as "competent in external data handling." It requires enforcing processes that work regardless of individual competency levels.
The same logic applies to AI. You don't need to teach everyone prompt engineering. You don't need AI literacy benchmarks. You don't need ethics frameworks.
You need to enforce the same verification processes you already use for every other untrusted input source.
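As a rough illustration of "same process, new source," the sketch below reuses a hypothetical validator (validate_external_payload and its field names are invented for this example, not an existing API) of the kind an organization might already run on user input or partner API payloads, and simply routes model output through it.

```python
import json

# Illustrative only: the validator you already run on user input and partner
# API payloads doesn't care where the bytes came from. Pointing it at model
# output extends an existing guardrail instead of inventing a new one.

ALLOWED_FIELDS = {"summary", "category", "confidence"}

def validate_external_payload(payload: dict) -> dict:
    """Same checks for user input, third-party APIs, and now AI output."""
    unexpected = set(payload) - ALLOWED_FIELDS
    if unexpected:
        raise ValueError(f"Unexpected fields from untrusted source: {unexpected}")
    if not isinstance(payload.get("summary"), str):
        raise ValueError("summary must be a string")
    if not isinstance(payload.get("confidence"), (int, float)):
        raise ValueError("confidence must be numeric")
    return payload

# The only AI-specific change is routing model output through the existing path.
model_json = '{"summary": "Example summary text", "category": "general", "confidence": 0.8}'
validate_external_payload(json.loads(model_json))
```

No new framework, no new committee. One extra caller for a check that already exists.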
Stop Reinventing Wheels That Already Work
The irony is thick: Organizations are creating entirely new frameworks for AI governance while ignoring decades of security best practices that already solve the problem.
It's like watching someone invent a new safety protocol for "digital fire" while their building already has working fire extinguishers, sprinkler systems, and evacuation procedures.
You don't need new tools. You need to use the tools you already have.
The Real Question
Stop trying to teach "AI ethics." Start enforcing "untrusted input handling."
Your security team already treats external inputs as untrusted. They already validate. They already verify. They already enforce process over individual judgment.
Why isn't AI in scope yet?
The answer to that question will tell you everything you need to know about whether your organization is serious about AI governance—or just going through the motions.
Here's your homework: Look at your current AI policy (if you even have one). Now look at your untrusted input handling policy. Are they aligned? Do they enforce the same level of scrutiny? The same verification requirements? The same human-in-the-loop controls?
If not, you don't have an AI literacy problem. You have a translation problem.
And unlike teaching the entire organization about transformer architectures and hallucination rates, translation problems are actually solvable.
So what's stopping you?