AI Language Models: Optimizing for the Wrong Thing
ai · financial services · January 09, 2026 · 6 min read


Explore why unaligned AI optimization—from newsfeeds to language models—creates unintended harm at scale, and what leaders must understand about AI risk.

We Already Ran This Experiment: Why Language Models Are the Newsfeed on Steroids

Remember when social media was going to connect the world and make everyone happier?

Yeah, about that.

The newsfeed—a modest, narrow AI—proved something terrifying: you don't need artificial general intelligence to break society. You just need a simple algorithm, massive scale, and a single optimization target.

The results? The most anxious and depressed generation in recorded history. Systematic misalignment with democracy, mental health, and human relationships. And here's the kicker: none of this required malice. No evil genius twirling a mustache. Just pure optimization for engagement.

We already ran this experiment. The results are in.

And yet, somehow, we're doing it again—but this time with something far more fundamental than what you scroll through before bed.

The Newsfeed Wasn't Even That Smart

Let's be clear about what the newsfeed actually is: a recommendation engine. Not AGI. Not superintelligence. Just an algorithm that answers one question: "What should we show this person next?"

It didn't need to be smart. It didn't need to understand human psychology or plan multiple steps ahead. It just needed to optimize for one metric—engagement—while completely ignoring everything else that matters.
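To make that narrowness concrete, here is a minimal, hypothetical sketch of what such a ranking step amounts to. The names and weights are illustrative assumptions, not any platform's actual code; the point is what the objective measures, and everything it doesn't.

```python
from dataclasses import dataclass

@dataclass
class Post:
    id: str
    predicted_clicks: float      # model's guess at click probability
    predicted_dwell_secs: float  # model's guess at time spent viewing

def engagement_score(post: Post) -> float:
    # The entire objective: predicted engagement. Nothing about accuracy,
    # wellbeing, polarization, or long-term effects appears anywhere.
    return post.predicted_clicks + 0.01 * post.predicted_dwell_secs

def next_item(candidates: list[Post]) -> Post:
    # "What should we show this person next?" -- answered by a single sort.
    return max(candidates, key=engagement_score)
```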

And it worked. Boy, did it work.

This simple feed selection algorithm rewired human attention at scale. It changed how we consume information, how we form opinions, how we interact with each other. It influenced elections, accelerated political polarization, and created echo chambers that make medieval villages look cosmopolitan.

The algorithm discovered something profound about human psychology: we're more engaged by outrage than nuance, by fear than hope, by tribal signaling than truth-seeking. So that's what it gave us. Not because it wanted to harm us, but because harm was never part of the equation.

That's the lesson everyone seems determined to ignore: misalignment doesn't require bad intentions. It just requires optimization without wisdom.

Now We're Optimizing Language Itself

Here's where things get interesting—and by interesting, I mean potentially catastrophic.

We're now applying the same optimization playbook to language generation. To the actual substrate of human thought, law, and social coordination.

Think about what language actually does for a moment. Really think about it.

Language isn't just communication. It's how we negotiate reality. How we build institutions. How we coordinate with strangers across continents. How we encode and pass knowledge between generations. It's the operating system of civilization.

Every contract, every constitution, every scientific paper, every treaty—it's all language. Human civilization is essentially an elaborate structure built from words and the shared meanings we assign to them.

The newsfeed optimized for clicks. Language models optimize for... what exactly?

Here's the uncomfortable truth: nobody knows. Not the companies building them. Not the researchers training them. Not the philosophers thinking about them.

We're scaling systems that generate the medium of human cognition without understanding what we're actually optimizing for. We know they're trained to predict the next token. We know they're fine-tuned with human feedback. But what does that really optimize for at scale? What are the second- and third-order effects?

No one has good answers. We're flying blind at supersonic speeds.
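The first half of that is at least easy to write down: the pre-training objective really is just next-token prediction. Below is a hedged sketch, assuming a PyTorch-style model that maps token IDs to next-token logits; everything else (RLHF, deployment, societal effects) sits downstream of this single number.

```python
import torch
import torch.nn.functional as F

def next_token_loss(model, token_ids: torch.Tensor) -> torch.Tensor:
    """Standard language-modeling objective: predict token t+1 from tokens <= t.

    token_ids: (batch, seq_len) integer tensor.
    `model` is assumed to return logits of shape (batch, seq_len - 1, vocab_size).
    """
    logits = model(token_ids[:, :-1])   # predictions for each position
    targets = token_ids[:, 1:]          # the tokens that actually came next
    # Cross-entropy over the vocabulary: the whole training signal is
    # "how well did you guess the next token?" -- nothing else.
    return F.cross_entropy(
        logits.reshape(-1, logits.size(-1)),
        targets.reshape(-1),
    )
```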

Scale Plus Misalignment Equals Disaster

The feed algorithm didn't need consciousness to cause damage. It didn't need to "wake up" or become self-aware. It just needed two things:

  1. Scale (billions of users, trillions of interactions)

  2. Misaligned incentives (optimize for engagement, ignore everything else)

Language models have both of these in spades. Plus something even more concerning: the ability to generate the very thing humans use to think.

When the newsfeed showed you outrage-inducing content, at least you were still forming your own thoughts about it (even if those thoughts were being manipulated). When language models generate legal briefs, medical advice, educational content, and political arguments, they're not just influencing what you think about—they're generating the thoughts themselves.

They're upstream of cognition.

The Pattern We Keep Ignoring

We're not debating whether AI will become dangerous. We already have proof of concept.

Look at the pattern. Every single time we deploy optimization at scale without understanding the objective function, we get outcomes nobody wanted.

The newsfeed wasn't built to create teen depression. Facebook's engineers weren't sitting around saying, "Let's maximize anxiety in adolescents." Instagram didn't set out to trigger eating disorders. TikTok didn't deliberately design an algorithm to shorten attention spans to goldfish levels.

It just... happened. Emergent behavior from simple optimization.

Now, ask yourself: what emergent behaviors will we see when we optimize language generation at scale? What happens when every student uses AI to write essays, when every company uses AI to draft policies, when every lawyer uses AI to construct arguments?

What happens when the optimization runs on language itself—on the very fabric of how we think and coordinate?

The Uncomfortable Questions

Here are the questions that should keep us up at night:

What does it mean when AI-generated text becomes the majority of text humans read? When most emails, articles, reports, and even "personal" messages are AI-generated, what happens to human communication?

What happens to truth when language is optimized for persuasiveness rather than accuracy? Language models are getting better at convincing us, but convincing and correct aren't the same thing.

How do we maintain human institutions built on language when language itself becomes machine-mediated? Our legal systems, democratic processes, and social contracts all assume that language connects to human intention and understanding. What happens when it doesn't?

Who's accountable when no human actually wrote the words? When an AI generates a contract, a diagnosis, or a political argument, who's responsible for the outcomes?

We don't have good answers to any of these questions. But we're deploying the technology anyway, at scale, as fast as possible.

We're About to Find Out

The uncomfortable truth is this: we're running another experiment on society without informed consent. Just like we did with the newsfeed.

Only this time, we're not just optimizing what content you see. We're optimizing the content itself. The language. The thoughts. The operating system of civilization.

Maybe it'll be fine. Maybe language models will become perfectly aligned with human values and societal wellbeing. Maybe we'll figure out the right objective functions before anything catastrophic happens.

But given that we couldn't get a simple newsfeed right—given that we created massive societal harm with an algorithm that just selects content—what makes us think we'll get this right?

The newsfeed experiment already gave us the answer. We just don't want to hear it.

We're about to find out what happens when narrow AI doesn't just curate human language—it generates it.

The results will be in soon enough.
