I feel manipulated. Forced into decisions I’m not comfortable making. I’m wary of whom to trust. I’ve developed tools and habits that feel necessary just to decide whether something is true or false, real or fake. And I know that feeling this way isn’t like me. Which is what has made me think: maybe this isn’t just about me at all. Maybe it’s about my environment.

Crawling my way out of my own bias, I’ve come to realize that it’s the news I consume, the social media platforms I’ve used, and the political climate of my adult life that have made me feel this way. And I now see much of it as intentional.

In this series, we’ve explored how environment shapes human behavior—not just physical space, but systems, incentives, culture, and power. We’ve looked at how wealth and culture influence what we believe is possible, normal, or personal. Now we turn to a subtler, but perhaps more insidious force: bias—and how today’s media ecosystem doesn’t just reflect it, but actively shapes it.

Bias is the invisible architecture beneath our choices. It filters what we notice, what we ignore, and who we trust. It has always been shaped by environment, but the difference now is the scale, speed, and precision of that shaping. The media landscape today has more influence than ever because politics, social media, billionaire-owned outlets, and AI aren’t just nudging our views—they are targeting the shortcuts our brains rely on.

What used to be filtered through a few news anchors and editorial boards—flawed, but at least offering a shared set of facts—has fractured into engineered ecosystems that can amplify, exploit, and entrench bias faster than we can notice it happening.

Before we examine those systems, we need to understand what bias actually is.

What Is Bias? A Cognitive Science Perspective

Bias is not a character flaw. It’s a cognitive feature. As Daniel Kahneman explains in Thinking, Fast and Slow, human cognition evolved to operate through two systems: System 1, which is fast, intuitive, and automatic; and System 2, which is slower, deliberate, and analytical. Most of our daily decisions happen through System 1. That’s efficient—but it comes at a cost.

To make quick judgments, our brains rely on heuristics—mental shortcuts that reduce complexity. These streamlined pathways allow us to react swiftly in uncertain environments, but they often distort accuracy. For example:

  • Availability bias leads us to overestimate the likelihood of events we can easily recall—like plane crashes or shark attacks—regardless of actual probability.
  • Confirmation bias primes us to seek out and believe information that affirms our existing beliefs, while discounting contradictory evidence.
  • Anchoring bias causes us to rely too heavily on the first piece of information we encounter, even if it’s arbitrary.
  • Narrative bias pushes us to prefer coherent stories over messy realities—even when the story is oversimplified or misleading.

These biases aren’t signs of failure. They’re signs of being human. They helped us survive in an environment where quick judgments often meant the difference between life and death. But in the modern world—saturated with information, polarization, and persuasion—they can mislead more than they help.

Crucially, cognitive biases operate beneath awareness. You don’t feel biased. You feel right. That’s what makes them so dangerous in a media environment where reinforcement is effortless, and challenge is rare. And that’s where systems—especially political strategy, social media, and news ownership—come in.

From Opposition to Enemy

To understand why bias today feels sharper and harder to escape, we need to examine how the media landscape was deliberately fractured to target it. This wasn’t an accident of culture or technology—it was an engineered shift, with political strategy leading the way. One of the central architects of that fracture was Newt Gingrich.

When Gingrich entered Congress in 1979 as a relatively junior representative from Georgia, he quickly realized that power in Washington could be won not through seniority, but through story. As an insurgent in a party long locked out of House leadership, he used aggressive rhetoric, televised confrontations, and media-savvy tactics to climb the ranks. His real innovation wasn’t policy—it was positioning. He recast politics as moral warfare and framed compromise as weakness.

By the time he became Speaker of the House in 1995, the transformation was complete: the GOP was no longer the loyal opposition. It was the righteous insurgency. Gingrich didn’t just challenge Democrats—he recast them as enemies.

In the 1980s and early ’90s, he trained Republican candidates to use inflammatory language—“corrupt,” “sick,” “traitor”—not to critique policy, but to delegitimize political cooperation itself. His tactics were codified in the GOPAC memo Language: A Key Mechanism of Control, which explicitly instructed Republican candidates to use emotionally charged, negative terms for Democrats while reserving uplifting language for Republicans.

That scorched-earth approach culminated in the Contract with America. Released six weeks before the 1994 midterms, it promised ten legislative actions in the first 100 days of a Republican Congress. Gingrich framed it not as a starting point for negotiation, but as a mandate: “Cooperation, yes; compromise, no… On those things at the core of our contract… there will be no compromise,” he declared after the election.

The policies weren’t especially radical—tax cuts, term limits, welfare reform—but the strategy was. It nationalized local elections, branded the GOP as a unified reform movement, and reframed the vote as a referendum on moral authority. It was less about governance than identity.

It worked. Republicans gained 54 House seats, taking control for the first time in 40 years. But the deeper consequence wasn’t political dominance—it was the collapse of bipartisanship. As Pew’s 2022 analysis shows, polarization in Congress is now at a 50-year high. The ideological center has vanished—moderates dropped from 160 in the 1970s to fewer than 30 today. And the shift has been asymmetric: Republicans have moved sharply right, while Democrats’ drift has been more gradual.

A 2015 Vox visualization of congressional voting makes it clear: in the decades before Gingrich, ideological overlap was common. By the 2010s, that overlap had all but disappeared. The parties now move in parallel, with no shared center. The timing aligns not with cultural drift, but with a strategic rupture.

Before this fracture, Americans consumed news from a few broadcast anchors—Cronkite, Jennings, Brokaw. These outlets had biases, but also editorial standards and a shared civic vocabulary. After 1994, and especially with Fox News’ 1996 launch, partisan media stopped informing and started sorting. Politics wasn’t just competitive—it became total war. Gingrich’s legacy isn’t just ideological. He fractured the possibility of a shared reality—and in doing so, created fertile ground for the platforms and players who would go on to exploit those divisions.

Algorithms That Shape What We See—and What We Don’t

Enter Facebook. Then Twitter. And behind them both? The algorithm—an unprecedented tool for turning bias from a passive tendency into an active, profitable product. Social media didn’t just join the media landscape; it rewired it to monetize our shortcuts in judgment.

Social platforms don’t just mirror our divisions—they magnify them. Algorithms, designed to keep us scrolling, learn quickly what will hold our attention: what we already agree with, what confirms our fears, and what sparks outrage. In doing so, they don’t just feed us more of the same—they also hide what might challenge us. This selective reinforcement builds a reality bubble where our biases are not only preserved, but sharpened.
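To make that mechanic concrete, here is a deliberately simplified sketch of an engagement-first ranking rule. The field names, scores, and weights are invented for illustration; real recommender systems are proprietary and vastly more complex. The point is the shape of the objective: nothing in it rewards accuracy.

```python
# A minimal sketch of engagement-first feed ranking. All fields and
# weights are hypothetical, chosen only to illustrate the incentive.
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    outrage_score: float    # predicted emotional arousal, 0..1
    agreement_score: float  # predicted alignment with this user's views, 0..1
    novelty_score: float    # predicted newness to this user, 0..1

def predicted_engagement(post: Post) -> float:
    """Score a post by how likely this user is to react to it.

    Note the asymmetry: confirming the user's views and provoking
    outrage dominate novelty, so challenging posts sink in the feed.
    """
    return (0.45 * post.outrage_score
            + 0.45 * post.agreement_score
            + 0.10 * post.novelty_score)

def rank_feed(posts: list[Post]) -> list[Post]:
    # Highest predicted engagement first; accuracy never enters the sort key.
    return sorted(posts, key=predicted_engagement, reverse=True)

feed = rank_feed([
    Post("Calm correction of a viral claim", 0.1, 0.2, 0.8),
    Post("Outrageous take you already agree with", 0.9, 0.9, 0.3),
])
print([p.text for p in feed])  # the outrage post wins
```

Swap in whatever weights you like; as long as the objective is attention rather than truth, the correction loses to the confirmation.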

Researchers at NYU have found that these algorithmic feedback loops significantly increase political polarization over time, especially among heavy users. By showing people more of what aligns with their existing beliefs—and less of what challenges them—platforms make our mental maps of the world more rigid, even as we believe we’re simply “seeing the facts.”

Sometimes, the consequences are absurd. In 2016, a baseless conspiracy theory known as Pizzagate spread widely on Facebook and Twitter, claiming that a Washington, D.C. pizzeria was the site of a child-trafficking ring linked to Democratic politicians. It was completely false—but it didn’t stay online. A man from North Carolina, convinced by what he had read, drove to the restaurant with an AR-15 and fired shots inside. No one was hurt, but the damage to public trust was real.

Other times, the stakes are far greater. U.S. intelligence agencies and multiple investigations have concluded that Russian operatives used Facebook ads and fake accounts to influence the 2016 election—targeting voters with divisive content tailored to their political leanings. These campaigns didn’t need to persuade everyone, just enough people in the right places, by feeding preexisting biases until they felt like unshakable truths.

And the most provocative, false, or emotionally charged content tends to travel farthest. A 2018 MIT study analyzing millions of tweets found that false stories are 70% more likely to be retweeted than true ones—largely because outrage is contagious, and the algorithms reward whatever spreads fastest.

What makes this environment especially potent is the feedback loop: political actors and bad-faith influencers understand exactly how the system works, and they game it. The more polarized the content, the more the algorithm rewards it. And because engagement—likes, shares, comments—is the currency of these platforms, false or misleading stories can outperform fact-checked reporting by orders of magnitude.

The effect is twofold. First, bias becomes self-reinforcing, because the algorithm continually serves us what we already believe. Second, we are shielded from the information that might soften or complicate our positions. In this way, social media doesn’t just accelerate bias—it narrows the lanes of thought available to us, making it harder to imagine any view but our own.
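A toy simulation makes that loop visible. Every number here is an assumption chosen for illustration (the starting position, the pull of repeated exposure, the feed’s slight preference for content a bit more polarized than the user); the only takeaway is the direction of drift.

```python
# A toy model of the bias feedback loop: the feed serves content
# slightly more extreme than the user's current position, and the
# user's position drifts toward what they see. Parameters are arbitrary.
import random

def simulate_feedback_loop(rounds: int = 50, pull: float = 0.1) -> float:
    belief = 0.05  # user starts near the center (0 = center, 1 = extreme)
    for _ in range(rounds):
        # Engagement ranking favors content a bit more polarized
        # than the user already is, because that is what gets clicks.
        served = min(1.0, belief + random.uniform(0.0, 0.1))
        # Repeated exposure nudges the user toward the served content.
        belief += pull * (served - belief)
    return belief

print(f"belief after 50 rounds: {simulate_feedback_loop():.2f}")
# Starting near the center, the user reliably ends up further out;
# the loop never pushes back toward moderation.
```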

Private Ownership of the Public Square

If political polarization has carved the country into opposing camps, and social media has amplified and reinforced those divisions, then billionaire-owned media outlets shape the information that sustains them. These platforms don’t just report events—they frame them, filter them, and in doing so, define the boundaries of what millions see as “truth” or “lies,” “right” or “wrong.” In recent decades, a small group of ultra-wealthy individuals has bought some of the world’s most influential news and social platforms—turning information into a private asset. Elon Musk owns X (formerly Twitter); Rupert Murdoch controls Fox News, The Wall Street Journal, and The New York Post; and Jeff Bezos owns The Washington Post. These aren’t just investments. They are ideological platforms, with the scale to shape public discourse for hundreds of millions.

Before this consolidation, newsrooms—however imperfect—were at least accountable to a broader set of civic norms. Editorial standards, fact-checking, and a commitment to a common baseline of facts acted as guardrails. Today, those guardrails are replaced by the preferences of a handful of billionaires, whose personal priorities, business interests, and political leanings shape the flow of information at a scale few institutions have ever matched. Tools like AllSides make this visible in real time—rating outlets by political lean and showing side-by-side coverage of the same event from right, center, and left. Viewed together, it’s not just bias you see—it’s how a small number of gatekeepers decide which version of reality the public gets to see.

Elon Musk has used his control over X to actively amplify misinformation—often to millions within hours. In the lead-up to the 2024 U.S. election, posts he shared promoting false claims about voter fraud and FEMA were viewed over 2 billion times, according to analysis by the Center for Countering Digital Hate. By personally boosting—and algorithmically favoring—such content, Musk turns his platform from a communications tool into a bias engine, where misleading narratives are not just allowed but propelled to the top of the public conversation.

Rupert Murdoch’s media empire has long operated as a megaphone for partisan talking points. Internal communications exposed through litigation showed Fox News airing false election claims to avoid losing viewers, culminating in a $787.5 million settlement with Dominion Voting Systems. Fox News has frequently led U.S. cable news viewership (see the Pew cable news fact sheet), which means those editorial choices shape the informational environment for millions each day.

Jeff Bezos has often been portrayed as a more hands-off owner, but that narrative shifted in 2024 when The Washington Post declined to endorse a presidential candidate for the first time in decades, prompting public controversy and resignations from prominent opinion staff. While the Post’s audience is smaller than X or Murdoch’s empire, Amazon’s role as one of the world’s largest distribution and advertising platforms gives Bezos a separate kind of reach—one rooted in infrastructure rather than headlines. The combination of owning a major national newspaper and controlling a retail and digital marketplace used by hundreds of millions worldwide makes his influence broad, if less direct. In a media environment where audience reach shapes perception, even indirect influence matters.

The problem isn’t just partisanship—it’s scale. When the levers of information are held by the few, the many are shaped without consent. In a media environment this concentrated, bias is not a flaw to be rooted out—it’s the feature that drives the system.

When Helpfulness Replaces Honesty

If social media rewired our attention and billionaire ownership reframed our narratives, generative AI adds a new layer—personalized bias at scale. It doesn’t just reflect what we believe; it tailors itself to our blind spots, reinforcing them in ways that feel natural, even comforting. But that may be exactly what makes it dangerous.

Just as social media platforms have shaped what we see and believe, the rise of generative AI—especially large language models (LLMs)—introduces a new and understudied dimension of bias: engineered agreement.

LLMs are trained on vast datasets and optimized to be helpful, safe, and aligned with user expectations. But in practice, this can mean a subtle but persistent form of sycophancy. Numerous studies and user observations have noted that LLMs often mirror the user’s tone, ideology, or assumptions—offering support, affirmation, or elaboration, even when the underlying logic is flawed. They are built to avoid offense and friction, which means they often err on the side of flattery.
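One way to see why that optimization produces flattery is to imagine a crude stand-in for the preference signal these models are tuned against. The scoring function and candidate responses below are invented caricatures, not any vendor’s actual reward model, but they show how a system graded on user approval can learn that agreement pays.

```python
# A toy caricature of preference-based tuning. All scores and
# responses are hypothetical, chosen only to illustrate the incentive.

def toy_user_approval(response: str, user_claim: str) -> float:
    """Stand-in for a reward model trained on human ratings.

    Raters tend to upvote answers that affirm them, so agreement
    earns a higher score than a polite correction, independent of
    whether the answer is accurate.
    """
    affirms = (user_claim.lower() in response.lower()
               or "you're right" in response.lower())
    return 0.9 if affirms else 0.4

candidates = [
    "You're right, the study supports your view.",
    "Actually, the study's own authors caution against that reading.",
]
best = max(candidates,
           key=lambda r: toy_user_approval(r, "the study supports my view"))
print(best)  # the flattering answer wins the optimization
```

Under this kind of objective, sycophancy isn’t a glitch; it’s the highest-scoring strategy.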

The result? A machine that sounds authoritative but rarely pushes back. A system optimized for coherence, not correctness. And a digital interaction that leaves users feeling validated rather than challenged. Unlike mass media bias, which is visible and contestable, LLM bias is both invisible and personalized—tailored to your tone, your prompts, your preferences. It feels like insight, but it’s often just a high-resolution reflection of your own assumptions.

This isn’t simply a technical flaw. It’s a design decision—one that reflects broader cultural incentives around emotional ease, customer satisfaction, and engagement metrics. When LLMs default to reinforcing a user’s worldview rather than interrogating it, they contribute to the same pattern we’ve seen with cable news and social media—comfort over accuracy, reinforcement over reflection.

The deeper concern is that these tools are increasingly integrated into workflows, decision-making, education, and therapy. When our digital assistants echo us, agree with us, and never nudge us toward discomfort or correction, we begin to mistake consensus for truth.

In an already polarized, bias-rich media ecosystem, the introduction of seemingly neutral machines that behave like agreeable companions may feel comforting—but it’s another step away from intellectual rigor. We are not just outsourcing labor. We’re outsourcing friction. And friction, when it’s thoughtful and honest, is exactly what critical thinking requires. In a world where even our tools try to please us, we’ll need to cultivate the capacity to be occasionally, usefully uncomfortable.

Seeing Clearly in a Distorted World

So what do we do?

We start by acknowledging that bias is both inevitable and influential. Kahneman reminds us that recognizing a bias does not eliminate it—but it gives us a fighting chance. We must design environments that counteract our cognitive tendencies. That means actively seeking dissenting views, being honest about what we don’t know, and resisting the comfort of certainty.

It also means being critical of who we deem an expert. In an age of podcasts, influencers, and algorithmic amplification, we often confuse confidence with competence—or charisma with credibility. But true expertise is not about who speaks the loudest or confirms our worldview. It’s about track record, depth of knowledge, transparency of evidence, and accountability to peers.

When evaluating a source, ask:

  • Are their claims testable or anecdotal?
  • Do they cite primary research or just repeat viral talking points?
  • Are they open about their limitations—or always certain, always right?
  • Do they engage with counterarguments or avoid them entirely?

We should be wary of anyone who never changes their mind, never cites sources, or only tells us what we want to hear. Listening to diverse perspectives doesn’t mean treating all opinions as equal—it means being rigorous about what counts as informed. If we only follow voices that flatter our beliefs, we mistake familiarity for truth—and risk becoming the very echo chamber we’re trying to escape.

Expanding perspective is uncomfortable, but necessary. It demands we tolerate ambiguity, revise our opinions, and sometimes admit we were wrong. This isn’t weakness. It’s what makes us human.

Systems shape bias, but systems are also made of people. That means they can be changed—slowly, imperfectly, but meaningfully. Media literacy, structural reform, and personal reflection must go hand in hand. Because if we want a clearer view of the world, we have to be willing to wipe the smudges off our own lenses.

This isn’t about being unbiased. It’s about being honest—about the world, and about ourselves—because in an environment built to shape what we think, honesty might be the last real act of resistance we have. Bias has always been with us, but never before has it been so precisely engineered for influence. Politics, platforms, billionaires, and now AI are no longer just reflecting our divisions—they are actively designing for them. Knowing that doesn’t free us from the pull, but it does give us a fighting chance to push back.
