What the Forecasts Miss About Minds, Institutions, and Influence

AI is accelerating. That much is hard to dispute. From language models writing code to algorithms detecting disease, we are living through a moment of profound technological momentum. But beneath the speed and spectacle, a more difficult truth remains: the future of artificial intelligence won’t be shaped primarily by what machines can do, but by how humans respond.

History tells us that breakthroughs don’t guarantee adoption. Systems don’t self-upgrade. And people — especially those in power — rarely change course without a fight. Even the most transformative tools have been slowed, distorted, or rejected outright — not because of technical failure, but because of psychology, politics, and belief.

This essay takes a closer look at AI 2027, a forecasting project that envisions both catastrophe and coordination as we approach the possibility of artificial general intelligence (AGI) — systems that could match or exceed human-level performance across a wide range of tasks. The project lays out two dominant futures: a “race” scenario, where countries and companies accelerate development in competition, despite serious risks; and a “pause” scenario, where progress is intentionally slowed to allow time for oversight, safety, and alignment with human values. Both pathways assume that what comes next will be shaped primarily by technological capability and institutional will.

But I argue that AI 2027 overlooks a messier reality: that human systems — shaped by emotion, status, inertia, and fear — don’t integrate new tools just because they work. What will shape (or stall) the next era of intelligence won’t be a hardware failure. It will be us: our habits, our hierarchies, and our willingness — or refusal — to trust something beyond ourselves.

Which means the real question isn’t just about capability. It’s about judgment. About values. About who we trust to decide what kind of future we’re building, and what we’re willing to surrender in the process.

The Limiting Factor Isn’t Silicon. It’s Us.

History is full of examples where revolutionary technology didn’t lead to revolutionary outcomes — at least not immediately. Not because the tools weren’t powerful enough, but because people weren’t ready, willing, or able to use them effectively.

Medical advancements offer some of the clearest illustrations. Scientific breakthroughs often emerge rapidly in response to urgent needs, but their integration into society is anything but smooth. HIV treatments, for instance, have saved millions of lives in wealthy nations, yet remain under-distributed across large swaths of the Global South. The technology exists, but political, economic, and systemic inequalities create barriers to access.

Vaccines provide another example. Despite their proven effectiveness and widespread availability, especially in high-income countries, they remain underutilized for reasons that have nothing to do with technology. The reemergence of measles in parts of the United States is not due to a lack of vaccine access — it’s due to misinformation, distrust, and politicization. Daniel Kahneman, the psychologist, Nobel laureate, and author of Thinking, Fast and Slow, reminds us that people don’t act like rational agents — even when the facts are clear. We are loss-averse, prone to confirmation bias, and far more influenced by emotion and tribal affiliation than by evidence. If something threatens how we see ourselves — or who we trust — we often reject it, no matter the logic.

So if the argument in AI 2027 is that superintelligent systems will soon be able to solve humanity’s most pressing problems, I don’t disagree with the premise. But the idea that institutions, governments, and social systems will automatically adopt these tools — just because they’re better — underestimates a deeper force: human psychology.

Confusion Breeds Caution

It’s easy to forget just how confused many decision-makers are when it comes to technology. The infamous U.S. Senate hearings with tech CEOs exposed a generational and conceptual gap so wide it was almost farcical. Senators struggled to articulate basic questions about social media platforms — let alone AI alignment, algorithmic bias, or autonomous systems.

That confusion matters. As neuroscientist Robert Sapolsky notes in Behave, humans are neurologically wired to respond to uncertainty with stress, vigilance, and a pull toward simple explanations. In complex systems, when people don’t understand what’s happening, they tend to default to inaction, not innovation. And in high-stakes contexts — national security, healthcare, economic policy — confusion breeds caution.

This isn’t just individual psychology. It’s institutional behavior. When people in power don’t understand what they’re dealing with, they don’t step aside. They stall, regulate, or double down on the old way of doing things. Whether that’s out of fear, pride, or inertia, the effect is the same: delay.

And delay, in a race-to-the-finish model like AI 2027, changes everything. If key governments are unsure, ideologically divided, or simply incapable of consensus, the idea that they’ll uniformly embrace and deploy transformative AI systems begins to break down.

We Don’t Upgrade Systems as Fast as We Build Tech

Technological capability doesn’t guarantee adoption. Just ask anyone who works in clean energy.

Residential and commercial solar power has been available for decades. The technology is proven, the economics are compelling, and the environmental benefits are enormous. Yet in many regions, adoption remains low. Why? Because systemic change is hard. Utility companies resist decentralization. Local regulations create friction. Consumers, even when interested, hesitate at the upfront costs, the paperwork, or the social signaling involved in doing something “different” — the fear of standing out or seeming strange.

Again, this isn’t a knowledge problem. It’s a human one. Kahneman’s research shows that we treat losses as more significant than gains — which makes the potential disruption of adopting something new feel more threatening than sticking with the familiar, even when the familiar is broken.

Electric vehicles follow a similar pattern. They outperform combustion engines on many of the metrics that matter: efficiency, maintenance, emissions. But adoption remains uneven due to infrastructure gaps, range anxiety, oil lobbying, and habit. As Lisa Feldman Barrett emphasizes in How Emotions Are Made, human brains construct meaning from past experiences and culturally shaped concepts. If a new technology doesn’t fit those concepts, it doesn’t feel intuitive — it feels risky.

This psychological inertia — the preference for what feels coherent over what is better — slows change even when the tools are in front of us. That’s why I question the plausibility of a fast, global pivot toward AGI-led governance, science, and coordination. If we can’t restructure our energy systems or transportation sectors despite overwhelming evidence and tools, why would we assume we’ll hand over cognitive authority to machines?

Human Beings Don’t Like Losing Power — Especially the Powerful

At the core of the AI 2027 race scenario is an implicit assumption: that nation-states and institutions, once they see the advantages of AI, will embrace it out of necessity or self-interest. But this overlooks one stubborn truth: people — especially those in power — rarely give up control without a fight.

As Anand Giridharadas, a political commentator and author, argues in Winners Take All, much of the real power in our current era resides with the ultra-wealthy — individuals and corporations who operate increasingly above and beyond the reach of democratic governance. These elites aren’t just interested in solving problems — they want to decide which problems get solved, and how. Philanthropy, venture capital, and innovation hubs all serve as levers of influence that allow the powerful to shape social agendas while maintaining control. If AGI begins making better decisions about global coordination, poverty, education, or justice, that’s not a tool — that’s competition. For a class that has grown accustomed to operating as unelected problem-solvers — whose wealth and influence are justified by a belief in their superior judgment — handing the reins to machines isn’t just unsettling. It’s unthinkable.

Robert Sapolsky’s work in Behave grounds this idea in biology. Among primates like baboons, social rank determines access to resources, reproductive success, and even health outcomes. High-status individuals show lower stress hormone levels, while low-status ones face chronic physiological wear. Sapolsky explains that our brains are wired to constantly track social cues — who dominates, who defers — because these patterns historically dictated survival. This instinct didn’t disappear with modernity. When something like AGI threatens established hierarchies, the response isn’t just rational calculation. It’s defensive — a deep-rooted impulse to preserve position and influence.

Belief, Not Just Data, Guides Behavior

There’s a tempting assumption in tech circles: if something works better, people will adopt it. But humans don’t function that way. As Sapolsky, Barrett, and Kahneman each show in their own ways, we are belief-driven creatures. What we trust, who we follow, and how we make decisions are shaped by context, emotion, social belonging, and meaning-making — not just outcomes.

Even when something is objectively better, it doesn’t matter unless people believe in it. And belief isn’t distributed equally — it’s shaped by who controls the narrative.

In that sense, the future of AI is less about algorithms and more about legitimacy. Who gets to decide what is trustworthy? What counts as “real intelligence”? Who gets to define progress? These aren’t technical questions — they’re political ones. And they will be answered not just by engineers, but by people with vested interests in keeping the current systems intact.

Why We Struggle to Trust AI — Even If It’s Better

Part of what makes us human is our ability to collaborate across difference. Evolution didn’t gift every individual with every skill — instead, it shaped us into a heterogeneous species that thrives on division of labor and social interdependence. From hunter-gatherer bands to modern democracies, human survival has depended on one crucial skill: knowing who to trust.

In that context, we are wired to defer to others’ expertise — but we don’t do it blindly. We evaluate two key factors: competency (does this person know what they’re doing?) and care (do they have my interests in mind?). Psychologically, this is a blend of cognitive appraisal (judging skill or knowledge) and interpersonal appraisal (judging intent or alignment). It’s how we decide whether to trust a doctor, a pilot, or a leader — not just based on credentials, but also on whether we feel safe in their hands.

This is precisely why many people — especially those in power — will resist turning over high-stakes decisions to AI systems. Even if the system is more competent, skepticism will persist about whether it is aligned with human values and interests. Alignment refers to ensuring that advanced AI systems pursue goals that are beneficial and compatible with human values. The worst-case scenario — central to the “race” outcome in AI 2027 — is a misaligned AGI: a system so powerful that, if its goals differ even slightly from our intentions, it could take actions that are catastrophic on a global scale. This concern isn’t about evil robots or rebellion; it’s about a machine optimizing for objectives we failed to fully define or constrain. We hesitate — not because we’re irrational, but because we’re wired for a different kind of social contract.

Until AI systems can convincingly signal both competence and care, large-scale adoption will meet resistance. Because as much as we value efficiency, we value belonging and safety more.

The Final Decision Isn’t Theirs. It’s Ours.

None of this is to say we shouldn’t take AI seriously. The pace of progress is real, and the breakthroughs outlined in AI 2027 are within the realm of possibility — even within reach in the next few years. I believe AI will absolutely generate solutions to some of humanity’s most persistent problems — problems we’ve proven unable to solve on our own. From medical breakthroughs to economic modeling to education and safety, AI could offer leaps forward that human intelligence, with all its biases and limitations, has yet to deliver.

But the story doesn’t end with possibility. It hinges on how we respond. The most consequential variable in the years ahead isn’t what AI will become — it’s how human beings will interpret, integrate, or resist it. The scenarios in AI 2027 rush toward a binary: race or pause. But in reality, the future will be shaped by a far messier negotiation. Not just between governments or companies, but between worldviews, values, and levels of trust. Between hope and fear.

Some human resistance will be misguided, rooted in misinformation, fear, or the protection of power. But not all resistance is regressive. In many cases, hesitation will be necessary — a form of discernment, an ethical pause. We will need to collectively wrestle with what kinds of problems AI should solve, and what realms of life are too relational, too moral, or too sacred to delegate.

It’s not just about delay — it’s about deliberation. Healthy tension. Necessary conflict. Wise governance. And that process, however uncomfortable, is the work of being human in the face of non-human transformative power.

We don’t yet know what AGI will be capable of — but we do know a great deal about ourselves. And if we want to build a future shaped by wisdom — not just speed — we have to take those human tendencies seriously, not as flaws to be eliminated, but as signals to be interpreted.

Because the future won’t be defined solely by what artificial general intelligence can do. It will be defined by what humanity chooses to embrace, question, and protect along the way.
