AI Is Accelerating—But Is The Narrative All Wrong?

We Need a Better Story About AI

As a psychotherapist and AI consultant, I’ve watched the evolving narrative around artificial intelligence with something between curiosity and quiet horror.

The conversation, such as it is, is dominated by the usual cast: Sam Altman, Demis Hassabis, Gabe Alfour, Elon Musk and the rest of the techno-oracle fraternity. You know the type. Wealthy. Brilliant. Publicly worried that AI might destroy the world. Privately launching another round of funding.

They speak of “existential risk” as if it were a KPI. Some even assign percentage likelihoods to humanity’s extinction, which, if you’re prone to catastrophising, is not recommended reading before bed. Their tone swings wildly between messianic and mournful, with only the occasional admission that nobody, themselves included, really knows what they’re talking about.

Now, while these apocalyptic concerns are not entirely ridiculous (machines with no off-switch are, admittedly, worth thinking about), they also obscure more immediate and arguably more important issues. Namely, what is AI for? Who is it for? And why does so much of the public narrative sound like the pitch for a Netflix sci-fi series?

AI, in its public framing, has become the Terminator in a tux. It’s either a godlike saviour or the harbinger of doom—rarely a tool that might, say, quietly revolutionise how we detect cancer or personalise education. This framing is unhelpful not only because it’s alarmist but because it’s alienating. And when you combine it with tech billionaires flirting with ‘broligarchy’ political influence—as though unimaginable wealth were somehow not enough—it becomes deeply cynical, too.

This vacuum, naturally, invites others to speak. Geoffrey Hinton, Tristan Harris, Shoshana Zuboff, Carole Cadwalladr—figures rightly alarmed by the breakneck pace of development and the cultural carelessness with which it’s being pursued. They’re pointing out, in various ways, that if AI is going to change everything, we might want to think about what “everything” actually means. And who gets a say in it?

Meanwhile, a quieter truth is going unheard: we have no compelling story about AI. And that’s a problem.

Because humans are storytelling animals. The story isn’t just a device—it is the psychological process. Narrative is how we make sense of change, how we metabolise uncertainty. Without it, the future feels like a formless threat. With it, the future becomes imaginable, even navigable. It’s semiotic scaffolding—older than writing, older than reason.

But right now, the story of AI is being written by people who are either too afraid or too enamoured to tell it straight. The result? A narrative void. And into that void creeps fear, cynicism, paralysis.

As a psychotherapist, I see this first-hand. My clients are increasingly anxious about the world they’re stepping into—a world where machines seem to be accelerating faster than meaning. They worry about losing their jobs to automation. They worry about their children growing up in a world where human relationships are mediated entirely by screens. And they worry, quite reasonably, that these decisions are being made in rooms they’ll never be invited into.

They don’t trust the people building this technology. And they don’t see where they fit in.

But as an AI consultant with Synima, I also see something else—something that isn’t getting airtime. I see the extraordinary, often quiet ways AI is already improving lives. In healthcare, it’s helping radiologists detect tumours earlier and with greater accuracy. In mental health, it’s supporting overwhelmed clinicians with triage tools and conversational agents that can offer basic guidance. In education, it’s reshaping how children with learning differences engage with school—offering adaptive tools that meet students where they are, not where the curriculum says they should be.

Even in wellbeing—another badly under-reported domain—AI is showing promise. It can help track emotional patterns, offer reminders for medication, or simply act as a non-judgemental companion during moments of loneliness. These aren’t dystopian. They’re deeply human. And they deserve more space in the conversation.

If we’re going to continue our headlong sprint toward Artificial General Intelligence—and let’s be honest, there’s no sign of slowing—the least the major players can do is bring people along for the ride. Not by frightening them with theoretical annihilation. Not by posing for the cover of Time. But by telling a story that helps people see what’s at stake, and what might be possible.

AI needs to be midwifed into the world—not hacked together in a lab and sprung on the rest of us like a surprise birthday party from a sociopath. It needs care. It needs ethics. But most of all, it needs meaning.

Because the ethical frameworks that ought to guide this transformation are still lagging far behind the technology itself. The communication is piecemeal, contradictory, and often motivated more by shareholder value than human values.

And this noise—this endless din of press releases and panic—obscures something vital. The truth is, we’re not really talking about intelligence at all. We’re talking about power. We’re talking about agency. We’re talking about what kind of species we want to be, and how we relate to the tools we create.

Which brings us back to the story. If we want AI to serve us, not subsume us, we need a new narrative. One that’s grounded in hope, not hype. One that acknowledges risk without fetishising it. One that invites people in, instead of pushing them out.

Because without that story, the future won’t just be uncertain. It will be incoherent.

And coherence, in the end, is what we’re really craving.

Amazing article! Thanks. As an animator, of course, my main concern is with the ethical use of it, which is a very muddy field I need to learn more about. However, I do worry that, as with the recent Ghibli issue, the difference between the real thing and an AI-generated imitation is less and less discernible to many people. When such imitations can be quickly and casually produced, they inevitably devalue the original material on which they are based. Now, when AI gets to the point that it can replicate human personalities and capabilities with similar verisimilitude, and when such "personalities" can be rapidly run off a production line, say with random "human" foibles built in to make them all just a little different, what does that imply for the enduring value of the source material: actual humans?
