Possible | Yuval Noah Harari on trust, the dangers of AI, power, and revolutions
This week on Possible I sat down with historian-philosopher Yuval Noah Harari, the acclaimed author and thinker whose deep knowledge of the past makes him more than a bit cautious about our AI future. We agree that AI will reshape the human story as profoundly as writing or electricity—perhaps more. Yet we diverge, sometimes sharply, on whether that reshaping will benefit society, and on how much runway humanity still has to steer it.
Yuval’s warning is that AI is moving faster than any society can adapt. By the time we’ve built the laws, regulations, and frameworks to govern it, the AI has already evolved ten times over—and history shows that when change outruns our institutions, the bill is paid in human suffering.
He grades the Industrial Revolution a “C-minus.” We survived, but only after imperialism, totalitarian experiments, and two world wars. Since I, like many in Silicon Valley, am keen to call this moment the cognitive industrial revolution, Yuval argues that a “C-minus” may not cut it this time. His prescription is to focus on rebuilding trust at every level, and to treat self-correction—feedback loops that let us spot and fix mistakes—as the number-one design requirement.
I don’t disagree on the dangers, but I believe humanity still has a role to play in steering AI toward positive outcomes. As I write in my book, Superagency, we can get there by advancing the research while embedding safety work inside it, then using AI itself to widen those self-correcting loops.
Another fork in the road: intelligence versus consciousness. Yuval argues that genuine concern for truth springs from the capacity to feel and to suffer—a mind that cannot suffer has no real stake in what is true. Intelligence alone, however supercharged, might just manufacture grander delusions. Our power struggles, he says, will imprint themselves on whatever agents we unleash.
I’m more optimistic that rigorous transparency, like forcing models to show their work and audit each other, can ground intelligence in something that behaves like humility. But he’s right that we’ve never built a machine mind; we can’t assume our favorite virtues will copy-paste cleanly. I believe that well-designed learning systems can be aimed at truth-seeking first, power-seeking second.
Where we agree is on trust. Yuval recounts how, as a gay teenager in 1980s Israel, he found community only after the early internet created new connections. It’s an example of how technology can genuinely expand empathy—though the same channels can amplify cynicism if the underlying incentives reward outrage.
Talking to Yuval is a reminder that constructive disagreement is the only way to improve our odds of handling the AI revolution with an A-plus rather than a C-minus. Our forecasts diverge, but the act of mapping those differences sharpens the path forward.
We need more of this conversation, across labs, companies, and governments, because the choices we make this decade will echo well into the future.
Until next time,
Reid
Here's the full episode with Yuval: https://link.chtbl.com/4-tX2qaM
YouTube: https://youtu.be/uuBLxWowDqI
Transcript: https://www.possible.fm/podcasts/yuval/
You can subscribe to catch more episodes of Possible here: https://www.possible.fm/podcast/