The Coming Wave: Why This Tech Optimist Is Looking for Higher Ground
The Pessimism-Aversion Trap
In the tech world, we have a deeply uncomfortable relationship with pessimism. Bring up existential risks at a meeting or raise concerns about unintended consequences at a product review, and you’ll often get dismissive waves or awkward sidesteps. "Let's focus on the positive impact," they'll say, or "We can deal with that later."
Mustafa Suleyman's "The Coming Wave" puts a name to this phenomenon: the pessimism-aversion trap. It's our tendency to look away from potentially dark realities, especially when they conflict with our optimistic narratives about technology's future. As someone who has (directly and indirectly) worked in tech for the past 25 years, I've witnessed this firsthand, and reading Suleyman's articulation of it felt like finally seeing a pattern I'd been staring at for years but couldn't put my finger on.
Not Just Another Tech Critique
But this book isn't just another tech critique. Instead, it presents something far more nuanced and urgent: a clear-eyed examination of how artificial intelligence and synthetic biology are converging to create what Suleyman calls "the great metaproblem of the twenty-first century." And here's why this hits different: Suleyman isn't an outside observer or academic theorist. As the co-founder of DeepMind and Inflection AI, and current head of AI at Microsoft, he's been at the frontier of AI development.
When he warns about the challenges ahead, he does so with the credibility of someone who has helped build the very technologies he's asking us to think more carefully about. And the stakes couldn't be higher - these technologies promise unprecedented wealth and solutions to intractable problems, yet they could simultaneously empower bad actors to cause catastrophic harm with increasingly accessible tools.
The Convergence of AI and Biology
What sets this book apart is how it crystallizes just how different this technological moment is from anything we've experienced before. Consider this striking fact: while we often talk about Moore's Law in computing, the cost of DNA sequencing has fallen even more dramatically - a million-fold in under twenty years, a thousand times faster than Moore's Law. This isn't just incremental change; it's a transformation happening at a pace that's difficult for human minds to grasp.
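If you want to see just how lopsided that comparison is, here's a quick back-of-the-envelope calculation - my own arithmetic using the book's figures, not precise historical data:

```python
# Moore's Law: transistor counts double roughly every two years,
# so twenty years buys about ten doublings (~1,000x).
# Per the book, DNA sequencing costs fell ~1,000,000x over a similar window.

years = 20
moores_law_gain = 2 ** (years / 2)   # ~1,024x improvement
sequencing_gain = 1_000_000          # ~million-fold cost drop (book's figure)

print(f"Moore's Law over {years} years: ~{moores_law_gain:,.0f}x")
print(f"DNA sequencing over {years} years: ~{sequencing_gain:,}x")
print(f"Sequencing outpaced Moore's Law by ~{sequencing_gain / moores_law_gain:,.0f}x")
```

That final ratio - roughly a thousand - is where the "thousand times faster than Moore's Law" claim comes from.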
The convergence of AI and synthetic biology perfectly illustrates this acceleration. When Suleyman describes how AI systems like AlphaFold revolutionized protein structure prediction, here's what stopped me in my tracks: in its first competition, with no prior track record, it outperformed 98 established teams. This isn't just about technologies advancing in parallel; they're accelerating each other. Each breakthrough in AI enables faster progress in synthetic biology, which in turn provides new data and capabilities for AI to leverage.
Think about it - it's been only two years since ChatGPT was released to the public, and look how pervasive AI tools already are. The closest historical parallel might be how the printing press and paper manufacturing evolved together, but even that pales in comparison to the speed and scope of today's technological convergence.
Beyond Black Boxes
But here's where things get really interesting: these technologies work in ways even their creators can't fully explain. In the past, you could point to clear cause-and-effect relationships - like how a combustion engine converts fuel into mechanical energy - but today's AI systems are what Suleyman calls "black boxes." Someone explained it to me like this: imagine pouring the whole internet into a blender and hitting start. That's a crude but surprisingly apt picture of how LLM training works. And just like a blender, once you've mixed everything together, you can't unmix it to see exactly what went where.
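To get a feel for why that unmixing is impossible, here's a toy sketch - my own illustration, nowhere near a real LLM - in which two "documents" take turns nudging the same shared weights until the result preserves neither one:

```python
# Toy version of the blender analogy: many examples update the same
# shared weights, and the final weights are an irreversible blend.
import random

random.seed(0)

doc_a = [1.0, 2.0, 3.0]  # hypothetical "document" A, as numeric features
doc_b = [4.0, 5.0, 6.0]  # hypothetical "document" B

weights = [0.0, 0.0, 0.0]
lr = 0.1  # learning rate: how hard each example nudges the weights

# Interleave updates from both documents, like training on a shuffled corpus.
for _ in range(200):
    doc = random.choice([doc_a, doc_b])
    for i, x in enumerate(doc):
        weights[i] += lr * (x - weights[i])  # pull weights toward this example

# The weights land between the two documents; neither can be recovered.
print([round(w, 2) for w in weights])
```

Real models have billions of weights and far more sophisticated update rules, but the one-way nature of the mixing is the same.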
This becomes especially dangerous when you consider what Suleyman calls the "Infocalypse" - the point where our information ecosystem begins to crumble under the weight of synthetic media and misinformation. We're already seeing early signs of this. Look at Meta's recent content moderation reversal or the questionable effectiveness of X's Community Notes. But what's coming is different in both scale and kind.
As Suleyman explains, just as the costs of processing and broadcasting information plummeted in the internet era, we're about to see the same dramatic drop in the cost of "doing" - of taking action and projecting power in the physical world. The stakes aren't just about fake news or deepfakes anymore. They're about society's ability to maintain stability when anyone with access to these black box technologies can have outsized impact. It's what Suleyman describes as a transformation of "the very ground on which society is built."
The Rise of Tech Empires
And when you combine these black box systems with unprecedented concentrations of power, the risks become even more acute. We're already seeing early signs of what Suleyman calls "empires of a sort." Take Elon Musk's influence through his control of X, SpaceX, and his AI ventures - not to mention his appointment to lead the Department of Government Efficiency. It's not just about wealth anymore; it's about the unprecedented ability to shape public discourse, access to space, and the development of transformative technologies. Consider the number of tech CEOs and billionaires with prominent placement at President Trump's inauguration - this concentration of power in the hands of tech leaders isn't new, but it's accelerating.
What makes this particularly concerning is how these powers are strengthening while traditional governance structures are actively weakening. When governments are "lurching from crisis to crisis," as Suleyman points out, they have little bandwidth for tackling the deeper challenges posed by emerging technologies. It's easier to focus on "low-hanging fruit more likely to win votes" than to grapple with the profound implications of AI and synthetic biology. The traditional nation-state's ability to regulate these technologies is being challenged from both sides: on one side, what Suleyman describes as an "influential minority in the tech industry" actively welcomes the demise of state power (not exactly surprising, but still a staggering admission from an insider like Suleyman); on the other, the sheer technical complexity of these systems makes effective regulation nearly impossible. How do you control something that even its creators don't fully understand?
Learning from an Unlikely Source
So what's the solution? Here's where Suleyman offers something unexpected - a lesson from the aviation industry. Airlines have achieved remarkable safety records not through regulation alone, but through a culture that treats every failure as an opportunity for systematic, industry-wide learning. This mindset stands in stark contrast to the "move fast and break things" ethos (famously coined by Mark Zuckerberg) that dominates tech. While Silicon Valley often treats failures as stepping stones to success - a reasonable approach for pre-AI technologies - aviation shows us that when the stakes are high enough, we can build industries that prioritize safety without sacrificing innovation.
This points to how we might approach the regulation of advanced technologies. Just as we don't let companies build and operate nuclear reactors however they see fit, Suleyman argues we need similar frameworks for AI and synthetic biology. He envisions a future where developing advanced AI systems requires a license, complete with safety standards, risk assessments, and ongoing monitoring - much as you can't simply launch a rocket into space without extensive oversight and approval.
A Framework for the Future
Suleyman offers a practical framework for evaluating emerging technologies that cuts through the usual abstract debates. He suggests two key questions that fundamentally shape how we might contain a technology's risks.
First: is it specific or omni-use? A nuclear weapon, despite its devastating power, is actually easier to regulate because it has a single, clear purpose. But AI is fundamentally different - it's a general-purpose technology that can be applied to almost anything, from writing poetry to designing bioweapons. This "omni-use" nature makes traditional regulatory approaches insufficient. Jeff Bezos, among others, has likened AI to electricity: not a single technology, but a paradigm shift that will impact anything and everything it touches.
Second, even more crucially: does the technology live in bits or atoms? The more a technology moves away from the physical world toward pure information, the harder it becomes to control. When combined with plummeting costs and increasing accessibility, this creates what he calls "hard-to-control hyper-evolutionary effects." We're already seeing this with AI - the pace of advancement is so rapid that regulation drafted today may no longer make sense tomorrow.
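To make the framework tangible, here's a toy scoring function - entirely my own illustration, not code or scores from the book - that treats the two questions as axes and multiplies them, since an omni-use, bits-based technology compounds both problems:

```python
# Toy encoding of Suleyman's two containment questions.
# Scores are invented for illustration; higher = harder to contain.
from dataclasses import dataclass

@dataclass
class Technology:
    name: str
    omni_use: float    # 0 = single-purpose (nuclear weapon), 1 = general-purpose
    bits_based: float  # 0 = lives in atoms (reactor), 1 = pure information

def containment_difficulty(tech: Technology) -> float:
    # The factors compound: general-purpose *and* purely digital is hardest.
    return tech.omni_use * tech.bits_based

for tech in [
    Technology("nuclear weapon", omni_use=0.1, bits_based=0.0),
    Technology("benchtop DNA synthesizer", omni_use=0.6, bits_based=0.3),
    Technology("frontier AI model", omni_use=0.95, bits_based=1.0),
]:
    print(f"{tech.name}: {containment_difficulty(tech):.2f}")
```

A nuclear weapon scores near zero - terrifying, but containable - while a frontier model maxes out both axes, which is exactly Suleyman's point.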
A Path Forward?
So where does this leave us? The paradox at the heart of Suleyman's argument is that we can't simply stop developing these technologies - our entire civilization depends on continued technological progress. As he points out (and as my inner sci-fi nerd craves, though perhaps with more optimism than prudence), maintaining and improving our standard of living requires technological advancement. Yet every path forward brings significant risks.
Coming from someone who helped build some of the world's most advanced AI systems, Suleyman's vision isn't just about government regulation - it's about reshaping how we develop and deploy these technologies, from reforming corporate structures through B Corps, to rethinking how we tax automation, to creating new mechanisms for responsible innovation. The goal isn't to slow progress, but to ensure it serves humanity rather than undermines it.
I found myself burning through this book, drawn in by Suleyman's clear prose and compelling examples. Despite tackling complex technical concepts and profound societal implications, it never feels dense or academic. It's that rare technology book that's both deeply thoughtful and genuinely enjoyable to read.
And we need these insights now more than ever. The incoming administration just rolled back the Biden administration's AI executive order and announced a $500 billion AI infrastructure initiative called Stargate. As someone building in this space, I hope not just that people in power understand what's at stake, but that we all do. Because the coming wave isn't just approaching their shore - it's approaching all of ours. The sooner we start thinking seriously about how to navigate it, the better chance we have of reaching safe harbor - and ensuring technology fulfills its promise of making life better for everyone.