The Coming Wave by Mustafa Suleyman
- The Workforce for Responsible AI
Mustafa Suleyman’s The Coming Wave is an essential read for anyone seeking a foundational understanding of the rapid pace and scale of artificial intelligence—and its potential to transform society. Even AI skeptics will find useful insights, as Suleyman traces the evolution of the technology, examines its limitations, and addresses pressing questions about how today’s developments compare with past waves of disruption.
From the domestication of livestock to the rise of the printing press, railways, steamships, the internal combustion engine, electricity, genetic engineering, and the internet, history is filled with examples of disruptive technologies that have reshaped society. Yet, despite the upheavals they caused, societies ultimately emerged more productive, healthier, and more resilient. So how—if at all—is AI different?
One of the book’s most compelling arguments is Suleyman’s explanation of the characteristics that place artificial general intelligence (AGI) on a radically different trajectory from past innovations. AGI refers to general-purpose systems designed with the broad aim of exceeding human cognitive capabilities across a wide range of tasks. When combined with other emerging technologies—such as quantum computing, synthetic biology, and robotics—AGI has the potential to disrupt society at an unprecedented scale and pace.
A clear example of this technological shift is the rise of large language models (LLMs)—the type of system that underpins applications such as ChatGPT, as well as work at DeepMind, the AI lab Suleyman co-founded. These models rely on deep learning techniques, using neural networks loosely modeled on the human brain. They are trained on enormous amounts of data to perform a wide range of tasks—from writing and editing to generating code and outperforming human champions in strategic games.
Where earlier iterations of these LLMs were trained on carefully curated datasets, today’s models can be trained on vast, open-ended, and unfiltered data with minimal human supervision. As Suleyman candidly notes, although model designers can adjust weights and structure to improve performance, the internal workings of these massive neural networks have become increasingly complex—making their specific outputs harder to predict.
Another defining feature of this technology is its capacity for exponential growth, or what Suleyman refers to as hyper-evolution. A vivid example comes from the application of machine learning in biology. Much like the Human Genome Project, which sought to sequence human DNA, scientists have worked for decades to determine how proteins—chains of amino acids—fold, a question crucial to understanding biology and disease. In 2018, DeepMind’s AlphaFold, a deep learning system designed to predict the 3D structure of proteins, correctly predicted 26 structures. By 2022, with AlphaFold2 made publicly accessible, more than one million researchers were using the tool, and DeepMind released more than 200 million structures representing nearly all known proteins. This dramatic leap illustrates Suleyman’s point: when clusters of powerful technologies emerge simultaneously, anchored by a general-purpose technology like advanced AI that acts as an accelerant, their combined impact can drive scientific and societal change at breathtaking speed.
While these technologies promise extraordinary benefits, they also pose profound risks—particularly when powerful capabilities become widely accessible without adequate guardrails. The AlphaFold example underscores the tension: democratizing advanced tools can accelerate discovery, but it can also equip malicious actors with unprecedented capabilities. Suleyman argues that outright containment—meaning slowing innovation itself—is unrealistic. Instead, political, legal, and cultural systems must evolve at the same pace as technological change. He proposes a framework of “ten steps toward containment”: targeted interventions designed to tightly control the most dangerous capabilities, limit their uncontrolled diffusion, and build robust oversight mechanisms to ensure their safe development and deployment.
As Suleyman forcefully contends, whether one is a critic or a champion of these technologies, the stakes are unmistakably high. The technology will continue to evolve—rapidly. Our responsibility is to ensure that it evolves as a tool for human advancement, not a catalyst for harm.