The public consequences of the private battle for AI dominance

In a new book, Parmy Olson goes beyond personality politics and explores the fascinating financial and commercial ecosystem supporting AI’s grandest, and unchecked, ambitions, writes Joshua Huminski.

September 28, 2024

Artificial intelligence (AI) is already here. It is used in countless applications, from online purchases to social media curation, flight routing to energy grid management. It is ubiquitous in daily life. ChatGPT nonetheless took the world by storm. When the chatbot was released to the public, it quickly became the fastest-growing consumer application in history, reaching 100 million users in just two months. ChatGPT represented something new: it could hold conversations and generate its own content. Generative AI could create music, art, code, and more. The effects of generative AI are only just beginning to be felt and will reverberate for years to come in ways we have not even begun to consider.
The story of how ChatGPT emerged is decidedly less well-known, and just as interesting, as its impact. In her new book “Supremacy,” Parmy Olson, a columnist with Bloomberg Opinion, tracks two of the key protagonists in the development of generative AI and the pursuit of Artificial General Intelligence (AGI): machines that can think and multitask as well as, if not better than, humans. Olson’s subjects are OpenAI’s Sam Altman and Google DeepMind’s Demis Hassabis, each of whom has markedly different ambitions for AGI. Hassabis aims to use AGI to solve the problems of today, whilst Altman wants to achieve AGI as fast as possible to render the problems of today irrelevant. Philosophically opposed though they are, both men’s ambitions are so lofty that they ignore the immediate impacts of their work.
The battle between Altman and Hassabis (and those in their respective corners), as recounted by Olson, is a thrilling story of the collision of ambition with reality, and of abstract ideas about humanity and its future with the corporate demands of profit today. Olson goes beyond personality politics and explores the fascinating financial and commercial ecosystem in which these ambitions exist. The pursuit of investment and capital to power their respective companies; the Mean Girls-esque back-biting and sniping (Elon Musk makes for a fierce Regina George); the ostensibly noble, though wholly self-serving, grand ambitions balanced against the need to generate profit: it is all riveting. The internal tension between OpenAI’s desire to remain a nonprofit and the company’s sheer profit potential is playing out in real time as this review goes to print. One does wonder how much of these grand ambitions are, or at least once were, real, and how much is merely a convenient fiction clung to with ideological zeal.
Olson’s book offers yet another window into the curious evolution of late-stage capitalism: the concentration of wealth and power amongst increasingly large technological conglomerates. The decisions of the executives of these companies, public and not, carry greater weight and affect society more significantly than those of the legislators elected by the people. The people are not wholly irrelevant—they are both consumer and product—but they have even less to say about what takes place in Silicon Valley than they do in Washington.
This is a story that could easily have drowned under the weight of impenetrable jargon, but Olson’s fluid prose and clear explanations make large language models, neural networks, and the broader lexicon of AI accessible and even interesting. Olson’s writing is illuminating and clarifying in equal measure, both of which are vitally needed in the discussion about AI.
This is superlative business writing.
Olson’s book is utterly compelling—superbly researched and supremely readable. Olson paints vivid portraits of Altman and Hassabis: their motivations, their insecurities, and their ambitions for AGI. It is, almost exclusively, a story about the men behind this drive—the women who do make appearances are modern-day Cassandras who warn of the risks of AI, such as bias and harm to marginalized communities, only to be ignored, ridiculed, and hounded out of their respective companies.
What makes “Supremacy” stand out from the increasingly crowded space of books on AI are the questions Olson’s writing raises—questions that are all but absent from the core narrative itself.
The utopian and dystopian futures on which Altman and Hassabis fixate blind them to any consideration of the near-term impact of their creations. When weighing the possible extinction of humanity or, conversely, the arrival of a technological utopia, the effects on the public today appear inconsequential. In their disparate visions, humanity will in the end either be enslaved (or manipulated) by malicious AI, or AI will transcend basic economics and create unimagined sources of wealth. It is easy for the two men to dismiss immediate effects when they are focused solely on the grand, overarching vision, and are fortunate enough to escape those very consequences themselves.
It is far less easy to dismiss those effects if you are living paycheck-to-paycheck, watching your job become automated, and experiencing what amounts to evolutionary change within a single lifetime. Ironically, though it receives less attention in Olson’s book, much of the training of AI models is done by human hands. Underpaid and often exposed to the worst the internet has to offer, these human “computers” tag images to better refine the models. It is less Intel Inside and more Human Inside, although the humans inside are training their own replacements.
Given how consequential AI could well be—even well short of the utopian goals or dystopian fears of its protagonists—one would think that governments should, or would, have a say in the discussion. In Olson’s telling, they do not. Washington barely features in the conversation about AI at all. This isn’t all that surprising, but it is consequential nonetheless.
Congress responds to the impacts of AI rather than proactively trying to shape or guide its development with regulatory or legislative guardrails. An argument could be made that such guardrails would hamper innovation and development, but given the long-forewarned impact of AI, perhaps intervention or greater involvement is warranted. Given Congress’ track record in dealing with the consequences of technology, it is not at all surprising that it was ill-prepared for the emergence of ChatGPT and generative AI. Washington, collectively, was as much a victim of the ambitions of Altman, Hassabis, and others as of anything else.
This is, of course, before even considering the global competition and drive for AI supremacy. Militaries the world over are racing to put AI and autonomy into weapon systems to make them more precise, more lethal, and more capable. The fruits of this race are increasingly seen on the battlefields of Ukraine and in preparations for hostilities in the Indo-Pacific. Efforts to agree on common standards for AI are ongoing, but the security dilemma remains—much like the market pressures driving Altman and Hassabis, a first-to-market (or in this case, first-to-battlefield) urgency is ever present in geopolitics.
Would early government intervention or engagement have changed the courses set by OpenAI and DeepMind? Arguably not. To governments’ credit, there is now an active effort to mitigate AI harms and improve AI safety within the United States and across allies and partners. The AI Safety Summit held at Bletchley Park in November 2023 is a step toward greater multilateral alignment on AI. These efforts, however, are focused on the models and tools themselves. While critical to offsetting security risks and threats, that focus addresses only a small component of the broader societal impact of AI. More significantly, government efforts will run up against first-to-market and geopolitical pressures alike, which could well stymie progress.
Perhaps the greatest and most fundamental question of all, and one that “Supremacy” leaves unasked, is whether generative AI (and, in the future, AGI) is worth it in the end. To borrow from Dr. Ian Malcolm (brilliantly played by Jeff Goldblum) in the movie Jurassic Park, Altman and Hassabis “were so preoccupied with whether or not they could that they didn’t stop to think if they should.” The academic, theoretical, and practical conversations about AGI were driven by a first-to-market mentality and by utopian ideals (and fears) rather than by any real reflection on the immediate utility and value of AI—AI was a self-referential fait accompli. If ‘we’ don’t build AGI first, ‘they’ will—and that is just within the confines of Silicon Valley and DeepMind’s London outpost, let alone the White House, the Kremlin, or Zhongnanhai.
“Supremacy” is a riveting read as a standalone business book (it is with good reason that it was shortlisted for the Financial Times Business Book of the Year and found a place on the Diplomatic Courier’s best books of the autumn). It will, perhaps unintentionally, also serve as a guidepost (or indeed a roadside memorial) for those seeking to understand how we arrived at a place where such crucial decisions on AI were made, who made them, and what alternative pathways could have been pursued.