

An AI futurist runs amok

An artist’s illustration representing the concept of Artificial General Intelligence (AGI). Illustration by Domhnall Malone, as part of the Visualising AI project launched by Google DeepMind, from Unsplash

July 20, 2024

The problem with the present discourse on AI is that it is very binary—you are either a techno-optimist or a Luddite. In “The Singularity is Nearer,” techno-optimist Ray Kurzweil foresees utopian futures but overlooks current societal disruptions, writes Joshua Huminski.

At what point did we collectively agree that artificial intelligence (AI) is a net benefit? I am far from a Luddite, but it seems that at this stage in AI’s development, society is not getting the future we were promised. Instead of AI freeing us, as some far cleverer and funnier people have suggested, from the drudgeries of life (like laundry) and allowing us to pursue our creative talents, the inverse has happened. Large Language Models (LLMs) and generative AI tools are stealing—forgive me, training on—existing content and producing ‘new’ material, while we still must do laundry.

Meta recently added its own AI search function to Instagram that is functionally useless and just downright annoying. Google’s AI-enabled search feature, having apparently drunk the Kool-Aid that is Reddit, produces hilarious (and sometimes dangerous) answers that are entirely wrong. Of course, we tend to overestimate the near-term impacts of technology and underestimate their long-term effects, but the auguries aren’t great from a technological point of view and, from a policy point of view, are downright terrifying. We are likely to have true artificial general intelligence (AGI) before Congress addresses the potential impact of AI on the economy and democracy (at which point it won’t matter anyway).

The problem with the present discourse on AI is that it is very binary—you are either a techno-optimist or the aforementioned Luddite. AI will either turn everyone into paperclips or usher in a utopia where we can pursue our passions at leisure, never having to work again. Futurist Ray Kurzweil is decidedly in the latter camp, as awkwardly presented in his new book “The Singularity is Nearer.”

The Singularity is Nearer | Ray Kurzweil | Viking, Penguin Random House

A successor to his earlier books on AI, including “The Singularity is Near,” the new volume reads as though an LLM was trained on those books, a few computer science textbooks, some speculative fiction, and several works by individuals sympathetic to Kurzweil’s worldview, and then hallucinated the result. Was there a human editor in the mix? One can’t really tell. In just a matter of pages, Kurzweil swings wildly from deep dives into the mechanics of AI to philosophical questions of consciousness to how we will all become cloud-based avatars of ourselves.

Kurzweil also has a curious relationship with the process of innovation. He sees where AI is today and discusses current obstacles, but then suddenly shifts to a far-future optimistic scenario. He skips the steps between now and then, ignoring the rather significant (quite possibly exponential) technological leaps necessary to reach that end state, some of which strain the limits of physical laws.

For example, he notes that science and medicine currently struggle to map brain activity with sufficient fidelity and accuracy. Improvements to internal and external measurement are occurring, but we still lack the insight into neural activity needed to map what does what, when, and in response to which stimuli in sufficiently granular detail. So far, so good. Kurzweil then highlights Elon Musk’s Neuralink as evidence of progress, and suddenly jumps to the far future, stating that nanobots will do the yeoman’s work of brain mapping and interfacing. He writes about today, skips the entire middle bit, and then zooms to a future technology. QED. Were it just once, that could be forgiven, but he does it repeatedly throughout the book.

Were this issue limited to technology alone, it would be a harmless quirk of the techno-optimist and utopian. Kurzweil’s wilful ignorance of the squishy middle bit, the consequences of innovation, is a touch more dangerous. Indeed, he leaves all the downsides of AI to the very end of the book. Are there risks like weaponized AI, synthetic biology, or near-term energy crunches driven by computing demands? Sure, but those are, in his mind, minor, and with technology they can be overcome.

Kurzweil practically dismisses the negative impacts of technological innovation by waving them away with a metaphorical magic wand. Did a robot take your job? Poof. There will be new jobs. Big tech companies using AI for questionable purposes? Poof. The market will punish bad actors. Nation-states using AI and synthetic biology to create Covid-2030? Poof. AI will simply create wonderful new drugs and vaccines. An AI car drove you off the road to your death? Poof. Just press restart and re-download yourself into a new you, version 2.0.

This is the luxury of the techno-futurist—boundless, unaccountable optimism. The future these authors articulate may well never come to pass, and the technologies they describe almost certainly require leaps of technical faith and the hurdling of scientific laws. Who doesn’t want to believe the future will be better than the present? It is certainly more engaging to think about how things will go right than about how they will go wrong.

Kurzweil gets technology. He gets computer science. He doesn’t understand humanity, and he certainly doesn’t understand politics. He outlines a curious set of six epochs in the evolution toward AI, starting, effectively, with the Big Bang and ending with human intelligence spreading across the universe, a bit like Iain M. Banks’ Culture series. If you’re curious, we’re getting close to the fifth epoch, in which we merge with computers.

He borrows pages from authors like Steven Pinker, who argues that the overall trend of humanity is inexorably upward. On nearly every measure, from health and longevity to the spread of democracy, Kurzweil argues that things are looking up. Given the long time horizons against which he measures progress, he is not wrong, and neither is Pinker. Those trendlines don’t, however, matter for most people living through highly disruptive times. Kurzweil’s presentation is befuddling, and he comes across as a condescending arch techno-utopian wholly disconnected from reality: “You, sir, may have lost your job to AI, but things were a lot worse a generation or two ago.”

He presents these cases as illustrative of ‘accelerating returns’—things get better, and they get better faster, with greater progress. It is the application of Moore’s Law, under which processing power doubles and its price halves roughly every 18 months, to the rest of humanity. That is all well and good in theory until those trends reach an upper limit or stall out. Yes, there has never been a better time to be a mother, with maternal mortality at historic lows in the grand scheme of things, except in the United States, where it is shockingly high.

Kurzweil clearly hopes to counter the naysayers and critics of AI’s impact on humanity. If we’ve made all these leaps and bounds in improving health outcomes, reducing inequality, and generally improving the lot of humanity, just think what we can do with AI. Sure, he argues, there have been negative consequences of progress before, but we’ve recovered. All those bank tellers who lost their jobs to ATMs found new jobs and new skills.

The irony is that Kurzweil’s own arguments expose the flaws in his techno-optimism. The pace and scale of change that AI will usher in, and about which he is so excited, will potentially outclass anything that came before. If, as he believes, AI will revolutionize every aspect of the way we live, work, and love, and do so at a pace of accelerating returns, we simply won’t have time to adapt or adjust. Generational change will take place in a matter of years, and society and politics are ill-equipped to manage such a pace of disruption. That magic AI wand of his won’t change these facts, no matter how optimistic he and other AI creators are.

There is a place for techno-optimists like Kurzweil. They help push the boundaries of what is possible and create the aspirational narratives necessary for innovation and creativity. But those same techno-optimists need a heavy dose of the humanities and liberal arts to ground them in reality. The utopian future Kurzweil envisions could come to pass, but even if it does, it won’t occur in the lifetimes of his readers. What will assuredly happen between now and then is considerable disruption; disruption that is real and impactful, and not all of it positive. Dismissing the real-world consequences of technological innovation, whether by omission or wilful commission, is intellectual malpractice. Kurzweil is a fantasist and dreamer, but he is not a prophet.

About Joshua Huminski:
Joshua C. Huminski is the Senior Vice President for National Security & Intelligence Programs and the Director of the Mike Rogers Center at the Center for the Study of the Presidency & Congress.
The views presented in this article are the author’s own and do not necessarily represent the views of any other organization.