A new book on the battle for AI doesn't ask whether it's worth fighting for


Book Reviews

Supremacy: AI, ChatGPT and the race that will change the world

By Parmy Olson
St. Martin's Press, 336 pages, $30
If you purchase books linked on our site, The Times may earn a commission from Bookshop.org, whose fees support independent bookstores.

Of all the technologies that have generated a stir in recent years, the most talked about by far is the one known as artificial intelligence (AI).

It keeps coming up largely because chatbots and the systems they have spawned have amazed users with their humanlike dialogue and test-taking skills, and because their critics, and even some of their promoters, have raised the specter of machines that could supplant human endeavors and threaten human existence.

That’s what makes Bloomberg columnist Parmy Olson’s new book so timely. “Supremacy: AI, ChatGPT and the Race That Will Change the World” looks at the corporate maneuvering behind the development of AI in its current form, which is largely a battle between Google, the owner of the DeepMind lab, and Microsoft, a key investor in OpenAI, a prominent commercializer of the technology.

Olson deserves praise for the remarkable journalistic achievement of chronicling a corporate battle while it is still ongoing — indeed, still in its infancy. Despite the timeliness of “Supremacy,” the question may be whether it has come too soon. How the battle will play out is anyone’s guess, as is whether current iterations of AI will actually change the world, as its subtitle claims, or whether they are destined to fade away.

If the latter is true, it would not be the first time that venture capitalists, who have poured billions of dollars into AI development labs, have all raced over a cliff together. In recent decades, other novel technologies have come to market on waves of hype: the supposed dot-com revolution of the late 1990s and the cryptocurrency and blockchain revolution, which is already fraying, come to mind.

For much of her book, Olson seems overly enthralled by the potential of AI; in her foreword, she writes that she’s never seen a field “advance as rapidly as artificial intelligence has in just the past two years.” Yet according to her bio, she’s been covering the technology for “more than 13 years.” Even that may not be long enough to supply the historical perspective the subject demands.

At the core of “Supremacy” is a dual, “Parallel Lives”-style biography of artificial intelligence entrepreneurs Demis Hassabis and Sam Altman. The former, the founder of DeepMind, is a London-born game designer and chess champion who dreamed of creating software “so powerful that it could make profound discoveries about science and even God,” Olson writes. Altman grew up in St. Louis and soaked up Silicon Valley’s entrepreneurial culture, largely through his involvement with Y Combinator, a startup accelerator where he would become a partner and, eventually, chairman.

Olson is a skilled biographer. Hassabis and Altman come vividly to life in her telling. So do other figures in the AI “race,” such as Elon Musk, who co-founded OpenAI with Altman, and others whose essential fatuousness comes across far more vividly in her pages than in those of Walter Isaacson, Musk’s admiring biographer.

Readers fascinated by high-stakes corporate maneuvers will find much to captivate them in Olson’s account of the ups and downs of the relationship between Google and DeepMind on the one hand, and Microsoft and OpenAI on the other. In both cases, those relationships are strained by the conflict between engineers focused on developing AI safely and the big companies’ desire to exploit it for profit as quickly as possible.

What the book underplays, however, is the long history of hype about AI. It is not until about halfway through “Supremacy” that Olson seriously addresses the possibility that what is being touted as “artificial intelligence” today is less than it seems. The term itself is an artifact of hype, for there is no evidence that the machines being marketed today are “intelligent” in any reasonable sense.

“Overconfident predictions about AI are as old as the field itself,” Melanie Mitchell of the Santa Fe Institute astutely observed a few years ago. Beginning in the 1950s, AI researchers claimed that exponential improvements in processing power would fill the last gaps between human and machine intelligence.

Seven decades later, that remains a dream; the computing power of today's smartphones, not to mention desktop and laptop computers, would have been unimaginable to engineers in the 1950s, yet the goal of true machine intelligence remains beyond the horizon.

What all that power has given us are machines that can ingest ever more data and spit it back out in sentences resembling English or other languages, but only of the most generic variety: public relations statements, news briefs, greeting-card verse and student essays.

As for the impression today's AI bots give that a sentient entity sits at the other end of the conversation (an impression that has fooled even experienced researchers), that too is nothing new.

In 1976, AI pioneer Joseph Weizenbaum, inventor of the chatbot ELIZA, wrote of his discovery that exposure to “a relatively simple computer program could induce powerful delusional thinking in quite normal people,” and warned that “reckless anthropomorphization of the computer” — that is, treating it as a kind of thinking companion — had produced a “simplistic view… of intelligence.”

The truth is that the inputs current AI products are “trained” on (huge scrapings from the internet and published works) are all products of human intelligence, and the outputs are algorithmic recapitulations of that data, not sui generis machine creations. Today's neuroscientists cannot even pinpoint the roots of human intelligence, so attributing “intelligence” to an AI device is a fool's errand.

Olson knows this. “One of the most powerful features of artificial intelligence is not so much what it can do,” she writes, “but how it exists in the human imagination.” The public, egged on by AI entrepreneurs, can be fooled into thinking that a bot is “a new, living thing.”

However, as Olson reports, researchers themselves are aware that large language models (the systems that appear most convincingly intelligent) have been “trained with so much text that they could infer the probability that one word or phrase follows another… These [are] giant prediction machines, or as some researchers describe it, ‘autocomplete on steroids.’”
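The “prediction machine” idea those researchers describe can be made concrete with a toy sketch. The bigram model below is my own illustration, not anything from Olson’s book, and it is orders of magnitude simpler than a real large language model; but it shows the same basic move of counting which word tends to follow which, then “autocompleting” with the likeliest next word.

```python
from collections import defaultdict, Counter

def train_bigram_model(text):
    """Count, for each word, how often every other word follows it."""
    words = text.lower().split()
    follows = defaultdict(Counter)
    for current_word, next_word in zip(words, words[1:]):
        follows[current_word][next_word] += 1
    return follows

def autocomplete(model, start, length=5):
    """Greedily extend `start` by repeatedly picking the most probable next word."""
    sentence = [start.lower()]
    for _ in range(length):
        candidates = model.get(sentence[-1])
        if not candidates:
            break  # no word has ever followed this one in the training text
        sentence.append(candidates.most_common(1)[0][0])
    return " ".join(sentence)

corpus = "the cat sat on the mat the cat ate the fish the dog sat on the rug"
model = train_bigram_model(corpus)
print(autocomplete(model, "sat", length=2))  # continues with the likeliest words
```

The output is fluent-looking but purely statistical: the program has no idea what a cat or a mat is, which is precisely the gap between prediction and understanding that the review describes.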

AI entrepreneurs like Altman and Musk have warned that the very products they’re marketing may threaten human civilization in the future, but those warnings, largely drawn from science fiction, are actually meant to distract us from more immediate commercial threats: creative copyright infringement by AI developers who train their chatbots on published works, for example, and the tendency for bots baffled by a question to simply make up an answer (a phenomenon known as “hallucination”).

Olson concludes “Supremacy” by asking, quite appropriately, whether Hassabis and Altman, and Google and Microsoft, deserve our “trust” as they “build our AI future.” By way of an answer, she asserts that what they’ve already built is “one of the most transformative technologies we’ve ever seen.” But this is not the first time such a presumptuous claim has been made about AI, or indeed about many other technologies that ultimately fell by the wayside.

Michael Hiltzik is a Times economics columnist. His latest book is “Iron Empires: Robber Barons, Railroads, and the Making of Modern America.”
