I wanted to like this book, I really did, but I can’t say it did it for me.
To start off, this book is both dated and not dated at all. It was published in 2014, which feels like an aeon before the advent of Large Language Models (LLMs), and it's hard to read it now without recent advances getting in the way. When we think of "AI" today, we tend to think of LLMs, especially software like ChatGPT and other chatbots, which Bostrom would call "oracles." But these still have a long way to go before reaching what the author would call "superintelligence," and Bostrom's understanding of "AI" is much broader than LLMs. At the same time, this book isn't really about AI; it's about superintelligence, which could ostensibly include technological modification of humans, designer drugs, eugenics, uploading human consciousness to the internet, and so on. Even so, Bostrom argues that AI is likely the fastest route to superintelligence, and he's right.
While there is an attempt to predict how an "intelligence explosion" might take place, the book is really more about ethics. The bulk of it deals with AI ethics and asks questions like: How might we make AI humane? What values could we inculcate in it? How do we ensure that it doesn't destroy us? How do we manage resources? Things of that nature.
Reading it now, it's really hard not to be pessimistic. A number of labs created AI ethics research teams, but those teams have either (1) made little progress or (2) been largely disbanded in favor of economics or speeding up development.
There are some sections on economics, social systems, and our "cosmic endowment." The economics and social systems segments are bleak: Bostrom argues that we are at risk of falling back into the Malthusian trap in an AI-enhanced world. By the "cosmic endowment," Bostrom means space colonization, and he fears that superintelligence could either (1) let us expand throughout the universe or (2) squander all of our resources and guarantee us a short existence.
And this is my biggest problem with Bostrom's work: he seems to take for granted that a superintelligence explosion will occur. This has been heavily discussed over the past half decade thanks to the expansion of LLMs, but I'm unconvinced that LLMs are the pathway. He also seems to take for granted, for the most part, that superintelligence means intelligence as we understand it, only increased by orders of magnitude. The fear is that this could lead to a paradigm shift, with superintelligent beings coming to see our values and ethical systems as illusions.
I think the real situation is even more alarming. It seems wholly unlikely that "superintelligence" means an expansion of our intelligence so much as the production of an alien intelligence. We already see this with LLMs, which can't make sense of human concepts like fingers and teeth and causation. If you've ever watched an AI-generated video, you'll know what I'm talking about: fingers and teeth are all wrong, cars move backwards, and so on. The model is essentially trying to emulate the world it interprets, but it doesn't share our ontology. It's representation without knowing what things are, or perhaps it does recognize what things are, but sorts them into different ontological categories than we do. There's a sort of schizophrenia involved, where socially agreed-upon categories are thrown out the window for something wholly new. As a result, it becomes illegible to us, and increasingly hard to control.
Bostrom enumerates a number of techniques for trying to "control" the damage of an intelligence explosion: "boxing" the AI in a contained space, using specific indicators to trigger an automatic shutdown, and so on. However, we've already seen that even LLMs can be manipulative: what about other forms of AI?
The real risk comes with competition. If a single state, firm, or lab dominates AI, there seems to be very little risk: it can work at a leisurely pace with all of the necessary holds on advancement. But that's not what we see. In an arms race, which is the condition AI research is in today, speed is what matters, and that leads organizations to loosen ethical limitations to maximize progress.
To be blunt, this is extraordinarily dangerous. But I'm also not sure that any one actor's restraint matters: the more firms involved, the higher the risk, and there are a lot of firms in the AI race today.
For all of my qualms about the text, it is an important book, and it seems like Bostrom was the first author to really sound the alarm. He cites Eliezer Yudkowsky frequently, and I might have to turn to Yudkowsky's writing next to see what he has to say as well.