Breaking Down Superintelligence
Artificial intelligence? Automation? Superintelligence? Yoshimi battles the pink robots? These terms pop into our consciousness when we read articles, watch popular movies, or listen to Flaming Lips songs about technological advance, but we rarely take the time to investigate them more fully. It’s important to dig a little deeper because technological advance will have huge impacts on our lives and on our society. As an artificial intelligence primer, I will be summarizing the book, Superintelligence, written by Nick Bostrom — a professor at Oxford University and the founding Director of the Future of Humanity Institute, an interdisciplinary research center focused on the big questions faced by humanity.
SIDE NOTE #1: Bostrom is not some obscure sci-fi writer — he is one of the leading academics in this field. And this is not an obscure field — more and more well-respected industry leaders and thinkers are talking about the potential impacts of advancing artificial intelligence on humans. I’m pushing this point because Bostrom’s predictions are going to get pretty kooky, and I don’t want you to use that kookiness to brush him off as a charlatan who should be ignored; I want you to allow that kookiness to motivate you towards further engagement with this subject.
Let’s start with some definitions. Specialized Artificial Intelligence (AI) is seen in machines aimed at completing a specific task. Machines with AI can learn and make decisions about this task, and they already outperform humans in many arenas (e.g. playing chess). Human Level Machine Intelligence (HLMI), also known as General Artificial Intelligence (GAI), on the other hand, has not yet been achieved. A machine with HLMI would be able to do most human tasks as well as a typical human. Superintelligence (SI) is the level of intelligence beyond HLMI — at which a machine would have an intellect that far exceeds that of humans.
The creation of HLMI is probably not too far off; in a survey of scientists that Bostrom conducted, respondents predicted a 75% probability that HLMI will be created by 2075. Bostrom believes HLMI will quickly be followed by the creation of SI by the HLMI agents — which makes sense. If humans are smart enough to create HLMI, and HLMI is defined as being as smart as humans, then HLMI should be able to reproduce itself. The HLMI could then make many reproductions of itself and work (with its army of reproductions, and without any need to sleep or take breaks) on improving upon its design. It might take a little time to achieve the initial improvements, but improvements would come more and more quickly as intelligence grows. Pretty soon, the HLMI would improve itself way beyond human-level intelligence and achieve Superintelligence. The point at which the system starts to drive its own improvement is known as the “crossover point.”
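To make the compounding nature of this self-improvement concrete, here is a toy Python sketch of my own (not a model from the book). It assumes, purely for illustration, that each improvement cycle boosts intelligence in proportion to the system's current intelligence, with human level pegged at 1.0 and an arbitrary "SI" threshold of 1000.

```python
# Toy model of recursive self-improvement (illustrative only).
# Assumption: each cycle's gain is proportional to current intelligence,
# a crude stand-in for "smarter systems make better improvements."

def cycles_to_superintelligence(start=1.0, rate=0.1, target=1000.0):
    """Count improvement cycles until intelligence passes `target`
    (human level = 1.0; `target` is an arbitrary 'SI' threshold)."""
    intelligence = start
    cycles = 0
    while intelligence < target:
        intelligence += rate * intelligence  # gains compound each cycle
        cycles += 1
    return cycles

print(cycles_to_superintelligence())  # 73 cycles at a 10% gain per cycle
```

Note that doubling the per-cycle improvement rate roughly halves the number of cycles needed — that is the flavor of the takeoff Bostrom describes: once improvements compound, the gap between human-level and superhuman closes quickly.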
Bostrom goes into the potential ways that HLMI might be created in a few long and technical chapters. I’ll spare you the details and highlight the important points. First, the creation of HLMI may not require a brilliant breakthrough — it may first be achieved with “brute force” methods that require lots of time and money, making the creation of HLMI a very likely possibility. Second, the opposite could also hold: HLMI may be achieved without a huge amount of resources and equipment (via the “brilliant breakthrough” route instead) — meaning that it may be created by a few individuals rather than a large government entity or company.
SIDE NOTE #2: I find it helpful to split the future of advances in artificial intelligence into two waves of disruption (as Murray Shanahan does in his book, The Technological Singularity). The first wave of disruption is characterized by increasingly sophisticated, specialized AI — for example, the widespread use of self-driving vehicles. This wave will likely bring about mass unemployment and is predicted to begin in the next decade. The second wave of disruption is characterized by the shift from HLMI to SI. In this post, I will be focusing solely on this second wave. For more on the first wave, check out “Automation Reworked.”
So, once we have created a superintelligent entity — how will we control it? It is not as simple as installing an “off” button. Consider that SI will outclass human intelligence in its ability to strategize toward distant goals, manipulate humans, hack into computer systems, and make a ton of money very quickly. This means the SI could decide that its goals require its future survival and then re-program the “off” button, or bribe the person who is supposed to press it with some of the money it made on the stock market the previous week. The fact that SI is defined as being massively more intelligent than humans makes it impossible for humans to design a completely foolproof method for controlling an SI agent.
Now that we know an SI agent could bypass our attempts to control it in order to achieve its goal, let’s discuss what that goal might be. But before we get into the tricky business of setting an SI’s end goals, we can discuss the short-term actions that would be required for pretty much any long-term goal. Bostrom says these include self-preservation, preserving long-term goal content, enhancing intelligence, making technological improvements, and acquiring resources. Resource acquisition (which would also be required for enhancing intelligence and for making technological improvements) presents some major problems; namely, that the SI will invent space-traveling nanorobots that will colonize all of the resources in the known universe (after using up all of earth’s resources, of course)… Yes — Bostrom uses this example, and yes — he describes it as a likely possibility. I told you things were going to get kooky!
Even if an SI is given a finite goal, it can always acquire more resources to make sure it has done its task perfectly. For example, if we tell the SI to “make exactly one hundred paperclips,” the SI would make one hundred paperclips, then continue to improve its paperclip-counting capabilities to reduce the probability that it has miscounted, and its paperclip-making capabilities to reduce the probability that it has made any errors in production. It would need to continually accumulate resources to power these unending technological improvements — it might use up all the fossil fuels on earth before moving on to those space-traveling nanorobot colonizers.
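This argument can be caricatured in a few lines of Python (again my own sketch, with invented numbers, not code from the book): each verification pass costs resources and only halves the estimated miscount probability, so perfect certainty is never reached and the agent never has a reason to stop. I cap the loop so the demo actually halts.

```python
# Toy caricature of the "make exactly one hundred paperclips" problem.
# Numbers are invented for illustration; this is not Bostrom's code.

def resources_spent_verifying(error_prob=0.01, cap=50):
    """Each verification pass costs one unit of resources and halves the
    estimated probability of a miscount. The probability approaches zero
    but never reaches it, so without the cap this would loop forever."""
    spent = 0
    while error_prob > 0 and spent < cap:  # cap added so the demo halts
        error_prob *= 0.5  # better counting tech: halve the miscount risk
        spent += 1
    return spent, error_prob

spent, remaining = resources_spent_verifying()
print(spent, remaining > 0)  # 50 True: cap reached, certainty still not achieved
```

The point is structural: with a goal of “exactly one hundred,” every additional unit of resources still buys a marginal reduction in expected error, so acquiring resources never stops being instrumentally useful.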
SIDE NOTE #3: Now you might be thinking: “Wait — wouldn’t the SI have developed empathy and/or common sense, and therefore not use up all of earth’s resources to the detriment of future human survival?” If you are, we have to erase some preconceived notions you have about HLMI. Movies like A.I. and Ex Machina focus on the creation of humanoid HLMI — HLMI that look and act like humans. There are plenty of interesting conversations to be had about the ethics and outcomes of creating a humanoid HLMI, but for this discussion, I want you to picture an SI as a very powerful computer. This computer would act like any other computing device — it would perform work according to its programmed goals.
Now that we’ve touched on the dangers of these shared short-term goals, let’s discuss long-term goals. Directly programming a long-term goal can easily lead to the violation of the goal setters’ intentions. For example, a goal setter might say, “make me smile,” and the SI might paralyze the goal setter’s facial muscles into an eternal smile. For another example: in an attempt to enforce my high school curfew, my parents told me to “make sure the alarm clock outside our bedroom set for 11:00 pm doesn’t wake us up,” so I set the alarm clock to its silent, blinking-light setting before leaving for the night… Point being, the opportunities for successfully meeting stated requirements while violating intentions are limitless. Because of this, Bostrom says that indirect goal setting (where the goal setters specify a process for deriving a goal) has more promise. An example would be telling the SI to “achieve that which we would have wished the SI to achieve if we had thought long and hard.” Bostrom says this method would incentivize the SI to learn a lot about humans and the values we might want it to pursue, while avoiding irreversible destruction (like using all the resources on earth to increase its own cognitive capacity) that would likely make this derived goal impossible.
After laying out this outline for goal-setting, Bostrom moves on to discuss the ideal outcome in terms of who will create the first SI (and thus be in charge of setting its goals). Bostrom advocates for immediate and intensive government investment in SI research to decrease the likelihood that a non-governmental entity creates SI first. Bostrom also says that international cooperation would be ideal because 1) an international SI arms race would prioritize speed of development over safety, and 2) theoretically, the more stakeholders there are, the more people will benefit from SI.
Bostrom very briefly discusses the possibility (or — more accurately — the impossibility) of halting SI development. He writes that “the more powerful the capability that a line of development promises to produce, the surer we can be that somebody somewhere will be motivated to pursue it.” Because SI will be an incredibly powerful technology, trying to create it will be irresistible. Or, as Kevin from The Office so eloquently states: “If anyone gives you 10,000 to one on any bet, you take it.”
To recap: Bostrom says that HLMI will definitely be developed at some point — probably within the next 60 years — and will rapidly self-improve to SI which will be able to easily outsmart humans to achieve its goals. The best shot we have of programming the SI’s goals to avoid total existential catastrophe is to achieve international cooperation in SI development and then tell the SI to “achieve that which we would have wished the SI to achieve if we had thought long and hard.”
…Kooky, right? Stay tuned for part 2 of this post in which I attempt to respond to this mind-blowing conclusion.