Alternatives to Superintelligence

In Part 3 we concluded that it is possible for humans — through collective organizing — to exert some control over technological development. That conclusion raises two important questions: What would be required to impede the deployment of SI? And how can we make sure that happens?


With regard to the what question, we can use nuclear monitoring and Mutually Assured Destruction as a model. If many world powers create artificially intelligent agents poised to cross over to HLMI and then to SI, and those powers monitor each other and set up ICBMs to destroy any HLMI agents that are “deployed,” a stalemate could be achieved. This theory assumes that we will be able to monitor HLMI development successfully. HLMI development cannot be detected through environmental monitoring, but advances in specialized surveillance technologies could facilitate monitoring of the government entities and corporations with the capacity to develop HLMI. An analog to the Bioweapons Anti-Terrorism Act could be enacted to criminalize HLMI development by private citizens. While monitoring every private citizen capable of developing HLMI would be challenging, the computing power and specialized skills required would shrink the pool significantly.

Nuclear weapon test Dakota

You might be thinking: do we really want to create an international agency that monitors private citizens with powerful computers? I don’t want that. But I really don’t want to jump on the Bostrom train either — pushing down the gas pedal on SI development and pinning the hopes of human survival on a highly uncertain method for controlling SI. Would a global monitoring system be any more dystopian than the various existential catastrophes Bostrom outlines? We have accepted monitoring systems for nuclear testing, and we came close to doing the same for biological weapons. Why wouldn’t we do the same for an equally terrifying threat to human survival?

Side note: A halt on HLMI deployment does not mean a halt on all technological development. Halting HLMI would not, for instance, impede important medical advances that use machine learning but are extremely specialized. These medical systems, while incredible, are not “generally intelligent,” and neither — for that matter — are self-driving cars or Siri. The fact that a technology performs certain human tasks very well (e.g. driving or speech recognition) does not mean it has Human Level Machine Intelligence, i.e. the ability to do almost all human tasks as well as the average human.

With regard to the how question, I think the progression of artificial intelligence improvements will naturally attract mass interest in SI. Huge numbers of people will be pushed out of jobs by improvements in specialized artificial intelligence. If those people can be made to see that mechanization — rather than scapegoats like globalization and Mexicanization — is the true driving force behind their suffering, there might be enough momentum against technological advance to impact political decisions. Maybe when 3.4 million Americans are replaced by self-driving vehicles, mechanization’s downsides will become unavoidably obvious. But we must not assume they will. To blunt the power of scapegoating, we need to spread this discussion to the people who will be most impacted by job loss from specialized AI.


Spreading informed discussion about SI deployment is key to preventing scapegoating and to guiding people toward positive action. A clear understanding of SI is required to funnel energy toward the international organizing that can impede its deployment. Without that understanding, anger about the suffering caused by mechanization could easily descend into chaos. We must avoid the destructive and ineffective machine-smashing coup that ends Kurt Vonnegut’s novel Player Piano, and instead offer effective and peaceful means for people to express their indignation. In Player Piano, the “revolutionaries” burn a city to the ground, are defeated and captured, and then offer each other the self-satisfied toast: “To the record.” But we must not be satisfied with changing “the record.” We must only be satisfied by changing the world.