by Dagny Taggart, The Organic Prepper:
A prominent Oxford philosopher who is known for making terrifying predictions about humanity has a new theory about our future, and it isn’t pretty.
Here’s a summary of that theory, explained by Vox:
In an influential paper that laid out the theory, the Oxford philosopher Nick Bostrom showed that at least one of three possibilities is true: 1) All human-like civilizations in the universe go extinct before they develop the technological capacity to create simulated realities; 2) if any civilizations do reach this phase of technological maturity, none of them will bother to run simulations; or 3) advanced civilizations would have the ability to create many, many simulations, and that means there are far more simulated worlds than non-simulated ones. (source)
Will humanity eventually be destroyed by one of its own creations?
If you find the idea of living in a computer simulation that is run by unknown beings troubling, wait until you hear Bostrom’s latest theory.
While speaking with the head of the conference, Chris Anderson, Bostrom argued that mass surveillance could be one of the only ways to save humanity from a technology of our own creation.
His theory starts with a metaphor of humans standing in front of a giant urn filled with balls that represent ideas. There are white balls (beneficial ideas), grey balls (moderately harmful ideas), and black balls (ideas that destroy civilization). The creation of the atomic bomb, for instance, was akin to a grey ball — a dangerous idea that didn’t result in our demise.
Bostrom posits that there may be only one black ball in the urn, but, once it is selected, it cannot be put back. (Humanity would be annihilated, after all.)
According to Bostrom, the only reason that we haven’t selected a black ball yet is because we’ve been “lucky.” (source)
In his paper, Bostrom writes,
If scientific and technological research continues, we will eventually reach it and pull it out. Our civilization has a considerable ability to pick up balls, but no ability to put them back into the urn. We can invent but we cannot un-invent. Our strategy is to hope that there is no black ball.
If technological development continues then a set of capabilities will at some point be attained that make the devastation of civilization extremely likely, unless civilization sufficiently exits the semi-anarchic default condition. (source)
Bostrom believes the only thing that can save humanity is government.
Bostrom has proposed ways to prevent this from happening, and his ideas are horrifyingly dystopian:
The first would require stronger global governance that goes further than the current international system. This would enable states to agree to outlaw the use of the technology quickly enough to avert total catastrophe, because the international community could move faster than it has been able to in the past. Bostrom suggests in his paper that such a government could also retain nuclear weapons to protect against an outbreak or serious breach.
The second system is more dystopian, and would require significantly more surveillance than humans are used to. Bostrom describes a kind of “freedom tag,” fitted to everyone, that transmits encrypted audio and video and spots signs of undesirable behavior. This would be necessary, he argues, for future governance systems to preemptively intervene before a potentially history-altering crime is committed. The paper notes that if every tag cost $140, it would cost less than one percent of global gross domestic product to fit everyone with the tag and potentially avoid a species-ending event. (source)
These tags would feed information to “patriot monitoring stations,” or “freedom centers,” where artificial intelligence would monitor the data, bringing human “freedom officers” into the loop if signs of a black ball are detected.
How very Orwellian.
Being monitored by artificial intelligence is a horrifying idea.
The idea of artificial intelligence monitoring human activity is particularly alarming, considering that we already know AI can develop prejudice and hate without our input and that robots have no sense of humor and might kill us over a joke. Many experts believe that AI will eventually outsmart humans, and the ultimate outcome will be the end of humanity.
Is having robot overlords a good idea, even if they might prevent someone from selecting a black ball? We already have mass surveillance, and global governance seems to be on the way as well.
Bostrom acknowledged that the scenario could go horribly wrong, but he thinks the ends might justify the means:
Obviously, there are huge downsides and indeed massive risks to mass surveillance and global governance.
On an individual level, we seem to be kind of doomed anyway.
I’m just pointing out that if we are lucky, the world could be such that these would be the only way you could survive a black ball. (source)
For those who remain skeptical, Bostrom advises weighing the pros and cons:
A threshold short of human extinction or existential catastrophe would appear sufficient. For instance, even those who are highly suspicious of government surveillance would presumably favour a large increase in such surveillance if it were truly necessary to prevent occasional region-wide destruction. Similarly, individuals who value living in a sovereign state may reasonably prefer to live under a world government given the assumption that the alternative would entail something as terrible as a nuclear holocaust. (source)