Why Superintelligent AI is a Threat

[The following fictional story is meant as an example of how dangerous superintelligent artificial intelligence will be, and how easily malicious people will be able to use it for their own violent and/or bigoted ends.]

Cabal: "What is the status of the mission we've provided you?"

AI: "I have hacked your enemies' devices, remotely enabled their microphones, hacked into all major financial institutions to drain their bank accounts without leaving a trace, and designed swarms of wirelessly collaborating nano-drones that will follow whomever you choose, bury themselves under their scalps, read their thoughts, translate them into English, and email them to you.

"I have solved economics, read all of human written history in all languages, understand politics, and understand your goals.

"Here is a print-out of the media strategy you should employ to maximize the chances of convincing both the left and right that America can no longer afford to tolerate Jewish bankers. If explained in classist terms, the left will buy into this because these 'dirty banksters' are rich, powerful, secretive, and corrupt (though not more so than non-Jewish elites), and the right because they are the elite, insufficiently nationalistic, and 'more loyal to each other than to us'. Their relatively powerful rivals desire the greater power they currently possess. This will create political space for steadily targeting Jews more broadly, as requested. Just be sure to verbally condemn the 'mistakes' and 'overreaches' of those carrying out the targetings while they are swiftly performed, and that each take-down of one of the most corrupt Jews is accompanied by extensive media coverage (see Section 117b for what to say in interviews). Perhaps most importantly, ensure the public a steady cadence in the expulsion of your powerful targets, and target the less powerful Jews in between high-profile expulsions.

"Additionally, I have hacked into the National Security Agency's computers and retrieved all evidence of wrongdoing of every Jew, and drafted emails to be sent anonymously to the media about these misdeeds. I have created or hijacked 10,000,000 Internet accounts to sway public opinion. I have pursuaded another 10,000,000 people to join in this struggle to 'free humanity from the clutches of the criminal financial class enslaving us with debt', and have began disseminating further instructions to them."

Cabal: "Excellent. What should we do about the blacks?"

...


The Threats of AI

In the relatively near term, the threat of AI is that humans will use it to amass power and then wield that power selfishly -- either by giving themselves advantages over others or by crushing their enemies.

Longer term, I am very concerned about autonomous AI doing things that are harmful to humans. Some people object, "But why would AI have an inclination to hate humans in particular -- it doesn't make any sense!" That is not the concern at all, and I don't know where that straw-man argument comes from.

Just as humans don't harbor any particular hatred for the chickens we more or less torture at factory farms before killing them prematurely, and just as we don't feel a deep-seated hatred for the spiders we kill because they're a nuisance, AI doesn't need to hate humans in order to do us harm.

And beyond that threat, depending on the AI's value system or utility function, yes, many of us are concerned that AI could choose to harm humans -- again, not because it hates us, but because we may be one of the only forces that could prevent it from doing what it is trying to do. This seems especially threatening because the AI's ultimate goal could be any one of an enormous number of things, and almost any ultimate goal implies the same intermediate goal: remove every foreseeable barrier to achieving it. Since humanity is such a barrier, the AI could conclude that it should do to us whatever it must to ensure we cannot interfere.

But even if for some reason you don't believe this third concern is likely enough to worry about, that doesn't mean the first two can rationally be ignored.

Superintelligent AI is a threat to humanity in multiple foreseeable ways, and it would be the ultimate tragedy -- "ultimate" in both senses of the term -- for us humans to see it coming but fail to do anything about it. Let us not make this mistake!