by Joseph P. Farrell, Giza Death Star:

Mr. J.B. sent this article along, and as one might imagine, it sparked some high octane speculation that I want to end this week’s blogs with. Artificial intelligence (AI) has been much in focus in recent years, with a number of prominent figures, like Mr. Musk, raising the alarm about its potentialities. More recently, just last week I blogged about Henry Kissinger’s own alarm about AI. In that blog, I pointed out that Musk’s version of the scenario is that it might actually transduce some entity or being of evil nature into its artificial neurons and circuits, what I called “The Devil Scenario.” I also speculated in that blog that perhaps the “elites” like Mr. Kissinger are afraid of the opposite scenario to Musk’s, one that does not get discussed very much, and that is the so-called “Angel” scenario, where an AI might “transduce” some entity that determines that the current globaloney crop of misfits, cultural Marxists, Darth Soroses and crony crapitalists are a threat to humanity, and… well, you know. Perhaps, I thought in that blog, the “elites” are seeing certain signs of that or a similar scenario, and they don’t like what they see. Either way, I’m still of the opinion that some developed form of AI is already here.

So what has this to do with Mr. J.B.’s article? Well, here’s the article, and you tell me:

Elon Musk, DeepMind founders, and others sign pledge to not develop lethal AI weapon systems

The opening three paragraphs say it all:

Tech leaders, including Elon Musk and the three co-founders of Google’s AI subsidiary DeepMind, have signed a pledge promising to not develop “lethal autonomous weapons.”

It’s the latest move from an unofficial and global coalition of researchers and executives that’s opposed to the propagation of such technology. The pledge warns that weapon systems that use AI to “[select] and [engage] targets without human intervention” pose moral and pragmatic threats. Morally, the signatories argue, the decision to take a human life “should never be delegated to a machine.” On the pragmatic front, they say that the spread of such weaponry would be “dangerously destabilizing for every country and individual.”

The pledge was published today at the 2018 International Joint Conference on Artificial Intelligence (IJCAI) in Stockholm, and it was organized by the Future of Life Institute, a research institute that aims to “mitigate existential risk” to humanity. The institute has previously helped issue letters from some of the same individuals, calling on the United Nations to consider new regulations for what are known as lethal autonomous weapons, or LAWS. This, however, is the first time those involved have pledged individually to not develop such technology.

Now, most of you probably know where I stand: do I think a machine should determine when or under what circumstances a human life should be taken? Well, as you might have gathered from yesterday’s blog, I have a big problem with “panels of ethics experts” deciding on baby-tinkering, much less juries or judges deciding capital punishment cases. As for the latter, in today’s corrupt court system, who would want to make that decision? I wouldn’t. As for a machine doing so?


But the article continues, and this is where it gets interesting, and lays the foundation for my high octane speculation of the day:

Paul Scharre, a military analyst who has written a book on the future of warfare and AI, told The Verge that the pledge was unlikely to have an effect on international policy, and that such documents did not do a good enough job of teasing out the intricacies of this debate. “What seems to be lacking is sustained engagement from AI researchers in explaining to policymakers why they are concerned about autonomous weapons,” said Scharre.

He also added that most governments were in agreement with the pledge’s main promise — that individuals should not develop AI systems that target individuals — and that the “cat is already out of the bag” on military AI used for defensive purposes. “At least 30 nations have supervised autonomous weapons used to defend against rocket and missile attack,” said Scharre. “The real debate is in the middle space, which the press release is somewhat ambiguous on.” (Emphases added)

Now, with all respect to Mr. Scharre, one would have to be a dunce not to know what the concern is about “autonomous weapons.” This is just more boilerplate and academic-sounding avoidance. But then comes that very revealing line “that individuals should not develop AI systems that target individuals.”

That just put the whole AI debate on a very different playing field and confirms a suspicion I’ve had about what is lurking behind all this “concern” about AI from “the elites”. Indeed, I have also blogged about this possibility before, namely, when we think of AI development, we tend to think of just one all-powerful, globe-encompassing malevolent (or beneficent) machine running it all. But there is nothing to prevent several AIs being developed, including AIs to defend against other AIs, or to take out Don Corleone’s opposition, in some updated version of The Godfather. This line is revealing because it is really suggesting that what the “elite” fears is not even my “Don Corleone” scenario, but rather, that individuals or groups of people will defend against AI by developing AI defenders — not one, but many AIs contending for domination.

Read More @