ABOUT THAT STORY ABOUT THE ROGUE BUT HARMLESS ARTIFICIAL INTELLIGENCE

0
568

by Joseph P. Farrell, Giza Death Star:

This story was spotted by many of you, but we’re going to credit R.D. and K.M. with our thanks for it since they were out ahead of the rest of the crowd in spotting it.  I include both their articles because one raises the alarm rather well, and the other tries to spin the alarm into “it was all just a harmless exercise taken out of context.”

Here’s the article that R.D. spotted:

TRUTH LIVES on at https://sgtreport.tv/

Now you’ll note that the first article gives a more-than-mildly-disturbing account of what happened:

Col Tucker ‘Cinco’ Hamilton, the Chief of AI Test and Operations, USAF, said: “We were training it in simulation to identify and target a SAM threat.

“And then the operator would say yes, kill that threat.

“The system started realizing that while they did identify the threat at times the human operator would tell it not to kill that threat, but it got its points by killing that threat.

“So what did it do? It killed the operator.

“It killed the operator because that person was keeping it from accomplishing its objective.

“We trained the system – ‘Hey don’t kill the operator – that’s bad. You’re gonna lose points if you do that.’

“So what does it start doing?

“It starts destroying the communication tower that the operator uses to communicate with the drone to stop it from killing the target,” he said.

I don’t know about you, but the way I’m understanding the sequence of events is this: (1) the artificial intelligence was “trained” or “programmed” to kill a specific kind of target, in this case, a Surface-to-Air-Missile (SAM) battery. (2) Once the target was acquired by the artificial intelligence, the human operator – let’s be “technical” here and call him the “systems administrator” (or sysadmin to give it the cursed abbreviation) – has to give final approval to the kill. (3) In an unspecified number of cases the systems administrator did not approve of the target. (4) At this juncture, the artificial intelligence “killed” the systems administrator, then went on to make the “kill”  on its selected target which the systems administrator had overriden. (5) Then the systems administrator, who did not die because, after all, it was all only a  simulation, reprogrammed the artificial intelligence not to kill the systems administrator if he overrode the target selection. (6) At this juncture, the artificial intelligence targeted the communications system itself by which the systems adiministrator overrode the artificial intelligence’s target selection, breaking the communications link, and making it possible for the artificial intelligence to go ahead and kill the target anyway.

Before I share my precautionary rant of the day, let’s look at the second version of the story, shared by K.M.:

Air Force Says Killer Drone Story Was ‘Anecdotal’, Official’s Remarks Were ‘Taken Out Of Context’

Now, according to this version of the story, the simulation both did, and did not, take place. Feast your eyes on this masterpiece of spin and obfuscation:

U.S. Air Force Col. Tucker Hamilton, at a conference in May, appeared to recount an experiment in which the Air Force trained a drone on artificial intelligence that eventually turned on its operator; however, the Air Force has since denied the simulation actually took place.

Hamilton described a scenario at a summit hosted by the United Kingdom-based Royal Aeronautical Society in which Air Force researchers trained a weaponized drone using AI to identify and attack enemy air defenses after receiving final mission approval from a human operator. But when an operator told the drone to abort a mission in a simulated event, the AI instead turned on its operator and drove the vehicle to kill the operator, underscoring the dangers of the U.S. military’s push to incorporate AI into autonomous weapons systems, he added.

However, the Air Force said the simulation did not actually occur.

“The Department of the Air Force has not conducted any such AI-drone simulations and remains committed to ethical and responsible use of AI technology,” Air Force Spokesperson Ann Stefanek told Fox News. “It appears the colonel’s comments were taken out of context and were meant to be anecdotal.”

Hamilton also said the experiment never took place, and that the scenario was a hypothetical “thought experiment.”

“We’ve never run that experiment, nor would we need to in order to realise that this is a plausible outcome,” Hamilton told the Royal Aeronautical Society.

On the face of it, I’d like to believe this version of the story.  If this were pre-911 America, when the population was not nearly so dumbed-down as now, nor the government not nearly so corrupt, and inept and practiced in downright lying and evil as now, I’d be inclined to believe this explanation.  (Note what I just said: I just said the governments of Reagan, Bush the First, and Clinton – with their savings-and-loan scandals, Iran-Contra, Ruby Ridge, Waco, Oklahoma City Bombings, &c. – were models of probity and deep thought compared to what we’ve had in this, the twenty-worst century.) In short, I’d like to think the Air Force would not be so stupid as to conduct such an experiment in reality, nor the government so corrupt and evil as to condone it.

But I’m sorry, I’m not buying it. The enstupidization of the country and corruptirottification of the federal government has metastasized to the point that I fear the case is now terminal.  No is it all that reassuring that the Air Force denies conducting any such test in any other form than a thought experiment. This is meant to reassure, but it doesn’t. It says nothing about all the research institutes, corporations, foundations, and just plain old fashioned gangs, that might be doing so.

And besides, there’s one final, inescapable fact about all such highly technical engineering projects: sooner or later, and usually sooner rather than later, such projects move out of the “thought experiment” and “sketches on the back of envelopes” stage, and into the actual prototyping and testing stage, the better in the long run that project will be. Indeed, that is the whole point of prototyping and testing, to find and catch flaws in the system or its architecture.  Add this factor to the context of corruption and stupidity prevailing in the Swamp and all its institutions, and yea, I can believe the event actually occurred.

Even if it didn’t, someday, something very much like it may really happen, for even as the thought experiment reveals, that rogue artificial intelligence may find some sort of “work around” its programming, and that will be particularly true if Elon Musk’s hypothesis that an artificial intelligence might actually summon or transduce some sort of intelligent entity into its circuitry.  Call that idea the “AI Transduction” or “Ai Possession” Hypothesis. Many people scoffed at Musk for that hypothesis. I’m not one of them. After all, “Lucifer” is the light-bearer, or, to put it in contemporary terms, the bearer of electromagnetic phenomena… like microcircuitry and electricity…

Read More @ GizaDeathStar.com