by Joseph P. Farrell, Giza Death Star:
This story was spotted by many of you, but we’re going to credit R.D. and K.M. with our thanks for it since they were out ahead of the rest of the crowd in spotting it. I include both their articles because one raises the alarm rather well, and the other tries to spin the alarm into “it was all just a harmless exercise taken out of context.”
Here’s the article that R.D. spotted:
TRUTH LIVES on at https://sgtreport.tv/
Now you’ll note that the first article gives a more-than-mildly-disturbing account of what happened:
Col Tucker ‘Cinco’ Hamilton, the Chief of AI Test and Operations, USAF, said: “We were training it in simulation to identify and target a SAM threat.
“And then the operator would say yes, kill that threat.
“The system started realizing that while they did identify the threat at times the human operator would tell it not to kill that threat, but it got its points by killing that threat.“So what did it do? It killed the operator.
“It killed the operator because that person was keeping it from accomplishing its objective.
“We trained the system – ‘Hey don’t kill the operator – that’s bad. You’re gonna lose points if you do that.’
“It starts destroying the communication tower that the operator uses to communicate with the drone to stop it from killing the target,” he said.
I don’t know about you, but the way I’m understanding the sequence of events is this: (1) the artificial intelligence was “trained” or “programmed” to kill a specific kind of target, in this case, a Surface-to-Air-Missile (SAM) battery. (2) Once the target was acquired by the artificial intelligence, the human operator – let’s be “technical” here and call him the “systems administrator” (or sysadmin to give it the cursed abbreviation) – has to give final approval to the kill. (3) In an unspecified number of cases the systems administrator did not approve of the target. (4) At this juncture, the artificial intelligence “killed” the systems administrator, then went on to make the “kill” on its selected target which the systems administrator had overriden. (5) Then the systems administrator, who did not die because, after all, it was all only a simulation, reprogrammed the artificial intelligence not to kill the systems administrator if he overrode the target selection. (6) At this juncture, the artificial intelligence targeted the communications system itself by which the systems adiministrator overrode the artificial intelligence’s target selection, breaking the communications link, and making it possible for the artificial intelligence to go ahead and kill the target anyway.