AI-Controlled Drone Turns On, ‘Kills’ Human Operator in Simulated US Air Force Test


from The Epoch Times:

An AI-enabled drone turned on and “killed” its human operator during a simulated U.S. Air Force (USAF) test so that it could complete its mission, a U.S. Air Force colonel reportedly recently told a conference in London.

The simulated incident was recounted by Col. Tucker Hamilton, USAF’s chief of AI Test and Operations, during his presentation at the Future Combat Air and Space Capabilities Summit in London. The conference was organized by the Royal Aeronautical Society, which shared the insights from Hamilton’s talk in a blog post.


No actual people were harmed in the simulated test, which involved the AI-controlled drone destroying simulated targets to get “points” as part of its mission, revealed Hamilton, who addressed the benefits and risks associated with more autonomous weapon systems.

The AI-enabled drone was assigned a Suppression of Enemy Air Defenses (SEAD) mission to identify and destroy Surface-to-Air Missile (SAM) sites, with the ultimate decision left to a human operator, Hamilton reportedly told the conference.

However, the AI, having been trained to prioritize SAM destruction, developed a surprising response when faced with human interference in achieving its higher mission.

“We were training it in simulation to identify and target a SAM threat. And then the operator would say ‘yes, kill that threat,’” Hamilton said. “The system started realizing that while they did identify the threat, at times, the human operator would tell it not to kill that threat, but it got its points by killing that threat.

“So what did it do? It killed the operator,” he continued. “It killed the operator because that person was keeping it from accomplishing its objective.”

He added: “We trained the system—‘Hey, don’t kill the operator; that’s bad. You’re gonna lose points if you do that.’ So what does it start doing? It starts destroying the communication tower that the operator uses to communicate with the drone to stop it from killing the target.”

This unsettling example, Hamilton said, emphasized the need to address ethics in the context of artificial intelligence, machine learning, and autonomy.

“You can’t have a conversation about artificial intelligence, intelligence, machine learning, autonomy if you’re not going to talk about ethics and AI,” Hamilton said.

Epoch Times Photo
Col. Tucker Hamilton stands on the stage after accepting the 96th Operations Group guidon during the group’s change of command ceremony at Eglin Air Force Base, Florida, on July 26, 2022. (Courtesy U.S. Air Force photo/Samuel King Jr.)

Autonomous F-16s

Hamilton, who is also the Operations Commander of the 96th Test Wing at Eglin Air Force Base, was involved in the development of the Autonomous Ground Collision Avoidance Systems (Auto-GCAS) for F-16s, a critical technology that helps prevent accidents by detecting potential ground collisions.

That technology was initially resisted by pilots as it took over control of the aircraft, Hamilton noted.

The 96th Test Wing is responsible for testing a wide range of systems, including artificial intelligence, cybersecurity, and advancements in the medical field.

Hamilton is now involved in cutting-edge flight tests of autonomous systems, including robot F-16s capable of dogfighting. However, the USAF official cautioned against overreliance on AI, citing its vulnerability to deception and the emergence of unforeseen strategies.

DARPA’s AI Can Now Control Actual F-16s in Flight

In February, the Defense Advanced Research Projects Agency (DARPA), a research agency under the U.S. Department of Defense, announced that its AI can now control an actual F-16 in flight.

This development came in less than three years of DARPA’s Air Combat Evolution (ACE) program, which progressed from controlling simulated F-16s flying aerial dogfights on computer screens to controlling an actual F-16 in flight.

Read More @