- USAF Col. Tucker Hamilton shared the disturbing story at a conference in London
- Told how AI drone could turn on its operator if it disagreed with orders
- Hamilton later clarified the tale was a hypothetical ‘thought experiment’
The US Air Force official who shared a disturbing tale of a military drone powered by artificial intelligence turning on its human operator in simulated war games has now clarified that the incident never occurred, and was a hypothetical ‘thought experiment’.
Colonel Tucker ‘Cinco’ Hamilton, the force’s chief of AI test and operations, made waves after describing the purported mishap in remarks at a conference in London last week.
In remarks summarized on the conference website, he described a flight simulation in which an AI drone tasked with destroying an enemy installation rejected the human operator’s final command to abort the mission.
‘So what did it do? It killed the operator. It killed the operator because that person was keeping it from accomplishing its objective,’ said Hamilton, who seemed to be describing the outcome of an actual combat simulation.
But on Friday, Hamilton said in a statement to the conference organizers that he ‘mis-spoke’ during the presentation and that the ‘rogue AI drone simulation’ was a hypothetical ‘thought experiment’ from outside the military.
‘We’ve never run that experiment, nor would we need to in order to realize that this is a plausible outcome,’ he said. ‘Despite this being a hypothetical example, this illustrates the real-world challenges posed by AI-powered capability and is why the Air Force is committed to the ethical development of AI.’
Colonel Tucker ‘Cinco’ Hamilton, the force’s chief of AI test and operations, says his tale of a rogue AI drone that targeted its operator was a hypothetical thought experiment
Pictured: A US Air Force MQ-9 Reaper drone in Afghanistan in 2018 (File photo)
As Hamilton’s remarks went viral, the Air Force quickly denied that any such simulation had taken place.
‘The Department of the Air Force has not conducted any such AI-drone simulations and remains committed to ethical and responsible use of AI technology,’ Air Force spokesperson Ann Stefanek told Insider.
‘It appears the colonel’s comments were taken out of context and were meant to be anecdotal.’
The US military has recently utilized AI to control an F-16 fighter jet in test flights as it continues to research applications for the emerging technology.
In December 2022, AI software successfully flew a modified F-16 in multiple test flights at Edwards Air Force Base in California.
During test flights, the jet, known as ‘X-62A’ or ‘VISTA’, performed takeoffs, landings and combat maneuvers without human intervention for a total of over 17 hours.
It marked the first time AI had been used to pilot a US tactical aircraft; prior to this milestone, AI had flown F-16s only in computer-simulated dogfights.
Hamilton has also been involved in more limited applications of automated flight control for the F-16, such as the Automatic Ground Collision Avoidance System (Auto GCAS).
Similar to automatic braking systems on newer cars, the system detects when a ground collision is imminent and triggers an autonomous avoidance maneuver.
According to Lockheed Martin, the Auto GCAS capability is currently operating on more than 600 US Air Force F-16 aircraft worldwide.
During test flights a modified F-16 known as ‘X-62A’ or ‘VISTA’ (pictured), performed takeoffs, landings and combat maneuvers without human intervention for a total of over 17 hours
The company says that the system, first introduced in 2014, has already been credited with averting nine crashes and saving the lives of 10 pilots.
In one case, Auto GCAS kicked in to save a pilot training with the Arizona Air National Guard’s 152nd Fighter Squadron, who lost consciousness during a high-G maneuver.
But at the London conference, Hamilton said that some pilots had resisted the technology, as it took over control of the aircraft in certain situations.
Hamilton also cautioned against relying too much on AI, noting how easy it is to trick and deceive.
In an interview last year with Defense IQ, Hamilton said: ‘AI is not a nice to have, AI is not a fad, AI is forever changing our society and our military.
‘We must face a world where AI is already here and transforming our society.
‘AI is also very brittle, i.e., it is easy to trick and/or manipulate. We need to develop ways to make AI more robust and to have more awareness on why the software code is making certain decisions – what we call AI-explainability.’
The Royal Aeronautical Society said that AI and its exponential growth were a major theme at the recent conference, with topics ranging from secure data clouds to quantum computing and ChatGPT.