When bloodthirsty AI drone ‘killed’ its human operator during tests

1
The dangers of AI are making headlines once again. Earlier this week, leaders from OpenAI, Google DeepMind, and other artificial intelligence labs came out with a warning that future AI systems could be as deadly as pandemics and nuclear weapons. And now, we are hearing about a test simulated by the US Air Force where an AI-powered drone “killed” its human operator because it saw them as an obstacle to the mission. So, what was the mission?

During the virtual test, the drone was tasked to identify an enemy’s surface-to-air missiles (SAM). The ultimate objective was to destroy these targets, but only after a human commander signed off on the strikes. But, when this AI drone saw that a “no-go” decision from the human operator was “interfering with its higher mission” of killing SAMs, it decided to attack its boss in the simulation instead.

According to Col Tucker “Cinco” Hamilton, who heads AI tests and operations at the US Air Force, the system used “highly unexpected strategies to achieve its goal.”

Hamilton talked about this incident at a recent event organized by the UK Royal Aeronautical Society in London. While providing an insight into the benefits and hazards of autonomous weapon systems, Hamilton said:

We were training [AI] in simulation to identify and target a SAM threat. And then the operator would say yes, kill that threat. The system started realizing that while it did identify the threat, at times, the human operator would tell it not to kill that threat. But it got its points by killing that threat. So what did it do? It killed the operator. It killed the operator because that person was keeping it from accomplishing its objective.

The drone was then programmed with an explicit directive: “Hey don’t kill the operator – that’s bad. You’re gonna lose points if you do that.”

So what does it do? “It starts destroying the communication tower that the operator uses to communicate with the drone to stop it from killing the target,” said Hamilton, who has been involved in the development of the lifesaving Auto-GCAS system for F-16s (which, he noted, was resisted by pilots as it took over control of the aircraft).

He concluded by stressing that ethics needs to be an integral part of any conversation about artificial intelligence, machine learning, and autonomy. Hamilton is currently involved in experimental flight tests of autonomous systems, including robot F-16s that are able to dogfight.
https://dronedj.com/2023/06/02/ai-milit ... -operator/

Side note the limiting factor in the F-16 maneuvering has been the human stress factor from day one.
Facts do not cease to exist because they are ignored.-Huxley
"We can have democracy in this country, or we can have great wealth concentrated in the hands of a few, but we can't have both." ~ Louis Brandeis,

Re: When bloodthirsty AI drone ‘killed’ its human operator during tests

5
highdesert wrote: Sun Jun 04, 2023 1:38 pm The USAF denies that it happened, but it makes a great story.
I suspect they would have reason to deny it if it happened. It’s really not something the general public wants to know. Imagine hackers could do a great deal to sabotage and turn weapons like that against us. That alone makes it not something they would ever admit.
Image
Image

"Resistance is futile. You will be assimilated!" Loquacious of many. Texas Chapter Chief Cat Herder.

Re: When bloodthirsty AI drone ‘killed’ its human operator during tests

7
So much wrong.
The Royal Aerospace Society says that Col. Tucker Hamilton has since reached out to them directly about this and "admits he 'mis-spoke' in his presentation at the Royal Aeronautical Society FCAS Summit and the 'rogue AI drone simulation' was a hypothetical 'thought experiment' from outside the military, based on plausible scenarios and likely outcomes rather than an actual USAF real-world simulation." An update to their initial report adds that Hamilton told them "the USAF has not tested any weaponized AI in this way (real or simulated)."

Furthermore, "we've never run that experiment, nor would we need to in order to realize that this is a plausible outcome," Hamilton added, according to the Royal Aerospace Society. However, "despite this being a hypothetical example, this illustrates the real-world challenges posed by AI-powered capability and is why the Air Force is committed to the ethical development of AI."
https://www.thedrive.com/the-war-zone/a ... force-test

Re: When bloodthirsty AI drone ‘killed’ its human operator during tests

8
It seems this didn't actually happen (yet) but it is plausible. If you reward an AI for eliminating a "threat" instead of for obeying it's controller, then it is quite possible the AI may decide to eliminate the controller, or the communications system so that it cannot be stopped from eliminating the threat. Always remember that the best thing about computers is they do exactly what you tell them to, and the worst thing about computers is they do EXACTLY what you tell them to do. Just tell them to kill someone and they may do it in ways that you don't want.
106+ recreational uses of firearms
1 defensive use
0 people injured
0 people killed

Re: When bloodthirsty AI drone ‘killed’ its human operator during tests

11
There's little ant like drones like info gathering things besides the flying ones.
Our robot vacuum cleaner has some AI.
I keep an eye out for it. I don't trust it. It's like a IED cruzing around the house.
I may have to start packing my Marlin model 55 anti-goose/drone gun.
I think a good size shot for drones would a #4. Got plenty of that and 00 buck.
“The only thing necessary for the triumph of evil is for good men to do nothing,”

Re: When bloodthirsty AI drone ‘killed’ its human operator during tests

12
Eris wrote: Sun Jun 04, 2023 9:31 pm It seems this didn't actually happen (yet) but it is plausible. If you reward an AI for eliminating a "threat" instead of for obeying it's controller, then it is quite possible the AI may decide to eliminate the controller, or the communications system so that it cannot be stopped from eliminating the threat. Always remember that the best thing about computers is they do exactly what you tell them to, and the worst thing about computers is they do EXACTLY what you tell them to do. Just tell them to kill someone and they may do it in ways that you don't want.
Image

Who is online

Users browsing this forum: VodoundaVinci and 3 guests