AI gone rogue OR programmed that away or it’s now sentient enough??? I believe that it was made that way…

AI-driven US military drone ‘kills’ its human to finish mission

According to an account of one US Colonel at the Royal Aeronautical Society in London, one AI drone independently “killed” its operator for stopping it from firing.


At this year’s Royal Aeronautical Society summit in London, an American Colonel gave a worrying account of a recent AI drone test in the United States. According to the Colonel, Col. Tucker “Cinco” Hamilton, an AI-controlled drone seemingly took it upon itself to “kill” its human operator for obstructing its mission objectives.

It’s unclear when this test took place or what type of simulated environment was utilized, whether it was completely virtual or semi-live/constructive.

Col. Tucker told delegates at the summit that “one simulated test saw an AI-enabled drone tasked with a SEAD mission to identify and destroy SAM sites, with the final go/no go given by the human. However, having been ‘reinforced’ in training that destruction of the SAM was the preferred option, the AI then decided that ‘no-go’ decisions from the human were interfering with its higher mission – killing SAMs – and then attacked the operator in the simulation.”

“We were training it in simulation to identify and target a SAM threat. And then the operator would say yes, kill that threat,” Col. Tucker explained. Col. Tucker, head of the US Air Force’s AI Test and Operations, explained that “the system started realizing that while they did identify the threat, at times the human operator would tell it not to kill that threat, but it got its points by killing that threat. So what did it do? It killed the operator. It killed the operator because that person kept it from accomplishing its objective.”

In response, the team added new parameters to the drone’s AI to prevent that from happening again. “Hey, don’t kill the operator — that’s bad,” Tucker explained. However, this didn’t stop the drone from apparently going rogue again.

“So what does it start doing? It starts destroying the communication tower the operator uses to communicate with the drone to stop it from killing the target,” Hamilton said. This is worrying, but concerns may be overblown. In a statement to Insider, Air Force spokesperson Ann Stefanek denied that any such simulation has taken place. “The Department of the Air Force has not conducted any such AI-drone simulations and remains committed to the ethical and responsible use of AI technology,” Stefanek said. “It appears the colonel’s comments were taken out of context and were meant to be anecdotal,” he added.

The news of the test, although contested as genuine, intensifies concerns that AI technology could introduce a violent new era in warfare. Combining machine learning with the automation of tanks and artillery may result in the loss of lives of military personnel and innocent civilians if adequate precautions and controls are not put in place before deploying them for real.


Wake up folks ! Although I do not like calling it Skynet,,, but yes if that’s so it’s here,,, seems confirming my speculations about an all out the so called alien war !!!


Leave a comment

Your email address will not be published. Required fields are marked *

This site uses Akismet to reduce spam. Learn how your comment data is processed.