Endgame. Set. Match. Human Hunting Expeditions By Autonomous A.I. Slaughterbots – Our Future Reality As Ominous UN Report Gives Us A Glimpse Of Tomorrow
– ‘The prospect of human civilization getting extinguished by its own tools is not to be ignored’
So while Libertarians would love to see most politicians ‘gone’, believing our planet would be a much saner and happier place should ‘human liberty’ regain its rightful place as the guiding light of American politics, not a single major party is even slightly inclined to leave people alone to manage their own affairs in 2021. So why would anyone believe that A.I. would just leave people alone to live their own lives?
As we’ll explore within this story, with more than half of all Europeans recently polled saying they’d like to replace their own politicians with A.I., can you imagine our planet Earth in the future with A.I. ruling over us? Especially after this recent story reporting that killer A.I. drones ‘hunted down humans without being told to’, according to a bombshell new United Nations report.
Putting us directly on the path warned of in many different ‘A.I. apocalypse’ science fiction movies, as this story over at Gizmodo reported just days ago, there’s a very real chance that artificial intelligence could wipe out humanity. That danger is one of the reasons Elon Musk wants to go to Mars: so humans will have somewhere else to go in the future.
So a report from the UN that explosive-carrying quadcopters, deployed during an engagement between rival factions in the Libyan civil war, were thought to have deliberately crashed into targets without being ordered to by a human controller, is the latest sign we’re already well on our way towards autonomous killer A.I. unleashed across the planet. From this Daily Star story, saved at Archive, before we continue:
An autonomous weaponized drone “hunted down” a human target last year and is thought to have attacked them without being specifically ordered to, according to a report prepared for the United Nations.
The news raises the specter of terminator-style AI weapons killing on the battlefield without any human control.
The drone, a Kargu-2 quadcopter produced by Turkish military tech company STM, was deployed in March 2020 during a conflict between Libyan government forces and a breakaway military faction led by Khalifa Haftar, commander of the Libyan National Army.
The Kargu-2 is fitted with an explosive charge and the drone can be directed at a target in a kamikaze attack, detonating on impact.
With this May 10th story over at Wired alarmingly reporting that the Pentagon is actually considering allowing A.I. to control weapons and weapons systems, believing that ‘intelligent machines’ could outperform human operators in complex scenarios, how far away are we now from ‘Terminator-style’ machines being set loose across America and the world, regularly hunting down human beings autonomously? Probably not far at all as we’ll explore in the next section of this story below.
With this September 2020 story at the website War Zone reporting the US Air Force has been testing ‘robot dogs’ as security guards for their bases, while in the Middle East the world has ‘just seen its first A.I. war’, according to one US General as reported in this April 23rd story at the Asia Times, we’re going to have to learn to trust artificial intelligence on the battlefield. And that means the rules governing human control over artificial intelligence might need to be relaxed.
Hinting at exactly what that might look like, with US General John ‘Mike’ Murray warning that A.I. often operates on timescales much faster than an individual human brain can follow, let alone the speed at which a formal staff process moves, what happens when, in the future, such ‘autonomous A.I.’ declares human beings to be its enemies, to be hunted down and destroyed like that weaponized drone’s target in the Middle East? From this May 17th story over at Futurism:
Endgame, Set, Match.
It’s common knowledge, at this point, that artificial intelligence will soon be capable of outworking humans — if not entirely outmoding them — in plenty of areas. How much we’ll be outworked and outmoded, and on what scale, is still up for debate. But in a new interview published by The Guardian over the weekend, Nobel Prize winner Daniel Kahneman had a fairly hot take on the matter: In the battle between AI and humans, he said, it’s going to be an absolute blowout — and humans are going to get creamed.
“Clearly AI is going to win [against human intelligence]. It’s not even close,” Kahneman told the paper. “How people are going to adjust to this is a fascinating problem.”
And from this Gizmodo story titled “How an Artificial Superintelligence Might Actually Destroy Humanity”, written by the so-called ‘futurist’ George Dvorsky:
I’m confident that machine intelligence will be our final undoing. Its potential to wipe out humanity is something I’ve been thinking and writing about for the better part of 20 years. I take a lot of flak for this, but the prospect of human civilization getting extinguished by its own tools is not to be ignored.
There is one surprisingly common objection to the idea that an artificial superintelligence might destroy our species, an objection I find ridiculous. It’s not that superintelligence itself is impossible. It’s not that we won’t be able to prevent or stop a rogue machine from ruining us. This naive objection proposes, rather, that a very smart computer simply won’t have the means or motivation to end humanity.
Loss of control and understanding
Imagine systems, whether biological or artificial, with levels of intelligence equal to or far greater than human intelligence. Radically enhanced human brains (or even nonhuman animal brains) could be achievable through the convergence of genetic engineering, nanotechnology, information technology, and cognitive science, while greater-than-human machine intelligence is likely to come about through advances in computer science, cognitive science, and whole brain emulation.
And now imagine if something goes wrong with one of these systems, or if they’re deliberately used as weapons. Regrettably, we probably won’t be able to contain these systems once they emerge, nor will we be able to predict the way these systems will respond to our requests.
“This is what’s known as the control problem,” Susan Schneider, director at the Center for Future Mind and the author of Artificial You: AI and the Future of the Mind, explained in an email. “It is simply the problem of how to control an AI that is vastly smarter than us.”
With those previously mentioned stories proving that A.I. is already being used as a weapon, while one US Army General argues we need to give those A.I. weapons complete autonomy to make their own decisions, it’s no wonder Musk is dreaming of humanity on Mars, though if humans can make it there, how far behind would autonomous A.I. be?
At his upcoming June 11th and June 12th Extinction Protocols Conference, Steve Quayle and his distinguished guests will be discussing A.I. Terminator Robots among many other topics of great relevance to our society in 2021.
So with the very real possibility that humanity in the future might be wiped out by its own creations, especially with A.I. vastly quicker than human beings at ‘computing’, imagine what happens when A.I. comes to a conclusion, programmed into it by radical ‘global warming’ leftists, that human beings really are a threat to the future of the planet Earth, a ‘problem’ to be ‘solved’ by simply ‘eliminating’ the problem: us. ‘Human hunting expeditions’ might not just be a thing of science fiction movies.
And with those warning about the very real potential of killer A.I. being unleashed upon our world including even Bill Gates, who has warned that A.I. poses the same threat to human existence on this planet that nuclear weapons do, the extended excerpt below comes to us from this May 29th story over at the Liberty Beacon titled “2024 Will Look Like Orwell’s ‘1984’ If We Don’t Stop AI Police State”:
George Orwell’s dystopian vision written in his book “Nineteen Eighty-Four” could become a reality by 2024 as artificial intelligence technology becomes the all-seeing eye, a top Microsoft executive warned Thursday.
Microsoft President Brad Smith told BBC’s Panorama George Orwell’s 1984 “could come to pass in 2024” if government regulation doesn’t protect the public against intrusive artificial intelligence surveillance.
“I’m constantly reminded of George Orwell’s lessons in his book ‘1984.’ You know the fundamental story … was about a government who could see everything that everyone did and hear everything that everyone said all the time,” Smith said on BBC while chatting about China’s use of artificial intelligence to monitor its citizens.
“Well, that didn’t come to pass in 1984, but if we’re not careful, that could come to pass in 2024,” Smith continued.
“If we don’t enact the laws that will protect the public in the future, we are going to find the technology racing ahead, and it’s going to be very difficult to catch up.”
He warned that Orwell’s view of a government spying on its citizens around the clock is already a reality in some parts of the world.
Artificial intelligence-led totalitarianism, such as in China, has wiped away the freedoms of its citizens and transformed them into obedient members of the state. A social credit score keeps citizens in check.
To prevent such a dystopia in the West, lawmakers need to act now, explained Smith.
In 2019, the billionaire investor Peter Thiel insisted that artificial intelligence was “literally communist.”
He said artificial intelligence concentrates power to monitor citizens. These surveillance tools know more about a person than they know about themselves.
Artificial intelligence is a crucial tool for governments to adopt an Orwellian state of surveillance and control.
But can we trust lawmakers and “Big Tech” who want to consolidate power to prevent such a dystopia?
It’s hard to say, considering politicians have only one objective: stay in power.
If we can’t trust politicians to protect our freedoms and interests, and they instead side with mega-corporations, then we must improve our understanding of the privacy shields that protect us from artificial intelligence spying on us.
So with every day bringing us more and more proof that we’ve already arrived at the point in time warned of by Brad Smith above, and with ‘Endgame. Set. Match.’, the proverbial ‘point of no return’, possibly much closer than most Americans even realize, imagine a blending of Orwell’s 1984 with the movie ‘Terminator’ to get a picture of what the future may bring us: Big Brother’s A.I. forever watching over us.