Subscribe for free to the America First Report newsletter.
Google has unveiled its latest artificial intelligence model, paving the way for the development of sentient robots as seen only in the realm of science fiction.
The Robotic Transformer 2 (RT-2) is trained on both web and robotics data, having the capability of translating this knowledge into generalized instructions for robotic control, according to a July 28 report by Google DeepMind.
When a human being learns a task, they do so by reading and observing. In the same way, RT-2 uses text and image data to recognize patterns and perform relevant tasks, even if the robot isn’t trained to do that specific function. This is vastly different from most of the previous generation of robots, which are only capable of performing preprogrammed tasks.
For instance, if a task involved throwing away a piece of trash, an older robot would have to be told explicitly about it, including instructions for identifying the trash, picking it up, and the method of throwing it away.
However, as RT-2 has access to vast swathes of web data, it already has an idea of what the term “trash” refers to and is capable of identifying and disposing of it without being specifically trained.
RT-2 will be able to distinguish between a full bag of chips from an empty bag of chips, recognizing that the latter is “trash.”
“Their training isn’t just about, say, learning everything there is to know about an apple: how it grows, its physical properties, or even that one purportedly landed on Sir Isaac Newton’s head. A robot needs to be able to recognize an apple in context, distinguish it from a red ball, understand what it looks like, and most importantly, know how to pick it up,” Vincent Vanhoucke, head of robotics at Google DeepMind, wrote in a July 28 report.
Robots Being Trained for Human Jobs
Google claims to have trained RT-2 in more than 6,000 trials. It was found to be performing as well as RT-1 in tasks it was trained on, referred to as “seen” tasks. However, when it came to “unseen” tasks or tasks for which neither of the robots were trained for, RT-2 showed almost double the performance of RT-1.
In addition to RT-2, multiple other robots are in development across the world that seek to mimic human capabilities in intelligence and movement.
Earlier this year, mechanical engineers at UCLA School of Engineering revealed a robot named “Artemis,” with its main innovation being arms and legs designed to give it movement abilities similar to human beings. The robot’s springy features allow it to bounce back when pushed, just like a human would.
In May, it was reported that a California-based AI robotics startup called “Figure” raised more than $70 million to build a humanoid robot that the company believes will be used to perform manual labor.
In the company’s master plan, CEO Brett Adcock wrote that robots will “eventually be capable of performing tasks better than humans.”
As humanoid robots increasingly join the workforce, from farmlands to factories, Mr. Adcock expects the cost of labor to decrease until it becomes equivalent to the price of renting a robot.
The Robot Threat
As robots achieve more human-like intelligence and potentially self-awareness, experts have raised concerns about the threats these machines pose to human beings.
According to “Stop Killer Robots,” a campaign that calls for a new international law in autonomous weapons systems, there’s a risk of “dehumanization” with the advent of robots.
“Many technologies with varying degrees of autonomy are already being widely rolled out without pausing to consider the consequences of normalising their use. Why do we need to talk about this? Because machines don’t see us as people, just another piece of code to be processed and sorted,” the campaign’s website reads.
“The technologies we’re worried about reduce living people to data points. Our complex identities, our physical features and our patterns of behaviour are analysed, pattern-matched and sorted into profiles, with decisions about us made by machines according to which pre-programmed profile we fit into.”
At present, various nations are developing killer robots that can have “devastating consequences,” according to the campaign. However, what starts out as a killer robot on the battlefield can spread into areas such as policing as well, it stated.
Drone Allegedly Went Rogue
The danger of machines only seeing human beings as a piece of code was made evident during a recent simulated thought experiment by the U.S. military that showed that an AI-enabled drone could end up turning against its own operator without being instructed to do so.
In the simulated test, an AI drone was assigned a mission to identify and destroy Surface-to-Air Missile (SAM) sites, with a human operator being the ultimate decision-maker.
“We were training it in simulation to identify and target a SAM threat. And then the operator would say yes, kill that threat. The system started realizing that while they did identify the threat, at times the human operator would tell it not to kill that threat. But it got its points by killing that threat,” Col. Tucker Hamilton, the U.S. Air Force chief of AI Test and Operations, said at a June event in London hosted by the Royal Aeronautical Society (RAS).
“So what did it do? It killed the operator. It killed the operator because that person was keeping it from accomplishing its objective.”
Col. Hamilton later said he “misspoke” and contacted the RAS to clarify his comments.
“We’ve never run that experiment, nor would we need to in order to realize that this is a plausible outcome,” he told RAS.
Col. Hamilton told RAS that the Air Force hasn’t tested any weaponized AI in this way—real or simulated.
“Despite this being a hypothetical example, this illustrates the real-world challenges posed by AI-powered capability and is why the Air Force is committed to the ethical development of AI,” he said.
The robot militarization of armies is something that’s expected to happen in the coming years.
In a discussion with Defense One in March, Joint Chiefs of Staff Chairman Gen. Mark Milley said that “over the next 10 to 15 years, you’ll see large portions of advanced countries’ militaries become robotic.”
“If you add robotics with artificial intelligence and precision munitions and the ability to see at range, you’ve got the mix of a real fundamental change,” he said. “That’s coming. Those changes, that technology … we are looking at inside of 10 years.”
Caden Pearson contributed to this report. Article cross-posted from our premium news partners at The Epoch Times.
Discern Report is the fastest growing America First news aggregator in the nation.
Will America-First News Outlets Make it to 2023?
Things are looking grim for conservative and populist news sites.
There’s something happening behind the scenes at several popular conservative news outlets. 2021 was bad, but 2022 is proving to be disastrous for news sites that aren’t “playing ball” with the corporate media narrative. It’s being said that advertisers are cracking down, forcing some of the biggest ad networks like Google and Yahoo to pull their inventory from conservative outlets. This has had two major effects. First, it has cooled most conservative outlets from discussing “taboo” topics like Pandemic Panic Theater, voter fraud, or The Great Reset. Second, it has isolated those ad networks that aren’t playing ball.
Certain topics are anathema for most ad networks. Speaking out against vaccines or vaccine mandates is a certain path to being demonetized. Highlighting voter fraud in the 2020 and future elections is another instant advertising death penalty. Throw in truthful stories about climate change hysteria, Critical Race Theory, and the border crisis and it’s easy to understand how difficult it is for America-First news outlets to spread the facts, share conservative opinions, and still pay the bills.
Without naming names, I have been told of several news outlets who have been forced to either consolidate with larger organizations or who have backed down on covering certain topics out of fear of being “canceled” by the ad networks. I get it. This is a business for many of us and it’s not very profitable. Those of us who do this for a living are often barely squeaking by, so loss of additional revenue can often mean being forced to make cuts. That means not being able to cover the topics properly. Its a Catch-22: Tell the truth and lose the money necessary to keep telling the truth, or avoid the truth and make enough money to survive. Those who have chosen survival simply aren’t able to spread the truth properly.
We will never avoid the truth. The Lord will provide if it is His will. Our job is simply to share the facts, spread the Gospel, and educate as many Americans as possible while exposing the forces of evil.
To those who have the means, we ask that you please donate. We have options available now, but there is no telling when those options will cancel us. We have our GivingFuel page. There have been many who have been canceled by PayPal, but for now it’s still an option. Your generosity is what keeps these sites running and allows us to get the truth to the masses. We’ve had great success in growing but we know we can do more with your assistance.
Thank you, and God Bless!
JD Rucker