At last week’s SXSW conference, prominent transhumanist Eliezer Yudkowsky warned that unless the development of artificial general intelligence is halted immediately worldwide, humanity may be destroyed.
“We must stop everything,” Yudkowsky said during a panel titled “How to Make AGI (Artificial General Intelligence) Not Kill Everyone.”
“We are not ready,” he continued. “We do not have the technological capability to design a superintelligent AI that is polite, obedient and aligned with human intentions – and we are nowhere close to achieving that.”
Yudkowsky, founder of the Machine Intelligence Research Institute, has made similar comments in recent years, repeatedly warning that humanity must cease all work on AGI or face extinction.
In a 2023 article in Time magazine, Yudkowsky said that no current AGI project had a feasible plan to align AGI with the interests of humanity. […]
— Read More: allisrael.com