(Zero Hedge)—In his first formal audience as the newly elected pontiff, Pope Leo XIV identified artificial intelligence (AI) as one of the most critical matters facing humanity.
“In our own day,” Pope Leo declared, “the church offers everyone the treasury of its social teaching in response to another industrial revolution and to developments in the field of artificial intelligence that pose new challenges for the defense of human dignity, justice and labor.” He linked this statement to the legacy of his namesake Leo XIII’s 1891 encyclical Rerum Novarum, which addressed workers’ rights and the moral dimensions of capitalism.
His remarks continued the direction charted by the late Pope Francis, who warned in his 2024 annual peace message that AI – lacking human values of compassion, mercy, morality and forgiveness – is too perilous to develop unchecked. Francis, who passed away on April 21, had called for an international treaty to regulate AI and insisted that the technology must remain “human-centric,” particularly in applications involving weapon systems or tools of governance.
‘Existential Threat’
As concern deepens within religious and ethical spheres, similar urgency is resonating from the scientific community.
Max Tegmark, physicist and AI researcher at MIT, has drawn a sobering parallel between the dawn of the atomic age and the present-day race to develop artificial superintelligence (ASI). In a new paper co-authored with three MIT students, Tegmark introduced the concept of a “Compton constant” – a probabilistic estimate of whether ASI would escape human control. It’s named after physicist Arthur Compton, who famously calculated the risk of Earth’s atmosphere igniting from nuclear tests in the 1940s.
“The companies building super-intelligence need to also calculate the Compton constant, the probability that we will lose control over it,” Tegmark told The Guardian. “It’s not enough to say ‘we feel good about it’. They have to calculate the percentage.”
Applying this framework, Tegmark has calculated a 90% probability that a highly advanced AI would pose an existential threat.
The paper urges AI companies to undertake a risk assessment as rigorous as that which preceded the first atomic bomb test, where Compton reportedly estimated the odds of a catastrophic chain reaction at “slightly less” than one in three million.
Tegmark, co-founder of the Future of Life Institute and a vocal advocate for AI safety, argues that calculating such probabilities can help build the “political will” for global safety regimes. He also co-authored the Singapore Consensus on Global AI Safety Research Priorities, alongside Yoshua Bengio and representatives from Google DeepMind and OpenAI. The report outlines three focal points for research: measuring AI’s real-world impact, specifying intended AI behavior, and ensuring consistent control over systems.
This renewed commitment to AI risk mitigation follows what Tegmark described as a setback at the recent AI Safety Summit in Paris, where U.S. Vice President JD Vance dismissed concerns by asserting that the AI future is “not going to be won by hand-wringing about safety.” Nevertheless, Tegmark noted a resurgence in cooperation: “It really feels the gloom from Paris has gone and international collaboration has come roaring back.”