Humans have a predilection for doomsday predictions. Witness the much-hyped technological terror of the Y2K, or year 2000, bug, which failed to crash critical systems as projected, or the apocalyptic prophecies tied to the end of the Mayan long-count calendar, made infamous by the movie “2012,” a year that ushered in no global cornucopia of catastrophes.
By some counts, this century has already marked the passage of 18 religious doomsdays, with no Earth-shattering “KABOOM!” in sight. So, how and when will the world end? Some argue that AI will kill humanity; others, that a global economic collapse will do in humankind through food shortages and starvation.
NASA has the end-time prediction down to a science.
Here’s the bad news: Earth is going to run out of oxygen and everyone will die.
Researchers with NASA and Japan’s Toho University ran 400,000 supercomputer simulations to determine what happens as the sun approaches the end of its life. The sun needs to be only 10% brighter before it heats Earth so much that green plants die off, sending oxygen levels plummeting. By that time, the air will hold 10 times more methane and only 18% to 19% oxygen, below OSHA’s 19.5% minimum for breathable air.
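As a rough sanity check on those figures, and emphatically not NASA’s actual simulation, the oxygen comparison is simple arithmetic. In the sketch below, the 20.9% present-day value and OSHA’s 19.5% oxygen-deficiency threshold are standard reference numbers, not outputs of the study:

```python
# Quick arithmetic on the article's numbers; not NASA's simulation.
# Today's air is about 20.9% oxygen; OSHA treats anything under 19.5%
# as an oxygen-deficient atmosphere for workers.
O2_TODAY = 20.9               # percent oxygen today (reference value)
OSHA_FLOOR = 19.5             # percent; OSHA oxygen-deficiency threshold
PROJECTED_O2 = (18.0, 19.0)   # article's projected end-state range, percent

for o2 in PROJECTED_O2:
    drop = O2_TODAY - o2      # how far below today's atmosphere
    below = OSHA_FLOOR - o2   # how far below the workplace-safety floor
    print(f"{o2:.1f}% oxygen: {drop:.1f} points below today, "
          f"{below:.1f} points under OSHA's {OSHA_FLOOR}% floor")
```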
Now the good news. You will still have to file your taxes this year.
Maybe that’s not good news for some, but the better news is that NASA predicted that the world will end in about 500 million years. That’s when the last humans might look up at a fatter, redder sun and draw their last breaths of hot, hot, thin air.
But hope remains for humanity, astronomer John Debes of the Space Telescope Science Institute, on the Johns Hopkins University campus in Baltimore, told The Baltimore Sun.
“I think if something like the Habitable Worlds Observatory or [James Webb Space Telescope] can discover habitable planets around [nearby] stars, we might know how common Earth-like planets are,” Debes said. “If we can make it to a half-billion years in the future as a species, we have plenty of time to figure out how to survive a little stellar evolution.”
AI arms race
If NASA says we have 500 million years left, others are weighing threats much nearer in time, in which a superintelligent AI consumes and destroys Earth and even the entire solar system.
It may just come down to this: “If Anyone Builds It, Everyone Dies,” the title of a book by Eliezer Yudkowsky and Nate Soares published in September 2025.
“In a scenario where AI starts efficiently harvesting energy and matter to solve their objectives, there’s just probably no room for humans,” Soares told The Sun. “The thing that we can confidently predict is that happy, healthy, free humans are not the most efficient way to solve almost any objective AI might want to pursue. … It’s not like the AIs go out of the way to kill us, any more than we go out of our way to kill the ants underneath the skyscraper we’re building.”
The authors’ concerns may be shared by the 58% of Marylanders who fear AI poses a threat to humanity. Even the AI industry admits its creations might kill us all. In 2023, more than 350 artificial intelligence executives and experts, including Soares and Yudkowsky, signed a one-sentence statement from the Center for AI Safety: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”
Since then, the race to build ever-more-intelligent AIs has only intensified, with an estimated $1.5 trillion in investment in 2025 and banking firm Goldman Sachs predicting as much as $500 billion in 2026.
Humans are safe until the AI learns to improve itself, and then its intelligence will quickly grow beyond human comprehension, Soares told The Sun. His book goes on to illustrate several creative scenarios of AI domination. Exactly how it defeats humanity is a hard call to make, he said. The easy call is that it will win like a chess grandmaster playing against a child.
Clara Collier, editor of the magazine Asterisk, is not convinced. Collier writes that the authors breeze past justifying their account of the danger on the way to foregone conclusions.
“They are the conclusions of a very specific set of beliefs,” she writes, “for example, that danger will come in the form of a single superintelligent AI, which can’t be monitored or countered by other systems. … The book spends whole chapters unpacking the motivations of future superintelligent AIs but devotes very little space to justifying its own account of how those AIs will be created.”
Soares is having none of it.
“It feels to me like we’re in a car hurtling toward a cliff,” he said. “And I’m saying, ‘Stop the car. We’ll die.’ And people are saying, ‘Oh, [co-author] Nate’s overconfident. Maybe we’ll just break all our bones and be paralyzed from the neck down.’ Okay, guys, like, can we stop the car?”
They wrote the book, Soares said, to raise awareness in the hope that humanity will stop the AI arms race.
“Imagine for a moment the perspective of someone in 1952 when both the US and the Soviets clearly had the bomb,” he explained. “Those people could say, well, there’s no way to stop it, we’ve tried before, the juggernaut is too strong. That was how it looked, given all of history up until that point. But humanity realized that nukes were a different ball game and said, Actually, we’re going to stop with the total war now, and that has held for 80 years.”
Is a global economic disaster around the corner?
You might be able to set the AI problem aside, because some are sounding the alarm that economic collapse will lead to mass starvation and kill just about everyone. What would that look like? If paying $7 or $10 for a dozen eggs seemed bad, imagine paying $50 or $100, or finding no eggs at all because the entire supply chain has broken down.
That’s the scenario argued by the Club of Rome’s 1972 bestseller, “The Limits to Growth.” The book predicted that, by today, society would enter an era of ecological and economic overreach, ushering in a period of global collapse in which humanity could no longer feed its billions. The Swiss-based think tank used a computer simulation to map the interactions among population growth, food production, industrial production, pollution and consumption of natural resources.
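For a sense of what that kind of model does, here is a minimal sketch of an overshoot-and-collapse feedback loop. It is emphatically not the book’s actual World3 model, which couples dozens of variables; every rate constant below is an illustrative assumption, chosen only to show how growth, resource depletion and pollution can feed back on one another:

```python
# A toy overshoot-and-collapse loop in the spirit of "The Limits to Growth."
# All constants are illustrative assumptions, not figures from the book.

def step(pop, resources, pollution):
    """Advance one year: output draws down resources and emits pollution,
    while scarcity and pollution drag population growth back down."""
    output = pop * min(1.0, resources)                  # output throttled by scarcity
    resources = max(0.0, resources - 0.0004 * output)   # nonrenewable draw-down (assumed rate)
    pollution = (pollution + 0.0002 * output) * 0.99    # accumulation minus slow absorption (assumed)
    growth = 0.02 * min(1.0, resources) - 0.01 * pollution
    return pop * (1.0 + growth), resources, pollution

pop, resources, pollution = 1.0, 1.0, 0.0               # normalized 1900 starting values
for year in range(1900, 2101):
    pop, resources, pollution = step(pop, resources, pollution)
    if year % 25 == 0:
        print(year, round(pop, 2), round(resources, 2), round(pollution, 2))
```

Run over two centuries, the toy population climbs while the resource stock quietly drains; growth then stalls and reverses once scarcity and pollution outweigh it, the same qualitative arc as the book’s “Business as Usual” run.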
Most critics panned it as a bunch of pessimistic hogwash. At their head was Julian L. Simon, who taught business administration at the University of Maryland, College Park, before his death in 1998.
“The more we use, the better off we become,” he argued, “and there’s no practical limit to improving our lot forever. Indeed, throughout history, new tools and new knowledge have made resources easier and easier to obtain.”
Humanity overcomes problems of scarcity through ingenuity — “the ultimate resource” that makes all other resources plentiful, Simon wrote in his 1981 book, “The Ultimate Resource.”
Critics like Simon miss the point, said Harvard graduate and sustainability researcher Gaya Herrington. History shows those ingenious solutions just feed the beast of profit and growth.
That means ingenuity rarely does much to lower the cost of eggs, for instance, or make the poultry industry pollute less, or pay the farmer a living wage.
“Human ingenuity is very efficient, and it absolutely has to be part of the solution,” she said. “The thing is that any technology will serve the ultimate goal of the system that it’s part of. That’s why with all the technological innovation that we’ve seen, a lot of it has gone toward getting more growth,” instead of reducing consumption.
The “Limits to Growth” model has been revisited several times, most recently in 2021 by Herrington. She and others found that global civilization is tracking the book’s “Business as Usual” scenario in lockstep, much as predicted decades ago.
“You can go past the carrying capacity of the ecosystem, but you cannot stay there indefinitely,” Herrington told The Sun. “Ultimately, infinite growth in a finite world is impossible.”
So how long do we have?
Fourteen years.
That’s right, in the “Business as Usual” scenario, society has until 2040 before the collapse hits home. But Herrington said that other scenarios in the book reveal a path to a better future.
“One of them is that the writers assume that at some point, close to the peak, society changes its priority,” she explained, “and they redirect all the resources to education, health care and natural protection. Then there’s no steep decline.”
Have a news tip? Contact Karl Hille at 443-900-7891 or khille@baltsun.com.