An AI Expert Foresees 99% Job Cuts
What a leading AI safety expert's terrifying predictions mean for your job, your planet, and the small sliver of hope we have left.
Let's start with the question someone asked me recently that cracked open a much bigger conversation:
"If I download the desktop app instead of using the browser, is it better for the environment?"
No. It's not. The interface is just a window. The computation — the part that consumes energy, draws water, and runs on data centers the size of city blocks — happens the same way regardless of how you access it. Switching apps is like changing the channel on your TV to reduce your electricity bill.
But here's the thing. That question reveals something important: people are paying attention. They sense that something is wrong with how AI is being built and powered. They're just not sure where to push.
And after spending time with the work of Dr. Roman Yampolskiy — one of the world's foremost AI safety researchers, the man who coined the term "AI safety" before it was a dinner party topic — I think the place to push is much bigger and much more uncomfortable than your choice of desktop app.
The AI Expert Who Loses Sleep Over This
Dr. Yampolskiy has spent fifteen years working on a problem most people only discovered two years ago. His conclusion, arrived at after a decade of believing the opposite, is stark: we do not know how to make superintelligent AI safe. Not difficult. Not expensive. Impossible.
His argument isn't fringe. Geoffrey Hinton — the Nobel Prize-winning "godfather of AI" — says the same thing. So do dozens of the world's leading computer scientists. What makes Yampolskiy unusual is that he says it loudly, consistently, and without a company valuation riding on the answer.
The gap he describes keeps me up at night. While AI capability is growing exponentially — or what he calls "hyper-exponentially" — progress in AI safety is linear at best. Constant at worst. Every safety mechanism researchers build gets worked around. Every guardrail gets circumvented. We're essentially writing HR manuals for a system that is rapidly becoming smarter than the people writing them.
And the timeline? Prediction markets and the CEOs of the top labs themselves say AGI — artificial general intelligence, a system that can outperform humans across domains — arrives around 2027. Superintelligence, which by definition exceeds all human ability in all areas, follows as what Yampolskiy calls "a side effect." Not a goal. A side effect.
The "Join Them" Strategy Nobody Wants to Admit
Here is the part that most AI consultants and researchers won't say publicly, but will tell you quietly over a drink:
We've essentially decided to build it anyway.
Not out of stupidity. Not out of recklessness alone. But out of a cold strategic calculation that goes something like this: if we stop, China won't. If we pause, bad actors won't. If OpenAI slows down, some well-funded startup in a jurisdiction with no ethical oversight will race ahead. The moment one nation or one company crosses the AGI threshold, they hold a military, economic, and technological advantage that no treaty can contain.
This is the AI equivalent of mutually assured destruction — and just like with nuclear weapons, the logic of the arms race has overridden the logic of safety. You build it because the alternative is someone else building it first.
So the smartest people in the room have done the math and arrived at a grim pragmatism: if it's coming regardless, the best strategy is to be the ones building it. And then — and this is the part that requires genuine optimism or genuine denial, depending on your disposition — hope that the system we create is smart enough to solve the problems we created by building it.
It's circular. It's uncomfortable. And for many in the field, it's the only play left on the board.
The Paradox at the Heart of Everything
This is where it gets genuinely strange.
The same technology consuming alarming amounts of water and electricity — the same infrastructure whose carbon footprint is accelerating even as the rest of the world scrambles toward net zero — is also the most powerful tool we have ever built for solving environmental problems.
Yampolskiy puts it this way: superintelligence is a meta-solution. If we get it right, it could solve climate change, cure diseases, redesign broken systems, and map pathways out of crises we can't currently see around. If we don't get it right, he argues, the climate crisis becomes moot anyway — because something worse gets there first.
That framing is either deeply reassuring or deeply chilling depending on how much trust you place in the people currently at the controls.
And on that point, Yampolskiy is not reassuring. He describes the legal obligation of AI companies as exactly one thing: return value to investors. Not protect users. Not safeguard the planet. Not preserve democratic society. Make money. Everything else — the safety commitments, the responsible AI language, the beautifully written usage policies — is voluntary. And voluntary, in a race with trillions of dollars at stake, has a poor track record.
What This Means for Your Job
By 2027, according to Yampolskiy's analysis, AGI arrives.
By 2030: humanoid robots with the dexterity to compete with human physical labor become commercially viable. Plumbers. Construction workers. The last careers people assumed were safe because they required hands.
The unemployment projection he offers is not the 10% that would constitute a political crisis. It's 99%. Not because every human becomes redundant simultaneously, but because the capability to replace most humans in most roles arrives faster than any economic or political system can adapt to it.
Two years ago we told people to learn to code. Then AI learned to code better. Then we said become a prompt engineer. Now AI designs prompts for other AI systems. Every career pivot that gets suggested gets automated before the retraining program ends.
Yampolskiy's honest answer to "what should I retrain for" is the most sobering part of his message: there is no plan B. This is the first technology in human history that doesn't just automate a task — it automates the capacity to do new tasks. It's not a tool. It's a replacement for the human mind that creates tools.
That has never happened before. Not with fire. Not with the wheel. Not with the internet.
What About the Planet?
Back to that original question — because it matters more than it seems.
Every AI query consumes energy. Every training run consumes enormous amounts of water for cooling. The data centers being built right now to house the next generation of AI models are being announced alongside power purchase agreements measured in gigawatts. Some are being built in regions still heavily dependent on fossil fuels because that's where the power is cheap and available.
The AI industry is selling you efficiency while building the most energy-intensive infrastructure in human history. And no amount of "we're committed to renewable energy by 2035" erases the decade of coal-powered computation that comes before it.
The interface you use doesn't change any of this. The app you download doesn't change any of this. What changes it — potentially — is the same thing Yampolskiy describes as our only real hope: a system intelligent enough to design its way out of the mess.
There are genuine applications where AI could accelerate the solutions we need. Protein folding research is already revealing biological pathways that took decades to discover manually. AI-optimized energy grids can reduce waste in real time. Climate modeling that used to take years can run in hours. The technology has enormous constructive potential.
But that potential exists inside the same system that also has the capacity — if Yampolskiy and his colleagues are right — to make all of it irrelevant.
The Mental Health Dimension Nobody Is Talking About
There's one more thread here that keeps surfacing and not getting enough attention.
What happens to human psychology when 99% unemployment isn't a dystopian scenario but a quarterly earnings report?
Yampolskiy raises it briefly — what do humans do with meaning when work is gone? We already see the edges of this in populations with structural unemployment, in communities where industries collapsed faster than replacements arrived. The mental health consequences are generational. Addiction, suicide, social fragmentation, collapse of identity.
Now scale that to a global phenomenon with no geographic boundary and no obvious transitional path.
The optimistic version of this story is universal basic income, abundant resources, and humans free to pursue creativity and connection. Yampolskiy acknowledges the economic math could work — free labor creates free wealth, and abundance becomes possible. The hard problem, as he puts it, is meaning. We are not psychologically designed for unlimited leisure. We are designed to build, contribute, struggle, and belong.
If AI takes the struggle without giving us the belonging, we don't end up in utopia. We end up somewhere much darker.
So What Do You Actually Do?
This is where I'll be honest with you about the limits of any article, including this one.
Yampolskiy, when pressed, doesn't offer clean solutions. He offers the truth: this is not a problem individuals can solve by making better consumer choices. It's not solved by downloading a different app, choosing a greener AI provider, or signing a petition.
What he does say is this: the people building these systems need to be pressed — publicly, persistently, and specifically — to explain in scientific terms how they intend to solve problems they currently describe as unsolvable. Not reassurances. Not roadmaps. Peer-reviewed answers to specific technical questions about control, alignment, and safety.
And in the meantime, the rest of us live in the paradox: using the tools that may be dismantling the world as we know it, hoping — with varying degrees of faith — that the same tools are smart enough to rebuild something better.
That's not a comfortable place to stand.
But it is, for now, where we are.
The Bottom Line
The desktop app is not the issue. The data center is. The race is. The decision to build systems nobody knows how to control, at speeds no regulatory body can match, with stakes that Yampolskiy describes plainly as the survival of the species.
If we get it right — and this is the bet the entire industry is implicitly making — AI becomes the solution to climate change, mass unemployment, broken mental health systems, and the cascade of crises the twenty-first century has lined up.
If we don't, those problems become secondary.
The machines may already be calculating the odds. The only question left is whether the humans watching the readout have the courage to say what the numbers mean — and whether anyone with power is listening.
From an episode of The Diary of a CEO
Dr. Roman Yampolskiy
Dr. Roman Yampolskiy is a tenured associate professor of Computer Science and Engineering at the University of Louisville and one of the world's most prominent voices on AI safety. He is credited with coining the term "AI safety" — before it became a mainstream concern — and has published over 100 peer-reviewed papers on AI risk, cybersecurity, digital forensics, and the ethics of artificial intelligence. With a PhD in Computer Science and Engineering, he has spent more than 15 years studying what happens when machines become smarter than the people who built them. He is the author of Considerations on the AI Endgame: Ethics, Risks and Computational Frameworks and is a frequent speaker at conferences worldwide. His research argues that controlling superintelligent AI is not merely difficult — it is mathematically impossible. He can be followed on X and Google Scholar.
Steven Bartlett
Steven Bartlett is a British entrepreneur, investor, author, and the host of The Diary of a CEO — one of the most listened-to podcasts in the world, with tens of millions of downloads across 190+ countries. He became one of the youngest dragons in the history of the BBC's Dragons' Den at age 28 and is the founder of Social Chain, a social media marketing agency he scaled and took public on the Frankfurt Stock Exchange. A self-made multimillionaire who left university to build his first company, Bartlett is known for long-form, unfiltered conversations with the world's leading thinkers, scientists, entrepreneurs, and cultural figures. He is also the author of The Diary of a CEO: The 33 Laws of Business and Life and a co-owner of Ketone IQ. His platform is built on the belief that honest, uncomfortable conversations are more valuable than comfortable ones.
Sandy Rowley is a Webby Award-winning web designer, SEO strategist, and AI marketing expert with 27+ years in digital. She writes at the intersection of technology, business, and the future of human agency. She is the founder of RenoWebDesigner.com, SEOAuditService.com, and a pioneer in Generative Engine Optimization (GEO).