'The world is in peril' — 5 reasons why the AI apocalypse might be closer than you think
AI is causing problems and there are warning signs it's going to get worse
By Eric Hal Schwartz, TechRadar Features · Published 16 February 2026
There's been an endless parade of proclamations over the last few years about an AI golden age. Developers herald a new industrial revolution, and executives promise frictionless productivity and amazing breakthroughs accelerated by machine intelligence. Every new product seems to boast of its AI capability, no matter how unnecessary.
But that golden sheen has a darker edge. There are growing indications that the issues around AI technology are not a small matter to be fixed in the next update, but a persistent, unavoidable element of the technology and its deployment. Some of the concerns are born out of myths about AI, but that doesn't mean there's nothing to worry about. Even if the technology isn't scary, how people use it can be plenty frightening. And the solutions offered by AI's biggest proponents often seem likely to make things worse.
There have been events in the past few months that have hinted at something more destabilizing. None of them guarantees catastrophe on their own, but they don't evoke the optimism the fans of AI would like us to feel. They sketch a picture of a technology accelerating faster than the structures meant to guide it. If the apocalypse ever comes courtesy of artificial intelligence, they may be what we look back at as the first moments.
1. AI safety experts flee
This month, the head of AI safety research at Anthropic resigned and did so loudly. In a public statement, he warned that “the world is in peril” and questioned whether core values were still steering the company’s decisions. A senior figure whose job was to think about the long term and how increasingly capable systems might go wrong decided it was impossible to keep going. His departure followed a string of other exits across the industry, including founders and senior staff at xAI and other high-profile labs. The pattern has been difficult to ignore.
Resignations happen in tech all the time, of course, but these departures have come wrapped in moral concern. They have been accompanied by essays and interviews that describe internal debates about safety standards, competitive pressure, and whether the race to build more powerful models is outpacing the ability to control them. When the people tasked with installing the brakes begin stepping away from the vehicle, it suggests that the car may be accelerating in ways even insiders find troubling.
AI companies are building systems that will shape economies, education, media, and possibly warfare. If their own safety leaders feel compelled to warn that the world is veering into dangerous territory, that warning deserves more than a shrug.
2. Deepfake dangers
It's hard to argue there isn't an issue with AI safety when regulators in the United Kingdom and elsewhere find credible evidence of horrific misuse, such as reports that Grok on X had generated sexually explicit and abusive imagery, including deepfake content involving minors.