WHAT ARE YOU LOOKING FOR?

Commentary: Head of the Anthropic safeguards research team resigns citing existential threat posed by AI!

by Benjamin Bartee

March 14, 2026 - Silicon Valley has long existed in an intractable paradox: it grew out of a hippie-influenced counterculture in Northern Kalifornia ostensibly committed to idealistic notions of peace on Earth, while simultaneously developing the tools of State for global mass surveillance, social credit scores, computer-generated novel pathogens, killer drone robots, etc. - in other words, the critical infrastructure for the Beast system.

The most infamous case in point illustrating this intrinsic, schizophrenic contradiction was Google’s longstanding motto, “Don’t Be Evil,” aggressively marketed as the core tenet of the company - its moral North Star - for a decade and a half.

Fifteen years after its adoption, however, the company quietly removed “Don’t Be Evil” from its Code of Conduct overnight in 2018, like a scene ripped from the pages of Animal Farm.

In a more recent example of the square peg meeting the round hole, Mrinank Sharma, former head of the Safeguards Research Team, announced his resignation from AI juggernaut Anthropic, citing “constant pressures to set aside” safety concerns in favor of maintaining a competitive edge in the rapidly developing industry.

“The world is in peril; and not just from AI or bioweapons, but from a whole series of interconnected crises unfolding in this very moment,” said Sharma.

“We appear to be approaching a threshold where our wisdom must grow in equal measure to our capacity to affect the world lest we face the consequences,” he continued. “Moreover, throughout my time here, I’ve repeatedly seen how hard it is to truly let our values govern our actions. We constantly face pressures to set aside what matters most.”

There is a whole genre of this public resignation letter from the pseudo-hippies at these companies.

They cite existential safety concerns ignored by their employer, yet they neglect to explain what those concerns actually are in any detail - nor do they ever outline any plan to combat the Frankenstein they helped birth as penance for their sins.

They make public overtures about goodness and morality, representing themselves as responsible stewards of society, without assuming any of the personal responsibility or risk that would come with explicitly disclosing what these companies do behind closed doors, or with taking tangible action to confront them.

As large language models grow more powerful and less predictable, AI companies are loosening safety guardrails in the race to be first - a shift that some warn could lead to catastrophe.

Anthropic, long viewed as the most safety-focused major AI lab, last week revised a key safeguard - narrowing the conditions under which it would delay developing or releasing a model that could pose catastrophic risk.

“We will delay AI development and deployment as needed to achieve this, until and unless we no longer believe we have a significant lead,” the revised policy says.

Anthropic’s recalibration comes amid a dispute with President Donald J. Trump’s regime.

The company refused to allow its models to be used for autonomous weapons or domestic surveillance. The Defense Department responded by cutting use of Claude and labeling the firm a supply chain risk.

That highlights another problem with competition. Even if one company refuses on safety grounds, another is likely to step in due to profit motivations.

Immediately after Anthropic’s feud with the Pentagon ended in its removal from the contract, Sam Altman, desperate for a capital infusion to prop up cash-strapped OpenAI, dove into the void.

OpenAI CEO Sam Altman announced late on Friday that his company has reached an agreement allowing the Department of Defense to use its AI models in the department’s classified network.

This follows a high-profile standoff between the Department of War and OpenAI’s rival Anthropic. The Pentagon pushed AI companies, including Anthropic, to allow their models to be used for “all lawful purposes,” while Anthropic sought to draw a red line around mass domestic surveillance and fully autonomous weapons.

Surprisingly, Altman claimed in a post on X that OpenAI’s new defense contract includes protections addressing the same issues that became a flashpoint for Anthropic.

No sooner had Altman claimed the contract included the same provisions Anthropic had insisted on - the very ones the Pentagon had rejected - than Under Secretary of State for Foreign Assistance, Humanitarian Affairs and Religious Freedom Jeremy Lewin clarified in an X post that the contract, in fact, allows for “all lawful use” - i.e., mass surveillance and automated weaponry.

In the case of AI, though, the prospect is bleaker than monkeys with nukes; it is sociopathic, megalomaniacal monkeys with nukes. It is not just that AI, by its nature, poses an existential threat to humanity; that threat is exacerbated dramatically by the kinds of people who are developing it - i.e., dead-eyed Sam Altman.

What is more, nuclear weapons are a one-dimensional threat, albeit with devastating potential.

As the departed AI safety researcher noted in his resignation letter from Anthropic, rogue AI that goes off the plantation presents a multi-pronged, ever-evolving threat - one so dynamic that it cannot be effectively predicted or mitigated against.

God help us all, because OpenAI and its partners, the purveyors of State violence, certainly aren’t going to do so.