Modi was playing politics by convening a global ‘AI Summit’, and perhaps didn’t care about the optics beyond that, but it may turn out this is the moment the summit will be remembered for.
Visual language is powerful, whether it goes right or wrong. On 19 Feb, India’s PM Modi got surprised tech leaders at his AI Summit to join in a ‘raised hands’ gesture. Sam Altman of OpenAI and Dario Amodei of Anthropic awkwardly raised fists instead, visualising not just their rivalry but the safety split in Big Tech (Altman stands to the right of Modi, and Amodei to the right of Altman).
On Feb 18, @Ric_RTP reported on X that OpenAI, Google and xAI had agreed to Pentagon demands to remove their safety restrictions for military use, but that Anthropic had refused:
“They don’t want Claude used to build fully autonomous weapons that fire without a human in the loop, and they don’t want it used to mass surveil American citizens”.
So Hegseth threatened to blacklist Anthropic as a “supply chain risk”, meaning no company could get US military contracts if it made any use of Anthropic’s model Claude.
@Ric_RTP concluded: “whatever Anthropic decides in the next few weeks, it sets the precedent for how much control AI companies actually have over their own technology.
Turns out the answer might be: none”.
In terms of setting up photo ops … I guess the lesson is: check that the participants are onside before you start, and think about it from the viewpoint of the media (and the alternative, unwanted visual messages it may generate) if you want to avoid unintended visualisations.
In terms of Big Tech and its ongoing internal culture wars, Anthropic is heavily outnumbered in the US, with the Trump administration maxing out its power play in values terms at home and abroad. But the US is not the world market, just the biggest single lump.
It’s long been the case that American corporates (and now the American administration) get angry when they conflate two different things: the fact that no other country or group of countries could force America to do something it didn’t want to do, and the idea that America can force the rest of the world to do anything it wants. Trump is testing that theory on coal v renewables at the moment.
There are several avenues by which Anthropic might yet emerge the winner in the safety-versus-forget-safety split, but it could equally well end up as the fall guy.
For those who are no longer on X, or never were, here’s what @Ric_RTP wrote in full:
——————————————————- X ————————————————
@Ric_RTP Feb 18
The Pentagon just threatened to BLACKLIST one of America’s most valuable AI companies.
Not Huawei or some Chinese chip maker…
It’s ANTHROPIC. The company behind Claude. $380 billion valuation.
And the reason is genuinely insane:
For months, the Pentagon has been pushing every major AI lab to remove their safety restrictions for military use.
The ask is simple: let us use your models for anything that’s technically legal.
Weapons development, intelligence collection, battlefield operations, mass surveillance of American citizens.
OpenAI said yes.
Google said yes.
xAI said yes.
Anthropic said no.
Not to everything tho. They were willing to negotiate.
But they held firm on two things:
They don’t want Claude used to build fully autonomous weapons that fire without a human in the loop, and they don’t want it used to mass surveil American citizens.
That’s it. That’s the line they drew.
But Pete Hegseth’s response was to threaten to designate Anthropic a “supply chain risk.”
Here’s why that matters:
That label isn’t a contract cancellation. It’s not a fine. It’s not a strongly worded letter…
It means every single company that wants to do business with the US military has to certify they don’t use Claude anywhere in their operations.
8 of the 10 largest companies in America use Claude.
Defense contractors, government suppliers, enterprise companies with any federal exposure…
ALL of them would have to cut ties with Anthropic overnight or lose their government contracts.
A senior Pentagon official told Axios:
“It will be an enormous pain in the ass to disentangle, and we are going to make sure they pay a price for forcing our hand like this.”
That’s a US government official threatening to financially destroy an American company because it doesn’t want its AI used to spy on American people.
And it gets WORSE.
Last week, Anthropic’s head of safeguards research resigned.
His parting message: “the world is in peril.”
Elon Musk – whose xAI already handed the Pentagon a blank check – is now publicly attacking Anthropic calling Claude anti-human.
And the Pentagon official told Axios they’re “confident” OpenAI, Google, and xAI will all agree to the “all lawful purposes” standard.
So what you’re actually watching right now is every major AI company in America quietly handing the government unlimited access to the most powerful technology ever built.
With no guardrails.
No limits.
No company-imposed restrictions on what it can be used for.
One company tried to hold a line.
But the government is about to make an example out of them.
If Anthropic folds, it’s over.
Every lab just learned what happens when you push back.
And every restriction, every safety policy, every ethical guardrail these companies spent years building gets negotiated away behind closed doors the second the government asks.
If they don’t fold, a $380 billion company gets made radioactive in its OWN country.
Watch what happens next.
Because whatever Anthropic decides in the next few weeks, it sets the precedent for how much control AI companies actually have over their own technology.
Turns out the answer might be: none.
————————————————– X —————————————————–
From Stars and Stripes