U.S. Blocks AI? Anthropic Strikes Back!

Markets

What to know:

  • Anthropic has sued multiple U.S. federal agencies, alleging its Claude AI systems were effectively blacklisted from government procurement without required legal procedures.
  • The company argues officials informally imposed nationwide contracting restrictions on national security and supply-chain grounds, without formal determinations, documented evidence or consideration of less restrictive alternatives.
  • The lawsuit comes as the federal government rapidly adopts generative AI, favoring OpenAI’s ChatGPT, and follows reports that the White House is preparing an executive order to remove Anthropic’s tools from federal use.

Anthropic just picked a fight with its biggest potential customer. A David vs. Goliath tale, if David had a $100 billion budget and a grudge against the government’s fickle heart.

The AI company behind Claude filed a lawsuit Monday in the U.S. District Court for the Northern District of California naming the Departments of Treasury, Commerce, State, Health and Human Services, Veterans Affairs, the General Services Administration, and several other federal agencies as defendants.

Anthropic says the U.S. government effectively blacklisted its AI systems from federal procurement, and did it without following any of the legal procedures required to actually ban a vendor. A shame, really: like banning a farmer for growing too many potatoes without a trial.

The complaint says there was no formal determination, no interagency review, no documented evidence, and no evaluation of less restrictive alternatives such as conditional approval or security audits. The government, ever the bureaucratic magician, waved a wand and said, “Nope, you’re out.”

According to the complaint, officials justified the restrictions internally on national security and supply-chain grounds, then let the directive spread informally through centralized procurement channels until Anthropic was locked out of federal contracting across the board. A silent coup, orchestrated by a committee of paper-pushers.

The timing makes this more than a procurement dispute. The U.S. government is in the middle of the largest AI adoption push in federal history, using OpenAI’s ChatGPT as its tool of choice. Agencies are deploying generative AI for cybersecurity, intelligence analysis, administrative automation, and internal decision-making. The contracts are large, multi-year, and increasingly central to how the government operates. A bit like a king choosing a new horse to ride, and then kicking the old one into the ditch.

Getting locked out of that market isn’t a minor commercial setback; it’s an existential competitive problem for any AI company that wants to be taken seriously at the institutional level. Imagine being the only kid who brought a sandwich to a feast and then getting told, “No, you’re not invited.”

Anthropic is asking the court to declare the restrictions unlawful and to block agencies from enforcing them. If it wins, the ruling would reopen federal procurement to the company and could set a precedent on how far agencies can go in restricting AI vendors on national security grounds without following their own rules. A legal tango, with the government’s foot on Anthropic’s throat.

The government hasn’t publicly responded to the filing, but an Axios report on Tuesday noted the White House was preparing an executive order formally instructing the federal government to rip out Anthropic’s AI from its operations, citing sources familiar with the matter. A move as subtle as a sledgehammer, wrapped in a diplomatic bow.


2026-03-10 16:14