
A guide to the Pentagon’s dance with Anthropic and OpenAI.

  • Writer: The San Juan Daily Star
  • 8 minutes ago
  • 5 min read
OpenAI’s chief executive, Sam Altman, delivers remarks during a press conference at The White House in Washington on Tuesday, Jan. 21, 2025. Late last month, Defense Secretary Pete Hegseth delivered an ultimatum to Anthropic, the only company that had provided the Pentagon with artificial intelligence technologies for use on classified systems. (Haiyun Jiang/The New York Times)

By CADE METZ


Late last month, Defense Secretary Pete Hegseth delivered an ultimatum to Anthropic, the only company that had provided the Pentagon with artificial intelligence technologies for use on classified systems.


If Anthropic did not allow the Pentagon to deploy these technologies for “all lawful uses,” Hegseth said, he would sever ties with the San Francisco startup.


The threat set off a chain of events in which the Defense Department labeled Anthropic a “supply chain risk,” a designation that would prevent all military contractors from using the company’s technologies, and signed an agreement with OpenAI, Anthropic’s biggest rival.


The negotiations were, to say the least, confusing.


How does the Pentagon use Anthropic’s technology?


Anthropic’s technologies are widely used inside the Defense Department because the startup agreed last year to integrate its systems with technology from Palantir, a data analytics company that is approved for classified operations.


Separately from Anthropic’s partnership with Palantir, the Pentagon also uses Anthropic’s technology to analyze imagery and other intelligence data as part of a $200 million AI pilot program.


Anthropic’s technology is being used as U.S. military forces engage in a widening war against Iran, two people familiar with the technology said on the condition of anonymity.


Google, OpenAI and Elon Musk’s xAI are also part of the pilot program, but their technologies are not yet used on classified systems. Anthropic was a step ahead of its rivals thanks to its partnership with Palantir.


Why did the Pentagon get angry at Anthropic?


On Feb. 15, The Wall Street Journal reported that Anthropic had raised concerns with Palantir about the role its technologies played in the U.S. military operation to capture Venezuela’s president, Nicolás Maduro. The story inflamed earlier tensions, as Hegseth and others at the Pentagon argued that Anthropic was resisting the military’s use of these AI systems.


The Defense Department was already in talks with Anthropic to establish new contractual language that allowed the Pentagon to use the company’s technologies for any lawful purpose. But Anthropic was reluctant to agree to those terms.


Why was Anthropic reluctant?


Anthropic wanted contractual language that prevented the Pentagon from using its technology with autonomous weapons or for mass surveillance of Americans. It argued that specific language was needed to ensure that the technologies were used only in ways that aligned with what they could “reliably and responsibly do.”


The Pentagon said private companies should not try to control how the military operated.


On Feb. 24, Hegseth met with Anthropic’s CEO, Dario Amodei, and said that if Anthropic failed to agree to the Pentagon’s demands by 5:01 p.m. on the next Friday, he would designate the company a supply chain risk.


What does it mean to be a supply chain risk?


It means that a company’s technology cannot be used by the Pentagon or any of its contractors in their work with the government. The designation is typically applied only to firms with ties to the government of China.


Did cooler heads prevail?


No. The company published a blog post saying it could not “accede” to the Pentagon’s demands.


Minutes after the deadline passed, Hegseth deemed Anthropic a supply chain risk in a post to social media.


He added that “no contractor, supplier or partner that does business with the United States military may conduct any commercial activity” with the company. But the Pentagon planned to continue to use Anthropic’s technologies for up to six months as it arranged for alternatives.


The Pentagon later sent a letter to Anthropic saying it had officially designated the company as a supply chain risk.


Does Hegseth have the power to do that?


A court will probably decide. Anthropic has said it intends to sue the government, and legal scholars say a suit would most likely be successful.


“Anthropic’s case is very strong,” said Alan Rozenshtein, a professor of law at the University of Minnesota.


Legal scholars also say the Pentagon does not have the power to bar its contractors from commercial activity with the startup beyond just using its technology. For instance, it cannot prevent contractors from investing in Anthropic, they said.


“The commercial activity language is flatly illegal,” Rozenshtein said.


That is an important point because Amazon and Google — two of Anthropic’s biggest investors — are also Defense Department contractors.


Why didn’t the Pentagon just stop using Anthropic?


That would have been an easier solution to the dispute. “The correct response is to just cancel the contract and walk away,” Rozenshtein said.


Instead, the Pentagon appeared to make a political statement by labeling Anthropic a supply chain risk.


“It seems like the Pentagon just does not like Anthropic’s general political vibe and wants to destroy its entire business,” said Dean Ball, a senior fellow at the Foundation for American Innovation who was previously a policy adviser for AI under President Donald Trump. “That is beyond the pale.”


How did OpenAI get involved?


A day after Hegseth met with Amodei, OpenAI’s CEO, Sam Altman, started his own talks with the Defense Department.


Altman told the Pentagon that it should not give Anthropic the supply chain risk label because it would have a chilling effect on the department’s relationship with the tech industry. Like Anthropic, he said, OpenAI did not want its technologies used for mass surveillance of Americans or with autonomous weapons.


But Altman and OpenAI also worked on their own contract with the Pentagon. Just hours after Anthropic missed its deadline, Altman announced that OpenAI and the Pentagon had reached an agreement.


OpenAI agreed to let the Pentagon use its AI systems for any lawful purpose. But OpenAI also said it had negotiated terms that allowed the company to uphold its safety principles by installing specific technical guardrails on its systems.


Can technical guardrails prevent AI from being used for mass surveillance?


No. The guardrails built into today’s AI do not always work as they are designed. And even when these guardrails hold firm, there are many ways AI systems could still be used to feed surveillance or the use of autonomous weapons.


Three days later, OpenAI announced that it had amended its agreement with the Pentagon. It added language saying its AI systems “shall not be intentionally used for domestic surveillance of U.S. persons and nationals.”


Does the amendment uphold OpenAI’s safety principles?


Maybe not. Legal experts point out that the Pentagon could inadvertently collect data about Americans as it worked to monitor foreigners and that it would still be allowed to analyze this data under the terms of the contract.


A contract like this is also difficult for a private company to enforce, because a violation of the terms may not be obvious, Rozenshtein said. In other words, whether a technology has been used for mass surveillance is sometimes open to debate.


Even if the government breaches the contract, OpenAI can at most cancel service and sue for damages, but it cannot force the government to live up to its end of the bargain, Rozenshtein said.


So, what does all this mean?


“This is not just some dispute over a contract. This is the first conversation we have had as a country about control over AI systems,” Ball said. “What should the limitations be? And who gets to decide?”


But he and other experts said this was not the best way to decide these questions. They say Congress should step in to set firmer laws.

© 2026 The San Juan Daily Star - Puerto Rico