[By Wang Yi, Observer Network] The conflict between U.S. artificial intelligence (AI) startup Anthropic and the Pentagon continues to escalate. On March 9 local time, the company formally filed a lawsuit against the U.S. Department of Defense, accusing it of misusing the "supply chain risk" label to unfairly penalize an American company.
In a statement, Anthropic said the "supply chain risk" designation is narrow in scope and does not apply to U.S. companies, and called the lawsuit a necessary step to "protect our business, customers, and partners." A Department of Defense spokesperson responded that, per policy, the department does not comment on ongoing litigation.
The New York Times reported the same day that the "supply chain risk" label is typically applied only to companies deemed a significant threat to national security, such as those with ties to China. Once placed on the list, a company is effectively cut off from doing business with the Department of Defense. The label had never before been used against a domestic U.S. company.
The Washington Post likewise reported on March 10 that the "supply chain risk" designation is usually reserved for Chinese or Russian companies suspected of aiding foreign espionage. Mark Jia, a professor at Georgetown University, told the paper that because the law the Pentagon is relying on was designed for companies linked to foreign adversaries, Anthropic is likely to prevail in court.
Earlier, during negotiations over a contract worth roughly $200 million, the two sides clashed sharply over how the company's AI tools could be used in warfare. Anthropic asked the Pentagon to guarantee that its AI model Claude would not be used for mass surveillance inside the United States or to power fully autonomous lethal weapons.
The Pentagon, however, maintained that private companies have no right to set policy for the U.S. government, and the negotiations ultimately collapsed. Defense Secretary Hegseth then announced that Anthropic had been designated a "supply chain risk" entity, and on March 5 the Pentagon formally notified the company that the designation had taken effect.
Recently, the tech outlet The Information published a sharply worded internal memo in which Anthropic CEO Dario Amodei said the Trump administration opposed the company "because we did not give the dictator Trump the kind of praise he wanted." A U.S. defense official and another informed source said the memo's leak was what ultimately sank the negotiations.
"This issue directly destroyed the negotiations," one person familiar with the talks said. Amodei later issued a statement apologizing for the memo.
Video screenshot of Anthropic CEO Dario Amodei
In its latest court filing, Anthropic argued that the relevant statute defines "supply chain risk" narrowly and does not cover U.S. companies. It also contended that the decision was plainly ideologically motivated, intended to punish the company for its public stance, and violated its right to free expression under the First Amendment of the U.S. Constitution.
Anthropic said the Trump administration has begun canceling its contracts, and that the company's existing and future dealings with private clients now face uncertainty, potentially costing it hundreds of millions of dollars in revenue in the near term. "In addition to direct financial losses," the complaint states, "Anthropic's reputation and core First Amendment freedoms are also under attack."
According to U.S. media reports, Anthropic's AI tools are widely deployed in the Defense Department's classified systems, where they analyze the vast volumes of data collected by U.S. intelligence agencies and rapidly filter information. An informed source said the Pentagon is still using Anthropic's technology, including in support of Trump's recent bombing of Iran.
The Washington Post said the Claude model has been deeply integrated into Defense Department systems and, until recently, was the only AI model approved for use in classified systems. The tool is embedded in the military's "Maven Smart System," helping commanders analyze intelligence and identify targets. According to earlier reports, ahead of military operations the system generated hundreds of targets with their exact coordinates, ranked by priority; it also sharply accelerated operational planning and could help assess the results of strikes.
Jessica Tillipman, associate dean at the George Washington University Law School, called designating a U.S. company a "supply chain risk" an extreme step by the Pentagon: "They are turning a tool originally meant for national security into a lever for commercial negotiations."
Georgetown's Mark Jia similarly judged that Anthropic is highly likely to win in court, given that the law the Pentagon invoked was designed for companies tied to foreign adversaries.
The dispute has caused a stir in Silicon Valley and raised a broader question: when AI companies work with the U.S. government, how far should they be able to restrict how their technology is used?
The Information Technology Industry Council, which represents major tech companies including Google, NVIDIA, Apple, and Amazon, sent Hegseth a letter last week expressing concern over his decision. The letter noted that emergency powers such as the "supply chain risk" designation are typically invoked only in genuine emergencies and generally target entities deemed foreign adversaries.
On March 9, 19 OpenAI employees and 18 Google employees filed legal briefs in support of Anthropic. As experts who study cutting-edge AI systems, they said, they understand both the potential risks of these technologies and the importance of establishing safety guardrails. If the Pentagon can punish Anthropic for its stance, they argued, it will "undoubtedly harm the competitiveness of the U.S. industry and research in AI and broader technology fields."
Meanwhile, Anthropic said it remains willing to keep negotiating with the Pentagon and has offered to help the Department of Defense gradually migrate its systems to other AI platforms. An informed source said OpenAI and Elon Musk's xAI have recently signed agreements with the Department of Defense to supply technology for classified systems.
Notably, just hours before OpenAI announced its agreement with the Pentagon, Trump ordered the government to stop using Anthropic's software within six months. Unlike Anthropic, OpenAI agreed to let the Pentagon use its AI systems for any "legal use."
OpenAI says that by building specific safeguards into the technology, it can uphold its safety principles and add further protections to keep its systems from being used for mass surveillance of American citizens. Critics counter that these terms still leave the Pentagon considerable room to maneuver.
This article is exclusive to Observer Network. Reproduction without permission is prohibited.
Original: toutiao.com/article/7615450047257510442/
Statement: This article represents the personal views of the author.