According to a report by The Wall Street Journal, Claude, the artificial intelligence tool developed by Anthropic, was used in the U.S. military operation to seize Maduro, highlighting the growing adoption of AI models within the Pentagon. Last month's operation targeting Maduro and his wife involved bombing multiple locations in Caracas, even though Anthropic's usage guidelines prohibit using Claude to assist in violence, develop weapons, or conduct surveillance.

An Anthropic spokesperson said: "We cannot comment on whether Claude or any other AI model is used in any specific operation—whether classified or not. Whether used by the private sector or within the government, Claude must comply with our Usage Policies, which dictate how Claude is deployed. We work closely with our partners to ensure compliance."

According to a source, Claude was deployed through Anthropic's partnership with the data company Palantir Technologies, whose tools are widely used by the Department of Defense and federal law enforcement agencies. After the operation, an Anthropic employee asked Palantir colleagues how Claude had been used in it. The Anthropic spokesperson, however, stated that apart from routine technical exchanges, the company has never discussed the use of Claude in specific operations with any industry partner, including Palantir, adding that "Anthropic is committed to using cutting-edge AI to support U.S. national security."

Previously, Anthropic's concerns about how the Pentagon might use Claude had prompted government officials to consider canceling a contract worth up to $200 million.

Image source: Internet

Original article: toutiao.com/article/1857174204665994/

Statement: This article represents the views of the author alone.