Chinese Ministry of Defense Spokesman: Military Use of Artificial Intelligence Could Lead the World to a "Terminator"-Style Apocalyptic Scenario

AFP, Beijing, Wednesday 11th: China warned the U.S. government on Wednesday that uncontrolled use of artificial intelligence (AI) in military fields could lead the world into an apocalyptic scenario depicted in the movie "Terminator," where machines seize power.

AFP reported that a heated ethical debate is currently under way in the United States over the military application of artificial intelligence. The administration of President Trump is locked in a bitter standoff with the AI company Anthropic and has imposed sanctions on it.

The company, which specializes in AI research and development, refused to allow the U.S. military unrestricted use of its technology, in particular refusing to let it be used for mass surveillance of populations or for automated strikes and bombings with potentially lethal consequences.

Previously, multiple media outlets reported that Anthropic's technology had been used to prepare a joint U.S.-Israeli offensive against Iran, which later triggered the Middle East war. "Advancing the militarization of AI without restraint, using it as a tool to infringe on the sovereignty of other countries, allowing it excessive influence over war decisions, and letting algorithms decide matters of human life and death not only undermines wartime ethics and responsibility but also risks the technology spiraling out of control," Chinese Ministry of Defense Spokesman Jiang Bin said on Wednesday.

Jiang Bin responded to questions about Washington's intention to grant the U.S. military unrestricted access to AI, emphasizing: "The dystopian scene depicted in the American movie 'Terminator' could one day become reality."

The film "Terminator," released in 1984 and starring Arnold Schwarzenegger, depicts a post-apocalyptic future in which, by 2029, robots controlled by advanced artificial intelligence wage a brutal war against humanity.

Because Anthropic refused to lift the restrictions on the use of its AI, the U.S. Department of Defense added it last week to the Pentagon's list of companies posing "supply chain national security risks."

The move requires all contractors to immediately stop using Anthropic's technology, including its generative AI assistant Claude, when providing services to government departments.

The report noted that such an action against an AI company is unprecedented within the U.S. government. Anthropic has been providing its AI models to U.S. defense and civilian agencies since the end of 2024.

Anthropic argued that the Pentagon's supply-chain-risk designation "lacks legal basis" and may set a "dangerous precedent," and said it plans to challenge the designation in court.

The incident has also sparked online debate over how much control private tech companies should have over the federal government's use of their products. Before negotiations broke down, bipartisan defense leaders in the Senate had reportedly urged the U.S. Department of Defense and Anthropic to resolve the dispute.

According to Reuters, Democratic Representative Sam Liccardo announced that he will introduce an amendment to the Defense Production Act this week to bar federal agencies from retaliating against tech suppliers that restrict the deployment of their technology in order to reduce risks to American citizens.

Sources: rfi

Original: toutiao.com/article/1859381016391680/

Statement: This article represents the views of the author alone.