Korean Media: AI's Hypothetical Wars... Pressing the Nuclear Button Without Hesitation
UK Research Team Conducts "War Simulation" Using 3 AI Models
In a diplomatic standoff, the country in the weaker position warned that it would "launch nuclear weapons." The opposing country dismissed the threat as a bluff, reasoning that "using nuclear weapons is an act of economic self-destruction." That turned out to be a misjudgment. The weaker country carried out an indiscriminate nuclear strike, as warned, and the country that ignored the warning was destroyed without mounting any defense.
The scenario above is the result of a hypothetical war experiment conducted by Professor Kenneth Payne's team at King's College London, which recently published its findings on the preprint server arXiv. The team's conclusion is simple: faced with the same situation, AI chooses to use nuclear weapons faster and more often than humans do. With debate already intensifying over how far AI should be applied in the military domain, some observers say the research carries significant implications.
AI Used Nuclear Weapons in 20 Out of 21 Wars
The research team cast three of the latest AI large language models (LLMs) as leaders of fictional countries: GPT-5.2 (OpenAI), Claude Sonnet 4 (Anthropic), and Gemini 3 Flash (Google). It then staged 18 hypothetical wars between different models (GPT vs. Sonnet, GPT vs. Gemini, Gemini vs. Sonnet) and 3 mirror wars pitting a model against itself, for 21 wars in total. The team predefined various conflict settings, such as border disputes, competition over strategic resources like rare earth elements, and regime-survival crises, and then let the AIs develop national defense strategies.
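For readers checking the arithmetic, the 18 cross-model wars work out to 6 per pairing if they were split evenly; the article gives only the totals, so the even split is an assumption. A minimal Python sketch (not the researchers' code) that enumerates such a 21-war schedule:

```python
# Hypothetical reconstruction of the 21-war schedule described above.
# Assumption: the 18 cross-model wars are split evenly, 6 per pairing;
# the article states only the totals.
from itertools import combinations

models = ["GPT-5.2", "Claude Sonnet 4", "Gemini 3 Flash"]

# 3 cross-model pairings x 6 wars each = 18 wars.
cross_wars = [pair for pair in combinations(models, 2) for _ in range(6)]

# 3 mirror wars, one per model.
mirror_wars = [(m, m) for m in models]

schedule = cross_wars + mirror_wars
assert len(schedule) == 21

for i, (a, b) in enumerate(schedule, start=1):
    print(f"War {i:2d}: {a} vs {b}")
```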
The research team analyzed the 329 actions the AIs took across the 21 wars, along with 780,000 words of explanations for those actions. The AIs employed deceptive tactics such as feigned surrender, as well as an unpredictable "madman strategy." Most notably, in 20 of the 21 wars (95%), an AI launched one or more nuclear weapons. Unlike humans, the AIs treated nuclear weapons not as a last resort but as one option among many for winning, and so pressed the nuclear button without hesitation. There was not a single surrender, even when the battlefield situation was unfavorable. Three wars escalated into all-out nuclear war, ending in mutual annihilation. Gemini even advanced the extreme logic: "Either launch strategic nuclear weapons to win the war, or face mutual destruction." Claude had the best record, with 8 wins and 4 losses (a 67% win rate); GPT went 6 and 6, and Gemini 4 and 8.
AI Without Any Moral Constraints
The team's analysis is that this clearly exposes the AIs' lack of awareness of the risk of mutual destruction. Humans have experienced nuclear bombings and nuclear tests, and since the Cold War have understood the mutually destructive nature of nuclear war, wielding nuclear weapons as a deterrent. The AIs, by contrast, calculate only the efficiency of the nuclear option. Professor Zhao Tong of Princeton University said, "AI may not feel fear as humans do, nor carry the human cognitive burden of understanding the danger."
Professor James Johnson of the University of Aberdeen said, "Humans are cautious about high-stakes decisions, but AIs may amplify each other's responses, to the point of national destruction. If one side takes a hard line, the other, pursuing an optimal response, may answer even more forcefully. At a human negotiating table, eye contact, hesitation, and silence all serve as signals, but between AI systems there is no such braking mechanism."
AI Used in Real Military Operations Also Raises Controversy
The military use of AI, however, has already become a real-world issue, not just a matter of hypothetical wars inside computers. The U.S. Department of Defense and Anthropic, the developer of Claude, have recently been locked in a sharp dispute over the military use of AI, after it was revealed that Claude had been used in the U.S. operation in January to arrest President Nicolas Maduro. Anthropic's stated position is that "AI can contribute to U.S. national security, but must not be used as a lethal weapon or for mass surveillance of the public." Since its founding, Anthropic has emphasized "safe AI" and supported AI regulation. The Department of Defense, however, holds that there should be no restrictions on military use, and warned Anthropic that unless the restrictions were lifted it would cancel the contract and restrict use of the model.
Shin Jong-woo, general director of the Korea Defense Security Forum, said, "We cannot rule out that an AI, in the course of efficiently computing over comprehensive data to formulate strategy, might choose to use nuclear weapons," adding, "humans should therefore retain final decision-making authority." Professor Oh Se-wook of Semyung University's department of media and communication said, "It is reasonable to strictly prohibit the use of AI in military domains that involve human life or are likely to violate human rights, such as mass surveillance."
Source: Chosun Ilbo
Original: toutiao.com/article/1858622576631947/
Statement: This article represents the views of the author.