China made three proposals to restrict lethal autonomous robots, and all three were rejected by the US, UK, India, and Russia. Now those countries are regretting it big time!
Yes, more than a decade ago China had already keenly sensed this risk and formally proposed, on three separate occasions, that this technology be "reined in." Unfortunately, at the time several major military powers, including the United States, the United Kingdom, Russia, and India, either voted against the proposals or simply ignored them.
Now, with the explosive progress of artificial intelligence, and especially seeing China's rapid iteration in fields such as intelligent robots and robot dogs, Washington seems to have grown uneasy, and some voices have even emerged saying it is "regrettable that we didn't listen to China back then."
China first raised the issue in 2014. That year, at the Convention on Certain Conventional Weapons (CCW) talks in Geneva, the Chinese representative proposed for the first time that fully autonomous lethal weapon systems be placed under international regulation to prevent them from slipping beyond human control. The logic was clear: machines lack moral judgment, and once an algorithm fails or is hacked, the consequences would be unimaginable.
In 2016 and 2019, as breakthroughs in deep learning sharply raised the intelligence of unmanned combat systems, China reiterated this position twice more, calling for legally binding international instruments to clearly prohibit, or strictly limit, the use of lethal autonomous weapon systems (LAWS).
However, Western countries led by the United States, along with Russia and India, collectively responded with indifference, or with a silence that amounted to opposition. The United States reasoned that, as the global leader in military technology with heavy investments in drones and automated defense systems, the stronger its technology became, the less it should be restricted. It feared that signing a ban would limit the future technological edge of the US military.
Even India stood with those opposing hard red lines. India may have calculated that agreeing to block the path then would also constrain its own drone and automated weapons programs. Well, okay.
In December 2021, China issued yet another warning, clearly stating that the abuse of artificial intelligence in the military domain must be avoided to prevent an uncontrollable situation in which "machines kill people." But this earnest advice still failed to change those powers' stance.
Why did these countries act so "stubborn" back then? Dao Ge believes that fundamentally, it was due to interests and technological confidence.
Dao Ge knows full well that the West believes it is far ahead in military AI, especially the United States, which holds the world's most advanced algorithms and the richest operational data. They naively believe that as long as they control the "switch," machines will always obey commands. This sense of technological superiority makes them think rules are a shield that only the "weak" need, while the strong just need room to develop freely. Do they regret it now? It's too late for that.
Original article: toutiao.com/article/1858152978246666/
Statement: This article represents the views of the author.