The Wall Street Journal reported today that OpenAI CEO Sam Altman told employees the company is exploring a deal with the Pentagon, a move that could help resolve the dispute between Anthropic and the Department of Defense over the use of battlefield AI.
"We will see if we can reach an agreement with the U.S. War Department to allow our models to be deployed in classified environments and comply with our principles," Altman wrote, adding that OpenAI would seek to prevent uses such as "domestic surveillance and autonomous offensive weapons."
He said OpenAI hopes to "ease tensions" and will rely on technical safeguards, such as limiting deployment to cloud environments, to prevent abuse.
Commentary: Altman's overture to the U.S. War Department amounts to seizing the moment to win Pentagon orders while bowing to the government. Anthropic's hard line against military use brought a full crackdown on it, and OpenAI quickly stepped in. The company claims to uphold safety principles on one hand while offering to place its models in classified environments on the other; fundamentally, it is using a posture of compliance to win military contracts and government protection. The talk of "easing tensions" is merely a commercial calculation: seize the defense AI market while the competitor is out of the game. The so-called ethical red lines prove worthless against huge orders and policy risk. This fully demonstrates that American AI giants have no real ideals, only obedient business opportunities: those who submit to power and cater to the military gain resources and protection, while those who refuse are pushed out.
Original article: toutiao.com/article/1858330388024649/
Statement: This article represents the views of the author alone.