Confronted by Chinese AI companies closing the gap, U.S. AI giants OpenAI, Anthropic, and Alphabet's Google have formed a rare alliance to counter China's competitive model-distillation practices.

U.S. AI powerhouses OpenAI, Anthropic, and Google, longtime rivals, have begun collaborating to counter Chinese competitors that extract outputs from cutting-edge American AI models to gain an edge in the global AI race.

According to sources cited by Bloomberg, the three companies are sharing information through the "Frontier Model Forum," a nonprofit industry group co-founded in 2023 by these three AI leaders and Microsoft. The forum aims to detect so-called adversarial distillation practices that violate their service terms.

OpenAI confirmed its participation in the Frontier Model Forum's information-sharing efforts on adversarial distillation. The company said it recently submitted a memorandum to the U.S. Congress detailing such practices, accusing Chinese firms of riding on the coattails of technologies developed by OpenAI and other leading U.S. research labs. Google, Anthropic, and the Frontier Model Forum declined to comment.

Distillation is a technique that uses an existing "teacher" AI model to train a new "student" model that replicates the teacher's behavior, typically at far lower cost than building an original model from scratch. Certain forms of distillation are widely accepted, and even encouraged by AI labs, for instance when companies develop smaller, more efficient versions of their own models or allow external developers to use distillation to build non-competing technologies.
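The teacher/student mechanism described above can be sketched in a few lines. This is a minimal, illustrative NumPy example of the standard distillation loss (cross-entropy against the teacher's temperature-softened output distribution), not any lab's actual training pipeline:

```python
import numpy as np

def softmax(logits, T=1.0):
    # Temperature-scaled softmax; higher T yields a "softer" distribution,
    # exposing more of the teacher's relative preferences between classes.
    z = np.asarray(logits, dtype=float) / T
    z -= z.max()  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()

def distillation_loss(student_logits, teacher_logits, T=2.0):
    # Cross-entropy between the teacher's softened distribution and the
    # student's: minimized when the student mimics the teacher's outputs.
    p_teacher = softmax(teacher_logits, T)
    p_student = softmax(student_logits, T)
    return -np.sum(p_teacher * np.log(p_student + 1e-12))

teacher = [4.0, 1.0, 0.2]
matched = distillation_loss(teacher, teacher)            # student mimics teacher
mismatched = distillation_loss([0.2, 1.0, 4.0], teacher)  # student disagrees
```

A student whose outputs match the teacher's incurs a much smaller loss than one that disagrees, which is why repeatedly querying a model and training on its responses can cheaply transfer its capabilities.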

The majority of models developed by Chinese labs are open-weight (often loosely called open-source), meaning the trained model weights, and sometimes code, are publicly available for free download and deployment on users' own infrastructure, resulting in lower costs. This presents economic challenges for U.S. AI companies that have long kept their models proprietary, betting that customers would pay for access to their products to help offset the tens of billions of dollars invested in data centers and other infrastructure.

The information sharing among U.S. AI companies on adversarial distillation mirrors established practice in the cybersecurity industry, where companies routinely exchange data on attacks and adversary tactics to strengthen network defenses. Through collaboration, the AI firms aim to detect such behavior more effectively, identify perpetrators, and stop unauthorized extraction before it succeeds.

Trump administration officials have expressed willingness to promote information sharing among AI companies to curb adversarial distillation. Last year, the "U.S. Artificial Intelligence Initiative" released under President Trump called for establishing an information-sharing and analysis center, motivated in part by this objective.

Google published a blog post stating it has observed an increase in attempts to extract its models. While the three major U.S. AI labs have not yet provided concrete evidence of how heavily competitors rely on distilled outputs from their models, they note that the prevalence of such attacks can be gauged from the volume of large-scale query traffic.
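Volume-based detection of the kind alluded to above can be illustrated with a toy heuristic. The account names, log format, and threshold below are hypothetical; this is a crude sketch of flagging accounts by aggregate query volume, not any provider's actual detection system:

```python
from collections import Counter

def flag_bulk_extractors(query_log, threshold=10_000):
    # query_log: iterable of (account_id, num_queries) entries.
    # Flag accounts whose total query volume exceeds the threshold,
    # a stand-in for the large-scale request signals described above.
    totals = Counter()
    for account, n in query_log:
        totals[account] += n
    return sorted(acct for acct, total in totals.items() if total > threshold)

# Hypothetical usage: acct_b's combined traffic (15,000) crosses the threshold.
log = [("acct_a", 500), ("acct_b", 8_000), ("acct_b", 7_000), ("acct_a", 300)]
flagged = flag_bulk_extractors(log)
```

Real systems would weigh many more signals (query diversity, timing, output reuse), but the principle is the same: extraction at scale leaves a measurable traffic footprint.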

Source: rfi

Original: toutiao.com/article/1861806497085523/
