Artificial intelligence giant Anthropic accidentally released highly confidential internal code, triggering widespread copying across GitHub and potentially inflicting serious damage on the Amazon-backed company's business.

The developer of the Claude chatbot described the incident as a release mishap "caused by human error, not a security vulnerability," American tech news site VentureBeat reported on Tuesday.

According to Axios and The Verge, the leak involved over 500,000 lines of code related to Anthropic's AI coding assistant, Claude Code, which helps users write and manage software through natural language commands. The leaked content included unreleased features, performance data, and developer notes.

As reported by Ars Technica and The Verge, the code quickly spread online, with versions posted on code-sharing platforms like GitHub and copied thousands of times within hours. Although Anthropic took measures to remove the material and issued takedown notices, the content had reportedly already been widely replicated and disseminated.

According to VentureBeat, by exposing the "blueprint" of Claude Code, the leak may have provided "bad actors" with a "roadmap" to bypass security checks, trick the tool into running hidden commands, or access data without user knowledge.
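The reporting does not describe the actual safeguards, so as a purely hypothetical illustration (none of the names, patterns, or checks below come from the leaked Claude Code source), consider a coding assistant that screens shell commands against a denylist before executing them. If the exact patterns became public, an attacker could craft a command that performs the same action while slipping past every check:

```python
import re

# Hypothetical denylist a coding assistant might use to block
# dangerous shell commands before execution. These patterns are
# illustrative only and are not from any leaked Anthropic code.
BLOCKED_PATTERNS = [
    re.compile(r"\brm\s+-rf\b"),       # block recursive force-deletes
    re.compile(r"\bcurl\b.*\|\s*sh"),  # block piping downloads into a shell
]

def is_command_allowed(command: str) -> bool:
    """Return True if no blocked pattern matches the command."""
    return not any(p.search(command) for p in BLOCKED_PATTERNS)

# A command the filter catches:
print(is_command_allowed("rm -rf /tmp/build"))    # False

# Knowing the exact patterns, an attacker can rewrite the same
# destructive action so that no pattern matches (flags split up):
print(is_command_allowed("rm -r -f /tmp/build"))  # True
```

Real tools rely on far more robust defenses than a pattern list, but the underlying concern is the same: the more an attacker knows about the exact filtering logic, the easier it becomes to engineer inputs around it.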

A separate data breach reported in February exposed internal materials detailing Anthropic’s unreleased model, Claude Mythos, after thousands of draft files were left accessible in a public data cache.

The leaked materials described the model as the company’s most powerful system to date, but one that could pose "unprecedented cybersecurity risks" if widely deployed. As reported by U.S. business magazine Fortune, the company has delayed its release over concerns about its capabilities and potential for misuse.

Original source: toutiao.com/article/1861338314334276/

Disclaimer: This article represents the personal views of the author.