The Coin Report, November 7: Technology platforms, policy think tanks, and India's National Association of Software and Service Companies (Nasscom) agree that the Indian government's proposed AI content regulation is costly and impractical. On October 22, the Ministry of Electronics and Information Technology (MeitY) released draft amendments to the 2021 IT Rules that would require social media platforms to curb deepfakes by adding watermarks and labels to algorithmically modified content. Where AI-generated content cannot be traced to its source, liability would fall on social media platforms and AI tools.

In response, major platforms such as Meta argue that the draft's scope is too broad: requiring all algorithmically generated content to be pre-labeled would sharply increase review costs and staffing. They call for clear definitions of misinformation, deepfakes, and harmless versus harmful content in any AI regulation. Nasscom contends that a more reasonable approach would regulate AI content according to its actual impact and potential harm rather than the process by which it is generated, while think tank experts favor an outcomes-oriented rule system led by the central government over rules targeting technical mechanisms and processes.

Analysts note that the core question is how to enforce AI regulation in practice; by shifting responsibility for identifying and managing deepfake content from AI generation platforms to social media platforms, the draft has stirred controversy.

Original: www.toutiao.com/article/1848279288932695/

Statement: This article represents the author's own views.