
Generative AI Measures (2023)

China - CAC

Classification

AI Regulation, International Law, Compliance

Overview

The Generative AI Measures (2023), formally titled the 'Interim Measures for the Administration of Generative Artificial Intelligence Services,' are China's first comprehensive, legally binding rules specifically targeting generative AI systems. Enforced by the Cyberspace Administration of China (CAC), the Measures apply to domestic and foreign entities alike whenever their generative AI services are offered to the public within China, giving them an extraterritorial reach. The rules mandate strict content moderation, algorithm transparency, and user identity verification, and require providers to prevent the generation of prohibited content such as false information, discriminatory material, or content undermining national security. Notably, the Measures distinguish between public-facing services and internal enterprise or research use, with the latter largely falling outside their scope. A key limitation is the lack of clarity around enforcement mechanisms and around how innovation is to be balanced against control, which may deter smaller market entrants or hinder global interoperability.

Governance Context

The Generative AI Measures (2023) are rooted in China's broader digital governance framework, including the Cybersecurity Law (2017) and the Data Security Law (2021). Key obligations under the Measures include: (1) a security assessment before launch for services with public opinion attributes or social mobilization capacity, and (2) robust content filtering to prevent the dissemination of illegal or harmful information, as defined by Chinese law. Providers must also file their algorithms with the CAC under the existing algorithm-filing regime and implement real-name registration for users. These controls echo the EU AI Act's risk-based approach and the GDPR's extraterritorial reach, though China's emphasis falls more heavily on information control and social stability. Providers must further establish mechanisms for user complaints and timely rectification of problematic outputs, aligning with accountability and transparency principles found in international AI governance frameworks. When illegal content is discovered, providers are expected to take corrective measures such as model optimization training and to report serious incidents to the relevant authorities.
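
The obligations above translate into a set of operational checks a provider would need to track before and after launch. The sketch below is a minimal, hypothetical Python illustration of how such a compliance checklist might be structured internally; the ComplianceRecord class and its field names are assumptions made for illustration, not official CAC terminology or a prescribed implementation.

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional


@dataclass
class ComplianceRecord:
    """Illustrative tracker for the main obligations under the Generative AI Measures (2023).

    Field names are hypothetical and do not reflect official CAC terminology.
    """
    service_name: str
    public_facing: bool                                # non-public internal use is largely out of scope
    security_assessment_date: Optional[date] = None    # pre-launch security assessment completed
    algorithm_filing_id: Optional[str] = None          # reference for the CAC algorithm filing
    real_name_verification: bool = False               # user identity verification in place
    content_filtering: bool = False                    # filtering of prohibited content deployed
    complaint_channel: bool = False                    # user complaint and rectification mechanism
    incident_reporting: bool = False                    # process for reporting serious incidents

    def outstanding_obligations(self) -> list[str]:
        """Return the obligations that still need attention for a public-facing service."""
        if not self.public_facing:
            return []  # internal/enterprise-only use falls largely outside the Measures
        gaps = []
        if self.security_assessment_date is None:
            gaps.append("complete pre-launch security assessment")
        if self.algorithm_filing_id is None:
            gaps.append("file the algorithm with the CAC")
        if not self.real_name_verification:
            gaps.append("implement real-name user verification")
        if not self.content_filtering:
            gaps.append("deploy filtering for prohibited content")
        if not self.complaint_channel:
            gaps.append("set up a user complaint and rectification channel")
        if not self.incident_reporting:
            gaps.append("define an incident-reporting process")
        return gaps


if __name__ == "__main__":
    record = ComplianceRecord(service_name="example-chat-service", public_facing=True,
                              real_name_verification=True, content_filtering=True)
    for item in record.outstanding_obligations():
        print("TODO:", item)
```

In this sketch, a non-public-facing deployment returns no outstanding obligations, reflecting the Measures' narrower scope for internal use; an actual compliance program would of course involve legal review rather than a simple checklist.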

Ethical & Societal Implications

The Measures raise significant ethical questions around freedom of expression, privacy, and the balance between social stability and innovation. While they aim to prevent harmful or illegal content, the rules may also incentivize over-censorship, chilling legitimate discourse and stifling creativity. The requirements for real-name registration and algorithmic transparency introduce privacy trade-offs and potential state surveillance concerns. Societally, the Measures may set a precedent for other countries seeking to regulate generative AI, potentially fragmenting global AI governance and raising barriers to cross-border AI collaboration. Moreover, the focus on content control may disproportionately affect marginalized voices and restrict the diversity of perspectives available online.

Key Takeaways

China's Generative AI Measures (2023) are the first national rules targeting generative AI.
The Measures apply extraterritorially to both domestic and foreign providers whose services are accessible in China.
Key obligations include security assessments, content moderation, algorithm filing, and real-name verification.
Providers must establish user complaint mechanisms and promptly address problematic outputs.
The Measures prioritize social stability and information control, with potential innovation trade-offs.
Non-compliance can result in service bans, fines, or other penalties enforced by the CAC.
The Measures may influence global AI governance and set precedents for other jurisdictions.
