
China’s New Rules for Labeling AI-Generated Content Come into Force

Published 4 September 2025 by Yu Du
The Cyberspace Administration of China (CAC), Ministry of Industry and Information Technology (MIIT), Ministry of Public Security (MPS), and National Radio and Television Administration (NRTA) jointly issued the Measures for the Labeling of AI-Generated Synthetic Content (Measures) on 7 March 2025. The Measures officially came into effect on 1 September 2025. This means that providers and platforms involved in generating or disseminating AI-generated content must implement both explicit (visible) and implicit (technical) labeling mechanisms to clearly identify synthetic content. The goal is to promote the healthy development of artificial intelligence, protect the lawful rights and interests of individuals and organizations, and uphold public interest.
Scope and Application
The Measures apply to network information service providers that generate or disseminate AI content in line with existing laws and regulations, including the Cybersecurity Law, the Regulations on Algorithmic Recommendations for Internet Information Services, the Regulations on Deep Synthesis of Internet Information Services (Deep Synthesis Regulations), and the Provisional Measures for the Management of Generative AI Services. Any service provider falling within these frameworks must comply with the new labeling obligations outlined in the Measures.
Definition and Types of Labeling
AI-generated synthetic content is defined as text, images, audio, video, and virtual scenes produced through artificial intelligence technologies. The Measures distinguish between explicit and implicit labeling. Explicit labeling refers to visible notices—such as text prompts, icons, or audio markers—embedded in the content or user interface that are easily recognizable by users. Implicit labeling refers to technical metadata embedded in the content files, including information such as the nature of the content, the provider’s name or code, and a unique content identifier.
Requirements for Explicit Labeling
When the generated content falls under the specific scenarios set out in the Deep Synthesis Regulations, service providers are required to apply prominent explicit labels, adapted to the medium:
• Text: labels at the beginning, middle, or end of the text, or in the surrounding interactive interface.
• Audio: voice prompts or rhythmic indicators at appropriate positions.
• Images: visible markings in noticeable locations.
• Video: labels at the beginning and around the playback frame, with optional labels at the middle or end.
• Virtual environments: clear notifications at the beginning and, where appropriate, throughout the user experience.
• Other use cases: prominent labels applied in a manner consistent with the application’s characteristics.
If the provider allows downloading, copying, or exporting of content, the file must retain appropriate explicit labels.
Requirements for Implicit Labeling
Service providers must also embed metadata into the content file’s header, in accordance with Article 16 of the Deep Synthesis Regulations. This metadata should include details about the synthetic nature of the content, the provider’s identity or system code, and a unique content identifier. Providers are encouraged to use technologies such as digital watermarks to further enhance traceability and authenticity. The metadata must conform to structured coding standards and be able to document the source, attributes, and intended use of the content.
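As an illustration only, an implicit label of this kind could be represented as a structured metadata record embedded in the file header. The field names and JSON encoding below are assumptions for the sketch; the Measures defer the actual schema and encoding to the applicable national technical standards.

```python
import json
import uuid
from datetime import datetime, timezone

def build_implicit_label(provider_name: str, provider_code: str) -> str:
    """Sketch of an implicit-label metadata record under the Measures.

    Field names are hypothetical; the real schema is set by national
    technical standards, not by this example.
    """
    record = {
        "content_type": "ai_generated",   # synthetic nature of the content
        "provider_name": provider_name,   # service provider's name
        "provider_code": provider_code,   # or registered system code
        "content_id": str(uuid.uuid4()),  # unique content identifier
        "generated_at": datetime.now(timezone.utc).isoformat(),
    }
    return json.dumps(record, ensure_ascii=False)
```

A provider would then write such a record into the content file’s header alongside any digital watermark it chooses to apply.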
Responsibilities of Dissemination Platforms
Platforms that disseminate content must implement specific procedures. When metadata indicates that the file is synthetic, the platform must add prominent labels around the published content to notify users. If metadata is missing but the user declares the content synthetic, the platform must still apply clear indicators. If neither metadata nor user declaration is present but signs of synthetic origin, such as visible labeling or generation artifacts, are detected, the content should be treated as suspected synthetic content and appropriately labeled. Platforms must also provide users with labeling tools and encourage voluntary declarations. In all such cases, platforms must embed additional metadata recording dissemination attributes such as platform codes and content IDs.
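The platform-side triage described above follows a fixed order of checks, which can be sketched as a simple decision function. The label strings and boolean inputs are illustrative assumptions, not the regulation’s wording.

```python
def classify_content(has_metadata_label: bool,
                     user_declared_synthetic: bool,
                     artifacts_detected: bool) -> str:
    """Sketch of a dissemination platform's labeling triage.

    Mirrors the order of checks in the Measures: embedded metadata
    first, then a user declaration, then detected signs of
    synthetic origin.
    """
    if has_metadata_label:
        return "synthetic"            # metadata marks the file as AI-generated
    if user_declared_synthetic:
        return "synthetic"            # user declared the content synthetic
    if artifacts_detected:
        return "suspected_synthetic"  # visible labels or generation artifacts
    return "unlabeled"                # no indication of synthetic origin
```

Any content classified as synthetic or suspected synthetic would then receive a prominent label, with the platform’s own dissemination metadata (platform code, content ID) embedded as well.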
App Stores and Application Disclosure Obligations
Application distribution platforms, during the review process for onboarding new applications, must ask whether the app provides AI-generated synthetic content services. If it does, the platform must review relevant labeling documentation to verify that the app complies with the Measures. This ensures that AI capabilities are disclosed and properly labeled at the application level before going live.
User Agreements and Log Retention
Service providers must specify the labeling methods, styles, and responsibilities in their user agreements and ensure that users are clearly informed of their obligations. If a user requests content without visible labels, the provider may fulfill the request only after confirming that the user understands and accepts their labeling obligations and responsibilities. In such cases, providers must retain logs of the user’s identity and service details for no less than six months, to ensure traceability and compliance.
Prohibitions and Legal Liabilities
Users who disseminate AI-generated content must voluntarily declare and label such content using the tools provided. It is strictly prohibited for any individual or organization to maliciously delete, alter, forge, or conceal content labels. Likewise, it is illegal to provide tools or services that enable others to commit such acts. Entities are also barred from using misleading labeling practices that infringe on the rights of others. Violations will be dealt with by relevant regulatory authorities including the cyberspace, telecommunications, public security, and broadcasting departments, in accordance with applicable laws and regulations.
Legal Compliance and Regulatory Coordination
Service providers must ensure their labeling practices are aligned with all relevant laws, administrative regulations, departmental rules, and national technical standards. When applying for algorithm registration or security assessments, they must submit all required labeling documentation. Additionally, service providers should support regulatory bodies in sharing labeling data to prevent and combat illegal or harmful activities.
Comment
The implementation of the Measures reflects China’s regulatory response to the challenges of managing content produced by generative artificial intelligence. By requiring both explicit and implicit labeling of AI-generated content, the regulation enhances traceability and transparency, helping to prevent misinformation and deepfake-related risks. In the context of rapid technological development, this framework provides clearer operational guidelines for the compliant application of artificial intelligence.