China’s Measures for the Identification of AI-Generated Synthetic Content
Published 19 March 2025
Sarah Xuan
To regulate the identification of AI-generated synthetic content, protect the legitimate rights and interests of citizens, legal persons, and organizations, and safeguard the public interest, China issued the Measures for the Identification of AI-Generated Synthetic Content (hereinafter referred to as “the Measures”) on March 7, 2025. The Measures establish a comprehensive regulatory system for the identification of AI-generated content, apply to internet information service providers engaged in AI-based content generation and synthesis, and will take effect on September 1, 2025.
This article provides a detailed legal interpretation of the Measures, outlining their key provisions and analyzing their implications for domestic and foreign technology companies, AI service providers, and digital content platforms operating in China.
I. Scope and Key Definitions
1. Applicability of the Measures
The Measures apply to network information service providers (hereinafter referred to as “service providers”) engaged in AI-generated synthetic content identification activities as outlined in the relevant regulatory provisions. This means that any entity providing AI-based content generation services within China’s jurisdiction will be subject to these regulations.
2. Definition of AI-Generated Synthetic Content
AI-generated synthetic content refers to text, images, audio, video, virtual scenarios, and other forms of content created or synthesized using AI technologies.
The identification of AI-generated synthetic content is categorized into two types:
1) Explicit Identification: Labels that are directly presented in AI-generated content or interactive interfaces using text, sound, graphics, or other forms that are perceptible to users.
2) Implicit Identification: Technical measures embedded in the metadata of AI-generated content files that are not easily perceptible to users but provide essential attribution details.
II. Explicit Identification Requirements
1. Mandatory Explicit Identification for Certain AI-Generated Content
Under the Measures, if AI-generated synthetic content falls under Article 17, Paragraph 1 of the Provisions on the Administration of Deep Synthesis of Internet Information Services, service providers must include explicit identification markers as follows:
1) Text Content: A textual prompt or universal symbol must be placed at the beginning, end, or middle of the text. Additional visual indicators should be added to the surrounding interactive environment.
2) Audio Content: A voice prompt or an audio rhythm cue must be included at the beginning, end, or middle of the audio content. Alternatively, a prominent label must be displayed in the interactive interface.
3) Image Content: A prominent label must be added at an appropriate location within the image.
4) Video Content: A visible label must be included at the beginning of the video and at appropriate locations around the playback interface. Additional labels may be added in the middle and at the end of the video.
5) Virtual Scenarios: A prominent label must be included at the start of the virtual scenario and, where applicable, at appropriate points throughout the experience.
6) Other AI-Generated Content: Service providers must apply appropriate explicit identification based on the specific application scenario.
Additionally, when AI-generated content is made available for download, replication, or export, service providers must ensure that the content retains the required explicit identification.
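By way of illustration, the sketch below shows one way a provider might stamp a visible label onto an AI-generated image. It is a minimal, non-authoritative example assuming the third-party Pillow library and hypothetical file paths; the actual wording, placement, and styling of labels must follow the Measures and the applicable national standards, not this example.

```python
# Illustrative sketch only: add a visible "AI-generated" label to an image.
# Assumes the Pillow library and hypothetical file paths.
from PIL import Image, ImageDraw

def add_explicit_label(src_path: str, dst_path: str,
                       label: str = "AI-generated") -> None:
    """Draw a visible label in the lower-right corner of an image."""
    img = Image.open(src_path).convert("RGB")
    draw = ImageDraw.Draw(img)
    # Measure the label so it can be anchored to the corner with a margin.
    left, top, right, bottom = draw.textbbox((0, 0), label)
    text_w, text_h = right - left, bottom - top
    x, y = img.width - text_w - 10, img.height - text_h - 10
    # A dark backing rectangle keeps the label legible on any background.
    draw.rectangle((x - 5, y - 5, x + text_w + 5, y + text_h + 5), fill=(0, 0, 0))
    draw.text((x, y), label, fill=(255, 255, 255))
    img.save(dst_path)
```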
III. Implicit Identification Requirements
1. Service providers must embed implicit identification within the metadata of AI-generated content in compliance with Article 16 of the Provisions on the Administration of Deep Synthesis of Internet Information Services. This metadata must include:
1) Content attributes (indicating AI generation);
2) The service provider’s name or identification code;
3) A unique content identification number.
2. The Measures encourage service providers to adopt digital watermarking and other technologies for implicit identification.
3. Metadata refers to descriptive information embedded in the file header according to specific encoding formats, allowing the documentation of content origin, attributes, and usage.
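To illustrate what such implicit identification might look like in practice, the sketch below writes the three required elements into a PNG file’s text-chunk metadata. It is a non-authoritative example assuming the Pillow library; the field names are hypothetical and not drawn from any published standard.

```python
# Illustrative sketch only: embed the three required implicit-identification
# elements (content attribute, provider name/code, unique content ID) into a
# PNG file's metadata. Field names are assumptions, not standard terms.
from PIL import Image
from PIL.PngImagePlugin import PngInfo

def embed_implicit_id(src_path: str, dst_path: str,
                      provider_code: str, content_id: str) -> None:
    """Write implicit identification fields into a PNG's metadata chunks."""
    img = Image.open(src_path)
    meta = PngInfo()
    meta.add_text("AIGC:ContentAttribute", "AI-generated")  # content attribute
    meta.add_text("AIGC:Provider", provider_code)           # provider name or code
    meta.add_text("AIGC:ContentID", content_id)             # unique content ID
    img.save(dst_path, pnginfo=meta)
```

Comparable metadata mechanisms exist for other formats (for example, EXIF/XMP for JPEG images and ID3 tags for audio), and digital watermarking can complement metadata where re-encoding might strip file headers.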
IV. Responsibilities of Content Dissemination Platforms
Service providers offering network information dissemination services must implement the following measures to regulate the distribution of AI-generated content:
1. Verification of Metadata for Implicit Identification
1) If metadata confirms the content as AI-generated, the platform must visibly label the content to notify the public.
2) If metadata does not confirm AI generation but the user declares it as such, the platform must still add a visible label.
3) If metadata does not confirm AI generation and the user does not declare it as such, but the platform detects explicit markers or traces of AI synthesis, the platform must label the content as “suspected AI-generated content.”
2. Provision of Identification Features
Platforms must offer users the ability to declare AI-generated content and provide corresponding labeling tools.
3. When any of the above situations arise, the metadata must be updated to include content attributes, platform identification, and a content ID.
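The three-branch verification logic above can be summarized as follows; this is an illustrative sketch only, and the inputs and label strings are assumptions rather than terms defined in the Measures.

```python
# Illustrative sketch of a dissemination platform's labeling decision.
# Inputs and label wording are assumptions, not terms defined in the Measures.
from typing import Optional

def decide_public_label(metadata_confirms_ai: bool,
                        user_declares_ai: bool,
                        ai_traces_detected: bool) -> Optional[str]:
    """Return the public-facing label to attach, or None if no label applies."""
    if metadata_confirms_ai:
        return "AI-generated content"               # confirmed via implicit ID
    if user_declares_ai:
        return "AI-generated content (user-declared)"
    if ai_traces_detected:
        return "Suspected AI-generated content"     # platform-detected traces
    return None
```

In each labeled case, the platform would also update the file’s metadata with the content attributes, its own platform identification, and a content ID, as required in item 3 above.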
V. Compliance Obligations for App Distribution Platforms
Internet application distribution platforms must review AI service capabilities before approving the listing or launch of applications. They must require application service providers to disclose whether they provide AI-generated content services and verify their compliance with the identification regulations.
VI. User Compliance and Prohibitions
1. User Obligations
Users who disseminate AI-generated content must proactively declare and label their content using the identification tools provided by service providers.
2. Prohibited Activities
Any organization or individual is prohibited from:
1) Maliciously removing, altering, forging, or concealing AI-generated content identification;
2) Providing tools or services that enable the above activities;
3) Using improper identification methods to infringe upon the legitimate rights of others.
VII. Additional Compliance Requirements
1. Alignment with Other Legal and Regulatory Frameworks
Service providers must comply with all relevant laws, administrative regulations, departmental rules, and mandatory national standards when implementing identification measures.
2. Submission of Identification Materials for Regulatory Approval
When completing regulatory requirements such as algorithm filings and security assessments, service providers must submit the relevant AI-generated content identification materials and enhance information-sharing mechanisms to support law enforcement against illegal activities.
[Comment] The Measures for the Identification of AI-Generated Synthetic Content reflect China’s commitment to promoting responsible AI development while preventing misinformation and the misuse of synthetic media. For businesses operating AI services in China, the following compliance strategies are recommended:
1. Ensure that both explicit and implicit identification measures are in place and conform to the technical standards outlined in the Measures.
2. Update terms of service and user agreements to reflect the new identification requirements and ensure users are aware of their labeling obligations.
3. Proactively liaise with Chinese regulatory authorities to ensure compliance with AI service regulations, including algorithm filings and security assessments.
4. As AI regulations evolve, stay informed of new compliance requirements and adjust AI governance frameworks accordingly.
By adhering to these regulatory requirements, businesses can mitigate legal risks, enhance transparency, and contribute to a responsible AI ecosystem in China.