Beijing Court Interprets Standards on Determination of AI-Generated Content by Network Platforms
Published 28 July 2025
Fei Dang
Earlier this year, the Beijing Internet Court concluded a case between an internet user and a network service provider concerning a dispute over the marking of online content as AI-generated.
The plaintiff is an internet user who posted on a network platform: “Working doesn't really make you much money, but it can open up new perspectives ... If you're interested in learning to drive and plan to drive in the future, you can complete it during one of your freer vacations ... There won't be much free time to get a driver's license after you start working.” The defendant, the operator of the network platform, considered the post a violation because it failed to mark AI-generated content, and therefore hid the post and barred the user from posting for one day.
After an unsuccessful appeal within the platform, the plaintiff sued in the Beijing Internet Court, claiming that the content had not been created with AI and that the defendant's action was a breach of contract, and requesting the Court to order the defendant to revoke the punishment and delete the violation record from the platform's system.
The defendant replied that, under the parties' network service contract, it is entitled to review and identify any violation committed by the plaintiff in using the platform service and to take necessary measures, so its action was not a breach of contract. The defendant claimed that its platform rules clearly provide that “creators should take the initiative to use the label ‘AI-Generated Content’ to make a declaration when publishing content containing AI-generated content, and for content that fails to declare, the platform will take appropriate measures to restrict its distribution and add the relevant labels.” The defendant argued that the plaintiff's post was first identified as “including AI-generated content” by a machine and was then reviewed manually, in which review it was found to lack obvious human emotional characteristics. Further, the defendant contended that it has the right to choose its own reasonable and necessary methods and techniques; that the network service contract does not oblige it to disclose or explain its technical secrets and algorithmic logic to users; and that the plaintiff accordingly has no right to be so informed. During the litigation, the defendant submitted as evidence its relevant algorithmic mechanism recorded in the Internet Information Service Algorithm Filing System and claimed that the part relevant to the detection at issue was the “answer security discriminating module,” described as “based on a self-developed instruction fine-tuning model; trained on a corpus of collected and labeled security-risk responses, it can intercept most of the offending responses generated by the deep synthesis model in conversation.” In short, the defendant requested the Court to reject the plaintiff's claims.
Upon trial, the Court considered that the platform is entitled, under the contract, to review and determine whether the content at issue was generated by AI, but that such review and its outcome must have a reasonable basis. More specifically, the defendant had set out in its platform announcement the requirement to mark AI-generated content and the measures to be taken for failure to mark it; this forms part of the platform service contract and entitled the defendant to review and handle the content at issue.
On the other hand, the plaintiff, who claimed the content at issue was not AI-generated, would under normal circumstances be expected to submit preliminary evidence attesting to human authorship, such as creative drafts, originals, source documents, or source data; however, since the plaintiff's content was an instantaneous piece of writing, it would be impossible to provide such evidence. Thus, the defendant, which both controls the algorithmic tool and judges its outcome, should provide reasonable evidence or an explanation for its conclusion that the content at issue was AI-generated.
The Court further explained that it could not confirm the relevance of the filed algorithm submitted by the defendant, as its published function was identifying risky answers rather than recognizing and determining AI-generated content, which meant that the defendant had failed to reasonably explain the basis for its conclusion that the content at issue was “AI-generated content.” As to the manual review, the defendant argued that “obvious human emotional characteristics” were required in order to overturn the algorithmic result, but the Court considered such a standard “more reliant on subjective perceptions and personal experiences that lack a scientific basis and a high degree of persuasiveness and credibility.”
In conclusion, the Court ordered the defendant to unhide the content at issue and delete the violation record from its system, and rejected the plaintiff's other request. The judgment has come into effect.
Comment
The Cyberspace Administration of China (CAC) and other departments jointly issued the Measures for the Labeling of Artificial Intelligence Generated and Synthesized Content on March 7, 2025. The Measures regulate the use of labels for AI-generated and synthesized content, require users to proactively declare and label such content when posting it, and require network information content distribution service providers to take measures to regulate the dissemination of generated and synthesized content.
Although the Measures will not come into effect until September 1, 2025, this judgment, delivered in the first dispute in China arising from a platform's determination that a user's posted content was AI-generated, has already made useful explorations for the judicial handling of similar cases in the future.
According to the judge of the case, these instructive points include:
“1. A network content service platform has the right to use algorithmic tools to review and handle the question of whether content released by a user contains AI-generated or synthesized material, and to clearly inform users, through the user agreement, community norms, penalty rules, and the like, of the need to comply with the labeling requirements for AI-generated and synthesized content; if a user violates the relevant provisions, the platform may deal with the violation.
2. If the platform determines that the content released by the user is AI-generated or synthesized, the user needs to provide preliminary evidence that the content was created by a human being, and the people's court may comprehensively adjust the user's burden of proof in light of the form of creation, the content, the carrier, and other factors, and determine the probative force of the evidence.
3. Once the user has provided prima facie evidence, the platform shall submit evidence proving the correctness of its use of the algorithmic tool in making the determination, or provide explanations to the necessary extent, and the platform shall not evade this obligation merely on the ground that trade secrets are involved.
4. The judicial review of algorithmic explanations should comply with the principle of proportionality. In general, platforms do not need to provide technical details, source code, or raw data; rather, they need to prove or explain, in an understandable manner and around the disputed facts of the case, how the algorithm operates and whether reasonable remedial measures have been taken to address possible erroneous determinations.”
Given the rapid development and wide application of AI generation and synthesis technology, it is becoming increasingly difficult to distinguish AI-generated content from human creation, and such technology is also prone to being used for illegal gain, such as fraud and infringement. The labeling of AI-generated content is therefore a supervisory measure that helps keep online content clean and avoids unnecessary misunderstanding. The significance of this case lies in drawing a line between the rights and obligations of users and platforms with respect to the labeling of AI-generated content.