Beijing Internet Court Looks at Good Faith Issues in AI-Related Copyright Litigation
Published 19 November 2025
Yu Du
On 12 November 2025, the Beijing Internet Court issued a judgment in a copyright dispute that marks a pivotal moment in the oversight of generative-AI-assisted works in China. The case concerned an image the plaintiff claimed to be a photographic work. However, the court found that the image was highly likely generated using artificial-intelligence tools. By sanctioning the plaintiff for concealing AI involvement, the court signalled that, in the era of generative AI, originality claims must be accompanied by transparent and verifiable disclosure of the creation process.
Case Background
A corporate plaintiff alleged that a defendant had used an image online without authorisation, thereby infringing the plaintiff’s exclusive right of information-network dissemination under the PRC Copyright Law.
The plaintiff presented the image as its original photographic work, asserting full authorship and exclusive rights. During litigation, however, the court discovered several red flags: the image bore tell-tale signs of generative-AI output (such as atypical texture uniformity, improbable lighting combinations and absence of camera metadata consistent with a natural photograph).
The court ordered the plaintiff to produce: (i) evidence of the original capture (e.g., camera model, exposure data, RAW file); (ii) proof of human creative stages (selection of frame, lighting and composition); and (iii) any prompt logs, model versions or iteration records if an AI tool had been used.
The plaintiff and its upstream rights-holder repeatedly failed to offer a credible explanation of these issues, claiming instead that the image was “self-photographed” and that the upstream provider “handled all technical details”. The court concluded that the plaintiff had misrepresented key facts and obscured the role of AI generation, thereby impeding fact-finding.
Court’s Holding and Legal Basis
The Beijing Internet Court held that the plaintiff’s mischaracterisation of the work as a pure photograph, while knowing or suspecting AI involvement, constituted an act of bad-faith litigation, relying on the following legal provisions:
1) Article 13 of the PRC Civil Procedure Law requires parties to conduct litigation in good faith. Article 67 further requires each party to provide evidence supporting its assertions, and allows the court to draw adverse inferences where a party refuses to produce evidence within its control. Article 118 empowers courts to impose admonitions or fines where a party obstructs litigation or disrupts the evidentiary process.
2) Article 10 of the PRC Copyright Law provides authors with rights including the right of information-network dissemination, while Article 48 confirms that unauthorised online dissemination constitutes infringement.
3) Article 63 of the Provisions of the Supreme People’s Court on Evidence in Civil Litigation (as amended in 2019) provides that parties shall make truthful and complete statements regarding the facts of the case.
In applying these norms, the court found that: (i) by failing to provide verifiable evidence of human creative input and by misrepresenting the creation process, the plaintiff undermined the integrity of the litigation; (ii) given the increasing prevalence of AI generation, a mere assertion of authorship is insufficient: the rights-asserting party must disclose whether AI tools were used and maintain a credible chain of creation records; and (iii) accordingly, the court imposed a procedural financial penalty on the plaintiff and rejected the parts of the claim relating to the contested image.
Comment
Despite its simple factual scenario, the decision carries far-reaching implications. As one of the first judgments to explicitly link the duty of truthful disclosure regarding AI use with the overarching principle of good-faith litigation in copyright disputes, it reflects a clear shift in judicial expectations. In the AI era, originality claims require transparency, explainability and verifiable documentation of the creation process. Misrepresenting or concealing the use of AI does not merely weaken a claim but may result in procedural sanctions and fundamentally undermine the claimant’s rights. For businesses, creators and legal practitioners, the message is unequivocal: while AI may assist creativity, the chain of authorship must remain authentic, accurate and traceable. This judgment marks an important milestone, signalling that as AI-generated content becomes ubiquitous, the threshold for legal protection increasingly demands both substantive human creativity and procedural clarity grounded in good-faith engagement with the court.