
Guangzhou Court Recognizes That an AI-Generated Article Infringes the Right of Reputation of a Listed Company

Published 20 March 2026 by Fei Dang
On March 12, 2026, the Guangzhou Internet Court ("the Court") published a case of infringement of the right of reputation arising from the use of AI to generate lengthy posts spreading rumours about a listed company.

Case introduction

The plaintiff is a group company listed on the Chinese stock market; one of its senior executives was arrested on suspicion of committing a crime in the course of his duties. The defendant is an individual who registered and operates a personal WeChat public account focused on finance, where he describes himself as "a certified public accountant, certified asset appraiser, and certified tax agent with thirty years of experience in the relevant industry." It is reported that the defendant instructed a generative AI program to produce an article of no fewer than 10,000 words based on a roughly 1,700-word document he supplied. That document contained details of a criminal case involving a senior executive of the plaintiff, such as the embezzlement of public funds through import subsidies and the transfer of benefits via related-party transactions. After multiple rounds of in-depth analysis and searches of hundreds of online sources, the AI program ultimately generated an article of nearly 15,000 words.
The defendant then published the AI-generated article on his WeChat public account under the title "Details of A 320 Million Yuan Fraud Case Involving A Senior Executive at A Group Company Exposed," marking it as "Original." The alleged details of the executive's crime included "leveraging personal connections to secure approvals, licenses, and other resources for the company; irregularities in financial data; inflating profits; engaging in self-dealing through related-party transactions; and being suspected of financial fraud and other violations of laws and regulations." The plaintiff claimed that the article was severely inaccurate and damaged its goodwill, brought an action for infringement of the right of reputation before the Court, and requested that the defendant apologize and compensate it for economic losses in the amount of RMB 500,000, as well as the costs of enforcing its rights. The defendant argued that the article was generated by AI and that its content was drawn from publicly available information online, and that it therefore did not infringe the right of reputation.

Court opinion

Upon trial, the Court found that the content concerning the plaintiff and its executives in the document the defendant fed into the generative AI was inaccurate and unsupported by evidence, and that the sources cited in the generated article likewise lacked legitimate references or supporting evidence. By the time the article was taken down, it had accumulated over 11,000 views and had been shared more than 1,000 times. The Court concluded that the key issue was whether the defendant was at fault in using AI to generate and publish the information, and analyzed the question from four aspects.
Firstly, during the article-generation phase, the defendant failed to conduct the necessary verification of the input reference materials and used misleading prompts. The Court held that users not only have a duty to verify the authenticity and accuracy of input reference materials to ensure the reliability of the generated results, but also bear a duty of care to use generation instructions prudently, as user instructions directly affect the legal boundaries of the generated content. In this case, the content regarding the plaintiff in the 1,700-word source document the defendant provided to the AI was false, yet the defendant failed to verify it, demonstrating clear negligence. Furthermore, the defendant's instruction to the AI to generate a 10,000-word article essentially required the AI to expand upon a source text containing false information. The defendant therefore "subjectively possessed the intent to condone the generation of infringing content and the expansion of the infringing impact."

Secondly, during the article-publishing phase, the defendant failed to conduct the necessary verification of the AI-generated content and allowed false information to spread. Since the defendant neither fact-checked the generated article, nor filtered out false information, nor took necessary measures to prevent harm, his public dissemination of the article led to the spread of infringing statements. The defendant thus bore subjective fault for allowing the infringing consequences to occur.

Thirdly, the defendant failed to fulfill his duty of care regarding the labeling of AI-generated content.
By failing to comply with the labeling requirements and to proactively disclose that the article he published was AI-generated, as stipulated in Article 10.1 of the "Measures for the Labeling of Artificial Intelligence Generated and Synthesized Content," the defendant was at fault.

Fourthly, the defendant failed to exercise the duty of care commensurate with his professional status. The Court held that, as a financial professional holding multiple qualifications such as certified public accountant and certified asset appraiser, the defendant should have possessed a heightened awareness of financial fraud, conflicts of interest, and their harmful consequences, and should have foreseen the impact on the plaintiff's corporate reputation of publishing an article "exposing" the plaintiff's alleged violations. The Court further held that the defendant's failure to exercise a duty of care commensurate with his professional standing, coupled with his labeling of the article as "original," reflected his "subjective intent to mislead the public into believing that the article was the result of his professional analysis."

In sum, the Court ordered the defendant to publish a statement of apology to the plaintiff on the WeChat public account in question and to compensate the plaintiff for economic losses in the amount of RMB 10,000. The judgment has taken effect.

Comment

This case involves a claim for infringement of the right of reputation arising from AI-generated content. As in many previous generative-AI infringement cases, the ruling reaffirms the emerging judicial consensus that "AI generation" does not exempt a party from liability. Furthermore, the case delineates the scope of the duty of care for users of generative AI.
Specifically, the judge in this case defined four tiers of duty of care for AI users, covering both the generation and dissemination stages of AI-generated content. During the generation stage, AI users must conduct necessary verification of the input reference materials (the duty of input verification) and must avoid prompts that induce the AI to generate infringing content (the duty of prudent prompting). During the dissemination stage, AI users must verify the authenticity and objectivity of AI-generated content (the duty of content review) and must proactively declare and label AI-generated content using the available labeling functions (the duty of labeling). Notably, the labeling obligation is explicitly stipulated in Article 10.1 of the "Measures for the Labeling of Artificial Intelligence Generated and Synthesized Content," which provides: "When users utilize online information content dissemination services to publish generated or synthetic content, they shall proactively declare such content and use the labeling functions provided by the service provider to label it." In short, the four-tier duty of care established in this judgment for the generation and dissemination stages of AI-generated content not only helps fill the gap in determining the duty of care of AI users but also helps curb the spread of false information, and is therefore of great significance.

