On 9 September 2024, the National Technical Committee 260 on Cybersecurity of the Standardization Administration of China released the first version of the AI Safety Governance Framework (the “Framework”). It is China’s second step following the release of the Global AI Governance Initiative (the “Initiative”) on 18 October 2023, which systematically sets out China’s thinking on global AI governance around development, safety, and governance. On the one hand, the Framework provides development and management guidance for the domestic AI field in China; on the other, it offers a first Chinese draft intended to promote the early formation of a global AI governance framework and standard specifications.
According to the Framework, AI safety governance involves identifying AI safety risks, technical response measures, comprehensive governance measures, and guidelines for safe development and application. AI safety risks are dynamic; whatever risks arise and whenever they arise, AI governance should abide by the following principles:

1. Take an inclusive attitude towards the research, development, and application of AI, while strictly observing the bottom line of protecting national security, the public interest, and the legitimate rights and interests of the public.
2. Dynamically analyze the safety risks of AI technology and its applications, and adjust governance measures quickly.
3. Combine technical and management measures, and clarify the safety responsibilities of all relevant parties in the AI research, development, and application ecosystem.
4. Promote international cooperation on AI safety governance and the formation of a global AI governance system with broad consensus.
The Framework divides AI safety risks into two classes: endogenous safety risks caused by technical defects and deficiencies, and application safety risks caused by improper use, abuse, or malicious use. Endogenous safety risks are further divided into risks arising from flaws in model algorithms, data, and systems. The Framework clarifies that system safety risks include supply chain safety risks, alluding to the use of unilateral measures such as technology monopolies and export controls (an apparent reference to the United States) to create development barriers, which raises the risk of global supply disruptions of AI chips, software, and tools. In addition, the Framework classifies AI application safety risks into four major areas: the network field, the reality field, the cognitive field, and the ethical field.
In response to these risks, the Framework proposes technical countermeasures against endogenous and application safety risks covering training data, computing facilities, model algorithms, product services, and application scenarios, as well as comprehensive governance measures involving all relevant parties along the industry chain, including technology research and development institutions, service providers, users, government departments, industry associations, and social organizations. Among the technical countermeasures, the Framework requires that the provision of AI services and AI model algorithms abroad comply with cross-border data management regulations and export control requirements, respectively. It further states that China’s countermeasure for supply chain safety risks is to “track the vulnerabilities and defect information of software and hardware products, and take timely patching and reinforcement measures”. As for the comprehensive governance measures, the Framework lists the following:

1. For endogenous safety risks: address model algorithm safety risks by promoting research on the explainability and predictability of AI and building a responsible AI research, development, and application system; prevent data safety risks by improving data security and personal information protection specifications across all aspects of AI; and mitigate system safety risks by strengthening AI supply chain security, establishing a mechanism for notifying and sharing risk and threat information, and building an emergency response mechanism for AI security incidents.
2. For application safety risks: prevent risks in the network, reality, cognitive, and ethical fields through measures including classification and grading management of AI applications, an AI service traceability management system, the cultivation of relevant talent, the establishment and improvement of publicity and education, industry self-discipline, and social supervision mechanisms, and the promotion of international exchanges and cooperation.

On talent cultivation, the Framework specifically calls for strengthening the security talent pool in fields that China is vigorously developing, such as autonomous driving, intelligent medical care, brain-like intelligence, and brain-computer interfaces. It also holds that sharing knowledge achievements, open-sourcing AI technology, jointly developing AI chips, frameworks, and software, and diversifying supply chain sources can prevent supply chain safety risks.
Finally, the Framework formulates guidelines for model algorithm developers, AI service providers, users in key fields, and the general public. Model algorithm developers shall implement the principle of being “people-oriented” and pursuing “AI for good” at key stages of R&D, attach importance to data security and personal information protection, respect intellectual property and copyright, ensure the security of the model algorithm training environment, comprehensively evaluate the potential defects of model algorithms and the maturity of AI product and service capabilities, and conduct regular evaluation tests. AI service providers shall disclose the capabilities, limitations, and scope of application of AI products or services, ensure that the responsibility chain is traceable and that products provide only the minimum necessary functions, continuously track security risks in operation, promptly report any security incidents or vulnerabilities discovered, and promptly stop unlawful misuse of AI products. Users in key fields, which include government departments, critical information infrastructure, and areas that directly affect public safety and citizens’ health and safety, should carefully evaluate and regularly review the long-term and potential impacts of AI technology, adopt strong password policies, enhance network security and supply chain security capabilities, limit AI systems’ access rights to data, formulate data backup and recovery plans, and avoid relying entirely on AI decisions. The public should choose reputable products, carefully read relevant agreements and instruction documents, avoid inputting sensitive information, and protect personal privacy.
In conclusion, based on the characteristics of AI technology and the sources and manifestations of its risks, the Framework proposes specific technical responses and comprehensive prevention and control measures for endogenous safety risks (model algorithm, data, and system security) and for application safety risks across the network, reality, cognitive, and ethical fields. In addition, it formulates guidelines for model algorithm developers, AI service providers, users in key fields, and the public. This should support the healthy development of China’s domestic AI sector and promote active participation by all parties in the Initiative.