1. INTRODUCTION

With the rapid progress of artificial intelligence technology, promoting its healthy development and standardization has become crucial, especially given the wave of artificial intelligence-generated content (AIGC) triggered by ChatGPT. As a result, regulatory authorities in major countries are closely monitoring generative AI and introducing appropriate regulations. In China, the Cyberspace Administration of China (CAC) issued the Measures for the Management of Generative Artificial Intelligence Services (Draft for Comment) (the "AIGC Measures") on April 11, 2023, which aim to provide clearer guidance for China's AI and algorithm-related industries within the current regulatory landscape. This article provides a comprehensive analysis of the AIGC Measures to enhance readers' understanding of their significance.

2. BASIC APPROACH AND PRINCIPLES OF THE AIGC MEASURES

(1) Bolstering AIGC Development: The Legal Framework and Application Scope of the AIGC Measures

The AIGC Measures are empowered by three pillar data laws in China, namely the Cybersecurity Law (CSL), the Data Security Law (DSL), and the Personal Information Protection Law (PIPL). These superior laws, combined with the AIGC Measures, the Internet Information Service Algorithmic Recommendation Management Provisions (the "Algorithm Provisions"), the Internet Information Service Deep Synthesis Management Provisions (the "Deep Synthesis Provisions"), and the upcoming Measures for Ethical Review of Science and Technology (Trial), form the primary legal foundation and regulatory framework for ensuring compliance in the AIGC industry.

The AIGC Measures define generative artificial intelligence as technology that creates content based on algorithms, models, and rules. This scope is broader than that of the Deep Synthesis Provisions. AIGC companies targeting the Chinese public, regardless of their physical location, are subject to regulatory supervision. ChatGPT's recent shutdown of user accounts from the P.R.C. can be seen as a preventative measure to avoid becoming subject to the laws and regulations of the P.R.C. The AIGC Measures may also regulate "specifically targeted" AIGC products: even if an AIGC product is not offered directly to the public but to specific domestic enterprise users, or is custom-designed for them, it may still fall within the scope of the AIGC Measures.

The AIGC Measures stress the importance of prioritizing the use of secure and trustworthy software products, tools, computing, and data resources. It is worth noting that using secure and trustworthy network products and services is already mandatory under P.R.C. laws and regulations for critical information infrastructure operators and important data processors. Detailed guidelines or reference standards for assessing the security and trustworthiness of network products and services have yet to be introduced.

(2) Primary Principles of the AIGC Measures

The AIGC Measures set out five primary principles: adherence to the core values of socialism, prohibition of algorithmic discrimination, prohibition of unfair competition, prevention of false information, and non-infringement of personal privacy and intellectual property. Several of these principles merit particular attention:

  1. Prohibition of algorithmic discrimination. The prohibition of algorithmic discrimination is a key aspect of algorithm compliance under the current E-commerce Law. The AIGC Measures extend this prohibition to other stages, such as algorithm design, training data selection, model creation and optimization, and service provision.
  2. Prohibition of unfair competition. The existing Anti-Unfair Competition Law, the Anti-Monopoly Litigation Interpretation, and four supporting anti-monopoly regulations already regulate the use of data, algorithms, and platform advantages for unfair competition or monopolistic behavior. The AIGC Measures also address these issues, reflecting a response to similar compliance concerns surrounding algorithms.
  3. Prevention of false information. The AIGC Measures demand that content generated by AI be accurate and truthful, and Service Providers (as defined below) must take steps to prevent the creation of false information. However, what constitutes "false information" remains unclear. Some AI-generated content, such as synthetic articles, synthetic pictures, and deepfake videos, is inherently unrealistic, and requiring complete accuracy may contradict the relevant technology's original intent. Considering that AI technology is still at an early stage, ensuring 100% accuracy can be a huge challenge for Service Providers.
  4. Non-infringement of privacy and intellectual property. AIGC poses the risk of violating individuals' legitimate rights and interests, such as portrait rights, reputation rights, and rights to privacy. In addition, the demand for data that arises from algorithm training often leads companies to use data scraped from the internet, risking infringement of others' intellectual property or data rights. Correspondingly, the AIGC Measures set out the principle of non-infringement of privacy and intellectual property.

3. OBLIGATIONS OF THE SERVICE PROVIDER

Article 5 of the AIGC Measures defines Service Providers as individuals and organizations that use AIGC products to provide services such as chat and text, image, and sound generation. This also includes entities that provide programmable API interfaces to support others in generating related content, but excludes AIGC end-users. It is noteworthy that Service Providers are deemed de facto personal information processors under the AIGC Measures, meaning that, per the PIPL, they determine the purpose and method of processing personal information during the AIGC service, independently or jointly with the user. In practice, this is not always the case: when Service Providers only provide API interfaces, they may determine neither the purpose nor the method. In such scenarios, the role of the "entrusted party" fits Service Providers better than that of a "personal information processor." Regardless, the main body of the AIGC Measures focuses on the extensive obligations of Service Providers, which are introduced and analyzed as follows:

(1) Obligations of Security Assessment and Algorithm Filing

The AIGC Measures mandate that Service Providers conduct security assessments in accordance with the Provisions on the Security Assessment of Internet Information Services with Public Opinion Properties or Social Mobilization Capacity (the "Security Assessment Provisions") and fulfill algorithm filing obligations in accordance with the Internet Information Service Algorithmic Recommendation Management Provisions (the "Algorithm Management Provisions"). While the Security Assessment Provisions and Algorithm Management Provisions limit the scope of such assessments and filings to internet information services with public opinion properties or social mobilization capacity, regulatory authorities in practice determine such properties based on potential rather than actual effects. Therefore, AIGC product providers are advised to complete the security assessment and algorithm filing, since most AIGC products may affect public opinion or mobilize society. It remains unclear whether downstream Service Providers must file algorithms again if upstream Service Providers have already completed the filing.

(2) Obligations to Ensure the Lawfulness of the Source of Training Data

The AIGC Measures require Service Providers to ensure the lawfulness of training data from various aspects, including compliance with laws and regulations, no infringement of intellectual property rights, obtaining a legal basis for processing personal information, ensuring authenticity, accuracy, objectivity, and diversity, and meeting other regulatory requirements. Given that the lawfulness of source data is a risk inherent to algorithmic training, the AIGC Measures' mandate for Service Providers to ensure such lawfulness may indicate increased regulatory oversight over data sources in the future. Furthermore, ensuring objectivity and diversity of data is crucial to prevent algorithmic discrimination and information silos. However, whether the AIGC Measures apply authenticity and accuracy requirements solely to source data or also encompass artificially created products (data) remains unclear.

(3) Obligations of Annotation and Mark

The AIGC Measures impose two distinct obligations: the annotation obligation in Article 8 and the mark obligations in Article 16. Article 8 requires Service Providers to accurately and consistently annotate training data to assist algorithms in learning and training. To achieve this, the AIGC Measures mandate that Service Providers establish clear, specific, and operable annotation rules and provide training to annotation personnel. It is essential to note that accurate annotation is crucial to meet the accuracy requirement for trained algorithm models as stipulated in Article 4 of the AIGC Measures. Article 16 requires Service Providers to comply with the mark obligations outlined in the Deep Synthesis Provisions. Two types of mark requirements are specified in the Deep Synthesis Provisions: traceability marks and significant marks. Traceability marks apply to all generated content, while significant marks are only required for scenarios that may confuse or mislead the public. The AIGC Measures expand on the significant mark obligation of the Deep Synthesis Provisions, mandating that AIGC Service Providers apply significant marks regardless of whether the content may cause confusion or mislead the public.

(4) Obligations to Verify the Real Identity of End-Users.

The AIGC Measures require Service Providers to authenticate the real identity information of end-users in accordance with the CSL. "Real-name authentication on the backend and voluntary participation on the frontend" has become a standard requirement for nearly all online services. Based on similar real-name authentication requirements under the Deep Synthesis Provisions, the AIGC Measures may require Service Providers to collect information such as mobile phone numbers, identity card numbers, and unified social credit codes for real-name authentication.

(5) Obligations to Prevent Excessive Reliance or Addiction

Article 10 of the AIGC Measures requires Service Providers to take steps to prevent users from becoming overly reliant on, or addicted to, generated content. To safeguard user rights, the Measures mandate that Service Providers define and publicly disclose their services' intended users, usage scenarios, and purposes. However, compliance obligations, user protection, and ethical considerations may differ across business scenarios and necessitate further refinement. Service Providers can implement measures such as pop-up reminders or limits on usage frequency and duration to avert user addiction. Regular evaluation of the algorithm mechanism, model, data, and application results is essential to ensure compliance with ethical standards. Regulators may in the future launch campaigns in the AIGC industry, similar to the "Minor Protection" campaigns, to prevent user addiction.

(6) Obligations on Limiting the Use of Personal Information

The AIGC Measures require Service Providers to safeguard users' input information and usage records: Service Providers may not unlawfully retain input information from which a user's identity can be inferred, profile users based on their input information or usage, or disclose users' input information to third parties. However, in contrast to superior laws such as the PIPL and E-commerce Law, the AIGC Measures do not impose additional obligations on Service Providers as personal information processors or service providers. Nevertheless, Service Providers should still obtain separate consent or another legal basis for special personal information processing activities, such as user profiling, in accordance with the PIPL.

(7) Obligations to Prohibit the Generation of Discriminatory Content

The AIGC Measures require Service Providers to prevent generating discriminatory content based on users' race, national origin, gender, and other factors. This obligation encompasses measures such as preventing discrimination in algorithm design, training data selection, model generation and optimization, and service provision. Service Providers must also monitor their products to ensure that they do not generate discriminatory content. Violations of these obligations can lead to accountability under the AIGC Measures. However, a complete prohibition on generating content with any hint of a discriminatory nature could limit the ability to depict antagonists in generated novels and scripts, potentially compromising the free application and convenience of AIGC services.

(8) Obligations to Properly Handle User Complaints and Infringing Content

Article 13 of the AIGC Measures incorporates rules from the PIPL regarding individuals' exercise of their rights and explicitly requires Service Providers to establish a mechanism for receiving and handling user complaints and to deal with requests to correct, delete, or block personal information in a timely manner. The latter half of Article 13 also requires Service Providers to prevent ongoing harm when generated content infringes upon the rights or interests of others, such as their rights to privacy, image, reputation, or trade secrets, or in other unlawful circumstances. It is essential to note that Service Providers should take necessary measures to stop infringing or unlawful content during the generation phase and promptly delete or block it or disconnect related links, even terminating services if necessary. Therefore, Service Providers must bear a higher duty of care and implement monitoring and auditing mechanisms during the content generation phase, within a reasonable and technically feasible range.

(9) Obligation to Ensure the Stability of the Service

The AIGC Measures require Service Providers to provide stable services throughout the lifecycle of their generative AI services, where "lifecycle" refers to the period of existence of such services. However, per our observation, only a limited number of AIGC products currently on the market can provide "stable services." Further clarification from regulatory authorities is needed to interpret this requirement; absent such clarification, it may give rise to disputes between users and Service Providers in the future.

(10) Obligations to Optimize the Algorithm Model

The AIGC Measures mandate that Service Providers prevent the generation of inappropriate content by implementing content filtering measures, including removing sensitive words and other unsuitable content. Service Providers must also optimize and retrain algorithm models within three months of identifying inappropriate content or receiving a user complaint, so as to prevent the re-generation of such content. This algorithm training and optimization obligation aims to enhance content control by supplementing content filtering and other measures. However, Service Providers are now obligated under the AIGC Measures to address the "vulnerabilities" of artificial intelligence within three months, which is undoubtedly a high standard for AIGC Service Providers both technically and in practice. Relying on user reports as the trigger for algorithm optimization and retraining may also impose significant cost burdens on Service Providers.

(11) Obligations to Disclose and Educate

The AIGC Measures require Service Providers to disclose necessary information, including the source of pre-training and optimization training data, according to the requirements of the CAC and competent authorities. Service Providers must also educate users to understand and use the generated content rationally.

This obligation allows administrative authorities to proactively intervene in companies' technical development on the basis of their information disclosure obligations. It represents a shift from previous passive regulatory approaches, as the CAC and competent authorities now have the administrative power to obtain more information about the development of artificial intelligence systems from Service Providers. However, the boundaries between protecting a company's trade secrets and fulfilling information disclosure obligations remain to be clarified.

Regarding the obligation to educate users, Service Providers can fulfill it by incorporating relevant educational content into platform rules, privacy policies, or promotional pamphlets.

(12) Obligations to Uphold Social Morality

The AIGC Measures prohibit using AIGC products for illegal or unethical activities such as online hype, malicious posting and commenting, spamming, writing malicious software, and improper business marketing, emphasizing the Service Providers' obligation to monitor such behavior actively. However, there may be challenges in accurately identifying illegal or unethical behavior and balancing user privacy with regulatory needs. These issues require collective efforts from all stakeholders to address.

4. PENALTIES AND EFFECTIVE DATE

The AIGC Measures impose penalties based on the CSL, DSL, and PIPL and grant the CAC and competent authorities discretionary power to issue penalties in the absence of clear provisions in superior laws. Severe violations may result in suspension or termination of AIGC services or related licenses, and even criminal liability. These penalties have a strong deterrent effect. The AIGC Measures are set to take effect in 2023, and their official issuance is expected soon given the industry's current state.

5. CONCLUSION

The AIGC Measures provide new guidance for developing China's algorithm-related industries. However, some details still need to be clarified by lawmakers. The Measures impose high compliance obligations on AIGC Service Providers, raising the question of how to balance regulation with industry development. We recommend that AI companies provide practical recommendations and feedback to lawmakers during the public comment period to promote the effective implementation of the Measures.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.