ARTICLE
14 May 2026

Legal Considerations For EU Businesses Using China-Based AI Services (Part I: AI Governance)

Shaohe Law Firm

Contributor

Founded in 2007, Shaohe Law Firm has become one of the most trusted legal service providers for foreign entities in China, especially for European entities. We cover a wide range of practice areas with an emphasis on complex disputes, corporate/M&A, employment law, data compliance, intellectual property protection and tax law.

Since ChatGPT's debut and the rise of DeepSeek for deep reasoning, generative AI tools have become essential for business operations. Companies are increasingly embracing AI, albeit cautiously, by permitting workplace use, purchasing enterprise AI subscriptions and creating corporate AI accounts for employees, with the aim of bringing employee AI usage under corporate risk management.

When European multinationals, particularly their Chinese subsidiaries, permit employees to use China-based generative AI tools like DeepSeek, Kimi, or Doubao, what legal regimes might apply? This article examines the legal landscapes both in China and the EU from two lenses: AI governance and personal data protection. It addresses a pressing concern for such businesses: which Chinese/EU regulations apply, and how extensive are the compliance obligations?

I. AI Governance

A key concern for European multinationals is whether allowing China-based staff to use domestic AI services would subject them to several newly enacted AI regulations and the compliance burden that follows.

1) China's AI Regulatory Landscape

China has enacted targeted regulations governing specific categories of AI services, including the Interim Measures for the Management of Generative Artificial Intelligence Services, the Administrative Provisions on Deep Synthesis in Internet-based Information Services, and the Provisions on the Administration of Algorithmic Recommendations in Internet Information Services.1

However, most of the rules in these regulations bind providers only. Dedicated statutory obligations for AI users remain sparse. One notable exception is the Measures for Labeling Synthetic Content Generated by Artificial Intelligence, which has been in force since September 1, 2025. Article 10 requires users of AI services to proactively declare and label AI-generated content when publishing online and prohibits removing or tampering with such labels. Beyond this specific duty, using generative AI services is subject to the same legal conditions and principles as any other business activity. Companies must exercise heightened compliance vigilance to avoid infringing the personal rights or intellectual property rights of others and to prevent unfair competition practices such as false advertising.

2) The EU Artificial Intelligence Act ("AI Act")

The EU Artificial Intelligence Act (“AI Act”), in force since August 1, 2024, establishes a comprehensive legal framework governing various types of AI systems and models (not merely generative AI). Crucially, the AI Act contains extraterritorial jurisdiction provisions, stipulating that in certain circumstances it applies to activities and entities located outside the EU. This raises a critical question for European multinationals: could their Chinese subsidiaries’ use of domestic AI services bring them within the AI Act's scope?

2.1     Extraterritorial Jurisdiction under the AI Act

Article 2(1)(c) of the AI Act provides that if an entity located outside the EU uses an AI system and the output produced by that system is "used in the Union," such activity falls within the scope of the AI Act.

According to Recital 22 of the AI Act, the purpose of this extraterritorial provision is primarily to prevent "circumvention". Without it, an AI deployer could escape the AI Act's jurisdiction by routing AI operations through a non-EU entity, using non-EU services to generate content, then importing the outputs for EU use. By extending jurisdiction to the extraterritorial generation of AI outputs, the AI Act ensures that its rules remain valid and binding upon outputs that ultimately affect the Union and the rights and interests of individuals in the EU.

The practical significance of this rule is obvious. Consider a Chinese subsidiary, established by an EU parent company, that uses a Chinese AI service to produce outputs such as analytical reports or decision-making recommendations. If these outputs are subsequently used in the operations of the European parent company and actually affect employees or customers within the EU, the AI usage by the Chinese subsidiary (on its own, as a deployer in the sense of the AI Act) could fall within the scope of Article 2(1)(c). Yet admittedly some practical uncertainty remains. Because the AI Act is not yet fully implemented, enforcement practice, and specifically how regulators will interpret "use in the EU" for intangible outputs, is still developing. The breadth of Article 2(1)(c) therefore remains to be clarified through future regulatory practice.

Although the AI usage of Chinese subsidiaries appears to fall easily under the broad legal wording of Article 2(1)(c) of the AI Act, companies need not be overly concerned, since territorial coverage does not translate directly into operational burden. As discussed below, the AI Act imposes different obligations depending on the risk category of the AI system, and most obligations apply only to high-risk AI systems. Therefore, if the usage scenario of the Chinese subsidiary does not fall within the legally defined high-risk category, its actual compliance obligations may be minimal, even if the AI Act technically applies.

2.2     Risk Classification and Compliance Obligations

The AI Act categorizes AI systems into three risk levels: unacceptable risk (prohibited), high risk (heavily regulated), and general risk (mainly transparency obligations).

AI systems presenting unacceptable risk are prohibited from being developed or used. These include systems involving subliminal manipulation, exploiting vulnerable groups, or enabling workplace emotion recognition. Such AI systems typically require specialized development and often constitute severe fundamental rights violations. The standard generative AI usage by China-based subsidiaries discussed in this article does not fall within this category.

High-risk AI systems are not prohibited but are subject to rigorous conformity requirements. For EU-invested Chinese subsidiaries using generative AI services, the most pertinent high-risk category is workplace AI applications. According to Annex III, Section 4 of the AI Act, AI systems used in employment-related contexts, ranging from pre-employment screening to in-employment monitoring and termination-related decision-making, are classified as high-risk. Consequently, a Chinese subsidiary that uses generative AI tools for EU-connected HR functions (such as resume screening, recruitment advertising, candidate evaluation, workplace performance assessment, work scheduling, task allocation, dismissal decisions, severance evaluations, or non-compete assessments) must evaluate whether such usage triggers high-risk obligations. As an important softening exemption, Article 6(3) of the AI Act provides that if an AI system merely assists humans without "materially influencing the outcome of decision making", such use will not be classified as high-risk.

Finally, all AI systems (including general-risk systems) must comply with Article 50's transparency obligations. Deployers must ensure that individuals are aware that they are interacting with AI systems and can distinguish synthetic content from authentic material. Specifically, Article 50(4) requires clear disclosure when AI-generated images, audio, or video depict real persons, objects, locations, or events and could reasonably be mistaken for authentic material. This mirrors China's labeling requirements under the aforementioned Measures for Labeling Synthetic Content Generated by Artificial Intelligence, creating essentially overlapping compliance obligations for China-based subsidiaries serving EU markets.

Summary: AI Governance

From a regulatory standpoint, permitting staff at China-based subsidiaries to use generative AI services carries a limited compliance burden in both jurisdictions. In China, most AI regulations target AI service providers rather than users; in the EU, although the use of AI by Chinese subsidiaries may easily fall within the extensive territorial scope of the AI Act, the compliance risks remain manageable, provided that the Chinese subsidiary aligns its AI policies with those of its European parent company, particularly by avoiding the use of generative AI in high-risk scenarios (such as employment-related evaluation or decision-making concerning individuals located within the EU).

It should also be noted that the AI Act's high-risk system obligations and the transparency requirements under Article 50 have not yet begun to apply. Under the current application timetable (Article 113), these obligations will become applicable on August 2, 2026. However, according to a legislative reform plan released by the European Commission on November 19, 2025,2 the implementation of these obligations may be postponed further.

Footnotes

1 These regulations all stipulate their scope of application in Article 2, applying solely to entities providing these artificial intelligence services within the territory of China.

2 For more details, please see the legislative reform plan published by the European Commission on November 19, 2025: https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:52025PC0836.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.
