1 Legal and enforcement framework

1.1 In broad terms, which legislative and regulatory provisions govern AI in your jurisdiction?

In France, there is no comprehensive legal framework dedicated to AI or addressing the full complexity of new intelligent technologies (eg, AI, big data, the Internet of Things). Nevertheless, algorithms and their uses are framed, directly or indirectly, by rules and obligations set forth in various acts and codes, such as the following:

  • Act 78-17 dated 6 January 1978 relating to data processing, files and freedoms (‘Data Protection Act') and Implementation Decree 2019-536 dated 29 May 2019, which govern the processing of personal data necessary for the operation of algorithms;
  • Act 2016-1321 dated 7 October 2016 (‘Act for a Digital Republic') and Implementation Decree 2017-330 dated 14 March 2017 relating to the rights of persons subject to individual decisions taken on the basis of algorithmic processing, which aim to combat digital discrimination that may result from the algorithms used by online platform operators;
  • liability regimes set forth in:
    • the Civil Code (eg, liability for the actions of things; fault-based liability; liability for defective products);
    • specific acts (eg, the so-called ‘loi Badinter', which sets out a specific liability regime for road traffic accidents; the Public Health Code as regards liability for defective health products); and
    • the Criminal Code (eg, criminal liability due to the use of AI to commit an offence); and
  • the French IP Code, which determines the rights that parties can claim on:
    • the component elements of AI (eg, rights on databases or materials (eg, photos, videos) used to train AI models); and
    • the elements generated by AI (eg, software, creations).

In addition, a constitutional bill relating to the Charter of Artificial Intelligence and Algorithms was added to the agenda of the French National Assembly on 15 January 2020. The bill aims to include in the Preamble of the French Constitution a reference to a ‘Charter of Artificial Intelligence and Algorithms'.

1.2 How is established or ‘background' law evolving to cover AI in your jurisdiction?

The legal framework for AI is evolving in France thanks to flexible instruments of soft law such as reports, charters, white papers, guidance and guidelines.

In this respect, for example, the Act for a Digital Republic entrusted the French data protection authority (CNIL) with a mission to "reflect on the ethical and societal issues raised by the evolution of digital technologies". On 15 December 2017 the CNIL published a summary report entitled How Can Humans Keep the Upper Hand? The Ethical Matters Raised by Algorithms and Artificial Intelligence.

The French Senate's Economic Affairs Commission likewise tasked the Parliamentary Office for the Evaluation of Scientific and Technological Choices with reporting on AI. On 29 March 2017 it issued a report entitled For a Controlled, Useful and Demystified Artificial Intelligence, which recommended that AI be put "at the service of men and humanistic values".

Another example of such instruments is the government report initiated by Axelle Lemaire, former secretary of state for digital affairs and innovation, entitled France Artificial Intelligence: Synthesis Report.

On 28 March 2018 mathematician and deputy Cédric Villani also published his report entitled For a Meaningful Artificial Intelligence, the result of a parliamentary mission entrusted to him. Following this report, in December 2019 the French National Ethics Committee established the National Pilot Committee for Digital Ethics, whose first opinions are expected to address chatbots, autonomous vehicles and medical diagnosis in the era of AI. The committee is expected to publish its first annual report in the coming months.

Other non-binding initiatives are undertaken by scientific groups such as:

  • the Commission for Reflection on the Ethics of Research in Digital Science and Technology (CERNA), which has published several reports, including one entitled Research Ethics in Machine Learning; and
  • the National Research Institute for Digital Science and Technology (INRIA), which published a white paper on 16 September 2016 entitled Artificial Intelligence, Current Challenges and INRIA's Actions.

All of these initiatives provide food for thought on the amendments that could be necessary to the legal and regulatory framework for AI.

1.3 Is there a general duty in your jurisdiction to take reasonable care (like the tort of negligence in the United Kingdom) when using AI?

No, there is no such general obligation in France.

However, compensation for damages caused by AI may be sought under several liability regimes, such as the following:

  • Under the fault-based liability regime set forth by Article 1240 of the Civil Code, any fault – regardless of its seriousness and of the source of the duty breached – renders its author liable in the same way and obliges him or her to compensate the victim for the entire damage caused. This general principle allows the courts to define, through case law, the norms of social conduct and the duties of conduct whose violation constitutes a fault. The case law has recently established a new type of fault – ‘precautionary fault', based on the precautionary principle – which requires a high degree of vigilance in the presence of uncertain, unproven risks. This principle and the related fault may be particularly relevant in the context of AI. Finally, the French Supreme Court has identified a duty of vigilance regarding proven risks, which will also be relevant for AI.
  • Under the regime on liability for actions of things set forth by Article 1242 of the Civil Code, a party is responsible for things under his or her custody (there is no need for the victim to prove fault; it is sufficient that the damage was caused by a thing in the defendant's custody). However, this liability regime raises some questions due to the intangible nature of AI and the difficulty of identifying the party that can be considered the custodian.
  • The regime on liability for road traffic accidents may also be relevant in the AI context (please see question 3.1(c)).
  • The regime on liability for defective products set forth by Article 1245 of the Civil Code will also apply to AI insofar as ‘product' is understood in a very broad way. However, this liability regime raises some questions due to the intangible nature of AI and the difficulty of characterising the defect of the product.

1.4 For robots and other mobile AI, is the general law (eg, in the United Kingdom, the torts of nuisance and ‘escape' and (statutory) strict liability for animals) applicable by analogy in your jurisdiction?

In principle, insofar as robots and other mobile AI can be qualified as movable assets, they may be encompassed within the scope of existing laws applicable to such assets. Currently, however, there is no case law that has applied the existing liability regimes to AI.

Article 1243 of the Civil Code provides for a liability regime for animals, under which the owner or the person using the animal is liable for any damage caused by the animal, whether the animal is "under his care" or is "lost or escaped". Currently, this liability regime has not been applied to robots or other mobile AI by the French courts; but some authors suggest that transposing this regime to the AI sphere could be relevant and would make it possible to take into account the possibility of autonomous AI ‘escaping' from the control of its user.

As indicated in question 1.3, in the past, the regime on liability for actions of things set forth by Article 1242 of the Civil Code, as interpreted by the French courts, has been sufficiently broad to adapt to the liability issues presented by different types of things (eg, liquid oxygen, soft drinks, aerosol cans, electric trolleys). As such, some authors contend that this general liability regime is sufficiently broad to take into account the issues presented by AI, thanks to the broad interpretation by the French courts of the notion of ‘things' and past case law on the notion of custody. These authors suggest that it would simply be a matter of applying these by analogy to AI by adapting current concepts where necessary.

Nevertheless – and despite the French courts' broad interpretation of the different liability regimes in the past – other authors have called for the creation of a new liability regime dedicated to robots based on the current liability regime for animals.

1.5 Do any special regimes apply in specific areas?

There is no dedicated legal regime applicable to AI in France.

In terms of liability, some authors argue that the different civil liability regimes that currently exist in France (eg, fault-based liability, liability for actions of things, liability for defective products, liability of healthcare professionals, liability for road traffic accidents) can apply to AI depending on the context, and that it is just a matter of applying them by analogy to AI by adapting current concepts where necessary. Other authors have called for a new liability regime dedicated to robots.

1.6 Do any bilateral or multilateral instruments have relevance in the AI context?

The recent initiatives of the European Union in the AI context are very relevant. On 19 February 2020 the European Commission published a white paper on AI within the framework of the development of a coordinated European approach to the human and ethical implications of AI, as well as a reflection on the better use of big data to promote innovation. The European Parliament also adopted three reports on 20 October 2020 indicating how the European Union can regulate AI in order to stimulate innovation and enhance the reliability of this technology, while establishing ethical standards. These respectively covered:

  • an ethical framework for AI;
  • liability for damages caused by AI; and
  • IP rights.

In particular, the European Parliament has stressed the importance of an effective intellectual property system for the further development of AI, including with respect to patents and new creative processes. The European Commission is expected to present a legislative proposal on AI in early 2021.

Companies involved in AI are also keen to adopt ethical charters and codes of conduct, which are generally applicable to those companies themselves. It is expected that ethical charters and codes of conduct applicable across whole industries and sectors, rather than just to individual companies, will be adopted in the coming years.

1.7 Which bodies are responsible for enforcing the applicable laws and regulations? What powers do they have?

To date, no specific body has competence to enforce the laws and regulations applicable to AI. Due to the issues that AI may present, the bodies that are most likely to enforce laws and regulations on AI are the French courts and independent administrative authorities. For example, the French courts may have to rule on:

  • the ownership of IP rights on the components of AI or elements generated by AI; and
  • the liability issues raised by AI.

In this context, the French courts will have their usual powers (ie, for civil courts, the power to rule on IP ownership and to award damages; for criminal courts, the power to issue sanctions such as fines and imprisonment).

The CNIL may also be competent to monitor compliance with the applicable legislation on the processing of personal data in the context of AI. It can issue formal notices to companies in order for them to cease breaches or directly issue sanctions, including fines of up to €20 million or 4% of the company's total worldwide annual turnover in the preceding financial year, whichever is the higher.

1.8 What is the general regulatory approach to AI in your jurisdiction?

To date, there is no dedicated legal framework applicable to AI in France.

As such, the general regulatory approach to AI is a flexible legal approach of soft law, comprising resolutions, reports, white papers, guidelines and the recommendations of commissions and working groups that have been tasked with analysing specific aspects of AI (please see question 1.2).

Such commissions and working groups may propose to the French legislature the drafting of specific and dedicated instruments, or may call for amendments to existing provisions to adapt them to the specificities of AI.

2 AI market

2.1 Which AI applications have become most embedded in your jurisdiction?

When it comes to AI, there is often confusion between AI technologies and applications.

We believe that a distinction should be made on the French market between three levels of what are identified as ‘AI technologies', as the markets are structured differently according to the technology concerned:

  • Level 1: This comprises companies that analyse, structure and rationalise data or databases. In this case, it is more a question of ‘business intelligence' – a technology for representing and structuring information. The business intelligence market is very large, but less technologically advanced.
  • Level 2: This comprises companies that conduct automated data analysis using a technique known as ‘machine learning'. By analysing data, this technology enables intuitions to be confirmed, for example by making connections (eg, when it rains, there are more accidents). It can also highlight what are known as ‘weak signals' – that is, logical links in a set of data that could not have been detected without the use of this technology (a minimal illustration follows this list). This is a rather structured market.
  • Level 3: This comprises companies offering solutions that integrate ‘deep learning' technology, which is at the heart of AI. This involves ‘teaching' the machine to understand information that was not accessible to it (eg, images, sounds). This is the most innovative, but smallest AI market.
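
By way of purely hypothetical illustration (no code appears in the source), a minimal ‘level 2' machine-learning sketch in Python might confirm the rain/accidents intuition above by fitting a simple model to synthetic data; all names and values are assumptions:

```python
# Illustrative sketch only: confirming an intuition ('when it rains, there are
# more accidents') from data, in the 'level 2' sense described above.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic observations: rainfall (mm) on a road segment and whether an
# accident occurred that day. Generated so that rain genuinely raises the risk.
rainfall = rng.exponential(scale=5.0, size=2000)
p_accident = 1 / (1 + np.exp(-(0.25 * rainfall - 2.0)))
accident = rng.binomial(1, p_accident)

# Fit a simple model; a positive coefficient confirms the intuition.
model = LogisticRegression().fit(rainfall.reshape(-1, 1), accident)

print(f"rainfall coefficient: {model.coef_[0][0]:.2f}")
print(f"P(accident | 0 mm):  {model.predict_proba([[0.0]])[0][1]:.2f}")
print(f"P(accident | 20 mm): {model.predict_proba([[20.0]])[0][1]:.2f}")
```

‘Level 3' deep learning differs in that the machine learns such relationships from raw signals (eg, images or sounds) rather than from hand-chosen features.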

2.2 What AI-based products and services are primarily offered?

The AI-based products and services that are commercialised in France are of a very different nature (eg, chatbots, autonomous vehicles (AVs), speech-to-text applications), depending on:

  • the technology concerned (see question 2.1); and
  • the targeted market (ie, business to business (B2B) or business to consumer (B2C)).

As far as deep learning is concerned, AVs and facial recognition/supervision are the most developed products and applications to date.

The B2B market is the most mature and technologically advanced market. Companies offering AI-based products and services in this market are generally structured in one of the following ways:

  • Application segments, which use AI to solve functional and/or sectoral problems: In this case, companies offer AI applications in a specific field to address a specific need (eg, analysis of damage in the automotive field, fraud detection). Companies offering AI-based products and services on the B2B market are in the majority, representing some 80% of AI start-ups.
  • The more upstream segments, which offer useful and specific products, services and technologies for the creation of AI-based applications: In this case, companies develop AI technology or services to make them available to their customers for specific use in their various fields of activity, regardless of the field of activity concerned. This is a smaller market to date, but some ambitious and disruptive projects are emerging.

2.3 How are AI companies generally structured?

AI companies can be of any size and consequently have very different structures (eg, large groups, small and medium-sized enterprises).

However, in France, AI companies are mostly start-ups, which are very involved in innovation. Most of these start-ups are simplified joint stock companies – a form which allows for significant flexibility in the organisation and management of the company.

2.4 How are AI companies generally financed?

AI companies can have very different forms and structures.

When AI technologies are developed and commercialised by start-ups, they can use a wide variety of financing methods, including:

  • bootstrapping;
  • crowdfunding;
  • bank loans;
  • acquisition of shares by professional investors (business angels);
  • fund raising through venture capital funds; and
  • intervention of the public investment bank, BPIfrance, which finances start-ups in collaboration with the French Tech ecosystem, in the form of a grant, a contribution or a loan.

2.5 To what extent is the state involved in the uptake and development of AI?

Since 2017, the French government has declared the uptake and development of AI to be a strategic priority for France. The implementation of a regulatory and financial framework favourable to the emergence of AI, through the provision of special support for research projects and start-ups in AI, is a crucial part of its development strategy.

France's unique position in the field of AI can be explained by the existence of a high-performance ecosystem which has many advantages. First, numerous quality training courses are available, including AI-specialised master's degrees; and many doctoral theses have been dedicated to AI topics.

Second, France is an attractive territory for investors, with incentives including the following:

  • Young Innovative Company Status is available for any technology company that has been established for less than eight years and that conducts research and development (R&D), as long as it meets the attribution criteria. Such companies benefit from exemptions on their tax charges (eg, corporate income tax, annual flat tax) and their social charges (eg, URSSAF employer's contributions) for a period of eight years from their establishment.
  • The Research Tax Credit (RTC) is a generic measure that aims to support corporate R&D activities, with no sector or size restrictions. Businesses that incur expenses for fundamental research and experimental development can benefit from the RTC by deducting these expenses from their income tax under certain conditions. The rate of the RTC varies according to the amount of the investment.
  • Numerous European and French innovation competitions are open to AI companies, which award funding for R&D (eg, funding by BPIfrance).
  • As a public investment bank, the role of BPIfrance is to provide financial support to companies and start-ups by providing capital or guaranteeing the financial loans that companies can take out (eg, the i-Lab programme, the French Tech Scholarship).

Finally, the state encourages the establishment of research laboratories in France in order to further develop its ecosystem: about 88 laboratories and R&D centres have been established, and more than 13,000 researchers are working on AI or related issues. In particular, France hosts the research centres of some of the world's leading AI companies, such as Google, Facebook, Microsoft, Uber, Fujitsu, IBM, Criteo and Thales.

3 Sectoral perspectives

3.1 How is AI currently treated in the following sectors from a regulatory perspective in your jurisdiction and what specific legal issues are associated with each: (a) Healthcare; (b) Security and defence; (c) Autonomous vehicles; (d) Manufacturing; (e) Agriculture; (f) Professional services; (g) Public sector; and (h) Other?

(a) Healthcare

The fields of application of AI in medicine are numerous and include:

  • computer-assisted surgeries;
  • remote patient monitoring;
  • intelligent prostheses;
  • diagnostic assistance; and
  • personalised treatments.

The reports listed in question 1.2 deal with AI in the healthcare sector, among other things. Most of them highlight the importance of preserving the decisional power of doctors with regard to AI, and recommend the development of technical devices using AI to assist in making medical decisions, rather than imposing on doctors or patients a decision made by the algorithms. They also emphasise the importance of training healthcare professionals to understand the global operating approach, in order to identify the limits of AI and of the recommendations/solutions it affords.

In November 2016 the High Authority for Health issued Good Practice Guidelines on Health Apps and Smart Devices (Mobile Health or mHealth), with the aim of guiding and promoting the use of connected applications and objects, and strengthening confidence in this regard. The guidelines set out good practices covering the reliability of health content, data protection and cybersecurity. Such guidelines are also relevant for AI software.

The French Data Protection Act also contains several provisions dedicated to the processing of health data, which may apply to the processing of health data in the context of developing or running medical software that includes AI elements (eg, computer-assisted surgery, remote patient monitoring, diagnostic assistance). However, these provisions are not specific to AI.

The Act of 26 January 2016 on the modernisation of the healthcare system led to the creation of the National Health Data System, which brings together the main public health databases and sets out the rules on the use of such data. Under certain conditions, this data may be used by companies to conduct research that could further and/or lead to the development of software incorporating AI.

The current French bill on bioethics addresses the algorithmic processing of genetic data. For example, the bill aims to ensure that:

  • patients are properly informed where a medical act involves the algorithmic processing of massive data; and
  • a healthcare professional can intervene to adapt the settings of the processing, thus respecting the principle of a human guarantee in the use of AI.

The specific legal issues that are not yet addressed by the current regulatory framework include the following:

  • Medical liability regime: Is the current medical liability regime sufficient to adapt to specific issues that might be raised due to AI or is a dedicated liability regime necessary?
  • Eugenics: Is the current legal framework (Article 214-1 of the French Criminal Code) sufficient to address the issues that might be raised by AI (transhumanism and augmented humans) or are amendments necessary?

(b) Security and defence

In France, the number of AI military applications is increasing; they include computer vision, intelligent robotics, distributed intelligence, automatic language processing, semantic analysis and data crossing. On 13 September 2019 the Ministry of the Armed Forces issued a report outlining a roadmap for the deployment of AI on battlefields, with concrete examples, and raising ethical issues related to its use (AI at the Service of Defence). The authors of the report identified several "priority areas of effort", including decision support, robotics, cyber defence, intelligence, logistics and support, maintenance in operational condition and "collaborative combat". To this end, a new organisation within the ministry specifically dedicated to AI was created on 1 September 2018 by Florence Parly, minister of the armed forces: the Defence Innovation Agency, which is tasked with ensuring the coordination and consistency of all of the ministry's innovation initiatives.

Examples of specific AI military applications include the following:

  • in the field of communications, use of a system that can adapt in real time, based on data from monitoring satellites, to avoid failures and automatically re-plan operations during a mission if necessary; and
  • AI-based tools and weapons to support on-site missions – drones that adapt in real time to the situation on-site, fighter planes equipped with virtual voice assistants and battle tanks that can be accompanied by semi-autonomous robots with the ability to operate in complex environments. As far as autonomous lethal weapons are concerned, France has no plans to develop fully autonomous systems that are totally beyond human control in the definition and execution of their mission.

The Ministry of Defence has called for "trusted AI" and the establishment of international standards. In its report, the ministry highlighted the need for France to strike the right balance between benefiting from what large private and often foreign digital groups can offer in terms of AI, without becoming dependent on them, and developing its own military applications.

The use of AI in the field of security and defence presents specific difficulties and challenges. Beyond the issues of national sovereignty, the regulations applicable to classified information can be a constraint on the use of AI. Indeed, Articles R 2311-1 and 2 of the Defence Code classify information considered as defence and national security secrets as follows:

  • Top secret defence: Reserved for information and materials that concern government priorities in defence and national security, and whose disclosure is likely to seriously harm national defence;
  • Defence secret: Reserved for information and materials whose disclosure is likely to cause serious harm to national defence; and
  • Defence confidential: Reserved for information and materials whose disclosure is likely to harm national defence or could lead to the discovery of a secret classified at top secret or defence secret level.

Classified information can be accessed only with a security clearance, which differs according to the classification level of the information. It is well known that AI-based systems can become intelligent only if they have enough relevant data to learn from: without data, an algorithm is necessarily blind; and without an algorithm, data is definitely mute. Although nothing prevents such data from being injected into a deep learning process, the restricted nature of access to such data may limit the players that are seeking to develop AI-based technologies in the defence sector.

(c) Autonomous vehicles

The rules governing the circulation of autonomous vehicles (AVs) for experimental purposes can be found in several instruments:

  • the Act on the Energy Transition for Green Growth, dated 17 August 2015;
  • the Order Relating to the Experimentation of Delegated Driving Vehicles on Public Roads dated 3 August 2016;
  • the Decree Relating to the Experimentation of Delegated Driving Vehicles on Public Roads dated 28 March 2018;
  • the Order Relating to the Experimentation of Delegated Driving Vehicles on Public Roads dated 17 April 2018; and
  • the so-called Loi Pacte, dated 22 May 2019.

These texts authorise the circulation on public roads of vehicles with total or partial delegation of driving authority, for experimental purposes (ie, in order to carry out technical tests or to evaluate the performance of these vehicles). Experimentation is subject to prior authorisation. The grant of authorisation is subject to the condition that the system of delegated driving may be neutralised or deactivated by the driver at any time. In the absence of a driver on board, it is necessary to provide evidence that a driver located outside the vehicle, responsible for supervising the vehicle and its driving environment during the experiment, will be ready to take control of the vehicle at any time in order to take the steps necessary to ensure the safety of the vehicle, its occupants and road users.

The Loi Pacte further clarifies the criminal liability regime applicable in the event of accidents that occur during experiments on AVs. This act exempts the driver from criminal liability during periods when the system of driving delegation is activated if the following cumulative conditions are met:

  • The driver must have activated the driving delegation system in accordance with its conditions of use; and
  • The driving delegation system must be in operation and must inform the driver, in real time, that it is in a position to observe traffic conditions and carry out any manoeuvre independently without delay (instead of the driver).

According to these provisions, instead of the vehicle driver, criminal liability will be borne by the holder of the prior authorisation for experimentation, which will have to pay any fines imposed and any damages awarded in case of accidents.

Currently, this liability regime is limited to the situations of experimentation outlined above. French commentators have thus questioned whether the current French liability regime for road traffic accidents (the so-called loi Badinter, dated 5 July 1985) is sufficient to address the issues raised by AVs (with slight adaptations where necessary – for example, on the notion of ‘driver'); or whether a dedicated liability regime would be necessary, under which the liability of different stakeholders (eg, driver, car manufacturer, AI provider) might be triggered.

(d) Manufacturing

There is no dedicated regulation in this sector as yet.

(e) Agriculture

There is no dedicated regulation in this sector as yet.

(f) Professional services

There is no dedicated regulation in this sector as yet.

(g) Public sector

The French Act for a Digital Republic (Act 2016-1321 dated 7 October 2016) introduced the principle of transparency of public algorithms used as a basis for individual administrative decisions. Its provisions have been transposed in the French Code of Relationships between the Public and the Administration. According to these provisions, whenever an individual decision is taken on the basis (even partially) of an algorithm, the administration must:

  • include an "explicit statement" on the relevant documents (eg, notices, opinions) informing the user that the decision concerning him or her has been taken on the basis of an algorithm. This statement must outline:
    • the purposes of the processing;
    • the user's right to know about the "main features" of this processing; and
    • how this right can be exercised (Articles L311-3-1 and R311-3-1-1); and
  • explain, at the request of the individual, how the relevant algorithm works (Articles L311-3-1-2 and R311-3-1-2), by providing the following information:
    • the degree and mode of contribution of the algorithm to the decision making;
    • the data processed and its sources;
    • the processing settings and their weighting applied to the situation of the data subject; and
    • the operations carried out by the processing.

For administrations with at least 50 agents or employees, the administration must provide general information (Article L312-1-3), which involves publishing online the "rules defining the main algorithmic processing used in the accomplishment of their missions" – provided, once again, that these form the basis of individual decisions.

These provisions are complemented by Article 47 of the Data Protection Act, which provides that the data controller must ensure that it remains in control of the algorithmic processing and its development, so that it can explain to the data subject, in detail and in an intelligible form, the manner in which the processing has been carried out on him or her.

To assist administrations in complying with such obligations, in March 2019 Etalab (a French public body) published a guide for the attention of administrations, together with a practical factsheet on the explicit statement that must be provided to the user.
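
Purely as an illustration (this is our sketch, not a schema taken from the Etalab guide or the cited articles), an administration could hold the information required by the above provisions in a structured record, from which the explicit statement can be generated; all field names, values and functions below are hypothetical:

```python
# Hypothetical sketch: a structured record of the disclosures required when an
# individual administrative decision is based on algorithmic processing.
from dataclasses import dataclass

@dataclass
class AlgorithmDisclosure:
    purpose: str                    # purposes of the processing
    contribution: str               # degree and mode of contribution to the decision
    data_and_sources: list[str]     # data processed and its sources
    settings_and_weighting: str     # processing settings and their weighting
    operations: list[str]           # operations carried out by the processing
    rights_contact: str             # how the user can exercise his or her rights

def explicit_statement(d: AlgorithmDisclosure) -> str:
    """Render the 'explicit statement' to include on notices and opinions."""
    return (
        "This decision is based in part on algorithmic processing.\n"
        f"Purpose: {d.purpose}\n"
        "You have the right to be informed of the main features of this "
        f"processing. To exercise this right: {d.rights_contact}."
    )

disclosure = AlgorithmDisclosure(
    purpose="Allocation of school places",
    contribution="The algorithm proposes a ranking; an officer takes the decision",
    data_and_sources=["application file (applicant-provided)", "residence data"],
    settings_and_weighting="Distance 60%, siblings 25%, priority criteria 15%",
    operations=["scoring", "ranking", "tie-breaking"],
    rights_contact="write to the administration's designated contact point",
)
print(explicit_statement(disclosure))
```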

(h) Other

Numerous AI applications have emerged in the legal sphere. Many legal tech firms are using machine learning processes to develop software and applications for tasks such as:

  • analysing case law;
  • identifying the chance of success;
  • anticipating the result of litigation; and
  • assisting with due diligence.

The development of such software is facilitated by the implementation of open data policies due to the obligations set forth in the Act for a Digital Republic.

The introduction of AI software in the judicial system is also being explored. For example, the judges of the Courts of Appeal of Douai and Rennes tested, for three months, software that aims to predict judicial decisions.

Some provisions have been inserted in the General Data Protection Regulation and the Data Protection Act in order to protect data subjects from the adverse consequences that the sole utilisation of algorithms may have on judicial decisions concerning them. In this respect, Article 47 of the Data Protection Act provides that: "No judicial decision involving an assessment of a person's conduct may be based on the automatic processing of personal data intended to evaluate certain aspects of that person's personality."

4 Data protection and cybersecurity

4.1 What is the applicable data protection regime in your jurisdiction and what specific implications does this have for AI companies and applications?

The applicable data protection regime is as follows:

  • the EU General Data Protection Regulation (2016/679) (GDPR);
  • the Police-Justice Directive (2016/680);
  • the Privacy and Electronic Communications Directive (2002/58/EC);
  • EU Regulation 2018/1807 on a framework for the free flow of non-personal data in the European Union;
  • the Data Protection Act, updated to ensure compliance of national laws with the GDPR and the Police-Justice Directive;
  • Decree 2019-536 of 29 May 2019, adopted for the application of the Data Protection Act;
  • the Electronic Communication Code – in particular, Articles L32-3 and following (use of electronic communications data for statistical purposes, to improve the services or for direct marketing) and Article L34-5 (use of data for direct marketing purposes); and
  • the guidelines of the European Data Protection Board and the French data protection authority (CNIL).

In addition, the recent proposal for a regulation of the European Parliament and of the Council on European data governance will also be relevant once adopted.

The primary implications of these regulations are as follows. First, it is necessary to determine whether the data injected into AI processes (whether to train models or to use applications that incorporate AI) constitutes personal data, in order to identify the applicable regulations and obligations.

Where AI processes involve the processing of personal data, the issues and implications will differ depending on the stage:

  • At the stage of AI development (ie, where databases are used to train models), the issues and available solutions will differ depending on whether the company has created a specific database to train AI models or wants to use existing databases (created for other purposes) to train AI models. In any case, the company will need to:
    • ensure that it has appropriate legal grounds to process the personal data, and/or that the purpose related to training AI models is compatible with the initial purposes of processing (the CNIL has a very strict doctrine on web scraping techniques that involve the collection of publicly available data to use it for different purposes);
    • inform data subjects appropriately regarding the use of their data; and
    • ensure that data subjects can exercise their rights (which in practice is not always simple, depending on the data used to train models).
  • The use of applications that incorporate AI will raise specific issues only if the application makes automated decisions based on AI. Companies will then have to determine whether the application falls within the scope of Article 22 of the GDPR and Article 47 of the Data Protection Act (a hedged sketch of this assessment follows this list) and, if so, ensure compliance with those articles – for example, by:
    • obtaining the explicit consent of the data subject;
    • relying on the need to process the data for the performance of a contract between the data controller and the data subject; or
    • being authorised to process the data by national or EU law.
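
As an illustrative aid only – this encodes our reading of Article 22(1) and (2) of the GDPR as booleans and is in no way a substitute for legal analysis; all names are assumptions – the assessment mentioned above could be sketched as follows:

```python
def article_22_applies(
    solely_automated_decision: bool,
    legal_or_similarly_significant_effects: bool,
) -> bool:
    """Article 22(1) GDPR: a decision based solely on automated processing
    which produces legal effects concerning the data subject or similarly
    significantly affects him or her."""
    return solely_automated_decision and legal_or_similarly_significant_effects

def article_22_exception_available(
    explicit_consent: bool,
    necessary_for_contract: bool,
    authorised_by_eu_or_member_state_law: bool,
) -> bool:
    """Article 22(2) GDPR: grounds on which such a decision may still be taken
    (suitable safeguards for the data subject are required in each case)."""
    return (explicit_consent or necessary_for_contract
            or authorised_by_eu_or_member_state_law)

# Example: a fully automated loan refusal with no exception secured.
if article_22_applies(True, True) and not article_22_exception_available(False, False, False):
    print("Prohibited by default: redesign the process or secure a valid ground.")
```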

The development and use of AI also raise the question of the qualifications and roles of the parties – that is, the company developing the application incorporating AI and the company using the AI (and sometimes providing data to be used to train and develop tailored models and applications) – with respect to the GDPR. Are these parties acting as joint controllers? As separate data controllers? Or in the context of a data controller/data processor relationship?

4.2 What is the applicable cybersecurity regime in your jurisdiction and what specific implications does this have for AI companies and applications?

The applicable cybersecurity regime is set out in:

  • the GDPR;
  • the Data Protection Act, which includes cybersecurity requirements and provides for sanctions when personal data is involved;
  • Act 88-19 dated 5 January 1988 on computer fraud, which establishes the offences relating to automated data processing systems, now codified in the Criminal Code (Articles 323-1 to 323-8);
  • the Military Programming Act 2013-1168 dated 18 December 2013 (for 2014 to 2019) and the Military Programming Act 2018-607 dated 13 July 2018 (for 2019 to 2025), which provide requirements for operators of vital importance;
  • the Network and Information Systems (NIS) Directive (2016/1148) and Act 2018-133 dated 26 February 2018 (and its implementing Decree 2018-384), which provide specific NIS requirements for operators of essential services and digital service providers;
  • Regulation 2019/881 of 17 April 2019, which lays down the main requirements for European cybersecurity certification schemes with respect to information and communications technology products, services and processes;
  • the Second Payment Services Directive (2015/2366), transposed in the French Monetary and Financial Code, which sets out provisions for payment service providers' information systems;
  • Regulation 2017/745, which applies to medical devices that include software components;
  • the Decree of 22 November 2019, which sets out several cybersecurity requirements for the information systems of digital asset service providers; and
  • the Public Health Code, which sets out specific security requirements applicable to health data hosting service providers.

AI is primarily relevant in the cybersecurity context in two ways:

  • It is used to improve cybersecurity, by embedding AI technology into cybersecurity equipment and applications (eg, firewalls, endpoint detection and response solutions), and/or to detect, investigate and respond to abnormal behaviour in a security operations centre.
  • Cybersecurity assumes even more crucial importance in the case of AI applications, due to the potential adverse impacts that could result from a cyber-attack (eg, on autonomous vehicles, computer-assisted surgery or remote patient monitoring) or from unauthorised access to the data processed by the application (eg, there is a high risk for individuals in case of unauthorised access to an application that stores databases of photos/patterns for facial recognition or biometric data for biometric access). Companies should thus take care:
    • to implement robust security measures in order to prevent accidental or unlawful destruction, loss, alteration and unauthorised disclosure of, or access to, personal data that is transmitted, stored or otherwise processed, in compliance with Article 32 of the GDPR; and
    • to undertake a data protection impact assessment prior to the implementation of an application that incorporates AI functions.

    5 Competition

    5.1 What specific challenges or concerns does the development and uptake of AI present from a competition perspective? How are these being addressed?

    AI needs a significant volume of data in order to continue to learn. In this regard, the extent to which the data market is concentrated in the hands of a few web giants presents competition issues. These issues can arise at the stage of both access to and exploitation of data.

    As regards access to data, the following points arise:

    • Turnover thresholds are often problematic when applying merger control rules to digital markets, because companies in the technological and digital spaces do not always meet them.
    • Data is fundamental for the development of AI. However, can data be considered an essential facility within the meaning of competition law? This concept has been defined in several decisions, and it is not easy to translate the conditions set out in those decisions to access to data, due to the non-rivalrous nature of data.

    As regards the exploitation of data, the use of algorithms may change the competitive functioning of markets. The use of algorithms has direct consequences on consumer choices and consumption patterns, but also on the strategies and decisions of companies. Algorithms do not in themselves represent a new anti-competitive risk, but may reinforce existing risks of collusion between competitors (eg, the same algorithm may be used by competitors to determine their prices, or the use of algorithms may lead competitors to adopt the same prices) and abuse of a dominant position. Indeed, algorithms and big data strengthen the information resources of companies by increasing the volume of data and the speed at which this information is exchanged with competitors, and by enhancing raw data processing capacity.

    Several cases of foreclosure abuse relating to the anti-competitive use of algorithms have also been observed recently (eg, the European Commission's Google Shopping decision of 27 June 2017), in which the partial or total foreclosure of competitors in downstream markets resulted from the algorithmic manipulation of natural search results.

    The effectiveness and efficiency of several traditional regulatory and competition authority tools have been questioned in the context of algorithms. However, in the Algorithms and Competition study published jointly in November 2019 by the French Competition Authority and the German Bundeskartellamt, the authorities concluded that: "in the situations considered so far, the contemporary legal framework, in particular Art. 101 TFEU and its accompanying jurisprudence, allows competition authorities to address possible competitive concerns. In fact, competition authorities already have dealt with a certain spectrum of cases involving algorithms, which have not raised specific legal difficulties."

    6 Employment

    6.1 What specific challenges or concerns does the development and uptake of AI present from an employment perspective? How are these being addressed?

    Numerous debates and discussions are ongoing in the employment field.

    One concern relates to the potential impact of the development of AI on employment. Some fear that this could lead to the destruction of certain jobs and thus have a negative impact on employment; others are more positive and foresee a positive job creation/destruction balance – and, as a bonus, an evolution towards more interesting jobs, centred on human skills. Supporting change in the workplace thus remains a central challenge. As such, many working groups have been organised to examine the industrialisation and transfer of AI technologies to different economic sectors by maximising the economic benefits, and to anticipate the macroeconomic and social impacts of AI.

    A second cause of concern relates to the impact of AI and its consequences on labour law. Generally speaking, the Labour Code already contains mechanisms that can be adapted to the introduction of technical devices based on AI. In practice, however, the application of these provisions to the use or existence of AI devices will raise many questions, such as the following:

    • the regulations applicable to work-related accidents caused by employees using an AI device, whether that takes the form of a robot or any other machine;
    • psycho-social risks for employees; and
    • use of an AI device by an employer in the context of decision making and risks of bias.

    These issues, which have not been addressed to date by specific legal provisions relating to AI, will have to be addressed in practice by the courts; or if necessary, may be the subject of a further reform of the applicable labour law.

    7 Data manipulation and integrity

    7.1 What specific challenges or concerns does the development and uptake of AI present with regard to data manipulation and integrity? How are they being addressed?

    As mentioned in question 9.3, one of the main risks attached to AI relates to the potential bias and discrimination that may result from algorithms or from the data used to train the models on which an algorithm is based.

    Indeed, AI needs a broad range of data to be properly trained – both in volume and with respect to the variety of use cases. If the data set is not sufficiently varied, the AI will likely reproduce and amplify a specific situation, and thus lead to discriminatory decisions. Companies must also ensure that the sample of data used to train models is not manipulated; otherwise, the algorithm is likely to use shortcuts to make predictions and will thus produce incorrect results.

    As such, companies must ensure that AI solutions are built in a robust way, so that they provide reliable results throughout their entire lifecycle, with limited errors or inconsistencies in their predictions. In this context, both the design of the algorithm and the sampling of training data are crucial.
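
    One practical (and purely illustrative) way to approach the sampling point above is to check the balance of the training sample across a relevant attribute before training; the threshold and attribute names below are arbitrary assumptions, and real bias audits are considerably more involved:

```python
# Illustrative sketch only: checking training-sample balance across groups.
from collections import Counter

def group_shares(samples: list[dict], attribute: str) -> dict[str, float]:
    """Share of each group, by the given attribute, within the training sample."""
    counts = Counter(s[attribute] for s in samples)
    total = sum(counts.values())
    return {group: n / total for group, n in counts.items()}

# Toy training sample: group 'B' is heavily under-represented.
training_sample = [
    {"group": "A", "label": 1}, {"group": "A", "label": 0},
    {"group": "A", "label": 1}, {"group": "A", "label": 1},
    {"group": "B", "label": 0},
]

MIN_SHARE = 0.3  # arbitrary illustrative threshold
for group, share in group_shares(training_sample, "group").items():
    if share < MIN_SHARE:
        print(f"Warning: group {group!r} is under-represented ({share:.0%} of sample)")
```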

    Guidance on this topic suggests that the best way to ensure this is:

    • to adopt design and verification procedures that eliminate cognitive, statistical and economic biases from AI; and
    • to subject such procedures to accountability.

    8 AI best practice

    8.1 There is currently a surfeit of ‘best practice' guidance on AI at the national and international level. As a practical matter, are there one or more particular AI best practice approaches that are widely adopted in your jurisdiction? If so, what are they?

    There is an abundance of guidance regarding best practices on AI, at both the national and international level. However, it is difficult to say whether any of these approaches have been particularly adopted by actors involved in AI.

    Guidance from the European Union, the Organisation for Economic Co-operation and Development (OECD) and the Institute of Electrical and Electronics Engineers (IEEE) is most commonly cited in other published guidance.

    In December 2020 a practical guide to implementing trustworthy AI was published by the think tank Impact AI. This reflects the work of 16 French businesses which are members of Impact AI's working group and which represent a broad range of industries (eg, banking, insurance, software publishing, telecommunications, consultancy, transportation, energy).

    8.2 What are the top seven things that well-crafted AI best practices should address in your jurisdiction?

    The proliferation of guidance and guidelines on AI has resulted in the publication of nearly 50 principles aimed at regulating AI. Although these principles may sometimes have different names, they can be categorised – as proposed by the European Commission and the AI4People working group – under five main themes, as follows:

    • Principle of beneficence: AI should increase human prosperity by strengthening human and societal wellbeing and the common good, and by serving as an agent of progress and innovation.
    • Principle of non-maleficence: AI systems should not undermine, aggravate or harm human beings in any way. This includes the protection of human dignity as well as mental and physical integrity.
    • Principle of autonomy: AI systems should not subordinate, coerce, deceive, manipulate, condition or control human beings, who should be able to maintain their full and effective self-determination to take part in the democratic process.
    • Principle of justice: To the furthest extent possible, detectable and discriminatory bias should be eliminated from the data collection stage onwards; and a control procedure should be implemented to analyse, in a clear and transparent manner, the purposes, constraints, requirements and decisions of the system.
    • Principle of explicability and transparency: The data sets and processes by which an AI system renders a decision – including the data collection and tagging processes, as well as the algorithms used – should be documented to the highest standards, to allow for traceability and improved transparency. This also applies to the decisions rendered by the AI system.

    In addition, in its practical guide to implementing trustworthy AI, Impact AI has proposed the following seven ethical pillars (based on those of the IEEE, the OECD and the European Union), to be implemented when crafting an AI project:

    • Dignity: AI should serve the user, who should be able to make his or her own decisions.
    • Robustness: AI should be reliable over the long term.
    • Data governance: AI should respect privacy.
    • Transparency: AI should be interpretable and explicable.
    • Equity: AI should treat everyone fairly.
    • Sustainable development and wellbeing: AI should help to resolve universal problems.
    • Responsibility: It should be possible to justify the effectiveness of the mechanisms used in AI to minimise risks.

    8.3 As AI becomes ubiquitous, what are your top tips to ensure that AI best practice is practical, manageable, proportionate and followed in the organisation?

    This is a complex question and the response will intrinsically depend on the company concerned, its operations and its sector of activity.

    In practice, according to its own criteria and based on the best practices usually implemented in its sector of activity, each company will need to determine:

    • the nature of the best practices to be observed in relation to the use of AI technologies (whether these are integrated into tools used within the company or into its products and services); and
    • the appropriate internal channels to ensure their effectiveness.

    In our opinion, particular attention should be paid to:

    • the implementation of good practices in line with the recommendations of the relevant authorities and work groups in the relevant sector of activity; and
    • the implementation of an internal policy to support these good practices within the organisation by:
      • appointing dedicated contacts to take charge of these issues;
      • scheduling training sessions to make teams aware of the challenges relating to AI; and
      • ensuring the effectiveness of these good practices and addressing any difficulties that may arise from them in practice.

    9 Other legal issues

    9.1 What risks does the use of AI present from a contractual perspective? How can these be mitigated?

    The main contractual risks arise from the fact that there is no single contractual scheme, given the many specificities and applications of deep learning.

    The use of an AI-based product or service will be governed – like that of any product or service – by the legal terms and conditions applicable to the use of that product or service, in both business-to-business and business-to-consumer relationships (eg, terms of use; agreements negotiated between a provider and a customer, or between co-developers). AI technologies can differ significantly by nature (eg, software as a service; on-premises software; development and delivery of a custom mass data processing app to be integrated into a larger system belonging to the customer), and thus require very different contractual frameworks. The contractual scheme is usually part of one of the many forms of IT contracts that exist, which must be adapted according to the project in question and the AI technology concerned.

    Several difficulties may arise in particular with regard to IP rights. Thus, in order to negotiate and implement the appropriate contractual scheme and provisions, particular attention should be paid to the specificities of the AI process, its operation and all of its components, so that the parties can identify and determine all rights they can claim on the elements incorporated in or generated by the process (eg, raw data used for learning purposes, data sets, labels, neural network, models, output data) – in particular, according to their respective role in the transformation of the data.

    Difficulties may also arise with regard to data protection. The contracting parties must first determine whether the data injected into the deep learning process constitutes personal data, in order to identify the regulations applicable to that data. Often, the operation of AI technology will involve the processing of personal data, in both the development phase and the use phase. The parties' obligations will be defined according to their respective qualification as controller/processor, joint controllers or separate controllers under the data protection legislation. It is crucial to determine these qualifications according to the uses that the parties wish to make of the data, both for the duration of the contract and at its end, in order to integrate an appropriate and compliant clause into the contract.

    9.2 What risks does the use of AI present from a liability perspective? How can these be mitigated?

    As indicated in questions 1.3 to 1.5, as yet there is no liability regime specifically dedicated to AI in France.

    Depending on the context, however, fault-based liability, liability for actions of things and liability for defective products may apply.

    The risks and limits of these liability regimes relate to the fact that they do not take into account the intrinsic specificities of AI per se. The learning and autonomous dimensions are essential components of AI, but these liability regimes have not necessarily been constructed to take account of this.

    First, the liability for defective products which would apply in case of a product defect is subject to exemptions set out in Article 1245-10 of the Civil Code. In particular, a producer can be exonerated from its liability for defective products if it can prove that the state of scientific and technical knowledge at the time it put the product into circulation made it impossible to detect the existence of the defect (Article 1245-10 4° of the Civil Code). In this respect, the evolutionary nature of ‘strong' AI based on unsupervised algorithms (ie, where the machine learns and establishes its own classifications from raw data independently) could lead to the refusal of compensation for victims if the defect is somehow linked to the evolution of the AI.

    Second, as regards the application of the liability for actions of things provided for in Article 1242 of the Civil Code, the main difficulty will lie in identifying the custodian of the thing (which supposes, as per the definition provided by case law, the use, direction and control of the thing), in light of the freedom of decision and autonomy of strong AI. This notion of the custodian might thus be unsuitable in the context of strong AI.

    That said, thus far, strong AI remains more of a theoretical concept. Although such technologies are the subject of in-depth research, they do not yet seem to have any practical or commercial applications on the market (with the exception of some forms of deep learning, such as systems that play go or chess). The risk is thus mitigated, insofar as the above-mentioned liability regimes are more easily applied to supervised algorithms (ie, the limits outlined above are less relevant).

    9.3 What risks does the use of AI present with regard to potential bias and discrimination? How can these be mitigated?

    While AI allows for the more efficient processing of data, it always reflects a system of values and social choices, through its configuration or its training data. Thus, the criteria on which predictive systems are based may result in the reproduction and/or accentuation of disparities, exclusions or discrimination, usually caused by so-called ‘cognitive biases' – systematic deviations of logical and rational thinking from reality. These biases can directly or indirectly affect algorithms (from design or because they have not been sufficiently trained) or the data that is integrated into or generated by the system (eg, lack of diversity of learning databases). For example, where the algorithm which underpins a facial recognition tool is trained on a database that is not sufficiently diverse, the tool may fail to recognise faces from certain ethnic groups.

    Predictive algorithms allow companies to anticipate their customers' behaviour or behaviour trends, and to automate or assist in the decision-making process (eg, whether to provide a loan or offer insurance cover). The use of AI may thus present risks of discrimination insofar as such algorithms are increasingly used in relation to individuals' access to social benefits and justice (in particular, the development of predictive justice), the functioning of organisations such as hospitals, access to public services and hiring procedures. To the extent that algorithms are designed by humans and are based on data that reflects human practices, biases can be integrated into or generated through the design of systems and can lead to discrimination.
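
    By way of illustration only (this metric is not prescribed by French or EU law), one common way to monitor such discrimination risks after deployment is to compare favourable-outcome rates between groups; the 0.8 threshold below echoes the US ‘four-fifths' rule of thumb and is an assumption, not a legal standard:

```python
# Illustrative sketch only: a 'disparate impact' check on decision outcomes.
def favourable_rate(decisions: list[bool]) -> float:
    """Proportion of favourable outcomes (eg, loans granted) in a group."""
    return sum(decisions) / len(decisions)

# Toy decision logs for two groups of applicants (True = favourable outcome).
group_a = [True, True, True, False, True]
group_b = [True, False, False, False, True]

ratio = favourable_rate(group_b) / favourable_rate(group_a)
print(f"Disparate impact ratio: {ratio:.2f}")
if ratio < 0.8:  # illustrative threshold only
    print("Potential adverse impact: review the model and its training data.")
```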

    The French data protection authority (CNIL) and the Defender of Rights have already expressed concerns about the impact of certain algorithmic systems on fundamental rights, highlighting that the right to non-discrimination must be effectively respected in all circumstances, including where a decision involves recourse to an algorithm. The two independent administrative authorities issued the following recommendations for AI players on 31 May 2020:

    • Train and raise awareness among professionals in the technical and computer engineering professions regarding the risks that algorithms may present to fundamental freedoms and rights.
    • Support research to develop measurement and bias prevention studies, and promote the notion of ‘fair learning' – that is, algorithms that are designed to meet the objectives of equality and understanding, and not only performance.
    • Comply with all legal obligations in terms of information, transparency and explainability of algorithms – with regard to not only users and other concerned persons, but also third parties and professionals, in the general interest. At this stage, Article 13 of the General Data Protection Regulation already imposes an obligation to provide "meaningful information about the logic involved" in any automated decision-making that has a significant impact on the data subject. In addition, the Code on Relations between the Public and the Administration, complemented by the Act for a Digital Republic, specifies the information which must be provided to the recipient of an individual decision regarding:
      • the "degree and method of contribution of the algorithmic processing to the decision-making process":
      • "the data processed and its source"; and
      • "the processing parameters and… weighting applied to the data subject".

    However, the CNIL and the Defender of Rights consider that the legal requirements of information, transparency and explainability should not be restricted to decision-making algorithms and those involving personal data processing, but should apply to all private and public sector algorithms.

    • Carry out impact assessments to anticipate the discriminatory effects of algorithms and to monitor their effects after deployment (a minimal quantitative check of the kind such an assessment might include is sketched below).
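
    By way of illustration only, the following Python sketch computes a ‘disparate impact' ratio – a simple measure, used in some fairness audits, comparing favourable-outcome rates across groups – over hypothetical loan decisions; the figures, group labels and the commonly cited 0.8 warning threshold are assumptions for demonstration purposes, not requirements of French law.

```python
# Illustrative sketch only: a 'disparate impact' ratio check of the kind
# an algorithmic impact assessment might include. The decisions, groups
# and the 0.8 warning threshold are hypothetical assumptions.
def disparate_impact(decisions, groups, favourable=1):
    """Return the ratio of favourable-outcome rates between the worst-
    and best-treated groups, plus the per-group rates."""
    rates = {}
    for g in set(groups):
        outcomes = [d for d, grp in zip(decisions, groups) if grp == g]
        rates[g] = sum(1 for d in outcomes if d == favourable) / len(outcomes)
    return min(rates.values()) / max(rates.values()), rates

# Hypothetical loan decisions (1 = granted, 0 = refused)
decisions = [1, 1, 0, 1, 1, 0, 1, 0, 0, 0, 1, 0]
groups = ["A"] * 6 + ["B"] * 6

ratio, rates = disparate_impact(decisions, groups)
print(f"favourable rates: {rates}; disparate impact ratio: {ratio:.2f}")
if ratio < 0.8:  # the 'four-fifths' rule of thumb used in some fairness audits
    print("warning: potential disparate impact - further review needed")
```

    In practice, such quantitative checks would complement, rather than replace, the legal analysis required under the General Data Protection Regulation and French anti-discrimination law.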

    10 Innovation

    10.1 How is innovation in the AI space protected in your jurisdiction?

    In France, innovation in the AI space is protected by the legislation applicable to IP rights. In determining which IP rights apply to a specific AI innovation, a distinction must be made between the protection of the tool itself (and its various components) and that of its outputs. This analysis requires an understanding of both the tool and its operation. How this analysis is carried out will differ depending on whether the IP rights involved relate to:

    • literary and artistic property (copyright that protects creations or the sui generis database right that protects investment); or
    • industrial property (trademarks, patents).

    Likewise, the ownership of IP rights will depend on whether more than one person has collaborated in the development of the AI technology.

    This is a complex issue that inherently requires a case-by-case analysis, but the following points will apply in general:

    • Algorithms: If an algorithm cannot be subject to copyright due to a lack of sufficient formalisation, the coded expression of the program (ie, the software integrating the algorithm) will nonetheless be protected by copyright, provided that it meets the originality requirement. In addition, in some cases, machine learning algorithms can constitute innovative technical solutions that open the door to patent protection. In this respect, the European Patent Office has already granted a number of patents for computer-implemented inventions that incorporate AI.
    • Databases: AI technology that integrates several databases (eg, raw data, labelled data, datasets, output data) may be protected by:
      • copyright, provided that the originality requirement is met; or
      • the sui generis rights of the database producer, if it can be demonstrated that the database producer has made substantial financial, material or human investments (assessed qualitatively or quantitatively) in order to constitute, verify or present the content of the database.

    Some have also suggested that an AI innovation should be viewed not as a collection of its various components – algorithms, neural networks, databases and so on – but in a more holistic manner, such that it qualifies as a complex work. As yet, however, French intellectual property law has not embraced this approach.

    10.2 How is innovation in the AI space incentivised in your jurisdiction?

    At the AI for Humanity conference held on 29 March 2018, the French president presented the national strategy for AI in France. The aim of this strategic plan is to make France a major international player in the AI space.

    As part of this strategic plan, the following initiatives have been announced:

    • France will devote €1.5 billion to the development of AI through to 2022, of which nearly €400 million will be allocated to calls for projects and breakthrough innovation challenges.
    • A world-class AI research hub will be created thanks to the implementation of a national programme coordinated by the National Research Institute for Digital Science and Technology, in conjunction with universities and research organisations. The objective is to create "a network of dedicated institutes located in four or five places in France", accompanied by a programme of individual chairs, in order to attract the world's best researchers.
    • The link between public research and industry will be strengthened by:
      • simplifying start-up formation procedures for researchers;
      • speeding up the procedures that control scientific projects; and
      • allowing researchers to devote 50% of their time to private projects (instead of 20% as previously).
    • Other aims include:
      • doubling the number of students trained in AI and in ethics in the digital sector;
      • allowing more room for experimentation;
      • opening up public data;
      • establishing data-sharing platforms;
      • defining the framework for European sovereignty;
      • making algorithms public and ensuring their integrity; and
      • initiating a European reflection on algorithms.

    11 Talent acquisition

    11.1 What is the applicable employment regime in your jurisdiction and what specific implications does this have for AI companies?

    Traditionally, labour law is not well suited to start-ups and innovative companies, which rely heavily on subcontractors and freelancers. However, after a fundraising round or during a growth phase, the recruitment and structuring of a team of employees may become necessary. French labour law is highly protective of employee rights and is often insufficiently flexible to enable start-ups and innovative companies to meet their personnel needs, although mechanisms have recently been introduced, in particular for small and medium-sized companies (eg, negotiation of a company agreement in the absence of a trade union; capping of damages awarded in the event of a labour dispute with an employee).

    Start-ups and innovative companies may also face difficulties with labour law when they wish to recruit foreign workers. AI involves significant R&D activity that requires highly qualified personnel (eg, engineers, data scientists, analysts), who might be located abroad. In such cases, companies must establish whether the foreign worker is authorised to work in France: applicants must either hold a work permit or come from a country for which a work permit is not mandatory (European Economic Area member states, Switzerland, Monaco, Andorra and San Marino). The recruitment of foreign employees thus involves legal and administrative procedures, resulting in financial costs which might be burdensome for a start-up or an innovative company.

    In this regard, simplified formalities have been introduced in France to make it easier for innovative companies to address these difficulties and to facilitate the recruitment of qualified employees. For example, the Innovative Company Talent Passport is a four-year renewable work permit, introduced by the Law of 7 March 2016, which allows companies qualifying as ‘innovative' within the meaning of the law to benefit from a simplified procedure for recruiting foreign employees, subject to compliance with the law's requirements.

    11.2 How can AI companies attract specialist talent from overseas where necessary?

    French companies are very active in the AI space and are significant players in this field, which can make them attractive to foreign talent. At the same time, AI involves significant R&D activity requiring highly qualified personnel (eg, engineers, data scientists, analysts), who may be difficult to find in France (due to high demand and a shortage of experts), necessitating the recruitment of overseas talent.

    However, as outlined in question 11.1, start-ups and innovative companies may face difficulties under French labour law when seeking to recruit talent from overseas: companies must establish whether the foreign worker is authorised to work in France (ie, holds a work permit or comes from a country for which a work permit is not mandatory), and the associated legal and administrative procedures entail financial costs which might be burdensome for a start-up or an innovative company.

    Here too, the simplified formalities described in question 11.1 apply: in particular, the Innovative Company Talent Passport – the four-year renewable work permit introduced by the Law of 7 March 2016 – allows qualifying innovative companies to recruit foreign employees under a simplified procedure, subject to compliance with the law's requirements.

    12 Trends and predictions

    12.1 How would you describe the current AI landscape and prevailing trends in your jurisdiction? Are any new developments anticipated in the next 12 months, including any proposed legislative reforms?

    The French government is making significant efforts to support the AI market, with the aim of making France a world leader in this space. Investment in AI in 2020 was estimated at €350 million; the aim is to increase this to more than €1.3 billion by 2023. In a press release published on 5 April 2019, the ministers of economy and armed forces announced the strategic aspects of state investment in AI. Large budgets – such as the €800 million allocated to the Deep Tech plan launched in 2019 – illustrate that this is a key priority for the French government. Support mechanisms for entrepreneurs have also been put in place through a well-developed financing ecosystem encompassing private banks, public bodies such as Bpifrance, private investment funds and business angels (see questions 2.4 and 2.5).

    Moreover, compared to other countries, France nurtures real talent through excellent training programmes. This collective intelligence has contributed greatly to the reputation of French Tech. The challenge will be to avoid the so-called ‘brain drain' by promoting the attractiveness of the most innovative French companies.

    To date, no specific legislative reforms on AI have been announced for the next 12 months. The implementation of specific legal regimes has not yet been identified as an essential step for the development of AI in France, which is currently regulated under a soft law approach. However, the work initiated by numerous working groups and think tanks on the use of AI and its impact across many sectors of the economy, together with the French government's willingness to establish incentive mechanisms for the development of the AI market, could lead to a more flexible interpretation of existing rules (eg, by courts or administrative authorities) or to the implementation of dedicated legal regimes where this is identified as necessary (eg, a liability regime dedicated to robots).

    13 Tips and traps

    13.1 What are your top tips for AI companies seeking to enter your jurisdiction and what potential sticking points would you highlight?

    Any company wishing to develop AI activity in France should seek assistance from qualified professionals and bear in mind the following recommendations:

    • Analyse and take the necessary steps to benefit from the available financial mechanisms that would facilitate the financing of your company and activity – in particular, in relation to R&D activity, the recruitment of highly qualified foreign employees and so on (see questions 2.4 and 2.5).
    • Identify the AI market in which you intend to commercialise your products and services in order to deploy an adaptive and competitive business model (see question 2.1).
    • Analyse the feasibility of your project from a regulatory standpoint, depending on the sector in which you wish to develop and market your AI products and services, in order to identify possible obstacles to its development or specific measures to be implemented (as well as the associated costs) in order to facilitate its deployment and ensure its legal security.
    • Protect and enhance the value of your technology and products by undertaking the necessary formalities to secure your IP rights (eg, patents, trademarks, copyrights):
      • First, identify the technologies belonging to third parties that are integrated into your products or used in your services (eg, open source software components, third-party software), in order to ensure that you hold all authorisations necessary to market those products and provide those services.
      • Next, identify the technologies and products that you have developed in order to determine the IP rights to which you may be entitled in respect of these elements and components (eg, filing of patents; copyright; sui generis database right); and, where applicable, take the necessary steps to protect them (eg, filing; transfer of IP rights to the company in the event of the intervention of a third-party service provider; inclusion of specific IP provisions in employment contracts with employees).
      • Finally, based on the preceding analysis, identify which rights you wish to grant to customers in relation to your AI technology, its various components and the elements generated by this technology (eg, raw data, datasets, generated models), as well as to all products and services provided to customers (eg, right of use, transfer of rights); and draft appropriate contractual documents in this regard.

    Technological innovation is one of the main determinants of success. You can benefit more from innovation if you take into account the full range of IP issues involved in the development of new technologies and products. Effective use of the IP system reduces risk and facilitates the introduction of innovative technologies into the marketplace, while improving the competitiveness of technology-based firms.

    • Identify whether personal data is processed by AI technology and the status of your company in processing such personal data vis-à-vis customers and data subjects, and more broadly ensure compliance with the requirements of the General Data Protection Regulation (eg, legal basis; information of data subjects).

    The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.