AI governance in our company – who is responsible?

Have you, following the DPO and CISO, already appointed an "AI Officer"? For most organizations this will not be necessary to keep their AI activities compliant, but you still need appropriate governance. Having discussed our 11 principles for a responsible use of AI and risk assessments, we now explain what you should look out for and what steps you should take in terms of governance to ensure a legal and ethical use of AI. This is part 5 of our AI series.

In simple terms, governance for a legal and ethical use of AI means that a company defines who has which tasks, powers and responsibilities in relation to the use of AI and the implementation of AI projects, and which rules and procedures must be complied with. This ensures that everything runs smoothly, that important goals can be achieved and that no unwanted risks are taken. Achieving this with regard to compliance is not easy, especially in the field of artificial intelligence, for three reasons:

  • First, AI affects many topics and areas of the company. Sales can use AI, as can customer service, research and development, human resources and so on. Even with regard to specific questions, such as whether a certain application is permissible, answers must be sought in a wide variety of legal areas – from data protection to intellectual property law to unfair competition law to contract and criminal law.
  • Second, in many companies there is still a lot of uncontrolled growth when it comes to AI. In contrast to the introduction of cloud technologies, for example, which in many places was driven "bottom-up" from IT, artificial intelligence is a technology where the business side and even management itself come up with new ideas – and want them implemented.
  • Third, from a legal perspective, there is little transparency regarding the tools and infrastructure used to deploy AI, and the corresponding knowledge is scarce. Which version of ChatGPT is suitable from a data protection perspective, and which version of Microsoft Copilot? Our list discussing the various offerings in part 2 of our series generated a great deal of feedback from far beyond Switzerland, which shows that there are still many unanswered questions here and that many do not really know what they should and may do.

How is a compliance or legal department that is suddenly swamped with requests supposed to provide a reasonable answer within a reasonable time? This has to settle down and, above all, be regulated and organized: companies have to define their guidelines (not only legal, but also technical, such as which platforms may be used) and regulate responsibilities and procedures. This is what we currently do most often with our clients, apart from assessing specific projects, providers and tools.

One further remark: proper governance is necessary not only to ensure compliance, but also to achieve the other goals an organization may have. While we focus on compliance governance in the following, similar processes, rules and structures will also help in achieving other goals in relation to the use of AI.

Who is responsible for AI compliance?

This raises the question of who in the company is responsible for the topic. At least with regard to legal issues, this has not really been determined in many places. However, we are seeing certain trends among our clients. One of them is that the topic of AI compliance is now primarily assigned to those who already take care of data protection compliance. It is true that AI can also affect other areas of law, such as copyright law. However, as many companies are primarily users of AI technologies that are already available on the market, these other legal issues are somewhat less prominent for them than for those who offer AI products themselves (we will discuss separately the changes that result from the AI Act).

In many cases, we believe it also makes sense for AI compliance to be coordinated internally with those who are already responsible for data protection. They will usually already have the most experience: many of the central concerns and approaches that relate to a legally compliant, ethical and risk-aware use of AI are already well known and established in data protection. Examples include the principles of transparency, accuracy and self-determination. Similarly, in data protection, there is already a lot of experience with two compliance tools that are now also becoming important in the field of AI: records of processing activities and data protection impact assessments. We recommend the former to our clients for their internal AI applications, and the latter represents a proven methodology for assessing and addressing risks that is also well suited to AI projects (see Part 4 of our series). It is therefore not surprising that most of the regulatory recommendations on the use of AI to date have come from supervisory authorities in the area of data protection.

The EU AI Act will also add the spectrum of product regulation to the use of AI (especially for those companies that are considered to be providers) and a number of cases in which companies will have to introduce certain standard checks, such as whether a particular project could result in a prohibited AI application, whether one of the special obligations imposed on deployers of AI (e.g. when using AI-generated content for public information) needs to be complied with, or whether the intended AI application is considered "high-risk" under the AI Act.

How companies should proceed

A company needs to address at least three elements – policy, organization and standard procedures – to govern the use of AI in terms of compliance:

  • Policy (Part 1): There are legal and non-legal requirements. The former are predetermined, while the latter – typically referred to as "ethics" – must be decided by the company itself. Each company will have to define its own ethical principles in this area; there is no universal set of guidelines. For our clients, we have developed the "11 Principles", which we discussed in Part 3 of our series and which can form a basis for a comprehensive discussion about the guidelines a company wants to develop for itself in order to regulate the use of AI in substance and organizationally; they cover the entire spectrum of topics usually addressed today. The first step in the actual implementation within an organization is to define the principles that the particular company wishes to adhere to. By principles, we mean – in addition to rules concerning the organization and procedures for dealing with AI (see below) – on the one hand, rules setting out which conduct is prohibited and which conduct is required, each with exceptions (example: How transparent do we want to be with regard to the use of AI?), and, on the other hand, guidelines on how requirements can be implemented and assessed (example: What is the quality standard we require when using an LLM? When is it sufficiently explainable for us?). The second point is often a challenge in practice.
  • Policy (Part 2): Many organizations are still in an orientation phase in which they have no real plan or even a clear idea of the topic, its facets and its implications for the organization. In order to develop workable guidelines in such a situation, it can be useful to set up a task force in which stakeholders from different areas of the company are represented, so that the different needs and perspectives are adequately taken into account and the necessary "buy-in" for subsequent implementation is ensured. Business, legal and compliance, IT, information security, finance and other relevant areas should be represented. Such a task force can not only consider the compliance aspects discussed here, but also determine in general what significance AI has for the company, what goals are to be achieved with it, how the company wants to deal with the topic, what it wants to invest in it, and much more. In practice, it is advisable to conduct this discussion using specific examples. This avoids the development of abstract, nice-sounding but impractical principles. For example, when discussing the aforementioned topic of transparency, the proposed principles should be tested against specific use cases in order to check whether they make sense in practice.
  • Policy (Part 3): The next step is to put these principles on record and translate them into actionable instructions for the company's employees, for example in the form of a guideline, policy or directive. For small companies, our "one-pager" will suffice in its original or amended form, together with rules and allocations of the various roles, tasks and responsibilities. For larger companies, a two-stage approach is advisable, as is already often implemented for other data law purposes. It consists, on the one hand, of issuing a strategic guideline in the form of broad (legal and non-legal) principles ("AI Strategy") and, on the other hand, of guidelines and policies that put these principles into practice. In practice, we usually see organizations issuing one generic AI policy, which is then supplemented by additional, specific policies where necessary. Of course, the topics can also be covered in existing directives; in our experience, however, this is not very practical in terms of handling – employees want to know exactly where to look when they plan to use AI in the company. In addition, issuing an "AI policy" ties in with the current hype surrounding the topic; it automatically ensures that the rules get more attention, which is never a bad thing in compliance matters. However, this does not change the fact that the various policies and directives of an organization must be well coordinated. Many of the requirements of the AI policy, for example, will be driven by data protection (data processing agreements, processing principles, rights of data subjects) and must therefore be coordinated, both in terms of the substantive rules and the compliance procedures. There will also be new cases that are not yet regulated by law but will have to be dealt with, for example requests from data subjects that do not fit into the classic scheme of data subject rights under data protection law. However, we recommend not going into too much detail on such issues and rather taking a "let's cross that bridge when we come to it" attitude, addressing them once it becomes clear that such cases actually come up in practice in a relevant way. Otherwise an organization will be unnecessarily shadow-boxing.
  • Organization (Part 1): In our view, the organization in the area of AI compliance can be the same as for data protection and use the resources already available. Here too, a central point of contact is needed to coordinate. In our experience, the data protection officer is an obvious choice; they are already accustomed to working with the other relevant internal stakeholders, such as the CISO. At the moment, however, experience shows that such an officer will still operate on an ad hoc basis with regard to AI issues: in some companies, we see that these positions are currently being overwhelmed with more requests for legal approval of all possible AI applications than they can handle in a timely and technically sound manner. They themselves lack the experience and, in some cases, the necessary specialist knowledge. Nevertheless, in our view it would be premature and overkill for most companies to hire an "AI officer" for compliance issues, because existing employees can usually take on the tasks at issue here, and the current wave of demand is likely to subside in the medium term, for example once companies have made their key decisions regarding AI tools and infrastructure. Apart from the fact that there are still hardly any specialists with the necessary experience, it will be much more efficient for most companies that only use AI and do not offer AI products themselves to cover the legal issues that arise with their existing resources – if necessary, in an interdisciplinary manner and with external help. However, the expectations placed on the internal experts with regard to their knowledge will continue to increase. The contact point for AI compliance should organizationally be part of the "second line of defense" (according to the widely used "Three Lines of Defense" compliance model) and should, if possible, not also be responsible for the compliance of specific AI applications, in order to avoid conflicts of interest.
  • Organization (Part 2): In addition to an AI contact point to ensure organizational compliance regarding the use of AI, we also recommend that larger companies create a body that both sets forth the non-legal requirements (i.e. an "AI ethics committee") and keeps the "big picture" in sight for the company, which means monitoring the use of AI in the organization, tracking positive and negative developments and proposing adjustments to policies, the organization and the procedures as necessary (i.e. an "AI oversight committee"). We have already described such a setup for the area of data ethics and consider a similar approach appropriate here as soon as a company intends to rely on ethical principles in a relevant manner. For smaller companies, the creation of a group with different stakeholders to share their experience and views may make sense as well; it can make appropriate recommendations to the management or other drivers of the topic within the company. If an organization has set up the task force described above, that task force may be given this job as well. We note, however, that interestingly, the issue of ethics is seen differently in different countries. For example, colleagues in Germany tell us that their clients regularly only want to know whether an application is permissible or not according to hard law – and not whether it complies with ethical principles. In Switzerland, on the other hand, the Federal Data Protection and Information Commissioner, a supervisory authority, takes the position that companies should adhere to requirements for AI that clearly go beyond the law, for example in terms of transparency.
  • Organization (Part 3): For each AI tool and each AI project, a company should appoint a person (or, if not otherwise possible, a body) who is internally responsible for its compliance (in the sense of "accountability"). This application "owner" will typically be the person for whose benefit the tool or project is operated or implemented. This is typically a position in the "first line of defense" (according to the widely used "Three Lines of Defense" compliance model). The position will also have to make the necessary risk decisions (see our article in this series on risk management). This is typically the "business owner", who is usually also responsible for the success of the project. Even in our one-pager directive, we have set forth that an "owner" must be defined for every AI tool approved for use. No business AI use case should be approved without a defined owner who is responsible for compliance in accordance with the internal policies.
  • Organization (Part 4): Strategic guidelines, priorities and goals in the field of AI, as well as decisions on important projects, will be a management task in every organization. In our experience, top management is currently very interested in the topic of AI, but its decisions will typically have to be prepared at a lower level, for example by the aforementioned AI oversight committee or AI task force, as management will often lack the necessary subject-matter expertise and overall view of the topic.
  • Standard Procedures (Part 1): Companies should define several standard procedures for AI. The first concerns the introduction of new AI applications (or changes to existing ones): on the one hand, it serves to check the project for compliance with the company's legal and other requirements, and on the other hand, it serves risk management. We have already described in part how the latter works in part 4 of our series. Accordingly, a company will have to instruct its employees to report any use of AI to the defined contact point in advance and have it checked, or to demonstrate that the compliance requirements are met. In larger companies, we recommend carrying out the compliance check in at least two stages. In the first, early stage of a use case, a rough preliminary check is carried out according to the "traffic light principle" (green, amber, red); the DPO, CISO and other specialists provide their recommendations to the business owner (or its project team), including the legal dead ends and pitfalls they will typically spot even in a preliminary assessment, which allows the business owners to better understand their homework. An in-depth check only takes place later (a simple sketch of such a preliminary triage is shown after this list). It is also helpful to provide the business with pre-approved AI infrastructure options, such as permitted AI models and examples of use cases and implementations that are known to work.
  • Standard Procedures (Part 2): In order to keep an overview and track the AI technologies used and their internal owners, we also recommend keeping an inventory of all AI applications, a so-called Records of AI Activities or "ROAIA" (see the second sketch after this list for an illustrative entry). This approach has already proven itself in data protection. A ROAIA can of course be combined with the records of processing activities under data protection law. However, the aspects to be recorded in a ROAIA are slightly different (e.g. the AI models used in each case); a free template can be found in our GAIRA tool, which can be downloaded here.
  • Standard Procedures (Part 3): A further standard procedure concerns the monitoring of AI in use and the reporting of incidents when using AI. We are familiar with this from data protection law as well. Unless there is also a breach of data security under data protection law (known as a data breach), there is currently no obligation to report AI incidents. However, from the perspective of a responsible use of AI, a company should actively monitor its AI usage, track incidents and oblige employees to report them; this allows timely intervention and is part of good risk management.
  • Standard Procedures (Part 4): Further standard procedures could include handling requests from persons affected, training employees and monitoring internal compliance with the relevant rules.
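
To illustrate the two-stage check described in Standard Procedures (Part 1), the following is a minimal, purely illustrative sketch (in Python) of a first-stage "traffic light" triage. The criteria, field names and classification logic are our own assumptions for illustration only, not a prescribed methodology; every company will have to define its own criteria in its policies.

```python
from dataclasses import dataclass
from enum import Enum

class TrafficLight(Enum):
    GREEN = "green"   # low risk: proceed, document in the AI inventory
    AMBER = "amber"   # medium risk: in-depth check required before go-live
    RED = "red"       # potentially prohibited or high-risk: escalate or stop

@dataclass
class AIUseCase:
    name: str
    owner: str                              # accountable "business owner"
    processes_personal_data: bool
    significantly_affects_individuals: bool
    potentially_prohibited_practice: bool   # e.g. an outright banned practice

    def preliminary_check(self) -> TrafficLight:
        """First-stage, rough classification; an in-depth review follows later."""
        if self.potentially_prohibited_practice:
            return TrafficLight.RED
        if self.processes_personal_data or self.significantly_affects_individuals:
            return TrafficLight.AMBER
        return TrafficLight.GREEN

# Example: a customer service chatbot that processes personal data
use_case = AIUseCase(
    name="Customer service chatbot",
    owner="Head of Customer Service",
    processes_personal_data=True,
    significantly_affects_individuals=False,
    potentially_prohibited_practice=False,
)
print(use_case.preliminary_check())  # TrafficLight.AMBER -> in-depth check needed
```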
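
Similarly, the inventory mentioned in Standard Procedures (Part 2) can be kept in any format; the sketch below shows what a single ROAIA entry might record. The fields are illustrative assumptions only (the actual template in our GAIRA tool may differ), but they indicate the kind of information that makes the inventory useful, including the internally responsible owner and the AI models used.

```python
from dataclasses import dataclass
from datetime import date
from typing import List

@dataclass
class ROAIAEntry:
    """One illustrative record in a Records of AI Activities (ROAIA) inventory."""
    application: str           # name of the AI application or tool
    owner: str                 # internally accountable "business owner"
    purpose: str               # what the application is used for
    ai_models: List[str]       # AI models used in each case (e.g. a specific LLM)
    personal_data: bool        # whether personal data is processed
    risk_classification: str   # e.g. result of the traffic-light pre-check
    approved_on: date          # date of internal approval

# The inventory itself is then simply a list of such entries
inventory: List[ROAIAEntry] = [
    ROAIAEntry(
        application="Internal document summarizer",
        owner="Head of Knowledge Management",
        purpose="Summarize internal reports for employees",
        ai_models=["approved in-house LLM (version as documented)"],
        personal_data=False,
        risk_classification="green",
        approved_on=date(2024, 6, 1),
    ),
]
```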

Even though there is currently a hype surrounding artificial intelligence, it seems clear that AI will play an increasingly important role in the corporate world and that companies will need to engage with how to use it, if only to avoid falling behind their competitors. Establishing sound AI governance is therefore not only a question of compliance, but also a step towards exploiting the full potential of the technology while minimizing risks. In our experience in advising our clients, the current great interest in promoting the use of AI represents a very good opportunity to also create the necessary guidelines, organization and processes for compliance and thus governance – at least if this can be shown to enable rather than slow down AI applications.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.