ARTICLE
1 July 2025

Embracing Artificial Intelligence – Incorporating AI In The Boardroom And Beyond?

Herbert Smith Freehills Kramer LLP


The technology's transformative potential is already clear, but safeguards and governance need to keep pace

The widespread availability of generative AI has led to discussions about its potential use in all parts of the economy and wider society, and for many people its use is now a standard part of their working life. It is not surprising then that company secretarial teams are considering whether, and how, AI can assist them with their role, particularly in connection with board meetings, and that boards themselves are grappling with AI governance arrangements in their businesses.

In summer 2024 the GC100, the association of general counsel and company secretaries working in FTSE 100 companies, ran a poll on the use of AI and legal technology to support the minute-taking process at listed and large private companies. The results, published in autumn 2024, showed an almost even split between respondents in favour of (36%) and against (37%) the use of AI in the boardroom, with 27% undecided. So what issues should companies take into account when deciding whether to allow AI into the boardroom? Do the potential rewards outweigh the risks? And how should boards approach AI governance across their organisations more generally?

Benefits and risks of using AI in the boardroom

Those who replied to the GC100 poll saying they were in favour of using AI pointed to the efficiencies that it can bring to dealing with repetitive and administrative tasks. In the context of governance professionals and the boardroom, these efficiencies could be realised for example in the context of the preparation of the agenda, board packs and minutes for full board meetings and board committee meetings.

By streamlining manual and time-consuming processes around board minutes, AI would free company secretaries to focus on other aspects of their role and to spend more time on the tasks and activities that their internal and external stakeholders value most. Given that 92% of respondents to the GC100 poll had not yet introduced AI into their minute-taking processes, the potential time, cost and other efficiencies for governance professionals are significant.

"Companies should be mindful that it will take time to learn how to harness the power and potential of AI and should not expect a radical transformation overnight."

Alexander Amato-Cravero
Director, Emerging Technology (Advisory)

However, there are a number of potential risks in using AI for board-related tasks, some of which relate to the use of generative AI in any context, and others which are more specific to the board environment. Key issues to consider include:

Impact on board discussions

In the GC100 poll, respondents reported that many directors would be uncomfortable with board discussions being recorded to produce AI-generated minutes. Recording could make directors less willing to speak freely at meetings or, conversely, could prompt them to raise questions or add comments simply so that their contribution is on the record. Minutes are not intended to be a verbatim record of the meeting and typically do not record individual views. The ability to have a full and frank exchange of views is an important feature of board meetings and part of the way that directors comply with their duties to the company.

Concerns about recording board meetings extend beyond the impact on the quality and openness of the discussion itself to issues relating to the potential cyber risks associated with voice recordings. Such concerns have been highlighted by recent incidents of audio deepfakes being used by hackers to infiltrate companies' internal systems (see further below on security concerns).

Finally, companies will need to ensure that the approach adopted in relation to recording meetings, and the wider use of AI tools, complies with data privacy and protection requirements.

Quality of minutes produced by AI

Company secretaries exercise judgement to distil the essence of board discussions into the minutes, ensuring that all relevant points are captured and nuances reflected where necessary. They know the participants in the meeting, understand their roles and responsibilities, are familiar with their personalities (and idiosyncrasies), and interpret their contributions in that light.

AI cannot exercise such judgement, so there is a risk that comments are misconstrued or that matters are recorded inappropriately. General concerns around AI hallucinations also raise questions about the accuracy of AI-generated minutes, and there is the further risk that key points of discussion are missed entirely, since the AI tool may lack the relevant context or background information for the discussion.

What are AI hallucinations?

These occur where responses generated by AI are presented as fact but contain false or misleading information.

Confidentiality

Given the sensitive nature of many board discussions – from commercial, strategic, employment and regulatory perspectives – companies need to be sure that these remain confidential even if AI is used to take minutes. A particular concern is that recordings captured by the AI tool deployed to produce the minutes might be used to train that tool and so become publicly available. Even where internal, ring-fenced, so-called "closed" AI tools are used, there remains a risk that details of confidential discussions are inadvertently made available to employees within the company.

One approach may be to use the AI recording and AI-generated draft minutes only as a starting point, with the final minutes produced by a member of the company secretarial team. Even so, directors will want to know what happens to the verbatim transcript and any AI-generated first draft: first, to ensure that the final minutes are the definitive record of the meeting; and second, to mitigate the risk that the recording and draft become subject to disclosure in legal proceedings (where issues of privilege – a concept AI is not well placed to identify – will be key) or otherwise become more widely available.

Security

Even before the advent of generative AI, companies were exposed to significant cyber threats. News stories of companies falling victim to hackers are frequent, and the means by which malicious actors attempt to gain access are becoming increasingly sophisticated, keeping pace with developments in AI and technology. Expanding the use of AI within a company risks creating new avenues for cyber-attack and new points of weakness for hackers to exploit. The recently published Cyber Governance Code of Practice is a useful resource for directors dealing with an ever-increasing cyber threat.

Lack of expertise

Several respondents to the GC100 poll flagged their lack of expertise and experience in AI and acknowledged that this makes it difficult to understand the risks, and also to fully assess the benefits.

Wider considerations around AI governance

Board teams need to address issues around the use of AI in the boardroom in the context of the company's overarching governance arrangements in relation to the use of AI. Any specific policy around its use in the boardroom should complement these wider governance practices.

With regard to directors' responsibilities relating to AI more generally, the starting point is that they should have a clear understanding of how AI is being used within the company. This encompasses both specific AI tools and applications and generic "non-AI" technology systems which now incorporate an AI element.

Directors also need to understand the purpose of the various AI tools already in use throughout the company and the impact of these tools on day-to-day operations. This knowledge can then inform the development of an overarching AI governance, risk, and compliance framework which is tailored to the use of AI within the company in order to manage the risks associated with a wider scale deployment of AI. The framework should comprehensively define the principles, restrictions, and general guidelines for AI use within the organisation, including ethical considerations, data privacy, security measures, internal training expectations, and compliance with regulatory requirements. It should also provide mechanisms for monitoring and evaluating AI's use, performance and impact. Such an approach helps ensure that AI is being deployed in a safe and responsible way, with appropriate management of the associated commercial, reputational and regulatory risks.

As noted above, responses to the GC100 poll flagged a lack of expertise in relation to AI's use in the boardroom – an issue which applies more generally to AI governance across the company. Not all board members will necessarily be AI-literate, which will inevitably limit their ability to assess the guidance, policies and controls the company should put in place to regulate the use of AI by its officers, employees and contractors.

Where only one or two directors are AI-literate, there is a risk that the rest of the board defers to them without challenge, lacking the understanding to test the views put forward. As in other aspects of its work, the board can of course bring in subject-matter experts – both internal and external – to support its deliberations and decisions. Given the direction of travel on AI, however, boards will be increasingly wary of over-reliance on such experts and will seek (and in some cases are already seeking) to bolster the AI capabilities on the board itself.

An approach that is likely to be suitable for many companies would be to adopt committee-based governance through the creation of a dedicated group of AI-literate board members and subject-matter experts who are able to discuss and formulate a proposal for the best approach for the particular organisation. This proposal should then be presented to the board for its approval.

Will we see greater use of AI in the boardroom?

We anticipate that the use of AI in the boardroom will increase, given the potential it has for improving business efficiency, the speed with which it is developing and improving, and its rapid adoption in wider society. However, as the GC100 poll indicates, there is still a certain reticence about using AI for board-related matters, reflected in the fact that only 8% of respondents had actually used AI in their minute-taking processes.

Companies should be mindful that it will take time to learn how to harness the power and potential of AI and should not expect a radical transformation overnight. Companies will want to find the sweet spot of using AI in situations where it can offer the most efficiencies but with sufficient safeguards in place to manage the various risks. Maximising the benefits available with AI, whilst curtailing the risks, will take an investment of time and effort to develop an appropriate, tailored AI policy.

"Developing a tailored, sophisticated AI governance, risk, and compliance framework will help create an environment in which human and artificial intelligence can work together to achieve better outcomes."

Julie Farley
Knowledge Lawyer

How AI would be used in any situation will vary according to the task in hand. In the context of minute-taking, for example, one possible approach would be that the first draft of minutes is produced by AI overseen by someone with relevant expertise. This AI-generated draft would then need to be reviewed, amended and refined by a member of the company secretarial team before sign-off – keeping a human in the loop. Developing a tailored, sophisticated AI governance, risk, and compliance framework will help create an environment in which human and artificial intelligence can work together to achieve better outcomes.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.
