In this video, Ciaran Flynn and Rhiannon Monahan from our Governance and Consulting Services Group share their expert perspectives on embedding AI strategy at a senior leadership level and why integrating risk management from the outset is essential. They highlight the importance of cross-functional collaboration, central governance, and AI literacy in ensuring effective and ethical implementation. As organisations navigate the evolving landscape of artificial intelligence, Ciaran and Rhiannon offer practical insights into building a resilient and responsible AI framework fit for future success.
Video Transcription
Rhiannon Monahan
In the same way that your Chief Executive Officer will be responsible for executing corporate strategy, it is important to have someone at a senior leadership level to make sure that your AI strategy is driven forward and executed effectively.
Ciaran Flynn
AI strategy can't be divorced from risk management, so it's really important that when organisations are thinking about their AI strategy, they're also thinking about how it is going to impact on their culture, how it's going to impact on their brand, and how they can manage any risk or ethical concerns. It's too easy in the current environment to think that organisations need to be using AI for AI's sake. What's far more important for individual organisations to think about when they're forming their strategy is: what's right for us? What's aligned from an efficiency perspective and from an operating model development perspective, but also from a cultural and ethical dimension, in terms of what's right for the organisation, its brand, and its people? So it's fundamental that when you're developing your AI strategy, you are also thinking about AI risk management. They're two sides of the same coin, and unless you start thinking about risk management from the get-go, it'll be a challenge.
Rhiannon Monahan
It's very important as well to have this senior person operating centrally, given that your AI systems are going to be implemented by employees across all functions. They're not going to be limited to any one department, and you need to have that holistic oversight. You also need to make sure that those AI systems are being governed centrally by your operational policies, and that they are in compliance with your legal and regulatory obligations. What's also important when it comes to AI and having that central leadership role is that you're talking about cross-functional teams. When you're implementing AI, you're bringing together your technologists, your governance professionals, and your legal and compliance experts, and you need a true leader to make sure that those people can work effectively together.
Ciaran Flynn
Having more people in your organisation aware of the power of AI, and of how it can change their particular lived experience within the workplace, is key to generating use cases. That link between uplifting AI literacy and driving use case generation, which can then help drive AI strategy, is fundamental to success in this area.
Rhiannon Monahan
I think what's important to note here is that AI literacy rules have been in effect since February 2025, so they're already here and live. What organisations need to remember is that there can be a significant disparity in AI literacy across the organisation and across the different groups of employees, board members, and other stakeholders that they have. All of these groups need to be assessed to determine their base level of AI literacy and to develop AI literacy programmes which suit their needs. AI literacy is going to be one of the key foundational tools when it comes to implementing AI, because what you're doing is empowering and encouraging your employees to use it, and to know how to use it to its best advantage.
Originally published 3 Jul 2025
This article contains a general summary of developments and is not a complete or definitive statement of the law. Specific legal advice should be obtained where appropriate.