As AI shifts rapidly from everyday tools to comprehensive AI platforms, and from pilot projects to daily operations, financial institutions face practical challenges around adoption, regulation and trust. AI carries enormous potential for operational efficiency and for enhanced products and client experience, both of which matter to the Luxembourg financial sector.
Yet with innovation comes responsibility. While the security of AI applications relies on concepts similar to those applying to traditional ICT, there are specific implementation considerations. Below, we outline a few key points that financial institutions should consider when building their AI strategy.
For each requirement applying to the use of traditional ICT tools (e.g. under DORA), the overview below gives examples of how that requirement may need to be enhanced for AI systems/models, together with the core questions that arise.
Availability, authenticity, integrity and confidentiality of data; compliance with applicable requirements and industry best practice:
- Know your AI: understand the core parameters of an AI system, the type and source of training data and who has access to it, the representativeness and relevance of input data, etc.
- ...and its risks: understanding new AI security risks is crucial – the ICT risk management framework must factor in, for example, model inversion attacks, membership inference and data/model poisoning.
- Everyone knows about confidentiality, but... data integrity and confidentiality extend beyond training data sets to outputs – the risk of confidential data leakage via model output, for example through prompt injection, must be considered (an illustrative output-screening check is sketched after this list).
- Where is our data? DORA requires knowledge of data location, but in the case of AI tools, where are the data on which the model was trained located? If the AI tool or model is provided by an external provider, how are such data segregated, erased, destroyed or returned once training is complete? According to the OECD paper Generative artificial intelligence in finance (December 2023), larger financial institutions consider using private, restricted versions of AI models deployed internally within the firm's firewalls and/or private cloud, which also provides better visibility for compliance oversight.
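By way of illustration only, the sketch below shows how model outputs might be screened for obviously confidential patterns before being returned to users. The pattern names, the internal client ID format and the screening logic are hypothetical placeholders, not recommended or exhaustive controls.

```python
import re

# Hypothetical patterns a firm might not want an AI assistant to echo back:
# Luxembourg IBANs, card-like numbers and an assumed internal client ID format.
CONFIDENTIAL_PATTERNS = {
    "iban": re.compile(r"\bLU\d{2}[0-9A-Z]{16}\b"),
    "card_number": re.compile(r"\b(?:\d[ -]?){15,16}\b"),
    "internal_client_id": re.compile(r"\bCLI-\d{6}\b"),  # assumed format, for illustration
}

def screen_model_output(text: str) -> tuple[bool, list[str]]:
    """Return (is_blocked, matched_pattern_names) for a single model output."""
    hits = [name for name, pattern in CONFIDENTIAL_PATTERNS.items() if pattern.search(text)]
    return (len(hits) > 0, hits)

# Example usage: withhold or escalate an answer that appears to leak account data.
blocked, reasons = screen_model_output("Your account LU280019400644750000 is overdrawn.")
if blocked:
    print(f"Output withheld pending review; matched: {reasons}")
```

In practice, pattern-based checks of this kind would only complement, never replace, the access controls, data segregation and model-level safeguards discussed above.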
Governance:
Consider adapting the firm's internal rules on compliance monitoring and model governance, e.g. to define clear responsibility throughout the model lifecycle, from development to deployment, as for traditional ICT. AI-specific aspects of model governance should, however, also be considered.
Data protection (GDPR, privacy):
While core privacy concepts and requirements remain the same for AI (including disclosures to data subjects about the collection and use of data relating to them, and data subjects' rights), its use requires careful consideration of new characteristics and challenges.
Business continuity and backup (BCP/DRP):
Consider the autonomous nature of AI systems and the consequences of discontinuation, failure, model drift, etc. – would the suggested use require an AI-specific BCP? Assess robustness and resilience – can the model withstand adverse events or changes, such as discontinuation of the training data sets, which creates model drift that undermines the model's predictive capacity in stress scenarios?
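As an illustration of the kind of technical monitoring that could feed such a resilience assessment, the sketch below computes a population stability index (PSI) between the data a model was validated on and live inputs. The 0.2 alert threshold is a commonly cited rule of thumb, and the synthetic data are assumptions for illustration only.

```python
import numpy as np

def population_stability_index(reference: np.ndarray, current: np.ndarray, bins: int = 10) -> float:
    """PSI between the reference distribution of a feature and its live distribution."""
    edges = np.histogram_bin_edges(reference, bins=bins)
    # Keep out-of-range live values in the outermost bins rather than dropping them.
    current = np.clip(current, edges[0], edges[-1])
    ref_frac = np.histogram(reference, bins=edges)[0] / len(reference)
    cur_frac = np.histogram(current, bins=edges)[0] / len(current)
    ref_frac = np.clip(ref_frac, 1e-6, None)  # avoid log(0) for empty bins
    cur_frac = np.clip(cur_frac, 1e-6, None)
    return float(np.sum((cur_frac - ref_frac) * np.log(cur_frac / ref_frac)))

# Example usage with synthetic data: live inputs have shifted relative to validation data.
rng = np.random.default_rng(0)
reference = rng.normal(0.0, 1.0, 10_000)   # data the model was validated on
current = rng.normal(0.5, 1.3, 10_000)     # live inputs after conditions change
psi = population_stability_index(reference, current)
status = "drift detected - trigger model review / BCP assessment" if psi > 0.2 else "within tolerance"
print(f"PSI = {psi:.2f}: {status}")
```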
Exit strategy:
Especially where AI models/environments are tailored to specific business models and operations, exit may prove particularly challenging due to the complexity of such models and their interconnection with the existing overall ICT infrastructure, systems and applications.
Audit and access rights:
The effectiveness of standard audit rights should be reassessed, and those rights extended to AI systems, e.g. to cover AI training data sources, specific performance metrics, explainability, etc.
Contractual arrangements, service levels and performance targets:
- Agree AI-specific terms for any external tools embedding AI, including on the use of prompts and outputs for training, intellectual property, data localisation, the EU AI Act (Regulation (EU) 2024/1689), third-party service providers' liability, etc.
- Consider more protective contractual clauses around the use of training data, e.g. necessary consents, retention periods, obligations to delete or return, technical and operational protection measures, etc.
- Re-define what constitutes acceptable performance when agreeing SLAs, taking into account the relative novelty of AI use and the special characteristics of the model, such as accuracy, response time, computing resources, fairness and bias metrics, drift detection, model updates, explainability requirements, etc. (an illustrative computation is sketched after this list).
- How will performance monitoring be achieved – traditional monitoring, or AI monitoring AI?
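To make AI-specific performance targets more concrete, the sketch below checks two of the metrics mentioned above, accuracy and a simple demographic parity difference as a bias metric, against hypothetical SLA thresholds. The thresholds, the synthetic data and the choice of metrics are assumptions for illustration, not suggested contractual values.

```python
import numpy as np

def accuracy(y_true: np.ndarray, y_pred: np.ndarray) -> float:
    """Share of predictions that match the observed outcome."""
    return float(np.mean(y_true == y_pred))

def demographic_parity_difference(y_pred: np.ndarray, group: np.ndarray) -> float:
    """Absolute difference in positive-outcome rates between two groups (coded 0 and 1)."""
    return float(abs(y_pred[group == 0].mean() - y_pred[group == 1].mean()))

# Hypothetical SLA thresholds agreed with the provider (illustrative values only).
SLA = {"min_accuracy": 0.90, "max_parity_gap": 0.05}

# Synthetic monitoring sample: roughly 92% correct predictions, group label independent of outcome.
rng = np.random.default_rng(1)
y_true = rng.integers(0, 2, 1_000)
y_pred = np.where(rng.random(1_000) < 0.92, y_true, 1 - y_true)
group = rng.integers(0, 2, 1_000)

breaches = []
if accuracy(y_true, y_pred) < SLA["min_accuracy"]:
    breaches.append("accuracy below agreed minimum")
if demographic_parity_difference(y_pred, group) > SLA["max_parity_gap"]:
    breaches.append("bias metric above agreed maximum")
print(breaches if breaches else "all monitored SLA metrics within agreed ranges")
```

Whether such checks are run by the institution, by the provider or by a separate monitoring tool (traditional or itself AI-based) is precisely the kind of point the contractual arrangements should settle.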
ICT incident management:
Consider what types of AI-specific failures are possible, in order to plan for prevention, mitigation and reporting – e.g. systemic bias or discrimination, corruption of the model, harmful outputs, etc. AI may be challenging to onboard, but once it is in place, failure may become critical.
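Purely as an illustration of how AI-specific failure types could be recorded alongside traditional ICT incidents for tracking and reporting purposes, the sketch below uses a hypothetical incident record; the categories, field names and reportability flag are assumptions, not a prescribed taxonomy.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from enum import Enum

class AIIncidentType(Enum):
    SYSTEMIC_BIAS = "systemic bias or discrimination"
    MODEL_CORRUPTION = "corruption of the model (e.g. data/model poisoning)"
    HARMFUL_OUTPUT = "harmful or misleading output"
    DATA_LEAKAGE = "confidential data leakage via model output"

@dataclass
class AIIncident:
    incident_type: AIIncidentType
    model_name: str
    description: str
    reportable: bool  # whether the incident triggers regulatory incident reporting
    detected_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

# Example usage: record a harmful-output incident for a hypothetical "client-chatbot" model.
incident = AIIncident(
    incident_type=AIIncidentType.HARMFUL_OUTPUT,
    model_name="client-chatbot",
    description="Assistant suggested an unsuitable product to a retail client.",
    reportable=True,
)
print(incident)
```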
The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.