‘Grace’ is a lifelike robot nurse, built with artificial intelligence to provide emotional care to patients during the pandemic and put them at ease.
Artificial intelligence (AI), the self-learning technology that detects patterns in historical data, is pervading all walks of life, from healthcare to the financial services industry.
The High-Level Expert Group on AI, tasked by the European Commission with drafting AI ethics guidelines, defined AI as “systems that display intelligent behaviour by analysing their environment and taking actions – with some degree of autonomy – to achieve specific goals”.
In the world of finance, AI has evolved substantially over recent decades, and its applications range from performance data monitoring and the assessment of creditworthiness and credit scoring to combatting cybercrime and money laundering. This exponential growth in use does not, however, come without a fair amount of risk, particularly in machine learning applications, where biased data can lead the AI to generate erroneous results through statistical error or interference during the training process.
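To make the notion of data bias more concrete, the following is a minimal, purely hypothetical sketch in Python. The groups, incomes and default rates are invented for illustration and are not drawn from the article or any real scoring system; the point is only that a rule fitted to a skewed sample can disadvantage an under-represented group.

```python
# Hypothetical sketch of sampling bias in a credit-scoring rule.
# All figures are illustrative assumptions, not real data.
import numpy as np

rng = np.random.default_rng(0)

# Two applicant groups with the SAME true default rate (say 10%), but
# group B has lower incomes and is under-represented in the training data.
train_income_a = rng.normal(60, 10, 950)   # group A: 95% of the sample
train_income_b = rng.normal(45, 10, 50)    # group B: only 5%

# A naive "model": reject the bottom decile of training incomes. Because
# group A dominates the sample, the cut-off reflects group A's distribution.
threshold = np.percentile(np.concatenate([train_income_a, train_income_b]), 10)

# Evaluate the rule on fresh, equally sized samples from each group.
test_a = rng.normal(60, 10, 10_000)
test_b = rng.normal(45, 10, 10_000)
print(f"income cut-off:         {threshold:.1f}")
print(f"approval rate, group A: {(test_a > threshold).mean():.0%}")
print(f"approval rate, group B: {(test_b > threshold).mean():.0%}")
# Typical output: group A is approved roughly 90% of the time, group B
# only about half the time, even though both groups default at the same
# rate. The erroneous result stems from the skewed training data, not
# from the applicants' actual risk.
```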
The paucity of AI regulation and the multiplicity of AI practices led the European Commission to focus on this technology in its Digital Finance Package, launched at the end of 2020 to ensure that the EU financial sector remains competitive while catering for digital financial resilience and consumer protection.
Towards the end of last year, the European Central Bank (ECB) published an opinion welcoming the Artificial Intelligence Act. While noting the increasing importance of AI-enabled innovation in the banking sector and the cross-border nature of such technology, the supranational body held that the Artificial Intelligence Act should be without prejudice to the prudential regulatory framework to which credit institutions are subject.
The ECB acknowledged that, to ensure consistency, the proposal cross-refers to the obligations under the Capital Requirements Directive (Directive 2013/36/EU, as amended, the ‘CRD’), including risk management and governance obligations. The ECB nevertheless sought clarification on internal governance and outsourcing by banks that are users of high-risk AI systems.
Raising concerns as to its role under the new Artificial Intelligence Act, the ECB reiterated that its powers derive from Article 127(6) of the Treaty on the Functioning of the European Union (TFEU) and the Single Supervisory Mechanism Regulation (Council Regulation (EU) No 1024/2013, the ‘SSM Regulation’), instruments which confer on the ECB specific tasks concerning policies relating to the prudential supervision of credit institutions and other financial institutions.
Recital 80 of the proposal provides that “authorities responsible for the supervision and enforcement of the financial services legislation, including where applicable the European Central Bank, should be designated as competent authorities for the purpose of supervising the implementation of this regulation, including for market surveillance activities, as regards AI systems provided or used by regulated and supervised financial institutions”.
The bank held that ‘market surveillance’ under the Artificial Intelligence Act would also extend to protecting the public interest of individuals, including their health and safety. In a nutshell, the ECB informed the Commission that it has no competence to regulate solutions such as Grace the robot nurse; it would only ensure the safety and soundness of credit institutions. To this effect, the bank suggested that (i) a relevant authority be designated for obligations relating to health and safety risks; and (ii) a separate AI authority be set up at Union level to ensure harmonisation.
In parallel, the ECB also recommended that the Artificial Intelligence Act be amended to mandate that, in relation to credit institutions evaluating the creditworthiness of persons and credit scoring, an ex-post assessment be carried out by the prudential supervisor as part of the Supervisory Review and Evaluation Process (SREP), in addition to the ex-ante internal controls already listed in the proposal.
Interestingly, the Basel Committee on Banking Supervision at the Bank for International Settlements, in its newsletter on artificial intelligence and machine learning, raised concerns over the cyber-security and confidentiality risks, data governance and risk management challenges, biases, inaccuracies and potentially unethical outcomes of AI systems, stating that “the committee believes that the rapid evolution and use of AI/ML by banks warrant more discussions on the supervisory implications”.
While the Artificial Intelligence Act has not yet been agreed in its final form and may change substantially before adoption, it is safe to say that the financial sector is one in which the challenges of using AI must be evaluated carefully, both before and during the deployment of such technological solutions, given the risks and individual rights at stake.
Originally published by Times of Malta.
The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.