6 November 2024

Artificial Intelligence And The Future Of Insurance

Buren

Contributor

BUREN is an independent international firm of lawyers, notaries, and tax advisers with offices in Amsterdam, Beijing, The Hague, Luxembourg, and Shanghai. We provide full-service, multidisciplinary support, helping national and international clients expand, innovate, or restructure their businesses through our offices, country desks, and global network of partners.
The report compiles the expertise and opinions of members from 18 countries and provides compelling insight into the state of AI in their respective markets as the so-called Fourth Industrial Revolution unfolds.

ARTIFICIAL INTELLIGENCE – AN EXECUTIVE SUMMARY

So much has been written about AI in the 15 months since we first heard about ChatGPT that it is sometimes difficult to see the wood for the trees. What is AI? Is it new? Is it just the latest in a long line of tech innovations that we will seamlessly adopt into our workplace environment, or is it the real deal that will cause massive disruption to the way we work and the way global capitalism is organised? Or both? This report looks at how AI is impacting the global insurance industry today and in the future. We begin with an overview of where we are with AI regulation across the world, with an analysis of the recent EU plans in this area. Will other countries follow a similar path? Politicians and regulators have a habit of being behind the curve when dealing with innovative ideas and tech developments. Similarly, the insurance sector is not always renowned for being an enthusiastic and early adopter of new thinking, but our members have plenty of evidence of their clients embracing rather than rejecting change. So, what has been the impact of AI? In this report, we highlight 12 areas.

We discuss risk prediction and analysis. The ability of AI to quickly analyse vast quantities of data is a powerful tool for insurers in predicting and assessing risks, particularly where there is a significant data source – such as the risks associated with climate change. AI also allows insurance to become more personal: the improved analysis and enhanced actuarial data that AI provides enable insurers to offer more tailored, personalised coverage across auto, life, health, and home insurance, as well as large commercial lines of business.
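
By way of illustration only, the sketch below shows what such a risk-prediction model might look like in Python with scikit-learn. The features, synthetic data, and choice of a gradient-boosting classifier are our own assumptions for the example, not a description of any insurer's actual system.

```python
# A minimal, illustrative sketch of AI-assisted risk prediction.
# All feature names and data here are hypothetical; a real insurer
# would train on far richer actuarial and external data.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(seed=42)
n = 5_000

# Hypothetical policyholder features: age, flood-zone exposure score,
# prior claims count, and sum insured.
X = np.column_stack([
    rng.integers(18, 80, n),      # policyholder age
    rng.random(n),                # flood-zone exposure score (0 to 1)
    rng.poisson(0.3, n),          # prior claims in the last five years
    rng.lognormal(12, 0.5, n),    # sum insured
])

# Synthetic target: claim probability rises with exposure and prior claims.
logits = -2.5 + 2.0 * X[:, 1] + 0.8 * X[:, 2]
y = rng.random(n) < 1 / (1 + np.exp(-logits))

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = GradientBoostingClassifier().fit(X_train, y_train)

# Predicted claim probabilities could feed into more tailored pricing.
probs = model.predict_proba(X_test)[:, 1]
print(f"Test ROC AUC: {roc_auc_score(y_test, probs):.3f}")
```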

AI can and will help to improve the customer service that insurers offer their clients. Chatbots will become genuinely helpful rather than (sometimes) an irritant. Digital techniques such as smartphone apps – often involving AI – will enhance insurers' ability to offer policies in new and underserved markets around the world.

One powerful use of AI we are already seeing in action is its ability to improve the accuracy of claims assessments and to uncover fraudulent claims: because AI can process and analyse large volumes of data, unusual patterns can be detected more easily. It's not all plain sailing for insurance and other sectors, however. AI is all about data, and lots of it. We analyse how data bias and data privacy are potential challenges to the way insurance gets the best out of AI.
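
As a hedged illustration of the idea, the sketch below flags unusual claims with an isolation forest, a common anomaly-detection technique. The claim features and the contamination threshold are invented for the example; real fraud screening combines far more signals and ends in human review rather than automatic rejection.

```python
# Anomaly-based fraud screening with an isolation forest (illustrative).
# Features and thresholds are hypothetical assumptions for this sketch.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(seed=1)

# Hypothetical claims: [claim amount, days from policy start to claim,
# number of claims by the same claimant this year].
normal = np.column_stack([
    rng.lognormal(8, 0.4, 1_000),
    rng.integers(30, 365, 1_000),
    rng.poisson(0.2, 1_000),
])
# A few suspicious claims: large amounts filed soon after inception.
suspicious = np.array([[60_000, 5, 3], [80_000, 2, 4]])
claims = np.vstack([normal, suspicious])

detector = IsolationForest(contamination=0.01, random_state=0).fit(claims)
flags = detector.predict(claims)  # -1 marks anomalies for manual review

print(f"Flagged {np.sum(flags == -1)} of {len(claims)} claims for review")
```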

We take a deep dive into how and where AI is being used in the management of claims in insurance markets around the world. The potential for increased efficiency in claims handling, notably for low-value, high-volume claims, is clear, but AI will also help with complex cases such as those associated with natural catastrophes. We also look at new claim areas that might emerge because of AI and the automation of traditional manufacturing processes, as well as cyber.
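
To make the idea of straight-through processing for low-value, high-volume claims concrete, here is a minimal sketch of a triage rule that routes simple, fully documented claims to automated settlement and everything else to a human adjuster. The fields and thresholds are hypothetical; a real workflow would be calibrated, governed, and audited.

```python
# An illustrative triage rule for straight-through claims processing.
# Thresholds are invented for the sketch, not drawn from any insurer.
from dataclasses import dataclass

@dataclass
class Claim:
    amount: float            # claimed amount in EUR
    documents_complete: bool
    fraud_score: float       # 0 (clean) to 1 (highly suspicious)

def triage(claim: Claim) -> str:
    """Route a claim to automated settlement or a human adjuster."""
    if claim.fraud_score > 0.5:
        return "fraud-review"
    if claim.amount <= 1_000 and claim.documents_complete:
        return "auto-settle"       # low-value and complete: pay automatically
    return "human-adjuster"        # complex or high-value: manual handling

print(triage(Claim(amount=350.0, documents_complete=True, fraud_score=0.1)))
# -> auto-settle
```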

AI is already having an impact on the recruitment processes and strategies of insurance companies around the world, and this looks set to increase. And there is no doubt that the skills needed to be a successful employer and employee in a post-AI world will evolve, particularly in those areas of the sector that lend themselves more towards automation. And finally, we offer some thoughts on what happens next.

LEGAL DEVELOPMENTS AND REGULATORY CHANGES IN AI

An overview from GILC members

The EU is leading the way in AI regulation

Most parts of the world do not currently have specific AI regulations; instead, they rely on a patchwork of data protection laws that touch on the potential risks and exposures related to the use of AI. The European Union, however, is set to pass a more comprehensive AI Act later this year, following a political consensus reached by the EU institutions. Lawmakers in other parts of the world are keeping a close eye on the EU AI Act. Whether they will follow the exact same path as Brussels remains to be seen.

THE EU AI ACT – SETTING THE STANDARD?

The European Union is leading the way in developing a wide-ranging set of regulations to oversee the use of AI systems and the potential risks they pose. The European Commission, Council and Parliament have been in discussions since 2021 about the creation of an Artificial Intelligence Act, which would be the world's first comprehensive AI law.

Political agreement between the three institutions was reached in December 2023 and the wording of the final text is now being finalised. The EU AI Act is likely to be passed later this year, with most of its provisions taking effect after a two-year transition period.

The EU AI Act aims to ensure that AI systems are safe and respect existing law on the fundamental rights of individuals and uphold EU values, while also encouraging AI innovation and facilitating investment in AI. It is intended to create a single market for lawful and trustworthy AI applications – and prevent market fragmentation.

The EU AI Act will take a risk-based approach, classifying AI systems into three risk bands: unacceptable risk; high risk; and limited risk.

'Unacceptable risk' AI systems will include, for example, biometric categorisation systems that use sensitive characteristics such as race or sexual orientation, the untargeted scraping of facial images, and emotion recognition in the workplace.

'High risk' AI systems, according to the legislators, are those deemed to pose a significant potential risk of harm to health, safety, fundamental rights, the environment, democracy, or the rule of law. That includes AI systems used in critical infrastructure sectors, in medical devices, and by educational establishments and in recruitment.

The Act will introduce a set of mandatory compliance obligations for AI systems in this 'high risk' category. These will include requirements around risk mitigation, data governance, detailed documentation, human oversight, transparency, accuracy, and cyber security.

'High risk' AI systems will also be subject to conformity assessments to evaluate their compliance with the new rules. The Act will also create a mechanism that enables EU citizens to assess whether their fundamental rights have been adversely impacted by decisions based on AI, and to launch complaints if they believe they have.

The third, 'limited risk', category will include chatbots and certain emotion recognition and biometric categorisation systems, which will be subject to less stringent transparency obligations. As a minimum, however, the Act will require users to be informed that they are interacting with an AI system.

In some EU countries, rules already exist that regulate certain uses of AI and associated risks. In Germany, for example, the federal financial supervisory authority Bundesanstalt für Finanzdienstleistungsaufsicht (BaFin) requires insurers to have a compliance and management framework that includes all decisions based on algorithms and clearly designed roles and processes.

In Poland, meanwhile, a national soft law in the form of Polish Financial Supervision Authority (PFSA) guidelines regulates, among other things, cloud computing.

Outside the EU, no jurisdiction has a comprehensive legal framework specifically related to the use of AI. Most, however, are taking steps towards the establishment of rules about the responsible and ethical use of AI. And many already have regulations in place that touch upon the use of AI in certain areas.

In Australia, the Government in January 2024 published its response to public submissions to a discussion paper on 'Supporting Responsible AI', which examined the need for a combination of general regulations, sector-specific regulations, and self-regulation initiatives to support safe AI practices. Australia established the world's first eSafety Commissioner in 2015 to safeguard Australian citizens online and was one of the earliest countries to adopt a national set of AI Ethics Principles. The Government has also said it is considering implementing mandatory guardrails for the use of AI in high-risk settings, either by amending current laws or by creating new, AI-specific laws.

In addition, from March 2024, a new online safety code will be introduced, covering search engines and providing protection against generative AI-related risks.

China currently has a series of regulations that are aimed specifically at managing some of the risks associated with AI use and it requires compliance by companies engaged in AI-related activity. These include the Administrative Provisions on Algorithm Recommendations for Internet Information Services, the Administrative Provisions on Deep Synthesis for Internet Information Services, and the Interim Measures for the Management of Generative AI Services. These regulations mandate compliance in areas including data security, algorithmic transparency, ethical standards, and risk management.

In Mexico, a bill was proposed before Congress in 2023 that would regulate the ethics of the use of AI in robotics and create a decentralised body and a national network of statistics to monitor the use of AI. This bill, once passed, is expected to lead to the creation of official standards to regulate the use of AI in Mexico.

Regulatory bodies in close neighbours of the EU, such as Norway and Switzerland, are keeping a close eye on events as they develop in Brussels.

In 2018, the United Arab Emirates created the UAE Council for Artificial Intelligence and Blockchain, which is intended to propose policies related to AI. No specific legislation has yet been implemented. In 2023, the Dubai International Financial Centre (DIFC), the offshore free-zone jurisdiction of the UAE, enacted amendments to its data protection laws to cover companies using AI or generative, machine-learning technology. This is the first piece of legislation regulating AI use in the Middle East.

The United Kingdom currently does not have specific legislation regarding the use of AI. The UK Government, however, set out guidelines on AI and data protection following the March 2023 publication of its white paper 'A pro-innovation approach to AI regulation'. The Government has put forward an AI framework containing regulatory guidance that conforms to the Organisation for Economic Co-operation and Development (OECD) principles for the ethical use of AI.

In November 2023, the UK published a set of global guidelines on the secure development of AI technology, which were developed by the UK National Cyber Security Centre and aim to help developers ensure that cyber security is a precondition of AI development. These guidelines were endorsed by agencies from 17 other countries, including the United States.

President Biden had previously issued his own Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence in the US on 30 October 2023. The American authorities have already indicated that they will pursue a more decentralised, sector-specific approach to AI regulation, relying on non-binding recommended actions rather than the binding rules of the proposed EU legislation.

It seems clear that each country will pursue a different path in regulating AI, dependent on its history and regulatory philosophy; for the moment, at least, there appears to be wide agreement among countries on the foundational principles. In the future, it will be important to monitor whether regulatory arbitrage comes into play: that is, whether corporations will seek out jurisdictions with less stringent regulatory frameworks.

HOW AI WILL TRANSFORM INSURANCE ACROSS THE INDUSTRY

There is broad consensus from leading industry commentators, including GILC member firms, that AI will play an ever-increasing role in the full lifecycle of the insurance process – from initial customer interactions to better risk prediction and analysis, to the underwriting and design of tailored insurance solutions, and improvements in the efficiency and speed of claims management.

As Virginie Liebermann, lawyer at Molitor Legal in Luxembourg, explains: "Major players in the insurance sector often emphasise the transformative and innovative potential of the technology, as well as the benefits it can bring for the market, such as efficiency and time savings."

According to Sakate Khaitan, partner at Khaitan Legal Associates in India: "AI is an opportunity to provide quicker claims service, better underwriting, innovative products, and better insurance administration. We are of the view that the use of AI for repetitive and non-value-adding, or low-value-adding transactions can increase efficiency and save costs. AI is also likely to be used by market participants as a competitive advantage. The race is on – we wait to see who the winners will be."


Originally published by GILC, February 2024

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.
