Insurers are increasingly turning to data science, using algorithms to automate personal lines underwriting and assist their underwriters with more complex risks. Some are also using them to analyse claims.
Usually these systems work well and yield meaningful results. But if they are inadvertently fed skewed data, they can produce biased outcomes.
The risk is much greater when the algorithm is developed through machine learning rather than explicitly programmed, because the system itself then determines what weighting to give to particular factors. If the resulting algorithm is a "black box" that cannot explain how it arrives at its decisions, that can be problematic. This is already causing concern in other areas, such as the US criminal justice system, where some courts use algorithms in setting sentences.
When the GDPR comes into force in May 2018, it will require companies, wherever in the world they operate, to give EU citizens "meaningful information about the logic" of automated decision-making processes. The ICO and equivalent data regulators across the EU expect companies to find "simple ways to tell the data subject about the rationale behind, or the criteria relied on in reaching the decision without necessarily always attempting a complex explanation of the algorithms used or disclosure of the full algorithm."
We make two predictions for next year.
One: insurance regulators will pay more attention to insurers' use of algorithms. And two: as insurers become increasingly dependent on the few specialist insurtech businesses that provide this technology, regulators will have to start thinking about how insurtech itself is best regulated.
You can read the rest of our insurance predictions here.
The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.