Part 2: How to Build AI Ethically

Training Data within AI

AI is a technological breakthrough, opening avenues for growing companies at scale, predicting change through insights and analytics, and replacing tedious manual work with faster data processing. It is sweeping across industries, changing how we work, live, travel, and learn. A tool this powerful, however, does more than improve lives, especially in the insurance and financial domain; it is closely tied to the outcomes of people's lives and property. When AI is used for tangible tasks, such as analyzing photos from a car accident or estimating the impact of floods and the associated costs after a cyclone, the work is objective, and the purpose is to make life easier for analysts. But when the data involved concerns humans, the question of ethics in AI for insurance must be addressed.

Human beings, and anything associated with them, generate data that can be analyzed and used for prediction. In insurance, nearly every data point is personal and sensitive, whether it is a policyholder's payment history or the personal details required for underwriting, such as address, SSN, contact numbers and email, and health conditions. Such data cannot be made available for everyone to access. Wherever a data point touches a person in their insurance journey, the question of ethics arises, because mishandling it means a breach of their personal information.
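As one illustration of keeping such data points from being exposed, a minimal sketch of PII redaction might look like the following. The field patterns and placeholder strings here are illustrative assumptions, not a production redaction policy:

```python
# Hypothetical sketch: masking personally identifiable information (PII)
# before insurance records leave a trusted boundary. The regex patterns
# and placeholders are illustrative assumptions only.
import re

SSN_RE = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")
EMAIL_RE = re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b")

def redact(text):
    """Replace SSNs and email addresses with fixed placeholders."""
    text = SSN_RE.sub("[SSN REDACTED]", text)
    text = EMAIL_RE.sub("[EMAIL REDACTED]", text)
    return text

note = "Policyholder 123-45-6789 can be reached at jane.doe@example.com."
print(redact(note))
# -> Policyholder [SSN REDACTED] can be reached at [EMAIL REDACTED].
```

In practice, redaction rules like these would be part of a broader data-governance layer (tokenization, access controls, audit logs) rather than a standalone script.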

Three Dangers of AI's Training Data

1. Mismanaged or Biased Data
Data drives the insurance industry. It informs decisions from critical underwriting and risk assessment to routine customer interaction and marketing. With the exponential growth of structured and unstructured data, technologies like AI that enable quicker, better-informed decisions have become even more critical, given the value they add to operational efficiency and the company's bottom line. Harnessing large volumes of data and deriving insights from it supports decision-making, reduces manual work, and brings more accuracy and efficiency to everyday insurance tasks. Big-data technologies like AI can strengthen an InsurTech platform's data-driven workflows, giving companies a competitive advantage and helping them serve customers better.

But AI, left to itself, can be very unregulated. If the data it is built on comes from questionable or non-credible sources, if its algorithms are flawed, or if the tools around it have loopholes, every insight it produces can be invalidated, with serious consequences for decision-making, claims handling, risk assessment, and fraud management. Whether to use AI in insurance is no longer the question, given the strides it has already made. But as its use expands with continuing technological advancement, so does the challenge of keeping it within a regulated framework. It has become imperative to ensure that data-driven decision-making tools like AI are used ethically, in ways that help the insurance management process and, in turn, customers, rather than harming them with bias. Above all, insurers must know whether such tools are being used ethically, so that they enhance customer trust instead of damaging it.

2. Current Insurance Regulations

The insurance industry is addressing the challenges of building ethically led insurance software within the ambit of customer data security and safety, guided by government regulations and oversight bodies at the global, regional, and country levels. AI tools built from open sources can leave insurance data vulnerable to security threats and hacks. The algorithms and processes that interpret and make decisions on that data must be vetted and safeguarded at every level, with human intervention made part of decision-making. While the legal view of ethics so far centers mainly on data privacy and the protection of personally identifiable information under industry laws, the onus is on the people building the AI tool to ensure that data is not compromised and that ethical handling of insurance tasks at every step is non-negotiable. Regulators have started to draft and promulgate rules for the use of AI in underwriting, focusing on two critical issues: one, discrimination against protected classes of people, and two, mandated testing of algorithms for bias. As AI advances and more insurance carriers adopt these technologies, more regulation and reporting will become the norm.
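As a rough illustration of what mandated bias testing can look like, the sketch below applies a simple four-fifths-rule check for disparate impact to hypothetical approval decisions. The group names, data, and the 0.8 threshold are assumptions for the example, not drawn from any specific regulation:

```python
# Hypothetical sketch: a four-fifths-rule check for disparate impact in
# model approval decisions. Groups, counts, and the 0.8 threshold are
# illustrative assumptions, not a regulatory test specification.

def approval_rate(decisions):
    """Fraction of 'approve' outcomes in a list of decisions."""
    return sum(1 for d in decisions if d == "approve") / len(decisions)

def disparate_impact_ratio(decisions_by_group, reference_group):
    """Ratio of each group's approval rate to the reference group's rate."""
    ref_rate = approval_rate(decisions_by_group[reference_group])
    return {
        group: approval_rate(decisions) / ref_rate
        for group, decisions in decisions_by_group.items()
    }

if __name__ == "__main__":
    decisions = {
        "group_a": ["approve"] * 80 + ["deny"] * 20,  # 80% approval
        "group_b": ["approve"] * 50 + ["deny"] * 50,  # 50% approval
    }
    ratios = disparate_impact_ratio(decisions, reference_group="group_a")
    for group, ratio in ratios.items():
        flag = "FLAG" if ratio < 0.8 else "ok"  # four-fifths rule
        print(f"{group}: ratio={ratio:.2f} ({flag})")
```

A real bias audit would go well beyond this single metric, examining multiple fairness measures, protected attributes, and model internals, but even a check this simple shows why regulators can mandate testing: it turns "is the algorithm fair?" into a measurable, reportable question.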


3. What Do You Need to Build AI Ethically?


CONCLUSION

Understanding the principles that define AI and how AI engines are built also leads us to ways such ethical AI can be implemented, how to measure the KPIs of such AI systems, and the resultant ROI to expect. Read more in the concluding Part 3 of the 'Ethical AI' series.


Written by: Dr. Charmaine Kenita

Technology and Business Writer

and

John Standish

Co-founder & CIO
