Part 1: Ethical AI – The Six Pillars For Insurance

Artificial intelligence (AI) is mentioned daily in the news media. Interest in AI grows every month as new ideas and products reach the marketplace. These new products are exciting and innovative, but will they stand up to scrutiny and authorization by the regulatory agencies overseeing technology in their specific business domains? In the insurance business, regulators have tackled the task of drafting model bulletins and regulations governing the use of AI in the underwriting and claims processes. Regulators are focused on the risk of biased AI and the consequent harm to consumers.

Ethical AI is a popular topic in the InsureTech community, and it must stay at the forefront of innovation and deployment of solutions for insurance carriers and stakeholders. The NAIC H Committee has drafted a model bulletin for insurance carriers to follow when using AI, with specific points on how carriers can be held accountable.

In 2019, the Health Ethics and Policy Lab[1] defined ethical AI from a global convergence of moral principles: 1. Transparency; 2. Justice; 3. Fairness; 4. Non-maleficence; 5. Responsibility; and 6. Privacy. These principles are on point and essential to developing and deploying AI solutions in our society.

Ethics is defined as the moral principles that govern a person's behavior or the conduct of an activity. A familiar expression of everyday ethics is "Do the right thing even when nobody is watching!" InsureTech companies cannot take shortcuts in developing their solutions, and insurance carriers must be ready to ask pointed questions about the models and insights delivered by AI.


In the context of InsureTech, the basic principles of ethical AI are:

1. Factual. The data AI analyzes for insights and predictions must be accurate and grounded in the information in the policy, the claim file, and any external (third-party) data applicable to the facts of the loss or to the underwriting process.

2. Accurate. The AI algorithms must be tested, vetted, and continuously checked for accuracy. Bias exists everywhere; it must be recognized and corrected so that analysis and predictions remain precise (a brief code sketch after this list illustrates one such check).

3. Explainable. There is no room for "black box" AI in the highly regulated insurance business, whether in underwriting or in claims. The consumer and the insurance carrier need to know how the AI makes decisions, predictions, and recommendations throughout the underwriting and claims processes.

4. Articulate. The AI must articulate how and where an insight, pattern, or recommendation will help the underwriter or claims examiner.

5. Transparent. The AI must be transparent in its process so the practitioner and end user can understand how it works and how it arrives at a decision.

6. Testable. The AI must support automated testing so that it generates better results as it continuously analyzes data. This is one of the basic concepts of machine learning.
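To make pillars 2, 3, and 6 concrete, the sketch below trains a toy claims-triage model on synthetic data, runs an automated accuracy check overall and per subgroup (a simple bias test), and prints the model's coefficients so its reasoning is inspectable. Everything here is a hypothetical illustration: the feature names, the "region" grouping, and the labeling rule are invented for the example, and scikit-learn's logistic regression stands in for whatever model a carrier actually deploys.

```python
# A minimal sketch, not a production underwriting system: a toy claims-triage
# model illustrating pillars 2 (Accurate), 3 (Explainable), and 6 (Testable).
# All feature names, the synthetic data, and the "region" subgroup below are
# illustrative assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(seed=42)
n = 2000

# Synthetic claim features: claim amount, policy age in years, prior claims.
X = np.column_stack([
    rng.normal(5000, 1500, n),   # claim_amount
    rng.integers(0, 20, n),      # policy_age
    rng.integers(0, 5, n),       # prior_claims
])
region = rng.integers(0, 2, n)   # hypothetical subgroup flag for bias checks

# Synthetic "needs manual review" label: a simple rule plus 5% label noise.
y = ((X[:, 0] > 6000) | (X[:, 2] >= 3)).astype(int)
y = np.where(rng.random(n) < 0.05, 1 - y, y)

X = StandardScaler().fit_transform(X)  # scale so coefficients are comparable
X_tr, X_te, y_tr, y_te, g_tr, g_te = train_test_split(
    X, y, region, test_size=0.3, random_state=0)

model = LogisticRegression().fit(X_tr, y_tr)
pred = model.predict(X_te)

# Pillars 2 and 6: an automated, repeatable accuracy check, overall and per
# subgroup -- a gap between subgroups is a bias signal to investigate.
print(f"overall accuracy: {accuracy_score(y_te, pred):.3f}")
for g in (0, 1):
    mask = g_te == g
    print(f"region {g} accuracy: {accuracy_score(y_te[mask], pred[mask]):.3f}")

# Pillar 3: a linear model's coefficients show how each input pushes the
# decision toward or away from "manual review" -- no black box.
for name, coef in zip(["claim_amount", "policy_age", "prior_claims"],
                      model.coef_[0]):
    print(f"{name}: {coef:+.3f}")
```

An interpretable model such as logistic regression is the simplest way to satisfy the "no black box" requirement; more complex models would need dedicated explanation tooling layered on top, and the subgroup check would run on every retraining cycle rather than once.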

We can therefore define ethical AI in insurance as the principled governance of machine learning and natural language processing used to analyze data for insights, so that consumers and insurance carriers are safeguarded accurately, transparently, and factually. AI's role in our lives is expanding rapidly, and it must be appropriately created, governed, and continuously monitored.

CONCLUSION

While ethics in InsureTech AI is defined by a set of rules, ensuring its technological integration is equally important. Translating human-centric theoretical principles into technology-led practical applications requires building AI engines that ensure transparency and trust throughout the claims management process without compromising regulatory frameworks at any point. In Part 2, we explore how to build AI ethically and what that entails.

[1] Jobin, A., Ienca, M., & Vayena, E. (2019). Artificial Intelligence: the global landscape of ethics guidelines. https://arxiv.org/ftp/arxiv/papers/1906/1906.11668.pdf


Written by: Dr. Charmaine Kenita

Technology and Business Writer

and

John Standish

Co-founder & CIO
