Reducing Bias in Artificial Intelligence for Insurance – Best Practices for Compliance
Artificial intelligence (AI) is becoming ubiquitous, an inseparable part of our processes, with the potential to revolutionize the insurance landscape by bringing more predictability, pattern recognition, and enhanced performance to our collective work. It helps streamline legacy systems, adding convenience and efficiency to our jobs and lives. InsureTech, with its cutting-edge AI solutions and techniques, is leading the charge, touching every aspect of an insurance carrier’s work. The most profound change is the transformation of the claims workflow, happening at a pace that is both disruptive and innovative. This new way of approaching standard processes and traditional systems defines the future of insurance.
What is Artificial Intelligence for InsureTech?
In simple terms, AI is a branch of computer science devoted to developing data processing systems that perform functions typically associated with human intelligence, such as reasoning, learning, and self-improvement. AI helps streamline processes, injects predictability, automates systems, and simplifies complexity to ensure smoother functioning for different insurance stakeholders. For example, a task such as assessing smartphone photos in an auto insurance claim, which may take a person several minutes, can be done in seconds by an AI system. Artificial intelligence is currently used in claims processing, underwriting, fraud detection, and customer service.
Essentials of Trust in the Insurance Domain
Insurance in the US goes back to the 1700s, when Benjamin Franklin helped found one of the country’s first insurance companies. Over the years, it has become a non-negotiable part of life, from health to travel. Besides stabilizing economies, insurers and their policies are relied upon to safeguard people’s lives and assets during loss or catastrophe. An intricately woven piece of society’s social and economic fabric, insurance is, and has always been, one of the most regulated and scrutinized industries, with trust forming the foundation on which it is built. For several decades, there has been an inherent trust and relative transparency in the fundamentals of an insurance purchase. Purchasers understand that if they live in a disaster-prone area or have a history of speeding fines, their premiums will be higher, and vice versa, with other factors also weighed.
However, as insurers adopt proprietary advances in AI and big data, and as insurance products grow more complex, new questions about ethics and bias are emerging.
Society is changing, and so are its expectations around race, gender, and social status, and how these are addressed. Questions are being asked: Where is ‘big data’ being used? How is it being used? What factors influence large language models, and how do they affect premiums and coverage? There is a growing effort to ensure AI plays out fairly for everyone concerned and to eliminate bias as far as possible. Delivering on that expectation, however, is a different ballgame altogether.
What is Bias in Artificial Intelligence?
The Oxford Dictionary of English (2nd Edition) defines bias as an inclination or prejudice for or against one person or group, especially one that is unfair. Bias is further defined as a systematic distortion of a statistical result due to a factor not allowed for in its derivation. So what exactly is bias in the context of AI and data? It is a hidden prejudice that unfairly favors or disfavors a particular person or group. In AI, that bias can creep in and distort statistical results, driven by factors that were never accounted for when the system was built.
Research shows that humans are biased in their decision-making: personal prejudices often seep into our decisions. Our upbringing and experiences predispose us against some groups while favoring others, and this can distort anything we do, distracting us from our core purpose.
When using AI in insurance, one must recognize that keeping bias out of insurance is fundamentally complex. By its nature, insurance must discriminate against unreasonable risks to remain financially viable. We can move forward, however, by putting regulatory frameworks and processes in place and by using ethical AI platforms that uphold the domain’s integrity while mitigating and governing unfair treatment and practices in insurance.
The Current Regulatory Landscape
In December 2023, the National Association of Insurance Commissioners (NAIC) published a Model Bulletin on the use of artificial intelligence systems by insurers[1]. To date, thirteen states have adopted regulations governing the use of AI by insurance companies. InsureTech AI platform Charlee.ai contributed to the development and promulgation of the bulletin by providing training and expertise to the NAIC’s AI, innovation, and big data committees and to various state insurance departments. Charlee.ai continues to work with NAIC committees and regulators, providing technology and domain expertise for AI in insurance workflows.
[1] https://content.naic.org/sites/default/files/inline-files/2023-12-4%20Model%20Bulletin_Adopted_0.pdf
Charlee Standards to Prevent Bias in Models
Because insurance and trust are fundamentally interwoven, ensuring fairness and transparency in the use of AI platforms is a corporate responsibility that must be upheld at all times. Charlee™ has developed a set of techniques and standards for maintaining buyers’ trust while monitoring for bias that may creep in anywhere in the AI intervention cycle. Below are a few critical insights from our thought leadership on what to look for in the AI platform one chooses to use.
- Be careful what goes into the AI’s Large Language Models:
Building an AI system entails careful consideration of what goes into it. Machine learning models continuously learn patterns from whatever data they are trained on. Columns such as zip code, claimant race, and gender should be avoided as inputs. Even so, bias can creep in unexpectedly when other input columns correlate highly with protected classes, effectively making those columns proxies for the protected classes. When choosing input columns for a model, it is essential to test them continuously for correlations with protected classes and potential bias (see the proxy-screening sketch after this list).
- Test the outputs for bias:
Consistently testing the platform and its outputs for bias ensures that results are not being unduly influenced or skewed during analysis. It is essential to run a correlation analysis of the language model’s outputs against the columns in the data that represent protected classes, and to check continuously that the distribution of model outputs across the values of each of those columns is fair (see the output-distribution sketch after this list).
- Review the results with experts:
In addition to bias testing, having a domain expert review the model predictions adds another layer of checks to keep the analysis bias-free. Experts should read through a representative set of claims and data and give their opinions on the model results. Any concerns they raise need to be promptly addressed and reviewed.
- Provide the language model with guard rails:
Once expert inputs and reviews are received, it is important to use them to define guard rails for the model. These guard rails can be used both to train the model and to continuously validate its outputs. It also helps to keep an ensemble of models on hand that can take over when the primary model’s output falls outside the guard rails (see the guard-rail sketch after this list).
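To make the proxy concern from the first point concrete, here is a minimal sketch in Python of how candidate input columns could be screened for association with protected-class columns before being fed to a model. It assumes a pandas DataFrame of claims data; the column names (zip_code, claimant_race, etc.) and the 0.3 threshold are illustrative assumptions, not Charlee.ai’s actual implementation.

```python
# Minimal sketch: screen candidate model inputs for proxy relationships
# with protected-class columns. Column names and threshold are hypothetical.
import pandas as pd
from scipy.stats import chi2_contingency


def cramers_v(x: pd.Series, y: pd.Series) -> float:
    """Cramér's V association between two categorical columns (0 = none, 1 = perfect)."""
    table = pd.crosstab(x, y)
    chi2 = chi2_contingency(table)[0]
    n = table.to_numpy().sum()
    r, k = table.shape
    return (chi2 / (n * (min(r, k) - 1))) ** 0.5


def screen_for_proxies(df: pd.DataFrame, candidate_cols, protected_cols, threshold=0.3):
    """Flag candidate input columns that associate strongly with any protected class."""
    flagged = []
    for col in candidate_cols:
        for prot in protected_cols:
            v = cramers_v(df[col], df[prot])
            if v >= threshold:
                flagged.append((col, prot, round(v, 3)))
    return flagged


# Example usage with hypothetical column names:
# claims = pd.read_csv("claims.csv")
# print(screen_for_proxies(claims,
#                          candidate_cols=["zip_code", "vehicle_type", "injury_code"],
#                          protected_cols=["claimant_race", "claimant_gender"]))
```

Any column flagged by such a screen would then be reviewed with domain experts before being kept, transformed, or dropped.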
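The second point, testing the outputs, can be illustrated with an equally simple distribution check: compare the rate of positive predictions (for example, "flag for review") across the values of a protected-class column. This is a generic sketch of the idea, not Charlee.ai’s code; the column names and the alert threshold are assumptions.

```python
# Minimal sketch: check whether positive model outputs are distributed
# fairly across the values of a protected-class column.
import pandas as pd


def positive_rate_by_group(df: pd.DataFrame, prediction_col: str, group_col: str) -> pd.Series:
    """Share of positive (1) predictions within each value of a protected-class column."""
    return df.groupby(group_col)[prediction_col].mean()


def demographic_parity_gap(df: pd.DataFrame, prediction_col: str, group_col: str) -> float:
    """Largest difference in positive-prediction rate between any two groups."""
    rates = positive_rate_by_group(df, prediction_col, group_col)
    return float(rates.max() - rates.min())


# Example usage with hypothetical data:
# scored = pd.DataFrame({"flagged": [1, 0, 1, 0, 0, 1],
#                        "claimant_gender": ["F", "F", "M", "M", "F", "M"]})
# print(positive_rate_by_group(scored, "flagged", "claimant_gender"))
# print(demographic_parity_gap(scored, "flagged", "claimant_gender"))  # e.g., alert if > 0.1
```

A large gap does not prove the model is biased, but it is the kind of signal that should trigger the expert review described in the third point.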
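Finally, the guard-rail idea in the last point can be sketched as a simple wrapper: if the primary model’s score falls outside the bounds the experts defined, fall back to an ensemble. The function names, bounds, and fallback rule (ensemble median) are illustrative assumptions, not a description of any specific production system.

```python
# Minimal sketch: validate a primary model's score against expert-defined
# guard rails and fall back to an ensemble when it lands out of range.
from statistics import median
from typing import Callable, Dict, Sequence


def guarded_prediction(features: Dict[str, float],
                       primary: Callable[[Dict[str, float]], float],
                       ensemble: Sequence[Callable[[Dict[str, float]], float]],
                       lower: float, upper: float) -> float:
    """Return the primary model's score if it is within the guard rails,
    otherwise return the median score of the ensemble."""
    score = primary(features)
    if lower <= score <= upper:
        return score
    return median(model(features) for model in ensemble)


# Example usage with toy stand-in models:
# primary = lambda f: 0.97          # out-of-range score triggers the fallback
# ensemble = [lambda f: 0.42, lambda f: 0.45, lambda f: 0.40]
# print(guarded_prediction({}, primary, ensemble, lower=0.0, upper=0.9))  # -> 0.42
```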
Role of Stakeholders in Eliminating Bias
The insurance ecosystem is built around a siloed framework in which key stakeholders are separated from one another by their primary tasks, decision-making authority, and depth of expertise. Human nature is prone to prejudice, which can creep into decision-making at any time. AI and other consequential decision systems are similarly siloed by the technical nature of the work: data scientists and engineers develop the systems, risk and compliance teams evaluate them later, and each stakeholder works separately, so bias can enter the system unnoticed.
To combat this, all stakeholders, technical and subject matter experts alike, must work together with trust and transparency built into the order of work. The human processes and decisions surrounding the language model’s conception, development, and deployment have to come together to understand the goal, agree on expected outcomes, and reason about the model’s best possible solution to the business problem at hand. Once the language model is adopted, non-technical team members need user-friendly ways to monitor, access, and understand the decisions the AI makes. Insurers must continuously evaluate system performance, recognize when biases arise, and address them efficiently, correcting course along the way.
How Charlee.ai Models Have Been Developed to Minimize Bias
The Charlee AI platform is built within a regulatory compliance framework and overseen by insurance experts. It provides pre-trained models for AI-based claims analytics, including severity, litigation, and attorney-involvement predictions, as well as fraud indicators.
- Model inputs:
Charlee’s severity, litigation, and suspicious-claims models have been developed in close collaboration with domain experts, who have vetted the models at every stage. Domain experts have helped define the model inputs and the ontologies used to train Charlee’s NLP models. We have carefully selected columns that capture injury and damage severity, as well as behavioral patterns, and that are independent of the group (race, gender, etc.) the claimant belongs to.
- Testing for bias:
Charlee models are regularly and carefully tested with domain experts to prevent bias from creeping in. The distribution of model predictions is weighed against the input data to ensure it is fair.
- Expert reviews:
Our business analysts and data scientists carefully select a good distribution of anomalous and statistically conformant claims for review by domain experts. Our domain experts review the selected claims in detail and validate the predictions. Any issues they point out are promptly investigated.
- Guard rails:
Charlee employs a library of models organized by line of business (LOB) and coverage. Domain expert inputs are used to define guard rails for the predictions, and models are carefully selected and configured into an ensemble to ensure compliance.
Conclusion
Despite the best systems, solutions, practices, and efforts, we are, after all, only human, and we depend on humans to oversee these systems. Mistakes do happen. Insurers face a new challenge: managing fairness and bias in technology head-on, addressing them continuously through risk governance, transparency, and a clear intent of objectivity at every step.
Written by: Dr. Charmaine Kenita
Technology and Business Writer