Charlee.ai and Reducing Bias in Artificial Intelligence – Part 2

In Part 1 of this two-part series, we examined how bias is fundamentally ingrained in any technology or process built by humans. Recognizing it in every decision we make is the only way to address and minimize its influence on our business processes. A conscious approach to understanding and addressing bias begins with incorporating compliance and risk concerns into every aspect of the insurance process, from data handling and risk assessment to decision systems and stakeholder education. Insurers should continuously evaluate system performance, recognize when problems of bias arise, prepare to deal with them, and take defined steps to identify issues and course-correct where necessary.

Among the foremost factors that influence how bias creeps into existing business systems are regulation and compliance, and the changes constantly being introduced into the regulatory landscape. Committing to fairness and transparency across the organization is a corporate responsibility: managing AI risks such as bias is a business problem, not just a technical or technological one.

The Current Regulatory Landscape

In December 2023, the National Association of Insurance Commissioners (NAIC) published a Model Bulletin for regulating AI used by insurance companies[1]. To date, thirteen states have enacted regulations for the use of AI by insurance companies. Charlee.ai contributed to developing and promulgating the Bulletin by providing training and expertise to the NAIC's AI, Innovation, and Big Data committees and to various state insurance departments. Charlee.ai continues to work with the NAIC committees and regulators to provide technology and domain expertise for AI in insurance workflows.

[1] https://content.naic.org/sites/default/files/inline-files/2023-12-4%20Model%20Bulletin_Adopted_0.pdf

Standards that Define Charlee’s Prevention of Bias in AI Large Language Models

Because insurance and trust are fundamentally interwoven, ensuring fairness and transparency in the use of AI platforms is a corporate responsibility that must be upheld at all times. Charlee™ has developed a set of techniques and standards for earning buyers’ trust and for monitoring the bias that can creep in anywhere during the AI lifecycle. Below are a few critical insights to consider about any AI platform one chooses to use.

  1. Be careful what goes into the AI’s Large Language Models:

Building the AI system entails careful consideration of what goes into the build. Machine learning models learn patterns from whatever data they are trained on or fed. Obvious columns such as zip code, claimant race, and gender should be avoided as inputs. However, bias can also creep in unexpectedly when other columns used as model inputs correlate highly with protected classes, effectively making those columns proxies for the protected classes. When choosing input columns for a model, it is essential to continuously test them for correlations with protected classes and potential bias.
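As a rough illustration of what such a proxy check can look like (a minimal sketch, not Charlee’s actual pipeline; the column names and the correlation threshold are assumptions), candidate input columns can be one-hot encoded and correlated against protected-class columns before they are accepted as model features:

```python
import pandas as pd

def flag_proxy_features(df: pd.DataFrame, candidate_cols: list,
                        protected_cols: list, threshold: float = 0.3) -> list:
    """Flag candidate input columns that correlate strongly with protected classes,
    making them potential proxies. Threshold and column names are illustrative."""
    flagged = []
    for feature in candidate_cols:
        for protected in protected_cols:
            # One-hot encode both columns so categorical values (e.g. zip code,
            # gender) can be compared with a plain Pearson correlation.
            pair = pd.get_dummies(df[[feature, protected]].astype(str), dtype=float)
            feat_cols = [c for c in pair.columns if c.startswith(feature + "_")]
            prot_cols = [c for c in pair.columns if c.startswith(protected + "_")]
            strongest = pair.corr().loc[feat_cols, prot_cols].abs().max().max()
            if strongest >= threshold:
                flagged.append((feature, protected, round(float(strongest), 3)))
    return flagged

# Example with hypothetical columns: zip_code would often be flagged as a proxy for race.
# proxies = flag_proxy_features(claims_df, ["zip_code", "vehicle_age"], ["claimant_race"])
```

Any column flagged this way deserves scrutiny before it is allowed into the feature set, even if it looks innocuous on its own.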

  2. Test the outputs for bias:

Consistently testing the platform and its outputs for bias ensures that results are not being unduly influenced or skewed during analysis. It is essential to run a correlation analysis of the language and machine learning (ML) model outputs against the columns in the data that represent protected classes, and to check regularly that the distribution of model outputs across the values in each of those columns is fair.
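A simple “four-fifths rule”-style check, run on each batch of scored claims, is one way to make that distribution test concrete. The sketch below is illustrative only; the score column, flag threshold, and protected-class column are assumptions, not Charlee’s production logic:

```python
import pandas as pd

def flag_rate_by_group(scored: pd.DataFrame, score_col: str,
                       protected_col: str, flag_threshold: float = 0.5) -> pd.DataFrame:
    """Compare the rate at which claims are flagged by the model across
    protected-class groups; large gaps warrant investigation."""
    work = scored.copy()
    work["flagged"] = work[score_col] >= flag_threshold
    report = (work.groupby(protected_col)["flagged"]
                  .agg(flag_rate="mean", n_claims="count"))
    # Ratio of each group's flag rate to the highest-flagged group; values well
    # below ~0.8 (the common four-fifths rule of thumb) signal a potential disparity.
    report["ratio_to_max"] = report["flag_rate"] / report["flag_rate"].max()
    return report

# Example with hypothetical columns:
# print(flag_rate_by_group(scored_claims, "litigation_score", "claimant_gender"))
```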

  3. Review the results with experts:

In addition to doing bias testing, having a domain expert review the model predictions adds another layer of checks to ensure the analysis always remains bias-free. It is critical to have them read through a set of claims/data and provide their expert opinions on the model results. Any concerns brought up need to be promptly addressed and reviewed.

  4. Provide the language model with guard rails:

Once expert inputs and reviews are received, it is important to use them to define guard rails for the model. These guard rails can be used both to train the model and to continuously validate its outputs. It also helps to keep an ensemble of models available to fall back on when a model’s output lands outside the guard rails.
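As one possible shape for such a mechanism (a minimal sketch under assumed bounds, model interfaces, and function names, not a description of Charlee’s internals), an out-of-range prediction can trigger a fallback to the ensemble and be tagged for review:

```python
from statistics import median

# Hypothetical guard rails derived from expert review: plausible bounds on a
# predicted claim severity for a given line of business and coverage.
SEVERITY_BOUNDS = (500.0, 250_000.0)

def predict_with_guard_rails(claim_features, primary_model, ensemble, bounds=SEVERITY_BOUNDS):
    """Return the primary model's prediction when it lies inside the expert-defined
    guard rails; otherwise fall back to the ensemble median and tag the result
    so the claim can be routed for expert review."""
    low, high = bounds
    prediction = primary_model.predict(claim_features)
    if low <= prediction <= high:
        return prediction, "primary"
    # Out-of-bounds output: consult the ensemble and clamp to the guard rails.
    fallback = median(model.predict(claim_features) for model in ensemble)
    return min(max(fallback, low), high), "ensemble_fallback"
```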

Role of Stakeholder Collaboration in Eliminating Bias

The insurance ecosystem is built around a siloed framework in which essential stakeholders are separated from one another by their primary tasks, decision-making authority, and depth of expertise. Human nature is prone to prejudice, which can creep into decision-making at any time. Similarly, in AI and other consequential decision systems, the technical nature of the work splits it into silos: data scientists and engineers develop the systems, risk and compliance teams evaluate them later, and each stakeholder works separately, so bias can enter the system unnoticed.

To combat this, all stakeholders, technical and subject-matter experts alike, must work together with trust and transparency built into the way work is done. The human processes and decisions surrounding a language model’s conception, development, and deployment have to come together to understand the goal, agree on expected outcomes, and reason about the best possible solution the model can offer for the business problem being addressed. Once the language and ML models are adopted, non-technical team members need user-friendly ways to monitor, access, and understand the decisions the AI makes. Insurers must continuously evaluate system performance, know when biases arise, and address them efficiently, correcting course as needed.

How Charlee.ai Models Have Been Developed to Minimize Bias

The Charlee AI platform is built within the regulatory compliance framework and overseen by insurance experts. It provides pre-trained models for AI-based claims analytics, including severity, litigation, attorney involvement predictions, and fraud indicators.

1. Model inputs: Charlee severity, litigation, and suspicious claims models have been developed in close collaboration with domain experts who have vetted the models at every stage. Domain experts have helped define the model inputs and the ontologies that help train Charlee’s NLP models. We have carefully selected columns that capture the injury and damage severity, as well as behavioral patterns that are independent of the group (race, gender, etc.) that the claimant belongs to.

2. Testing for bias: Charlee models are regularly and carefully tested with domain experts to prevent bias from creeping in. The model prediction distribution is weighed against the input data to ensure it is fair.

3. Expert reviews: Our business analysts and data scientists carefully select a good distribution of anomalous and statistically conformant claims for review by domain experts. Our domain experts review the selected claims in detail and validate the predictions. Any issues they point out are promptly investigated; a simplified sketch of how such a review sample might be drawn appears after this list.

4. Guard rails: Charlee employs a library of models based on LOB and coverage. Domain expert inputs are used to define guard rails for the predictions. Models are carefully selected, and an ensemble is configured to ensure compliance.
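As mentioned in point 3, a balanced review set can be drawn from both the anomalous and the statistically conformant ends of the score distribution. The sketch below is illustrative only; the score column, quantile cut-off, and sample sizes are assumptions, not Charlee’s actual procedure:

```python
import pandas as pd

def draw_review_sample(scored_claims: pd.DataFrame, score_col: str,
                       n_per_bucket: int = 25, anomaly_quantile: float = 0.95,
                       seed: int = 7) -> pd.DataFrame:
    """Draw a balanced sample of anomalous (high-scoring) and statistically
    conformant claims for domain-expert review. All parameters are illustrative."""
    cutoff = scored_claims[score_col].quantile(anomaly_quantile)
    anomalous = scored_claims[scored_claims[score_col] >= cutoff]
    conformant = scored_claims[scored_claims[score_col] < cutoff]
    sample = pd.concat([
        anomalous.sample(min(n_per_bucket, len(anomalous)), random_state=seed),
        conformant.sample(min(n_per_bucket, len(conformant)), random_state=seed),
    ])
    # Experts review each sampled claim and validate or challenge the prediction.
    return sample.assign(review_status="pending_expert_review")
```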

Conclusion

Bias can creep into AI models in various ways. Objective oversight and risk controls are best practices to follow when analyzing data models. Careful selection of model inputs, thorough tuning and testing, and close collaboration between domain experts, business analysts, and data scientists can help reduce model bias in AI. While it is human nature to be biased, insurers would do well to implement transparency, risk governance, and objectivity with a clear intent to do the right thing.

That is why, at Charlee.ai, we have defined and employ these best practices to ensure the compliance of our AI models.

Written by: Dr. Charmaine Kenita

Technology and Business Writer
