Charlee.ai and Reducing Bias in Artificial Intelligence – Part 1

“Be fair. Treat the other man as you would be treated.” Fairness is one of the earliest lessons we learn and an essential part of building character. Treating others as we would like to be treated lets small acts of kindness grow into empathy, fairness, and freedom from prejudice in the decisions we make at work and in our personal lives.

Bias, however, is a natural human trait that creeps into whatever we do, at every age. As adults, we can become aware of it and try to keep it from coloring our perspectives, but it still finds its way into almost everything we make or create. Artificial intelligence (AI) driven systems – the creations of humans – are therefore subject to the biases of their human creators. Often unwittingly, we bake bias into systems through how they are configured, by training them on biased data, or by defining them with ‘rules’ written by experts carrying unconscious yet implicit biases. Knowing the inflection points at which bias can creep in helps us build unbiased underlying AI algorithms, even if removing bias entirely from our habituated minds seems impossible.

Technology and machines, as products of human hands, are bound to inherit our biases, and the artificial intelligence we create is no exception. Yet AI is revolutionizing the insurance landscape, bringing more predictability, pattern recognition, and enhanced performance to our collective work. Insurtech, with its cutting-edge AI solutions and techniques, is leading the charge, touching every aspect of an insurance carrier’s work. The most profound change is the transformation of the claims workflow, at a speed that is both life-altering and innovative. The answer, therefore, lies in balancing what we feed into the AI system while maintaining the integrity of our standard processes and traditional systems, so that every insurance task we undertake promotes greater fairness and equality.

What is Artificial Intelligence for Insurtech?

AI is a branch of computer science devoted to developing data processing systems that perform reasoning, learning, and self-improvement – functions typically associated with human intelligence. In simple terms, AI mimics the reasoning and problem-solving of the human mind. It streamlines processes, injects predictability, automates systems, and simplifies complexity to ensure smooth functioning for different insurance stakeholders. A task such as assessing smartphone photos in an auto insurance claim, which may take a person several minutes, can be done by an AI system in a matter of seconds. With this technology, insurance is shifting from a ‘detect and repair’ model to a ‘predict and prevent’ one, transforming everything across the industry, from processes to workflows to decisions. Artificial intelligence is currently used for claims processing, underwriting, fraud detection, and enhancing the overall customer experience.

Essentials of Trust in the Insurance Domain

Insurance in the US dates back to the 1700s, when Benjamin Franklin helped establish the country’s first insurance company. It has evolved from a labor-intensive process into one segmented and structured to ensure better, more comprehensive coverage and customer interaction. Over the years, it has become non-negotiable in many aspects of life, from health to travel. Besides stabilizing economies, insurers and their policies are relied upon to safeguard and protect people’s lives and assets during loss or catastrophe. An intricately woven piece of society’s social and economic fabric, insurance is and has always been one of the most regulated and scrutinized industries, with trust forming the foundation on which it is based.

For several decades now, there has been an inherent trust and relative transparency in the fundamentals of an insurance purchase. Purchasers know that if they live in a frequently disaster-affected area or have a history of speeding fines, premiums are high, and vice versa, with other factors considered. However, as insurers adopt proprietary advances in AI and big data, and as insurance grows more complex, new questions of ethics and bias are emerging.

Research suggests it is easier to program bias out of machines than out of human minds. A society driven by human thinking faces rapidly changing dynamics across races, genders, and social statuses, and must decide how those dynamics are addressed. Questions are being asked: Where is ‘big data’ being used? How is it being used? What factors influence large language models and other AI models, and how do they affect premiums and coverage? There is a growing effort to ensure AI plays out fairly for all concerned, with measures in place to minimize bias as much as possible. Insurtech companies must take disciplined measures to deliver on this expectation.

What is Bias in Artificial Intelligence?

The Oxford Dictionary of English (2nd Edition) defines bias as an inclination or prejudice for or against one person or group, especially unfairly. Bias is further defined as a systematic distortion of a statistical result due to a factor not allowed for in its derivation. So what exactly is bias in the context of AI and data? It is a hidden prejudice that unfairly favors or disfavors a particular person or group. In AI, this bias can sneak in and distort statistical results – a factor that was never accounted for when the system was created.
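To make the dictionary’s second definition concrete – a “systematic distortion of a statistical result due to a factor not allowed for in its derivation” – here is a minimal sketch using entirely hypothetical numbers. Two made-up groups of claims have different typical amounts; if the data-collection process only ever sees one group (the unaccounted-for factor), the estimated average is systematically distorted:

```python
import random

random.seed(42)

def mean(xs):
    return sum(xs) / len(xs)

# Hypothetical population: claim amounts from two groups with
# different typical costs (illustrative numbers only).
group_a = [random.gauss(1000, 100) for _ in range(5000)]
group_b = [random.gauss(2000, 100) for _ in range(5000)]
population = group_a + group_b

# Unbiased estimate: sample uniformly from the whole population.
fair_sample = random.sample(population, 1000)

# Biased estimate: the collection process only ever sees group A --
# the "factor not allowed for" in deriving the statistic.
skewed_sample = random.sample(group_a, 1000)

print(f"population mean   ~ {mean(population):.0f}")     # near 1500
print(f"fair sample mean  ~ {mean(fair_sample):.0f}")    # also near 1500
print(f"skewed sample mean ~ {mean(skewed_sample):.0f}") # near 1000
```

A model trained on the skewed sample would systematically underestimate claim costs for the unseen group – no malice required, just an unexamined collection process.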

It is a fact that humans are biased in their decision-making. Personal prejudices often seep into our decisions. The exposure and experiences of our upbringing change us, predisposing us against one group of people while favoring another. This can distort anything we do, distracting us from our core purpose. Research has shown, for example, that interviewers favor certain candidates, or that information from one case gets misapplied to another.

Conclusion

When AI is used in insurance, bias can creep in at several points. The underlying data is one of the foremost sources: AI models may be trained on data shaped by human decisions, or on data that carries the second-hand influence of social and historical inequities. How data is selected and collected can also let bias seep in. Recognizing bias and keeping it out of insurance is fundamentally complex; by its nature, insurance must discriminate against genuinely unreasonable risks to remain financially viable. However, defining and measuring fairness parameters, putting regulatory frameworks and processes in place, and using ethical AI platforms that uphold the domain’s integrity while mitigating and governing unfair treatment can go a long way toward bringing the insurance ecosystem together in this effort.
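“Defining and measuring fairness parameters” can start very simply. One widely used parameter is demographic parity: approval rates should not differ sharply between groups. The sketch below is a hypothetical illustration (the group labels, decisions, and 20-point gap are all invented for the example), not a prescribed Charlee.ai method:

```python
from collections import defaultdict

def approval_rates(decisions):
    """decisions: iterable of (group, approved) pairs -> approval rate per group."""
    totals = defaultdict(int)
    approved = defaultdict(int)
    for group, ok in decisions:
        totals[group] += 1
        approved[group] += int(ok)
    return {g: approved[g] / totals[g] for g in totals}

def parity_gap(decisions):
    """Largest difference in approval rate between any two groups."""
    rates = approval_rates(decisions).values()
    return max(rates) - min(rates)

# Hypothetical claim decisions: group A approved 80/100, group B 60/100.
decisions = ([("A", True)] * 80 + [("A", False)] * 20
             + [("B", True)] * 60 + [("B", False)] * 40)

gap = parity_gap(decisions)
print(f"demographic parity gap: {gap:.2f}")  # 0.80 - 0.60 = 0.20
```

A recurring gap like this does not prove unfairness by itself – legitimate risk factors may differ between groups – but it flags exactly where a governance process should look closer.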

Written by: Dr. Charmaine Kenita

Technology and Business Writer
