Part 3: How To Implement AI Ethically
It is the tool that maketh the man, goes the saying. In the case of artificial intelligence (AI) – the tool transforming how we analyze data and process information – the ramifications of its use are far-reaching, with life-altering consequences. Human intervention in business processes has long evolved through trial and error toward simpler processes and sharper critical thinking. In insurance, that has meant setting up functional workflows, changing process maps, reengineering the business, realigning resources, and more. But with the industry's exponential growth across products and lines over the decades, the multitude of processes needed to keep operations running smoothly, and the complexity of claims processes and the data insights required to keep up with workflows, human intervention alone is falling short of the quality and expertise needed. Analysts today are burdened with a deluge of claims, which leads to missed red flags, longer reconciliation cycles, increased payouts by insurers, and litigation. Using AI to analyze unstructured data such as claim notes and documents across many carriers and lines, identify patterns that pre-empt and mitigate potential loss, and warn insurers of insufficient reserves would go a long way toward reducing claims cycle time and expenses.
As with any new technology, AI raises concerns about unintended outcomes. This is where ethics plays a significant role, given how insurance touches people's everyday lives – from their health to their property, from their modes of transport to the businesses they operate, to the places where they live and work. How AI is leveraged thus defines whether its use is ethical or unethical. With humans making decisions, controlling processes, and putting checks and measures in place, business processes benefit most when the technology is governed by an ethical framework that safeguards the decisions humans make based on AI. Compliance and other legal and regulatory mandates go some way toward governing the use of AI, but they are constantly changing. And since it is humans, armed with decision-making authority and AI insights, who decide how the technology is used – from identifying data patterns and their relevance to building probability matrices and assessing risk – it is also human judgment alone that can draw best practices from past experiences and their outcomes. From measuring ROI and placing checks and balances throughout workflows to predicting the future behavior of claims so insurers can identify and intervene at critical points in the cycle, governing the use of such technology is essential.
While ethics is an all-encompassing subject with different connotations in different industries, in the insurance domain – and especially in InsureTech – the following three points are key to deploying and using AI ethically.
1. Implementation – It is one thing to design and code AI within regulatory frameworks, but another to implement it ethically. Implementation means the application must adhere to specific rules at every step. Because AI handles large amounts of structured and unstructured data, insurance processes must be governed by sound implementation throughout: data must be factual, accurately represent the claims and other circumstances, and remain subject to compliance and other regulatory practices at all times. Workflows that depend on outcomes from AI applications must include human-centric checks that can explain the decisions taken. For example, in claim scoring, a system-generated claim score must be clearly understood and implemented within workflows by people who understand the score criteria and can intervene when needed, as in the sketch below.
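To make the claim-scoring example concrete, here is a minimal Python sketch of a human-in-the-loop scoring gate. The rules, field names, functions such as `score_claim` and `route_claim`, and the 0.5 review threshold are hypothetical illustrations, not any particular vendor's implementation; the point is simply that scores above the threshold go to a human examiner together with the plain-language criteria behind them.

```python
# Minimal sketch of a human-in-the-loop claim-scoring gate.
# The scoring rules, field names, and thresholds are hypothetical.
from dataclasses import dataclass, field

@dataclass
class ClaimScore:
    claim_id: str
    score: float                                  # 0.0 (low risk) to 1.0 (high risk)
    reasons: list = field(default_factory=list)   # human-readable score criteria

def score_claim(claim: dict) -> ClaimScore:
    """Toy scoring: each triggered rule adds weight and a plain-language reason."""
    rules = [
        (claim.get("days_to_report", 0) > 30, 0.3, "Reported more than 30 days after loss"),
        (claim.get("prior_claims", 0) >= 3,   0.3, "Three or more prior claims"),
        (claim.get("amount", 0) > 50_000,     0.4, "Claimed amount exceeds $50,000"),
    ]
    score, reasons = 0.0, []
    for triggered, weight, reason in rules:
        if triggered:
            score += weight
            reasons.append(reason)
    return ClaimScore(claim["claim_id"], min(score, 1.0), reasons)

def route_claim(claim: dict, review_threshold: float = 0.5) -> str:
    """Route high scores to a human examiner; never auto-decide above the threshold."""
    result = score_claim(claim)
    if result.score >= review_threshold:
        print(f"{result.claim_id}: score {result.score:.2f} -> HUMAN REVIEW")
        for reason in result.reasons:
            print(f"  - {reason}")
        return "human_review"
    print(f"{result.claim_id}: score {result.score:.2f} -> straight-through processing")
    return "auto_process"

if __name__ == "__main__":
    route_claim({"claim_id": "CLM-1001", "days_to_report": 45,
                 "prior_claims": 4, "amount": 62_000})
    route_claim({"claim_id": "CLM-1002", "days_to_report": 5,
                 "prior_claims": 0, "amount": 3_200})
```

The design point is that the system never acts alone on a high score: the same criteria that produced the score are shown to the examiner, so the intervention described above is informed rather than blind.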
2. Human Interaction – The AI workflow within the claims system must be designed with human interaction in mind at every step. People will access the AI system, guide it, and feed it information, and they will shape how the results inferred from data drive future decisions. Humans are the critical component of the AI matrix. For example, fraud detection software must be able to explain the suspicious attributes or indicators that caused a claim to be scored or flagged as potentially fraudulent, and those claims must be reviewed against those indicators before any further action is taken in the automated workflow. Explainable AI must help people understand the logic behind the machine-learned patterns so they can accept or reject them as needed, as the sketch after this paragraph illustrates.
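Below is a similarly minimal sketch of the explanation-and-review loop described above. The indicator names, weights, and logging format are assumptions made for illustration; what matters is that the triggered indicators are surfaced to the reviewer and that the human's accept-or-reject decision is recorded alongside them.

```python
# Minimal sketch: surface the indicators behind a fraud flag and record the
# reviewer's accept/reject decision. Indicator names and weights are hypothetical.
from datetime import datetime, timezone

FRAUD_INDICATORS = {
    "duplicate_invoice":  0.45,
    "provider_watchlist": 0.35,
    "inconsistent_dates": 0.20,
}

def explain_flag(triggered: list[str]) -> list[tuple[str, float]]:
    """Return the triggered indicators sorted by their contribution to the flag."""
    return sorted(((name, FRAUD_INDICATORS[name]) for name in triggered),
                  key=lambda pair: pair[1], reverse=True)

def review_flag(claim_id: str, triggered: list[str], reviewer_accepts: bool) -> dict:
    """Log the explanation and the human decision; the machine never decides alone."""
    explanation = explain_flag(triggered)
    decision = {
        "claim_id": claim_id,
        "indicators": explanation,
        "reviewer_decision": "confirmed_suspicious" if reviewer_accepts else "cleared",
        "reviewed_at": datetime.now(timezone.utc).isoformat(),
    }
    print(f"{claim_id}: flagged on {[name for name, _ in explanation]} "
          f"-> {decision['reviewer_decision']}")
    return decision

if __name__ == "__main__":
    review_flag("CLM-2044", ["inconsistent_dates", "duplicate_invoice"], reviewer_accepts=True)
    review_flag("CLM-2045", ["provider_watchlist"], reviewer_accepts=False)
```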
3. Accurate and Timely Measurement of KPIs and Overall ROI of AI
Accurate – Data is a large canvas open to interpretation of all kinds. It can be misrepresented, misinterpreted, and skewed depending on the biases or prejudices of those drawing insights or conclusions from it. Ethics means ensuring that the most accurate, unbiased learnings are extracted from AI-driven technologies, upholding the integrity of workflows and processes and supporting fair conclusions that lead to accurate predictions in claims handling and risk assessment.
AI solutions that detect patterns must be periodically retrained, tested, and refreshed – this is key. Domain experts and super users must master the outcomes of AI-based insights and predictions, assessing them before they are implemented in current workflows so that staff can operate with due diligence. A simple health check of the kind sketched below can help decide when a refresh is due.
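The following sketch shows one way such a periodic health check might look. The metrics (holdout recall and a feature-drift score), the thresholds, and the 90-day refresh cadence are assumed values for illustration, not prescriptions.

```python
# Minimal sketch: a periodic health check that decides whether a pattern-detection
# model should be refreshed before it keeps feeding the claims workflow.
# The metrics, thresholds, and refresh cadence are illustrative assumptions.
from datetime import date, timedelta

def needs_refresh(last_trained: date,
                  recall_on_holdout: float,
                  feature_drift_score: float,
                  max_age_days: int = 90,
                  min_recall: float = 0.80,
                  max_drift: float = 0.25) -> tuple[bool, list[str]]:
    """Return (refresh?, reasons) so domain experts can see why a refresh is due."""
    reasons = []
    if (date.today() - last_trained).days > max_age_days:
        reasons.append(f"model older than {max_age_days} days")
    if recall_on_holdout < min_recall:
        reasons.append(f"holdout recall {recall_on_holdout:.2f} below {min_recall:.2f}")
    if feature_drift_score > max_drift:
        reasons.append(f"feature drift {feature_drift_score:.2f} above {max_drift:.2f}")
    return (bool(reasons), reasons)

if __name__ == "__main__":
    refresh, why = needs_refresh(last_trained=date.today() - timedelta(days=120),
                                 recall_on_holdout=0.74,
                                 feature_drift_score=0.31)
    print("Refresh required" if refresh else "Model healthy")
    for reason in why:
        print(" -", reason)
```

Because the check returns the reasons along with the verdict, the domain experts described above can see exactly why a model is being pulled for retraining rather than trusting an opaque flag.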
Timely – Time is critical in the management of insurance processes. Whether the subject is risk – learning from past risks and applying those lessons to understand new ones; resources – the people who must make decisions promptly, using AI to understand past, present, and future insights; or ROI – how well the new AI system is helping processes, measured in customer satisfaction, expense reduction, and other indicators of effectiveness – ethics overarches the monitoring and evaluation of such systems at every step.
AI solutions also have to be checked periodically against the financial goals established at the outset. ROI is time-sensitive: how it is measured, which models need to be refreshed and how often, and how and when results are tested and delivered all matter. Changes in workflows and operational behavior under this new, AI-insight-driven business process will yield different outcomes, which need to be measured periodically – for instance, along the lines of the sketch below.
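As one possible shape for such a periodic ROI check, the sketch below compares measured KPIs against the baseline and target figures agreed when the AI program was approved. The KPI names, targets, and measured values here are entirely hypothetical.

```python
# Minimal sketch: compare measured KPIs against the financial goals set when the
# AI program was approved. The KPI names, targets, and figures are hypothetical.

ROI_TARGETS = {
    "claims_cycle_days":       {"baseline": 21.0, "target": 14.0, "lower_is_better": True},
    "loss_adjustment_expense": {"baseline": 1_000_000, "target": 850_000, "lower_is_better": True},
    "customer_satisfaction":   {"baseline": 3.8, "target": 4.3, "lower_is_better": False},
}

def roi_report(measured: dict) -> None:
    """Print, per KPI, how far the measured value has moved toward its target."""
    for kpi, goal in ROI_TARGETS.items():
        value = measured[kpi]
        span = goal["target"] - goal["baseline"]
        progress = (value - goal["baseline"]) / span if span else 0.0
        on_track = (value <= goal["target"]) if goal["lower_is_better"] else (value >= goal["target"])
        print(f"{kpi}: {value} (baseline {goal['baseline']}, target {goal['target']}) "
              f"-> {progress:.0%} of goal, {'on track' if on_track else 'needs attention'}")

if __name__ == "__main__":
    roi_report({"claims_cycle_days": 16.5,
                "loss_adjustment_expense": 910_000,
                "customer_satisfaction": 4.1})
```

Running a report like this on a fixed cadence keeps the ROI conversation anchored to the goals that justified the investment, rather than to whatever metric happens to look best at the moment.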
CONCLUSION
AI is a godsend for InsureTech in the many ways it serves claims examiners and the versatile insights it provides, expanding how they think about and use data to simplify workflows. Besides making jobs easier, it helps teams improve performance, focus on critical matters, spend energy only on necessary tasks, and significantly reduce spending. When governed by ethics, it provides a level playing field for insurers while ensuring that data gathering is neither contaminated nor biased toward a particular line of thinking.
Written by: Dr. Charmaine Kenita
Technology and Business Writer
and
John Standish
Co-founder & CIO