AI Governance Considerations for Long-Term Care

By Patricia Nguyen and Dennis Lu

Long-Term Care News, August 2024

In the long-term care (LTC) and broader insurance industry, interest in artificial intelligence (AI) continues to expand as carriers look for ways to innovate or supercharge business processes. At this year’s Intercompany Long Term Care Insurance Conference, AI and other technological advancements were among the most prominent topics, with a staggering 20% of sessions dedicated to various aspects of AI.

As insurers expand AI applications, concerns around potential adverse impacts, such as unfair bias and discrimination, have prompted regulators to introduce new guidelines in the United States. Some are specific to insurance, while others apply to multiple sectors. This article gives a brief overview of the regulatory landscape and governance considerations for LTC insurers currently using (or expecting to use) AI. A companion article, planned for a future issue of Long-Term Care News, will focus on technical applications to improve transparency and enable intentional governance of AI models.

Here are some helpful definitions of acronyms used in this article:

  • Artificial intelligence (AI) is a broad concept that refers to enabling a machine or system to sense, reason, act or adapt like a human.
  • Machine learning (ML) is an application of AI that allows machines to extract knowledge from data and learn from it autonomously.[1] It has vast applications in insurance—from underwriting to in-force management.
  • Generative AI (GenAI) is another application of AI that insurers are exploring to improve the efficiency and insightfulness of customer service interactions.

The Current AI Regulatory Landscape

In March 2024, the European Union (EU) adopted the AI Act, widely considered the world’s first comprehensive AI regulatory framework. Compared to the EU, the AI regulatory landscape in the US is still in its infancy, with a patchwork of federal and local regulations addressing different AI concerns, many of which apply to insurers.

Federal Regulations

The year 2023 brought a new wave of federal AI regulations applicable to the insurance industry. In October, President Biden issued an executive order to promote AI development while establishing guidelines for federal agencies to follow when designing, acquiring, deploying and overseeing AI systems. Among other objectives, this executive order seeks to establish testing standards to minimize AI-driven risks to infrastructure, cybersecurity, social equity and data privacy. Most insurers handle sensitive consumer data and will therefore benefit from the leading practices this executive order is expected to produce.

State Regulations

In December 2023, just a few months after President Biden’s executive order, the National Association of Insurance Commissioners (NAIC) adopted a model bulletin that addresses the increasing integration of AI in insurance operations, particularly in underwriting, claims processing, risk assessment and customer interactions.[2] At a high level, insurers are expected to develop, implement and maintain a written program for the responsible use of AI. LTC insurers using AI systems to support decision-making processes (such as digital underwriting and fraud analytics) are expected to align their governance framework with the expectations outlined in the model bulletin.

Since the adoption of the NAIC bulletin, states have begun introducing AI regulations for the insurance industry. So far, 11 jurisdictions have adopted regulations based on the NAIC model.[3] Additionally, regulators in other jurisdictions have issued their own warnings, placing significant obligations on insurers to ensure that their models and data collection methods are not unfairly discriminatory.

In November 2023, the Colorado Division of Insurance (CDOI) adopted a first-of-its-kind regulation establishing governance and risk management requirements for life insurers with AI models that use external consumer data and information sources.[4] In addition, the CDOI is working on a separate reporting regulation that would require insurers to perform quantitative testing on AI models.[5] If approved, the new regulation will impact LTC insurers who use external consumer data, including socioeconomic proxies and health history, in AI-driven applications.

Considerations for AI Governance Frameworks

In 2023, Oliver Wyman released its inaugural LTC Predictive Analytics Survey, which included participation from 28 LTC carriers. It found that 60% of survey participants currently use predictive analytics for LTC applications. About 30% of these carriers cited regulatory requirements as an external hurdle to adopting predictive analytics and/or AI, and nearly 50% did not yet have fully formed governance processes targeted at unfair discrimination and unintended bias. Considering the ongoing expansion of AI applications in LTC and evolving regulatory guidance focused on risk governance, it is increasingly vital for insurers to develop a robust and well-documented governance framework for their AI models.

The following list outlines several key considerations for building an AI governance framework. Insurers may also reference the CDOI risk management requirements and NAIC model bulletin, which provide robust guidelines related to AI model oversight, risk assessment and documentation.

Avoid unfair discrimination and unintended bias. As an initial step, LTC insurers should establish internal guidelines on (1) protected characteristics that should be excluded from AI models and (2) prohibited AI use cases. This aligns with several existing regulations, such as Georgia House Bill 887, which prohibits insurance coverage decisions from being solely based on AI tools.[6]

In addition, insurers can mitigate unfair discrimination by systematically examining the data used to train AI models and removing biased variables. One method for identifying a biased variable is to evaluate the variable’s predictive performance across different ethnic and racial subgroups. For example, a particular variable may be a reliable predictor of LTC claim incidence for an insured population composed predominantly of a single race. However, the same variable may produce systematically biased predictions for minority subgroups, which can go undetected if model accuracy is evaluated only at the total population level.
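One practical way to perform such a check is to compare actual outcomes to model-expected outcomes separately for each subgroup as well as in total. Below is a minimal Python/pandas sketch of an actual-to-expected (A/E) comparison by subgroup; the column names (“actual_claim,” “predicted_claim_prob,” “subgroup”) and the example data are hypothetical placeholders, not a prescribed methodology.

    # Minimal sketch: actual-to-expected (A/E) claim incidence by subgroup.
    # Column names and example data are hypothetical placeholders.
    import pandas as pd

    def ae_by_subgroup(df: pd.DataFrame,
                       actual_col: str = "actual_claim",
                       predicted_col: str = "predicted_claim_prob",
                       group_col: str = "subgroup") -> pd.DataFrame:
        """Return A/E ratios by subgroup and for the total population."""
        grouped = df.groupby(group_col).agg(
            actual=(actual_col, "sum"),
            expected=(predicted_col, "sum"),
            exposures=(actual_col, "size"),
        )
        grouped["a_to_e"] = grouped["actual"] / grouped["expected"]

        total = pd.DataFrame({
            "actual": [df[actual_col].sum()],
            "expected": [df[predicted_col].sum()],
            "exposures": [len(df)],
        }, index=["Total"])
        total["a_to_e"] = total["actual"] / total["expected"]

        return pd.concat([grouped, total])

    # Toy example: an A/E ratio far from 1.0 for one subgroup can signal
    # bias that a total-population accuracy check would miss.
    example = pd.DataFrame({
        "actual_claim": [1, 0, 0, 1, 0, 1],
        "predicted_claim_prob": [0.6, 0.2, 0.1, 0.4, 0.3, 0.5],
        "subgroup": ["A", "A", "B", "B", "C", "C"],
    })
    print(ae_by_subgroup(example))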

Companies can further mitigate the risk of unfair discrimination by implementing prescriptive testing to determine if their AI models satisfy internal “fairness” standards. The CDOI’s draft regulation on quantitative testing of AI-driven underwriting models can serve as a reference point for insurers interested in defining measurable “fairness” metrics.
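As one illustration of a measurable standard, the sketch below computes an “adverse impact ratio” by comparing each subgroup’s favorable-outcome rate to that of the most favored subgroup. The 0.8 threshold echoes the four-fifths rule used in other contexts and is shown purely as a placeholder internal standard, not a CDOI requirement; the column names and data are hypothetical.

    # Illustrative only: a simple adverse impact ratio test against an
    # assumed internal threshold. Not a regulatory requirement.
    import pandas as pd

    def adverse_impact_ratios(df: pd.DataFrame,
                              outcome_col: str = "approved",   # hypothetical column
                              group_col: str = "subgroup",     # hypothetical column
                              threshold: float = 0.8) -> pd.DataFrame:
        """Compare each subgroup's favorable-outcome rate to the highest rate."""
        rates = df.groupby(group_col)[outcome_col].mean()
        result = pd.DataFrame({
            "favorable_rate": rates,
            "impact_ratio": rates / rates.max(),
        })
        result["passes_internal_standard"] = result["impact_ratio"] >= threshold
        return result

    decisions = pd.DataFrame({
        "approved": [1, 1, 0, 1, 0, 0, 1, 0],
        "subgroup": ["A", "A", "A", "A", "B", "B", "B", "B"],
    })
    print(adverse_impact_ratios(decisions))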

Finally, unintended bias in AI systems that results in unfair discrimination can cause substantial financial and reputational damage to insurers. Bias audits and safeguards are therefore a vital part of any pre-production AI governance checklist.

Perform risk assessments on AI and machine learning (ML) applications. While there is no single best practice for risk-ranking AI applications, insurers can leverage local and global regulatory guidance to create their own rubric. The CDOI’s regulation, for example, mandates that insurers establish a documented rubric for assessing and prioritizing AI models that use external consumer data and that reasonable consideration be given to consumer impact. Additionally, the EU AI Act, a risk-based regulation, provides a framework for companies to categorize AI applications into Unacceptable, High, Limited and Minimal risk categories.[7]

Both the EU AI Act and the CDOI’s AI regulation suggest that high-risk models typically (1) directly profile individuals and (2) have access to protected data. Under this classification, a model used for in-force experience studies may be considered low risk, yet the same model deployed for accelerated underwriting may be considered high risk because of its potential consumer impact. Common AI models used by LTC insurers that might be considered high risk include those used for underwriting, fraud detection or claim adjudication.
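The sketch below shows what a simple, hypothetical risk-ranking rubric along these lines might look like. The criteria, tier names and example classifications are illustrative assumptions inspired by the EU AI Act tiers and the CDOI’s emphasis on consumer impact; they are not regulatory definitions.

    # Hypothetical risk-ranking rubric; criteria and tiers are illustrative.
    from dataclasses import dataclass

    @dataclass
    class AIModelProfile:
        name: str
        profiles_individuals: bool       # does the model directly profile consumers?
        uses_protected_data: bool        # does it access protected or sensitive data?
        affects_consumer_outcomes: bool  # e.g., underwriting, claims, fraud decisions

    def risk_tier(model: AIModelProfile) -> str:
        """Assign a simple High/Limited/Minimal tier based on the rubric above."""
        if model.profiles_individuals and (model.uses_protected_data
                                           or model.affects_consumer_outcomes):
            return "High"
        if model.profiles_individuals or model.uses_protected_data:
            return "Limited"
        return "Minimal"

    # The same underlying model can land in different tiers depending on its use.
    experience_study = AIModelProfile("in-force experience study", False, False, False)
    accelerated_uw = AIModelProfile("accelerated underwriting", True, True, True)
    print(risk_tier(experience_study))  # Minimal
    print(risk_tier(accelerated_uw))    # High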

Navigate judgment through process. The NAIC model bulletin requires insurers to maintain well-documented processes and controls to be followed at each stage of an AI system life cycle, from development to decommissioning. For instance, setting up ML models introduces new areas of judgment that can be minimized with robust documentation and controls. One such area is hyperparameter tuning in gradient boosting machines (GBM), an ML technique several LTC insurers use for experience analysis. Hyperparameters such as tree depth and the number of fitting iterations control the model’s structure and performance, and the modeler’s judgment in setting them can materially affect model results. Processes and controls should therefore require model tuning to be performed systematically.
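As an example of systematic tuning, the sketch below uses a documented search grid, a fixed scoring metric and a fixed random seed with scikit-learn’s GridSearchCV, so that GBM tuning is reproducible and auditable rather than ad hoc. The grid values and the synthetic data are illustrative assumptions.

    # Minimal sketch: systematic, documented hyperparameter tuning of a GBM.
    from sklearn.datasets import make_classification
    from sklearn.ensemble import GradientBoostingClassifier
    from sklearn.model_selection import GridSearchCV

    # Synthetic stand-in for training data (features X, target y).
    X, y = make_classification(n_samples=500, n_features=10, random_state=42)

    param_grid = {
        "max_depth": [2, 3, 4],           # limits tree depth
        "n_estimators": [100, 300, 500],  # limits the number of fitting iterations
        "learning_rate": [0.01, 0.05, 0.1],
    }

    search = GridSearchCV(
        estimator=GradientBoostingClassifier(random_state=42),
        param_grid=param_grid,
        scoring="roc_auc",  # metric agreed in the governance process
        cv=5,               # cross-validation folds
    )
    search.fit(X, y)
    print(search.best_params_)  # record the selected values in model documentation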

Build up an arsenal of validation techniques. The black-box nature of certain AI techniques can make it challenging for insurers to identify sources of unfair bias, including the training data, model design assumptions and the inadvertent inclusion of biased predictors. To gain comfort with AI model outcomes, actuaries should familiarize themselves with a variety of model visualization and validation techniques. Designing a robust analytics platform can also ease the interpretation and validation of AI models. For example, ML models often deemed “black boxes,” such as GBM, can be interpreted using Shapley values.
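The sketch below illustrates Shapley-value interpretation of a GBM using the open-source shap package (assumed to be installed); the model and data are synthetic and purely illustrative.

    # Minimal sketch: interpreting a GBM with Shapley values via the shap package.
    import shap
    from sklearn.datasets import make_regression
    from sklearn.ensemble import GradientBoostingRegressor

    # Synthetic stand-in for modeling data.
    X, y = make_regression(n_samples=500, n_features=8, random_state=0)
    model = GradientBoostingRegressor(random_state=0).fit(X, y)

    explainer = shap.TreeExplainer(model)   # efficient explainer for tree ensembles
    shap_values = explainer.shap_values(X)  # per-record, per-feature contributions

    # Global view: mean absolute Shapley value indicates each feature's influence.
    shap.summary_plot(shap_values, X)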

The CDOI regulation also requires that insurers monitor and document the performance of their AI algorithms and predictive models over time to account for model drift. Validation techniques should enhance, not replace, existing methods such as cross-validation, actual-to-expected analysis and parallel testing against traditional models, which are still fundamental to model validation and governance.
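One simple way to operationalize drift monitoring is to track A/E ratios by period and flag any period that falls outside an agreed tolerance band, as in the sketch below. The column names, example figures and 5% tolerance are illustrative assumptions.

    # Minimal sketch: flag periods whose A/E ratio drifts outside a tolerance band.
    import pandas as pd

    def flag_drift(monitoring: pd.DataFrame, tolerance: float = 0.05) -> pd.DataFrame:
        """Flag periods whose A/E ratio moves outside 1 +/- tolerance."""
        out = monitoring.copy()
        out["a_to_e"] = out["actual"] / out["expected"]
        out["drift_flag"] = (out["a_to_e"] - 1.0).abs() > tolerance
        return out

    history = pd.DataFrame({
        "period": ["2023Q1", "2023Q2", "2023Q3", "2023Q4"],
        "actual": [105, 98, 110, 131],
        "expected": [100, 100, 105, 108],
    })
    print(flag_drift(history))  # the final period would be flagged for review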

Conclusions

Currently, much of the responsibility for developing a compliant governance framework falls upon insurers as existing US regulatory guidance is more descriptive than prescriptive. Looking forward, the more prescriptive nature of the EU AI Act may influence future AI regulations in the US, especially considering ongoing cooperation between the EU and the US as part of the Trade and Technology Council.[8] Additionally, state insurance departments may refine their AI regulations to include more specificity or clarifications as AI applications in the industry evolve.

Stay tuned for the companion article, “AI Governance Toolbox for Long-Term Care Insurers,” which will provide helpful tips for developing and enhancing AI governance frameworks through technical case studies.

Statements of fact and opinions expressed herein are those of the individual authors and are not necessarily those of the Society of Actuaries, the editors, or the respective authors’ employers.


Patricia Nguyen, FSA, CERA, is a manager with Oliver Wyman's Actuarial Practice. Patricia can be reached at patricia.nguyen@oliverwyman.com.

Dennis Lu, FSA, FCIA, CERA, is a senior manager with Oliver Wyman's Actuarial Practice. Dennis can be reached at dennis.lu@oliverwyman.com.


Endnotes

[1] “AI vs. Machine Learning: How Do They Differ?,” Google Cloud, n.d., https://cloud.google.com/learn/artificial-intelligence-vs-machine-learning.

[2] “NAIC Model Bulletin: Use of Artificial Intelligence Systems by Insurers,” NAIC, Dec. 2, 2023, https://content.naic.org/sites/default/files/inline-files/2023-12-4%20Model%20Bulletin_Adopted_0.pdf.

[3] As of April 2024, 11 states have adopted the NAIC model: Alaska, Connecticut, Illinois, Kentucky, Maryland, Nevada, New Hampshire, Pennsylvania, Rhode Island, Vermont and Washington. “Implementation of NAIC Model Bulletin: Use of Artificial Intelligence Systems by Insurers,” NAIC, Apr. 30, 2024, https://content.naic.org/sites/default/files/inline-files/AI%20Model%20Bulletin%20-%20April%202024.pdf.

[4] Department of Regulatory Agencies, Division of Insurance, “Regulation 10-1-1 Governance and Risk Management Framework Requirements for Life Insurers’ Use of External Consumer Data and Information Sources, Algorithms, and Predictive Models,” State of Colorado, Nov. 14, 2023, https://www.sos.state.co.us/CCR/GenerateRulePdf.do?ruleVersionId=11153.

[5] Department of Regulatory Agencies, Division of Insurance, “Draft Proposed Concerning Quantitative Testing of External Consumer Data and Information Sources, Algorithms, and Predictive Models Used for Life Insurance Underwriting for Unfairly Discriminatory Outcomes,” Google Drive, n.d., https://drive.google.com/file/d/1BMFuRKbh39Q7YckPqrhrCRuWp29vJ44O/view.

[6] Mandisha A. Thomas, House Bill 887, Georgia General Assembly, Jan. 9, 2024, https://www.legis.ga.gov/api/legislation/document/20232024/220941.

[7] Tambiama Madiega, “Artificial Intelligence Act,” Briefing: EU Legislation in Progress, European Parliament, Mar. 2024, https://www.europarl.europa.eu/RegData/etudes/BRIE/2021/698792/EPRS_BRI(2021)698792_EN.pdf.

[8] The EU and US are collaborating on the implementation of a joint road map for evaluation and measurement tools related to trustworthy AI and risk management. Marcin Szczepański, “United States Approach to Artificial Intelligence,” European Parliament, Jan. 2024, https://www.europarl.europa.eu/RegData/etudes/ATAG/2024/757605/EPRS_ATA(2024)757605_EN.pdf.