AI Risk Management Frameworks: An Expert Panel Discussion
April 2025
Author
Ronald L. Poon-Affat, FSA, FIA, MAAA
Executive Summary
The Society of Actuaries’ Artificial Intelligence Working Group convened a panel of experts to explore the role of Artificial Intelligence (AI) in actuarial practice, with a particular focus on risk management, regulatory compliance, and ethical considerations. AI has the potential to revolutionize actuarial science, insurance, and healthcare by enhancing predictive modeling, automating decision-making, and improving efficiency. However, as AI adoption grows, so do concerns about fairness, transparency, security, and unintended biases in AI-driven models.
A key focus of this discussion was the National Institute of Standards and Technology (NIST) AI Risk Management Framework (AI RMF) and its Generative AI Profile, which provide guidelines for developing and deploying AI responsibly. These frameworks are particularly relevant to industries where actuarial work plays a crucial role, including insurance and healthcare, where AI is increasingly being used for underwriting, risk assessment, claims processing, and fraud detection.
The AI RMF is a voluntary, non-regulatory framework developed by NIST to help organizations identify, assess, and mitigate AI-related risks. The Generative AI Profile extends this framework to address risks specific to generative AI models, such as large language models (LLMs) and other deep learning systems. These frameworks are important for actuaries, insurers, and healthcare professionals because they:
Support Risk Governance and Compliance: Aligning AI risk management with existing frameworks, including those set by the National Association of Insurance Commissioners (NAIC) and international standards bodies.
Mitigate Bias and Promote Fairness: Ensuring AI-driven models do not introduce unintended biases that could lead to unfair discrimination in insurance pricing, claims, or healthcare outcomes (illustrated by the first sketch following this list).
Enhance Model Transparency and Explainability: Addressing the “black box” nature of AI models to improve actuarial oversight, regulatory reporting, and policyholder trust.
Strengthen Data Integrity and Cybersecurity: Ensuring secure data handling in AI applications, particularly in life insurance and healthcare, where sensitive personal data is involved.
Improve AI Reliability and Resilience: Encouraging ongoing monitoring, stress testing, and validation of AI models to ensure long-term reliability in risk modeling and financial forecasting (illustrated by the second sketch following this list).
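As one concrete illustration of the bias-mitigation point, the following minimal Python sketch compares the approval rates of a hypothetical AI-driven underwriting model across protected-class groups and computes a disparate impact ratio. The data, column names, and the 0.8 "four-fifths" benchmark are illustrative assumptions; the NIST AI RMF does not prescribe any particular fairness metric.

# Minimal sketch of a bias check on a hypothetical underwriting model.
# The data, column names, and 0.8 benchmark are illustrative assumptions,
# not requirements of the NIST AI RMF.

import pandas as pd

def approval_rate_by_group(df: pd.DataFrame, group_col: str, decision_col: str) -> pd.Series:
    """Approval rate for each protected-class group."""
    return df.groupby(group_col)[decision_col].mean()

def disparate_impact_ratio(rates: pd.Series) -> float:
    """Ratio of the lowest to the highest group approval rate.
    Values well below 1.0 flag a potential fairness concern; the 0.8
    'four-fifths' rule is a common, if rough, benchmark."""
    return rates.min() / rates.max()

if __name__ == "__main__":
    # Hypothetical model decisions on a small applicant sample.
    data = pd.DataFrame({
        "group":    ["A", "A", "A", "B", "B", "B", "B", "A"],
        "approved": [1,   1,   0,   1,   0,   0,   1,   1],
    })
    rates = approval_rate_by_group(data, "group", "approved")
    ratio = disparate_impact_ratio(rates)
    print(rates)
    print(f"Disparate impact ratio: {ratio:.2f}")
    if ratio < 0.8:
        print("Flag for actuarial review: potential unfair discrimination.")

In practice, an actuary would run such checks on full model output, across multiple protected attributes and their intersections, and document the results as part of model governance.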
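Likewise, the reliability-and-resilience point can be made concrete with a routine drift check. The sketch below computes a population stability index (PSI) between a model's score distribution at validation and its distribution in production. The synthetic scores, ten-bin setup, and the 0.10/0.25 rule-of-thumb thresholds are conventions commonly used in credit and insurance scoring, included here as illustrative assumptions rather than anything the framework mandates.

# Illustrative ongoing-monitoring sketch: population stability index (PSI).
# Synthetic scores, bin count, and thresholds are assumptions for
# illustration; they are not prescribed by the NIST AI RMF.

import numpy as np

def population_stability_index(expected: np.ndarray, actual: np.ndarray, bins: int = 10) -> float:
    """PSI between a baseline score distribution and a newer one.
    Rule of thumb: < 0.10 stable, 0.10-0.25 monitor, > 0.25 investigate."""
    edges = np.quantile(expected, np.linspace(0, 1, bins + 1))
    edges[0], edges[-1] = -np.inf, np.inf          # cover out-of-range scores
    e_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    a_pct = np.histogram(actual, bins=edges)[0] / len(actual)
    e_pct = np.clip(e_pct, 1e-6, None)             # avoid log(0) and division by zero
    a_pct = np.clip(a_pct, 1e-6, None)
    return float(np.sum((a_pct - e_pct) * np.log(a_pct / e_pct)))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    baseline = rng.normal(0.0, 1.0, 10_000)        # scores at model validation
    current = rng.normal(0.5, 1.1, 10_000)         # scores in production (drifted)
    psi = population_stability_index(baseline, current)
    print(f"PSI = {psi:.3f}")
    if psi > 0.25:
        print("Significant drift: investigate and revalidate the model.")
    elif psi > 0.10:
        print("Moderate drift: increase monitoring frequency.")
    else:
        print("Distribution stable.")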
The expert panel engaged in a wide-ranging discussion, addressing key aspects of AI risk management, industry best practices, and emerging regulatory challenges. This report provides a high-level summary of the panel’s insights from the discussion held on March 12, 2025. A list of discussion topics is included in Appendix A.
Acknowledgments
The researchers’ deepest gratitude goes to those without whose efforts this project could not have come to fruition: the Expert Panel participants and others who diligently oversaw questionnaire development, analyzed and discussed respondent answers, and reviewed and edited this report for accuracy and relevance.
Dorothy Andrews, PhD, ASA – Senior Behavioral Data Scientist & Actuary, NAIC; expert in AI regulation and bias mitigation in insurance.
Dave Ingram, CERA, FRM, PRM, FSA, MAAA – Risk Management Actuary, Society of Actuaries Board Member, advocate for AI integration in actuarial science.
Shane Leib, FSA, MAAA – Director of Actuarial Research, Moody’s Analytics; Assistant Teaching Professor, University of Notre Dame.
Elaine Newton, PhD – Principal for AI Standards & Tech Policy, Amazon Web Services (AWS); formerly with NIST and Oracle; specialist in cybersecurity, privacy, and AI/ML standards and compliance.
Ronald Poon-Affat (Moderator), FSA, FIA, MAAA, CFA, HIBA – Reinsurance specialist, Independent Board Member.
At the Society of Actuaries Research Institute:
Korrel Crawford, Senior Research Administrator
Questions or Comments?
Give us your feedback! Take a short survey on this report.
If you have comments or questions, please send an email to Research@soa.org.