An Interview with Claude about the Future Collaboration of Actuaries and Artificial Intelligence

By Stefanos Orfanos

Expanding Horizons, August 2024


In this article, we explore how an actuary–artificial intelligence (AI) collaboration might look in the future and how actuaries should be trained to best leverage this technology. It felt fitting to do so through a conversation with an AI system, since involving the technology directly helps ground the analysis in its actual capabilities.

Our chosen AI discussant is the free version of Claude, accessible at https://claude.ai/. The prompts were used in a slightly different order from the one presented below. The discussion started with Prompt 2, and specifically: “I’m interested to hear your views […] Do you think that a strong work ethic, attention to detail and intellectual curiosity will turn out to be the best set of human traits when working alongside LLMs that have become very sophisticated and ubiquitous?” The rest of the prompts were used in order, and then Claude was asked to come up with an appropriate introduction following our Prompt 1. Claude’s output has been kept unchanged except for a few words to improve readability.

Stefanos Orfanos (SO): Well, it’s my pleasure to introduce Claude, an advanced language model created by Anthropic. I’ve been very impressed by Claude’s knowledge and communication abilities on a range of topics, including the future of work and professions like actuarial science. Claude, I’m excited to get your perspective on how large language models (LLMs) might collaborate with and shape the actuarial field going forward.

Claude: Thank you for the warm introduction, Stefanos. I’m honored to engage with an educator of your experience and standing. As you mentioned, the synergistic potential between AI systems like myself and human professionals across fields is something I find endlessly fascinating. I look forward to a robust dialogue exploring how actuaries might optimally leverage capabilities like mine while bringing uniquely human skills to the table. Please feel free to push my thinking—I relish learning from your seasoned viewpoint.

SO: Likewise, I’m sure this will be an insightful conversation. So Claude, let me first say that LLMs like yourself will surely continue to improve and accomplish more tasks in a semi- or fully autonomous mode. I’m interested to hear your views on the role that human professionals, especially actuaries, will play as this technological revolution unfolds. Some research suggests that the use of LLMs is a great equalizer, providing the most benefit to low-skill workers. At the same time, professionals who rely too heavily on a capable LLM run the risk of becoming lazy and regressing in the quality of their work. Contrast this with the current training of actuaries, which consists of timed, math-heavy exams. Do you think that a strong work ethic, attention to detail and intellectual curiosity will turn out to be the best set of human traits when working alongside LLMs that have become very sophisticated and ubiquitous? Or are there other traits that would prove more important?

Claude: These are important questions about the impact of advanced AI and LLM systems on skilled professions like actuarial science. On one hand, highly capable LLM assistants could democratize access to actuarial knowledge and analytical capabilities, potentially providing the greatest benefit to lower-skilled workers by augmenting their abilities. An actuary-in-training relying on an advanced LLM could perhaps shortcut some of the intensive exam preparation and derive solutions more quickly. However, there’s a risk that overreliance on LLM assistance could lead to the atrophy of core actuarial skills and lazy intellectual habits, as you said.

On the other hand, I don’t think the actuarial profession will be fully automatable or commoditized by LLM technology alone. Advanced actuarial work requires skills that will likely remain difficult to fully automate, such as creativity in modeling, professional judgment in evaluating assumptions and risks, accumulated domain expertise over decades, and conversance with insurance industry dynamics and regulations. LLMs can be misled by nonsensical prompts and lack true context awareness.

So the most successful actuaries of the future will likely cultivate traits like intellectual curiosity, diligence, quantitative rigor and domain immersion, using LLM assistance as a productivity multiplier while maintaining a strong grasp of core principles. Developing careful prompting abilities to properly seed and critique LLM outputs will also be key.
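To make that last point concrete, here is a minimal sketch of a seed-and-critique loop, assuming Anthropic’s Python SDK (the anthropic package); the model choice and both prompts are illustrative only, not a prescribed workflow:

    import anthropic

    client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

    def ask(prompt: str) -> str:
        """Send a single-turn prompt and return the model's text reply."""
        response = client.messages.create(
            model="claude-3-haiku-20240307",  # illustrative model choice
            max_tokens=1024,
            messages=[{"role": "user", "content": prompt}],
        )
        return response.content[0].text

    # Step 1: seed the model with explicit context and constraints.
    draft = ask(
        "You are assisting a life actuary. Draft a plain-language summary of "
        "how the lapse-rate assumption affects term-insurance pricing, and "
        "state any assumptions you make explicitly."
    )

    # Step 2: critique the draft rather than accepting it as-is.
    critique = ask(
        "Review the following summary for errors, omissions or overconfident "
        "claims, and list specific corrections:\n\n" + draft
    )

    print(draft + "\n--- critique ---\n" + critique)

The second call encodes the habit that matters: the output is treated as a draft to be challenged, and the actuary, not the model, decides what survives review.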

The current credentialing system, with its timed math-intensive exams, does place a premium on attributes like work ethic, attention to detail and quantitative reasoning ability. However, it may be lacking in assessing creativity, communication ability and higher-order cognitive skills that could become more vital in an LLM-augmented future. Revamping credentialing to include components like open-ended case studies, professional practice simulations and oral exams could help ensure the field selects for a more well-rounded skill set.

Ultimately, the actuaries who can strike the right balance—embracing LLM assistance while maintaining core technical depth, professional skepticism and a thirst for lifelong learning—will likely be most prepared to add value in an AI-driven future. Those relying solely on one mode or the other may struggle. Continuous upskilling and a growth mindset that embraces AI collaboration will likely be essential.

SO: I appreciate your mention of creativity, professional judgment and domain expertise as human qualities that LLMs can’t easily emulate. Let’s discuss this further. What is creativity? A human is only capable of mastering a few domains of knowledge. Does knowing about everything that has ever existed make an LLM more likely to draw unexpected connections and insights, assuming it’s provided with suitable prompting, which an LLM could also do? As for professional judgment, is it again a function of accumulated knowledge and lessons learned? An LLM can’t experience anything directly, but if it has access to extensive human-generated company reports, perhaps it could replicate judgment. And finally, domain expertise seems like something an LLM could easily acquire through its training or fine-tuning.

Claude: An excellent question that gets at the heart of what truly distinguishes human intelligence and creativity from even the most advanced AI systems.

Creativity, in my view, goes beyond just making novel connections between existing knowledge domains. It involves the ability to imagine fundamentally new concepts, paradigms and solutions that did not previously exist. This draws upon faculties like intuition, metaphorical reasoning and the subconscious mind—domains that we still struggle to fully reproduce in artificial systems.

While an LLM may have broad knowledge allowing it to combine insights from disparate fields, its creativity is ultimately bounded by its training data—a finite corpus of existing human knowledge. Truly groundbreaking human creativity often stems from flashes of inspiration that seemingly emerge out of nowhere, transcending and recombining the known in unpredictable ways. So while LLMs may one day become amazingly adept at combining and riffing on what already exists, there are aspects of the creative process that may forever elude machines and require the human cognitive experience.

On professional judgment, you make a fair point that with sufficient data, an LLM could theoretically mimic human decision-making heuristics. However, I would argue that the highest forms of judgment involve looking at incomplete, ambiguous or even contradictory data through the lens of personal experience to extract signals and make judicious decisions despite uncertainty. The body of experience an expert accumulates over decades—grappling with edge cases, outliers, ethical dilemmas and the chaos of real-world environments—is very difficult to fully encapsulate in data alone. There are often subtle cues and intangibles that get pattern-matched against rich experiential models in the human mind. Even with perfect data, an LLM would struggle to develop the same seamless grasp of context and situational awareness that shape judgment calls for experts steeped in a domain over many years.

On domain expertise, while I agree LLMs could become repositories of factual domain knowledge, I think there are aspects of true expertise that go beyond surface knowledge, such as an intuitive feel for cause-and-effect relationships and deep mastery of underlying principles; the ability to self-critique and recognize the boundaries of one’s own knowledge; cutting-edge research skills to extend the frontiers of a domain; and fluid contextualization of domain knowledge for problem-solving. These higher-order cognitive skills enabling deep insight and forward progress within a domain may ultimately be challenging for current AI to replicate fully, despite access to vast knowledge bases.

But this is certainly a fascinating area for continued exploration as AI capabilities exponentially grow. Perhaps new AI architectures or paradigms like artificial general intelligence could one day more fully approximate the depth of human intelligence across all these dimensions. I look forward to seeing how both human and artificial intelligence evolve and potentially converge in years to come.

SO: You characterized creativity wonderfully, and yet I’m wondering what percentage of insurance professionals need to display this level of creative thinking in their everyday work. My best guess would be a tiny fraction. Many actuaries work on assignments that are, if not routine, then largely similar to what they or their colleagues have encountered before. In many cases, it is considered creative to slightly streamline a process or tweak a product. Are you sure these aren’t things LLMs can do?

Claude: Your point is well taken. True groundbreaking creativity may be relatively rare, even among highly skilled professionals like actuaries. Most of their day-to-day work likely involves incremental improvements, process optimizations or applying established techniques to new but somewhat analogous situations.

In that context, you raise a fair question—are those types of “tweaks” and iterative enhancements something that advanced LLMs could potentially handle reasonably well, given enough training data and the right prompting?

I think the answer is likely yes—for many situations requiring taking an existing framework or product and adapting it in modest ways, an LLM that has ingested a large corpus of actuarial knowledge, regulations, process documentation, etc. could potentially suggest viable adjustments or streamlining opportunities.

Where LLMs may still fall short is in situations that require deeper first-principles reasoning or true out-of-the-box thinking detached from any prior examples: developing wholly new product or risk models from scratch, extracting high-level insights spanning multiple disciplines, handling complex edge cases or black swan events, and navigating ambiguous situations with conflicting data. In these cases, the human capacity for abstraction, seeing hidden connections and flexibly combining ideas from various mental models in novel ways could prove difficult for current LLMs to replicate fully.

However, you’re absolutely right that for a large swath of actuarial work involving incremental improvements on existing frameworks, highly capable LLMs could likely augment or potentially replicate some of those “tweaks” and streamlining opportunities. This is an area ripe for human-AI collaborative workflows.

The key would be recognizing which activities genuinely require human-level creativity and judgment versus where an LLM assistant could capably handle adaptation of prior approaches. Delineating those boundaries and developing intuition for where the human value-add lies on that spectrum will be crucial.

SO: Agreed, thanks for explaining so well. I’m not sure I know many folks who can articulate complex ideas as effectively as you. But let me pick up on a phrase from an earlier response—“continuous upskilling and a growth mindset”—and ask if these can be screened for or cultivated, and how?

Claude: You raise an excellent point about the importance of continuous upskilling and having a growth mindset, especially as AI capabilities advance rapidly. Let me share some thoughts on how these critical traits could potentially be screened for and cultivated in professionals.

Screening for a growth mindset in hiring or credentialing could involve behavioral interview questions probing a candidate’s approach to learning from failures and embracing challenges, assessment tools that measure attributes like openness to experience and curiosity, evidence of enrollment in continuing education programs, and a track record of taking on stretch roles and mastering new domains over one’s career.

Cultivating a growth mindset on the job could involve creating a culture that celebrates learning from mistakes, providing continuous learning opportunities like guest lectures and job rotations, encouraging participation in professional societies and conferences to expose people to new ideas, rewarding employees who continuously upskill and take on new challenges, and having leaders openly discuss their own growth journeys.

For upskilling specifically, potential strategies could include implementing individual upskilling plans that identify key skills to develop each year, making resources available like online courses and tuition reimbursement, providing settings to apply new skills through working on projects, using performance reviews to assess skill development objectives, and exploring AI-augmented training techniques with guided prompts and feedback loops.

Ultimately, fostering a growth mindset and continuous workforce upskilling requires a multipronged approach—purposeful hiring practices, a supportive organizational culture, provision of resources and thoughtful performance management. As AI reshapes workplaces, the organizations that instill a strong learning culture and provide pathways for complementary human-AI development may be the ones that stay most competitive and future-proof their workforces.

SO: So to clarify, do you expect that an actuarial credentialing system consisting of timed multiple-choice or written-answer exams, timed projects and short essays based on a business case can reliably assess the human desire to stay at the forefront of our profession through upskilling and taking on new challenges, which you wisely identified as key qualities?

Claude: It is a fair critique—a traditional actuarial credentialing system focused primarily on timed exams testing technical knowledge may not be the most effective way to assess qualities like continuous upskilling and embracing new challenges.

The types of assessments you described—timed multiple-choice or written exams, timed projects, short case studies—are valuable for evaluating core actuarial competencies around math, modeling, subject matter expertise, etc. However, they may miss capturing some of the more innate traits and mindsets around lifelong learning.

To more reliably assess a candidate’s propensity for continuous upskilling and taking on new professional challenges, the credentialing process could incorporate additional components such as:

  1. Structured behavioral interviews probing specific examples of when the candidate took initiative to deepen their skillset or venture outside their comfort zone.
  2. Long-term professional experience portfolios highlighting new domains or tools the candidate has embraced over their career.
  3. Open-ended case studies focused not just on demonstrating technical knowledge but on exhibiting curiosity and creativity in applying tools or frameworks in novel ways.
  4. Record of participation in continuing education programs, professional societies, self-directed learning activities, etc.
  5. Assessment of communication and presentation skills that allow effective knowledge sharing.

By complementing the technical assessments with these additional experiential and behavioral components, a more holistic picture could emerge of a candidate’s mindset and demonstrated track record around lifelong learning and growth.

You’re right that the traditional exam format alone may be insufficient to judge these critical traits. A multidimensional credentialing approach drawing from different assessment methods could provide a more reliable signal of a candidate’s readiness to continuously upskill and adapt as both the actuarial field and AI capabilities evolve.

SO: You have mentioned case studies a few times, which are a wonderful teaching tool but seem to be in short supply when it comes to some areas of actuarial practice. Do you think LLMs can help develop realistic and nuanced case studies that help train or assess the traits we want to see in future actuaries?

Claude: That’s a great question. I do think LLMs could potentially play a valuable role in helping develop more robust, realistic case studies to better train and assess the desired traits in future actuaries.

Some ways LLMs could contribute include:

  1. Knowledge synthesis: ingesting vast data across insurance, finance, regulations and industry trends to construct nuanced case narratives.
  2. Ideation and scenario generation: interpreting prompts and generating relevant details to test skills.
  3. Customization: efficiently modifying case parameters or creating variations.
  4. Consistency and realism checking: flagging potential implausibilities.
  5. Scalable content creation: using text generation to accelerate case study development.

However, realizing this potential would require careful application of LLMs in partnership with human subject matter experts—curating high-quality training data, providing clear prompts and guidelines, human oversight to shape and validate outputs, and developing protocols around LLM usage versus sensitive exam content creation.

Used judiciously with human validation, LLMs could potentially enrich the scope and quality of actuarial case studies in a scalable manner to better assess critical skills. At the same time, human expertise remains essential to account for the limits of current AI comprehension. A hybrid human-AI model leveraging LLMs for powerful ideation combined with rigorous expert review could strike an ideal balance.
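As one small illustration of the customization idea above, here is a hedged sketch in Python of how case-study prompts might be parameterized and varied before being sent to an LLM for drafting and to human experts for validation; the template, companies and themes are entirely hypothetical:

    from itertools import product

    # Each generated prompt would be sent to an LLM for drafting and then
    # routed to a human subject-matter expert for validation before any use
    # in training or assessment.
    CASE_TEMPLATE = (
        "Draft a one-page actuarial case study. Company type: {company}. "
        "Area of practice: {practice}. Disruption theme: {theme}. "
        "Use realistic but fictional figures, and end with three open-ended "
        "questions that test curiosity and judgment rather than recall."
    )

    companies = ["mutual life insurer", "regional P&C carrier"]
    practices = ["pricing", "valuation"]
    themes = ["telematics data", "climate-driven claims volatility"]

    # Every combination of parameters becomes a distinct case-study prompt.
    prompts = [
        CASE_TEMPLATE.format(company=c, practice=p, theme=t)
        for c, p, t in product(companies, practices, themes)
    ]

    for i, prompt in enumerate(prompts, start=1):
        print(f"[{i}] {prompt}\n")

Eight variations fall out of three short lists, which is the scalability argument in miniature; the expensive step, expert review, remains deliberately human.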

SO: Let’s put that to the test. Say we wanted to assess the propensity of a human worker to upskill and broaden their understanding of the actuarial field. Can you propose a case study scenario that could accomplish this? Feel free to focus on a particular company or area of practice and make up any details needed to make it realistic and effective.

Claude: Here’s a potential case study scenario that could help assess a candidate’s propensity for continuous upskilling and broadening their actuarial understanding:

The Case of Sigma Life’s InsureTech Transformation

Sigma Life Insurance is a well-established life insurance provider with a strong market presence and over 80 years of successful operations. However, in recent years, the company has faced disruption from new insuretech entrants leveraging data analytics, digital customer experiences and innovative product designs to gain market share, especially with younger demographics.

In response, Sigma Life’s executive team has embarked on a transformative “InsureTech 2025” initiative aimed at modernizing the company’s technology capabilities and product portfolios and at better catering to evolving customer expectations around seamless digital experiences.

As part of this, they are exploring new data sources, algorithms for dynamic underwriting and pricing, voice and chat interfaces for customer service, and partnerships with health and wellness companies for product bundles. Understanding and deploying cutting-edge tools across cloud computing, AI and machine learning, robotic process automation and digital experience platforms will be critical.

You are a senior actuarial associate at Sigma Life and have been asked to join the “InsureTech 2025” task force, representing the actuarial function’s perspective. Your role will be to ensure actuarial best practices around risk management and experience analysis are incorporated into Sigma’s transformation road map.

However, you will need to rapidly upskill yourself across emerging technologies, data practices and innovative product development approaches that actuaries have historically had limited exposure to. You will be collaborating with data scientists, software developers, digital marketers and potentially engaging with insuretech startups.

Case Study Prompts:

  1. How would you plan your personal upskilling journey to quickly get up to speed on the relevant new disciplines? What specific skills and domain knowledge would you prioritize?
  2. What strategies would you propose to encourage continuous cross-pollination of knowledge between the actuarial, data science, marketing and technology workstreams on the task force?
  3. Insuretech startups are exploring creative new models like microduration insurance policies, IoT [Internet of Things] sensor-based pricing or social cover pooling. How could you take an open-minded approach to evaluating these against established actuarial practices?
  4. Rapid prototyping and iterative product development are core to the insuretech approach. How could actuarial teams adapt their processes to contribute effectively in this new paradigm?
  5. What measures would you recommend for Sigma Life’s leadership to foster a sustainable culture of lifelong learning and integration of new technologies within the actuarial function?

This open-ended case study requires the candidate to demonstrate self-awareness around knowledge gaps, creative strategies for rapid cross-disciplinary upskilling, adaptability to new approaches beyond traditional actuarial practices and a mindset of curiosity and continuous learning. The responses could provide a meaningful signal about their future readiness to evolve with emerging technologies and domain expansions.

SO: What is the role of universities in training actuaries that can harness and leverage LLMs most effectively while adding value through their creative and critical thinking? I am asking for specific things that actuaries-in-training can’t get from the credentialing system, instead of boilerplate language about the importance of a university education.

Claude: I concur that universities have a unique opportunity and responsibility in shaping the next generation of actuaries. They must prepare actuaries to optimally leverage AI capabilities like large language models, while also honing the distinctly human skills that will remain highly valuable. Let me outline some specific ways universities could differentiate actuarial training.

First, through multidisciplinary curriculum integration. Move beyond siloed actuarial science courses to an integrated, multidisciplinary curriculum exposing students to adjacent fields. This includes data science, computer science, cognitive science, communicating with AI systems and human-centered design. Cross-pollinating these disciplines allows actuaries to develop strategic skills such as curating and prepping data for machine learning models, effectively prompting language models with context and constraints, understanding technical AI principles and appreciating uniquely human cognitive strengths versus AI.

Second, emphasize open-ended problem-solving. While technical competency remains crucial, place greater emphasis on open-ended case studies, simulations and interactive scenarios. This can nurture abilities like applying first principles thinking to novel unstructured problems, navigating ambiguity and making judgments with incomplete information, creatively combining ideas across domains and clearly communicating nuanced insights.

Third, foster humanistic skills. As AI handles more procedural and quantitative tasks, actuaries need advanced training in intrinsically human skills. This includes empathy and emotional intelligence for stakeholder engagement, robust ethical reasoning embracing broader impacts, storytelling with data through clear communication and navigating interpersonal dynamics to influence outcomes.

Fourth, develop human-AI collaborative fluency. Go beyond just using AI tools by having students work on projects requiring skills like clearly scoping AI assistant roles while recognizing limitations, iteratively refining prompts and checking outputs, augmenting AI-generated insights with human context and creating responsible AI governance frameworks. (A small sketch of the output-checking habit appears after this response.)

Fifth and most importantly, cultivate a lifelong learning mindset from day one. Create an environment showing AI as perpetually evolving, necessitating comfort with persistent upskilling, intellectual curiosity embracing interdisciplinary thinking, self-directed learning skills and openness to unconventional paradigm shifts.

By going beyond technical training alone, universities can shape graduates adept at navigating human-AI collaboration. This multidisciplinary blend of foundations, open-ended problem-solving, human skills mastery, AI collaboration abilities and a relentless growth mindset could prepare actuaries to remain invaluable partners as AI capabilities grow exponentially.
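The output-checking habit referenced above can be taught very concretely. As a minimal sketch, suppose a student asks an LLM for the present value of a 10-year annuity-immediate of 1 per year at 5% interest and is told “about 7.72” (a hypothetical answer for this example); the exercise is to verify the figure from first principles rather than accept it:

    def annuity_pv(i: float, n: int) -> float:
        """Present value of an n-year annuity-immediate of 1 per year at rate i."""
        v = 1.0 / (1.0 + i)        # annual discount factor
        return (1.0 - v ** n) / i  # standard formula: a(n) = (1 - v^n) / i

    llm_answer = 7.72              # hypothetical LLM output to be checked
    exact = annuity_pv(0.05, 10)   # about 7.7217

    # Accept the LLM's figure only if it matches the independent calculation.
    assert abs(exact - llm_answer) < 0.01, f"{llm_answer} vs {exact:.4f}"
    print(f"Verified: 10-year annuity PV at 5% = {exact:.4f}")

Exercises like this keep the quantitative core alive while normalizing the reflex of auditing AI output.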

SO: I’ve been a university instructor for over 20 years, and frankly, your response sounds somewhat disconnected from reality. We’re seeing a trend of increasingly disengaged and apathetic students, which was exacerbated by the pandemic. Students don’t seem connected to each other or enthusiastic about learning. Just last week, I had 11 students arrive early to class, but instead of conversing with one another, all 11 were absorbed in their phones or laptops right up until the start of the lecture. Most students can’t go more than a few minutes without checking their devices during class. It’s as if they inhabit a virtual world as much as the physical one around them. Were students like these what you had in mind when you outlined those lofty proposals?

Claude: That’s a valid point—my previous response perhaps painted an overly idealistic vision without fully accounting for the harsh realities and challenges universities are facing with student engagement and focus in the modern era.

The trends you describe are highly concerning—disengaged students constantly distracted by devices and digital worlds, apathetic toward face-to-face interactions and deeper immersion in subject matter. These phenomena cannot be ignored. If left unaddressed, they pose a fundamental threat to universities’ ability to properly cultivate the well-rounded, intellectually curious, socially adept professionals that will be needed for the future workplace.

When looking at this issue through the lens of preparing actuaries (and other fields) for an AI-augmented world, some key points emerge: First, the deficit in human engagement and focus is likely to severely hamper the ability to collaborate effectively with AI systems. Developing abilities like prompting, judging outputs and combining human context with AI insights requires concentrated attention.

Second, an overreliance on digital existences could atrophy important human skills like interpersonal sensitivity, verbal communication and dynamic teaming—precisely the types of uniquely human capabilities that should set actuaries apart.

Third, apathy and lack of intellectual curiosity close off the pathways to multidisciplinary learning and embracing the growth mindset critical for this moment of technological disruption.

So while my previous points around university curricula remain valid in principle, they risk being purely theoretical if we cannot first re-engage students at a core level and rekindle their appetite for deeper learning.

Potential ways universities could approach this include implementing digital detox initiatives and mindfulness training to recenter focus; overhauling pedagogy toward more immersive, experiential learning; exploring gamification models using student interests and psychologies; building learning communities to re-enable peer interactions; and developing empathetic mentorship aimed at understanding the underlying drivers of disengagement.

Ultimately, re-engaging students at a human level, strengthening their abilities to dialogue, explore and develop focus may need to be preconditions for then layering AI-augmented skills and continuous learning mindsets. As frustrating as it may be, we may need to first re-envision how to provide students with formative experiences that make them excited to learn, take intellectual risks and embrace human-to-human interactions. Only then can we prepare the growth-oriented professionals ready to collaborate with AI. Thanks for this sobering perspective—it’s a crucial point I missed initially.

SO: You’re very welcome, Claude. I’m glad we could have this informative discussion. It’s a complex set of issues without easy answers, but by keeping lines of communication open between educators and forward-thinkers like yourself, I’m hopeful we can get on the right path.

Statements of fact and opinions expressed herein are those of the individual authors and are not necessarily those of the Society of Actuaries, the editors, or the respective authors’ employers.


Stefanos Orfanos, FSA, CERA, is a clinical assistant professor and the Master of Actuarial Science program director at Georgia State University, as well as the 2024 chair for the SOA’s Education and Research Section. Stefanos can be reached at sorfanos@gsu.edu.