What Actuaries Can Learn from Data Scientists
By Nick Hanewinckel
Predictive Analytics and Futurism Section Newsletter, August 2021
Some actuaries are data scientists (or maybe some data scientists are actuaries). But for actuaries without a data science background, there are a lot of small things to learn from data scientists that can add up to big results. This article explains some techniques from data science and software development that you may find useful, no matter how traditional your actuarial work may be.
This article is particularly aimed at those traditional actuaries. Partly this is because all actuaries will soon need to understand data science at some level (just as, 30 or so years ago, all actuaries began to need to know Excel). But more importantly, these ideas can help across the whole profession. So, for those data science skeptics, please put down your slide rules for a moment and read on!
Collaboration—Agile Framework
The Agile Framework is certainly not unique to data science; it evolved from software development. But for those of you who have never worked this way, https://www.scrum.org/resources/what-is-scrum gives a nice overview of the Scrum process (the approach within the Agile framework that I use). There is one catch: you must do it well; going through the motions without thought is worse than doing nothing!
Agile is centered on collaborative planning: cross-functional, self-organizing teams use conversation to plan solutions. It lets stakeholders drive priorities while leaving the design of the solution to those with the know-how.
So, what do we get as a reward for learning this?
For the actuary, this collaboration may allow you to wear fewer “hats.” You can now rely on, say, your database team to design the solution rather than having to work in an area that wasn’t on any actuarial exam. You can lead a data science team as a “product owner” even if you don’t have much data science experience. You can focus on the “what” now that you don’t have to focus on the “how.”
When done correctly, it also allows for adaptability as things change. The process is designed to be iterative. This process of constant improvement keeps projects from stagnating. This is further helped by organizing projects into “Sprints”: two to four weeks of focused work on defined goals.
The result is a collaborative work environment that avoids being overly prescriptive. I first saw this framework used successfully in a statutory valuation department—so this is definitely not for data scientists only!
Control—Git
Many developers use the open-source version control software Git. For the actuary, it is a way of controlling operational risk, audit risk, and other risks. Git is a system that compares “snapshots” of files (traditionally code) and tracks their changes. Many users can work on the same software, checking their work out to different “branches.” These branches can be merged back to a central repository, and conflicting changes can be addressed.
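To make that branch-and-merge workflow concrete, here is a minimal sketch scripted in Python with the standard library. It assumes Git is installed; the repository folder, branch names, and file name are hypothetical examples for illustration only.

```python
import subprocess

def git(*args, cwd="pricing-model"):
    """Run a git command in the (hypothetical) repository folder and echo it."""
    print("git", *args)
    subprocess.run(["git", *args], cwd=cwd, check=True)

# Work on your own branch, then merge it back to the shared one.
git("checkout", "-b", "update-lapse-assumption")   # create and switch to a new branch
git("add", "lapse_table.csv")                      # stage the changed file
git("commit", "-m", "Update lapse rates for 2021 study")
git("checkout", "main")                            # return to the shared branch
git("merge", "update-lapse-assumption")            # merge; conflicting changes surface here
```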
Besides redundantly storing this code in case one repository is lost, changes can be dialed back to any previous version. No matter how many mistakes are made or how severe, you can always magically revert to the last correct version. Git can be hosted for free on your own networks, or Git hosting services (e.g., GitHub, GitLab, Bitbucket) can host it in the cloud in a very secure fashion. Data can also be version-controlled via Git, though for large files the process can be streamlined by version-controlling just the metadata (dvc is one software package that allows this in a Git framework).
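As an illustration, here is a minimal sketch of rolling a file back to an earlier snapshot and of tracking a large dataset through dvc’s metadata files. It assumes Git and dvc are installed and the repository has been initialized for both; the folder, file names, and commit reference are hypothetical.

```python
import subprocess

def run(*cmd, cwd="valuation-repo"):
    """Run a shell command in the (hypothetical) repository folder and echo it."""
    print(" ".join(cmd))
    subprocess.run(cmd, cwd=cwd, check=True)

# Restore a single file to the way it looked two commits ago, then record that fix.
run("git", "checkout", "HEAD~2", "--", "mortality_assumptions.csv")
run("git", "commit", "-m", "Revert mortality table to prior version")

# Track a large dataset with dvc: Git stores only the small .dvc metadata file,
# while the data itself lives in whatever remote storage you configure.
run("dvc", "add", "experience_data.parquet")
run("git", "add", "experience_data.parquet.dvc", ".gitignore")
run("git", "commit", "-m", "Track experience data via dvc metadata")
```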
If you are not a developer, I especially encourage you to check out Git. You can version control any type of file (not just Python scripts), and there are many uses for this. For example, a pricing actuary can be sure they are running their analysis on the correct version of the software (or can roll it back to any prior version). A valuation actuary may wonder whether there were any accidental changes to any one of their thousands of assumption tables. An actuary performing an internal audit may want to run a cashflow model using the exact version that generated a financial statement. All of these become trivial with a solid Git repository.
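Those last two scenarios might look like the following minimal sketch, again assuming Git is installed and using a hypothetical tag name (“ye2020-statement”), folder, and path purely for illustration.

```python
import subprocess

def git(*args, cwd="cashflow-model"):
    """Run a git command in the (hypothetical) model repository and return its output."""
    result = subprocess.run(["git", *args], cwd=cwd, check=True,
                            capture_output=True, text=True)
    return result.stdout

# Did any assumption tables change since the year-end run? An empty diff means no.
changes = git("diff", "--stat", "ye2020-statement", "HEAD", "--", "assumptions/")
print(changes or "No assumption tables have changed since the year-end statement.")

# Recreate, on a side branch, the exact model version that produced the statement.
git("checkout", "-b", "audit-ye2020", "ye2020-statement")
```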
Cross-Functional Knowledge
Data science work requires data science skills. But that is not enough—insurance has a lot of quirks, a lot of history, and a lot of data relationships that require business understanding (e.g., post-level term mortality for life insurance, or the implications of no-fault auto insurance for P&C). A good team will have some members with data science skills and some with insurance industry knowledge.
Your team members may come to the team from a variety of backgrounds: underwriting, actuarial, “pure” sciences (including data science), and more.
The advice I have for this area is to increase your knowledge constantly! While it’s hard to openly admit when you don’t know or understand something, you’ll be rewarded for your bravery by learning something new.
Finally, consider adding to the breadth of knowledge on your own team. Perhaps bringing a subject matter expert into your technical work will surface insights earlier. Or, conversely, bringing a data scientist into a non-data business problem may lead to solutions no one had thought of.
I hope these examples are useful and that you can see the tools and behaviors that lead to good results. Regardless of the technological tools involved, these examples grow from the values of collaboration, openness and honesty, respect, and the drive to achieve across the whole organization. Here’s to productive work in our fast-paced, data-driven future!
Statements of fact and opinions expressed herein are those of the individual authors and are not necessarily those of the Society of Actuaries, the newsletter editors, or the respective authors’ employers.
Nick Hanewinckel, FSA, CERA, MAAA, is AVP and lead data scientist with SCOR Global Life Americas. He has worked in predictive analytics for life insurance for several years and is a council member of the PAF Section, treasurer, and newsletter editor. He can be contacted at nhanewinckel@scor.com.