Coordinating VM-31 With ASOP No. 56 Modeling
By Karen K. Rudolph
The Financial Reporter, July 2022
In Oct. 2020, companies were anticipating the initial VM-20 valuation to be performed for year-end 2020. The qualified actuary's focus was on the heavy lifting to be done with respect to assumption and margin development, the calculations, and the VM-31 report itself. Time was tight.
At about that same time, all actuaries became subject to a new actuarial standard of practice, ASOP No. 56, Modeling. A careful read of this new ASOP reveals that both actuaries and intended users[1] of VM-20 models are subject to the provisions found in ASOP No. 56.
There are sections of the VM-31 PBR Actuarial Report (PBRAR) that should arguably be fulfilled with ASOP No. 56 in mind. This article focuses on VM-31's "Life Report" Section 3.D.2 on Cash Flow Models and in particular part e, Calculation and Model Validation.
In the PBRAR, the actuary must discuss how the model was evaluated for appropriateness and applicability, including in paragraph e.(i): "a thorough explanation of how the company became comfortable with the model." To be clear, the PBRAR is prepared under the direction of a qualified actuary, and the responsibility for ensuring the VM-31 requirements are fulfilled rests with the company. ASOP No. 56 provides a framework for addressing how the company became comfortable with the VM-20 model. It begins with the definition of "intended purpose," which ASOP No. 56 defines as "The goal or question, whether generalized or specific, addressed by the model within the context of the assignment."[2] In this discussion, the intended purpose is VM-20 calculations and results. Given this, how can ASOP No. 56 guide actuaries in their PBRAR documentation?
VM-31 Section 3.D.2.e.(i)
In addressing a thorough explanation of how the company became comfortable with the model, the PBRAR might capture thoughts on the company's decision points around selecting the model used for VM-20. In evaluating whether the capabilities of this model are consistent with its intended purpose, the company likely used the specific considerations of ASOP No. 56 Section 3.1.1, which include but are not limited to the following:
- Characteristics such as the level of detail and granularity.
- The dependencies recognized by the model, such as comprehensive asset-liability interactions and the model's ability to reflect dynamic policyholder behavior interactions.
- The model's ability to identify volatility around expected values. This last characteristic is important for evaluating the scenario reserves generated in determining the stochastic reserve amount.
All of these considerations, while likely weighed long before the first VM-20 production runs were made, are important to capture and document to confirm that the capabilities of the model are consistent with its intended purpose.
VM-31 Section 3.D.2.e.(ii) and (iii)
In the PBRAR, VM-31 3.D.2.e.(ii) requires the actuary to discuss "how the model results compare with actual historical experience" and 3.D.2.e.(iii) requires "tables showing numerical static and dynamic validation results." ASOP No. 56 Section 3.6.2 suggests model output validation is a critical element in ensuring the model output reasonably represents what is being modeled, and may include these considerations:
- When compared against recent historical financials, potentially adjusted to reflect the appropriate in-scope policies, do the modeled projected cash flows bear a reasonable relationship to the company's recent actual cash flows? There is an Academy VM-31 template resource that demonstrates just this comparison in table form (https://www.actuary.org/pbr-templates).
- When applied to hold-out data, does the model produce output that is reasonably consistent with model output developed without the hold-out data? This consideration may be used for checking predictive models.
- Performing statistical or analytical tests on model output to assess its reasonableness. For example, does the average mortality rate for the cohort start out and progress as expected? If mortality anti-selection is present, is it occurring at the proper point in the projection? If shock lapses are part of the baseline assumption set, are they occurring at the proper point in the projection? Do projected sources of profit bear reasonable relationships to each other through time?
- Running tests of variations on key assumptions (sensitivity tests) to check that changes in the output are consistent with expectations, given the changes in the input. For example, if mortality is increased, one would expect paid death claims to increase and surrender benefits to decrease. Though seemingly simple, this is a method of model output validation; a minimal sketch of such a directional check follows this list.
- Comparing model output to results of an alternative model where possible and appropriate. An example of this might be comparing the account value roll-forward from the VM-20 model to an illustration testing model capturing the same policies. This is a way of checking that the account value mechanics are at least consistent with another model representing the same policies.
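As one concrete illustration of the sensitivity-test consideration above, the following is a minimal sketch, not drawn from VM-31 or ASOP No. 56, of a directional check on a mortality sensitivity run. The projection totals, field names, and function are hypothetical; in practice the figures would come from the company's own baseline and shocked model runs.

```python
# A minimal sketch (illustrative only): a directional check on a mortality
# sensitivity run. The projection totals and names below are hypothetical;
# in practice they would come from the baseline and shocked model runs.
baseline = {"death_claims": 12_450_000, "surrender_benefits": 8_730_000}
mortality_up = {"death_claims": 13_900_000, "surrender_benefits": 8_410_000}

def check_mortality_shock(base: dict, shocked: dict) -> list[str]:
    """Flag results that move opposite to the expected direction."""
    findings = []
    if shocked["death_claims"] <= base["death_claims"]:
        findings.append("Death claims did not increase under higher mortality.")
    if shocked["surrender_benefits"] >= base["surrender_benefits"]:
        findings.append("Surrender benefits did not decrease under higher mortality.")
    return findings

issues = check_mortality_shock(baseline, mortality_up)
print(issues if issues else "Sensitivity results move in the expected directions.")
```

A check this simple can be automated and retained as documentation that the model's output responds to assumption changes in the expected direction.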
VM-31 Section 3.D.2.e.(iv) and (v)
In the PBRAR, VM-31 3.D.2.e.(iv) requires the actuary to discuss "which risks, if any, are not included in the model" and 3.D.2.e.(v) requires a discussion of "any limitations of the model that could materially impact the NPR [net premium reserve], DR [deterministic reserve] or SR [stochastic reserve]." ASOP No. 56 Section 3.2 states that, when expressing an opinion on or communicating results of the model, the actuary should understand: (a) important aspects of the model being used, including its basic operations, dependencies, and sensitivities; (b) known weaknesses in assumptions used as input and known weaknesses in methods or other known limitations of the model that have material implications; and (c) limitations of data or information, time constraints, or other practical considerations that could materially impact the model's ability to meet its intended purpose.
Together, VM-31 and ASOP No. 56 require the actuary (i.e., any actuary working with or responsible for the model and its output) not only to know and understand these limitations but also to communicate them to stakeholders. An example is reinsurance modeling. A common technique in modeling the many treaties of yearly renewable term (YRT) reinsurance for a given cohort of policies is to use a simplification in which YRT premium rates are blended according to a weighted average of net amounts at risk. That is to say, the treaties are not modeled seriatim but as an aggregate or blended treaty applicable to amounts in excess of retention. This approach assumes each third-party reinsurer is as solvent as the next. The actuary must ask, "Is there a risk that is ignored by the model because of the approach to modeling YRT reinsurance?" and "Does this simplification present a limitation that could materially impact the net premium reserve, deterministic reserve or stochastic reserve?"
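To make the YRT simplification concrete, the following is a minimal sketch, using hypothetical treaty data, of how per-treaty YRT premium rates might be blended into a single rate using net amount at risk (NAR) weights. The treaty names, rates, and amounts are illustrative only and do not represent any particular company's reinsurance program.

```python
# A minimal sketch (illustrative only) of blending YRT reinsurance premium
# rates across treaties using net amount at risk (NAR) as weights, in place
# of seriatim treaty modeling. Treaty names, NAR amounts, and rates are hypothetical.
treaties = [
    {"name": "Reinsurer A", "nar": 40_000_000, "yrt_rate_per_1000": 1.10},
    {"name": "Reinsurer B", "nar": 25_000_000, "yrt_rate_per_1000": 1.35},
    {"name": "Reinsurer C", "nar": 10_000_000, "yrt_rate_per_1000": 0.95},
]

total_nar = sum(t["nar"] for t in treaties)
blended_rate = sum(t["nar"] * t["yrt_rate_per_1000"] for t in treaties) / total_nar
print(f"Blended YRT rate per 1,000 of NAR: {blended_rate:.4f}")

# The blended rate is then applied to aggregate NAR in excess of retention,
# which is exactly where the simplification (and the implicit assumption that
# every reinsurer is equally solvent) enters the model.
```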
Understanding the limitations of a model requires understanding the end-to-end process that moves from data and assumptions to results and analysis. The extract-transform-load (ETL) process fits well with the ASOP No. 56 definition of a model, which is: "A model consists of three components: an information input component, which delivers data and assumptions to the model; a processing component, which transforms input into output; and a results component, which translates the output into useful business information." Many actuaries work with models on a daily basis, yet it helps to revisit this important definition. Many would not recognize the routine step of accessing the policy-level data necessary to create an in-force file as part of the model itself. The actuary should ask, "Are there risks introduced by the front-end or back-end processing in the ETL routine?" and "What mitigations has the company established over time to address these risks?"
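As one example of a front-end control in the ETL routine, the sketch below reconciles a model in-force file against the administrative-system extract before a VM-20 run. The file names, the face_amount field, and the 0.1 percent tolerance are assumptions chosen for illustration, not a prescribed control.

```python
# A minimal sketch (assumptions, not a prescribed control): reconcile the model
# in-force file against the administrative-system extract before a VM-20 run.
# File names, the face_amount field, and the 0.1% tolerance are illustrative.
import csv

def totals(path: str) -> tuple[int, float]:
    """Return (policy count, total face amount) from a CSV with a face_amount column."""
    count, face = 0, 0.0
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            count += 1
            face += float(row["face_amount"])
    return count, face

admin_count, admin_face = totals("admin_extract.csv")
model_count, model_face = totals("model_inforce.csv")

assert admin_count == model_count, "Policy counts do not reconcile."
assert abs(admin_face - model_face) / admin_face < 0.001, "Face amounts differ by more than 0.1%."
print("Model in-force file reconciles to the administrative system extract.")
```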
VM-31 requires the qualified actuary to discuss these items in the PBRAR, and ASOP No. 56 requires actuaries to understand the models in their purview well enough to know about and document these limitations and weaknesses.
Closing
ASOP No. 56 applies to the actuary when, in the actuary's professional judgment, reliance by the intended user on the model output has a material effect for the intended user.[3] This ASOP confirms that actuaries working with models have a responsibility not only to themselves but also to the intended users to understand all aspects of the models, including their limitations. Keeping the guidance of ASOP No. 56 in mind while working with models of all types will help to ensure that the capabilities of each model are consistent with its intended purpose.
Statements of fact and opinions expressed herein are those of the individual authors and are not necessarily those of the Society of Actuaries, the newsletter editors, or the respective authors’ employers.
Karen K. Rudolph, FSA, MAAA, is a principal and consulting actuary at Milliman. She can be reached at karen.rudolph@milliman.com.