Individual Claim Development Models and Detailed Actuarial Reserves in Property-Casualty Insurance

Author: Chris Gross – Chief Executive Officer at Gross Consulting




Actuarial reserving techniques using aggregated triangle data are ubiquitous in the property-casualty insurance industry. Starting instead with models of individual claim behavior, built with predictive modeling techniques within a framework that describes the full life cycle of a claim, offers numerous benefits: greater reliability of reserve estimates, faster recognition of underlying mix changes, and avoidance of pricing problems caused by differences in development. Component development and emergence models, used in conjunction with simulation of currently outstanding claims and of claims yet to be reported, form an alternative framework for generating estimates of reserve need. Algorithmic case reserves at the claim level and algorithmic IBNR estimates at the policy level, actuarially determined and designed to be unbiased, provide valuable information for downstream analyses, a bridge to the generally accepted triangle reserving paradigm, and a means of demonstrating reliability for actuarial purposes.


Adjusting for Changes in Loss Development

There is a considerable volume of actuarial research and literature devoted to answering the question: “How do we react to changes in loss development patterns?”  If development patterns never changed, we’d all just use the chain-ladder methods and go home early.

There is no shortage of approaches.  How does the actuary best choose among all of the options?  How can the actuary be sure that the chosen approach is appropriate to the scenario?  These are crucial questions to answer, because the consequences of choosing the wrong method can be severe.

B-F and B-S potholes

One of my actuarial heroes, Jerry Degerness, once famously said,

“The road to insolvency is paved with Bornhuetter-Ferguson and Berquist-Sherman calculations.”

By way of explanation, he later added, “The B-F pothole is sustained embrace of progressively inadequate expected loss ratio assumptions and the B-S pothole is overly optimistic adjustments based on the promise of better claim processing and settlement practices or revised coverage terms.”

It’s not that the Berquist-Sherman method is flawed per se; it’s that it can mask loss ratio deterioration if used inappropriately. And by “used inappropriately,” I mean that it is used to mask underlying change, despite emerging evidence to the contrary. Consider this scenario:

  1. There is an undetected mix shift into a class of business with 10% higher severity.
  2. The claim department implements a new system that starts setting case reserves about 10% too low.

Looking at the paid and incurred loss triangles, the actuary sees heavier (earlier) paid loss development, but little or no change on the incurred side. “What’s causing this?” asks the actuary. Hypothesis: with the switch to a new claims system, the payment pattern must be changing. So, armed with Berquist-Sherman, the actuary adjusts the historic paid loss triangle to match the new payment pattern. The end result is a reserve estimate consistent with past estimates, effectively masking the increasing loss ratio.
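To make the offsetting signals concrete, here is a minimal numeric sketch of the scenario above; all figures are invented for illustration and not taken from any actual book.

```python
# Toy illustration of the scenario above: a 10% severity mix shift combined
# with case reserves set ~10% light leaves incurred losses nearly unchanged
# while paid losses drift upward. All figures are invented.

old_paid, old_case = 100.0, 100.0        # per-claim averages, old book
old_incurred = old_paid + old_case       # 200.0

mix_severity_lift = 1.10                 # new class runs 10% more severe
case_adequacy = 0.90                     # new system reserves ~10% light

new_paid = old_paid * mix_severity_lift                  # payments show the true shift
new_case = old_case * mix_severity_lift * case_adequacy  # light reserves offset it
new_incurred = new_paid + new_case

print(round(new_paid, 2))      # 110.0 -> paid development looks heavier
print(round(new_incurred, 2))  # 209.0 vs 200.0 -> incurred barely moves
```

The incurred signal moves less than half as much as the paid signal, which is exactly the trap: the actuary sees heavy paid development and a quiet incurred triangle.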

It would be better, instead, to identify that different types of business have different expected frequencies and severities, and different paid and incurred development patterns. The trick is identifying which are the key types, and then quantifying those differences.  Armed with this knowledge, the actuary would be able to anticipate how observed development patterns should be changing, given observed mix changes on the exposure side.

Where there is mix shift, there will be changes in development. And there is always mix shift.

Armed with the knowledge of what’s really happening in the book, the actuary can correctly identify that the change in paid loss development is being driven by mix shift, and can also identify (and quantify) the change in case reserve adequacy. Ideally, all of this happens independently of the claim department’s case reserving process.

Enter the Actuarial Case Reserve

In his March 2020 webinar, Chris Gross discussed Actuarial Case Reserves, an independent and unbiased case reserve set by the actuary for purposes of more accurate price and reserve estimates:

“Case reserves currently serve two primary roles – to facilitate the appropriate settlement of each claim, and to provide financial information. These goals are intrinsically at odds with each other. As a profession, we need to move beyond the use of subjectively determined case reserves to using case reserves that are more appropriate for loss reserving, that we have constructed directly, using objective claim and exposure information.”

The Actuarial Case Reserve, he says, can be thought of as an extension of the Berquist-Sherman idea — that is, adjusting the historic triangle to account for identified changes. In this case, we are adjusting the incurred triangle by replacing all of the claim department’s case reserve estimates with an independent and unbiased Actuarial Case Reserve, based not just on accident period and development age, but all available exposure and claim characteristics as well.
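As a sketch of that substitution, the snippet below rebuilds a per-claim incurred amount using a model-predicted reserve in place of the adjuster’s case reserve. The `predict_case_reserve` function and its rating factors are hypothetical stand-ins for whatever model is actually fit on claim and exposure characteristics.

```python
# Hypothetical sketch: swap the adjuster's case reserve for a modeled one.

def predict_case_reserve(claim):
    # Placeholder model -- in practice this would be fit on objective
    # claim and exposure characteristics (state, class, injury type, ...).
    base = 5000.0
    state_factor = {"MN": 1.0, "TX": 1.5}.get(claim["state"], 1.0)
    return base * state_factor

def actuarial_incurred(claim):
    # Closed claims carry no reserve; open claims get the modeled estimate.
    reserve = 0.0 if claim["status"] == "closed" else predict_case_reserve(claim)
    return claim["paid_to_date"] + reserve

claim = {"state": "TX", "status": "open", "paid_to_date": 2500.0}
print(actuarial_incurred(claim))   # 10000.0 (2500 paid + 7500 modeled reserve)
```

Applying this per claim across the history rebuilds the incurred triangle on a consistent, adjuster-independent basis.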

Developing the Actuarial Case Reserve

It’s difficult — maybe impossible — to accurately identify the impact of various mix changes by studying paid and incurred triangles. The analyst would have to correctly intuit the important segmentations needed (geography, class of business, size of risk, etc.) and then study triangles for all of these segments. Are there differences between these triangles? Are the differences statistically significant? Are the segments large enough to be credible? And what about interaction effects? We’re growing in state X, but that’s also where our growth in small business is coming from.

This is where the Claim Life Cycle Model (CLCM) approach proves useful. (For an introduction to CLCM, see CLCM In a Nutshell.)

A CLCM analysis assumes that the loss triangles are an artifact of the claims process, not a description of the claims process itself. The claims process is a series of events in the life cycle of a policy: Will there be a claim? How many claims? What are the claim severities? When will the claims be reported? When will payments be made? When will the claims close? Will any claims re-open?
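One hedged sketch of this framing: each question becomes its own small model, and simulating a path through them produces one scenario of future claim activity. The distributions and parameters below are invented for illustration only, and re-opening is omitted to keep the toy short.

```python
import random

def simulate_policy(rng, freq=0.08):
    """Simulate one path through the claim life cycle for a single policy."""
    # Will there be a claim? How many?  (crude Bernoulli stand-in for a
    # frequency model fit on exposure characteristics)
    n_claims = sum(rng.random() < freq for _ in range(5))
    claims = []
    for _ in range(n_claims):
        claims.append({
            "report_lag_days": rng.expovariate(1 / 60),   # when reported?
            "severity": rng.lognormvariate(8.0, 1.2),     # how large?
            "close_lag_days": rng.expovariate(1 / 365),   # when closed?
        })
    return claims

# One simulated scenario across a small book of 1,000 policies.
rng = random.Random(42)
scenario = [simulate_policy(rng) for _ in range(1000)]
total_severity = sum(c["severity"] for policy in scenario for c in policy)
```

Repeating the simulation many times yields a distribution of outcomes rather than a single point estimate, which is the payoff of modeling the process instead of its triangle artifact.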

Each of these events is itself a simple model, based on the segmentations that are significant and credible to that event. Perhaps report lag is not sensitive to geography, but claim severity is. If mix is shifting from one state to another, the severity model will catch this.

The end result is an estimate of future payments, by claim and by policy (for IBNR claims) that reflects all historic mix shifts to the extent that they are statistically significant and credible. In other words, an Actuarial Case Reserve.

It is important to emphasize that there is no implication that the claim department’s case reserve is wrong! As Chris reiterates, the primary purpose of the traditional case reserve is to “facilitate the appropriate settlement of each claim.” In contrast, the development of the Actuarial Case Reserve gives the actuary an estimate which aligns more closely with the actuary’s goals: more accurate loss reserves and risk pricing.

For more discussion on topics like these, and some free Continuing Ed credit, join us each month for our MuSigma Webinar Series.  Sign up at

If you’re an actuary about to start a pricing, reserving, or claims modeling project, you should absolutely look into CLCM as one of your core strategies. Compared to more traditional approaches, the benefits and capabilities CLCM provides are transformational.

Want to accelerate your implementation of CLCM? Actuaries at Gross Consulting are now helping carriers stand up a multi-line CLCM process in three months or less using our Comprehensive Insurance Review (CIR) engagement. Please reach out to me with comments or questions at

Actuarial Case Reserves – MuSigma Webinar March 2020

Coming Tuesday, March 3 2020 at 1:00p EST / 10:00a PST

Chris Gross presents the next installment of The MuSigma Webinar Series – “Actuarial Case Reserves”

Register at


The use of case reserves in actuarial development triangles is ubiquitous. Many of the problems encountered in loss reserving stem from systematic changes and inaccuracies in the determination of case reserves. Case reserves currently serve two primary roles – to facilitate the appropriate settlement of each claim, and to provide financial information. These goals are intrinsically at odds with each other.

As a profession, we need to move beyond the use of subjectively determined case reserves to using case reserves that are more appropriate for loss reserving, that we have constructed directly, using objective claim and exposure information. During this session we will discuss how the separation of the dual roles of case reserves will benefit not only the actuaries in their reserving and pricing work, but also the claim settlement function.

About The MuSigma Webinar Series:

The MuSigma Webinar Series seeks to provide a forum for the presentation and discussion of topics relevant to today’s practicing actuaries, and to give actuaries another option when pursuing continuing education in organized activities.
It’s our goal each month to bring to the actuarial community a webinar and a speaker in order to provide topical, timely, and free access to quality continuing education opportunities.

The format will be

  • a participatory webinar in a one-hour format
  • focused on an opening period of content delivery by topic experts
  • followed by a Q&A / audience participation period where the presenter takes questions and comments from the virtual floor

You can register for each webinar at  There is no cost to attend.

Do you have suggestions for future webinar topics?  Would you like to volunteer to present a topic?  Contact Bret Shroyer at

Detecting Changes in Development Patterns

For actuaries performing a reserve analysis, change is the enemy. It’s an oft-repeated actuarial mantra: “I don’t care how Claims sets case reserves, as long as they keep doing it the same way.”

In reality, changes in development are more the rule than the exception; it’s impossible to find a book of business with perfectly stable loss development over a 10-year span. We expect to find changes in loss development.

So, the reserving actuary is always looking out for change. But even with advance warning of changes in Claims’ process, or mix shifts on the exposure side, or a sudden jump in claim severities, it’s very difficult to quantify how much change to expect in the loss development patterns.

Even worse, it’s hard to see direct evidence of changes in development patterns, until they’re deep in your history. And if it’s deep in your history, you’ve been using the wrong development assumptions in your predictions for the past 4-8 analysis periods.

Wait – Is Development Actually Changing?

Here’s the thought process of a reserving actuary over a hypothetical four quarters of reserve studies:

  • 1st qtr – “That’s an odd development factor, but it’s only one data point. I can safely ignore that.”
  • 2nd qtr – “There it is again… is there something actually there? Probably not, because odds are, we’re going to have two consecutive outliers every so often.”
  • 3rd qtr – “OK, it’s back to normal. I knew I shouldn’t worry about it.” (in reality, *this* is the outlier from the new pattern)
  • 4th qtr – “It’s back! There must be something here; let’s figure it out.”

How much of the triangle needs to exhibit a changing pattern before the analyst recognizes and reacts to the change?

All of this arises because the process of picking Loss Development Factors (LDFs) is one of observing the aggregate, and trying to make sense of what’s happening at the individual claim level. We may have good evidence that indicated LDFs are increasing at 12 months, but we don’t know why, or even what that should mean for our estimates of ultimate. Our only evidence is the aggregate pattern itself.
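A simple sketch of that recognition problem: flag an age-to-age column when recent factors sit well outside their historical spread. The factors below are invented for illustration.

```python
from statistics import mean, stdev

# Invented 12-24 month age-to-age factors, oldest diagonal first.
ldf_12_24 = [1.50, 1.48, 1.52, 1.49, 1.51, 1.60, 1.62, 1.61]

history, recent = ldf_12_24[:-3], ldf_12_24[-3:]
mu, sigma = mean(history), stdev(history)
z_scores = [(f - mu) / sigma for f in recent]

# Three consecutive factors more than 2 sigma high are hard to dismiss as
# noise -- the signal the quarter-by-quarter monologue above keeps deferring.
shifted = all(z > 2 for z in z_scores)
print(shifted)   # True
```

Even this crude screen reacts faster than waiting for the pattern to dominate the triangle, though it still cannot say *why* the factors moved.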

LDFs Should Be Outputs, Not Inputs

Instead of looking at the development factors to attempt to understand what’s going on with the underlying claims, what if we look at what is happening to the underlying claims, and use that to predict what will happen to aggregate development?

This is a fundamental shift in the way the “loss development factor” is seen and used. Instead of being the most important initial assumption in the analysis, it becomes one of the last outputs from an analysis. In other words, the LDFs become a product of the analysis, rather than a key assumption and input to the analysis.

This is one of the key strategies (and benefits) of claims modeling using CLCM vs. triangle-based reserving methods. Because CLCM moves LDF indications to the end of the analysis, the Claim Life Cycle Model approach allows the analyst to recognize changes in development much earlier.

(In my last post, I gave an overview of the Claim Life Cycle Model (CLCM) approach. If CLCM is new to you, you may want to read that post for background and context.)

Triangle methods require an initial assumption as to how claims will develop in the aggregate, then apply that same assumption to every claim in the analysis. CLCM methods focus on studying the life cycle of each claim, at the claim level, to uncover what drives claim behaviors like report lag, payment pattern, and closure rates. CLCM focuses on discovering the exposure and claim characteristics that best predict individual claim behaviors.
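As a minimal sketch of LDFs-as-outputs, suppose a claim-level analysis has already produced projected cumulative payments for each claim at 12 and 24 months; the aggregate factor then simply falls out of the roll-up. The figures below are invented.

```python
# Claim-level projected cumulative paid at 12 and 24 months (invented).
claims = [
    {"paid_12": 1000.0, "paid_24": 1800.0},
    {"paid_12":  400.0, "paid_24":  500.0},
    {"paid_12": 2500.0, "paid_24": 3200.0},
]

agg_12 = sum(c["paid_12"] for c in claims)
agg_24 = sum(c["paid_24"] for c in claims)

# The 12-24 LDF is read off the aggregated claim-level projections:
# an output of the analysis, not an input assumption.
ldf_12_24 = agg_24 / agg_12
print(round(ldf_12_24, 3))   # 1.41
```

Because the roll-up can be done over any subset of claims, the same projections yield a consistent LDF for every segment of the book.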

An Answer to Why LDFs Are Changing

The end result is not just an aggregate reserve analysis, but a reserve analysis at the claim level – and at any level of detail in between. The analyst can not only produce a set of LDFs for each segment of the book, but can explain WHY the LDFs for Segment 1 differ from Segment 2, because the variables that impact development have been identified and quantified.

Additionally, the indicated LDFs for each segment will now be in balance with the aggregate LDF at the book level; it doesn’t matter how you split up the book, you get consistent reserve estimates when you add up the segments.


CLCM in a Nutshell

In my last post, I talked about some of the challenges I see in current claims modeling efforts, and offered the opinion that Claim Life Cycle Modeling (CLCM) is an approach that helps resolve many of those problems.
In this post, I want to shed a bit more light on CLCM:

CLCM is a strategy

When performing claims modeling, one of the first questions is, “What kind of model will we build?”  The answer to this must align with current corporate strategy.  The model has to be able to answer questions that are both relevant and actionable.  CLCM is a strategic choice to build a flexible framework based on as much available data as possible to answer a wide variety of pricing, reserving, and claims modeling questions.

CLCM is a process

Rather than building a single claims model that attempts to predict a particular future outcome, CLCM involves building a set of interrelated models that form the framework for a prediction of many future claim behaviors.  The result is a probability distribution of a variety of future claim statistics at the claim level, in the aggregate, by segment, by layer, etc.

CLCM is open and transparent

CLCM is an idea that’s been in development at Gross Consulting for over a decade.  During this period, we have delivered numerous presentations and participated in many discussions of the CLCM process, at both regional and national actuarial conferences.  From the outset, the goal has been to encourage open discussion and review of the CLCM process, and to encourage more actuaries to use some of these ideas to enhance their analyses.

CLCM is implemented in software

Here at Gross Consulting, we perform CLCM analyses with the benefit of specialized software:  Cognalysis CLCM.  However, there’s no requirement that a CLCM analysis be performed using Cognalysis software; we have documented the process thoroughly enough that you should be able to replicate many of the ideas using your own logic.  Of course, we also invite actuaries to leverage our investment of time, effort, and experience to arrive at the finished product much faster.   CLCM is something we believe every practicing casualty actuary should be utilizing.

CLCM unifies pricing, aggregate reserving, and claims modeling

These three analytics efforts rely on the same underlying bodies of data:  past premium, exposure, and claims data.  However, they typically go about formulating the key questions differently, resulting in differing assumptions, and therefore potentially conflicting results.  CLCM, on the other hand, builds a set of claims behavior models that describe future outcomes, resulting in
  1. A pricing model which predicts pure premium at the policy level
  2. A claim-level reserve estimate, including probability distributions that can be rolled up by segment and layer
  3. A claims model that can be used for live claims triage, “jumper” assignment, etc.
Using CLCM, these efforts don’t require three sets of analysts building three separate models – these three deliverables are a natural outcome of a single CLCM analysis, based on the same starting data and a common set of assumptions, so the three models will be in agreement with each other.

CLCM builds reserve estimates at the claim level, based on all available information for that claim

Traditional reserving methods look at claim development using triangle methods, which incorporate just three pieces of information:  loss, time period, and development age.
Claims models typically incorporate many more pieces of data, but make point predictions as of a particular moment in time – say 30 days or 90 days.
CLCM looks at all claim behavior over time, at each time step, with behavior in each time step a function of behavior in the previous steps.
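A toy sketch of that time-step recursion: a claim’s state at each development period feeds the models for the next period. The transition probabilities and payment distribution here are invented, and re-opening is omitted.

```python
import random

def step(state, rng):
    """Advance one development period; behavior depends on the current state."""
    if state["status"] == "closed":
        return state                          # a closed claim stays put in this toy
    pay = rng.random() < 0.6                  # payment model for this step
    close = rng.random() < 0.2                # closure model for this step
    return {
        "status": "closed" if close else "open",
        "paid": state["paid"] + (rng.expovariate(1 / 1000) if pay else 0.0),
        "age": state["age"] + 1,
    }

rng = random.Random(7)
state = {"status": "open", "paid": 0.0, "age": 0}
for _ in range(20):                           # run 20 development periods
    state = step(state, rng)
```

In a full analysis, each per-step model (payment, closure, and so on) would be fit on claim and exposure characteristics rather than fixed constants.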

CLCM succeeds when other methods are most likely to fail

CLCM was originally developed to address a very common question:  “How do I best estimate aggregate reserves when things are changing?”  (Or worse yet, “How do I know whether or not things are changing?”)
Traditional triangle methods work well — until they don’t.  Because they rely on just three pieces of information (Loss, Time Period, Development Age) they break down when the book is changing over time across a different dimension.  These scenarios include:
  1. Mix shifts
  2. Changes in case reserving methods
  3. Changes in payment timing (deliberate or not)
  4. Changes in the external environment (trend, new causes of loss)

CLCM in more detail

Over the course of the next 11 weeks, I’ll be diving deeper into many of these ideas, as well as describing some of what I’ve seen as the key features and benefits of CLCM.  I’d like to be explicit with my goal in this series: If you’re an actuary about to start a pricing, reserving, or claims modeling project, you should absolutely look into CLCM as one of your strategies.   Compared to more traditional approaches, the benefits and capabilities CLCM provides are significant.
Please reach out to me with comments or questions at

The Jumper Dilemma – Why is Claims Modeling So Hard?

Claims modeling is gaining traction right now.  Attend a conference, or talk to some data science / modeling staff, and you’ll likely hear about some current or impending efforts to build a claims model.

“What kind of claims model are you building?” is a natural line of questioning.  If you’re talking to the same groups of people that I am, you’ll hear three general answers:

  1. A jumper model
  2. A triage model
  3. A reserving model

I’d like to discuss each of these, in turn, in the context of an observation that’s becoming more and more clear to me:  The number one mistake being made in claims modeling is that modelers are, with shocking frequency, attempting to answer the wrong question.

The Jumper Claim Model

Let’s start with the Jumper model.  This model attempts to answer the question: “Which claims are most likely to jump by more than $50,000 from the initial case reserve estimate at 30 days?”  To build this model, the analyst assembles claim information and examines the case incurred amounts for each claim at 30 days and at some future date, with the target variable being a binary Yes/No if the claim met the jumper definition.  There are several big potential pitfalls with this approach:

  1. The jumper criterion (i.e., a $50K increase from 30 days to ultimate) must be determined before modeling begins
  2. The jumper criterion is almost certainly not optimal
  3. If case reserving methods change (say, as a result of the findings of the modeling), this can invalidate the model’s predictive accuracy
  4. There is no prescriptive value in this model; merely identifying claims likely to be jumpers says nothing about what to do with those claims to change the future.
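For concreteness, the target-variable construction described above might be sketched like this; the $50K threshold and field names are hypothetical.

```python
JUMP_THRESHOLD = 50_000.0   # hypothetical jumper definition, fixed up front

def is_jumper(claim):
    """Binary target: did case incurred jump by the threshold after day 30?"""
    return (claim["incurred_at_eval"] - claim["incurred_at_30"]) > JUMP_THRESHOLD

history = [
    {"incurred_at_30": 10_000.0, "incurred_at_eval": 95_000.0},
    {"incurred_at_30": 20_000.0, "incurred_at_eval": 25_000.0},
]
targets = [is_jumper(c) for c in history]
print(targets)   # [True, False]
```

Note how the threshold is baked into the labels before any modeling happens, which is exactly pitfalls 1 and 2 above: change the threshold and the entire training target changes with it.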

The Claims Triage Model

With triage models, the modeler is attempting to answer the question: “Which claims are likely to be more complicated or higher severity, and should be assigned to more experienced adjusters?”  Triage models are prescriptive models, in that they attempt to prescribe a future action to help mitigate or reduce future payments.  In that light, triage models can also be built to indicate when and where particular loss control or settlement actions should be performed.  In my opinion, this approach has a good probability of being successful at mitigating claim costs, but there are still a few potential pitfalls:

  1. For many carriers, there is no coding of past loss control procedures, so there’s simply nothing to model on.  Carriers need to start coding their loss control efforts in a regimented way, and keep doing so for some time, to gather the data needed to model the effectiveness of those actions
  2. In implementation, many triage models are used primarily to assign complex / high severity claims to more senior claim adjusters.  This is certainly a smart move, but it’s not scalable.  What is that senior-level, experienced claim adjuster going to do that a junior adjuster wouldn’t do?  Wouldn’t it be great if the model could tell us that?  (see also the first point)
  3. As with the jumper approach, if the claims adjustment process changes as a result of the triage model, and the triage model is based on the case reserves, this can effectively break the model when the claims department starts changing behavior

The Claims Reserve Model

Reserving models are in the minority.  Very little of the current claims modeling effort is being invested in building a more accurate reserving picture.  With reserving models, the modeler is attempting to answer the question: “What is the likely future ultimate value of a reported claim?”  This is a much simpler question, with quite a few ready-made applications.  It would be hard to argue that the modeler is asking the wrong question here.  Instead, the biggest potential pitfalls are in using case reserves as a model input:

  1. Again, what if the case reserving process changes, particularly in reaction to the model?  This breaks the model.
  2. Typically, these models reveal that one of the most important predictors is the case reserve itself.  How can this be executed?  Does this mean we should fire the modelers and hire/train better Claims staff?

The Ideal Claims Model

This is not to say that attempting to build a claims model is an exercise in futility.  Ideally, claims models should

  • Be based on objective information (this does not include case reserves)
  • Include all available information – exposure detail, claims detail, transactional (time series) data, free-form text, external data, etc.
  • Be flexible enough to answer multiple questions
  • Provide a springboard to enable new actuarial and analytics projects

CLCM Exemplifies the Ideal Claims Model

For the past several years, we’ve been using a different approach to claims modeling that incorporates these ideals:  the Claim Life Cycle Model (CLCM).  Over the course of the next 12 weeks, I’m going to be interrogating the Claim Life Cycle Model process from a number of different angles in an attempt to explain its strengths, capabilities, and limitations.  I’ll compare and contrast it with the three more common claims modeling approaches I introduced above.  My aim is to help analysts using other claims modeling approaches avoid some of the pitfalls commonly encountered in claims modeling efforts, and ultimately to convince a few of you that a Claim Life Cycle Model approach may be the best way forward to help you achieve your goals in claims modeling.

To learn more about the Claim Life Cycle Model approach, and how you can employ it to build better claims models for your organization, contact me at