Chris Gross Co-Authors Variance Journal Paper

Validation of minimum bias rate factors

Christopher Gross and Jonathan Evans co-authored a paper entitled Minimum Bias, Generalized Linear Models, and Credibility in the Context of Predictive Modeling, published in the December 2018 issue of Variance Journal.

Abstract: When predictive performance testing, rather than testing model assumptions, is used for validation, the need for detailed model specification is greatly reduced. Minimum bias models trade some degree of statistical independence in data points in exchange for statistically much more tame distributions underlying individual data points. A combination of multiplicative minimum bias and credibility methods for predictively modeling losses (pure premiums, claim counts, average severity, etc.) based on explanatory risk characteristics is defined. Advantages of this model include grounding in long-standing and conceptually lucid methods with minimal assumptions. An empirical case study is presented with comparisons between multiplicative minimum bias and a typical generalized linear model (GLM). Comparison is also made with methods of incorporating credibility into a GLM.

Download the full study directly from Variance Journal here.

Actuarial Case Reserves – MuSigma Webinar March 2020

Coming Tuesday, March 3, 2020 at 1:00 p.m. EST / 10:00 a.m. PST

Chris Gross presents the next installment of The MuSigma Webinar Series – “Actuarial Case Reserves”

Register at


The use of case reserves in actuarial development triangles is ubiquitous. Many of the problems encountered in loss reserving stem from systematic changes and inaccuracies in the determination of case reserves. Case reserves currently serve two primary roles – to facilitate the appropriate settlement of each claim, and to provide financial information. These goals are intrinsically at odds with each other.

As a profession, we need to move beyond the use of subjectively determined case reserves to using case reserves that are more appropriate for loss reserving, that we have constructed directly, using objective claim and exposure information. During this session we will discuss how the separation of the dual roles of case reserves will benefit not only the actuaries in their reserving and pricing work, but also the claim settlement function.

About The MuSigma Webinar Series:

The MuSigma Webinar Series seeks to provide a forum for the presentation and discussion of topics relevant to today’s practicing actuaries, and to give actuaries another option when pursuing continuing education in organized activities.
It’s our goal each month to bring to the actuarial community a webinar and a speaker in order to provide topical, timely, and free access to quality continuing education opportunities.

The format will be

  • a participatory webinar in a one-hour format
  • focused on an opening period of content delivery by topic experts
  • followed by a Q&A / audience participation period where the presenter takes questions and comments from the virtual floor

You can register for each webinar at  There is no cost to attend.

Do you have suggestions for future webinar topics?  Would you like to volunteer to present a topic?  Contact Bret Shroyer at

Detecting Changes in Development Patterns

For actuaries performing a reserve analysis, change is the enemy. It’s an oft-repeated actuarial mantra: “I don’t care how Claims sets case reserves, as long as they keep doing it the same way.”

In reality, changes in development are more the rule than the exception; it's all but impossible to find a book of business with perfectly stable loss development over a 10-year span. We expect to find changes in loss development.

So, the reserving actuary is always looking out for change. But even with advance warning of changes in the Claims process, or mix shifts on the exposure side, or a sudden jump in claim severities, it's very difficult to quantify how much change to expect in the loss development patterns.

Even worse, it’s hard to see direct evidence of changes in development patterns, until they’re deep in your history. And if it’s deep in your history, you’ve been using the wrong development assumptions in your predictions for the past 4-8 analysis periods.

Wait – Is Development Actually Changing?

Here’s the thought process of a reserving actuary over a hypothetical four quarters of reserve studies:

  • 1st qtr – “That’s an odd development factor, but it’s only one data point. I can safely ignore that.”
  • 2nd qtr – “There it is again… is there something actually there? Probably not, because odds are, we’re going to have two consecutive outliers every so often.”
  • 3rd qtr – “OK, it’s back to normal. I knew I shouldn’t worry about it.” (in reality, *this* is the outlier from the new pattern)
  • 4th qtr – “It’s back! there must be something here, let’s figure it out.”

How much of the triangle needs to exhibit a changing pattern before the analyst recognizes and reacts to the change?

All of this arises because the process of picking Loss Development Factors (LDFs) is one of observing the aggregate and trying to make sense of what's happening at the individual claim level. We may have good evidence that indicated LDFs are increasing at 12 months, but we don't know why, or even what that should mean for our estimates of ultimate. Our only evidence is the aggregate pattern itself.
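To make the aggregate view concrete, here is a toy example of picking LDFs from a paid-loss triangle. All figures are invented, and the volume-weighted calculation shown is just one common convention; the point is that the factors summarize thousands of individual claim outcomes into a handful of ratios.

```python
# Toy aggregate paid-loss triangle: accident year -> cumulative paid
# at successive development ages. All figures are hypothetical.
triangle = {
    2016: [1000, 1800, 2100, 2200],
    2017: [1100, 2000, 2350],
    2018: [1200, 2300],
    2019: [1300],
}

def age_to_age_factors(tri):
    """Volume-weighted age-to-age factors for each development step."""
    max_len = max(len(row) for row in tri.values())
    factors = []
    for age in range(max_len - 1):
        # Only accident years observed at both ages contribute.
        num = sum(row[age + 1] for row in tri.values() if len(row) > age + 1)
        den = sum(row[age] for row in tri.values() if len(row) > age + 1)
        factors.append(num / den)
    return factors

print(age_to_age_factors(triangle))
```

Notice that each factor blends every claim's behavior into a single ratio; nothing in the output explains which claims, or which claim characteristics, drove an unusual factor.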

LDFs Should Be Outputs, Not Inputs

Instead of looking at the development factors to attempt to understand what’s going on with the underlying claims, what if we look at what is happening to the underlying claims, and use that to predict what will happen to aggregate development?

This is a fundamental shift in the way the “loss development factor” is seen and used. Instead of being the most important initial assumption in the analysis, it becomes one of the last outputs from an analysis. In other words, the LDFs become a product of the analysis, rather than a key assumption and input to the analysis.

This is one of the key strategies (and benefits) of claims modeling using CLCM vs. triangle-based reserving methods. Because CLCM moves LDF indications to the end of the analysis, the Claim Life Cycle Model approach allows the analyst to recognize changes in development much earlier.

(In my last post, I gave an overview of the Claim Life Cycle Model (CLCM) approach. If CLCM is new to you, you may want to read that post for background and context.)

Triangle methods require an initial assumption as to how claims will develop in the aggregate, then apply that same assumption to every claim in the analysis. CLCM methods focus on studying the life cycle of each claim, at the claim level, to uncover what drives claim behaviors like report lag, payment pattern, and closure rates. CLCM focuses on discovering the exposure and claim characteristics that best predict individual claim behaviors.

An Answer to Why LDFs Are Changing

The end result is not just an aggregate reserve analysis, but a reserve analysis at the claim level – and at any level of detail in between. The analyst can not only produce a set of LDFs for each segment of the book, but can explain WHY the LDFs for Segment 1 differ from Segment 2, because the variables that impact development have been identified and quantified.

Additionally, the indicated LDFs for each segment will now be in balance with the aggregate LDF at the book level; no matter how you split up the book, you get consistent reserve estimates when you add up the segments.

If you’re an actuary about to start a pricing, reserving, or claims modeling project, you should absolutely look into CLCM as one of your core strategies. Compared to more traditional approaches, the benefits and capabilities CLCM provides are transformational.

Want to accelerate your implementation of CLCM? Actuaries at Gross Consulting are now helping carriers stand up a multi-line CLCM process in three months or less using our Comprehensive Insurance Review (CIR) engagement. Please reach out to me with comments or questions at

Announcing the MuSigma Webinar Series

MuSigma Free Webinar Series

Gross Consulting is excited to announce that we're launching a new offering – a recurring webinar series – aimed at providing the practicing actuary with a relevant, timely, (and free) opportunity for continuing education.

The format will be

  • a participatory webinar in a one-hour format
  • focused on an opening period of content delivery by topic experts
  • followed by a Q&A / audience participation period where the presenter takes questions and comments from the virtual floor

You can register for each webinar at  There is no cost to attend.

Do you have suggestions for future webinar topics?  Would you like to volunteer to present a topic?  Contact Bret Shroyer at

CLCM in a Nutshell

In my last post, I talked about some of the challenges I see in current claims modeling efforts, and offered the opinion that Claim Life Cycle Modeling (CLCM) is an approach that helps resolve many of those problems.
In this post, I want to shed a bit more light on CLCM:

CLCM is a strategy

When performing claims modeling, one of the first questions is, “What kind of model will we build?”  The answer to this must align with current corporate strategy.  The model has to be able to answer questions that are both relevant and actionable.  CLCM is a strategic choice to build a flexible framework based on as much available data as possible to answer a wide variety of pricing, reserving, and claims modeling questions.

CLCM is a process

Rather than building a single claims model that attempts to predict a particular future outcome, CLCM involves building a set of interrelated models that form the framework for a prediction of many future claim behaviors.  The result is a probability distribution of a variety of future claim statistics at the claim level, in the aggregate, by segment, by layer, etc.

CLCM is open and transparent

CLCM is an idea that’s been in development at Gross Consulting for over a decade.  During this period, we have delivered numerous presentations and participated in many discussions of the CLCM process, at both regional and national actuarial conferences.  From the outset, the goal has been to encourage open discussion and review of the CLCM process, and to encourage more actuaries to use some of these ideas to enhance their analyses.

CLCM is implemented in software

Here at Gross Consulting, we perform CLCM analyses with the benefit of specialized software:  Cognalysis CLCM.  However, there's no requirement that a CLCM analysis be performed using Cognalysis software; we have documented the process thoroughly enough that you should be able to replicate many of the ideas using your own logic.  Of course, we also invite actuaries to leverage our investment of time, effort, and experience to arrive at the finished product much faster.  CLCM is something we believe every practicing casualty actuary should be utilizing.

CLCM unifies pricing, aggregate reserving, and claims modeling

These three analytics efforts rely on the same underlying bodies of data:  past premium, exposure, and claims data.  However, they typically go about formulating the key questions differently, resulting in differing assumptions, and therefore potentially conflicting results.  CLCM, on the other hand, builds a set of claims behavior models that describe future outcomes, resulting in
  1. A pricing model which predicts pure premium at the policy level
  2. A claim-level reserve estimate, including probability distributions that can be rolled up by segment and layer
  3. A claims model that can be used for live claims triage, “jumper” assignment, etc.
Using CLCM, these efforts don't require three sets of analysts building three separate models – these three deliverables are a natural outcome of a single CLCM analysis, based on the same starting data and a common set of assumptions, so the three models will be in agreement with each other.

CLCM builds reserve estimates at the claim level, based on all available information for that claim

Traditional reserving methods look at claim development using triangle methods, which incorporate just three pieces of information:  loss, time period, and development age.
Claims Models typically incorporate many more pieces of data, but typically make point predictions as of a particular point in time – say 30 days or 90 days.
CLCM looks at all claim behavior over time, at each time step, with behavior in each time step a function of behavior in the previous steps.
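The time-step idea can be sketched in a few lines of code. This is a minimal illustration, not Cognalysis CLCM itself: the closure probabilities and payment distribution below are invented for demonstration, and a real implementation would fit each per-step model to claim and exposure characteristics.

```python
import random

def simulate_claim(open_age=0, paid_to_date=0.0, n_sims=10_000, seed=1):
    """Advance one open claim period by period; return the simulated
    distribution of ultimate paid amounts. All model parameters here
    are hypothetical placeholders for fitted per-step models."""
    rng = random.Random(seed)
    ultimates = []
    for _ in range(n_sims):
        age, paid, is_open = open_age, paid_to_date, True
        while is_open and age < 40:  # cap the horizon at 40 periods
            # Hypothetical closure probability that rises with age
            if rng.random() < min(0.05 + 0.02 * age, 0.9):
                is_open = False
            else:
                # Hypothetical payment severity for this period
                paid += rng.lognormvariate(6.0, 1.0)
            age += 1
        ultimates.append(paid)
    return sorted(ultimates)

ults = simulate_claim()
print(f"mean: {sum(ults) / len(ults):,.0f}   "
      f"75th pct: {ults[int(0.75 * len(ults))]:,.0f}")
```

Because the output is a full distribution per claim, the simulated results can be rolled up by segment or layer, which is exactly the aggregation flexibility described above.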

CLCM succeeds when other methods are most likely to fail

CLCM was originally developed to address a very common question:  “How do I best estimate aggregate reserves when things are changing?”  (Or worse yet, “How do I know whether or not things are changing?”)
Traditional triangle methods work well — until they don't.  Because they rely on just three pieces of information (loss, time period, development age), they break down when the book is changing over time along some other dimension.  These scenarios include:
  1. Mix shifts
  2. Changes in case reserving methods
  3. Changes in payment timing (deliberate or not)
  4. Changes in the external environment (trend, new causes of loss)

CLCM in more detail

Over the course of the next 11 weeks, I’ll be diving deeper into many of these ideas, as well as describing some of what I’ve seen as the key features and benefits of CLCM.  I’d like to be explicit with my goal in this series: If you’re an actuary about to start a pricing, reserving, or claims modeling project, you should absolutely look into CLCM as one of your strategies.   Compared to more traditional approaches, the benefits and capabilities CLCM provides are significant.
Please reach out to me with comments or questions at

The Jumper Dilemma – Why is Claims Modeling So Hard?

Claims modeling is gaining traction right now.  Attend a conference, or talk to some data science / modeling staff, and you'll likely hear about some current or impending efforts to build a claims model.

“What kind of claims model are you building?” is a natural line of questioning.  If you’re talking to the same groups of people that I am, you’ll hear three general answers:

  1. A jumper model
  2. A triage model
  3. A reserving model

I'd like to discuss each of these, in turn, in the context of an observation that's becoming more and more clear to me:  the number one mistake in claims modeling is that modelers are, with shocking frequency, attempting to answer the wrong question.

The Jumper Claim Model

Let’s start with the Jumper model.  This model attempts to answer the question: “Which claims are most likely to jump by more than $50,000 from the initial case reserve estimate at 30 days?”  To build this model, the analyst assembles claim information and examines the case incurred amounts for each claim at 30 days and at some future date, with the target variable being a binary Yes/No if the claim met the jumper definition.  There are several big potential pitfalls with this approach:

  1. The jumper criterion (i.e., a $50K increase from 30 days to ultimate) must be determined before modeling begins
  2. The chosen criterion is almost certainly not optimal
  3. If case reserving methods change (say, as a result of the findings of the modeling), the model's predictive accuracy can be invalidated
  4. There is no prescriptive value in this model; merely identifying claims likely to be jumpers says nothing about what to do with those claims to change the future.
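As a concrete illustration of the first two pitfalls, here is a minimal sketch of how the binary jumper target might be assembled. The claim records and field names are hypothetical; the key point is that the threshold is baked in before any modeling happens.

```python
# Hypothetical claim snapshots: case incurred at 30 days and at a
# later evaluation. Field names and values are invented.
claims = [
    {"id": 1, "case_incurred_30d": 10_000, "case_incurred_ult": 75_000},
    {"id": 2, "case_incurred_30d": 40_000, "case_incurred_ult": 55_000},
    {"id": 3, "case_incurred_30d": 5_000,  "case_incurred_ult": 4_000},
]

JUMP_THRESHOLD = 50_000  # fixed before modeling begins (pitfall 1)

def jumper_flag(claim, threshold=JUMP_THRESHOLD):
    """1 if the claim 'jumped' by more than the threshold, else 0."""
    return int(claim["case_incurred_ult"] - claim["case_incurred_30d"] > threshold)

targets = {c["id"]: jumper_flag(c) for c in claims}
print(targets)  # claim 1 jumped by 65K -> 1; the others -> 0
```

Any choice of threshold collapses the rich development history into a single yes/no label, and a different threshold would produce a different model, with no principled way to know which is best.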

The Claims Triage Model

With triage models, the modeler is attempting to answer the question: “Which claims are likely to be more complicated or higher severity, and should be assigned to more experienced adjusters?”  Triage models are prescriptive models, in that they attempt to prescribe a future action to help mitigate or reduce future payments.  In that light, triage models can also be built to indicate when and where particular loss control or settlement actions should be performed.  In my opinion, this approach has a good probability of being successful at mitigating claim costs, but there are still a few potential pitfalls:

  1. For many carriers, past loss control procedures were never coded, so there's simply nothing to model on.  Carriers need to start coding their loss control efforts in a regimented way, and sustain that practice long enough to gather the data needed to model the effectiveness of those actions
  2. In implementation, many triage models are used primarily to assign complex / high severity claims to more senior claim adjusters.  This is certainly a smart move, but it’s not scalable.  What is that senior-level, experienced claim adjuster going to do that a junior adjuster wouldn’t do?  Wouldn’t it be great if the model could tell us that?  (see also the first point)
  3. As with the jumper approach, if the claims adjustment process changes as a result of the triage model, and the triage model is based on the case reserves, this can effectively break the model when the claims department starts changing behavior

The Claims Reserve Model

Reserving models are in the minority.  Very little of the industry's claims modeling effort is being invested in building a more accurate reserve picture.  With reserving models, the modeler is attempting to answer the question: "What is the likely future ultimate value of a reported claim?"  This is a much simpler question, with quite a few ready-made applications.  It would be hard to argue that the modeler is asking the wrong question here.  Instead, the biggest potential pitfalls are in using case reserves as a model input:

  1. Again, what if the case reserving process changes, particularly in reaction to the model?  This breaks the model.
  2. Typically, these models reveal that one of the most important predictors is the case reserve itself.  How can that insight be acted on?  Does this mean we should fire the modelers and hire/train better Claims staff?

The Ideal Claims Model

This is not to say that attempting to build a claims model is an exercise in futility.  Ideally, claims models should

  • Be based on objective information (this does not include case reserves)
  • Include all available information – exposure detail, claims detail, transactional (time series) data, free-form text, external data, etc.
  • Be flexible enough to answer multiple questions
  • Provide a springboard to enable new actuarial and analytics projects

CLCM Exemplifies the Ideal Claims Model

For the past several years, we've been using a different approach to claims modeling that incorporates these ideals:  the Claim Life Cycle Model (CLCM).  Over the course of the next 12 weeks, I'm going to be interrogating the Claim Life Cycle Model process from a number of different angles in an attempt to explain its strengths, capabilities, and limitations.  I'll compare and contrast it with the three more common claims modeling approaches introduced above.  My aim is twofold: to help analysts using other claims modeling approaches avoid some common pitfalls, and ultimately to convince a few of you that the Claim Life Cycle Model approach may be the best way forward for your claims modeling goals.

To learn more about the Claim Life Cycle Model approach, and how you can employ it to build better claims models for your organization, contact me at



Jeff White Joins Gross Consulting

Gross Consulting is excited to announce the continued expansion of our actuarial consulting staff with the addition of Jeff White.

Jeff brings over 25 years of P&C insurance experience as an actuarial and data leader, helping clients utilize the optimal data and technology to meet their analytical needs.

Prior to joining Gross Consulting, Jeff founded Sync Oasis LLC, which helps insurance enterprises build analytical data platforms, both on-premises and in the cloud.  An analytical data platform pulls data from various sources into a common platform, where the data is prepared once.  This single source of truth, ready-made for insurance data analytics, provides the following benefits:

  • Data is integrated and standardized as if it came from the same source
  • Data Quality is applied to clean the data
  • Data structure is flexible enough to easily accommodate new or changing data sources
  • Data is merged / matched to provide Customer 360 views (upon request)
  • Data structure is organized around real physical entities, such as homes, cars, people, etc. (upon request)
  • Queries can pull attributes at a particular point in time or as they were when the transaction occurred
  • Query results can be repeated, even many months after the original query was run
  • Business logic is managed by the analytical users through table driven logic

Jeff brings these capabilities with him to Gross Consulting.

To learn more about Jeff’s experience and how he can help your company to succeed click here.

Sarah Krynski Joins Gross Consulting

Gross Consulting is excited to announce the addition of our newest Intern, Sarah Krynski!

Sarah Krynski joined Gross Consulting as an intern in May 2019. She is currently working towards her B.S. in Actuarial Science and B.A. in Mathematical Statistics at the University of St. Thomas and will graduate in May 2020.

Sarah is currently the V.P. of Member Relations of St. Thomas's chapter of Gamma Iota Sigma, an international business fraternity that specializes in insurance, risk management, and actuarial science. Her primary focus is data analysis and software development for our Cognalysis suite of tools.

To learn more about Sarah’s experience and how she can help your company to succeed click here.

Tim Davis Joins Gross Consulting

Gross Consulting is excited to announce the addition of our newest Consultant, Tim Davis!

Tim has over 17 years of experience as an actuary and economist in the insurance industry.  He has enjoyed a broad range of experience in areas including crop, large accounts pricing, program business, and ceded reinsurance.  He has supplemented this experience with roles outside the insurance industry as an entrepreneur and business owner, as well as a financial advisor.

Before joining Gross Consulting, Tim was Vice President and Senior Actuary at Hudson Insurance Company, supporting their Crop Insurance line of business.  This role focused on fund allocation, reserving, and product development.  Prior to this role, he served as an economist with the USDA’s Risk Management Agency, supporting the 508(h) product submission, review, approval, and implementation procedure.

Tim has served as an actuary at The St. Paul Companies and Employers Reinsurance Corporation.

Tim earned his B.S. in Mathematics and Economics from Northwest Missouri State University.  He is a Fellow of the Casualty Actuarial Society.

To learn more about Tim’s experience and how he can help your company to succeed click here.

Steve Lacke Joins Gross Consulting

Gross Consulting is excited to announce the addition of our newest Senior Consultant, Steve Lacke.

Mr. Lacke has over 28 years of experience as an actuary and insurance executive, particularly in the professional liability area of practice (Medical Professional, E&O, D&O, and EPL). Most recently, Steve founded Birchwood Consulting, leveraging his considerable medical professional liability expertise to bring creative actuarial consulting solutions to carriers in this space.

Steve spent eight years with Constellation, the parent company of three medical professional liability insurers, where he held multiple leadership positions, including Chief Actuary, CFO and Chief Operating Officer. Prior to this, Steve recorded a decade of service at Travelers/St. Paul Companies in actuarial roles including pricing, reserving, reinsurance, and strategy.

In his spare time, Mr. Lacke is the Chairman of the Board at True Friends Foundation, a nonprofit providing life-changing experiences that enhance independence and self-esteem for over 5,000 children and adults with disabilities annually.

Steve is a Fellow of the Casualty Actuarial Society (FCAS) and a Member of the American Academy of Actuaries (MAAA). He also holds an MBA from the Carlson School of Management at the University of Minnesota.

To learn more about Steve’s experience and how he can help your company to succeed click here.