CLCM in a Nutshell

In my last post, I talked about some of the challenges I see in current claims modeling efforts, and offered the opinion that Claim Life Cycle Modeling (CLCM) is an approach that helps resolve many of those problems.
In this post, I want to shed a bit more light on CLCM:

CLCM is a strategy

When performing claims modeling, one of the first questions is, “What kind of model will we build?”  The answer to this must align with current corporate strategy.  The model has to be able to answer questions that are both relevant and actionable.  CLCM is a strategic choice to build a flexible framework based on as much available data as possible to answer a wide variety of pricing, reserving, and claims modeling questions.

CLCM is a process

Rather than building a single claims model that attempts to predict a particular future outcome, CLCM involves building a set of interrelated models that form the framework for a prediction of many future claim behaviors.  The result is a probability distribution of a variety of future claim statistics at the claim level, in the aggregate, by segment, by layer, etc.
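
To give a flavor of what such a framework can look like, here is a deliberately simplified sketch (not the Cognalysis implementation; the component models and parameters are made up for illustration) of simulating a single claim's remaining life cycle one time step at a time, with each step's behavior conditioned on the claim's state entering that step:

```python
import random

# A deliberately simplified sketch of a claim life cycle simulation.
# The component models below are hypothetical placeholders -- in a real CLCM
# analysis they would be fitted models driven by each claim's characteristics.

def prob_close(age_in_quarters):
    """Chance the claim closes this period, rising with age."""
    return min(0.15 + 0.05 * age_in_quarters, 0.95)

def prob_payment():
    """Chance an open claim makes a payment this period."""
    return 0.40

def sample_payment():
    """Severity of an incremental payment, given one occurs."""
    return random.lognormvariate(8.0, 1.2)

def simulate_claim(paid_to_date, age_in_quarters, n_sims=10_000):
    """Simulate one claim's remaining life cycle, one time step at a time."""
    ultimates = []
    for _ in range(n_sims):
        paid, age, is_open = paid_to_date, age_in_quarters, True
        while is_open and age < 60:                 # cap the horizon at 15 years
            if random.random() < prob_payment():    # payment behavior this step...
                paid += sample_payment()
            if random.random() < prob_close(age):   # ...then closure behavior
                is_open = False
            age += 1
        ultimates.append(paid)
    return ultimates                                # a distribution, not a point estimate

simulated_ultimates = simulate_claim(paid_to_date=25_000, age_in_quarters=4)
```

Repeating this for every open claim, with the component models fitted to actual claim characteristics, is what produces the claim-level distributions that can be examined individually or rolled up.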

CLCM is open and transparent

CLCM is an idea that’s been in development at Gross Consulting for over a decade.  During this period, we have delivered numerous presentations and participated in many discussions of the CLCM process, at both regional and national actuarial conferences.  From the outset, the goal has been to encourage open discussion and review of the CLCM process, and to encourage more actuaries to use some of these ideas to enhance their analyses.

CLCM is implemented in software

Here at Gross Consulting, we perform CLCM analyses with the benefit of specialized software:  Cognalysis CLCM.  However, there’s no requirement that a CLCM analysis be performed using Cognalysis software; we have documented the process thoroughly enough that you should be able to replicate many of the ideas using your own logic.  Of course, we also invite actuaries to leverage our investment of time, effort, and experience to arrive at the finished product much faster.  CLCM is something we believe every practicing casualty actuary should be using.

CLCM unifies pricing, aggregate reserving, and claims modeling

These three analytics efforts rely on the same underlying bodies of data:  past premium, exposure, and claims data.  However, they typically go about formulating the key questions differently, resulting in differing assumptions, and therefore potentially conflicting results.  CLCM, on the other hand, builds a set of claims behavior models that describe future outcomes, resulting in
  1. A pricing model which predicts pure premium at the policy level
  2. A claim-level reserve estimate, including probability distributions that can be rolled up by segment and layer
  3. A claims model that can be used for live claims triage, “jumper” assignment, etc.
Using CLCM, these efforts don’t require three sets of analysts building three separate models – these three deliverables are a natural outcome of a single CLCM analysis, based on the same starting data and a common set of assumptions, so the three models will be in agreement with each other.
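
As a rough illustration of how a single simulated output can feed all three deliverables, the sketch below (column names, values, and the $100,000 threshold are hypothetical, not CLCM output specifications) summarizes one table of simulated claim outcomes three different ways:

```python
import pandas as pd

# Hypothetical output of a CLCM simulation: one row per claim per simulation.
# Column names and values are illustrative only.
sims = pd.DataFrame({
    "sim":          [1, 1, 2, 2],
    "claim_id":     ["A", "B", "A", "B"],
    "policy_id":    ["P1", "P2", "P1", "P2"],
    "segment":      ["GL", "WC", "GL", "WC"],
    "paid_to_date": [10_000, 0, 10_000, 0],
    "ultimate":     [40_000, 5_000, 120_000, 2_000],
})

# 1. Pricing view: expected ultimate loss by policy (the pure premium numerator)
pure_premium = (sims.groupby(["policy_id", "sim"])["ultimate"].sum()
                    .groupby("policy_id").mean())

# 2. Reserving view: distribution of unpaid amounts by segment
sims["unpaid"] = sims["ultimate"] - sims["paid_to_date"]
segment_unpaid = (sims.groupby(["segment", "sim"])["unpaid"].sum()
                      .groupby("segment").quantile([0.50, 0.75, 0.95]))

# 3. Claims view: per-claim probability of breaching a large-loss threshold
large_loss_prob = (sims.assign(is_large=sims["ultimate"] > 100_000)
                       .groupby("claim_id")["is_large"].mean())
```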

CLCM builds reserve estimates at the claim level, based on all available information for that claim

Traditional reserving methods look at claim development using triangle methods, which incorporate just three pieces of information:  Loss, Time Period, and Development Age.
Claims models typically incorporate many more pieces of data, but they make point predictions as of a particular point in time – say 30 days or 90 days.
CLCM looks at all claim behavior over time, at each time step, with behavior in each time step a function of behavior in the previous steps.

CLCM succeeds when other methods are most likely to fail

CLCM was originally developed to address a very common question:  “How do I best estimate aggregate reserves when things are changing?”  (Or worse yet, “How do I know whether or not things are changing?”)
Traditional triangle methods work well — until they don’t.  Because they rely on just three pieces of information (Loss, Time Period, Development Age), they break down when the book is changing over time across a different dimension (see the numerical sketch after this list).  These scenarios include:
  1. Mix shifts
  2. Changes in case reserving methods
  3. Changes in payment timing (deliberate or not)
  4. Changes in the external environment (trend, new causes of loss)
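
As a simple numerical illustration of the first scenario (the numbers are made up), a development factor that only sees Loss, Time Period, and Development Age cannot tell that the mix has shifted toward slower-developing claims, and it understates the reserve need:

```python
# Illustrative numbers only: two claim types develop differently, and a single
# development factor fit on yesterday's mix misprices tomorrow's book.
fast = {"paid_12": 100.0, "paid_ult": 110.0}   # develops 1.10 from 12 months to ultimate
slow = {"paid_12": 100.0, "paid_ult": 200.0}   # develops 2.00 from 12 months to ultimate

def book_ldf(n_fast, n_slow):
    """Age-to-ultimate factor observed for a book with the given claim counts."""
    paid_12 = n_fast * fast["paid_12"] + n_slow * slow["paid_12"]
    paid_ult = n_fast * fast["paid_ult"] + n_slow * slow["paid_ult"]
    return paid_ult / paid_12

historical_ldf = book_ldf(n_fast=80, n_slow=20)            # 1.28, fit on the old 80/20 mix

new_paid_12 = 20 * fast["paid_12"] + 80 * slow["paid_12"]  # book has shifted to 20/80
estimated_ult = historical_ldf * new_paid_12               # triangle estimate: 12,800
true_ult = 20 * fast["paid_ult"] + 80 * slow["paid_ult"]   # actual ultimate:   18,200
```

A claim-level model that uses the characteristics driving the difference in development would pick up the shift directly.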

CLCM in more detail

Over the course of the next 11 weeks, I’ll be diving deeper into many of these ideas, as well as describing some of what I’ve seen as the key features and benefits of CLCM.  I’d like to be explicit about my goal in this series: if you’re an actuary about to start a pricing, reserving, or claims modeling project, you should absolutely look into CLCM as one of your strategies.  Compared to more traditional approaches, the benefits and capabilities CLCM provides are significant.
Please reach out to me with comments or questions at bret.shroyer@cgconsult.com

The Jumper Dilemma – Why is Claims Modeling So Hard?


Claims modeling is gaining traction right now.  Attend a conference, or talk to some data science / modeling staff, and you’ll likely hear about some current or impending efforts to build a claims model.

“What kind of claims model are you building?” is a natural line of questioning.  If you’re talking to the same groups of people that I am, you’ll hear three general answers:

  1. A jumper model
  2. A triage model
  3. A reserving model

I’d like to discuss each of these, in turn, in the context of an observation that’s becoming more and more clear to me:  The number one mistake being made in claims modeling is that modelers are, with shocking frequency, attempting to answer the wrong question.

The Jumper Claim Model

Let’s start with the Jumper model.  This model attempts to answer the question: “Which claims are most likely to jump by more than $50,000 from the initial case reserve estimate at 30 days?”  To build this model, the analyst assembles claim information and examines the case incurred amounts for each claim at 30 days and at some future date, with the target variable being a binary Yes/No flag for whether the claim met the jumper definition (see the sketch after the list below).  There are several big potential pitfalls with this approach:

  1. The jumper criterion (i.e., a $50K increase from 30 days to ultimate) must be determined before modeling begins
  2. The jumper criterion is almost certainly not optimal
  3. If case reserving methods change (say as a result of the findings of the modeling), this can invalidate the model’s predictive accuracy
  4. There is no prescriptive value in this model; merely identifying claims likely to be jumpers does not say anything about what to do with those claims to change the future. 
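
For concreteness, here is a minimal sketch of the jumper target construction described above (column names are hypothetical; the $50,000 threshold and 30-day snapshot follow the example definition):

```python
import pandas as pd

# Hypothetical snapshots of case incurred: one at 30 days, one at a later evaluation.
claims = pd.DataFrame({
    "claim_id":            ["A", "B", "C"],
    "case_incurred_30d":   [20_000, 5_000, 75_000],
    "case_incurred_final": [95_000, 6_000, 80_000],
})

THRESHOLD = 50_000   # the jumper definition, fixed before any modeling begins
claims["jumper"] = (
    claims["case_incurred_final"] - claims["case_incurred_30d"] > THRESHOLD
).astype(int)

# A classifier is then fit to predict `jumper` from 30-day claim characteristics.
```

Note that the threshold and the evaluation point are baked in before modeling starts, which is exactly where the first two pitfalls come from.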

The Claims Triage Model

With triage models, the modeler is attempting to answer the question: “Which claims are likely to be more complicated or higher severity, and should be assigned to more experienced adjusters?”  Triage models are prescriptive models, in that they attempt to prescribe a future action to help mitigate or reduce future payments.  In that light, triage models can also be built to indicate when and where particular loss control or settlement actions should be performed.  In my opinion, this approach has a good probability of being successful at mitigating claim costs, but there are still a few potential pitfalls:

  1. For many carriers, there is no coding of past loss control procedures, so there’s simply nothing to model on.  Carriers need to code their loss control efforts in a regimented way for some time before they have the data needed to model the effectiveness of those actions
  2. In implementation, many triage models are used primarily to assign complex / high severity claims to more senior claim adjusters.  This is certainly a smart move, but it’s not scalable.  What is that senior-level, experienced claim adjuster going to do that a junior adjuster wouldn’t do?  Wouldn’t it be great if the model could tell us that?  (see also the first point)
  3. As with the jumper approach, if the triage model is based on case reserves and the claims adjustment process changes as a result of the model, that change in claims department behavior can effectively break the model

The Claims Reserve Model

Reserving models are in the minority.  Very little of the current claims modeling effort is being invested in building a more accurate reserving picture.  With reserving models, the modeler is attempting to answer the question: “What is the likely future ultimate value of a reported claim?”  This is a much simpler question, with quite a few ready-made applications.  It would be hard to argue that the modeler is asking the wrong question here.  Instead, the biggest potential pitfalls are in using case reserves as a model input:

  1. Again, what if the case reserving process changes, particularly in reaction to the model?  This breaks the model.
  2. Typically, these models reveal that one of the most important predictors is the case reserve itself.  How do you act on that finding?  Does it mean we should fire the modelers and hire or train better claims staff?

The Ideal Claims Model

This is not to say that attempting to build a claims model is an exercise in futility.  Ideally, claims models should

  • Be based on objective information (this does not include case reserves)
  • Include all available information – exposure detail, claims detail, transactional (time series) data, free-form text, external data, etc.
  • Be flexible enough to answer multiple questions
  • Provide a springboard to enable new actuarial and analytics projects

CLCM Exemplifies the Ideal Claims Model

For the past several years, we’ve been using a different approach to claims modeling that incorporates these ideals:  the Claim Life Cycle Model (CLCM).  Over the course of the next 12 weeks, I’m going to be examining the Claim Life Cycle Model process from a number of different angles in an attempt to explain its strengths, capabilities, and limitations.  I’ll compare and contrast it with the three more common claims modeling approaches I introduced above.  My aim is to help analysts using other claims modeling approaches avoid some of the pitfalls commonly encountered in these efforts, and ultimately to convince a few of you that a Claim Life Cycle Model approach may be the best way forward to achieve your goals in claims modeling.


To learn more about the Claim Life Cycle Model approach, and how you can employ it to build better claims models for your organization, contact me at Bret.Shroyer@cgconsult.com.

Jeff White Joins Gross Consulting

Gross Consulting is excited to announce the continued expansion of our actuarial consulting staff with the addition of Jeff White.

Jeff brings over 25 years of P&C insurance experience as an actuarial and data leader, helping clients use the optimal data and technology to meet their analytical needs.

Prior to joining Gross Consulting, Jeff founded Sync Oasis LLC, which helps insurance enterprises build analytical data platforms, both on-premises and in the cloud.  An analytical data platform pulls data from various sources into a common platform, where the data is prepared once.  This single source of truth, ready-made for insurance data analytics, provides the following benefits:

  • Data is integrated and standardized as if it came from the same source
  • Data Quality is applied to clean the data
  • Data structure is flexible enough to easily accommodate new or changing data sources
  • Data is merged / matched to provide Customer 360 views (upon request)
  • Data structure is organized around real physical entities, such as homes, cars, people, etc. (upon request)
  • Queries can pull attributes at a particular point in time or as they were when the transaction occurred
  • Query results can be repeated, even many months after the original query was run
  • Business logic is managed by the analytical users through table driven logic

Jeff brings these capabilities with him to Gross Consulting.

To learn more about Jeff’s experience and how he can help your company to succeed click here.

Sarah Krynski Joins Gross Consulting


Gross Consulting is excited to announce the addition of our newest Intern, Sarah Krynski!

Sarah Krynski joined Gross Consulting as an intern in May 2019. She is currently working towards her B.S. in Actuarial Science and B.A. in Mathematical Statistics at the University of St. Thomas and will graduate in May 2020.

Sarah is currently the V.P. of Member Relations of St. Thomas’s chapter of Gamma Iota Sigma, an international business fraternity that specializes in insurance, risk management, and actuarial science.  At Gross Consulting, Sarah’s primary focus is data analysis and software development for our Cognalysis suite of tools.

To learn more about Sarah’s experience and how she can help your company to succeed click here.

Tim Davis Joins Gross Consulting

Gross Consulting is excited to announce the addition of our newest Consultant, Tim Davis!

Tim has over 17 years of experience as an actuary and economist in the insurance industry.  He has enjoyed a broad range of experience in areas including crop, large accounts pricing, program business, and ceded reinsurance.  He has supplemented this experience with roles outside the insurance industry as an entrepreneur and business owner, as well as a financial advisor.

Before joining Gross Consulting, Tim was Vice President and Senior Actuary at Hudson Insurance Company, supporting their Crop Insurance line of business.  This role focused on fund allocation, reserving, and product development.  Prior to this role, he served as an economist with the USDA’s Risk Management Agency, supporting the 508(h) product submission, review, approval, and implementation procedure.

Tim has served as an actuary at The St. Paul Companies and Employers Reinsurance Corporation.

Tim earned his B.S. in Mathematics and Economics from Northwest Missouri State University.  He is a Fellow of the Casualty Actuarial Society.

To learn more about Tim’s experience and how he can help your company to succeed click here.

Steve Lacke Joins Gross Consulting

Gross Consulting is excited to announce the addition of our newest Senior Consultant, Steve Lacke.

Mr. Lacke has over 28 years of experience as an actuary and insurance executive, particularly in the professional liability area of practice (Medical Professional, E&O, D&O, and EPL).  Most recently, Steve founded Birchwood Consulting, leveraging his considerable medical professional liability expertise to bring creative actuarial consulting solutions to carriers in this space.

Steve spent eight years with Constellation, the parent company of three medical professional liability insurers, where he held multiple leadership positions, including Chief Actuary, CFO and Chief Operating Officer. Prior to this, Steve recorded a decade of service at Travelers/St. Paul Companies in actuarial roles including pricing, reserving, reinsurance, and strategy.

In his spare time, Mr. Lacke is the Chairman of the Board at True Friends Foundation, a nonprofit providing life-changing experiences that enhance independence and self-esteem for over 5,000 children and adults with disabilities annually.

Steve is a Fellow of the Casualty Actuarial Society (FCAS) and a Member of the American Academy of Actuaries (MAAA). He also holds an MBA from the Carlson School of Management at the University of Minnesota.

To learn more about Steve’s experience and how he can help your company to succeed click here.

Chris Gross Co-Authors Variance Journal Paper

Validation of minimum bias rate factors

In the December 2018 issue of Variance Journal, Christopher Gross and Jonathan Evans published a paper entitled Minimum Bias, Generalized Linear Models, and Credibility in the Context of Predictive Modeling.

Abstract: When predictive performance testing, rather than testing model assumptions, is used for validation, the need for detailed model specification is greatly reduced. Minimum bias models trade some degree of statistical independence in data points in exchange for statistically much more tame distributions underlying individual data points. A combination of multiplicative minimum bias and credibility methods for predictively modeling losses (pure premiums, claim counts, average severity, etc.) based on explanatory risk characteristics is defined. Advantages of this model include grounding in long-standing and conceptually lucid methods with minimal assumptions. An empirical case study is presented with comparisons between multiplicative minimum bias and a typical generalized linear model (GLM). Comparison is also made with methods of incorporating credibility into a GLM.
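
For readers unfamiliar with the underlying technique, the sketch below shows the classic two-way multiplicative minimum bias iteration on simulated data. It is background only, is not the paper's formulation, and omits the credibility component the paper adds:

```python
import numpy as np

# Classic two-way multiplicative minimum bias iteration (background sketch only).
# Observed pure premiums r[i, j] are modeled as x[i] * y[j]; each set of factors
# is repeatedly re-solved so that weighted bias is zero across its dimension.
rng = np.random.default_rng(0)
w = rng.uniform(50, 150, size=(3, 4))                   # exposure weights: class i, territory j
true_x = np.array([1.0, 1.3, 0.8])
true_y = np.array([1.0, 0.9, 1.5, 1.1])
r = np.outer(true_x, true_y) * rng.normal(1.0, 0.05, size=(3, 4))

x, y = np.ones(3), np.ones(4)
for _ in range(100):                                    # iterate until factors stabilize
    x = (w * r).sum(axis=1) / (w @ y)                   # x_i = sum_j w_ij r_ij / sum_j w_ij y_j
    y = (w * r).sum(axis=0) / (x @ w)                   # y_j = sum_i w_ij r_ij / sum_i w_ij x_i

fitted = np.outer(x, y)                                 # fitted pure premiums x_i * y_j
```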

Download the full study directly from Variance Journal here.

Bret Shroyer Joins Gross Consulting

Gross Consulting welcomes Bret Shroyer as our newest employee! Bret’s experience over more than two decades, across a wide range of analytical, actuarial, and strategic roles in the insurance and reinsurance markets, makes him a perfect addition to the company.  Bret’s background includes time with Valen Analytics, Willis Re, and Travelers.

To learn more about Bret’s background and how he can help your company to succeed click here.