Expose the flaws in exposure data

In September 2007, I wrote an article entitled ‘Exposure Management; Back to the Future’. At the time, modelling was the primary focus of re/insurance analytics and companies were investing considerable sums in building catastrophe modelling teams. It was an exciting time to be in the cat modelling space.

The article was written in response to various discussions regarding over-reliance on modelling. Models were already sophisticated, but high-profile events had highlighted that they were never intended to predict event losses precisely, and that the market still needed to focus on the fundamentals of exposure management.

Of course, while many companies never lost sight of this, some were over-reliant on modelled loss estimates being ‘right’ to maintain or grow the P&L and, of utmost importance, to protect their balance sheet.

One natural response to model dominance was to ensure good old-fashioned exposure management remained at the heart of risk operations and all key metrics, and not to be lured too far by the magic of the models. Stay true to the fundamentals: what business am I writing, and where is it?

A major shift in focus

Two huge and previously unconsidered catastrophes, 9/11 and Hurricane Katrina, are seminal events in the development of exposure management, not just because of their devastating human and financial impact, but also because both led to a much greater focus on the fundamentals of risk management.

The common denominator underpinning these two events is exposure analytics. Exposures for both events were catastrophically misunderstood. In the case of 9/11, addresses were being geocoded nowhere near the actual buildings, and in Katrina the key vulnerability characteristics were completely wrong, with hundreds of thousands of risks incorrectly coded.

Localised events, such as 9/11, resulted in a range of analytical responses that would be embedded in key re/insurance workflows over the ensuing years. New tech companies popped up, the big brokers created exposure analytics tools, and as the variety and resolution of the models increased there was a sense that exposure management was back in vogue.

The longer-term impact of Katrina was a considerable increase in interest in flood exposure. Such spikes are always evident following major losses. I received my first call about terrorism modelling on 12 September 2001, while for Katrina the phone was ringing with similar flood enquiries before landfall. I was one of those in the market (metaphorically and in many cases literally) explaining what was and, critically, was not, included in the models used to assess Katrina losses.

A clear gap in understanding

Why was this important? Well, for all intents and purposes the different forms of flood modelling had been widely misunderstood up to this point. This was different to 9/11, as the issue there wasn’t the lack of terrorism modelling per se but the lack of understanding of exposure correlation; previously misunderstood concentrations were putting companies out of business.

The problem was clear: two of the three core components of any ground-up modelled loss estimate were simply not known. The hazard was fairly well understood, but the same could not be said for the property locations (the geocoding) or the property characteristics (the vulnerability).
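To make those three components concrete, here is a minimal sketch of how they combine into a ground-up loss estimate. It is illustrative only; the function names, the toy portfolio and the numbers are assumptions rather than any vendor’s actual model, but it shows why a precise hazard layer cannot compensate for a wrong geocode or a miscoded vulnerability.

```python
# Minimal illustrative sketch: hazard, geocoding and vulnerability combining
# into a ground-up loss. All names, curves and values are invented.

def ground_up_loss(portfolio, hazard_intensity, damage_ratio):
    """Expected ground-up loss for one event across a small portfolio.

    portfolio        -- list of dicts with 'lat', 'lon', 'value', 'occupancy'
    hazard_intensity -- function (lat, lon) -> event intensity (e.g. flood depth)
    damage_ratio     -- function (intensity, occupancy) -> share of value lost
    """
    total = 0.0
    for risk in portfolio:
        # Hazard is sampled at the geocoded coordinates: a bad geocode means
        # the wrong intensity, however good the hazard model.
        intensity = hazard_intensity(risk["lat"], risk["lon"])
        # Vulnerability translates intensity into damage for this property's
        # characteristics: miscoded characteristics mean the wrong curve.
        total += damage_ratio(intensity, risk["occupancy"]) * risk["value"]
    return total


# Toy inputs: a uniform 1.5 m flood depth and a crude step-function curve.
portfolio = [
    {"lat": 25.77, "lon": -80.19, "value": 500_000, "occupancy": "residential"},
    {"lat": 25.79, "lon": -80.21, "value": 750_000, "occupancy": "commercial"},
]
print(ground_up_loss(
    portfolio,
    hazard_intensity=lambda lat, lon: 1.5,
    damage_ratio=lambda depth, occ: 0.4 if depth > 1.0 else 0.1,
))
```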

Today, have we filled these data gaps? No, definitely not. In fact, the problem is getting worse. As the models (the hazard element) and the analytics built on them have become increasingly high-resolution, the gaps in the data have become more exposed than ever.

There’s an assumption that high-resolution models are matched with equally high-resolution exposure data. That is not the case. I want to be perfectly clear on this: the industry is exposed to multiple balance sheet impairment issues because it continues to rely on models and exposure data that are seriously imprecise and mismatched.
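To illustrate what that mismatch can do, with an entirely invented hazard grid, coordinates and vulnerability curve, consider how far a modelled loss can move when a geocode is off by a single block on a high-resolution flood grid:

```python
# Illustrative-only sketch of the resolution mismatch: on a high-resolution
# hazard grid, a geocode that is off by one block can land a property in a
# very different hazard cell. The grid, points and curve below are invented.

GRID_METRES = 30  # assumed hazard-grid cell size

# Toy flood-depth grid (metres): depth changes sharply from cell to cell.
flood_depth = [
    [0.0, 0.0, 0.2, 1.8],
    [0.0, 0.1, 1.2, 2.4],
    [0.0, 0.3, 1.9, 2.9],
]

def depth_at(x_m, y_m):
    """Look up flood depth for a point given in metres from the grid origin."""
    col = min(int(x_m // GRID_METRES), len(flood_depth[0]) - 1)
    row = min(int(y_m // GRID_METRES), len(flood_depth) - 1)
    return flood_depth[row][col]

def modelled_loss(depth_m, value):
    """Crude damage-ratio curve: the deeper the water, the larger the loss."""
    return min(depth_m / 3.0, 1.0) * value

value = 1_000_000
actual = modelled_loss(depth_at(100, 70), value)   # true building footprint
geocoded = modelled_loss(depth_at(40, 70), value)  # geocode ~60 m away

print(f"actual footprint: {actual:,.0f}   stale geocode: {geocoded:,.0f}")
```

Under these invented numbers, a 60-metre geocoding error moves the modelled loss for a single property by almost a factor of ten; the same arithmetic repeated across a portfolio is what sits behind the balance sheet concern above.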

We started Insurdata in 2017 because we believed some of the fundamentals of re/insurance risk analytics were flawed. We knew we had quite a task ahead of us, as the flawed analytics are central to the reserving and pricing for most re/insurers, including publicly traded companies. Given the magnitude and sensitivity of the problem, central to building the right technology was a process of education and change management.

This process is very similar to the change-management activities related to significant model changes. Historically, many of the big modelling changes have been likened to a catastrophic event, given the impact on key metrics that some companies have experienced. At Insurdata we knew our technology could have a similar impact. In fact, we anticipated it was very likely.

Fast forward three years and numerous analyses of live exposure data have proved the impact of these flaws beyond any doubt. Rather than poring through endless detail, consider a snapshot of a recently completed project spanning a portfolio subset of ~1,000 properties located in Florida and California, which highlights the impact:

Bear in mind that the above analysis was conducted on live policies; this isn’t hypothetical and this portfolio subset is most definitely not the worst we’ve seen.

Facing up to the data problem

Companies can no longer simply blame inherent volatility in modelled loss estimates, or hope that the next event will not uncover a systemic exposure management issue. As an industry, we must face up to the deficiencies in the exposure data we rely upon and grasp the technologies which now exist to bridge the gap between data quality and model capabilities.

This is a fundamental problem that simply won’t go away without action, and its potential market impact, if not addressed head-on, is the equivalent of multiple Cat 5 hurricanes about to make landfall.

This article was published in Insurance Day on Wednesday 9 September

“We are extremely grateful to those re/insurers who have worked with us to help us evaluate the current industry data baseline and explore ways in which we can collectively raise it”
Jason Futers — Insurdata