hx welcomes Insurdata as a Renew Connect partner
For insurers in need of detailed geographical insights, Insurdata offers highly accurate geocoding without the need for manual surveys or online research. Through Insurdata's partnership with hx, customers can integrate this data into their pricing models in a few clicks.
It can be incredibly challenging for the insurance industry to collect high-quality position and risk attribute data without spending hours manually searching online or performing on-site surveys. However, this information is essential for accurate, reliable pricing. Take flood pricing, where a couple of inches of difference in site elevation or slope can determine whether a flood event results in a claim. For this reason, hx is excited to announce that we will be partnering with Insurdata as part of our Renew Connect program to give our customers access to this information.
Insurdata delivers high-resolution, accurate property and risk attribute data to optimise pricing, underwriting and risk assessment decisions. The Insurdata Exposure Engine has been designed for full peril coverage globally. The system generates specific attribute data for any type of exposure, including flood, earthquake, wildfire, hurricane, tornado and terrorism.
hx and Insurdata were both members of cohort 3 of the Lloyd’s Lab which was “focussed fully on finding solutions with the potential to contribute to the ecosystem of services as part of the future at Lloyd’s vision, including ways to enhance data sharing and provide new sources of risk insight; pricing and risk models to help Lloyd’s market participants better understand threat scenarios; and ways to reduce the cost of processing claims as well as the burden of compliance and regulation”.
We have continued our collaboration while growing rapidly since the conclusion of our time in the Lloyd’s Lab.
The integration between hx and Insurdata has been designed to be streamlined, combining Insurdata's expertise and data capabilities with Renew's API-first architecture and rapid model deployment. Our customers can enhance their underwriting decision-making in one centralised place, with no need for re-keying between systems.
“We are delighted to be working with the hx team as a Renew Connect partner. Through this partnership, organisations will now be able to capitalise on our high-resolution property and risk attribute data to ensure data-driven decision making. We’re very much looking forward to helping hx customers improve their knowledge of locations with a simple yet powerful integration of our technology.” Rosina Smith, Head of Client Success, Insurdata.
For our Renew clients, the Renew Connect section of our documentation now includes a complete annotated model which can be imported into your Renew environment to get you up and running with address search, geo-location and risk attribute querying in minutes. The Insurdata APIs are fully documented here; if you have any queries about anything in the Renew documentation, please contact our support team. You will need an Insurdata API key, which is available on request.
If you are not a Renew client and you are interested in a demo of how Insurdata in Renew can assist your business, please contact email@example.com and we will arrange a session.
Expose the flaws in exposure data
In September 2007, I wrote an article entitled ‘Exposure Management; Back to the Future’. At the time, modelling was the primary focus of re/insurance analytics and companies were investing considerable sums in building catastrophe modelling teams. It was an exciting time to be in the cat modelling space.
The article was written in response to various discussions regarding over-reliance on modelling. Models were already sophisticated, but high-profile events had highlighted that they were never intended to predict event losses precisely, and that the market still needed to focus on the fundamentals of exposure management.
Of course, while many companies never lost sight of this, some were over-reliant on modelled loss estimates being ‘right’ to maintain or grow the P&L and, of utmost importance, to protect their balance sheet.
One natural response to model dominance was to ensure good old-fashioned exposure management remained at the heart of risk operations and all key metrics, and not to be lured too far by the magic of the models. Stay true to the fundamentals: what business am I writing, and where is it?
A major shift in focus
Two huge and previously unconsidered catastrophes, 9/11 and Hurricane Katrina, are both seminal events in the development of exposure management; not just because of their devastating human and financial impact, but also because they both led to a much greater focus on the fundamentals of risk management.
The common denominator underpinning these two events is exposure analytics. Exposures for both events were catastrophically misunderstood. In the case of 9/11, addresses were incorrectly placed nowhere near the actual buildings, and in Katrina the key vulnerability characteristics were completely wrong, with hundreds of thousands of risks incorrectly coded.
Localised events, such as 9/11, resulted in a range of analytical responses that would be embedded in key re/insurance workflows over the ensuing years. New tech companies popped up, the big brokers created exposure analytics tools, and as the variety and resolution of the models increased there was a sense that exposure management was back in vogue.
The longer-term impact of Katrina was a considerable increase in interest in flood exposure. Such spikes are always evident following major losses. I received my first call about terrorism modelling on 12 September 2001, while for Katrina the phone was ringing with similar flood enquiries before landfall. I was one of those in the market (metaphorically and in many cases literally) explaining what was and, critically, was not, included in the models used to assess Katrina losses.
A clear gap in understanding
Why was this important? Well, for all intents and purposes the different forms of flood modelling had been widely misunderstood up to this point. This was different to 9/11, as the issue there wasn’t the lack of terrorism modelling per se but the lack of understanding regarding exposure correlation; previously misunderstood concentrations were putting companies out of business.
The problem was clear: two of the three core components of all ground-up modelled loss estimates were simply not known. The hazard was fairly well understood, but the same could not be said for property locations – the geocoding – or the property characteristics – the vulnerability.
Today, have we filled these data gaps? No, definitely not. In fact, the problem is getting worse. As models – or the hazard element – and analytics have become increasingly high-resolution, the gaps in the data have become more exposed than ever.
There’s an assumption that high-resolution models are matched with equally high-resolution exposure data. That is not the case. I want to be perfectly clear on this – the industry is exposed to multiple balance sheet impairment issues due to a continued reliance on models and exposure data that are seriously imprecise and mismatched.
We started Insurdata in 2017 because we believed some of the fundamentals of re/insurance risk analytics were flawed. We knew we had quite a task ahead of us, as the flawed analytics are central to the reserving and pricing for most re/insurers, including publicly traded companies. Given the magnitude and sensitivity of the problem, central to building the right technology was a process of education and change management.
This process is very similar to the change-management activities related to significant model changes. Historically, many of the big modelling changes have been likened to a catastrophic event, given the impact on key metrics that some companies have experienced. At Insurdata we knew our technology could have a similar impact. In fact, we anticipated it was very likely.
Fast forward three years and numerous analyses of live exposure data have proved the impact of flawed exposure data beyond any doubt. Rather than poring over endless details, consider a snapshot of a recently completed project spanning a portfolio subset of ~1,000 properties in Florida and California, which highlights the impact:
- The average geocoding error was 385 metres. In other words, the insurer believed the properties were, on average, 385 metres from their actual locations.
- Only 8 percent of the geocodes for the properties were correct.
- Analyses with a commercial model showed that the annual average loss for ~12% of properties moved from no apparent risk at the point of underwriting to high risk, or vice versa.
- In real terms, the impact for this portfolio subset was a change in premium of $21m.
Bear in mind that the above analysis was conducted on live policies; this isn’t hypothetical and this portfolio subset is most definitely not the worst we’ve seen.
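Geocoding displacement of the kind quoted above is typically measured as the great-circle distance between the recorded and the corrected coordinates. The sketch below shows that calculation with the standard haversine formula; the coordinate pairs are invented for illustration and are not the study data.

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two WGS84 points (haversine formula)."""
    r = 6_371_000  # mean Earth radius in metres
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlmb / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

# Hypothetical (recorded, corrected) coordinate pairs for three properties
pairs = [
    ((25.7617, -80.1918), (25.7640, -80.1900)),    # Florida example
    ((34.0522, -118.2437), (34.0522, -118.2437)),  # geocode already correct
    ((27.9506, -82.4572), (27.9470, -82.4600)),
]
displacements = [haversine_m(r[0], r[1], c[0], c[1]) for r, c in pairs]
avg = sum(displacements) / len(displacements)
print(f"average displacement: {avg:.0f} m")
```

Averaging these per-risk displacements across a portfolio gives the kind of headline figure quoted above; the share of zero (or near-zero) displacements gives the share of correct geocodes.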
Facing up to the data problem
Companies can no longer simply blame inherent volatility in modelled loss estimates, or hope that the next event will not uncover a systemic exposure management issue. As an industry, we must face up to the deficiencies in the exposure data we rely upon and grasp the technologies which now exist to bridge the gap between data quality and model capabilities.
This is a fundamental problem that simply won’t go away without action, and its potential market impact if not addressed head on is the equivalent of multiple cat 5 hurricanes about to make landfall.
This article was published in Insurance Day on Wednesday 9 September
IRB Brasil Re selects Insurdata platform
- Cat modelling team subscribes to exposure data portal
- Focus on enhancing exposure accuracy, operational efficiency and reducing model volatility
- Insurdata Portal seamlessly integrated into day-to-day workflow
London, 19 August 2020 – Insurdata, the award-winning insurtech firm which specialises in the augmentation of peril-specific exposure and risk data via its exposure platform, has today announced that IRB Brasil Re (IRB) has licensed its Insurdata Portal.
IRB, the largest reinsurance company in Latin America and with an expanding presence in the international market, will use the platform to generate precise property geocode information across its global exposures. The company’s catastrophe modelling team will integrate the Portal into its daily risk assessment workflow, boosting the resolution of existing data in real time using Insurdata’s wide range of proprietary technologies.
Capitalising on the enhanced exposure information across both its Latin American and international portfolios, IRB aims to use the technology to increase operational efficiency significantly and benefit from more precise exposure data to drive increased accuracy of catastrophe modelling outputs and exposure accumulation information. The increased data precision will also serve to reduce overall model volatility.
Commenting on the announcement, Luis Brito, Catastrophe Modelling Manager at IRB, said: “We are very excited to be working with Insurdata as we look to enhance the resolution of the global exposure data which supports our underwriting, pricing and portfolio management decisions. That data quality will enable us to maximise the analytical potential of our catastrophe modelling, reduce the associated volatility of modelled outputs and boost our overall underwriting efficiency.”
Jason Futers, CEO, Insurdata, added: “The decision by a company of the scale and sophistication of IRB to adopt our platform is a fantastic endorsement of our technology and exposure methodologies. By enhancing exposure data, our aim is to give underwriters greater confidence in their modelled loss estimates leading to improved risk selection and portfolio management, and ultimately more accurately priced products and stronger balance sheets. We’re delighted to work closely with IRB to integrate our Portal into their data processes, ensuring they fully capitalise on the potential it creates.”
Mercer joins Insurdata Advisory Board
• Mercer joins Nagel and Drattell on the Insurdata Advisory Board
• Mercer previously co-founder of Validus Re and founder of Ventus Risk Management
London, 06 August 2020 – Insurdata, the award-winning insurtech firm which specialises in the correction, creation and augmentation of peril-specific exposure and risk data via the Insurdata Platform, has today announced that Stuart Mercer has joined the company’s Advisory Board.
Mercer brings decades of industry experience to the Board with a career that spans both the re/insurance and ILS arenas. He has extensive knowledge of the value of precise, high-resolution data in the underwriting cycle having served as CEO and Founder of Ventus Risk Management, a technology and analytics-focused provider of coastal property insurance, and having previously co-founded Validus Re.
Mercer commented: “The re/insurance industry is changing rapidly and having had the opportunity to look at the Insurdata technology in detail, it’s clear to me that Insurdata is at the forefront of that change. The data platform creates the foundational layer upon which the industry can reduce volatility and ensure their key metrics for underwriting, pricing and risk transfer are precise. I’m really looking forward to working with Jason and the team.”
Mercer joins fellow Board members Eric Drattell and Marcus Nagel.
Drattell is an accomplished business leader, with extensive legal, business and compliance expertise across privacy, technology transactions, licensing and cloud/SaaS computing. He is currently General Counsel and Chief Compliance Officer for cloud-based FinTech company Roostify, and during his career was also Senior Vice President and General Counsel at risk modelling firm RMS.
Nagel is a hugely experienced insurance industry figure having held numerous senior positions at Zurich, including CEO of Zurich Germany, Chairman of Zurich Deutscher Herold and Global Head of IFA and broker business at Zurich Global Life.
Jason Futers, CEO of Insurdata, said: “I am delighted to welcome Stuart onto the Insurdata Advisory Board. He has been at the forefront of the evolution of underwriting data and analytics throughout his career and brings an incredible depth of market insight. With Eric, Marcus and Stuart we have a truly outstanding team guiding our development as we grow Insurdata’s market presence and boost our ability to enable our clients to optimise pricing, underwriting and risk assessment decisions.”
Insurdata wins “Insurtech Initiative of the Year” Award
· Award presented at Insurance Day London Market Awards 2019
· Entries judged by independent panel of industry leaders
· Award recognises initiative predicted to have greatest impact on London market
We are delighted to announce that Insurdata has been awarded the “Insurtech Initiative of the Year” award at the Insurance Day London Market Awards 2019.
The shortlist for this award boasted some outstanding initiatives spanning critical aspects of the London insurance market. However, the judging panel, which was composed of leading industry figures from organisations including the LMA, Aon, BCG and the MGAA, believed that Insurdata stood out as the initiative which demonstrated the clearest innovation, provided the most significant opportunity to enhance business practices, and had the greatest potential to impact the London Market.
Commenting on the award success, Jason Futers, CEO, Insurdata, said: “This is a fantastic achievement for a company which only launched in 2017 and recognises the huge opportunity which Insurdata provides to address head-on the fundamental data resolution challenge which the insurance industry faces.
“The last 24 months have been a very busy period for Insurdata. We have secured strong backing for the platform, analysed, created and augmented live risk data for over 30 re/insurers, and made it through the stringent process of the Lloyd’s Lab as part of the Cohort 3 programme. It’s been an incredible two years.”
Jeremy Sterns, Chief Technology Officer, added: “This award recognises the immense effort that the Insurdata team has made to turn the original idea of creating a platform that would enable the creation of high-resolution data at the individual exposure level into a fully functioning reality. We’re excited about what the future has in store and are looking forward to continuing to roll out our API to syndicates, (re)insurers, brokers and modellers working to improve their exposure data.”
Insurdata study reveals scale of exposure data challenge
Insurdata, the award-winning insurtech firm which specialises in the augmentation of peril-specific exposure and risk data via its Exposure Engine, has today released the findings of an exposure data resolution study for a sample portfolio of US-based insured properties.
- Over 10% of properties in study displaced by 500m-1,000m
- First-floor elevation model assumptions incorrect for 90% of locations
- Study based on sample of 4,500 US-based insured property assets
The findings revealed that of the 4,500 locations assessed in the report, over 40 percent (over 1,800 locations) were displaced by more than 10 metres, with 10 percent of the location data inaccurate by between 500 and 1,000 metres. The combined results showed an average displacement of 183 metres for the entire data set, with the largest displacement being 33 kilometres.
Additional analysis conducted by Insurdata at the individual company level has revealed that correcting location latitudes and longitudes can have a material effect on Average Annualised Loss estimates.
Commenting on the degree of displacement evident in the data, Paul Burgess, Head of Client Development at Insurdata, said: “It is important to consider this level of displacement in the context of standard accumulation zones. If a (re)insurer is applying a 200 metre accumulation zone for its terrorism exposure, for example, our study reveals that at a minimum 6 percent of the risks will fall outside of the accumulation zones they are believed to be in, while locations with smaller displacements could also be outside of the zone perimeters.”
The study also analysed the data sets to assess first-floor elevation (FFE) information against standard model assumptions regularly applied to assets where no FFE data is available. The findings showed that the FFE measurements for the properties were much lower than default modelled assumptions for over 90 percent of locations in the study.
According to Jason Futers, CEO of Insurdata: “The results from this particular data set show that there is a strong likelihood that the modelled loss estimates for flood will be underestimated for these particular locations. Flood exposure is one of the most data intensive risks in the (re)insurance market. Without key attribute data such as FFE the value of applying high-resolution models to such perils will be significantly reduced.”
He continued: “The misrepresentation of location data and a lack of detailed building parameter information can have a significant detrimental impact on the ability of (re)insurance companies to accurately assess exposure levels from the individual risk through to the property portfolio. Advances in technology now mean we can generate property-specific data at a much more granular level for every asset in a portfolio in a quick, efficient and accurate manner. Companies must therefore take steps to capitalise on this to improve risk selection and pricing and reduce portfolio volatility.”
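The sensitivity of modelled flood losses to FFE can be shown with a toy calculation. The depth-damage curve and all values below are invented for illustration and do not represent any vendor's model: if the assumed FFE is higher than the flood water actually reaches, the modelled loss is zero, while a real, lower FFE for the same flood produces a substantial loss.

```python
def effective_depth_m(flood_depth_at_ground_m, ffe_m):
    """Depth of water above the first finished floor (toy illustration)."""
    return max(0.0, flood_depth_at_ground_m - ffe_m)

def damage_ratio(depth_m):
    """Crude piecewise depth-damage curve; purely illustrative."""
    if depth_m <= 0:
        return 0.0
    return min(1.0, 0.25 + 0.25 * depth_m)

# Same 0.8 m flood at ground level, two FFE assumptions (hypothetical values):
default_ffe, measured_ffe = 0.9, 0.3
loss_default = damage_ratio(effective_depth_m(0.8, default_ffe))    # water never reaches floor
loss_measured = damage_ratio(effective_depth_m(0.8, measured_ffe))  # 0.5 m above floor
```

In this sketch the default FFE assumption produces no modelled loss at all, while the measured FFE yields a meaningful damage ratio — the direction of underestimation described in the study when real FFE values are lower than model defaults.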
The analysis was based on a sample set of approximately 4,500 insured property assets. The location data, which was sourced from a variety of re/insurance entities, was for US-wide risks. The geocoded property data, which included building and zip-code level information, was augmented using Insurdata’s Exposure Engine with updated building centroid information and additional building attribute data, including FFE and building perimeter dimensions.