
Summoning the 'insurance genie' for portfolio-wide analysis

Jun 2024
Dave Tobias

AI-driven chatbots could streamline recovery efforts and claims payouts based on comprehensive, portfolio-wide insights.


Imagine having an "insurance genie" to tell you about your portfolio's hidden risks. Not a character from Aladdin but an AI-driven chatbot — a large language model (LLM) — poised to reveal the hidden truths within vast amounts of data. For instance, you could ask, "Which region in my portfolio is most susceptible to wildfire damage?" and promptly receive a comprehensive answer highlighting specific regions, underlying conditions, trends, and potential risk mitigations. It would completely change how insurers manage risk. And with more property data available than ever, LLMs are set to push portfolio-wide analysis from a wish into reality.

How LLMs work

For any LLM to be effective, it requires the right input. Two critical components at the heart of its functionality are aerial imagery and geospatial data. Consistent, reliable, high-resolution imagery gives you multiple points in time by which to analyze a given property or region. Comprehensive geospatial data allows you to learn as much as possible about a property's attributes. Computer vision and machine learning models then help the LLM give insurers actionable information. As you add more and more policies into the mix, the LLM begins working at an aggregate level across the whole portfolio. And that is where the fun begins.
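One way to picture that aggregate layer is as structured property records, with attributes derived from imagery, rolled up into portfolio-level counts that an LLM could reason over. The sketch below is purely illustrative: the `Property` fields, region names, and `portfolio_summary` helper are hypothetical assumptions, not any vendor's actual schema or pipeline.

```python
from dataclasses import dataclass

@dataclass
class Property:
    # Attributes a computer-vision model might derive from aerial imagery
    policy_id: str
    region: str
    roof_condition: str        # e.g. "good", "fair", "poor"
    defensible_space_m: float  # cleared vegetation buffer around the structure

def portfolio_summary(properties):
    """Roll per-property attributes up into portfolio-level counts --
    the kind of structured context an LLM could be asked to analyze."""
    summary = {}
    for p in properties:
        stats = summary.setdefault(p.region, {"total": 0, "poor_roofs": 0})
        stats["total"] += 1
        if p.roof_condition == "poor":
            stats["poor_roofs"] += 1
    return summary

portfolio = [
    Property("P-001", "Sierra Foothills", "poor", 3.0),
    Property("P-002", "Sierra Foothills", "good", 12.0),
    Property("P-003", "Gulf Coast", "fair", 0.0),
]
print(portfolio_summary(portfolio))
```

In practice the aggregation would run over millions of policies, but the shape of the idea is the same: imagery-derived attributes become structured facts, and structured facts become something a chatbot can answer questions about.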

LLM efficiency

Imagine a world where insurers navigate the aftermath of severe windstorms in the Midwest or hurricanes threatening the Florida coastline with unprecedented agility. Traditionally, this would entail a laborious process of sifting through property assessments and claims data, a method that is slow and prone to errors. However, with the advent of a sophisticated AI-driven system, insurers can query: "Identify the percentage of properties with the highest wind damage risk in the recently affected areas." As a result, they instantly transition from tackling isolated incidents to strategizing recovery efforts and claims payouts based on comprehensive, portfolio-wide insights.
Insurers could also leverage the genie to be more proactive in their portfolio risk management. For example, consider insurers operating in regions prone to specific perils such as wildfire. They could ask the LLM: "What percentage of properties have deteriorating conditions or heightened risk factors? Check for the number of homes in wildfire zones with reduced defensible space and properties with outdated, high-risk roofing materials." With this information in hand, insurers could save millions in risk mitigation strategies, property inspections, and underwriting processes.
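Behind a natural-language question like that, the LLM ultimately needs a concrete computation. A minimal sketch of what such a wildfire query might reduce to is below; the threshold, the roofing-material list, and the field names are all invented for illustration, not actuarial guidance.

```python
def pct_high_wildfire_risk(properties, min_space_m=9.0):
    """Percentage of properties flagged as higher wildfire risk:
    defensible space below a threshold, or a high-risk roof material.
    Threshold and material list are illustrative assumptions."""
    HIGH_RISK_ROOFS = {"wood shake", "untreated shingle"}
    flagged = [
        p for p in properties
        if p["defensible_space_m"] < min_space_m
        or p["roof_material"] in HIGH_RISK_ROOFS
    ]
    return 100.0 * len(flagged) / len(properties)

portfolio = [
    {"defensible_space_m": 3.0,  "roof_material": "wood shake"},
    {"defensible_space_m": 15.0, "roof_material": "metal"},
    {"defensible_space_m": 4.0,  "roof_material": "tile"},
    {"defensible_space_m": 20.0, "roof_material": "asphalt"},
]
print(pct_high_wildfire_risk(portfolio))  # 50.0
```

The value of the genie is that an underwriter never writes this code; they ask the question in plain English, and the system translates it into a query like this over the portfolio.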
Additionally, this technology can play a pivotal role in reinsurance strategies, enabling insurers to determine when to transfer risk and at what cost. The ability to precisely segment risk and to predict and prevent potential loss scenarios is akin to operating with the insight and agility of a seasoned Business Intelligence analyst perpetually at one's side. This capacity to make informed decisions swiftly would fundamentally transform the insurer's relationship with risk while delivering cost savings.

Accuracy concerns

Of course, the insurance genie needs to be accurate for all this to work. So, is an LLM, or AI more generally, reliable? This concern is understandable. However, it is crucial to compare AI not to a standard of perfection but to human performance, which, while skilled, has flaws. We ask humans to be as accurate and efficient as possible, and the same standard should apply to an LLM. That said, we can establish a rigorous framework that ensures accuracy and precision.
To achieve this, robust guardrails are essential, not just for the sake of specificity and quality in data input, but also to ensure that LLMs serve as diligent assistants rather than autonomous decision-makers. By focusing on specificity in data input, we minimize the risk of erroneous outputs, guiding the system towards more accurate analyses applicable to the task at hand. Crucially, these guardrails involve human oversight at critical decision points, ensuring that the LLM flags potential issues or opportunities for a human expert to review. This collaboration prevents the system from veering into irrelevant territories or making unfounded assumptions, integrating the nuanced judgment of insurance professionals with the computational efficiency of AI.
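A guardrail of that kind can be as simple as a routing rule: only high-confidence, low-impact findings proceed automatically, and everything else goes to a human reviewer. The sketch below is a toy illustration of the pattern; the threshold and field names are assumptions, not a production policy.

```python
def route_finding(finding, confidence, threshold=0.85):
    """Human-in-the-loop guardrail sketch: low-confidence or
    high-impact findings are routed to a human reviewer rather
    than triggering any automated action."""
    if confidence >= threshold and not finding.get("high_impact", False):
        return ("auto_accept", finding)
    return ("human_review", finding)

# A high-impact claim goes to a human even at high model confidence.
action, _ = route_finding({"claim": "roof damage", "high_impact": True}, 0.95)
print(action)  # human_review
```

The point of the design is that the LLM's role ends at "flag and summarize"; the decision itself stays with the insurance professional.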
Another thing to keep in mind is that, unlike human processes (where adapting to feedback can be slower and more complex), we can adjust the LLM to new data, feedback, or strategic directions. This ensures that its outputs remain precise and valuable over time.

Implementing LLMs

As AI's role in insurance continues to grow, the "insurance genie" is not just a figment of imagination. It is a real possibility that can guide us towards unprecedented possibilities in risk management and operational efficiency, not just on a small use-case basis, but on a portfolio-wide level. With every query answered and every risk insightfully analyzed, we can move closer to a world where the complex is made clear, and the hidden portfolio risks are revealed.
Reprinted with permission from the April 11, 2024 edition of ©2024 ALM Global, LLC. All Rights Reserved. Further duplication without permission is prohibited. Original article can be found online here.
© Nearmap 2024