
AI agents are steadily redefining the landscape of P&C underwriting. More than just assistants, these systems leverage advances in large language models (LLMs) to automate tasks, analyze data, and reason in context.
In this article, Antoine Sinton, CSO & co-founder of Continuity, explores the use cases we are testing, the tangible benefits we are already seeing, and the limitations that must be kept in mind to move forward responsibly.
How do you define an AI agent?
An AI agent is more than a tool; it is a system designed to automate complex tasks from simple instructions. At its core lies a large language model (LLM), the “brain” that enables it to understand and reason. This brain is paired with external tools, such as access to the web or to databases via APIs or MCP servers, along with memory that allows it to recall past interactions.
For example, if asked about the weather in a city, an agent does not just retrieve information: it builds the right query, consults the relevant source, and delivers the result clearly. Think of it as “an intern with infinite patience,” capable of working tirelessly on any task, even analyzing a 200-page document to extract key insights, provided the instructions are precise.
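To make that loop concrete, here is a minimal sketch, assuming a generic chat-completion API: an LLM “brain” decides at each step whether to call an external tool or answer directly. The `call_llm` helper and the `get_weather` tool are hypothetical placeholders, not Continuity’s implementation.

```python
import json

# Hypothetical tool registry; get_weather stands in for a real weather API.
TOOLS = {
    "get_weather": lambda city: {"city": city, "forecast": "sunny, 22°C"},
}

SYSTEM = (
    "You are an agent. To use a tool, reply with JSON only: "
    '{"tool": "<name>", "args": {...}}. Otherwise answer the user directly.'
)

def run_agent(question: str, call_llm) -> str:
    """Agent loop: the LLM picks the next step; tool results feed back in."""
    messages = [{"role": "system", "content": SYSTEM},
                {"role": "user", "content": question}]
    for _ in range(5):  # cap the number of reasoning steps
        reply = call_llm(messages)  # call_llm: any chat-completion client
        try:
            action = json.loads(reply)
        except json.JSONDecodeError:
            return reply  # plain text means a final answer
        if not isinstance(action, dict) or "tool" not in action:
            return reply
        result = TOOLS[action["tool"]](**action.get("args", {}))
        messages.append({"role": "assistant", "content": reply})
        messages.append({"role": "user",
                         "content": f"Tool result: {json.dumps(result)}"})
    return "Stopped: too many steps."
```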
What’s the difference between an AI tool and an AI agent?
Traditional AI tools are usually designed for a single task: scoring, classification, or data extraction. They are trained on vast datasets and remain largely fixed in scope. AI agents, on the other hand, are built on general-purpose LLMs. They can understand natural-language instructions and coordinate a series of actions, accessing different tools to complete a task autonomously.
The difference is significant: instead of a static output, the agent delivers an executed process. It is a paradigm shift: AI no longer just analyzes; it acts.
How are agents used at Continuity?
At Continuity, we deploy agents to automate low-value tasks and provide practical support to P&C underwriting teams. A concrete example, already in production, is the handling of broker emails.
Our first agent, “Kevin,” assists at the intake stage. It reads the message, extracts the relevant data, launches internal analyses (for example, property address checks), and drafts a structured response for the underwriter. Integration was fast, handled “by email,” without requiring months of IT development.
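Purely as an illustration, and not Continuity’s actual code, an intake flow like Kevin’s might be sketched as follows; `llm_extract` and `check_address` are hypothetical stand-ins for the extraction and analysis stages described above.

```python
from dataclasses import dataclass, field

@dataclass
class Submission:
    sender: str
    body: str
    extracted: dict = field(default_factory=dict)
    findings: list = field(default_factory=list)

def handle_broker_email(email: Submission, llm_extract, check_address) -> str:
    """Hypothetical intake flow: extract data, run checks, draft a reply."""
    # 1. Read the message and pull out the relevant fields.
    email.extracted = llm_extract(email.body,
                                  fields=["insured", "address", "activity"])
    # 2. Launch internal analyses, e.g. a property address check.
    if address := email.extracted.get("address"):
        email.findings.append(check_address(address))
    # 3. Draft a structured response; the underwriter reviews it before sending.
    missing = [f for f in ("insured", "address", "activity")
               if not email.extracted.get(f)]
    draft = ["Thank you for your submission."]
    if missing:
        draft.append("To proceed, please provide: " + ", ".join(missing) + ".")
    draft.extend(str(finding) for finding in email.findings)
    return "\n".join(draft)
```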
Kevin helps ensure completeness of information and proposes pre-drafted responses. Looking ahead, we envision “teams of agents,” each with its own persona (underwriter, technical management, etc.), collaborating to enhance analysis reliability, essentially forming a “virtual underwriting committee” for every file.
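The committee idea could look something like the sketch below, in which several persona prompts review the same file; the personas and the `call_llm` helper are invented for illustration.

```python
# Illustrative personas; real ones would encode each role's actual mandate.
PERSONAS = {
    "underwriter": "Assess risk quality, terms, and pricing adequacy.",
    "technical management": "Check compliance with underwriting guidelines.",
}

def committee_review(file_summary: str, call_llm) -> dict:
    """Collect one opinion per persona on the same file, like a committee round."""
    return {
        role: call_llm(
            f"You are the {role}. {brief}\n"
            f"File: {file_summary}\n"
            "Give a short opinion and a clear recommendation."
        )
        for role, brief in PERSONAS.items()
    }
```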
Where do AI agents fit in the P&C underwriting process?
Agents primarily relieve underwriters of repetitive tasks and help accelerate workflows. Upstream, they can sort incoming cases, detect missing information, and flag non-compliant requests to avoid wasted effort.
During analysis, they extract and format useful data, align it with underwriting guidelines, and highlight key points or missing elements (“this piece of information is missing”). Downstream, they could support portfolio reviews, identify accounts that need reassessment, or prepare risk surveys.
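As a simplified sketch of the “missing information” check, assuming the required fields would come from the insurer’s own underwriting guidelines rather than the hard-coded list shown here:

```python
# Illustrative only: in practice the required fields would come from the
# insurer's underwriting guidelines, not a hard-coded dictionary.
REQUIRED_FIELDS = {
    "property": ["address", "construction_year", "sum_insured"],
    "liability": ["activity", "turnover", "claims_history"],
}

def missing_elements(line_of_business: str, extracted: dict) -> list:
    """Return the guideline fields not found in the extracted data."""
    required = REQUIRED_FIELDS.get(line_of_business, [])
    return [field for field in required if not extracted.get(field)]

# Example: flag what the underwriter would otherwise have to chase manually.
print(missing_elements("property", {"address": "12 rue de la Paix, Paris"}))
# ['construction_year', 'sum_insured']
```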
What are the concrete benefits for P&C underwriters?
Agents free underwriters to focus on their expertise. They handle repetitive work such as information gathering and document checks, speeding up case processing and reducing errors. They also help “clear the path” on new business by quickly signaling whether a file is viable, saving underwriters from sifting through unnecessary attachments.
Another major advantage: analysis rules can be expressed in natural language, making configuration straightforward even for highly specific cases (such as detecting municipal hazard orders in a particular city). Underwriters can therefore decline quickly, refine proposals, or flag missing documents with far greater efficiency.
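What natural-language configuration might look like in practice: each rule is simply a sentence the LLM evaluates against the file. The rules and the `call_llm` helper below are hypothetical examples, not production configuration.

```python
# Hypothetical rules written by underwriters in plain language and
# evaluated by the LLM against each submission.
RULES = [
    "Flag the file if the property is subject to a municipal hazard order.",
    "Request a claims history if none is attached.",
]

def evaluate_rules(submission_text: str, call_llm) -> list:
    """Ask the model whether each plain-language rule is triggered."""
    verdicts = []
    for rule in RULES:
        prompt = (
            f"Rule: {rule}\n"
            f"Submission: {submission_text}\n"
            "Answer YES or NO, then give a one-sentence justification."
        )
        verdicts.append((rule, call_llm(prompt)))
    return verdicts
```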
What risks and limitations should be considered?
Several challenges remain. Hallucinations, cases where the agent produces inaccurate or incoherent responses, are a major risk, and one that is hard to measure. To limit them, it is critical to guide agents with clear process frameworks and to provide only reliable factual inputs. Agents with memory may also inadvertently reinforce past errors if left unsupervised.
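One common mitigation, sketched here as a crude illustration rather than Continuity’s method, is to accept an extracted value only if it can be traced back to the source document, routing anything else to human review.

```python
def grounded(extracted: dict, source_text: str) -> dict:
    """Keep only values that appear verbatim in the source document.

    A crude guard against hallucinated fields; a real system would add
    fuzzy matching and route rejected fields to human review.
    """
    verified, suspect = {}, {}
    for field, value in extracted.items():
        (verified if str(value) in source_text else suspect)[field] = value
    if suspect:
        print(f"Needs human review: {sorted(suspect)}")
    return verified

# Example: "1930" is in the email; the invented sum insured is not.
email_text = "The building at 12 rue de la Paix was built in 1930."
print(grounded({"year_built": "1930", "sum_insured": "5M EUR"}, email_text))
# {'year_built': '1930'}  (sum_insured flagged for review)
```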
Another concern is dependence on major model providers (e.g., OpenAI, Google), which raises questions of data sovereignty, a critical issue in insurance. To address this, we maintain human oversight, apply strict quality testing, and explore sovereign alternatives (internally hosted, fully controlled models).
We do not believe in “full automation” without human control. Left entirely on its own, an agent might reach only ~50% reliability on a 15-minute task, a risky outcome given the complexity of underwriting.
What’s your vision for AI agents in the next 2–3 years?
Within the next two to three years, AI agents will be embedded in core business processes. They will interact with one another, handle certain tasks independently, and be fully configurable by business experts without technical skills.
They will effectively turn underwriters into “super-underwriters”: regardless of experience level, each underwriter will have years of underwriting expertise built into the agents at their fingertips. AI will serve as a force multiplier, allowing underwriters to focus on expert judgment while ensuring that “the decision always rests with humans.”
How do you address transparency and data sovereignty?
Transparency is essential. Our tools support underwriters but never replace their judgment; human responsibility always comes first.
On data sovereignty, we are particularly cautious. Facing global players from the U.S. and China, our strategy is to “retain expertise” while exploring alternatives to third-party hosted models. Our long-term goal is to self-host certain critical modules, ensuring client confidentiality and reducing dependence on dominant providers.