1 in 3 Insurance Agencies Plan to Implement AI: Will It Deliver the Outcomes They Expect?

Insurance may be one of the oldest industries in the world, but with costs climbing and catastrophic losses reaching record levels, it’s no surprise that insurers are embracing one of the newest technologies – artificial intelligence. According to a 2024 Boston Consulting Group survey, insurance leads in AI adoption, outpacing virtually every other industry except tech. Claims teams across the country are using AI to speed up administrative tasks, preserve accuracy, and automate routine workloads.

Insurance agents are the latest group to get on board: 1 in every 3 insurance agencies says it is likely to adopt AI for its business in the next few years, and 77% of insurance companies report being at some stage of adopting the technology. These agencies hope to use AI to find new clients, quote new business, and save time. But will implementing AI deliver the benefits insurance agencies expect, and what factors should they keep in mind?

The Risk of Rushing Your AI Adoption

Although many insurance agencies jump straight to implementing AI for claims, it’s imperative not to rush the process. On an individual level, generative AI tools like ChatGPT or Microsoft Copilot can deliver time-saving results in an afternoon. Real, scalable business change, though, takes planning. Compliance standards, regulatory requirements, and legal risks are still evolving, especially for a technology as new as AI.

AI platforms are fallible, and discrimination is one of the main concerns. Cigna Health is currently facing a potential class-action lawsuit over its automated, AI-based algorithm: plaintiffs argue that the insurer’s PxDx algorithm rejected plan members’ claims automatically, without review by a physician. A US District Court in California has allowed the claim to proceed.

Regulatory standards (and risks) like these are good reason for insurance agents to exercise more caution when implementing AI. Claims summarization platforms, document automation, decision support, and AI copilots are all popular ways to build domain-specific knowledge, human expertise, and compliance frameworks into your AI strategy – without sacrificing the human touch that regulatory requirements demand.

Human-in-the-Loop Is the Missing Piece in AI Adoption

AI solutions need humans at the helm – or, more popularly, “in the loop” – to remain compliant, reliable, and defensible. Human expertise is a differentiator in claims, and especially for insurance agencies, which are often the customer’s first point of contact with the firm.

AI relies on massive amounts of data, which is how it can handle administrative tasks in a fraction of the time a human needs. These datasets aren’t perfect, however, and they can introduce new risk. In Huskey v. State Farm, the court denied State Farm’s motion to dismiss claims that algorithmic decision-making resulted in less coverage for Black homeowners. In The Estate of Gene B. Lokken et al. v. UnitedHealth Group, plaintiffs allege that the insurer improperly denied claims based on AI. Automated decisions like these cannot be accepted at face value, which is exactly why human-in-the-loop oversight is essential: it ensures every claim outcome is reviewed fairly and without bias.

The Best Path for Compliance

The dangers of using AI come largely from adopting generic, fully automated, or unsupervised tools – so it’s best not to rush. Platforms that are custom-built for claims, insurance agencies, or risk management have accuracy and human validation at their core. Choosing a claims documentation platform that combines AI with SME-trained models and expert-in-the-loop validation helps keep your agency above board and reduces your overall risk.

Legal attempts to regulate AI largely focus on human intervention, and they are likely to continue. Four states – California, Connecticut, Illinois, and Rhode Island – are considering bills that would prevent insurers from adjudicating claims without human oversight. The EU Artificial Intelligence Act and Canada’s OSFI Guideline E-23 both provide regulatory guidance on the use of enterprise AI. SOC 2 and other security and privacy standards are also critical, especially for insurance agencies that handle sensitive medical, legal, and claims records.

With SOC 2 certification and a proven track record in insurance and legal defense, claims documentation platforms like Wisedocs enable organizations to embrace automation without sacrificing security, compliance, or trust. Purpose-built for medical record review and claims documentation, Wisedocs aligns with the needs of insurers, TPAs, and state funds – helping teams work faster, make defensible decisions, and move confidently into the future of claims. Book a call today to learn how Wisedocs can streamline your medical record reviews, compliantly and with experts in the loop.

December 8, 2025

Kristen Campbell

Author

Kristen is the co-founder and Director of Content at Skeleton Krew, a B2B marketing agency focused on growth in tech, software, and startups. She has written for a wide variety of companies in the fields of healthcare, banking, and technology. In her spare time, she enjoys writing stories, reading stories, and going on long walks (to think about her stories).

