Artificial Intelligence (AI) use in enterprises has risen alongside AI tools designed for personal use. The same professionals who troubleshoot Excel models, summarize medical documents, and organize diagnoses at work are using these tools at home to create party invites and optimize fantasy football lineups. It should come as no surprise, then, that employees at enterprise companies are picking up AI tools at work – just not the ones their companies provided.
“Shadow AI” is the term for the grassroots adoption of AI tools by individual knowledge workers. These workers find the productivity gains from off-the-shelf AI tools like ChatGPT or Claude (for writing copy), Cursor (for code), or DALL-E (for graphics) so valuable that they pay for them themselves. The benefit of doing so is simple: no red tape from IT, and no clearance time from legal or finance.
Workers are paying out of pocket for efficiency, but what does this mean for your data?
Employees paying out of pocket for AI is a fundamental shift – not only are workers allowed to bring their own tools to the job, they prefer to. Under the traditional ‘employee versus contractor’ tests, the company provided the equipment: the coffee, the computers, the monitors. Today, workers are configuring AI tools on their own time – a clear signal of how far we’ve come, from companies supplying the basics to employees independently driving innovation through AI.
Shadow AI centres on tools that enhance workers’ productivity in ways their employer doesn’t (or can’t) provide. Yet according to the 2025 Wisedocs Claims Survey Report, 58% of respondents (claims professionals) either did not use AI or were not sure whether they did.
Workers will always default to what is easy, effective, and useful – and by the time an enterprise builds out its AI strategy, employees are often already familiar with the tools, and possibly already using them with company data. This is what law firm Hill Dickinson discovered when it found staff accessing AI tools like ChatGPT, Grammarly, and DeepSeek nearly 85,000 times in a single week. Following this, the firm highlighted its AI policy, including guidance that prohibits uploading client information and requires staff to manually verify any AI-generated response.
When it comes to AI adoption, there is no quick fix. Rolling out a new technology company-wide, such as AI for worker efficiency, takes time, and as headache-inducing as the red tape may seem to employees, enterprises must stay protected. Not every LLM can (or should) have access to company data; granting that access can expose your organization to serious compliance and security risk.
So where does one start? AI’s productivity benefits are clear, but success depends on aligning tools with people, processes, and outcomes. With 58% of workers not using AI, company-wide adoption may pose a challenge, as hesitation about the technology is still evident.
Wisedocs’ Survey Report found that trust in AI-generated output rises to 60% when expert reviewers are added to the process. Putting ‘experts-in-the-loop’ in this way helps maximize speed and minimize review time without sacrificing the quality of each claim.
In our survey report, 33% of respondents already using AI said they trust its output, compared with 57% of non-users who said they would ‘not at all’ trust the tool. With so much of the workforce already using AI, at least a third are comfortable with the output, and that share nearly doubles when expert reviewers are added as humans in the loop.
For those who have no interest in AI, the cited challenges include accuracy, compliance risk, and training and integration with existing systems; each of these can be mitigated, at least in part, by adding expert oversight to the claim. Human-in-the-loop processes are a trust maximizer – and we suspect this human validation factor can have a powerful effect on technology uptake.
As AI adoption continues to scale across the claims landscape, organizations must respond not with restrictions but with intentional design. Teams are already maximizing their personal workdays with AI, which means they’re trained, ready, and able to participate. The next step is deciding whether to dedicate the resources to build an in-house solution or to invest in a trusted, validated platform. That’s exactly the conversation our Buyer’s Guide was created to support: breaking down the build-versus-buy decision and helping claims leaders choose the right path forward.