In 2024, California Senator Scott Wiener introduced Bill SB-1047. Titled the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act, SB-1047 aimed to establish the first regulatory oversight of, and safety requirements for, AI. Although the bill was ultimately vetoed by Governor Gavin Newsom in September of 2024, SB-1047 sailed through both chambers of California’s legislature and attracted high-profile supporters like Nobel laureate Geoffrey Hinton and Elon Musk.
California’s AI safety bill signals a turning point for AI. For insurers, TPAs, and claims organizations, SB-1047 could shape the future of AI systems and workflows. Here’s what you need to know:
What differentiated SB-1047 from previous legislation was its intent to regulate only the most powerful AI tools. Focused on very large, general-purpose AI models (hence the term “Frontier Models”), California’s AI bill would have imposed safety and transparency standards on companies like OpenAI, Google DeepMind, and Anthropic.
Chapter 22.6 of Senate Bill 1047 outlines the models covered by the legislation, and the compliance requirements these “covered models” are expected to follow. A “covered model” is either:

- an AI model trained using more than 10^26 integer or floating-point operations of computing power, at a training cost exceeding $100 million; or
- a model created by fine-tuning a covered model using at least 3 × 10^25 integer or floating-point operations, at a cost exceeding $10 million.
The “10^26 integer or floating-point operations” benchmark was set with existing tools in mind. Models like GPT-4 and Claude 3, for example, are estimated to have used between 10^25 and 10^26 floating-point operations to train. Structured this way, SB-1047 deliberately left startups and typical business users out of scope, while future-proofing its inclusion criteria and requirements.
Integer or “floating-point operations” (FLOPs) are the basic arithmetic calculations (multiplications and additions) a model performs, counted here as the total used to train it. The “floating” part means the decimal point’s position can shift, letting one format express very large, very small, and fractional values alike. AI models almost always train in floating point because they require that precision, but the “floating” aspect means more computing overhead, since extra memory is needed to track decimal points, exponents, and special values like infinity.
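For the curious, here is a minimal Python sketch of what “floating” means in practice: it unpacks a 32-bit float into the sign, exponent, and mantissa fields defined by the IEEE 754 standard. The example value 0.15625 is an arbitrary choice, picked because its bit pattern is easy to read.

```python
import struct

# Pack the value 0.15625 as a 32-bit float and inspect its raw bits.
# A float32 spends 1 bit on sign, 8 bits on the exponent, and 23 bits
# on the mantissa; that bookkeeping is the overhead integer math avoids.
bits = struct.unpack(">I", struct.pack(">f", 0.15625))[0]

sign     = bits >> 31            # 1 bit: 0 = positive
exponent = (bits >> 23) & 0xFF   # 8 bits, stored with a bias of 127
mantissa = bits & 0x7FFFFF       # 23 bits of fraction

print(f"sign={sign} exponent={exponent - 127} mantissa={mantissa:#025b}")
# 0.15625 = 1.25 * 2^-3, so this prints sign=0 exponent=-3
```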
Because it counts both integer and floating-point operations, the language in SB-1047 is specific enough to close the loopholes developers of large models might use to escape the “covered model” category (such as converting floating-point weights to integers to shrink the operation count). It also builds in room for open-source or fine-tuned models from smaller developers to exist outside of SB-1047, and for computing costs to drop over time. In short, the bill was robust enough to last, and to make a major impact on the future of industry-use AI.
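To make the 10^26 threshold concrete, here is a back-of-the-envelope sketch in Python. It uses the widely cited approximation that training a transformer takes roughly 6 FLOPs per parameter per training token; the model sizes below are illustrative assumptions, not disclosed figures for any real system.

```python
# Rough check of SB-1047's compute threshold using the common
# ~6 * parameters * training-tokens estimate for transformer training.
# Raw compute is only one prong of the definition; the bill also
# applied the dollar-cost thresholds described above.

COVERED_MODEL_THRESHOLD = 1e26  # SB-1047's integer/floating-point op cutoff

def estimated_training_flops(parameters: float, tokens: float) -> float:
    """Approximate total training compute: ~6 FLOPs per parameter per token."""
    return 6 * parameters * tokens

# Hypothetical models (assumed sizes): frontier-scale vs. startup-scale.
for name, params, tokens in [
    ("frontier-scale model", 1.8e12, 13e12),
    ("startup-scale model",  7e9,    2e12),
]:
    flops = estimated_training_flops(params, tokens)
    covered = flops > COVERED_MODEL_THRESHOLD
    print(f"{name}: ~{flops:.1e} FLOPs -> covered model? {covered}")
```

Run as written, the frontier-scale example lands around 1.4 × 10^26 FLOPs and crosses the line, while the startup-scale example comes in near 8 × 10^22, several orders of magnitude below it, which illustrates how the benchmark singles out only the very largest training runs.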
In addition to language that guaranteed the law would apply only to the largest AI developers, SB-1047 aimed to hold developers legally accountable for the impact of their AI. Bill SB-1047 introduced a mandatory “kill switch” and annual audits of safety compliance, as well as:

- a written safety and security protocol before training a covered model;
- whistleblower protections for employees who raise safety concerns;
- mandatory reporting of AI safety incidents to the California Attorney General; and
- civil penalties enforceable by the Attorney General.
Developers of the applicable “frontier models” may be the ones required to comply with these regulations, but the implications of the bill are industry-wide.
Although it was vetoed before becoming law, SB 1047 demonstrated lawmakers’ support for regulation (prior to Newsom’s veto, SB 1047 passed the State Assembly 48-16 and the Senate 30-9). The bill also built out a framework for the transparency, accountability, and safety metrics that AI vendors, customers, and organizations must prioritize – especially in regulated sectors like insurance and claims.
Revived or replicated in future legislation, these provisions would have implications across the claims ecosystem. AI vendors would bear more responsibility (and legal risk) for possible harm, and would need to be transparent about how models are trained, tested, and monitored over time. Insurers, TPAs, and claims organizations would be expected to do due diligence on a model before deploying it on claims data. Claims teams would also need to understand how AI is used in decision-making on a claim, and what happens if a claim is mistakenly denied.
Even with the veto, the regulatory and public-sector scrutiny SB 1047 attracted means that responsible AI is no longer optional. A recent policy update from the House Energy and Commerce Committee contained a proposal to ban states from enforcing AI laws for the next decade – which could ultimately be a sign that Congress wants to legislate on the issue itself. Trustworthy, transparent use of AI is essential for maintaining trust across the claims ecosystem, and for future-proofing your workflows with AI.