What the UK Can Learn from California's Frontier AI Regulation Battle
Youth-led insights on balancing innovation, safety, and transparency in emerging tech policy debates.
Summary
A youth activist from Encode recounts the story of California’s frontier AI bill, SB 1047, which they co-sponsored. SB 1047 sought to establish guardrails for the most advanced AI models but was ultimately vetoed by California Governor Gavin Newsom after a long and intense political battle. The effort drew in tech giants, Nobel Prize laureates, and even touched on the succession battle for Nancy Pelosi’s seat. Examining both the content of SB 1047 and the political struggle surrounding it offers valuable insights for UK policymakers as they consider a similarly scoped frontier AI bill.
Despite its defeat, SB 1047's narrow focus on frontier AI models and its $100 million training-cost threshold provide a workable regulatory template for the UK. The bill avoided placing undue burdens on smaller companies while empowering governments to maintain oversight of transformative AI technologies.
The tech industry’s response—especially Anthropic's support—demonstrates that the bill’s core ideas strike a workable balance between industry concerns and public safety. Transparency requirements and safety plans faced relatively little resistance, whereas liability provisions, such as the “reasonable care” standard, were more contentious. Industry feared such language could lead to open-ended litigation and dampen investments. Accordingly, any UK liability measures should be carefully framed.
The UK is a world leader in AI governance and hosts leading AI labs such as Anthropic and Google DeepMind. However, most UK tech startups currently integrate or fine-tune existing foundation models rather than train them from scratch. Therefore, any new bill must address the needs of these “downstream” developers (e.g. those in fintech, health tech, and climate tech). A bill modeled on SB 1047, with mandatory transparency for frontier models, could offer verifiable assurances, reduce due diligence costs, and accelerate the safe deployment of AI across the broader economy.
As a young person engaged with AI policy, I watched with hope last year as a political battle unfolded across the Atlantic. A broad coalition of young activists, actors, and employees of AI companies fought for basic transparency measures and the passage of California’s SB 1047. The coalition faced an avalanche of opposition from Silicon Valley—a scenario I recognize as a European, given our own prolonged battles over EU digital regulation and its implementation.
Last year, I co-founded a chapter of Encode at the London School of Economics to help launch a youth movement in Europe advocating for a careful, principled approach to this essential emerging technology, beginning with the UK AI bill. Encode formally co-sponsored SB 1047, a role in California's legislative process through which organisations officially endorse and help advance proposed legislation.
A long-delayed UK AI bill is expected to be introduced in Parliament next year and will focus on frontier AI, according to Technology Secretary Peter Kyle. I support the plan, advanced by senior government officials, to make the voluntary commitments made by AI companies legally binding—including pre-deployment evaluations.
SB 1047 was primarily a light-touch bill focused on transparency and accountability. This approach reflects the objective stated in Labour's manifesto: to focus "on the handful of companies developing the most powerful AI models." Indeed, SB 1047's scope was strictly limited, covering only models trained with more than 10^26 FLOPs at a cost of over $100 million. It is important that UK policymakers examine both the content of SB 1047 and the legislative struggle surrounding it, as the bill aligns closely with Labour's stated goals.
The political battle over SB 1047
Labour’s manifesto correctly targets specific AI models, a position that enjoys broad support from both the scientific community and the public. Nevertheless, political struggles over bills of this nature can be intense.
SB 1047 passed California’s Senate and Assembly by wide margins and garnered support from scientists, the general public, and even some employees of AI labs. While this level of support might be sufficient to pass a bill in the UK, SB 1047 still required approval from Governor Gavin Newsom, who held the power to veto it.
I believe Governor Newsom's primary motivation for vetoing the bill—despite its overwhelming passage through both legislative chambers—was to protect his relationship with major Silicon Valley donors. Key players in the tech industry reportedly engaged in underhand tactics to oppose the bill. Many scientists in Silicon Valley benefit from funding provided by Big Tech firms and venture capitalists, and echoed the industry's opposition to regulating open-source AI—an area that SB 1047 ultimately exempted from its shutdown requirement.
According to reports, many Democratic officials feared the influence of Big Tech due to their reliance on campaign donations. This pressure may have contributed to former Speaker Nancy Pelosi's unusual decision to publicly criticise a state bill and urge the Governor to veto it. A cynical interpretation suggests that her intervention may have been motivated by her daughter's need for tech industry support in a House race against State Senator Scott Wiener, the legislator who introduced SB 1047.
When Governor Newsom eventually vetoed the bill, he claimed he preferred a more comprehensive AI bill. However, given the political challenges such legislation would face, this rationale appeared unconvincing to many observers.
Seeing policymakers prioritise relationships with the tech industry over the risks posed by unregulated AI to infrastructure and markets is precisely what drives many young people like me to become politically active.
Core content and industry response
It is important that more people examine the actual provisions of SB 1047. Despite claims made by industry lobbyists, the bill was remarkably light-touch.
The bill's requirements focused primarily on transparency and accountability for a very small subset of AI models. Specifically, it applied only to models costing over $100 million to train and using over 10^26 floating-point operations. This high threshold meant that virtually no academic researchers, startups, or even established tech companies would be affected. It is an order of magnitude more compute than the threshold defined in the European Union's AI Act for general-purpose AI models with systemic risk (10^25 FLOPs), which similarly sought to regulate only the most advanced models. At the time SB 1047 was under debate in California, no model would have met the threshold. As of April 2025, only Grok-3 has surpassed it.
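For concreteness, here is a minimal sketch of the scope test described above, written as a short Python snippet. The SB 1047 figures are those cited in this article, the EU comparison figure is the AI Act's 10^25 FLOP systemic-risk trigger for general-purpose AI, and the variable and function names are purely illustrative rather than anything drawn from the bill itself.

```python
# Illustrative sketch only: SB 1047's covered-model test, plus a comparison
# with the EU AI Act's systemic-risk compute trigger. Names are made up.

SB1047_FLOP_THRESHOLD = 1e26             # training compute (integer or floating-point ops)
SB1047_COST_THRESHOLD_USD = 100_000_000  # training cost in US dollars
EU_AI_ACT_FLOP_THRESHOLD = 1e25          # EU AI Act presumption of systemic risk for GPAI


def covered_by_sb1047(training_flops: float, training_cost_usd: float) -> bool:
    """A model fell within SB 1047's scope only if it crossed both thresholds."""
    return (training_flops > SB1047_FLOP_THRESHOLD
            and training_cost_usd > SB1047_COST_THRESHOLD_USD)


if __name__ == "__main__":
    # A hypothetical frontier-scale run versus a typical startup fine-tune.
    print(covered_by_sb1047(2e26, 150_000_000))  # True: frontier-scale training run
    print(covered_by_sb1047(5e24, 2_000_000))    # False: far below both thresholds
    # The California compute threshold sits one order of magnitude above the EU's.
    print(SB1047_FLOP_THRESHOLD / EU_AI_ACT_FLOP_THRESHOLD)  # 10.0
```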
The bill would have required companies developing such models to document their safety procedures in a Safety and Security Protocol (SSP), conduct pre-deployment risk assessments, report safety incidents within seventy-two hours, and establish protections for whistleblowers.
For models posing an “unreasonable risk of critical harm”—defined as either mass casualties or incidents resulting in more than $500 million in damages—companies would be expected to exercise "reasonable care" to mitigate such risks.
Crucially, SB 1047 did not create an approval regime or grant government agencies the authority to block model releases. Instead, it established a liability framework wherein adherence to a company’s own documented safety procedures could provide legal protection. This, by any standard, constitutes a minimal regulatory intervention.
SB 1047's main innovation was its model-based regulatory approach, developed specifically to address the general-purpose and inherently dual-use nature of advanced AI. By focusing on powerful models rather than enacting complex sector-specific regulations, the bill provides a useful template for how the United Kingdom might safeguard the public interest without imposing excessive burdens across the entire economy. This strategy—eschewing onerous usage-specific rules—has also been praised by U.S. conservative commentator Dean Ball. Ideally, UK policymakers would regulate not only models based on their size but also AI agents based on their degree of autonomy.
This balanced, light-touch approach offers the industry substantial flexibility while establishing baseline standards for accountability. The United Kingdom would do well to adopt this kind of targeted, tiered model that distinguishes between different levels of AI capability rather than treating all systems equally.
Avoiding blowback: transparency and careful framing of liability
The range of accountability and transparency measures proposed in SB 1047 represents only a first step; much more will be needed to ensure a safe, prosperous, and free future for young people and generations to come.
A critical component of SB 1047 that generated significant opposition was its treatment of liability—an element far more contentious than the transparency provisions that were later added. Much of Silicon Valley operates on a business model enabled by the liability exemption in Section 230 of the U.S. Communications Decency Act of 1996, which shields social media companies from responsibility for user-generated content. The notion that companies could now be held liable for AI model outputs or actions runs counter to how Big Tech has operated for decades.
While SB 1047 did introduce liability clauses, the details are frequently misunderstood. In its final form, the bill required AI companies to exercise "reasonable care." Importantly, SB 1047 also included a liability exemption—similar in function to Section 230—for companies that followed their own documented SSP and acted with reasonable care.
The bill’s combination of high applicability thresholds, reliance on company-defined safety procedures, absence of a government pre-approval requirement for deployment, and a liability framework that incentivised “reasonable care” reflected a deliberately light-touch approach. This structure was intended to minimise regulatory burdens on non-frontier AI developers while still establishing basic safeguards.
Having observed the extent to which tech companies resisted even these modest provisions—provisions that seem entirely reasonable—I am increasingly skeptical of their willingness to innovate responsibly. Their opposition to merely exercising “reasonable care” stands in stark contrast to long-standing standards in industries such as automotive, aviation, and nuclear energy, where expectations have been in place for decades.
The fight over SB 1047 also revealed divisions within the tech industry that could inform UK regulatory strategy. While companies such as OpenAI and Meta strongly opposed the bill, Anthropic eventually expressed support, stating that the bill's benefits likely outweigh its costs. This indicates that it may be possible for a UK AI bill to secure industry backing from the outset, thereby avoiding the intense political backlash that plagued early versions of SB 1047.
Labour's new AI strategy, which has been praised by tech leaders, could provide a foundation for a more constructive relationship between government and industry. It could also support a "third way" regulatory approach—distinct from both the EU AI Act and the U.S. laissez-faire model—that balances innovation with public safety and accountability.
Implications for UK AI policy
To ensure that dual-use AI benefits future generations, the United Kingdom must learn from California's experience and implement robust guardrails so that all members of society can share in the advantages of these transformative technologies.
As a young person, I care about this beyond the near-term benefits we might expect—such as medical breakthroughs or new ways of living and working with AI. For years, young people have lived with the existential threat of climate change and have naturally developed a longer-term mindset when evaluating policy. The catastrophic harms that SB 1047 sought to mitigate feel much more tangible to my generation. These harms include financial system destabilisation, the automation of life-changing decisions such as hiring, and the potential for society to lose control of powerful systems. The latter could manifest through cyberattacks that disable power grids, compromise banking infrastructure, or paralyse healthcare services. The UK government's attention to frontier AI regulation reflects a promising recognition of concerns held not only by scientists and the broader public but particularly by younger generations.
Westminster should ensure that the first draft of the UK AI bill includes a strong emphasis on transparency and a clear articulation of liability frameworks. Doing so may help prevent the level of resistance from UK tech lobbyists that SB 1047 encountered in California.
The UK must also pay particular attention to companies downstream in the AI value chain, as the country hosts significantly more AI application developers than foundation model creators. Put simply, most British AI startups do not develop frontier models themselves but rather build innovative products on top of them. The economic potential of AI largely resides in these applications. Just as the Second Industrial Revolution was powered by the general-purpose technology of electricity—with most economic value derived from its application in sectors such as manufacturing, entertainment, and transportation—so too does AI offer Britain the opportunity to lead in applied innovation. In doing so, the UK might regain some of the economic and geopolitical influence it ceded to the United States and Germany at the end of the nineteenth century, when electricity supplanted the steam engine as the dominant general-purpose technology.
Establishing clear rules and safety guarantees for foundation models will likely accelerate responsible AI adoption across the UK economy. Application developers will benefit from increased legal clarity and the knowledge that the models they rely on have undergone meaningful oversight. I hope the government will actively engage these downstream players to build support for a balanced and effective foundation model oversight regime—one that serves the interests of many British AI startups. Matt Clifford, the government’s preferred AI advisor, could play a particularly constructive role in this process, given his experience incubating many of these application-layer startups through Entrepreneur First.
From Silicon Valley to Westminster: how California's AI regulation could apply on the Thames
SB 1047 would have avoided creating a new regulatory authority and maintained a light-touch approach, in line with UK Prime Minister Keir Starmer's stated preferences for AI regulation. A Frontier Model Board, composed of stakeholders, would have issued guidance on risk prevention and audits—and that would have been the extent of its formal involvement.
One reason to be optimistic about the international reach of UK AI regulation is the so-called "London effect," a term coined by researchers from Oxford and Cambridge to describe how British AI rules can spread beyond the UK. Like the "Brussels effect," this phenomenon leads companies to comply with UK AI rules globally rather than create separate versions of their products. Furthermore, a UK bill could exert soft influence on U.S. policymakers, as a Washington, D.C.-based AI policy advisor confirmed to me.
Of course, California has the advantage of being a leader in the AI race, with more opportunities to impact frontier AI companies directly. However, SB 1047 did not regulate only California-based AI companies; rather, it applied to all models deployed within the California market. This design choice aimed to avoid incentivizing AI companies to relocate to other states.
The United Kingdom should feel confident in implementing a similarly consequential and well-scoped AI bill to complement its AI Opportunities Action Plan and the AI Security Institute (AISI). As an independent nation, the UK can advance its regulatory approach both through standard-setting and through international coordination—such as via the AI Safety Summits and the International Network of AI Safety Institutes.
Specific recommendations for UK AI policy
The United Kingdom has the capacity to implement substantial components of SB 1047, enhanced with required pre-deployment assessment protocols, in accordance with senior government officials' stated intentions. I hope the UK AI Bill will include the following components:
The focus on frontier AI should make the UK's bill enforceable, as only models that cost more than £100 million to train (mirroring SB 1047's $100 million threshold) would be regulated. Training such large models only makes economic sense if they are deployed globally, and the substantial training costs involved render the cost of complying with SSP requirements relatively insignificant.
Another provision the UK could adopt from SB 1047 is its comprehensive whistleblower protections. These measures aimed not only to prohibit retaliation against individuals who disclose non-compliance, but also to prevent developers and their contractors from actively discouraging employees from sharing such critical information. California’s State Senate recently passed SB 53, a bill focused on whistleblower protections; the UK could include similar regulations in its forthcoming AI legislation. Alternatively, an amendment to the Public Interest Disclosure Act 1998 could incorporate AI-related disclosures and designate the AISI as a prescribed body for receiving them.
Audits are standard in many industries, such as finance, pharmaceuticals, and automotive. Third-party AI auditors should be tasked with evaluating company SSP implementation, and these evaluations could be published with redactions. Additionally, developers should be required to report safety incidents—such as the loss of model weights or misuse—to relevant UK authorities within seventy-two hours, following standard cybersecurity practices.
Requiring pre-deployment evaluations could stimulate the growth of the emerging British AI assurance industry. This sector develops solutions for AI security, auditing, and authentication, among other things, and has the potential to grow to £6.53 billion by 2035, according to a report commissioned by the UK government. AI model evaluations can address a range of concerns, from systemic loss-of-control risks to bias. A market-shaping program proposed by the think tank UK Day One could support British AI assurance startups by leveraging both public and private investment.
What about liability? Existing UK commercial liability laws already hold AI companies responsible for critical harm. However, clauses such as those in SB 1047 would allow these companies to argue for exceptions if they demonstrated that they had taken reasonable care through their SSPs.
What’s next?
Overall, SB 1047 contained many provisions that align well with the vision for the UK's forthcoming frontier AI bill. Technology Secretary Peter Kyle's intention to prevent a "Christmas-tree bill" is a promising sign. The measures outlined above would not impose additional burdens on companies already moving in the slipstream of the EU's AI rules but would instead reduce their legal uncertainty.
I vividly remember the consequences of losing control over rapidly emerging scenarios. During the COVID-19 pandemic, we saw how politicians failed to extrapolate exponential trends, even when they were apparent and deeply concerning. Regrettably, I expect this to apply to scaling laws in AI just as it did for epidemiological predictions.
Nevertheless, given the relationship between ever-increasing computing power and advancing AI capabilities, the projected development of artificial intelligence, and the U.S. government’s determination to create artificial general intelligence (AGI), the door remains open to scenarios in which AI systems become transformational. Strong economic incentives are already pushing for the deployment of AI agents across our economy and society—despite the fact that the companies creating them do not fully understand why these systems behave the way they do.
As such, I do not believe it is speculative to want AI policy to account for worst-case scenarios, such as casualties or severe economic disruption. These outcomes are possible. I, along with other youth activists with Encode, hope to one day look back and see that the Labour government took sensible, precautionary steps. Even a UK tech leader has called to “slow down the race toward AGI,” so we remain optimistic that Labour will not privilege the interests of “a handful of companies” over the public good.
For anyone interested in delving deeper into SB 1047 as a case study in AI policy, I will end with a recommendation: a recent documentary titled The AI Bill That Broke Silicon Valley vividly captures the battle over the bill, offering a detailed look at what its producers call “an unprecedented power struggle over humanity’s most transformative technology.”