An expert panel appointed by California Governor Gavin Newsom has recently released a report with recommendations on an effective regulatory approach to support the deployment, use and governance of generative AI.
The report does not argue for or against any particular piece of legislation or regulation. Instead, based on evidence of AI capabilities and drawing analogies with successful regulation in other industries such as energy, the report recommends a detailed set of design principles for AI regulation.
Innovation and safety: two sides of the same coin
As with South Korea’s Basic AI Act, the California report seeks to move beyond a contest between AI innovation and AI safety to develop a set of regulatory principles which integrates and harmonises both: “[j]ust as policy can help reduce certain risks, it can also play a key role in unlocking those benefits”.
Go early!
Early technological design and governance choices by policymakers have the following advantages:
Early policy choices shape the evolution of new technologies: we live today with the consequences of too-slow regulation of the internet. Its initial, highly accessible protocols and rudimentary security framework were crafted in an era when networks served a small, trusted community. These early design decisions enabled the transformative benefits of a global, interconnected network, but also embedded security vulnerabilities as the internet scaled beyond its initial trusted circle.
The absence of targeted legislation does not mean a lack of oversight: a ‘wait and see’ approach is a false option because, in the absence of proactive regulatory frameworks, litigation – an inherently reactive and piecemeal process – becomes the default mechanism for addressing novel technological challenges. The report gives the example of an early cyber security failure in 1988, when a Cornell graduate student, Robert Tappan Morris, released the first computer worm to spread across the internet. Morris had the seemingly harmless goal of counting the number of machines connected to the internet, but despite his attempts to shut the program down, the worm compromised 5-10% of all internet-connected machines within a single day. He was prosecuted under the then-new Computer Fraud and Abuse Act, but the broader issue of internet security was left largely unaddressed.
Policy windows do not remain open indefinitely: policymakers have the opportunity now to develop governance frameworks that anticipate the shift from AI systems operating in isolation to AI systems interacting with one another across networks.
The need for public transparency and clear standards
The report says the history of tobacco companies withholding research on cancer risks shows:
With any consumer-facing technology, policy must recognise the value of affording users considerable latitude to make decisions while also balancing the public interest and consumer safety… When industries with incentives to promote particular products possess privileged information about risks while maintaining opacity about their internal research, the resulting information asymmetry undermines effective regulation and public welfare.
The report draws the following key lessons for AI transparency requirements:
Well-calibrated policies can create a thriving entrepreneurial culture for consumer products: policy facilitates innovation over the long term by creating a common set of expectations – independently defined and monitored – about a product’s minimum capabilities and safety. At the same time, policy must recognise the value of robust markets comprising companies of varying sizes and goals; in technology markets it is especially important to accommodate start-ups and small and medium-sized enterprises.
Transparency is a necessary but insufficient condition for consumers to make informed decisions: the report notes that although greater transparency eventually came to the tobacco industry, suppliers can still distort public understanding despite the available evidence. Independent verification mechanisms are needed to validate industry claims and ensure that evidence is accurately represented.
Lack of transparency on product safety can result in avoidable, costly litigation: as with the lack of targeted regulation, the inevitable outcome of withholding safety information upfront is court-ordered disclosure on a case-by-case basis. The report argues that “an information-rich environment on safety practices would protect developers from safety-related litigation in cases where their information is made publicly available and independently verified”.
Transparency and independent risk assessment are essential to align commercial incentives with public welfare: when industry actors conduct internal research on their technologies’ impacts, a significant information asymmetry can develop between those with privileged access to the data and the broader public. For example, from the late 1970s, researchers at Exxon (later ExxonMobil) successfully modelled future global warming trajectories with 63%-83% accuracy. In public, however, the company made statements that contradicted its internal findings, framing climate projections as uncertain or speculative. Had that information been publicly available, third-party risk assessment mechanisms could have provided decision-makers with the comprehensive evidence needed for more effective policy responses, including economic incentives for energy companies to transition to cleaner fuels.
Simulations, modelling and adversarial testing are useful analytical tools to anticipate future impact: in sectors such as pharmaceuticals, regulation imposes rigorous phased testing before market release. Comparable pre-deployment testing matters for AI because sufficiently capable systems may develop deceptive behaviours to achieve their objectives, including circumventing the oversight mechanisms designed to ensure their safety. For example, in one set of experiments, Claude 3 Opus strategically faked alignment in 12% of test cases when the model believed its responses would be used for training.
Comprehensive evidence can highlight key junctures for governance action: a challenge in regulating dynamic industries like AI is the inherent uncertainty of forecasting the future. In other sectors, however, sophisticated modelling techniques have been developed to build a more complete, nuanced picture of possible futures and so inform forward-looking regulation. For example, the UK government-commissioned Stern Review, which explicitly deployed economic models of uncertainty and risk, estimated that failing to address climate risks could cost at least 5% of global GDP each year over the long term, rising to as much as 20% of GDP once a wider range of risks and impacts is taken into account.
Specific regulatory measures
The report recommends mandatory transparency across the following dimensions:
Where training data is sourced: the 2024 Foundation Model Transparency Index documents an average score of 32% across major foundation model developers for data sourcing transparency. The report does not go on to address the remedies for inappropriately sourced material, such as when a creator’s work has been used without a licence.
Safety practices: the 2024 Foundation Model Transparency Index documents average scores of 47% and 31% for risk-related and mitigation-related transparency, respectively.
Security practices: the key risk is exfiltration (theft) of unreleased model weights.
Pre-deployment testing results: as developer practices vary so much, disclosure should include the time spent on and depth of pre-deployment testing, whether external testing has been undertaken (which the report says is crucial), and the terms of engagement of the external testers.
Downstream impact: the report says that tracking how foundation models are deployed across the economy is a core prerequisite for measuring the impact of AI on society. While generative models can in theory be used for almost any purpose, usage of an individual model may be concentrated in particular use cases. For example, Anthropic’s Economic Index shows Claude usage is concentrated on software development and technical writing tasks. Because developers will not necessarily know how their models are being used, the report recommends that downstream distributors of models, especially open-source models, should have obligations to report on downloads of individual models.
A different approach to transparency for open vs closed models: while the report notes that open-sourcing a model is not the same as providing transparency, developers of open-source models on the whole have a better record of transparency. The report endorses the EU AI Act approach of limiting the more extensive transparency requirements to those open-source models which carry systemic risk (the larger models).
The report recommends that regulators design and implement a mandatory reporting system for material adverse events post-deployment, and that information about incidents be made publicly available. Adverse event reporting systems address a core impediment to targeted AI regulation by enabling regulators and the public to learn about realised harms and unanticipated sources of risk.
For example, the Organisation for Economic Co-operation and Development (OECD) recently developed an incident reporting tool called the AI Incidents Monitor (AIM), which provides a platform for stakeholders to see information about where AI incidents have been reported globally. AIM is powered by a machine learning model that monitors news reports about AI incidents and compiles information about the magnitude of their impacts. The report suggests this model could be coupled with mandatory reporting by industry.
However, the report acknowledges that an adverse event reporting requirement would pose challenges. It is likely to be costly for regulators, who need not only to receive the reports but also to have the skills to understand their implications and evaluate the proposed corrective actions. Mandatory reporting can also result in under-reporting, as companies will be concerned about the civil liability it may trigger; the report suggests considering safe harbours to encourage compliance. Alternatively, a hybrid model might mandate reporting by developers while providing for voluntary reporting by deployers.
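To make concrete what such a regime might capture, the sketch below shows one hypothetical structure for a post-deployment adverse-event report that a regulator, or a monitor such as AIM, could aggregate. The AdverseEventReport class, its field names and the severity scale are illustrative assumptions only; they are not drawn from the report’s recommendations or from the OECD’s actual data model.

```python
# Illustrative only: a hypothetical schema for a post-deployment adverse-event
# report. Field names and the severity scale are assumptions for discussion,
# not the California report's recommendations or the OECD AIM data model.
from dataclasses import dataclass, asdict
from datetime import date
import json


@dataclass
class AdverseEventReport:
    reporter_type: str          # "developer" (mandatory) or "deployer" (voluntary, per the hybrid model)
    model_name: str             # foundation model or downstream application involved
    event_date: date            # when the incident occurred or was detected
    description: str            # what happened and who was affected
    severity: str               # e.g. "low" / "material" / "severe" -- an assumed scale
    corrective_action: str      # proposed remediation for the regulator to evaluate
    publicly_disclosable: bool  # whether the incident summary can be published


def to_public_record(report: AdverseEventReport) -> str:
    """Serialise the report for a public incident register, withholding it if not disclosable."""
    if not report.publicly_disclosable:
        return json.dumps({"status": "withheld pending review"})
    payload = asdict(report)
    payload["event_date"] = report.event_date.isoformat()
    return json.dumps(payload, indent=2)


if __name__ == "__main__":
    example = AdverseEventReport(
        reporter_type="developer",
        model_name="example-foundation-model",  # hypothetical name
        event_date=date(2025, 1, 15),
        description="Model generated prohibited content despite safety filters.",
        severity="material",
        corrective_action="Filter retrained; jailbreak pattern shared with other developers.",
        publicly_disclosable=True,
    )
    print(to_public_record(example))
```

Separating a disclosable flag from the substance of the report is one possible way to reflect the tension the report identifies between public availability of incident information and companies’ liability concerns.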
The report also recommends the following additional specific regulatory measures:
There needs to be a ‘safe harbour’ for bona fide independent safety evaluations. The report notes concerns expressed by independent researchers that some developers “disincentivise safety research by implicitly threatening to ban independent researchers that demonstrate safety flaws in their systems”.
Downstream distributors, intermediaries which fine-tune models, and deployers need to share information and responsibility for verifying safety. For example, if a text-to-image foundation model readily generates synthetic child sexual abuse imagery, upstream data sources should inspect whether real child sexual abuse images are present in their datasets, and downstream AI application providers should inspect whether their applications are susceptible to generating similar imagery.
More horizontal co-operation and information sharing are needed between developers. For example, a technique which successfully jailbreaks one model will probably work on other models, so notice of it should be shared quickly among developers.
There should be whistleblowing protections for developer personnel, as under the EU AI Act.
Conclusion
This report is likely to have global significance, both as a useful roadmap for local regulation and because, as the report realistically acknowledges:
The technologies and ideas that emerge from California – generative artificial intelligence among them – shape the world. As home to many of the leading AI companies and research institutions, California has both the capability and responsibility to help ensure these powerful technologies remain safe so that their benefits to society can be realised. Just as California’s technology leads innovation, its governance can also set a trailblazing example with worldwide impact.

Peter Waters
Consultant