On June 17, California released a comprehensive policy report to guide state policymakers as they consider new laws and regulations governing artificial intelligence (AI). Commissioned by Governor Newsom and authored by leading experts from Stanford, Berkeley, Harvard, Princeton, and Georgetown, the analysis provides a roadmap for lawmakers worldwide navigating the challenges of AI regulation.
Report Highlights
The report advocates for "evidence-based" policymaking, acknowledging that lawmakers must craft effective regulations despite incomplete information, given the rapid evolution of AI technology. It sets out eight foundational principles for AI governance, including:
- Balancing benefits and risks by harnessing AI's transformative potential while implementing safeguards against societal harms
- Engaging in proactive risk assessments even in the early stages of AI development and fostering industry-government collaboration
- Addressing gaps in transparency to promote accountability and public trust
- Implementing protections for whistleblowers, enacting safe harbors for third-party system evaluations, and fostering systematic information sharing
- Establishing post-deployment monitoring with robust adverse event reporting systems to track real-world impacts and guide responses
- Creating carefully designed thresholds aligned with governance objectives that are adaptable to technological evolution
Since Governor Newsom vetoed SB 1047, generative AI models have advanced significantly. Current AI systems like Claude 3.7, GPT-4, and Gemini 2.0 demonstrate impressive capabilities in programming, content generation, multilingual conversation, and solving graduate-level problems, yet still exhibit significant limitations in unfamiliar contexts, long-term planning, and factual accuracy. The experts identify three primary risk categories: malicious use (deepfakes, disinformation, cyberattacks, and bioweapon development), malfunctions (reliability failures, algorithmic bias, and systems operating beyond human control), and systemic risks (labor market disruption, market concentration, and environmental impacts).
Current AI developers maintain high levels of opacity, with the 2024 Foundation Model Transparency Index showing that major companies average only 34% transparency on training data, 31% on risk mitigation, and 15% on downstream impacts. The report identifies five critical areas requiring greater disclosure: data acquisition practices, safety practices, security practices, pre-deployment testing, and downstream impact analysis. Beyond transparency, the framework proposes third-party risk assessments backed by legal "safe harbor" protections for independent safety evaluators, robust whistleblower protections, and responsible disclosure mechanisms for vulnerabilities.
In addition, the report outlines two essential implementation components. First, adverse event reporting systems modeled after successful monitoring in healthcare and transportation would systematically collect AI-related incident information from developers (mandatory) and users (voluntary), enabling the identification of unanticipated risks and improving coordination between agencies and stakeholders. Second, smart regulatory scoping would use carefully designed thresholds to determine which entities face specific obligations. The report identifies four threshold approaches: developer-level properties (company size), cost-level properties (training compute), model-level properties (benchmark performance), and impact-level properties (user numbers, deployment scale). While current thresholds rely primarily on computational training costs, these are imperfect proxies that vary dramatically across model types and correlate poorly with actual risk.
Business Implications and Recommended Actions
Although the report does not prescribe a specific framework for AI laws and regulations, it identifies key themes that businesses can consider as they develop and enhance their own AI governance programs. The report signals that policymakers will likely seek to:
- Address transparency gaps in AI systems
- Develop mechanisms for systematic oversight of AI development
- Create robust post-deployment monitoring requirements
Early adoption of transparency and safety practices can help shape industry standards, reduce litigation exposure, and build public trust. Proactive preparation will also better position businesses for compliance as broader regulatory frameworks emerge. To get a head start, businesses can:
- Establish comprehensive documentation and monitoring systems that track AI systems from development through deployment, covering data sources, safety measures, performance metrics, and real-world impacts.
- Develop safety infrastructure that includes internal incident tracking, user feedback systems, third-party evaluation partnerships, and legal frameworks that facilitate safety reporting and risk identification.
- Proactively position themselves in the evolving regulatory landscape by using transparency as a competitive advantage, building flexible compliance systems, engaging in industry standard-setting, and understanding which regulatory thresholds may apply to their models.
The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.