
SB 1047 Unpacked


SB 1047, formally the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act, is a proposed California state bill that aims to regulate the development and deployment of advanced AI models. The bill targets models above a certain training-compute threshold, specifically those trained using more than 10^26 integer or floating-point operations, which would capture next-generation models developed by major players like OpenAI, Google, Anthropic, and Meta.
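To put the 10^26 figure in perspective, here is a rough back-of-the-envelope sketch in Python. It uses the common ~6 × parameters × tokens heuristic for dense-transformer training FLOPs; the model sizes and token counts are hypothetical illustrations, not figures from any actual model or from the bill itself.

```python
# Rough sketch: compare hypothetical training runs to SB 1047's 10^26 threshold.
# The ~6 * N * D approximation and the example configurations below are
# assumptions for illustration, not disclosed figures.

THRESHOLD_OPS = 1e26  # SB 1047's covered-model training-compute threshold

def estimate_training_flops(params: float, tokens: float) -> float:
    """Approximate dense-transformer training FLOPs with the ~6 * N * D heuristic."""
    return 6.0 * params * tokens

hypothetical_models = {
    "70B params, 15T tokens": (70e9, 15e12),
    "400B params, 30T tokens": (400e9, 30e12),
    "1T params, 100T tokens": (1e12, 100e12),
}

for name, (params, tokens) in hypothetical_models.items():
    flops = estimate_training_flops(params, tokens)
    side = "above" if flops > THRESHOLD_OPS else "below"
    print(f"{name}: ~{flops:.1e} training FLOPs -> {side} the 1e26 threshold")
```

Under this heuristic, runs at roughly today's scales sit below the line, while substantially larger next-generation runs could cross it.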


If enacted, SB 1047 would establish stringent safety standards for AI developers, requiring them to conduct safety assessments, engage in third-party model testing, and obtain certification to ensure their models do not have “hazardous capabilities” or pose significant risks. Developers would also be required to implement a “kill switch” to shut down problematic models if necessary and report any AI safety incidents to a newly established Frontier Model Division within the California Department of Technology.
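The bill treats the "kill switch" as a policy obligation rather than a technical specification. Purely as an illustrative assumption of how a team might wire such a control into a serving path (none of these names or mechanisms come from the bill), here is a minimal Python sketch:

```python
# Hypothetical illustration only: one way a team might expose a "full shutdown"
# control over model serving. SB 1047 does not prescribe an implementation;
# the names and mechanism here are assumptions for the sketch.

import threading

class ShutdownController:
    """Process-wide switch that operators can flip to halt all model inference."""

    def __init__(self) -> None:
        self._halted = threading.Event()

    def full_shutdown(self, reason: str) -> None:
        # In practice this would also stop serving jobs, revoke deployment
        # credentials, and kick off an incident-reporting workflow.
        print(f"Full shutdown triggered: {reason}")
        self._halted.set()

    def is_halted(self) -> bool:
        return self._halted.is_set()

controller = ShutdownController()

def serve_request(prompt: str) -> str:
    if controller.is_halted():
        raise RuntimeError("Model serving is disabled by a full shutdown order.")
    return f"(model output for: {prompt!r})"  # placeholder for real inference

print(serve_request("hello"))
controller.full_shutdown("safety incident under investigation")
try:
    serve_request("hello again")
except RuntimeError as err:
    print(err)
```

In a real deployment such a switch would live in infrastructure (orchestration, access control) rather than a single process flag; the sketch only illustrates the idea of a revocable gate in front of inference.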

The Frontier Model Division would oversee compliance, provide guidance to AI teams, and work to ensure that AI systems do not cause critical harm. Developers would need to disclose their compliance measures and certify compliance to the Attorney General.

Analysis

Proponents argue that the bill strikes a balance between promoting safety and allowing innovation to continue. It asks developers of large models to perform basic safety evaluations. The bill’s supporters, including some AI experts and ethicists, believe establishing clear safety standards and accountability mechanisms is crucial as AI technology advances.


However, I have serious concerns that the downsides of SB 1047 outweigh the potential benefits. The bill’s broad definition of “hazardous capability” and onerous liability provisions could stifle innovation and discourage the development and sharing of open-source AI models. Requiring developers to essentially guarantee their models won’t be misused by third parties seems unworkable, especially for open-source projects. This could lead to a mass exodus of AI talent and companies from California.

The bill, though well-intentioned, falters by targeting AI technologies themselves instead of their specific applications. Imposing sweeping compliance and safety protocols on all AI models would disproportionately burden smaller startups and academic researchers, potentially stifling innovation. Rather than empowering a new regulatory body to define standards without clear legislative guidance, a more prudent approach would leverage existing legal frameworks. Just as regulations governing steel differ based on its application in bridges versus automobiles, AI governance should be tailored to the context and risks of each use case. Updating existing laws in sectors like healthcare, transportation, and copyright to account for AI’s impact would be more effective than imposing a monolithic and inflexible framework that could hinder the responsible development of beneficial AI.

Given these issues, I believe California would be wise to proceed more cautiously. As home to Silicon Valley and many of the world’s leading AI companies and open-source projects, the state is in a highly influential position. Rushing to pass flawed legislation could do more harm than good at this stage. A more targeted, use case-specific approach developed in close consultation with technical experts may be preferable.

Then again, given California's prominence in the tech industry, SB 1047 could end up shaping AI regulations far beyond the state's borders. This raises the stakes and may explain why California legislators are eager to make their mark on this critical issue. But wielding that influence responsibly requires extremely careful deliberation. The unintended consequences of misguided AI regulations could be severe.


If you enjoyed this post, please support our work by encouraging your friends and colleagues to subscribe to our newsletter.


SB 1047 Cheat Sheet

[based on an online version pulled on 2024-06-08]

Table of contents

Pre-Training

Limited Duty Exemption
Cybersecurity Protections
Full Shutdown Capability
Adherence to Covered Guidance
Safety and Security Protocol
Protocol Implementation and Enforcement
Prohibition of Training with Unreasonable Risk
Additional Risk Mitigation Measures
Capability Testing After Training
Annual Certification Report


Post-Training

Safety and Security
Limited Duty Exemption Determination
Reasonable Safeguards Implementation
Periodic Reevaluation
Reporting and Certification
Annual Compliance Certification
AI Safety Incident Reporting
Key Takeaways for AI Teams


Requirements for Computing Cluster Operators

You operate a “Computing Cluster”: in the bill, a set of interconnected machines meeting specified networking and compute-capacity thresholds. If so, the following obligations apply (an illustrative record-keeping sketch follows the list):

Customer Identification and Verification
Customer Intent Assessment
Annual Validation and Review
Record Maintenance and Disclosure
Emergency Shutdown Capability
Retention of Access Records
Public Pricing Schedule
Permitted Preferential Access
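Purely as an illustration of the record-keeping these obligations imply, here is a minimal Python sketch of the kind of customer record an operator might maintain. The field names, risk flag, and logging approach are assumptions for the example, not language from the bill.

```python
# Hypothetical sketch of a cluster operator's customer record, covering the
# identification, intent-assessment, and record-retention duties listed above.
# All field names and policies here are assumptions, not bill text.

from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class ClusterCustomerRecord:
    customer_name: str
    verified_identity: bool                 # identity checked against documentation
    stated_intent: str                      # what the customer says the compute is for
    intent_assessed_high_risk: bool         # operator's own assessment of that intent
    access_log: list[tuple[datetime, str]] = field(default_factory=list)

    def log_access(self, action: str) -> None:
        """Append a timestamped entry so access records can be retained and disclosed."""
        self.access_log.append((datetime.utcnow(), action))

record = ClusterCustomerRecord(
    customer_name="Example Lab",
    verified_identity=True,
    stated_intent="fine-tuning a language model on internal documents",
    intent_assessed_high_risk=False,
)
record.log_access("allocated 512 accelerators")
print(record)
```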


Other Requirements

I. Reporting and Certification
Annual Certification of Compliance
Artificial Intelligence Safety Incident Reporting
II. Operational Requirements
Capability Testing Post-Training
Reasonable Safeguards Implementation
III. Ethical and Legal Compliance
Pricing and Access Requirements
Employee Whistleblower Protections
IV. Oversight and Enforcement
Duties of Frontier Model Division
Enforcement and Penalties
V. Establishment of CalCompute
CalCompute Public Cloud Computing Cluster

