SB 1047, also known as the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act, is a proposed California bill that aims to regulate the development and deployment of advanced AI models in the state. The bill targets AI systems above a certain computing power threshold, specifically models trained using more than 10^26 integer or floating-point operations, which would capture next-generation models developed by major players like OpenAI, Google, Anthropic, and Meta.
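For a rough sense of where that threshold falls, a widely used rule of thumb estimates the training compute of a dense transformer as roughly 6 × parameters × training tokens. The sketch below applies that estimate; the model sizes and token counts are illustrative assumptions, not figures from the bill or from any named lab.

```python
# Rough check of whether a training run would cross SB 1047's 10^26-operation
# threshold, using the common ~6 * params * tokens estimate of dense
# transformer training compute. The model sizes and token counts below are
# illustrative assumptions, not figures from the bill.

THRESHOLD_OPS = 1e26

def training_ops(params: float, tokens: float) -> float:
    """Approximate total training operations for a dense transformer."""
    return 6 * params * tokens

runs = {
    "70B params, 15T tokens": training_ops(70e9, 15e12),    # ~6.3e24 ops
    "400B params, 30T tokens": training_ops(400e9, 30e12),  # ~7.2e25 ops
    "1T params, 50T tokens": training_ops(1e12, 50e12),     # ~3.0e26 ops
}

for run, ops in runs.items():
    status = "covered" if ops >= THRESHOLD_OPS else "below threshold"
    print(f"{run}: {ops:.1e} ops -> {status}")
```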

If enacted, SB 1047 would establish stringent safety standards for AI developers, requiring them to conduct safety assessments, engage in third-party model testing, and obtain certification to ensure their models do not have “hazardous capabilities” or pose significant risks. Developers would also be required to implement a “kill switch” to shut down problematic models if necessary and report any AI safety incidents to a newly established Frontier Model Division within the California Department of Technology.
The Frontier Model Division would be responsible for overseeing compliance efforts, providing guidance to AI teams, and ensuring that AI systems do not cause critical harm. Developers would need to disclose their compliance efforts and certify compliance with the Attorney General.
Analysis
Proponents argue that the bill strikes a balance between promoting safety and allowing innovation to continue. It asks developers of large models to perform basic safety evaluations. The bill’s supporters, including some AI experts and ethicists, believe establishing clear safety standards and accountability mechanisms is crucial as AI technology advances.

However, I have serious concerns that the downsides of SB 1047 outweigh the potential benefits. The bill’s broad definition of “hazardous capability” and onerous liability provisions could stifle innovation and discourage the development and sharing of open-source AI models. Requiring developers to essentially guarantee their models won’t be misused by third parties seems unworkable, especially for open-source projects. This could lead to a mass exodus of AI talent and companies from California.
The bill, though well-intentioned, falters by targeting AI technologies themselves instead of their specific applications. Imposing sweeping compliance and safety protocols on all AI models would disproportionately burden smaller startups and academic researchers, potentially stifling innovation. Rather than empowering a new regulatory body to define standards without clear legislative guidance, a more prudent approach would leverage existing legal frameworks. Just as regulations governing steel differ based on its application in bridges versus automobiles, AI governance should be tailored to the context and risks of each use case. Updating existing laws in sectors like healthcare, transportation, and copyright to account for AI’s impact would be more effective than imposing a monolithic and inflexible framework that could hinder the responsible development of beneficial AI.
Given these issues, I believe California would be wise to proceed more cautiously. As home to Silicon Valley and many of the world’s leading AI companies and open-source projects, the state is in a highly influential position. Rushing to pass flawed legislation could do more harm than good at this stage. A more targeted, use case-specific approach developed in close consultation with technical experts may be preferable.
Then again, given California’s prominence in the tech industry, SB 1047 could end up shaping AI regulations far beyond the state’s borders. This raises the stakes and may explain why California legislators are eager to make their mark on this critical issue. But wielding that influence responsibly requires extremely careful deliberation: the unintended consequences of misguided AI regulations could be severe.

SB 1047 Cheat Sheet
[based on an online version pulled on 2024-06-08]
Pre-Training
Limited Duty Exemption
- Reasonable Assurance of No Hazardous Capability: Before initiating training of a nonderivative covered model, the developer may determine if the model qualifies for a limited duty exemption. This exemption applies if the developer can reasonably provide assurance that the model does not have a hazardous capability and will not come close to possessing one when accounting for safety margins and possible post-training modifications.
- Model Type: Nonderivative Covered Models
Cybersecurity Protections
- Administrative, Technical, and Physical Cybersecurity Protections: Implement comprehensive cybersecurity measures to prevent unauthorized access, misuse, or unsafe modification of the covered model. This includes protection against theft, misappropriation, malicious use, or inadvertent release or escape of the model weights, especially from advanced persistent threats or sophisticated actors.
- Model Type: Nonderivative Covered Models without Limited Duty Exemption
Full Shutdown Capability
- Develop the capability to promptly enact a full shutdown of the covered model to mitigate potential risks until the model qualifies for a limited duty exemption (one hypothetical implementation shape is sketched after this item).
- Model Type: Nonderivative Covered Models without Limited Duty Exemption
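The bill mandates the shutdown capability but does not prescribe a design. As one hypothetical shape, a serving layer could gate every inference call on a centrally controlled kill switch; the class and function names below are assumptions for illustration, not terms from the bill.

```python
import threading

class KillSwitch:
    """Hypothetical central kill switch gating all model serving.
    SB 1047 mandates the capability, not any particular design."""

    def __init__(self) -> None:
        self._engaged = threading.Event()

    def engage(self, reason: str) -> None:
        # A real deployment would also halt training jobs, revoke API keys,
        # and trigger the incident-reporting process described later.
        print(f"FULL SHUTDOWN engaged: {reason}")
        self._engaged.set()

    @property
    def engaged(self) -> bool:
        return self._engaged.is_set()

switch = KillSwitch()

def serve_request(prompt: str) -> str:
    """Every inference path checks the switch before doing any work."""
    if switch.engaged:
        raise RuntimeError("Model is shut down; serving is disabled.")
    return f"(model output for {prompt!r})"  # placeholder for real inference

print(serve_request("hello"))
switch.engage("capability test revealed unsafe behavior")
# serve_request("hello") would now raise RuntimeError.
```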
Adherence to Covered Guidance
- Follow relevant guidance from the National Institute of Standards and Technology (NIST), the California Frontier Model Division, and industry best practices for safety, precautions, and testing of similar AI models.
- Model Type: Nonderivative Covered Models without Limited Duty Exemption
Safety and Security Protocol
- Establish a detailed and separate safety and security protocol providing assurance against the development of hazardous capabilities. The protocol must state compliance requirements, identify specific tests and results needed to exclude hazardous capabilities, and describe procedures for fine-tuning and post-training modifications. It should also detail the implementation of cybersecurity requirements, define conditions for a full shutdown, and outline the process for updating the protocol itself (a hypothetical structured form is sketched after this item).
- Model Type: Nonderivative Covered Models without Limited Duty Exemption
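The bill enumerates what the protocol must contain but not its format. A minimal sketch of one way to keep those required elements as a structured document follows; all field names and sample values are assumptions, not anything the bill or the Frontier Model Division prescribes.

```python
from dataclasses import dataclass

@dataclass
class SafetyAndSecurityProtocol:
    """Hypothetical structured form of the protocol contents SB 1047
    enumerates; the bill mandates the substance, not this schema."""
    compliance_requirements: list[str]
    hazard_exclusion_tests: dict[str, str]  # test name -> result required
    post_training_modification_procedures: list[str]
    cybersecurity_controls: list[str]
    full_shutdown_conditions: list[str]
    protocol_update_process: str

# Illustrative instance with placeholder content.
protocol = SafetyAndSecurityProtocol(
    compliance_requirements=["annual certification", "incident reporting"],
    hazard_exclusion_tests={"bio-risk eval": "no uplift over public baselines"},
    post_training_modification_procedures=["re-test after any fine-tune"],
    cybersecurity_controls=["weights encrypted at rest", "two-party release"],
    full_shutdown_conditions=["confirmed weight exfiltration"],
    protocol_update_process="review quarterly; re-certify after changes",
)
```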
Protocol Implementation and Enforcement
- Designate senior personnel to oversee the implementation of the safety and security protocol by employees and contractors. Monitor and report on compliance, and conduct regular audits, potentially including third-party audits.
- Model Type: Nonderivative Covered Models without Limited Duty Exemption
Prohibition of Training with Unreasonable Risk
- Refrain from initiating training if there is an unreasonable risk that a person, or the model itself, could use the model’s hazardous capabilities (or those of a derivative model) to cause critical harm.
- Model Type: Nonderivative Covered Models without Limited Duty Exemption
Additional Risk Mitigation Measures
- Implement any other reasonably necessary steps to prevent the development or use of hazardous capabilities and manage the associated risks, considering guidance from the Frontier Model Division, NIST, and relevant standard-setting organizations.
- Model Type: Nonderivative Covered Models without Limited Duty Exemption
Capability Testing After Training
- Upon completing training of a nonderivative covered model without a limited duty exemption, perform capability testing to determine if a limited duty exemption applies per the safety protocol. Certify to the Frontier Model Division within 90 days the basis for the exemption determination and the specific testing methodology and results.
- Model Type: Nonderivative Covered Models without Limited Duty Exemption
Annual Certification Report
- Submit to the Frontier Model Division an annual certification, under penalty of perjury, of compliance with the specified provisions, signed by the chief technology officer or a more senior corporate officer.
Post-Training
Safety and Security
Limited Duty Exemption Determination
- Upon completing training of a non-derivative covered model, the developer must perform extensive capability testing to determine if the model qualifies for a limited duty exemption. This involves assessing the model’s hazardous capabilities and ensuring compliance with safety protocols.
- Applies to: Non-derivative covered models
- Details: The developer submits a certification under penalty of perjury to the Frontier Model Division within 90 days of initiating commercial, public, or widespread use. The certification specifies the basis for the exemption determination and the specific testing methodology and results. The testing procedure must be detailed enough to allow third parties to replicate the capability testing.
Reasonable Safeguards Implementation
- Before commercial, public, or widespread release of a non-exempt covered model, the developer must implement reasonable safeguards informed by the training and testing process. These aim to prevent hazardous capability use and unsafe derivative model creation, to enable attribution of resulting critical harms, and to manage risks.
- Applies to: Non-derivative covered models without a limited duty exemption
- Details: The developer must provide derivative model developers with prevention requirements and fine-tuning information. Use is prohibited if an unreasonable risk remains that the model or derivatives could cause critical harm. Additional risk mitigation measures should consider guidance from the Frontier Model Division, NIST, and relevant standard-setting organizations.
Periodic Reevaluation
- The developer of a non-derivative covered model must periodically reevaluate the procedures, policies, protections, capabilities, and safeguards implemented, in light of growing model capabilities and as reasonably necessary.
- Applies to: Non-derivative covered models
- Details: This ensures the model or users cannot remove or bypass the implemented safety measures as AI capabilities advance.
Reporting and Certification
Annual Compliance Certification
- The developer of a non-derivative covered model that is not exempt must submit an annual certification under penalty of perjury to the Frontier Model Division attesting to compliance with the act’s requirements.
- Applies to: Non-derivative covered models without a limited duty exemption
- Details: The certification must be signed by the CTO or more senior officer, follow the Division’s format and submission date, and specify the nature/magnitude of hazardous capabilities, capability testing outcome, risk assessment of safety protocol insufficiency, and other information deemed useful by the Division.
AI Safety Incident Reporting
- The developer of a non-derivative covered model must report any AI safety incidents affecting that model and its derivatives within their custody, control, or possession to the Frontier Model Division as soon as possible, but no later than 72 hours after learning of the incident or having reason to believe one occurred (the deadline computation is sketched after this entry).
- Applies to: Non-derivative covered models and their derivatives
- Details: Reportable incidents include autonomous risk-increasing behavior, theft/release of model weights, control failures, and unauthorized hazardous capability use. The developer must follow the Division’s established reporting process.
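To make the 72-hour clock concrete, the sketch below computes a reporting deadline from the moment the developer learns of an incident. The record fields echo the incident types listed above, but the schema is hypothetical; the actual format would come from the Division's established reporting process.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

REPORTING_WINDOW = timedelta(hours=72)  # the bill's outer deadline

@dataclass
class SafetyIncidentReport:
    """Hypothetical report record; the real format would follow the
    Frontier Model Division's established reporting process."""
    model_id: str
    incident_type: str    # e.g. "weights_theft", "control_failure"
    learned_at: datetime  # when the developer learned of the incident
    description: str

    @property
    def report_deadline(self) -> datetime:
        return self.learned_at + REPORTING_WINDOW

report = SafetyIncidentReport(
    model_id="example-model-v1",
    incident_type="control_failure",
    learned_at=datetime(2024, 6, 8, 9, 30, tzinfo=timezone.utc),
    description="Model continued acting after an operator-issued stop.",
)
print(f"Report due no later than {report.report_deadline.isoformat()}")
```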
Key Takeaways for AI Teams
- Post-training involves significant obligations for developers, especially for models with hazardous capabilities.
- Continuous monitoring is required to regularly assess model capabilities, safety measure effectiveness, and legal compliance.
- Developers must engage closely with the Frontier Model Division’s guidance, requirements, and reporting procedures.
- These regulations emphasize the importance of building AI systems that are powerful, safe, secure, and aligned with societal values through responsible development practices.
Requirements for Computing Cluster Operators
You operate a “Computing Cluster” if you run a set of interconnected machines with all of the following (a rough threshold check is sketched after this list):
- Data center networking exceeding 100 gigabits per second.
- A theoretical maximum computing capacity of at least 10^20 integer/floating-point operations per second.
- The ability to be used for AI model training.
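To put the 10^20 figure in perspective, the sketch below estimates a cluster's theoretical peak from device count and per-device throughput. The ~10^15 ops/sec per-device number (roughly a current datacenter GPU at dense 16-bit precision) is an illustrative assumption, not a figure from the bill.

```python
# Rough check of SB 1047's computing-cluster definition: data center
# networking over 100 Gbit/s and a theoretical peak of at least 1e20
# integer/floating-point operations per second.

NETWORK_THRESHOLD_GBPS = 100
COMPUTE_THRESHOLD_OPS = 1e20

def is_covered_cluster(num_devices: int, peak_ops_per_device: float,
                       network_gbps: float) -> bool:
    theoretical_peak = num_devices * peak_ops_per_device
    return (network_gbps > NETWORK_THRESHOLD_GBPS
            and theoretical_peak >= COMPUTE_THRESHOLD_OPS)

# At ~1e15 ops/sec per device, crossing 1e20 ops/sec takes ~100,000 devices.
print(is_covered_cluster(1_000, 1e15, 400))    # False: ~1e18 ops/sec peak
print(is_covered_cluster(100_000, 1e15, 400))  # True: ~1e20 ops/sec peak
```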
Customer Identification and Verification
- Computing cluster operators must obtain basic identifying information from prospective customers using compute resources sufficient to train a covered model (a hypothetical record schema is sketched after this list). This includes:
- Identity of the customer
- Means and source of payment (financial institution, credit card number, account number, customer identifier, transaction identifiers, virtual currency wallet/address)
- Email address and phone contact information used to verify customer identity
- IP addresses used for access/administration and date/time of each access/action
- Applies to: Covered models
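The bill specifies what operators must collect but prescribes no storage schema. A minimal sketch of a record covering those fields, with hypothetical names:

```python
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class ClusterCustomerRecord:
    """Hypothetical know-your-customer record mirroring the fields SB 1047
    requires cluster operators to collect; field names are illustrative."""
    identity: str
    payment_means_and_source: str  # bank, card/account number, wallet, etc.
    email: str
    phone: str
    # (ip_address, timestamp) pairs for each access or administrative
    # action; per the bill, records must be retained for 7 years.
    access_log: list[tuple[str, datetime]] = field(default_factory=list)

    def log_access(self, ip_address: str, when: datetime) -> None:
        self.access_log.append((ip_address, when))
```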
Customer Intent Assessment
- Computing cluster operators must assess whether a prospective customer intends to utilize the computing cluster to deploy a covered model.
- Applies to: Covered models
Annual Validation and Review
- Computing cluster operators must annually validate the customer information collected and conduct the assessment of customer intent.
- Applies to: Covered models
Record Maintenance and Disclosure
- Computing cluster operators must maintain appropriate records of actions taken, including policies and procedures, for 7 years. These records should be provided to the Frontier Model Division or the Attorney General upon request.
- Applies to: Covered models
Emergency Shutdown Capability
- Computing cluster operators must implement the capability to promptly enact a full shutdown of the computing cluster in the event of an emergency.
- Applies to: Covered models
Retention of Access Records
- Computing cluster operators must retain a customer’s IP addresses used for access/administration and the date and time of each access or administrative action.
- Applies to: Covered models
Public Pricing Schedule
- Computing cluster operators must provide a transparent, uniform, publicly available price schedule for access to the computing cluster at a given level of quality and quantity, subject to the operator’s terms of service. They cannot engage in unlawful discrimination or anti-competitive practices when setting prices or granting access.
- Applies to: Covered models
Permitted Preferential Access
- Computing cluster operators may provide free, discounted, or preferential access to public entities, academic institutions, or for noncommercial research purposes.
- Applies to: Covered models
Other Requirements
I. Reporting and Certification
Annual Certification of Compliance
- Developers of non-derivative covered models that are not subject to a limited duty exemption must submit an annual certification under penalty of perjury of compliance with the requirements, signed by the chief technology officer or a more senior corporate officer. The certification must specify the nature and magnitude of hazardous capabilities the model possesses or may reasonably possess, the outcome of capability testing, an assessment of the risk that compliance with the safety and security protocol may be insufficient to prevent harms, and other useful information.
- Applies to: Non-derivative covered models not subject to limited duty exemption
Artificial Intelligence Safety Incident Reporting
- Developers must report each artificial intelligence safety incident affecting a non-derivative covered model and any derivative versions within their custody, control, or possession to the Frontier Model Division within 72 hours of learning about the incident or facts establishing a reasonable belief that an incident occurred.
- Applies to: Non-derivative covered models and derivative versions
II. Operational Requirements
Capability Testing Post-Training
- Upon completion of training of a covered model that is not a derivative model, the developer must perform capability testing to determine if a limited duty exemption applies. The developer must submit a certification of compliance within 90 days of completing training, and no more than 30 days after initiating the commercial, public, or widespread use of the covered model.
- Applies to: Non-derivative covered models
Reasonable Safeguards Implementation
- Developers must implement reasonable safeguards and requirements informed by the training and testing process to prevent individuals from using the hazardous capabilities of the model or its derivatives to cause critical harm. This includes ensuring actions and resulting critical harms can be accurately attributed to the model and any responsible users.
- Applies to: Non-derivative covered models and derivative models
III. Ethical and Legal Compliance
Pricing and Access Requirements
- Developers that provide commercial access to a covered model must provide a transparent, uniform, publicly available price schedule and not engage in unlawful discrimination or non-competitive activity in determining price or access.
- Applies to: Covered models provided for commercial access
Employee Whistleblower Protections
- Developers cannot prevent employees from disclosing information to the Attorney General if the employee reasonably believes it indicates the developer is out of compliance with the requirements. Developers cannot retaliate against employees for such disclosures. Developers must provide clear notice of these rights to all employees working on covered models and establish a reasonable internal anonymous disclosure process for employees to report concerns about potential non-compliance or false or misleading statements related to the safety and security protocol.
- Applies to: Employees working on covered models
IV. Oversight and Enforcement
Duties of Frontier Model Division
- The bill creates the Frontier Model Division within the Department of Technology to review annual certifications, advise the Attorney General on violations, issue guidance and best practices, publish safety incident reports, appoint advisory committees, and levy fees.
- Applies to: The Frontier Model Division’s oversight of covered models
Enforcement and Penalties
- The Attorney General has the authority to bring civil action against any person (individual or organization) violating this act. Penalties for violations may include injunctive relief (court orders to stop harmful actions, potentially including the deletion of a model and its weights, or a full shutdown of the model), monetary damages (compensation to those harmed by violations, including punitive damages), and civil penalties (fines based on the development cost of the model, with higher penalties for subsequent violations).
- Applies to: Developers of covered models
V. Establishment of CalCompute
CalCompute Public Cloud Computing Cluster
- The Department of Technology must commission consultants to create CalCompute, a public cloud computing cluster, to conduct research into the safe deployment of large-scale AI models and foster equitable innovation.
