The Case Against SB 1047

California SB 1047, the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act, is a proposed state bill that aims to regulate the development and deployment of large, advanced AI models by imposing safety and testing requirements on their developers. The bill now sits on Governor Gavin Newsom’s desk; he must either sign it into law or veto it.

The bill targets “covered models,” defined as follows:

  • Before January 1, 2027: AI models trained using more than 10^26 operations at a cost of over $100 million, or models fine-tuned using more than 3×10^25 operations at a cost of over $10 million.
  • After January 1, 2027: The thresholds will be updated annually by the Government Operations Agency to reflect technological advancements.

Note that both foundation models (pre-training) and their customized variants (fine-tuning) are covered.  

To put the scale and cost thresholds in SB 1047 into perspective, compare them with well-known large language models (LLMs) like GPT-4, Claude, Gemini, and LLaMA. The first threshold covers AI models trained using more than 10^26 operations at a cost of over $100 million. GPT-4, one of the most sophisticated LLMs, is estimated to have been trained with on the order of 10^24 to 10^25 operations, so the models targeted by SB 1047 would require roughly 10 to 100 times more training compute, implying substantially larger parameter counts, datasets, and engineering complexity. The $100 million cost threshold likewise exceeds the estimated training costs of even the most advanced current models, positioning the covered foundation models as projects achievable only by the largest tech companies. This matters because scaling has consistently delivered more sophisticated and capable models:

bigger models + more data + more compute = better performance
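
To make the first threshold concrete, here is a back-of-the-envelope sketch using the common approximation that training a dense transformer takes roughly 6 × N × D operations, where N is the parameter count and D the number of training tokens. Both the approximation and the example figures are illustrative assumptions, not numbers from the bill or from any lab.

```python
# Rough check against SB 1047's covered-model compute threshold, using the
# common ~6 * N * D estimate of dense-transformer training operations
# (N = parameters, D = training tokens). All figures below are
# illustrative assumptions, not published numbers.

TRAINING_THRESHOLD_OPS = 1e26   # SB 1047: more than 10^26 operations

def training_ops(params: float, tokens: float) -> float:
    """Approximate total training operations for a dense transformer."""
    return 6 * params * tokens

# A GPT-4-class run (the text's estimate: ~10^24 to 10^25 operations),
# e.g. a couple hundred billion parameters on ~8 trillion tokens:
gpt4_class = training_ops(2e11, 8e12)   # ~9.6e24 ops -> below the threshold

# A hypothetical frontier run: 1 trillion parameters on 20 trillion tokens:
frontier = training_ops(1e12, 2e13)     # ~1.2e26 ops -> above the threshold

for name, ops in [("GPT-4-class", gpt4_class), ("frontier", frontier)]:
    print(f"{name}: {ops:.1e} ops, covered: {ops > TRAINING_THRESHOLD_OPS}")
```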

The second threshold addresses fine-tuned models: runs that use more than 3 × 10^25 operations and cost over $10 million. Fine-tuning at this scale goes far beyond simple adjustments; it represents a substantial investment in specializing a pre-trained model, potentially for highly complex tasks within specific domains like medicine or advanced coding. For comparison, this threshold is roughly 100 times the compute used for GPT-3’s entire training run. The $10 million cost floor likewise excludes all but the largest organizations from covered fine-tuning efforts. Models above this threshold reflect a growing trend in which fine-tuning is used not merely to improve existing models but to push AI capabilities in specific, high-stakes applications; the sketch below gives a rough sense of just how much fine-tuning that is.
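
Continuing with the same 6 × N × D approximation (again an assumption for illustration, with hypothetical model sizes), here is what crossing the fine-tuning threshold would actually take:

```python
# How much fine-tuning crosses SB 1047's 3e25-operation threshold?
# Same illustrative ~6 * N * D approximation as above; the model sizes
# are hypothetical examples, not specific products.

FINETUNE_THRESHOLD_OPS = 3e25
GPT3_TRAINING_OPS = 3.14e23   # widely cited estimate of GPT-3's full training run

ratio = FINETUNE_THRESHOLD_OPS / GPT3_TRAINING_OPS
print(f"Threshold vs. GPT-3 training: ~{ratio:.0f}x")   # ~96x

for params in (7e10, 4e11):   # e.g. a 70B and a 400B parameter model
    tokens = FINETUNE_THRESHOLD_OPS / (6 * params)
    print(f"{params:.0e}-param model: ~{tokens:.1e} tokens to hit the threshold")
```

Even for a hypothetical 400-billion-parameter model, crossing the threshold would mean fine-tuning on more than 10^13 tokens, more data than most models see in all of pre-training, which underscores how far beyond routine fine-tuning this threshold sits.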

It’s crucial to recognize that these thresholds are not immutable: advances in architecture or hardware could lower the barriers for more teams to reach frontier scale, and the Government Operations Agency can revise the definitions annually. Even so, between the technical costs and the regulatory obstacles SB 1047 adds, building groundbreaking AI models will remain the province of organizations with both substantial resources and deep expertise.


Additionally, SB 1047 addresses the potential for “critical harm” associated with covered AI models. It defines critical harm as significant real-world damage caused or materially enabled by an AI model or its derivative. This includes severe consequences such as the creation or use of weapons of mass destruction leading to mass casualties; cyberattacks on critical infrastructure resulting in mass casualties or over $500 million in damages; and actions by the model, taken with limited human oversight, that cause comparable casualties or damages and would constitute specific crimes if committed by a human. The bill also covers other grave threats to public safety and security of comparable severity, requiring AI developers to take comprehensive measures to prevent such outcomes.

Why Governor Newsom Should Veto SB 1047

Governor Newsom should veto SB 1047 because it imposes excessive regulatory burdens that could stifle innovation, harm California’s economy, and place undue restrictions on AI development, particularly for open source communities, startups, and smaller companies.

Federal vs. State Regulation. AI regulation should be handled at the federal level to ensure consistency across the country, rather than through a patchwork of state laws.

Stifles Innovation. SB 1047 imposes excessive regulatory burdens, particularly on startups and smaller companies. The high compliance costs and potential liability risks could hinder innovation and create a chilling effect on AI development. (Dive into the details in the Appendix below.)

Economic Harm to California. The bill could negatively impact California’s economy by driving AI companies and jobs out of the state due to the regulatory and legal burdens it imposes. This could lead to a decline in investment and a weakening of the state’s innovation ecosystem.

Impact on Open-Source AI Development. SB 1047’s stringent requirements, such as emergency shutdown mandates, could disproportionately affect open models (open-source or open-weights). The increased compliance costs and legal liabilities may discourage developers from releasing open models, potentially stifling innovation in this collaborative space.

Overly Broad and Ambiguous Regulation. The bill targets general-purpose AI technology rather than specific applications, making its scope far too encompassing. Its requirements are also vague; it is unclear, for example, what counts as a “prompt” shutdown, which invites legal challenges and over-compliance, especially from smaller companies. Instead of creating a new, rigid AI regulatory framework, a more effective approach would be to adapt existing laws. Just as steel is regulated differently depending on its use, AI governance should be tailored to each application’s context and risks. Updating laws in sectors like healthcare, transportation, and copyright to reflect AI’s impact would allow responsible development while mitigating specific concerns.

Premature Regulation of Nascent Technology. AI technology is still in its early stages, and the potential harms are not yet fully understood. Comprehensive regulation, like SB 1047, is premature and could stifle the growth and maturation of AI technologies.

Restricting the Development of Ultra-Large AI Models. SB 1047’s thresholds for AI models trained with more than 10^26 operations and costing over $100 million impose heavy regulatory burdens, limiting such development to only the largest tech companies and stifling broader competition and innovation.

Limiting Access to Specialized Fine-Tuning. The stringent requirements for fine-tuning models—exceeding 3 × 10^25 operations and $10 million in costs—could exclude all but the largest organizations from pursuing specialized AI development in high-stakes domains like medicine, restricting innovation and application diversity.

Appendix: SB 1047 Cheat Sheet

[based on an online version pulled on 2024-08-30]

I. Scope and Definitions

A. Covered Models
  • Definition:
    • Before January 1, 2027: AI models trained using more than 10^26 operations at a cost of over $100 million, or models fine-tuned using more than 3×10^25 operations at a cost of over $10 million.
    • After January 1, 2027: The thresholds will be updated annually by the Government Operations Agency to reflect technological advancements.
  • Affected Parties: Developers of covered AI models.
B. Critical Harm
  • Definition: Significant real-world harms caused or materially enabled by a covered model or its derivative, including:
    • The creation or use of weapons of mass destruction resulting in mass casualties.
    • Mass casualties or over $500 million in damages from cyberattacks on critical infrastructure.
    • Mass casualties or over $500 million in damages from actions by the model with limited human oversight that would constitute specific crimes if committed by a human.
    • Other grave harms to public safety and security comparable in severity.

II. Pre-Training Requirements

A. Implement Cybersecurity Protections
  • Description: Developers must implement reasonable administrative, technical, and physical cybersecurity measures to prevent unauthorized access, misuse, or unsafe modifications of covered models and their derivatives.
  • Timeline: Before beginning initial training.
  • Affected Parties: Developers of covered models.
  • Enforcement Mechanisms/Penalties: Civil penalties up to 10-30% of model training costs for violations causing harm.
B. Implement Full Shutdown Capability
  • Description: Developers must implement the capability to promptly enact a full shutdown of the model training, covered models, and their derivatives, considering potential impacts on critical infrastructure.
  • Timeline: Before beginning initial training.
  • Affected Parties: Developers of covered models.
  • Enforcement Mechanisms/Penalties: Civil penalties up to 10-30% of model training costs for violations causing harm.
C. Implement Safety and Security Protocol
  • Description: Developers must implement a written protocol specifying protections and procedures to avoid unreasonable risks of critical harm, including detailed testing procedures, compliance requirements, safeguards, and conditions for enacting a full shutdown.
  • Timeline: Before beginning initial training.
  • Affected Parties: Developers of covered models.
  • Enforcement Mechanisms/Penalties: Civil penalties up to 10-30% of model training costs for violations causing harm.
D. Ensure Protocol Implementation
  • Description: Senior personnel must be designated to ensure compliance with the safety protocol by employees and contractors.
  • Timeline: Before beginning initial training.
  • Affected Parties: Developers of covered models.
  • Enforcement Mechanisms/Penalties: Civil penalties up to 10-30% of model training costs for violations causing harm.
E. Retain Safety Protocol
  • Description: Developers must retain an unredacted copy of the safety protocol for as long as the model is commercially available plus five years, including records of updates and revisions.
  • Timeline: Before beginning initial training.
  • Affected Parties: Developers of covered models.
  • Enforcement Mechanisms/Penalties: Civil penalties up to 10-30% of model training costs for violations causing harm.
F. Conduct Annual Protocol Review
  • Description: Developers must annually review and update the safety protocol to account for changes in the model’s capabilities and industry best practices.
  • Timeline: Annually after initial implementation.
  • Affected Parties: Developers of covered models.
  • Enforcement Mechanisms/Penalties: Civil penalties up to 10-30% of model training costs for violations causing harm.
G. Publish Redacted Protocol
  • Description: Developers must publish a redacted version of the safety protocol and submit it to the Attorney General. Redactions are allowed only to protect public safety, trade secrets, or confidential information.
  • Timeline: Before beginning initial training and within 30 days of material modifications.
  • Affected Parties: Developers of covered models.
  • Enforcement Mechanisms/Penalties: Civil penalties up to 10-30% of model training costs for violations causing harm.
H. Implement Additional Safety Measures
  • Description: Developers must take reasonable care to implement other appropriate measures to prevent unreasonable risks of critical harm.
  • Timeline: Before beginning initial training.
  • Affected Parties: Developers of covered models.
  • Enforcement Mechanisms/Penalties: Civil penalties up to 10-30% of model training costs for violations causing harm.

III. Pre-Deployment Requirements

A. Assess Critical Harm Capability
  • Description: Developers must assess whether the model is reasonably capable of causing or enabling critical harm.
  • Timeline: Before using the model beyond training/evaluation or making it publicly available.
  • Affected Parties: Developers of covered models.
  • Enforcement Mechanisms/Penalties: Civil penalties up to 10-30% of model training costs for violations causing harm.
B. Record and Retain Test Information
  • Description: Developers must record and retain for five years or more the information on specific tests and results used in the risk assessment, with sufficient detail for third-party replication.
  • Timeline: Before using the model beyond training/evaluation or making it publicly available.
  • Affected Parties: Developers of covered models.
  • Enforcement Mechanisms/Penalties: Civil penalties up to 10-30% of model training costs for violations causing harm.
C. Implement Safeguards Against Critical Harm
  • Description: Developers must take reasonable care to implement appropriate safeguards to prevent the model and its derivatives from causing or enabling critical harm.
  • Timeline: Before using the model beyond training/evaluation or making it publicly available.
  • Affected Parties: Developers of covered models.
  • Enforcement Mechanisms/Penalties: Civil penalties up to 10-30% of model training costs for violations causing harm.
D. Ensure Attribution Capability
  • Description: Developers must take reasonable care to ensure that the actions of the model and its derivatives, and any resulting harms, can be accurately attributed to them.
  • Timeline: Before using the model beyond training/evaluation or making it publicly available.
  • Affected Parties: Developers of covered models.
  • Enforcement Mechanisms/Penalties: Civil penalties up to 10-30% of model training costs for violations causing harm.

IV. Deployment Restrictions

  • Description: Developers are prohibited from deploying a covered model or its derivative for commercial, public, or foreseeable public use if there is an unreasonable risk of it causing or enabling critical harm.
  • Timeline: Ongoing after initial deployment.
  • Affected Parties: Developers of covered models.
  • Enforcement Mechanisms/Penalties: Civil penalties up to 10-30% of model training costs for violations causing harm.

V. Ongoing Requirements

A. Conduct Annual Reevaluation
  • Description: Developers must annually reevaluate the procedures, policies, protections, capabilities, and safeguards implemented for the model.
  • Timeline: Annually after initial deployment.
  • Affected Parties: Developers of covered models.
  • Enforcement Mechanisms/Penalties: Civil penalties up to 10-30% of model training costs for violations causing harm.
B. Conduct Independent Audit
  • Description: Starting January 1, 2026, developers must annually retain a third-party auditor to perform an independent audit of their compliance with the bill’s provisions.
  • Timeline: Annually beginning January 1, 2026.
  • Affected Parties: Developers of covered models.
  • Enforcement Mechanisms/Penalties: Civil penalties up to 10-30% of model training costs for violations causing harm.
C. Produce and Retain Audit Report
  • Description: The auditor must produce a report assessing the developer’s compliance, which the developer must retain for five years or more.
  • Timeline: Annually beginning January 1, 2026.
  • Affected Parties: Developers and auditors of covered models.
  • Enforcement Mechanisms/Penalties: Civil penalties up to 10-30% of model training costs for violations causing harm.
D. Submit Compliance Statement
  • Description: Developers must annually submit a statement to the Attorney General certifying their compliance with the bill’s requirements.
  • Timeline: Annually and within 30 days of initial deployment.
  • Affected Parties: Developers of covered models.
  • Enforcement Mechanisms/Penalties: Civil penalties up to 10-30% of model training costs for violations causing harm.
E. Report Safety Incidents
  • Description: Developers must report any AI safety incidents affecting the covered model or its derivatives to the Attorney General within 72 hours of becoming aware of the incident.
  • Timeline: Within 72 hours of incident discovery.
  • Affected Parties: Developers of covered models.
  • Enforcement Mechanisms/Penalties: Civil penalties up to 10-30% of model training costs for violations causing harm.

VI. Computing Cluster Requirements

A. Conduct Customer Due Diligence
  • Description: Operators of computing clusters must implement procedures to obtain customer information and assess their intent to train a covered model when sufficient resources are used.
  • Timeline: When customers use resources sufficient to train a covered model.
  • Affected Parties: Operators of computing clusters.
  • Enforcement Mechanisms/Penalties: Civil penalties up to $50,000-$100,000 per violation, with a $10 million aggregate cap for related violations.
B. Retain Customer Information
  • Description: Operators must retain customer IP addresses and access times for seven years (see the record-schema sketch after this section).
  • Timeline: Ongoing seven-year retention period.
  • Affected Parties: Operators of computing clusters.
  • Enforcement Mechanisms/Penalties: Civil penalties up to $50,000-$100,000 per violation, with a $10 million aggregate cap for related violations.
C. Implement Shutdown Capability
  • Description: Operators must implement the capability to promptly shut down resources used to train or operate customer models.
  • Timeline: Ongoing requirement.
  • Affected Parties: Operators of computing clusters.
  • Enforcement Mechanisms/Penalties: Civil penalties up to $50,000-$100,000 per violation, with a $10 million aggregate cap for related violations.
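
To make Section VI’s record-keeping duties concrete, here is a minimal sketch of the customer record a computing-cluster operator might retain. The dataclass, its field names, and the retention helper are hypothetical illustrations; the bill specifies only the kinds of information to collect (identity, IP addresses, access times) and the seven-year retention period, not any schema.

```python
# Hypothetical customer record for a computing-cluster operator under
# Section VI. Field names and structure are illustrative assumptions;
# SB 1047 prescribes what to retain, not how to store it.
from dataclasses import dataclass, field
from datetime import datetime, timedelta

RETENTION_PERIOD = timedelta(days=7 * 365)  # seven-year retention requirement

@dataclass
class CustomerRecord:
    customer_id: str
    business_identity: str                     # basic identifying information
    ip_addresses: list[str] = field(default_factory=list)
    access_times: list[datetime] = field(default_factory=list)
    may_train_covered_model: bool = False      # due-diligence assessment result

    def retention_expires(self) -> datetime:
        """Earliest date the record could be purged under the 7-year rule."""
        last_access = max(self.access_times, default=datetime.now())
        return last_access + RETENTION_PERIOD
```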

VII. Whistleblower Protections

A. Protect Employee Disclosures
  • Description: Developers and their contractors or subcontractors are prohibited from preventing employees from disclosing compliance or safety issues to the authorities.
  • Timeline: Ongoing requirement.
  • Affected Parties: Developers, contractors, and subcontractors of covered models.
  • Enforcement Mechanisms/Penalties: Penalties under the Labor Code.
B. Prohibit Retaliation
  • Description: Developers are prohibited from retaliating against employees for protected disclosures related to compliance or safety risks.
  • Timeline: Ongoing requirement.
  • Affected Parties: Developers, contractors, and subcontractors of covered models.
  • Enforcement Mechanisms/Penalties: Penalties under the Labor Code.
C. Provide Notice of Rights
  • Description: Developers must provide clear notice to employees regarding their whistleblower rights and responsibilities.
  • Timeline: Ongoing requirement.
  • Affected Parties: Developers of covered models.
  • Enforcement Mechanisms/Penalties: Penalties under the Labor Code.
D. Implement Internal Disclosure Process
  • Description: Developers must establish a reasonable internal process for anonymous employee disclosures regarding potential violations.
  • Timeline: Ongoing requirement.
  • Affected Parties: Developers of covered models.
  • Enforcement Mechanisms/Penalties: Penalties under the Labor Code.

VIII. Governance and Oversight

A. Board of Frontier Models
  • Description: Establishes a Board within the Government Operations Agency to oversee regulatory updates, including the definition of a “covered model.”
  • Timeline: On or before January 1, 2027.
  • Affected Parties: Government Operations Agency, Board of Frontier Models, AI developers.
  • Enforcement Mechanisms/Penalties: Not explicitly specified, but likely under regulatory compliance mechanisms.
B. CalCompute Framework
  • Description: Establishes a consortium to develop a public cloud computing cluster (CalCompute) to support safe, ethical, equitable, and sustainable AI development.
  • Timeline: Report to Legislature by January 1, 2026.
  • Affected Parties: Government Operations Agency, public and private academic institutions, and the general public.
  • Enforcement Mechanisms/Penalties: Implementation contingent on state budget appropriations.
