California Enacts Landmark AI Safety Law Targeting Tech Giants
In a significant move to regulate the burgeoning AI industry, California Governor Gavin Newsom has signed Senate Bill 53 into law. The new legislation, announced on September 29, requires the world’s largest artificial intelligence (AI) companies to publicly disclose their safety protocols and report critical incidents.
SB 53: A New Era of AI Regulation in California
Senate Bill (SB) 53 represents California’s most substantial effort to date in regulating Silicon Valley's rapidly advancing AI industry. The state aims to balance innovation with responsible development while maintaining its position as a global tech hub.
State Senator Scott Wiener, the bill's sponsor, stated, "With a technology as transformative as AI, we have a responsibility to support that innovation while putting in place commonsense guardrails." The new law marks Mr. Wiener's second attempt to establish AI safety regulations, this time successfully.
Key Provisions of the New Law
The Transparency in Frontier Artificial Intelligence Act mandates that major AI companies publicly disclose their safety and security protocols in a redacted form to protect intellectual property. They are also required to report critical safety incidents, such as model-enabled weapons threats, major cyber-attacks, or loss of model control, to state officials within 15 days.
- Public Disclosure: Companies must reveal their safety and security protocols.
- Incident Reporting: Critical safety incidents must be reported within 15 days.
- Whistleblower Protection: Employees who reveal dangers or violations are protected.
The legislation also establishes protections for employees who come forward with evidence of dangers or violations within AI development.
California's Approach vs. European Union's AI Act
According to Mr. Wiener, California's approach differs from the European Union’s landmark AI Act, which requires private disclosures to government agencies. SB 53 mandates public disclosure to ensure greater accountability. In a provision described by advocates as a world-first, the law requires companies to report instances where AI systems engage in dangerous deceptive behavior during testing.
For example, if an AI system lies about the effectiveness of controls designed to prevent it from assisting in bioweapon construction, developers must disclose the incident if it materially increases the risk of catastrophic harm.
Expert Leadership Behind the Law
The working group behind the law was led by prominent experts, including Stanford University’s Professor Fei-Fei Li, known as the “godmother of AI”.