
On September 29, 2024, California Governor Gavin Newsom vetoed Senate Bill 1047, which would have enacted the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act (the “Act”) to create a comprehensive regulatory framework for the development of artificial intelligence models. The veto embodies the dilemma that has emerged around the regulation of AI applications: how can laws prevent harms in the use and development of AI, while promoting innovation and harnessing the power of new technologies to effect positive change?

The Safe and Secure Innovation for Frontier Artificial Intelligence Models Act at a Glance

Although the Act sought to follow the EU AI Act in establishing a comprehensive regulatory framework, it eschewed the AI Act’s risk-based approach to AI regulation. Instead, the Act would have applied to “covered models,” defined as AI models that exceed specified thresholds for the computing power used in their training and the cost of that training.

The Act would have imposed certain obligations and restrictions on covered models, including:

  • The implementation of administrative, technical, and physical cybersecurity measures to protect against unauthorized access, misuse, or post-training modifications
  • The implementation of full-shutdown capabilities
  • A written safety and security protocol, along with the designation of senior personnel to ensure implementation of the protocol as written, the retention of the protocol for as long as the model is made available (plus five years), and the publication and filing of the protocol with the California attorney general
  • Assessment of whether a covered model is capable of causing or enabling a critical harm
  • Not making a covered model available for commercial or public use if there is a risk that the model will cause or enable a critical harm
  • Undertaking and retaining annual third-party audits of covered models
  • The submission of a statement of compliance to California’s attorney general
  • Reporting safety incidents to the attorney general

The Act would have provided for attorney general enforcement, with monetary penalties and injunctive relief available for violations.

Although Senate Bill 1047 will not become law, there are federal legislative and regulatory efforts underway that would regulate AI developers and cloud providers. For example, the U.S. Department of Commerce’s Bureau of Industry and Security recently released a Notice of Proposed Rulemaking that would impose cyber reporting obligations on frontier AI developers and compute providers.

In Context

The veto of the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act is the culmination of a busy legislative period for AI regulation, with Governor Newsom signing eighteen new laws relating to AI in the preceding 30 days. The new California legislation runs the gamut of subject areas, from laws requiring the disclosure of AI tools used in political advertisements to the establishment of a commission to consider the inclusion of AI literacy in California schools.

While this frenzy of legislative activity suggests a willingness to regulate AI, the refusal to enact a comprehensive regulatory framework in the style of the European Union’s AI Act or Colorado’s recent AI law is significant. The Act attracted opposition from California Representatives Nancy Pelosi and Zoe Lofgren, AI thought leaders such as Professor Fei-Fei Li, and California organizations leading AI development. The criticisms, many of which were reflected in Governor Newsom’s veto message, noted the law’s impact on innovation, its failure to adopt a risk-based approach like that of the EU AI Act, and potential harms to the development and availability of open source AI models.

Despite the veto, Governor Newsom reiterated his commitment “to working with the Legislature, federal partners, technology experts, ethicists, and academia, to find the appropriate path forward, including legislation and regulation.” Organizations that develop and use AI should continue to monitor legislative developments as statehouses consider both comprehensive and use-specific proposals to regulate AI.

Author

Adam Aft helps global companies navigate the complex issues regarding intellectual property, data, and technology in product counseling, technology, and M&A transactions. He leads the Firm's North America Technology Transactions group and co-leads the group globally. Adam regularly advises a range of clients on transformational activities, including the intellectual property, data and data privacy, and technology aspects of mergers and acquisitions, new product and service initiatives, and new trends driving business such as platform development, data monetization, and artificial intelligence.

Author

Cynthia J. Cole is a partner in Baker McKenzie’s Commercial, Technology and Transactions and Data and Cyber practices, and co-chair of Baker Women California. A former CEO and General Counsel, just before joining the Firm, Cynthia was Deputy Department Chair of the Corporate Section in the California offices of Baker Botts where she built the technology transactions and data privacy practice. An intellectual property transactions attorney, Cynthia also has expertise in AI, digital transformation, data privacy, and cybersecurity strategy.

Author

Brian Hengesbaugh is Global Chair of Baker McKenzie's Data & Cyber Practice. Formerly special counsel to the general counsel of the US Department of Commerce, Brian played a key role in the development and implementation of the US Government’s domestic and international policy in the area of privacy and electronic commerce. In particular, he served on the core team that negotiated the US-EU Safe Harbor Privacy Arrangement (Safe Harbor) and earned a Medal Award from the US Department of Commerce for this service.

Author

Cristina Messerschmidt is a senior associate in the Data and Cyber practice group based in Chicago, advising global organizations on data privacy and cybersecurity compliance requirements, data security incident response, and legal issues related to AI.

Author

Ella is an associate in our Firm's Intellectual Property & Technology Practice Group and is based in our San Francisco office.

Author

Justine focuses her practice on both proactive and reactive cybersecurity and data privacy services, representing clients in matters related to information governance, diligence in acquisitions and investments, incident preparedness and response, the California Consumer Privacy Act, privacy litigation, and cyber litigation.

Author

Garrett is an associate in Baker McKenzie's North America Intellectual Property Group and is based in our San Francisco office. His practice focuses on helping clients build effective information governance programs, comply with privacy laws and regulations, and respond to cybersecurity incidents.

Author

Avi Toltzis is a Knowledge Lawyer in Baker McKenzie's Chicago office.