
On September 29, 2025, California Governor Gavin Newsom signed SB 53 into law, enacting the Transparency in Frontier Artificial Intelligence Act (“TFAIA”). The TFAIA aims to ensure the safety and transparency of large AI models, and it is particularly significant because it comes from the state many of the world’s leading AI companies call home. The enactment of the TFAIA is also notable because Governor Newsom vetoed a more restrictive and expansive predecessor bill last year, finding the 2024 bill overly broad, likely to stifle innovation, and insufficiently tailored to evidence-based risks. Senator Scott Wiener, who authored the TFAIA as well as the vetoed 2024 bill, modified several provisions in the TFAIA to avoid another veto in 2025, and succeeded in that respect.

Scope

The TFAIA requires developers of “frontier models” to make various disclosures and imposes additional obligations on “large frontier developers,” i.e., frontier developers with annual gross revenues in excess of $500 million in the preceding calendar year. A frontier model is a foundation model trained using more than 10^26 integer or floating-point operations, counting the compute for the initial training run as well as for any subsequent fine-tuning, reinforcement learning, or other material modifications applied to the model.
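
Note that the statute’s compute count is cumulative: the initial training run and any later material modifications are summed together, so a model can cross the line through post-training work alone. The following is a minimal sketch of that arithmetic, not a compliance tool; the compute figures and variable names are hypothetical.

```python
# Minimal sketch of the TFAIA "frontier model" compute threshold.
# The statute counts the initial training run plus any subsequent
# fine-tuning, reinforcement learning, or other material modifications.
# All compute figures below are hypothetical.

TFAIA_THRESHOLD_OPS = 10**26  # integer or floating-point operations

initial_training_run = 8.5e25    # hypothetical
fine_tuning = 9.0e24             # hypothetical
reinforcement_learning = 7.0e24  # hypothetical

total_ops = initial_training_run + fine_tuning + reinforcement_learning

# 8.5e25 + 9.0e24 + 7.0e24 = 1.01e26 > 1e26: the cumulative count,
# not the initial run alone, determines frontier model status.
print(f"total: {total_ops:.2e} ops; frontier model: {total_ops > TFAIA_THRESHOLD_OPS}")
```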

Frontier AI Framework

A large frontier developer must implement, and clearly and conspicuously publish on its website, a frontier AI framework that addresses ten enumerated topics, including: defining and assessing thresholds to determine whether its models could pose catastrophic risks; identifying and responding to critical safety incidents, such as loss of model control or unauthorized access to or exfiltration of model weights that causes harm; and implementing cybersecurity practices. The framework must also describe, among other things, how the large frontier developer incorporates national standards, international standards, and industry-consensus best practices. Under the statute, catastrophic risks are foreseeable, material risks that a frontier model could cause the death of or injury to more than 50 people, or more than $1 billion in property damage, in a single incident, if used in certain ways.

The large frontier developer must review and, as appropriate, update its frontier AI framework at least annually. If the developer makes any material modifications to the framework, it must publish the modified framework, along with the justification for the modifications, within 30 days of the change.

Transparency Reports

Before or concurrently with deploying a new frontier model, all frontier developers must publish on their websites a transparency report that includes a mechanism for communicating with the developer, the release date of the frontier model, the languages the model supports, the modalities of model outputs, the model’s intended uses, and generally applicable restrictions or conditions on the use of the model. Large frontier developers must also summarize their assessments of catastrophic risk, including the results of those assessments and the involvement of any third-party evaluators.
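
For teams operationalizing these disclosures, the statutory fields map naturally onto a simple structured record. The sketch below is illustrative only; the field names are our own shorthand, not terms drawn from the statute.

```python
# Illustrative record of the transparency report contents described above.
# Field names are our own shorthand, not statutory terms.
from dataclasses import dataclass, field

@dataclass
class TransparencyReport:
    contact_mechanism: str          # mechanism to communicate with the developer
    release_date: str               # release date of the frontier model
    supported_languages: list[str]  # languages the model supports
    output_modalities: list[str]    # modalities of model outputs (e.g., text, image)
    intended_uses: list[str]        # the model's intended uses
    use_restrictions: list[str]     # generally applicable restrictions or conditions
    # Large frontier developers only: summaries of catastrophic risk
    # assessments, their results, and third-party evaluator involvement.
    risk_assessment_summaries: list[str] = field(default_factory=list)
```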

Reporting Obligations

Large frontier developers must submit summaries of any assessments of catastrophic risk to the California Office of Emergency Services (“OES”) every three months or on another reasonable schedule specified by the developer and communicated in writing to OES.

In addition to these regular summaries, the TFAIA requires frontier developers to report critical safety incidents to OES within 15 days of discovering the incident. For critical safety incidents that pose an imminent risk of death or serious physical injury, the reporting window shortens to 24 hours, and the developer may also need to report such incidents to law enforcement or public safety agencies.

Beginning in 2027, OES will produce an annual report with anonymized and aggregated information about critical safety incidents from the preceding year. These reports may not include information that would compromise a developer’s trade secrets or cybersecurity, public safety, or the national security of the United States.

Frontier developers may meet their critical safety incident reporting obligations by complying with federal laws, regulations, or guidance documents that impose substantially similar (or stricter) reporting requirements. To rely on this mechanism, however, OES must first adopt regulations designating the relevant federal laws, regulations, or guidance documents as meeting those criteria, and the developer must disclose to OES its intention to rely on the mechanism.

Whistleblower Protections

The TFAIA also prohibits a frontier developer from making, adopting, enforcing, or entering into any rule, regulation, policy, or contract that prevents a covered employee from disclosing information, or that retaliates against a covered employee for disclosing information, to the Attorney General, a federal authority, a person with authority over the covered employee, or another covered employee who has authority to investigate, discover, or correct the reported issue. These protections apply where the covered employee has reasonable cause to believe that the information discloses a violation of the frontier developer’s TFAIA obligations, or that the frontier developer’s activities pose a specific and substantial danger to public health or safety resulting from a catastrophic risk. Frontier developers must provide clear notice to covered employees of their whistleblowing rights and responsibilities, including by posting and displaying notice within the workplace and, at least once a year, providing written notice that each covered employee receives and acknowledges. The TFAIA defines covered employees as those employees responsible for assessing, managing, or addressing risk of critical safety incidents.

Enforcement

The TFAIA is enforced by the California Attorney General. A frontier developer that fails to comply with the TFAIA’s requirements, including by failing to comply with its own frontier AI framework, may be liable for civil penalties of up to $1 million per violation.

Discussion: The TFAIA in Context

In recent years, California has filled its code books with AI regulations addressing a diverse set of concerns. The TFAIA adds a comprehensive law focused on ensuring the safety of frontier models. As mentioned above, the TFAIA follows an earlier attempt by California lawmakers to impose transparency and safety obligations on developers of frontier models, which the legislature passed as the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act, but which Governor Newsom ultimately vetoed.

While the new law shares some aspects with its predecessor (e.g., both include reporting and public notice requirements), several key changes helped the TFAIA avoid another veto. For example, the TFAIA jettisoned the earlier bill’s requirements that frontier developers implement a pre-training written safety and security protocol, provide full shutdown capability for frontier models, and publish yearly third-party audits of their compliance. The TFAIA also lengthens the reporting deadline for AI incidents from 72 hours to 15 days, subject to the exception for critical incidents that pose an imminent risk of death or serious physical injury.

Although these changes will be welcome to developers wary of onerous regulation, they come at a time when the AI governance landscape is becoming increasingly complex and difficult to navigate. For example, the New York state legislature recently passed its Responsible AI Safety and Education Act (the “RAISE Act”), which follows a similar approach to AI governance but differs in many key details. Meanwhile, concerned about the potential for regulation to stifle innovation and impose significant costs on the AI industry, the House Judiciary Subcommittee on Courts, Intellectual Property, Artificial Intelligence, and the Internet recently held a hearing warning of the “Californication” of AI regulation, to which Governor Newsom offered a sharp response.

It remains to be seen whether other states will follow California’s lead in regulating AI safety and transparency, or whether federal lawmakers will step in to preempt such efforts with a nationwide standard. What is certain is that AI governance should remain top of mind for organizations that develop and deploy AI tools. We will continue to monitor and report on developments in this space and are happy to assist businesses with questions about how these newly emerging laws apply to their use of AI.

Author

Brian Hengesbaugh is Global Chair of Baker McKenzie's Data & Cyber Practice. Formerly special counsel to the general counsel of the US Department of Commerce, Brian played a key role in the development and implementation of the US Government’s domestic and international policy in the area of privacy and electronic commerce. In particular, he served on the core team that negotiated the US-EU Safe Harbor Privacy Arrangement (Safe Harbor) and earned a Medal Award from the US Department of Commerce for this service.

Author

Cynthia J. Cole is a partner in Baker McKenzie’s Commercial, Technology and Transactions and Data and Cyber practices, and co-chair of Baker Women California. A former CEO and General Counsel, Cynthia was, immediately before joining the Firm, Deputy Department Chair of the Corporate Section in the California offices of Baker Botts, where she built the technology transactions and data privacy practice. An intellectual property transactions attorney, Cynthia also has expertise in AI, digital transformation, data privacy, and cybersecurity strategy.

Author

Lothar has been helping companies in Silicon Valley and around the world take products, business models, intellectual property and contracts global for nearly 20 years. He advises on data privacy law compliance, information technology commercialization, interactive entertainment, media, copyrights, open source licensing, electronic commerce, technology transactions, sourcing and international distribution at Baker McKenzie in San Francisco and Palo Alto.

Author

Jonathan Tam is a partner in the San Francisco office focused on global privacy, advertising, intellectual property, content moderation and consumer protection laws. A qualified attorney in Canada and the U.S., he is passionate about helping clients achieve their commercial objectives while managing legal risks. He is well versed in the legal considerations that apply to many of the world’s cutting-edge technologies, including AI-driven solutions, wearables, connected cars, Web3, DAOs, NFTs, VR/AR, crypto, metaverses and the internet of everything.

Author

Helena practices international commercial law with a focus on assisting and advising technology companies with cross-border transactions, drafting and negotiating commercial agreements, and advising on global data privacy law compliance. Helena also advises software developers, e-commerce companies, and global mobile and web gaming developers on regulatory restrictions, intellectual property, contracting and data privacy.

Author

Adam Aft helps global companies navigate the complex issues regarding intellectual property, data, and technology in product counseling, technology, and M&A transactions. He leads the Firm's North America Technology Transactions group and co-leads the group globally. Adam regularly advises a range of clients on transformational activities, including the intellectual property, data and data privacy, and technology aspects of mergers and acquisitions, new product and service initiatives, and new trends driving business such as platform development, data monetization, and artificial intelligence.

Author

Justine focuses her practice on both proactive and reactive cybersecurity and data privacy services, representing clients in matters related to information governance, diligence in acquisitions and investments, incident preparedness and response, the California Consumer Privacy Act, privacy litigation, and cyber litigation.

Author

Keo McKenzie is a partner in Baker McKenzie’s Intellectual Property and Technology Practice Group (IPTech), based in the Firm’s Palo Alto office. Keo has significant experience advising multinational technology, life sciences, and healthcare companies on complex regulatory and transactional issues presented by digital health technologies.

Author

Avi Toltzis is a Knowledge Lawyer in Baker McKenzie's Chicago office.