On September 29, 2025, California Governor Gavin Newsom signed SB 53, enacting the Transparency in Frontier Artificial Intelligence Act (“TFAIA”). The TFAIA aims to ensure the safety and transparency of large AI models and is particularly significant because it comes from a state that many of the world’s leading AI companies call home. The enactment of the TFAIA is also notable because Governor Newsom vetoed a more restrictive and expansive predecessor bill last year, finding that the 2024 bill was overly broad, risked stifling innovation, and lacked evidence-based tailoring to actual risks. Senator Scott Wiener, who authored the TFAIA as well as the vetoed 2024 bill, modified several provisions in the TFAIA to avoid another veto in 2025 and succeeded in that respect.
Scope
The TFAIA requires developers of “frontier models” to make various disclosures and imposes additional obligations on “large frontier developers,” defined as frontier developers with annual gross revenues in excess of $500 million in the preceding calendar year. A frontier model is a foundation model trained using more than 10²⁶ integer or floating-point operations, a threshold that includes the compute used for the initial training run as well as for any subsequent fine-tuning, reinforcement learning, or other material modifications applied to the model.
Frontier AI Framework
Large frontier developers must implement, and clearly and conspicuously publish on their websites, a frontier AI framework that addresses ten enumerated topics, including: defining and assessing thresholds to determine whether their models could pose catastrophic risks; identifying and responding to critical safety incidents, such as loss of model control or unauthorized access to or exfiltration of model weights that causes harm; and implementing cybersecurity practices. The framework must also describe how the large frontier developer incorporates, among other things, national standards, international standards, and industry-consensus best practices. Under the statute, catastrophic risks are foreseeable, material risks that a frontier model, if used in certain ways, could cause the death of or injury to more than 50 people, or more than $1 billion in property damage, in a single incident.
The large frontier developer must review and, as appropriate, update its frontier AI framework at least annually, and if the developer makes any material modifications to its framework, it must publish the modified framework, along with the justification for the modifications, within 30 days of the change.
Transparency Reports
Before or concurrently with deploying a new frontier model, all frontier developers must also publish on their websites a Transparency Report that includes a mechanism for communicating with the developer, the release date of the frontier model, the languages the model supports, the modalities of model outputs, the model’s intended uses, and any generally applicable restrictions or conditions on use of the model. Large frontier developers must also summarize assessments of catastrophic risk conducted by the developer, including the results of those assessments and the involvement of third-party evaluators.
Reporting Obligations
Large frontier developers must submit summaries of any assessments of catastrophic risk to the California Office of Emergency Services (“OES”) every three months or on another reasonable schedule specified by the developer and communicated in writing to OES.
In addition to these regular summaries, the TFAIA requires frontier developers to report critical safety incidents to OES within 15 days of discovering the incident. For critical safety incidents that pose an imminent risk of death or serious physical injury, the reporting timeframe is shortened to 24 hours, and the developer may also need to report such incidents to law enforcement or public safety agencies.
Beginning in 2027, OES will produce an annual report with anonymized and aggregated information about critical safety incidents from the preceding year. Such reports may not include information that would compromise developers’ trade secrets or cybersecurity, public safety, or the national security of the United States.
Frontier developers may meet their critical safety incident reporting obligations by complying with federal laws, regulations, or guidance documents that impose substantially similar (or stricter) reporting requirements. To rely on this mechanism, however, OES must first adopt regulations designating those federal laws, regulations, or guidance documents as meeting the criteria, and the developer must disclose to OES its intention to rely on this mechanism.
Whistleblower Protections
The TFAIA also provides that a frontier developer shall not make, adopt, enforce, or enter into a rule, regulation, policy, or contract that prevents a covered employee from disclosing, or retaliates against a covered employee for disclosing, information to the Attorney General, a federal authority, a person with authority over the covered employee, or another covered employee with authority to investigate, discover, or correct the reported issue, if the covered employee has reasonable cause to believe that the information discloses either a violation of the frontier developer’s TFAIA obligations or that the frontier developer’s activities pose a specific and substantial danger to public health or safety resulting from a catastrophic risk. Frontier developers must provide covered employees with clear notice of their whistleblowing rights and responsibilities, including by posting and displaying notice in the workplace and by providing written notice to covered employees at least once a year and ensuring that the notice is received and acknowledged by all covered employees. The TFAIA defines covered employees as those employees responsible for assessing, managing, or addressing risks of critical safety incidents.
Enforcement
A frontier developer that fails to comply with the TFAIA’s requirements, including by failing to comply with its own frontier AI framework, may be liable for up to $1 million per violation. The California Attorney General enforces the TFAIA.
Discussion: The TFAIA in Context
In recent years, California has filled its code books with AI regulations addressing a diverse set of concerns. The TFAIA represents a comprehensive law focused on ensuring the safety of frontier models. As mentioned above, the TFAIA follows an earlier attempt by California lawmakers to impose transparency and safety obligations on developers of frontier models, which the California legislature passed as the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act, but which Governor Newsom ultimately vetoed.
While the new law shares some aspects with its predecessor (e.g., both include reporting and public notice requirements), several key changes helped the TFAIA avoid another veto. For example, the TFAIA jettisoned the earlier bill’s requirements that frontier developers implement a pre-training written safety and security protocol, provide full shutdown capability for frontier models, and publish yearly third-party audits of their compliance. The TFAIA also lengthens the reporting deadline for AI incidents from 72 hours to 15 days, subject to the exception for critical incidents that pose an imminent risk of death or serious physical injury.
Although these changes will be welcome to developers wary of onerous regulation, they come at a time when the AI governance landscape is becoming increasingly complex and difficult to navigate. For example, the New York state legislature recently passed its Responsible AI Safety and Education Act (the “RAISE Act”), which follows a similar approach to AI governance but differs in many key details. Concerned about the potential for regulation to stifle innovation and impose significant costs on the AI industry, the House Judiciary Subcommittee on Courts, Intellectual Property, Artificial Intelligence, and the Internet recently held a hearing warning of the “Californication” of AI regulation, to which Governor Newsom offered a sharp response.
Although it remains to be seen whether other states will follow California’s lead in regulating AI safety and transparency, or whether federal lawmakers will step in to preempt such efforts with a nationwide standard, AI governance should remain top of mind for organizations that develop and deploy AI tools. We will continue to monitor and report on developments in this space and are happy to assist businesses with questions about how these newly emerging laws apply to their use of AI.