In brief
On 12 June 2025, New York state lawmakers passed the Responsible AI Safety and Education Act (“RAISE Act”), which now awaits the governor’s signature. If enacted as written, the RAISE Act would establish the first set of legally mandated transparency standards for AI frontier models that are developed, deployed or operating in whole or in part in New York state. The New York Attorney General (AG) and the Division of Homeland Security and Emergency Services (DHSES) would have enforcement authority under the RAISE Act. If signed, the RAISE Act would take effect 90 days after signature.
The RAISE Act’s requirements
What is the RAISE Act’s stated goal?
The RAISE Act aims to prevent future AI models from unleashing “critical harm,” which the bill defines as death or serious injury of 100 or more individuals, or at least USD 1 billion in damages to monetary or property rights, caused or materially enabled by a large developer’s use, storage or release of a frontier model, through either of the following:
- The creation or use of a chemical, biological, radiological or nuclear weapon
- An AI model engaging in conduct that meets both of the following criteria: a) the model acts with no meaningful human intervention; and b) the conduct, if committed by a human, would constitute a crime under the penal law requiring intent, recklessness or gross negligence, or would involve the solicitation or aiding and abetting of such a crime
To whom would the RAISE Act apply?
The RAISE Act would apply only to “large developers” of “frontier models” that are developed, deployed, or operating in whole or in part in New York state. A “large developer” is a person who has trained at least one frontier model and has spent over USD 100 million in aggregate compute costs in doing so. A “frontier model” is either of the following: (a) an AI model trained using greater than 10²⁶ computational operations, with a compute cost exceeding USD 100 million; or (b) an AI model produced by applying “knowledge distillation” to a frontier model, provided that the compute cost exceeds USD 5 million. “Knowledge distillation” is any supervised learning technique that uses a larger AI model, or its output, to train a smaller AI model with capabilities similar or equivalent to those of the larger model.
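For teams assessing whether a model falls within these definitions, the following is a minimal sketch of the statutory thresholds. It is illustrative only: the function and field names are hypothetical, and the Act itself speaks only in terms of computational operations and compute cost.

```python
# Illustrative sketch of the RAISE Act's "frontier model" thresholds.
# Names are hypothetical; this is not statutory text.

FRONTIER_OPS_THRESHOLD = 10**26            # computational operations (clause (a))
FRONTIER_COST_THRESHOLD_USD = 100_000_000  # compute cost (clause (a))
DISTILLED_COST_THRESHOLD_USD = 5_000_000   # compute cost for a distilled model (clause (b))

def is_frontier_model(training_ops: float,
                      compute_cost_usd: float,
                      distilled_from_frontier: bool) -> bool:
    """Return True if a model appears to meet the summarized thresholds."""
    if training_ops > FRONTIER_OPS_THRESHOLD and compute_cost_usd > FRONTIER_COST_THRESHOLD_USD:
        return True  # clause (a): large training run
    if distilled_from_frontier and compute_cost_usd > DISTILLED_COST_THRESHOLD_USD:
        return True  # clause (b): knowledge distillation of a frontier model
    return False

# Example: a 3e26-operation run costing USD 150 million would fall under clause (a).
print(is_frontier_model(3e26, 150_000_000, distilled_from_frontier=False))  # True
```

Whether a developer is a “large developer” turns on the separate aggregate spend test described above; the sketch addresses only the model-level definition.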
Transparency requirements
The RAISE Act requires transparency on frontier model training and uses before deployment. Large developers must comply with the following transparency requirements:
1. Implement a written safety and security protocol. A safety and security protocol comprises documented technical and organizational protocols that:
- Describe reasonable protections and procedures that, if successfully implemented, would appropriately reduce the risk of critical harm
- Describe reasonable administrative, technical and physical cybersecurity protections for frontier models within the large developer’s control that, if successfully implemented, would appropriately reduce the risk of unauthorized access to, or misuse of, the frontier models leading to critical harm, including by sophisticated actors
- Describe in detail the testing procedure used to evaluate whether the frontier model poses an unreasonable risk of critical harm and whether the frontier model could be misused, be modified, be executed with increased computational resources, evade the control of its large developer or user, be combined with other software, or be used to create another frontier model in a manner that would increase the risk of critical harm
- Enable the large developer or a third party to comply with the requirements of the RAISE Act
- Designate senior personnel to be responsible for ensuring compliance
2. Retain an unredacted copy of the safety and security protocol. The unredacted copy includes records and dates of any updates or revisions and must be retained for the duration of the frontier model’s deployment, plus five years.
3. Conspicuously publish a copy of the safety and security protocol. The published copy may include appropriate redactions, which may be made for any of the following purposes:
- Protecting public safety to the extent the developer can reasonably predict such risks
- Protecting trade secrets
- Preventing the release of confidential information as required by state or federal law
- Protecting employee or customer privacy
- Preventing the release of information otherwise controlled by state or federal law
Beyond mandating that the protocol be published “conspicuously,” the RAISE Act does not specify the manner of publication.
4. Transmit a copy of the redacted safety and security protocol to the New York AG and the DHSES, if requested.
5. Record and retain, for the duration of the frontier model’s deployment plus five years, information on the specific tests and test results used in any assessment of the frontier model, in sufficient detail to allow third parties to replicate the testing procedure when reasonably possible.
6. Implement appropriate safeguards to prevent unreasonable risk of critical harm.
7. Conduct an annual review of all safety and security protocols mandated by the RAISE Act to account for any changes to the capabilities of their frontier models and to industry best practices and, if necessary, modify those protocols. If any material modifications are made, the large developer must publish the modified safety and security protocol.
Restrictions
The RAISE Act prohibits large developers from engaging in the following:
1. Deploying a frontier model if doing so would create an unreasonable risk of critical harm
2. Knowingly making false or materially misleading statements or omissions in, or in connection with, documents produced pursuant to the RAISE Act
Disclosures
The RAISE Act also includes disclosure requirements for each “safety incident” affecting the frontier model. A “safety incident” is a known occurrence of critical harm or an event that provides demonstrable evidence of an increased risk of critical harm, such as the following:
1. A frontier model autonomously engaging in behavior other than at the request of a user
2. Theft, misappropriation, malicious use, inadvertent release, unauthorized access, or escape of the model weights of a frontier model
3. The critical failure of any technical or administrative controls, including controls limiting the ability to modify a frontier model
4. Unauthorized use of a frontier model
Large developers must report these safety incidents to the AG and the DHSES within 72 hours of either learning about the safety incident or obtaining facts sufficient to reasonably believe that a safety incident has occurred. The disclosure shall include: (a) the date of the safety incident; (b) the reasons for classifying the event as a safety incident; and (c) a clear and concise description of the safety incident.
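For teams building internal incident-reporting workflows, the sketch below captures the three required disclosure fields and the 72-hour window. The class and function names are hypothetical and not drawn from the Act.

```python
# Hypothetical sketch of a safety-incident disclosure record under the RAISE Act.
# Field and function names are illustrative, not statutory.
from dataclasses import dataclass
from datetime import datetime, timedelta

REPORTING_WINDOW = timedelta(hours=72)

@dataclass
class SafetyIncidentDisclosure:
    incident_date: datetime        # (a) date of the safety incident
    classification_rationale: str  # (b) reasons for classifying the event as a safety incident
    description: str               # (c) clear and concise description of the incident

def reporting_deadline(learned_at: datetime) -> datetime:
    """Deadline for notifying the AG and DHSES, measured from when the developer
    learned of the incident or obtained facts sufficient to reasonably believe it occurred."""
    return learned_at + REPORTING_WINDOW
```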
Carveout
The RAISE Act would provide an express carveout for accredited colleges and universities engaging in academic research. However, if a person subsequently transfers to another person full intellectual property rights in the frontier model, including the right to resell it, and retains none of those rights, the recipient is deemed a “large developer” and becomes subject to all applicable responsibilities and requirements under the RAISE Act following the transfer.
Legal defense
The RAISE Act would provide that a defendant shall not be held liable for critical harm caused by an intervening human actor unless the developer’s activities were a substantial factor in enabling or increasing the likelihood of such harm. Liability applies only if the intervening conduct was reasonably foreseeable as a probable consequence of the developer’s activities and the harm could have been reasonably prevented or mitigated through alternative design, security measures or safety protocols.
Enforcement
The New York AG and the DHSES are the exclusive enforcement authorities, as the law does not provide for a private right of action. The RAISE Act authorizes the New York AG and the DHSES to recover the following, determined based on the severity of the violation of the transparency requirements: (a) a civil penalty not exceeding USD 10 million for a first violation and up to USD 30 million for any subsequent violation; and (b) injunctive or declaratory relief.
What’s next
New York, along with Texas and Colorado, may now contribute to an increasingly complex regulatory environment for AI (one that may shift further if a federal moratorium is enacted). New York modeled its bill on California’s vetoed AI bill but intentionally narrowed it to avoid similar opposition. Generally, we recommend that enterprises developing or deploying AI use cases take practical steps, including the following:
1. Establish an AI governance framework. Implement a comprehensive governance and risk management framework, including internal policies, procedures and systems for reviewing AI use, identifying risks and reporting concerns. This is particularly important, as the RAISE Act requires detailed documentation and designation of senior personnel to be responsible for ensuring compliance.
2. Conduct vendor and system due diligence. Evaluate AI vendors and systems before engagement or deployment. This includes assessing whether their model meets the criteria for designation as a “frontier model” under the RAISE Act.