Analyzing critical legal trends and developments across data, cyber, AI and digital regulations from around the world and beyond borders

On April 27, 2026, a federal court paused enforcement of Colorado’s SB 24‑205 (the “Colorado AI Act”), placing one of the country’s most comprehensive state AI laws on hold while lawmakers reconsider its timing and scope. The US District Court for the District of Colorado approved a joint request to suspend enforcement and stay litigation deadlines pending further legislative and regulatory developments. Under the court’s order, Colorado will not initiate investigations or enforcement actions under the law—most recently set to take effect on June 30, 2026—while the case remains stayed.

Background

The Colorado AI Act, enacted in 2024, generally regulates the use of high‑risk AI systems that make “consequential decisions” affecting areas such as employment, housing, credit, insurance, and essential government services. The statute imposes obligations on developers and deployers of these AI tools, including impact assessments, risk‑management programs, mitigation measures, and reporting requirements.

From the outset, however, the law drew criticism from technology companies and business groups. Even Colorado Governor Jared Polis, who signed the bill into law, admitted that he remained “concerned about the impact this law may have on an industry that is fueling critical technological advancements across our state for consumers and enterprises alike.” Since its enactment, Colorado officials have been considering a substantial rewrite of the law. Lawmakers considered amendment packages in a 2025 special legislative session, but with no consensus emerging, the effective date of the Colorado AI Act was instead postponed from February 2026 to June 30, 2026.

In March 2026, a policy working group convened with the governor’s support released a draft framework that would repeal and replace much of the existing statute. The proposal would eliminate some of the Colorado AI Act’s most onerous requirements, such as mandatory bias audits, impact assessments, and detailed risk‑management programs, and would instead shift toward a framework centered on transparency and disclosure measures. The proposal would also further postpone implementation of the law. No amendment package has been formally introduced in the Colorado legislature to date. Although the session is due to adjourn on May 13, 2026, bills commonly move quickly in a session’s final weeks, so a late‑session package remains possible. As things stand, there are no definitive legislative or policy updates with respect to the Colorado AI Act.

Challenges to the Colorado AI Act

With the law set to take effect this June, a leading social media and AI company filed suit in April seeking declaratory and injunctive relief, arguing that several provisions of the Colorado AI Act are unconstitutional. Weeks later, the US Department of Justice formally intervened in the litigation, contending that the law unconstitutionally “require[s] AI systems to incorporate discriminatory ideology.” The government’s intervention fulfills the administration’s promise to challenge state-level AI laws at odds with the national policy of establishing “global AI dominance through a minimally burdensome national policy framework for AI.”

With the possibility of legislative developments overtaking the litigation, the parties jointly moved to stay the proceedings pending the Colorado Attorney General’s promulgation of rules implementing the Colorado AI Act (or any successor legislation) and the resolution of the plaintiff’s forthcoming motion for a preliminary injunction to bar enforcement. The court granted the stay on April 27, 2026, suspending case deadlines and ordering Colorado not to initiate any investigations under the Colorado AI Act for alleged violations occurring on or before 14 days after the court’s ruling on the anticipated preliminary injunction.

Takeaways

For now, the stay offers businesses developing or using AI systems subject to the Colorado AI Act a short-term reprieve from enforcement activity under the law, but it does not resolve whether the law will (i) take effect as-is, (ii) be amended or replaced, or (iii) be enjoined on constitutional grounds. The ultimate scope and timing of Colorado’s AI regulatory framework remain unsettled and are unlikely to crystallize until both the legislative process and rulemaking are complete.

While enforcement of the law is currently paused, the pause may be short‑lived: it is contingent on developments in the litigation, which in turn depend on the progress of the Attorney General’s rulemaking and on the final form of the Colorado AI Act. At the same time, federal efforts—both to challenge state laws like the Colorado AI Act and to advance potentially preemptive federal legislation—introduce an additional layer of uncertainty. With so many moving, interlocking parts, companies should closely monitor legislative developments, rulemaking activity, and the progress of the litigation, while remaining prepared for the potential emergence of a revised transparency‑focused regulatory framework.

In any event, multinational companies continue to face a complex and rapidly evolving global AI policy environment. Companies should consider adopting AI governance programs based on frameworks such as the NIST AI Risk Management Framework to maintain lawful, responsible AI practices while remaining flexible enough to adapt to the ever-changing legal and regulatory environment.

Author

Brian Hengesbaugh is Global Chair of Baker McKenzie's Data & Cyber Practice. Formerly special counsel to the general counsel of the US Department of Commerce, Brian played a key role in the development and implementation of the US Government’s domestic and international policy in the area of privacy and electronic commerce. In particular, he served on the core team that negotiated the US-EU Safe Harbor Privacy Arrangement (Safe Harbor) and earned a Medal Award from the US Department of Commerce for this service.

Author

Lothar has been helping companies in Silicon Valley and around the world take products, business models, intellectual property and contracts global for nearly 20 years. He advises on data privacy law compliance, information technology commercialization, interactive entertainment, media, copyrights, open source licensing, electronic commerce, technology transactions, sourcing and international distribution at Baker McKenzie in San Francisco & Palo Alto.

Author

Adam Aft helps global companies navigate the complex issues regarding intellectual property, data, and technology in product counseling, technology, and M&A transactions. He leads the Firm's North America Technology Transactions group and co-leads the group globally. Adam regularly advises a range of clients on transformational activities, including the intellectual property, data and data privacy, and technology aspects of mergers and acquisitions, new product and service initiatives, and new trends driving business such as platform development, data monetization, and artificial intelligence.

Author

Susan Eandi is the head of Baker McKenzie's Global Employment and Labor Law practice group for North America, and chair of the California Labor & Employment practice group. She speaks regularly for organizations including ACC, Bloomberg, and M&A Counsel. Susan has been published extensively in various external legal publications in addition to handbooks/magazines published by the Firm. Susan has been recognized as a leader in employment law by The Daily Journal, Legal 500, and PLC, and is a Chambers-ranked attorney.

Author

Cristina Messerschmidt is a partner in the Data and Cyber practice group based in Chicago, advising global organizations on data privacy and cybersecurity compliance requirements, data security incident response, and legal issues related to AI.

Author

Keo McKenzie is a partner in Baker McKenzie's Intellectual Property and Technology Practice Group (IPTech), based in the Firm’s Palo Alto office. Keo has significant experience advising multinational technology, life sciences, and healthcare companies on complex regulatory and transactional issues presented by digital health technologies.

Author

Avi Toltzis is a Knowledge Lawyer in Baker McKenzie's Chicago office.

Author

Caroline Burnett is a Knowledge Lawyer in Baker McKenzie’s North America Employment & Compensation Group. Caroline is passionate about analyzing trends in US and global employment law and developing innovative solutions to help multinationals stay ahead of the curve.