Analyzing critical legal trends and developments across data, cyber, AI and digital regulations from around the world and beyond borders

On December 11, 2025, President Trump signed an Executive Order on “Ensuring A National Policy Framework For Artificial Intelligence” (the “Order”). The Order represents the Administration’s latest and most pointed attempt to stop and reverse the wave of state AI legislation that has emerged over the preceding year, which the Order asserts “creates a patchwork of 50 different regulatory regimes.” The Order raises the political stakes regarding state AI laws and creates uncertainty in the form of anticipated litigation, but does not instantly remove current or impending state AI law obligations for companies developing or deploying AI.

Background

There are numerous state AI laws that the Order seeks to curtail. Following Colorado’s enactment of its Consumer Protections in Interactions with AI Act (the “Colorado AI Act”) in 2024, a number of other states, including Texas and California, have passed significant AI governance statutes.

The Order echoes themes that have resonated throughout President Trump’s second term and advances his longstanding agenda of reining in this expanding network of state AI laws. Within days of returning to the White House, President Trump revoked President Biden’s 2023 Executive Order on AI and issued his own Executive Order, establishing a national AI objective to “sustain and enhance America’s global AI dominance.”

The President has identified state AI laws as a potential barrier to US pre-eminence. Congress considered a proposal for a ten-year moratorium on state AI laws as part of the “One Big Beautiful Bill Act,” but did not include such a moratorium in the final legislation. State AI regulation was again in the crosshairs in the White House’s July AI Action Plan, which directed the Office of Management and Budget to “consider a state’s AI regulatory climate when making funding decisions and limit funding if the state’s AI regulatory regimes may hinder the effectiveness of that funding or award.” Then, in December, members of Congress introduced (but did not pass) a provision in the National Defense Authorization Act that purported to pre-empt state and local AI laws.

In Detail

AI Litigation Taskforce: The Order instructs the US Attorney General to create an AI Litigation Taskforce to challenge state AI laws that are inconsistent with the policy aim of “sustain[ing] and enhanc[ing] the United States’ global AI dominance through a minimally burdensome national policy framework for AI,” on the basis that such laws unconstitutionally regulate interstate commerce or are pre-empted by existing federal regulation.

Federal Evaluation of State Laws: The Order directs the Secretary of Commerce to assist this effort by publishing an evaluation of existing state AI laws, identifying laws that conflict with the Order’s policy goals. At a minimum, the evaluation must identify laws that require AI to “alter their truthful outputs” or that may require AI developers or deployers to disclose or report information in potential violation of constitutionally protected free speech. We note that a federal court recently dismissed a First Amendment challenge to New York’s new law requiring disclosures of algorithmic pricing, finding that the law is rationally related to legitimate state interests.

Funding Conditions and Restrictions: The Order directs the Secretary of Commerce to issue a Policy Notice specifying that states with “onerous AI laws” shall be ineligible for funding under the Broadband Equity, Access, and Deployment (BEAD) Program and other federal discretionary grant programs.

Federal Pre-emption: The Order directs the Federal Communications Commission (FCC) to explore whether to adopt a “Federal reporting and disclosure standard for AI models” that would pre-empt state laws. The Order further directs the Federal Trade Commission (FTC) to issue a policy statement addressing whether Section 5 of the FTC Act, which bars “unfair or deceptive acts or practices in or affecting commerce,” pre-empts state AI laws that “require alterations to the truthful outputs of AI models.” These directives likely target anti-bias provisions in recent state laws such as the Colorado AI Act and the amended Illinois Human Rights Act, which makes it a civil rights violation for employers to use AI that subjects individuals to discrimination in employment decisions. The Order further instructs the Special Advisor for AI and Crypto and the Assistant to the President for Science and Technology to prepare a joint legislative recommendation establishing a federal AI policy framework that pre-empts conflicting state AI laws.

We note that Executive Orders in the United States, while binding, do not themselves carry the force of legislation; the authority to create new laws lies with Congress. Rather, Executive Orders typically direct federal agencies to take specific actions to promote the President’s policy goals.

Takeaways

Administration Breaks with Convention in Approach to State Technology Regulation: This Order embodies the Administration’s extraordinary approach to state regulation of new technologies. Historically, when new areas of law emerge from technological developments (e.g., electronic signatures, email marketing), the states act as a “laboratory” to develop and test various legislative approaches. The federal government, particularly Congress, watches these legislative developments and later decides whether to adopt federal law on the topic, and if so, whether to incorporate pre-emption into that federal legislative package. Sometimes Congress adopts federal legislation that fully or partially pre-empts a patchwork of state laws (e.g., E-SIGN for electronic signatures or CAN-SPAM for email marketing). Sometimes Congress adopts federal legislation as a floor that does not pre-empt state laws (e.g., HIPAA or GLBA for privacy). Sometimes Congress decides not to act or simply has not gained enough consensus to act (e.g., data breach notification and comprehensive consumer privacy).

This Order is different because, instead of waiting for Congress to act, the Administration is taking the lead to challenge any state law or regulation on AI that it considers unduly burdens interstate commerce, infringes on freedom of speech, gives rise to potential unfair or deceptive practices, or is otherwise prohibited or pre-empted by federal law or regulation. The Administration will also use leverage in the form of withholding funding from states that it considers to be inappropriately regulating AI. Although the Administration will work with Congress to try to adopt federal AI legislation with pre-emption, historically an Administration that viewed state legislation as inappropriate would proceed directly to working with Congress on federal pre-emption. Here, however, the parallel strategies of challenging and disincentivizing state regulation of AI could create uncertainty for business. This uncertainty will be resolved over a long period of time and could end up playing out through federal administrative actions and judicial proceedings.

State AI Laws Uncertain But Still Standing For Now: The Order proposes two main mechanisms to limit the impact of state AI laws: challenging state AI laws through the courts and pre-empting conflicting state AI laws through existing federal regulations and a new federal AI law. We will not know in the short term how the courts will rule on the legal challenges to state AI laws directed by the Order. It is also unclear whether a political consensus will emerge for a federal AI law in a midterm election year.

The Order and the anticipated cases to be commenced by the AI Litigation Taskforce are likely to receive pushback from states, which may in turn challenge the propriety of the Order through the courts. Until such state and local laws are successfully invalidated or pre-empted, they remain in force. Some of the most restrictive new state AI laws are due to become effective in coming months. For example, requirements set out in California’s Transparency in Frontier Artificial Intelligence Act and Illinois’ amendments to its Human Rights Act are due to come into effect on January 1, 2026, and compliance with the Colorado AI Act is due to follow on June 30, 2026. Businesses subject to these laws do not have the luxury of waiting to see if they will be invalidated or pre-empted, as litigation or lawmaking may take years to complete. In the meantime, given the uncertainties, we encourage companies to continue to develop their AI governance and policy frameworks with an eye toward state laws and regulations as they emerge.

Striking a Balance Between Federal and State AI Priorities: The Order suggests that “altering the truthful outputs” of an AI system to comply with anti-discrimination obligations may be a violation of Section 5 of the FTC Act. Existing state laws, however, already require that AI deployers and developers avoid certain discriminatory outcomes. Businesses that own or deploy AI tools with features such as filtering potentially discriminatory outputs may be caught in a conflict-of-laws dilemma, forcing them to weigh potential state liability against potential FTC enforcement.

Businesses should therefore work closely with counsel to identify potentially high-risk use cases and adopt strategies to comply with existing state AI laws (and broader anti-discrimination requirements) while avoiding conduct that may attract Section 5 scrutiny from the FTC. Specifically, when implementing AI governance in accordance with NIST or other governance frameworks, businesses should balance how they communicate about AI (as part of transparency) with their practices for managing bias.

Author

Adam Aft helps global companies navigate the complex issues regarding intellectual property, data, and technology in product counseling, technology, and M&A transactions. He leads the Firm's North America Technology Transactions group and co-leads the group globally. Adam regularly advises a range of clients on transformational activities, including the intellectual property, data and data privacy, and technology aspects of mergers and acquisitions, new product and service initiatives, and new trends driving business such as platform development, data monetization, and artificial intelligence.

Author

Cristina Messerschmidt is a senior associate in the Data and Cyber practice group based in Chicago, advising global organizations on data privacy and cybersecurity compliance requirements, data security incident response, and legal issues related to AI.

Author

Brian Hengesbaugh is Global Chair of Baker McKenzie's Data & Cyber Practice. Formerly special counsel to the general counsel of the US Department of Commerce, Brian played a key role in the development and implementation of the US Government’s domestic and international policy in the area of privacy and electronic commerce. In particular, he served on the core team that negotiated the US-EU Safe Harbor Privacy Arrangement (Safe Harbor) and earned a Medal Award from the US Department of Commerce for this service.

Author

Caroline Burnett is a Knowledge Lawyer in Baker McKenzie’s North America Employment & Compensation Group. Caroline is passionate about analyzing trends in US and global employment law and developing innovative solutions to help multinationals stay ahead of the curve.

Author

Cynthia J. Cole is a partner in Baker McKenzie’s Commercial, Technology and Transactions and Data and Cyber practices, and co-chair of Baker Women California. A former CEO and General Counsel, Cynthia was, just before joining the Firm, Deputy Department Chair of the Corporate Section in the California offices of Baker Botts, where she built the technology transactions and data privacy practice. An intellectual property transactions attorney, Cynthia also has expertise in AI, digital transformation, data privacy, and cybersecurity strategy.

Author

Helena practices international commercial law with a focus on assisting and advising technology companies with cross-border transactions, drafting and negotiating commercial agreements, and advising on global data privacy law compliance. Helena also advises software developers, e-commerce companies, and global mobile and web gaming developers on regulatory restrictions, intellectual property, contracting and data privacy.

Author

Robin Samuel is a partner in the Employment Practice Group of Baker McKenzie's Los Angeles office. Robin helps clients manage and resolve local and cross-border employment issues, whether through counseling or litigation. He advises clients on virtually all aspects of the employment relationship, including hiring and firing, wage and hour, discrimination, harassment, contract disputes, restrictive covenants, employee raiding, and trade secret matters. Clients trust Robin to handle their most sensitive and complex employment issues.

Author

Susan Eandi is the head of Baker McKenzie's Global Employment and Labor Law practice group for North America, and chair of the California Labor & Employment practice group. She speaks regularly for organizations including ACC, Bloomberg, and M&A Counsel. Susan has been published extensively in various external legal publications in addition to handbooks/magazines published by the Firm. Susan has been recognized as a leader in employment law by The Daily Journal, Legal 500, PLC and is a Chambers ranked attorney.

Author

Lothar has been helping companies in Silicon Valley and around the world take products, business models, intellectual property and contracts global for nearly 20 years. He advises on data privacy law compliance, information technology commercialization, interactive entertainment, media, copyrights, open source licensing, electronic commerce, technology transactions, sourcing and international distribution at Baker McKenzie in San Francisco & Palo Alto.

Author

Keo McKenzie is a partner in Baker McKenzie's Intellectual Property and Technology Practice Group (IPTech), based in the Firm’s Palo Alto office. Keo has significant experience advising multinational technology, life sciences, and healthcare companies with complex matters related to regulatory and transactional issues presented by digital health technologies.

Author

Mark Weiss is a partner in the Firm's North America Antitrust & Competition Practice Group and the North American Chair of the Practice’s Litigation Task Force.

Author

Jerome has extensive experience representing clients in government litigation and enforcement investigations before the SEC, DOJ, various United States Attorneys’ Offices, and the Commodity Futures Trading Commission.

Author

Justine focuses her practice on both proactive and reactive cybersecurity and data privacy services, representing clients in matters related to information governance, diligence in acquisitions and investments, incident preparedness and response, the California Consumer Privacy Act, privacy litigation, and cyber litigation.

Author

Nancy is a partner in Baker McKenzie's North America Litigation and Government Enforcement Group, based in the Los Angeles office. Nancy is a commercial litigator focusing on class actions, complex business disputes, and investigations. Her experience includes representing clients in commercial matters based on contract claims, unfair competition, false advertising, privacy law violations, fraud, and related business torts in both court litigation and arbitration.