On December 11, 2025, President Trump signed an Executive Order on “Ensuring A National Policy Framework For Artificial Intelligence” (the “Order”). The Order represents the Administration’s latest and most pointed attempt to stop and reverse the wave of state AI legislation that has emerged over the preceding year, which the Order asserts “creates a patchwork of 50 different regulatory regimes.” The Order raises the political stakes regarding state AI laws and creates uncertainty in the form of anticipated litigation, but does not instantly remove current or impending state AI law obligations for companies developing or deploying AI.
Background
The Order seeks to curtail a growing body of state AI laws. Following Colorado’s enactment of its Consumer Protections in Interactions with AI Act (the “Colorado AI Act”) in 2024, a number of other states, including Texas and California, have passed significant AI governance statutes.
The Order echoes themes that have resonated throughout President Trump’s second term and advances his longstanding agenda of reining in this expanding network of state AI laws. Within days of returning to the White House, President Trump revoked President Biden’s 2023 Executive Order on AI and issued his own, establishing a national AI objective to “sustain and enhance America’s global AI dominance.”
The President has identified state AI laws as a potential barrier to US pre-eminence. Congress considered a ten-year moratorium on state AI laws as part of the “One Big Beautiful Bill Act” but did not include the moratorium in the final legislation. State AI regulation was again in the crosshairs in the White House’s July AI Action Plan, which directed the Office of Management and Budget to “consider a state’s AI regulatory climate when making funding decisions and limit funding if the state’s AI regulatory regimes may hinder the effectiveness of that funding or award.” Then, in December, members of Congress introduced (but did not pass) a provision in the National Defense Authorization Act that purported to pre-empt state and local AI laws.
In Detail
AI Litigation Taskforce: The Order instructs the US Attorney General to create an AI Litigation Taskforce to challenge state AI laws that are inconsistent with the policy aim of “sustain[ing] and enhanc[ing] the United States’ global AI dominance through a minimally burdensome national policy framework for AI,” on the basis that such laws unconstitutionally regulate interstate commerce or are pre-empted by existing federal regulation.
Federal Evaluation of State Laws: The Order directs the Secretary of Commerce to assist this effort by publishing an evaluation of existing state AI laws, identifying laws that conflict with the Order’s policy goals. At a minimum, the evaluation must identify laws that require AI models to “alter their truthful outputs” or that may require AI developers or deployers to disclose or report information in potential violation of constitutionally protected free speech. We note that a federal court recently dismissed a First Amendment challenge to New York’s new law requiring disclosures of algorithmic pricing, finding that the law is rationally related to legitimate state interests.
Funding Conditions and Restrictions: The Order directs the Secretary of Commerce to issue a Policy Notice specifying that states with “onerous AI laws” shall be ineligible for funding under the Broadband Equity, Access, and Deployment (BEAD) Program and other federal discretionary grant programs.
Federal Pre-emption: The Order directs the Federal Communications Commission (FCC) to explore whether to adopt a “Federal reporting and disclosure standard for AI models” that would pre-empt state laws. The Order further directs the Federal Trade Commission (FTC) to issue a policy statement addressing whether Section 5 of the FTC Act, which bars “unfair or deceptive acts or practices in or affecting commerce,” pre-empts state AI laws that “require alterations to the truthful outputs of AI models.” These directives are likely aimed at anti-bias provisions in recent state laws such as the Colorado AI Act and the amendments to the Illinois Human Rights Act, which make it a civil rights violation for employers to use AI that subjects individuals to discrimination in employment decisions. The Order further instructs the Special Advisor for AI and Crypto and the Assistant to the President for Science and Technology to prepare a joint legislative recommendation establishing a federal AI policy framework that pre-empts conflicting state AI laws.
We note that Executive Orders in the United States, while binding, do not themselves carry the force of legislation; the authority to create new laws lies with Congress. Rather, Executive Orders typically direct federal agencies to take specific actions to promote the President’s policy goals.
Takeaways
Administration Breaks with Convention in Approach to State Technology Regulation: This Order embodies the Administration’s extraordinary approach to state regulation of new technologies. Historically, when new areas of law emerge from technological developments (e.g., electronic signatures, email marketing), the states act as “laboratories,” developing and testing various legislative approaches. The federal government, particularly Congress, watches these developments and later decides whether to adopt federal law on the topic and, if so, whether to incorporate pre-emption into that federal legislative package. Sometimes Congress adopts federal legislation that fully or partially pre-empts a patchwork of state laws (e.g., E-SIGN for electronic signatures or CAN-SPAM for email marketing). Sometimes Congress adopts federal legislation as a floor that does not pre-empt state laws (e.g., HIPAA or GLBA for privacy). Sometimes Congress decides not to act or simply has not gained enough consensus to act (e.g., data breach notification and comprehensive consumer privacy).
This Order is different because, instead of waiting for Congress to act, the Administration is taking the lead in challenging any state AI law or regulation that it considers unduly burdens interstate commerce, infringes on freedom of speech, gives rise to unfair or deceptive practices, or is otherwise prohibited or pre-empted by federal law or regulation. The Administration will also use leverage in the form of withholding funding from states that it considers to be inappropriately regulating AI. Although the Administration will work with Congress to try to adopt federal AI legislation with pre-emption, an Administration that viewed state legislation as inappropriate would historically have proceeded directly to working with Congress on federal pre-emption. Here, the parallel strategies of challenging and disincentivizing state AI regulation could create uncertainty for businesses. That uncertainty will be resolved only over a long period of time and could end up playing out through federal administrative actions and judicial proceedings.
State AI Laws Uncertain But Still Standing For Now: The Order proposes two main mechanisms to limit the impact of state AI laws: challenging state AI laws through the courts and pre-empting conflicting state AI laws through existing federal regulations and a new federal AI law. We will not know in the short term how the courts will rule on the legal challenges to state AI laws directed by the Order. It is also unclear whether a political consensus will emerge for a federal AI law in a midterm election year.
The Order and the anticipated cases to be commenced by the AI Litigation Taskforce are likely to receive pushback from states, which may in turn challenge the propriety of the Order through the courts. Until such state and local laws are successfully invalidated or pre-empted, they remain in force. Some of the most restrictive new state AI laws are due to become effective in the coming months. For example, requirements set out in California’s Transparency in Frontier Artificial Intelligence Act and Illinois’ amendments to its Human Rights Act are due to come into effect on January 1, 2026, and compliance with the Colorado AI Act is due to follow on June 30, 2026. Businesses subject to these laws do not have the luxury of waiting to see whether they will be invalidated or pre-empted, as litigation or lawmaking may take years to complete. In the meantime, given the uncertainties, we encourage companies to continue developing their AI governance and policy frameworks with an eye toward state laws and regulations as they emerge.
Striking a Balance Between Federal and State AI Priorities: The Order suggests that “altering the truthful outputs” of an AI system to comply with anti-discrimination obligations may be a violation of Section 5 of the FTC Act. Existing state laws, however, already require AI developers and deployers to avoid certain discriminatory outcomes. Businesses that own or deploy AI tools with features such as filtering potentially discriminatory outputs may be caught in a conflict-of-laws dilemma, forced to weigh potential state liability against potential FTC enforcement.
Businesses should therefore work closely with counsel to identify potentially high-risk use cases and adopt strategies that comply with existing state AI laws (and broader anti-discrimination requirements) while avoiding conduct that may attract Section 5 scrutiny from the FTC. Specifically, when implementing AI governance in accordance with NIST or other governance frameworks, businesses should balance how they communicate about AI (as part of transparency) with their practices for managing bias.