
On September 26, 2025, the Office of Science and Technology Policy (OSTP) published a request for information (RFI) seeking input to identify existing Federal statutes, regulations, agency rules, guidance, forms, and administrative processes that unnecessarily hinder the development, deployment, and adoption of artificial intelligence (AI) technologies within the United States.

This RFI, which focuses on identifying the regulatory and procedural barriers that unnecessarily slow safe, beneficial AI deployment, is the next step under the White House AI Action Plan of July 23, 2025, which is aimed at maintaining American leadership in AI innovation. The Action Plan directed the OSTP to “launch a Request for Information [RFI] from businesses and the public at large about current Federal regulations that hinder AI innovation or adoption, and work with relevant Federal agencies to take appropriate action.”

The OSTP is focused on streamlining and modernizing regulation, rather than eliminating it, and states in part:

The realization of the benefits from AI applications cannot be done through complete de-regulation, but requires policy frameworks, both regulatory and non-regulatory. Suitable policy frameworks enable innovation while safeguarding the public interest. This is critical to foster public trust in AI technologies, leading to broader deployment and faster adoption. Such policy frameworks may include statutory and regulatory requirements, technical standards, guidance documents, voluntary frameworks, and other instruments.

Most existing Federal regulatory regimes and policy mechanisms were developed before the rise of modern AI technologies. As a result, they often rest on assumptions about human-operated systems that are not appropriate for AI-enabled or AI-augmented systems. These assumptions include, but are not limited to:

  • Decision-Making and Explainability—Decisions are made, documented, and explained in ways where the processes and rationale are traceable to a human actor.
  • Liability and Accountability—Allocation of legal responsibility and remedial frameworks rests with human actors or clearly identifiable organizational decision points.
  • Human Oversight and Intervention—Prescriptive requirements for human oversight, review, intervention, or continuous supervision in operational processes.
  • Data Practices—Data collection, retention, provenance, sharing, and permitted use cases that do not account for the scale, reuse, or training dynamics characteristic of AI systems.
  • Testing, Validation, and Certification—Approaches to testing, approval, and post-market oversight designed for static products or human-delivered services, rather than adaptive or continuously learning systems.

Specifically, the OSTP invites responses to one or more of the following questions:

(i) What AI activities, innovations, or deployments are currently being inhibited, delayed, or otherwise constrained due to Federal statutes, regulations, or policies? Please describe the specific barrier and the AI capability or application that would be enabled if it were addressed. The barriers may directly hinder AI development or adoption, or indirectly hinder it through incompatible policy frameworks.

(ii) What specific Federal statutes, regulations, or policies present barriers to AI development, deployment, or adoption in your sector? Please identify the relevant rules and authority with specificity, including a cite to the Code of Federal Regulations (CFR) or the U.S. Code (U.S.C.) where applicable.

(iii) Where existing policy frameworks are not appropriate for AI applications, what administrative tools (e.g., waivers, exemptions, experimental authorities) are available, but underutilized? Please identify the administrative tools with specificity, citing the CFR or U.S.C. where applicable.

(iv) Where specific statutory or regulatory regimes are structurally incompatible with AI applications, what modifications would be necessary to enable lawful deployment while preserving regulatory objectives?

(v) Where barriers arise from a lack of clarity or interpretive guidance on how existing rules cover AI activities, what forms of clarification (e.g., standards, guidance documents, interpretive rules) would be most effective?

(vi) Are there barriers that arise from organizational factors that impact how Federal statutes, regulations, or policies are used or not used? How might Federal action appropriately address them?

Interested persons have until October 27, 2025, to submit feedback.

What’s next?

The RFI is not the only item from the AI Action Plan that aims to dismantle unnecessary regulatory barriers to innovation in AI technologies. In addition to the RFI, the Action Plan recommends that the Office of Management and Budget (OMB) coordinate with Federal agencies to revise or repeal regulations that “unnecessarily hinder AI development or deployment.” The OMB is also tasked with leading an effort to ensure that agencies that have AI-related discretionary funding consider a state’s AI regulatory climate when making funding decisions. Additionally, the Federal Trade Commission is to review open cases and past orders to identify those that unduly burden AI innovation.

Organizations that develop or deploy AI should consider whether to engage in the OSTP consultation process and should continue to monitor actions taken under the White House AI Action Plan to assess the impacts on their business.

Author

Cynthia J. Cole is a partner in Baker McKenzie’s Commercial, Technology and Transactions and Data and Cyber practices, and co-chair of Baker Women California. A former CEO and General Counsel, Cynthia was, just before joining the Firm, Deputy Department Chair of the Corporate Section in the California offices of Baker Botts, where she built the technology transactions and data privacy practice. An intellectual property transactions attorney, Cynthia also has expertise in AI, digital transformation, data privacy, and cybersecurity strategy.

Author

Adam Aft helps global companies navigate the complex issues regarding intellectual property, data, and technology in product counseling, technology, and M&A transactions. He leads the Firm's North America Technology Transactions group and co-leads the group globally. Adam regularly advises a range of clients on transformational activities, including the intellectual property, data and data privacy, and technology aspects of mergers and acquisitions, new product and service initiatives, and new trends driving business such as platform development, data monetization, and artificial intelligence.

Author

Avi Toltzis is a Knowledge Lawyer in Baker McKenzie's Chicago office.