Analyzing critical legal trends and developments across data, cyber, AI and digital regulations from around the world and beyond borders

On July 23, the White House unveiled its much-anticipated AI Action Plan. The Action Plan follows President Trump’s Executive Order 14179 of January 23 on “Removing Barriers to American Leadership in Artificial Intelligence”—which directed the development of the Action Plan within 180 days—and subsequent consultation with stakeholders to “define the priority policy actions needed to sustain and enhance America’s AI dominance, and to ensure that unnecessarily burdensome requirements do not hamper private sector AI innovation.” This update provides a summary of the Action Plan and key considerations for businesses developing or deploying AI.

The Action Plan is structured around three pillars: (I) Accelerating AI Innovation, (II) Building American AI Infrastructure, and (III) Leading in International AI Diplomacy and Security. Although the AI Action Plan is not legally binding in itself, each pillar contains a number of policy recommendations and actions, which will subsequently need to be implemented by various government agencies and institutions.

Pillar I – Accelerating AI Innovation

Pillar I focuses on reducing the impact of regulation that may hamper AI development. To this end, the Action Plan instructs the Office of Management and Budget to “consider a state’s AI regulatory climate when making funding decisions and limit funding if the state’s AI regulatory regimes may hinder the effectiveness of that funding or award.” Pillar I also emphasizes the need for workforce policies that support the transition to an AI economy, citing AI literacy and skill development among key workforce priorities. The Action Plan also calls for federal- and state-led efforts to evaluate the impact of AI on the labor market. To promote advancements in American AI technologies, Pillar I specifically calls for investment in open-source AI models, support for the preparation of high-quality datasets for use in model training, and acceleration of the federal government’s adoption of AI.

Pillar II – Building American AI Infrastructure

Pillar II of the Action Plan includes actions aimed at strengthening the country’s AI infrastructure. The Action Plan seeks to streamline the expansion of America’s semiconductor manufacturing capabilities by removing extraneous policy requirements for CHIPS-funded semiconductor manufacturing operations. Pillar II also focuses on the fortification of AI systems and other critical infrastructure assets against cybersecurity threats. In order to achieve these goals, the Action Plan proposes various measures to enhance cybersecurity protections, such as sharing AI-security threat intelligence across critical infrastructure sectors and developing standards to facilitate the development of resilient and secure AI systems.

Pillar III – Leading in International AI Diplomacy and Security

Under Pillar III, the Action Plan outlines the strategic role of AI in national security and focuses on actions to prevent US adversaries from gaining access to AI technologies that may be used to harm American interests. This pillar suggests various export control measures, such as preventing the diversion of advanced AI compute technologies and closing loopholes in existing export control frameworks around semiconductor sub-system manufacturing. The Action Plan also returns to cybersecurity by suggesting partnering with frontier AI developers to evaluate chemical, biological, radiological, nuclear, or explosives (CBRNE) weapons and cybersecurity risks arising from the use of adversaries’ AI systems in critical infrastructure.

Key Considerations for Businesses

  • Federal vs. State Approach to AI: The Action Plan outlines a different approach to AI regulation than that espoused by certain state policymakers. The Action Plan’s call for the elimination of regulatory barriers to AI innovation stands in apparent tension with new state AI laws. It remains to be seen whether the Action Plan’s call for deregulation, and its threat to use federal purse strings in furtherance of this goal, will change the course of state action. In the meantime, businesses should continue to focus on complying with applicable state laws and regulations and assessing the impact of new and upcoming requirements (including significant recent developments in California, Texas and New York) on their development and use of AI.
  • AI Investments to Continue: The Action Plan’s emphasis on investing in AI progress is consistent with and complementary to current levels of investment in AI, with AI assets being valued at a premium. Across sectors, AI continues to be a key area of focus for investment, development, and deployment. Businesses wishing to take advantage of these conditions should focus on developing data and AI strategies at the enterprise level and aligning those strategies with a flexible and sustainable AI governance program that accounts for constant expansion and shifting priorities.
  • Geopolitics Driving AI Race: Geopolitical factors continue to drive technology policy and regulation.
    • The Action Plan comes as the US Department of Justice begins to enforce its new rule regulating access to certain sensitive US data by “countries of concern,” and the Action Plan shares many common themes with that new rule.
    • To prevent adversaries from gaining access to transformative AI technologies, the Action Plan promotes strengthening the nation’s export control strategy. It proposes the implementation of “creative” approaches to export control enforcement around AI, including location verification features on advanced AI-capable US chips and expanding end-use monitoring in high-risk regions. It also calls for the enhancement of export controls on semiconductors to address gaps in existing US export controls, such as new controls on semiconductor manufacturing sub-systems.
    • The Action Plan encourages enforcement collaboration with allies to monitor and prevent the diversion of advanced AI compute via third countries. It further recommends the development of a “carrot and stick” technology diplomacy strategic plan to encourage allies to adopt complementary, plurilateral export controls and promote strategic alignment or face expansive extraterritorial US controls on their foreign-made products via the “Foreign Direct Product Rule” or through punitive secondary tariffs.
    • The plan instructs the Department of Commerce to establish a program to gather proposals for “full-stack AI export packages” and coordinate with other federal agencies to facilitate export deals that meet security requirements. This coincides with an Executive Order on Promoting the Export of the American AI Technology Stack issued on the same day and requiring the Secretary of Commerce to establish and implement an American AI Exports Program within 90 days.
    • These recommendations come as the industry awaits replacement export control rules following the Trump Administration’s announcement in May 2025 that it would rescind, and would not enforce, the Biden Administration’s expansive AI Diffusion Rule issued in January 2025. In the area of biosecurity, picking up on ongoing national security concerns related to advancements in biotechnology, the Action Plan recommends adoption of enforcement and screening mechanisms by nucleic acid synthesis providers to guard against risks posed by fraudulent or malicious actors.
  • People at the Center: The Action Plan places American workers at the center of its AI strategy, asserting that the infrastructure buildout will generate jobs and economic opportunity. It prioritizes retraining and upskilling through coordinated efforts led by the Department of Labor, Department of Education, National Science Foundation, and Department of Commerce. To facilitate retraining, the Action Plan also proposes that employers may be able to offer tax-free reimbursement to their workforce for AI-related training. Additionally, the Action Plan recommends that the National Institute of Standards and Technology (NIST) revisit the NIST AI Risk Management Framework to eliminate references to, among other things, “diversity, equity, and inclusion.” Businesses should continue to monitor relevant policy developments, particularly those relevant to the use of AI in recruiting, promoting, or otherwise making changes to the workforce. At the same time, businesses should continue to focus on principles-based AI governance.
  • Cyber Threats & Risk Shape AI Strategy: The White House makes clear what CISOs and technology lawyers have known for years: the emergence of advanced AI is fundamentally reshaping the threat landscape and it is more important than ever for both public and private sector to foster a culture of cyber readiness and resilience. The roadmap to readiness is doubling down on investments in people, processes and technologies to safeguard AI and critical infrastructure. The plan recasts the newly rebranded Center for AI Standards and Innovation (CAISI) as the hub for public-private cyber collaboration and information-sharing, effectively filling the role previously held by CISA. CAISI is tasked with fostering collaboration between public and private sectors, coordinating AI security standards, incident response frameworks, and national security evaluations, including DARPA-led resilience research. It also recommends establishing an Information Sharing and Analysis Center (AI-ISAC) led by Department of Homeland Security and the Office of the National Cyber Director to guide industry on AI-specific vulnerabilities. Businesses using and developing AI enabled technologies can get ahead of these threats by preparing insider threat programs and incorporating specific AI incident response actions into their incident response plans and playbooks.

While the Action Plan presents a comprehensive vision for AI under the current administration, it is just the beginning of the story. How federal agencies implement the Action Plan, and how state lawmakers, the private sector, and America’s global partners respond, will determine its course. We will continue to monitor and comment on policy, legal, and regulatory developments in the AI space.

Author

Brian Hengesbaugh is Global Chair of Baker McKenzie's Data & Cyber Practice. Formerly special counsel to the general counsel of the US Department of Commerce, Brian played a key role in the development and implementation of the US Government’s domestic and international policy in the area of privacy and electronic commerce. In particular, he served on the core team that negotiated the US-EU Safe Harbor Privacy Arrangement (Safe Harbor) and earned a Medal Award from the US Department of Commerce for this service.

Author

Adam Aft helps global companies navigate the complex issues regarding intellectual property, data, and technology in product counseling, technology, and M&A transactions. He leads the Firm's North America Technology Transactions group and co-leads the group globally. Adam regularly advises a range of clients on transformational activities, including the intellectual property, data and data privacy, and technology aspects of mergers and acquisitions, new product and service initiatives, and new trends driving business such as platform development, data monetization, and artificial intelligence.

Author

Justine focuses her practice on both proactive and reactive cybersecurity and data privacy services, representing clients in matters related to information governance, diligence in acquisitions and investments, incident preparedness and response, the California Consumer Privacy Act, privacy litigation, and cyber litigation.

Author

Cynthia J. Cole is a partner in Baker McKenzie’s Commercial, Technology and Transactions and Data and Cyber practices, and co-chair of Baker Women California. A former CEO and General Counsel, just before joining the Firm, Cynthia was Deputy Department Chair of the Corporate Section in the California offices of Baker Botts where she built the technology transactions and data privacy practice. An intellectual property transactions attorney, Cynthia also has expertise in AI, digital transformation, data privacy, and cybersecurity strategy.

Author

Keo McKenzie is a partner in Baker McKenzie's Intellectual Property and Technology Practice Group (IPTech), based in the Firm’s Palo Alto office. Keo has significant experience advising multinational technology, life sciences, and healthcare companies on complex regulatory and transactional issues presented by digital health technologies.

Author

Alison Stafford Powell has considerable experience counseling companies on cross-border outbound trade compliance in the areas of export controls, trade and financial sanctions, anti-terrorism controls, anti-corruption and anti-money laundering rules, US anti-boycott laws, and US foreign investment restrictions. With a background also in EU and UK trade restrictions, she helps non-US companies navigate conflicting compliance obligations and risks under US and EU trade rules. She is a dual US/English qualified lawyer and has worked in the Firm's London, Washington, DC and Palo Alto offices since 1996.

Author

Susan Eandi is the head of Baker McKenzie's Global Employment and Labor Law practice group for North America, and chair of the California Labor & Employment practice group. She speaks regularly for organizations including ACC, Bloomberg, and M&A Counsel. Susan has been published extensively in various external legal publications in addition to handbooks/magazines published by the Firm. Susan has been recognized as a leader in employment law by The Daily Journal, Legal 500, PLC and is a Chambers ranked attorney.

Author

Robin Samuel is a partner in the Employment Practice Group of Baker McKenzie's Los Angeles office. Robin helps clients manage and resolve local and cross-border employment issues, whether through counseling or litigation. He advises clients on virtually all aspects of the employment relationship, including hiring and firing, wage and hour, discrimination, harassment, contract disputes, restrictive covenants, employee raiding, and trade secret matters. Clients trust Robin to handle their most sensitive and complex employment issues.

Author

Cristina Messerschmidt is a senior associate in the Data and Cyber practice group based in Chicago, advising global organizations on data privacy and cybersecurity compliance requirements, data security incident response, and legal issues related to AI.