Heading into 2026, the intersection of technology, regulation and geopolitical dynamics will continue to reshape the global data and cyber landscape. Global businesses will face novel challenges as regulatory regimes evolve, cyber threats intensify, cross-border data flows come under greater scrutiny, and companies continue to rapidly develop and deploy AI. Our top issues to watch for global data and cyber legal risks are as follows:
- Geopolitical risks will increase in intensity. From a data and cyber perspective, we expect a continued focus on national security, restrictions on cross-border data flows, and protections for critical infrastructure. In the United States, we expect initial enforcement actions under the new US DOJ Rule on Preventing Access to Americans’ Bulk Sensitive Personal Data and United States Government-Related Data by Countries of Concern (“DOJ Rule”), and the finalization of the Cyber Incident Reporting for Critical Infrastructure Act (“CIRCIA”) regulations on reporting obligations. We anticipate that China may respond with continued tighter implementation – particularly with respect to China-to-US data transfers – of its Cyber Security Law (“CSL”), Data Security Law (“DSL”) and Personal Information Protection Law (“PIPL”). In Europe, geopolitical tensions are reflected in several regulatory moves. The Network and Information Systems 2 Directive (“NIS 2”) – still in the process of being fully transposed in several Member States – imposes tougher cybersecurity obligations on essential and important entities, while the Cyber Resilience Act (“CRA”) introduces mandatory security requirements for products with digital elements, aiming to strengthen supply chain resilience. In financial services, the EU Digital Operational Resilience Act (“DORA”) will trigger its first oversight cycle for critical ICT providers in 2026, adding scrutiny of cloud and tech dependencies amid rising concerns over concentration and sovereignty. And in other regions, such as Latin America, new restrictions will come into force alongside enhanced enforcement activity, such as the reform to the Chilean Personal Data Protection Act and the Brazilian national data protection authority’s plans for increased enforcement.
Multinationals will need to make short-, medium-, and long-term strategic decisions in a world where criminal and civil liability, and other potentially severe legal consequences – beyond typical commercial data protection laws – can affect global business transfers and investments.
- The European Court of Justice will revisit transatlantic data transfers. The European Court of Justice (“ECJ”) will take up its review of the EU-US Data Privacy Framework (“DPF”). There are good reasons to be optimistic that the “third time is the charm” for this transatlantic data privacy solution, but if the mechanism is invalidated, there will be broad regulatory risk for all transatlantic data transfers. All companies engaging in EU-to-US data transfers (whether using the DPF, standard contractual clauses, binding corporate rules, or otherwise) rely on the DPF adequacy finding for their Transfer Impact Assessments (“TIAs”) and associated government access analysis. If the ECJ invalidates the DPF, the US government could easily withdraw certain privacy protections on surveillance (e.g., the Data Protection Review Court and other protections in EO 14086), resulting in fewer privacy protections for EU residents and even more limited options for EU and US companies to manage privacy risks in transatlantic transfers. Meanwhile, European data protection authorities continue to emphasize robust TIAs and fallback mechanisms (SCCs, BCRs) as best practice. Organizations should also anticipate stricter scrutiny from national authorities on government access assessments, particularly in light of evolving jurisprudence and coordinated enforcement trends across the EU.
- California sets the bar high in the US for privacy risk assessments, cyber audits, and automated decision-making technologies. Starting January 1, 2026, new implementing regulations under the California Consumer Privacy Act (“CCPA”) impose obligations to document “risk assessments” in accordance with state regulations for data processing activities that are deemed to pose a “substantial risk” to privacy for consumers, employees, business partners, and other data subjects. Examples of such substantial risks include selling/sharing personal information for digital advertising, processing of sensitive data, and use of Automated Decision-Making Technologies (“ADMT”) for significant decisions. In successive waves, the CCPA regulations will also require businesses to implement notice, choice, access, and related rules for ADMT, and to conduct independent “reasonable security” cyber audits to verify that specific technical and organizational controls are in place. The CCPA regulations further require attestations that must be signed by a member of the executive team under penalty of perjury, with mandatory filings starting in 2028. Other states are quickly following suit, with new laws coming into force in Indiana, Kentucky, and Rhode Island in 2026, and more than 20 US states having adopted comprehensive privacy legislation. California also leads in privacy litigation under laws like the California Invasion of Privacy Act (“CIPA”), under which plaintiffs allege that cookies, pixels and other tracking technologies constitute unlawful wiretapping. CIPA legislative reform is expected to resume in the 2026 legislative cycle, but until then demand letters and complaints will persist.
- More enforcement actions will emerge under US state privacy regulations. Given 2025’s federal deregulation, state authorities are increasingly banding together to pursue companies that they believe do not adhere to state and federal privacy laws on targeted advertising, sales/sharing of data, children’s privacy, and more. On some key issues, state legislatures have super-charged local authorities to adopt aggressive positions that are difficult to meet in practice, creating practical challenges for targeted businesses.
- Laws addressing online harms and platform accountability gain momentum worldwide. Jurisdictions will continue to pursue regulation to curb AI-generated deepfakes and misinformation, and to protect minors online. A key trend is the shift of responsibility from individual users to online service providers, app stores, and platforms, exemplified by frameworks such as Europe’s Digital Services Act (“DSA”), the UK’s Online Safety Act, the Utah Social Media Regulation Act, and Australia’s Online Safety Act, which impose proactive obligations on platforms to mitigate risks. Despite this overall trend, approaches vary significantly: some jurisdictions favor lighter-touch, risk-based models, while others pursue more restrictive measures, like Australia’s proposed bans on certain harmful content. Other jurisdictions, such as Colombia, will place increased emphasis on the protection of children’s and adolescents’ data under reforms to data protection law. This divergence underscores ongoing debates about the optimal balance between innovation and safety, and the extent to which accountability should extend across the internet stack, from hosting services to dominant tech companies. As these discussions evolve, expect regulators to intensify efforts to harmonize standards while grappling with questions of proportionality and enforcement.
- Vendor management & supply chain risk will continue to take center stage. New data and cyber laws globally impose obligations on businesses to identify and manage third party risk. Contracts must include specific provisions imposing certain privacy and security obligations on vendors and give businesses the right to audit. These emerging contractual and legal requirements will force procurement and legal teams to revisit data and cyber exhibits.
- Global cooperation to combat cyber-crime will increase. In October 2025, governments from around the world convened in Hanoi, Vietnam, to sign the United Nations Convention against Cybercrime (“Hanoi Convention”). As the first global treaty dedicated to preventing, investigating, and prosecuting cybercrime, the Hanoi Convention is a pivotal moment for global cooperation to prevent and combat cybercrime. Public/private initiatives are emerging globally that will create new opportunities (and risks) for businesses in how they share information with law enforcement, government agencies and regulators.
- AI adoption and governance will maintain dominance as a top board level strategic priority. Driven by mounting regulatory pressure, market volatility, and investor expectations, AI governance will increasingly demand a seat in the boardroom in 2026. Boards will need to contend with jurisdictional and sectoral divergence in AI regulation – reconciling prescriptive regimes like the EU AI Act with more hands-off approaches adopted in other regions – creating governance gaps for multinational corporations. Overlapping and sometimes conflicting requirements, such as differing obligations for automated decision-making tools under the GDPR and the new California regulations, demand active board oversight of AI model inventories, risk classification, and lifecycle management to demonstrate accountability and steer strategic direction. Compounding these challenges is the growing regulatory and private enforcement of “AI washing,” where exaggerated or misleading claims about AI capabilities may expose firms to reputational and regulatory scrutiny.
- New AI laws will flourish. Despite some pullback at the Federal level in the US, and certain signs of moderation in the EU, laws and regulations on AI continue to flourish globally. The use of AI will continue to impact all aspects of legal, data and business activities. Legislatures and regulators will want to get involved to “protect” citizens and potentially local business/competitive positions. Companies should expect more – not fewer – legislative and regulatory initiatives, building on existing AI legal frameworks (e.g., EU AI Act, Colorado AI Act) but also adding new dimensions and innovations.
- AI will continue to improve efficiencies . . . for cybercriminals. Cybercriminals will continue to exploit AI in all aspects of their “business”: deepfakes and social engineering, as well as end-to-end ransomware execution, from scanning endpoints, gaining initial access and exploiting credentials, to developing malware, exfiltrating data, and even drafting extortion analyses and ransom notes. Expect the volume of attempted attacks to increase. Regulators and agencies are sounding the alarm: the ENISA Threat Landscape 2025 and Europol SOCTA reports highlight the industrialization of AI-driven attacks. Emerging threats include model poisoning and adversarial manipulation of AI systems used by businesses, creating new compliance and security challenges under the EU AI Act and other new laws.
- Initial efforts at regulatory convergence and simplification will continue to evolve. After years of overlapping EU digital laws—GDPR, NIS 2, DORA, CRA, Data Act, AI Act—the European Commission’s Digital Omnibus proposal aims to streamline compliance. Expect intense debate in 2026 on key points such as a single EU-wide incident reporting portal, harmonized breach notification timelines, clarifications and phased deadlines under the AI Act, and alignment of core definitions (such as “personal data”) to reduce fragmentation. Adoption is unlikely before late 2026, but companies should anticipate changes to reporting workflows and governance structures as negotiations progress.
- Geopolitical tensions will drive a widening divergence in cross-border data transfer requirements. In recent years, countries have increasingly split between those prioritizing data sovereignty and localization—such as China’s “Important Data” regime and Russia’s localization mandates—and those advocating for data free flow with trust, exemplified by frameworks like the DPF and other adequacy decisions. This conflict reflects a growing recognition that data access and control can be weaponized in international relations, with national security concerns and AI adoption amplifying scrutiny. For instance, the DOJ Rule signals a new era where national security interests drive the adoption of data regulation. These developments underscore that multinational corporations must closely monitor evolving laws, regulations, and enforcement trends to navigate an increasingly fragmented global data landscape.
Our sense is that 2026 is going to be a wild ride for global data and cyber. Stay tuned to Baker McKenzie’s ConnectOnTech.com, our Global Data & Cyber Handbook, and other resources for key developments and strategic guidance.