Lawmakers and regulators around the world are intensifying their focus on protecting children and teenagers online. Although the goal of protecting the privacy and safety of young people online is widely shared, approaches to achieving it vary significantly across jurisdictions. A pivotal point of debate is whether online service providers should be explicitly required to implement age assurance mechanisms to estimate, verify or confirm users’ ages, and then tailor users’ online experiences accordingly.

United Kingdom

The primary regulatory requirements for age assurance in the UK are contained in the Age Appropriate Design Code 2021 (AADC) and the Online Safety Act 2023.

Age assurance can play a significant role in keeping children and their personal information safe online. The term describes approaches and tools that help estimate or assess a child’s age, so that services can be tailored to children’s needs or access can be restricted where required.

The UK AADC — the first-ever statutory code of practice for protecting children’s data — is overseen by the UK Information Commissioner’s Office (ICO) and sets out 15 design standards. It mandates a risk-based approach to age assurance, requiring services to apply those standards effectively to child users and to establish or estimate users’ ages with a level of certainty commensurate with the risks involved.

The Online Safety Act, which is overseen by Ofcom, requires “highly effective” age assurance for in-scope services in relation to pornographic content and other specified kinds of content that are harmful to children.

The ICO and Ofcom have published extensive guidance on what constitutes effective age assurance, making clear that age assurance is intended to allow flexibility and to be technology neutral, reliable and fair.

Ofcom’s guidance identifies methods such as photo ID matching, facial age estimation, mobile network operator age checks, credit card checks, digital identity services and email-based age estimation as capable of being highly effective. Methods such as self-declaration of age are not considered effective.

European Union

In the European Union, age verification is a recommended measure for media services providers and providers of video-sharing platforms under the Audiovisual Media Services Directive, and for providers of very large online platforms and very large online search engines under the Digital Services Act (DSA).

In addition, in July 2025, the European Commission published guidelines to assist providers of online platforms (of any size) in complying with their obligation under the DSA to put in place appropriate and proportionate measures to ensure a high level of privacy, safety and security for minors. The guidelines are not binding but represent a benchmark against which the European Commission will assess providers’ compliance with this obligation.

These guidelines set criteria for effective age assurance, which are broadly consistent with the criteria for “highly effective” age assurance set out by Ofcom under the UK Online Safety Act.

The guidelines also specify a number of circumstances in which the European Commission considers the use of age verification and age estimation to be appropriate and provide recommendations as to how age verification and age estimation should be implemented, in particular to preserve users’ privacy.

The European Commission is currently testing its own solution to facilitate age verification until the EU Digital Identity Wallet becomes available.

France

France’s regulatory framework for protecting minors online is structured around two complementary laws — the July 2023 law on digital majority and the SREN law.

The law on digital majority establishes that registration for online services that process personal data is subject to verifiable parental consent for minors under the age of 15. This aligns with Article 8 of the EU General Data Protection Regulation (GDPR), which allows member states to set their own age threshold for digital consent between 13 and 16; France has chosen to set this age at 15. However, while the principle of digital majority at age 15 remains enshrined in law, the absence of an implementing decree means that it is not currently enforceable. A decree was planned to implement this measure, but it was not published due to feedback from the European Commission, which found the law to be non-compliant with the DSA.

In parallel, the SREN law, adopted in 2024, focuses on securing and regulating the digital space, notably by mandating robust age verification to ensure users are at least 18 before accessing pornographic content. It empowers the French media regulator, the Autorité de régulation de la communication audiovisuelle et numérique (“ARCOM”), to enforce compliance — including blocking non-compliant sites — without requiring judicial approval. In recent months, enforcement efforts have intensified: ARCOM has ordered the blocking of several adult websites for failing to implement compliant age verification mechanisms, as required under the SREN law. These platforms, operated by companies based in other EU member states, have challenged the orders, citing the EU’s country-of-origin principle under the E-Commerce Directive and the DSA. In mid-June 2025, an administrative court temporarily suspended the enforcement of these blocking measures, highlighting the legal tension between national regulation and EU-wide digital governance.

Interestingly, France’s data protection authority, the Commission nationale de l’informatique et des libertĂ©s (CNIL), has issued detailed guidance emphasizing the need to balance youth protection with privacy and data minimization. It calls for age verification systems to be structured around principles such as proportionality, robustness and the use of independent third parties.

These recommendations, first outlined in 2022, were subsequently validated by the CNIL in 2024 in a deliberation which endorsed the technical framework proposed by ARCOM and marked a shift from guidance to enforceable standards.

The CNIL and ARCOM discourage methods that involve direct identity collection by content publishers or reliance on biometric data unless specifically authorized by law. Instead, they favor privacy-preserving models — such as cryptographic proofs of age — that enable users to verify eligibility without disclosing their identity.

For high-risk contexts such as access to pornographic sites, the CNIL and ARCOM both advocate for a double-blind architecture, where one entity verifies age and another grants access, preventing either from linking identity to browsing behavior.
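
To make the double-blind architecture concrete, the short sketch below (in Python) separates the two roles under simplified, stated assumptions: a hypothetical third-party verifier issues a short-lived token asserting only that the user is over 18, and the content site accepts that token without receiving any identity data. All names are illustrative, and a real deployment would rely on asymmetric signatures or cryptographic proofs of age rather than the shared HMAC key used here for brevity.

import hashlib
import hmac
import json
import secrets
import time

# Key held by the (hypothetical) independent age-verification provider.
# In practice the site would verify an asymmetric signature or a cryptographic
# proof instead of sharing a secret with the verifier.
VERIFIER_KEY = secrets.token_bytes(32)

def issue_age_token(user_is_over_18: bool) -> dict:
    # Verifier side: confirms the user's age out of band (e.g., via a document
    # check) and returns a token carrying only the "over 18" claim - no name,
    # no identifier, and no knowledge of which site the token will be shown to.
    payload = {
        "over_18": user_is_over_18,
        "nonce": secrets.token_hex(8),        # limits linkage across uses
        "expires": int(time.time()) + 300,    # short-lived token
    }
    body = json.dumps(payload, sort_keys=True).encode()
    signature = hmac.new(VERIFIER_KEY, body, hashlib.sha256).hexdigest()
    return {"payload": payload, "sig": signature}

def site_grants_access(token: dict) -> bool:
    # Site side: checks the verifier's signature, the age claim and the expiry.
    # The site never learns who the user is; the verifier never learns which
    # site consumed the token - the "double-blind" separation of roles.
    body = json.dumps(token["payload"], sort_keys=True).encode()
    expected = hmac.new(VERIFIER_KEY, body, hashlib.sha256).hexdigest()
    return (
        hmac.compare_digest(expected, token["sig"])
        and token["payload"]["over_18"]
        and token["payload"]["expires"] > time.time()
    )

token = issue_age_token(user_is_over_18=True)
print(site_grants_access(token))  # True: access granted without identity disclosure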

While acknowledging that current solutions remain imperfect and potentially circumventable, the CNIL and ARCOM encourage the development and certification of third-party systems that offer verifiable assurances without compromising user anonymity.

This approach highlights the degree to which effective age assurance must be integrated with core data protection principles rather than treated as an exception to them.

United States

Numerous US states have implemented, or are seeking to implement, explicit age assurance requirements. One example is Texas’ Securing Children Online Through Parental Empowerment (SCOPE) Act, which requires covered digital service providers to collect and register the age of every individual attempting to create an account. Although federal courts have enjoined enforcement of several substantive provisions of the SCOPE Act on constitutional grounds, including those related to content filtering, advertising restrictions and age verification for accessing certain materials, the age registration provision in Section 509.051 has not expressly been enjoined. As a result, while much of the statute is currently unenforceable, covered digital service providers may still be obligated to comply with the age registration requirement unless and until a court rules otherwise.

Another example is Louisiana’s Secure Online Child Interaction and Age Limitation Act, which would require covered social media companies to use commercially reasonable efforts to verify the age of Louisiana account holders with a level of certainty appropriate to the risk posed by their data practices. Although the law is scheduled to take effect 1 July, it is currently the subject of a constitutional challenge seeking to enjoin its enforcement.

Yet another example is California’s Age-Appropriate Design Code Act, modeled after the UK AADC. This statute would have required covered businesses to estimate the ages of users and tailor their online experiences accordingly, or else treat all users as minors. However, a federal court enjoined the statute in its entirety on First Amendment grounds. The California Attorney General has appealed the decision.

The court’s ruling reflects broader concerns in the US about the potential chilling effects of age-gating requirements on free expression, as well as unresolved tensions between age assurance mandates and user privacy expectations. In a win for policymakers seeking to expand age-assurance mandates, the US Supreme Court in August 2025 allowed a Mississippi law requiring social media platforms to verify users’ ages and obtain parental consent for minors to remain in effect while legal challenges proceed.

Separately, some US states may indirectly impose age assurance obligations on companies deemed to have willfully disregarded or failed to appropriately investigate users’ ages. For example, the California Consumer Privacy Act imposes prescriptive requirements on businesses that knowingly sell certain minors’ personal information or share it for cross-context behavioral advertising, and provides that a “business that willfully disregards the consumer’s age shall be deemed to have had actual knowledge of the consumer’s age.”

As another example, Maryland’s Age-Appropriate Design Code Act seeks to impose various data privacy requirements on controllers that “should have known” the user is under the age of 18. Although the law is scheduled to take effect 1 October, it is currently the subject of a constitutional challenge seeking to enjoin its enforcement.

These statutory frameworks highlight how age assurance can serve as a trigger for broader privacy obligations, particularly where minors’ personal data is involved.

A patchwork of additional laws in around 20 states targets online access to obscene material by minors and requires covered operators to verify the ages of users before permitting them access. In June 2025, the US Supreme Court found that Texas’s age-verification law for pornographic websites was constitutional because it only incidentally burdened adults’ access to legal content, allowing the state to require ID checks to keep minors from seeing obscene material. This ruling paves the way for other states to defend and enforce similar age-verification laws against constitutional challenges, potentially accelerating the nationwide adoption of stricter access controls for online content deemed harmful to minors.

Australia

Australia has introduced, and is currently expanding, age assurance requirements for a range of online services. Some of these are coupled with requirements to prohibit access by children, whilst others instead underpin requirements to apply default protections for child users.

The Australian online safety regulator (the eSafety Commissioner) released a roadmap in 2023 for the implementation of a mandatory age verification scheme for online pornographic material. The roadmap recommended a trial of age assurance technologies before mandating their adoption, reflecting concerns about the maturity of available solutions, their effectiveness and their ability to safeguard privacy and security. The trial commenced in late 2024 and has only recently concluded, with the final report submitted to the Government but not yet released publicly. However, regulatory development in this space has continued in parallel with the trial.

In late 2024, new legislation was passed to introduce social media minimum age (SMMA) requirements preventing Australians under 16 from having accounts. The SMMA requirements will take effect by December 2025, and in the interim significant work has been underway to define both which platforms will be subject to the SMMA obligation and exactly what will be required of them. Specific privacy protections (over and above those in the Privacy Act) were added before the legislation was passed, including a rule-making power for the Minister to prohibit the collection of particular forms of personal information for the purposes of the minimum age obligation, as well as restrictions on platforms mandating the provision of government identifiers (including Digital ID).

At the same time, development of mandatory Phase 2 Industry Codes under the Online Safety Act (aimed at child protection and adult content) has been underway. The eSafety Commissioner has recently announced the registration of three of these Phase 2 Codes, including one containing age assurance obligations for search engines. A number of other Phase 2 Codes are still under consideration for other service categories. Where Codes are registered by the eSafety Commissioner, providers subject to age assurance requirements under them will need to take account of technical accuracy, robustness, reliability and fairness, as well as privacy considerations. Interaction with other Australian laws, such as the SMMA requirements, will also be key. Mechanisms included in Ofcom’s UK guidance on “highly effective” age assurance may well also be appropriate for the purposes of the Australian Code requirements. If any or all of the remaining Codes are not registered, the eSafety Commissioner may develop industry standards in their place.

In parallel, legislation was passed in 2024 to support the development of a Children’s Online Privacy Code under the Privacy Act. The way in which this will intersect with the age assurance obligations under Australian online safety law is not yet clear. The Office of the Australian Information Commissioner has indicated that it will look to the UK Age Appropriate Design Code for alignment opportunities, whilst being mindful of the complexity outlined above. Consultation on the development of the Code has been underway.

The rapid evolution of these requirements is creating significant complexity as providers navigate intersecting obligations driven by both privacy-by-design and safety-by-design concerns.

Canada

Canada has not yet adopted statutory age assurance requirements for online service providers. However, the Office of the Privacy Commissioner of Canada (OPC) has taken an active role in shaping the policy conversation, most recently through a 2024 exploratory consultation on age assurance.

In that document, the OPC recognizes that age assurance can support child safety by enabling more tailored protections or limiting access to harmful content, but stresses that any such measures must be developed with strong privacy safeguards. The OPC discourages broad deployment of identity verification systems for general-use services, warning that such practices risk normalizing intrusive data collection.

Instead, the OPC urges organizations to consider alternatives such as applying child-appropriate protections to all users or empowering parental controls at the device level. The consultation also highlights risks related to data minimization, function creep and unintended exclusion, particularly for youth without access to government-issued ID or for whom biometric systems may be less accurate.

The OPC’s guidance reflects a cautious stance: any move toward age assurance must be both demonstrably necessary and rigorously protective of users’ privacy and dignity. While Canada does not currently require online service providers to implement age verification or estimation measures, the OPC has signaled that it may issue further guidance, and encourages organizations to take a proportionate, privacy-by-design approach where age assurance is contemplated.

A bill, An Act to enact the Protection of Minors in the Digital Age Act and to amend two Acts, was also introduced in the Canadian House of Commons in June 2025. If enacted, this statute would establish a comprehensive framework to protect children from harmful online content and activities. It would prohibit certain types of harmful communications directed at minors, require platforms to take measures to mitigate associated risks, and create new obligations for online services to address child sexual exploitation and other harms. The bill would also amend existing federal laws to strengthen enforcement and penalties for violations.

Outlook

Across jurisdictions, age assurance remains a fast-evolving and contested area of regulation, but a common theme is emerging — organizations are expected to assess risks contextually and calibrate their practices accordingly.

Services that may expose minors to elevated safety or privacy harms are increasingly expected to demonstrate that they have implemented meaningful safeguards, including age assurance mechanisms that are proportionate to those risks. At the same time, regulators are signaling that indiscriminate or overly invasive age checks — particularly those involving biometric or identity data — may create new privacy and equity concerns of their own.

Online service providers must therefore balance competing imperatives: protecting young users, complying with divergent legal frameworks, and upholding privacy-by-design principles.

Organizations grappling with these issues should be prepared to revisit their age assurance strategies, particularly in light of mounting enforcement activity and the prospect of new technical standards emerging in the months ahead.

Those responsible for trust and safety, product, or privacy functions will need to navigate not only the legal complexity but also the practical and ethical tradeoffs posed by different age assurance approaches, particularly as enforcement increases and international standards begin to take shape.

Author

Magalie Dansac Le Clerc is a partner in Baker McKenzie's Paris office. A member of the Firm's Information Technology and Communications Practice Group, she is a Certified Information Privacy Professional (CIPP).

Author

Theo heads Baker McKenzie's Canadian Information Technology/Communications practice and is a member of the Firm's Global IP/Technology Practice Group, and Technology, Media & Telecoms and Financial Institutions Industry Groups.

Author

Jonathan Tam is a partner in the San Francisco office focused on global privacy, advertising, intellectual property, content moderation and consumer protection laws. He is a qualified attorney in Canada and the U.S. passionate about helping clients achieve their commercial objectives while managing legal risks. He is well versed in the legal considerations that apply to many of the world’s cutting-edge technologies, including AI-driven solutions, wearables, connected cars, Web3, DAOs, NFTs, VR/AR, crypto, metaverses and the internet of everything.

Author

Allison Manvell is a special counsel in the Technology, Communications and Commercial, and Media & Content, teams at Baker McKenzie. Allison works across Baker McKenzie's Sydney and Brisbane offices. Allison has more than ten years' experience advising on commercial and regulatory matters across a range of industries with a particular focus on digital media, technology, broadcasting and content licensing and regulation. Allison has also spent time on client secondment within the media industry. She is a member of the Communications and Media Law Association and she speaks and presents regularly on legal issues relevant to convergence and digital media.

Author

John is an associate in Baker McKenzie's Intellectual Property and Technology team, based in London. He joined the Firm in 2016 and was admitted as a solicitor in England and Wales in 2018. He is also admitted as an attorney in the state of New York. His practice encompasses aspects of commercial, technology and intellectual property law. He has a particular focus on data protection.

Author

Elizabeth Denham CBE joined Baker McKenzie as International Consultant, Data and Tech in 2022. She has over 15 years' experience as a data protection regulator in four jurisdictions. She was most recently the Information Commissioner for the UK (2016-2021). During her tenure in the UK she also chaired the Global Privacy Assembly, which brings together more than 130 data protection authorities around the world and is the premier global forum for data protection. She is recognized as a leader in enabling responsible data use by government and the commercial sector, and for implementing the GDPR into UK law. She tackled some of the most complex issues facing the digital economy, including the use of data in political campaigns, the use of live facial recognition technologies in the commercial and police sectors, and the transparent and fair use of analytics and AI. She is passionate about the protection of children online, ethical and accountable use of health data, and supporting companies to embed data protection and security into their services and offerings.