In December 2025, Japan’s Cabinet Office released a draft framework for public comment entitled “Principle Code on Intellectual Property Protection and Transparency for the Appropriate Utilization of Generative AI” (“Principle Code”). The draft has since drawn the close attention of both rights‑holder organizations and stakeholders across the AI industry. The public comment period closed on 26 January 2026 and the Principle Code is expected to be finalized shortly.

At its core, the Principle Code represents Japan’s latest effort to address growing concerns over intellectual property protection and transparency in the age of generative AI while preserving its general policy preference for innovation‑friendly, non‑prescriptive regulation. Although the overall direction of the draft has been welcomed by many, it has also sparked vigorous debate over feasibility, competitive impact and whether its objectives can realistically be achieved through a soft‑law framework.

1. Policy background: IP and trust in generative AI

Japan’s approach to AI regulation has consistently reflected its government’s preference for flexibility and voluntary compliance, relying on guidelines and principles rather than binding, punitive regulation. This approach is evident in a range of AI‑related policy instruments issued by different government bodies, as well as in the Japan AI Act enacted in 2025.

At the same time, generative AI has intensified long‑standing tensions between technological innovation and IP protection. Rights holders, particularly in creative industries, have raised concerns that AI systems may be trained on copyrighted works without authorization and may generate outputs that closely resemble protected content. These concerns have been amplified by the increasing commercial deployment of large‑scale generative AI models.

The Principle Code can be seen as a targeted response to these issues. Drawing inspiration from both the EU AI Act and Japan's experience with principle‑based corporate governance codes, the Cabinet Office seeks to promote a "co‑creation relationship" between AI developers and providers on one side and rights holders and users on the other. Transparency is positioned as the central mechanism for building trust and enabling constructive engagement among these stakeholders.

2. Scope and legal nature of the Principle Code

2.1 Broad scope

The draft Principle Code applies broadly to all entities involved in generative AI, including the following:

  • AI model developers
  • AI service providers offering generative AI systems to users
  • Businesses providing operational or technical support for generative AI services

This wide scope reflects a policy view that responsibility for transparency and IP protection should be shared across the entire AI value chain rather than resting solely with upstream model developers.

2.2 Soft law and "comply or explain"

Crucially, the Principle Code is explicitly framed as soft law. It does not impose legally binding obligations or statutory penalties. Rather, it adopts a “comply or explain” approach: covered entities are expected either to comply with the principles or to publicly explain the reasons for their non‑compliance.

While this preserves flexibility, it also relies heavily on reputational incentives and market pressure. In practice, the effectiveness of the Principle Code will depend on how users, rights holders, business partners and regulators treat disclosures made under the framework.

3. Three core principles

The draft Principle Code is structured around three core principles aimed at enhancing transparency and facilitating IP protection.

3.1 Transparency through public disclosure

First, AI providers are expected to publish clear and accessible information on their websites or similar platforms regarding their generative AI systems. This includes, in particular, the following:

  • AI models used
  • Key facts concerning training data
  • Internal accountability and governance structures
  • Whether technical measures to prevent IP infringement have been implemented

The draft emphasizes that this information should be available in a form accessible to anyone, underscoring the government’s focus on openness and public trust. At the same time, it leaves unresolved questions about the appropriate level of detail, particularly where disclosure may conflict with the protection of trade secrets or confidential know‑how.

3.2 Responding to rights‑holder inquiries

Second, where a rights holder is engaged in, or preparing for, legal proceedings to protect its rights or legally protected interests, AI developers and providers are expected to respond to specific inquiries.

In such cases, the Principle Code contemplates disclosure of whether training datasets include URLs or other materials identified by a rights holder. This principle is intended to strengthen the practical ability of rights holders to assess potential infringements and decide whether to pursue legal remedies.

From an industry perspective, this principle has been one of the most controversial elements of the draft, given the technical difficulty of tracing training data at scale and the risk that disclosure could expose sensitive proprietary information.

3.3 Responding to user inquiries

Third, similar expectations apply to user inquiries. Where a user requests information regarding content generated through an AI system, the provider is expected to indicate whether the relevant data appeared in the training dataset.

This principle reflects an effort to enhance accountability and user trust, but it also raises practical questions about scalability and operational burden, particularly for widely deployed consumer‑facing AI services.

4. Industry reactions: support and skepticism

Media reporting on the public comment process shows a clear divergence in stakeholder views.

Rights‑holder organizations — particularly those representing creative industries — have broadly welcomed the Principle Code as a meaningful step toward addressing AI‑related IP concerns and improving transparency. Many view the draft as a long‑overdue recognition of the need for stronger mechanisms to protect creative works in the AI era.

By contrast, business associations representing technology‑driven companies have expressed significant reservations. These groups emphasize that information concerning training data lies at the core of an AI provider’s competitive advantage, and caution that expansive disclosure expectations could risk revealing proprietary know‑how and undermining incentives to invest in AI development.

In a similar vein, industry‑led AI governance groups have highlighted concerns about structural imbalances. According to these commentators, entities that comply with the Principle Code may bear disproportionate costs while non‑compliant actors continue operating without comparable burdens. From this perspective, the Code may struggle to achieve its stated objectives unless adoption becomes sufficiently widespread.

5. What comes next — and why it matters

Public comments on the draft Principle Code are now under review by the Cabinet Office’s expert panel, which is expected to prepare a final version. How the government responds to concerns regarding feasibility, protection of trade secrets and implementation details will be critical to the Code’s effectiveness.

More broadly, the Principle Code reflects the current direction of Japan’s AI policy. While it does not impose binding obligations, it signals clear expectations around transparency and IP governance that may influence market practices, contractual arrangements and future regulatory developments. For AI developers and service providers operating in or targeting the Japanese market, the message is clear: even within a soft‑law framework, transparency and responsible data governance are becoming central components of AI risk management. Closely monitoring the finalization of the Principle Code and how it is adopted in practice will be essential as Japan’s approach to AI regulation continues to evolve.

Author

Tsugihiro Okada is a partner in the IP Tech group in the Tokyo office. He is vice chairman of the Tokyo Bar Association's International Committee and a member of the International Association for the Protection of Intellectual Property. Tsugihiro was selected as a "Rising Star Bengoshi" by The Legal 500 (TMT, 2020).

Author

Fumiya Igarashi is an associate in Baker McKenzie's Tokyo office.