All eyes are on South Korea: on August 6, 2025, the Personal Information Protection Commission (PIPC) released groundbreaking generative AI guidelines that are set to shape global expectations for AI governance and mark the first step in Korea's comprehensive regulatory framework. The 'Guidelines for Personal Data Processing for the Development and Utilization of Generative AI' ("AI Guidelines") position Korea as a global leader in the field and strike a balance between the flexibility needed to encourage innovation and clear compliance obligations. This article analyzes the key provisions of the guidelines and their implications for multinational businesses operating in or with Korea. The full English translation of the AI Guidelines is available here.
Korea’s AI Guidelines attempt to balance innovation with protection. Although they are presented as guidance, they effectively set the standard against which compliance will be measured in Korea’s enforcement actions. For businesses developing or deploying AI in Korea, it is crucial to align with these requirements from the outset in order to effectively manage regulatory risk. The incentive is clear: businesses that implement these requirements quickly will be best placed to capitalize on global opportunities, particularly in the APAC region.
Context: Why These Guidelines Matter Now
On January 21, 2025, Korea enacted its Framework Act on Artificial Intelligence Development and Trustworthiness, which takes effect in January 2026; the promulgation of detailed subordinate regulations is expected imminently. At the Asia-Pacific Economic Cooperation (APEC) Digital and AI Ministerial Meeting in Incheon on August 4, 2025, Korea was appointed to lead APEC's AI initiative, signaling its role in shaping regional and global AI governance standards. The full English translation of the upcoming AI Framework Act is available here.
The PIPC has been a prominent global watchdog, investigating data breaches to ensure that businesses comply with Korea's Personal Information Protection Act (PIPA). Under PIPA, data controllers who fail to implement adequate security measures may face penalties of up to 3% of their global annual revenue. While the AI Guidelines are not legally binding, they will influence the PIPC's approach to investigating data breaches involving AI systems, and demonstrating compliance with them can provide businesses with an affirmative defense supporting exemption from penalties. Businesses looking to build defensible AI governance programs should consider using these guidelines as a practical standard for the APAC region.
Key Provisions: Four Core Areas
1. Legal Basis for AI Training: The “Legitimate Interests” Clarification
The guidelines resolve long-standing uncertainty by confirming that PIPA Article 15(1)(vi) — the “legitimate interests” provision — can serve as the legal basis for processing publicly available data for AI training purposes. This guidance provides much-needed clarity for AI developers operating in Korea. For non-public data, the guidelines establish three pathways for compliance:
- Original purpose processing based on consent
- Secondary use under PIPA Article 15(3) for purposes that are reasonably related
- Pseudonymization under PIPA Article 28-2
The PIPC emphasizes that invoking legitimate interests requires businesses to implement robust technical and organizational safeguards:
- Technical measures: Verifying data sources, preventing data contamination, implementing output filters
- Administrative measures: Documenting processing criteria in privacy policies, conducting privacy reviews, implementing deployment-specific controls
- Rights protection: Providing feasible mechanisms for exercising data subject rights and clearly communicating any technical limitations
2. Options for AI Development Models
The guidelines recognize that AI development takes many different forms and set forth three distinct models, each with its own compliance requirements:
LLM as a Service (API-based): Organizations must focus on contractual controls, particularly enterprise licenses that exclude data retention and retraining. The PIPC recommends that organizations processing sensitive personal data through commercial large language models (LLMs) adopt enterprise APIs that disable data retention and AI learning by default.
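As an illustration of what "retention and learning disabled by default" can look like at the request level, below is a minimal Python sketch. The endpoint, header and the `store` / `allow_training` flags are hypothetical placeholders rather than any specific vendor's API; in practice these controls are vendor-specific and are often set at the enterprise-agreement or account level rather than per request.

```python
import os
import requests

# Hypothetical enterprise endpoint and parameters, for illustration only:
# the URL and the flags below are NOT any specific vendor's API. Check your
# provider's enterprise documentation for the actual controls.
API_URL = "https://api.example-llm-vendor.com/v1/chat"
API_KEY = os.environ["LLM_API_KEY"]

response = requests.post(
    API_URL,
    headers={"Authorization": f"Bearer {API_KEY}"},
    json={
        "model": "enterprise-model",
        "messages": [{"role": "user", "content": "Summarize this contract."}],
        # Defaults the PIPC recommendation points to (hypothetical flags):
        "store": False,            # no retention of prompts or outputs
        "allow_training": False,   # data excluded from model retraining
    },
    timeout=30,
)
response.raise_for_status()
print(response.json())
```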
Off-the-Shelf LLMs (Open-weight models): There is an emphasis on verifying the source and legitimacy of training datasets. Organizations must monitor for model updates and security patches, implementing them promptly when vulnerabilities are discovered.
Self-Developed LLMs: Full lifecycle responsibility is required, encompassing comprehensive risk management across pre-training, fine-tuning, deployment and post-deployment monitoring.
3. Multi-Layered Safety Requirements
The AI Guidelines mandate a comprehensive approach across three levels:
Data Level
- Exclude content that is protected from scraping
- Implement pseudonymization or anonymization immediately after data collection
- Consider using synthetic data as an alternative
- Remove high-risk identifiers (e.g., resident registration numbers, account numbers and credit card numbers) before training, as sketched below
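To make the identifier-removal step concrete, here is a minimal Python sketch based on pattern matching. The regular expressions are simplified assumptions; production pipelines typically layer regexes with NER-based PII detection and checksum validation.

```python
import re

# Illustrative pre-training scrub of high-risk Korean identifiers.
# Patterns are simplified assumptions, not exhaustive detection.
PATTERNS = {
    # Resident registration number: YYMMDD-NNNNNNN
    "rrn": re.compile(r"\b\d{6}-\d{7}\b"),
    # Payment card number: 13-16 digits, optionally separated
    "card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def scrub(text: str) -> str:
    """Replace high-risk identifiers with typed placeholder tokens."""
    for label, pattern in PATTERNS.items():  # RRNs first, then card numbers
        text = pattern.sub(f"[{label.upper()}_REDACTED]", text)
    return text

print(scrub("Customer 900101-1234567 paid with 4111 1111 1111 1111."))
# -> Customer [RRN_REDACTED] paid with [CARD_REDACTED].
```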
Model Level
- Apply safety training techniques to ensure appropriate outputs
- Implement privacy-preserving technologies during training
- Establish testing benchmarks for system robustness
- Conduct regular security assessments
System Level
- Implement granular access controls for APIs
- Deploy input and output filtering mechanisms (see the sketch after this list)
- Use retrieval-augmented generation (RAG) where appropriate
- Establish a continuous monitoring infrastructure
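Below is a minimal sketch of the filtering item above, assuming a simple keyword blocklist on inputs and regex redaction on outputs. The blocklist, the redaction pattern and the `call_model` callable are illustrative assumptions; real deployments would add classifier-based filters, access controls and logging on top.

```python
import re

# Illustrative system-level guardrails: screen prompts on the way in,
# redact identifiers on the way out. Not a complete safety layer.
BLOCKED_INPUT = re.compile(r"주민등록번호|resident registration number", re.IGNORECASE)
RRN = re.compile(r"\b\d{6}-\d{7}\b")  # Korean resident registration number

def guarded_completion(prompt: str, call_model) -> str:
    if BLOCKED_INPUT.search(prompt):           # input filter
        return "This request cannot be processed."
    output = call_model(prompt)                # e.g. the API call sketched earlier
    return RRN.sub("[RRN_REDACTED]", output)   # output filter

# Usage with a stub model:
print(guarded_completion("Show the file", lambda p: "RRN on record: 900101-1234567"))
# -> RRN on record: [RRN_REDACTED]
```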
4. Governance and Documentation
Organizations must establish AI privacy governance structures that incorporate privacy-by-design principles throughout the AI lifecycle. The AI Guidelines require the Chief Privacy Officer (CPO) to be involved from a project's inception through to its deployment. Recommended documentation includes:
- Privacy Impact Assessments: Currently recommended for the private sector and mandatory for the public sector
- Pre-deployment testing documentation: Records of tests for privacy risks, bias and accuracy before system launch
- Acceptable use policies: Clear statements on permitted and prohibited uses of the AI system
- AI-specific incident response procedures: Protocols for handling AI-generated errors or privacy breaches
What Businesses Need to Know & Do
1. Determine Whether the AI Guidelines Apply & Document Your Legal Basis
The first step in AI governance is identifying which laws apply when using AI-powered technologies. Companies currently training AI on Korean data must document their legal basis under PIPA. For publicly available data, prepare legitimate interests assessments demonstrating necessity, proportionality and safeguards. For user data, review whether current consents or contracts cover AI training purposes.
2. Implement Technical Safeguards Now
Digital transformation and proper AI governance require three interconnected components: people, process and technology. The technical requirements set out in the guidelines will likely become the benchmark in breach investigations and, where met, an affirmative defense for organizations. Priority implementations should include:
- data minimization and pseudonymization protocols (see the sketch after this list)
- privacy-preserving training methods
- system-level access controls, management and monitoring
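As one example of a pseudonymization protocol, the sketch below derives deterministic, keyed pseudonyms with an HMAC, keeping records linkable for training while removing the raw identifier. The environment-variable name and truncation length are illustrative assumptions, and key management (rotation, separation from the training environment) is deliberately out of scope.

```python
import hashlib
import hmac
import os

# Minimal pseudonymization sketch: replace a direct identifier with a
# keyed, deterministic pseudonym. The key must live outside the training
# environment; "PSEUDONYM_KEY" is an illustrative name.
SECRET_KEY = os.environ.get("PSEUDONYM_KEY", "change-me").encode()

def pseudonymize(identifier: str) -> str:
    digest = hmac.new(SECRET_KEY, identifier.encode(), hashlib.sha256)
    return "pid_" + digest.hexdigest()[:16]  # truncation length is arbitrary

print(pseudonymize("900101-1234567"))  # same input -> same pseudonym
```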
3. Prepare for Enhanced Regulatory Oversight
Given that Korea is positioning itself as a leader in AI governance and the legislation allows the PIPC to request reports “at any time” in the event of potential breaches, you can expect to see more proactive enforcement and regular compliance checks.
Practical Recommendations
For multinational companies operating in Korea, our local Data + Cyber team based in Seoul offers the following actionable guidance:
Conduct Gap Assessments: Compare current AI development and deployment practices against the guidelines’ requirements, prioritizing high-risk use cases involving Korean personal data.
Establish Cross-Functional Teams: Bring together the legal, privacy, security and AI development teams to create working groups that will implement the comprehensive approach required by the guidelines with regard to data, models and systems.
Document Everything: Given the PIPC’s emphasis on demonstrable compliance and the potential for regulatory requests at any time, maintain comprehensive documentation of the following:
- legal basis determinations and assessments
- technical safeguard implementations and testing results
- risk assessments and mitigation measures
- evidence of the CPO’s involvement in AI projects
Engage Proactively: Consider participating in PIPC consultations and industry forums to help shape implementation practices and demonstrate a commitment to compliance. Organizations that implement these guidelines early will be better positioned when the AI Framework Act regulations are finalized in Korea. Early adoption may also give you leverage in shaping industry standards at the upcoming Global Privacy Assembly (GPA) conference in Seoul, beginning September 15, 2025.
How Baker McKenzie Can Help
Our Baker McKenzie Korea team has extensive experience navigating PIPC investigations and requirements and can assist with:
- Legal basis assessments and documentation strategies for AI training
- Preparing and reviewing Privacy Impact Assessments
- Technical safeguard implementation roadmaps
- CPO advisory services and governance structure design
- AI-specific incident response planning
- Compliance gap analysis and remediation strategies