Automated decision-making in South Korea: a critical review of the revised Personal Information Protection Act
D. H. Kim and D. H. Park
This study by Dong Hyeon Kim and Do Hyun Park compares South Korea’s Personal Information Protection Act with the EU’s GDPR, identifies significant gaps in protection against automated decisions, and proposes legislative improvements for privacy protection.
Introduction
The paper addresses how South Korea’s revised Personal Information Protection Act (PIPA) regulates automated decision-making compared with the EU’s GDPR Article 22. Given AI’s growth and the risks that accompany it, the authors highlight the need for special rules beyond general data protection norms. They note the interpretive clarity available in the EU via GDPR Recitals and WP29/EDPB guidance, versus Korea’s reliance on statutes and enforcement decrees. The study’s purpose is to assess whether PIPA Article 37-2 ensures protection equivalent to GDPR Article 22 by analyzing differences in format (right vs prohibition), target (system vs decision), and content (scope of explanation), taking into account Korea’s legal hierarchy and the role of the PIPA Enforcement Decree. It argues that aligning with international standards such as the GDPR brings practical benefits (e.g., EU adequacy decisions and smoother cross-border data flows) and sets the stage for detailed comparison and suggested remedies.
Literature Review
The paper situates its analysis within debates on GDPR Article 22’s interpretation, noting the authoritative WP29 guidelines that construe Article 22(1) as a general prohibition on solely automated decisions with legal or similarly significant effects, and contrasting scholarly views that read it as a right to object (Bygrave). It reviews Korea’s legislative evolution toward comprehensive data protection (PIPA 2011; the major 2020 revision influenced by the GDPR) and sectoral provisions that prefigured rights to explanation, notably Article 36-2 of the Credit Information Act (CIA) in finance. It canvasses critiques of the notice-and-consent paradigm (Barocas & Nissenbaum; Sloan & Warner; Cate & Mayer-Schönberger) and introduces Korea’s right to informational self-determination and its embedding in PIPA’s purpose. The review also covers comparative administrative law sources (Germany’s VwVfG Article 35a, which informed Korea’s GAPA Article 20) and guidance by Korea’s PIPC aligning with WP29 on “meaningful” human intervention. Sector studies and statistics (e.g., AI use in recruitment; low uptake of explanation rights in credit scoring) further ground the discussion. Overall, prior work highlights interpretive uncertainty around Article 22, the practical limits of consent, and the challenge of explaining complex AI systems, motivating a focused comparison of PIPA’s new Article 37-2.
Methodology
This is a doctrinal and comparative legal analysis. The authors examine the statutory text of PIPA Article 37-2 and its Enforcement Decree (Articles 44-2, 44-3, 44-4), compare them with GDPR Article 22 and related GDPR provisions (Articles 13, 14, 15), and draw on authoritative soft-law interpretations (the WP29 guidelines, later endorsed by the EDPB) and Korean administrative guidance (PIPC 2024). They analyze Korea’s GAPA Article 20 and its German antecedent (VwVfG Article 35a) to clarify the system-versus-decision focus. The analysis is structured around three dimensions: format (right vs prohibition), target (completely automated system vs solely automated decision), and content (scope and nature of the right to explanation). The authors supplement legal interpretation with illustrative sectoral data and administrative practice: surveys on consent behavior (PIPC 2022), AI adoption in recruitment (HRD Korea 2023), and usage statistics for explanation rights in credit scoring (NARS 2022). The study proposes legislative and interpretive remedies grounded in these comparisons.
Key Findings
- Format: Unlike the WP29 view that GDPR Article 22(1) functions as a general prohibition on solely automated decisions, subject to exceptions, PIPA Article 37-2 establishes a right to object. Its exceptions are tied to data collection bases (PIPA Article 15(1), subparagraphs 1, 2, and 4) rather than to the decision stage. Article 37-2(3) requires “necessary” measures (e.g., re-processing with human involvement, explanations) unless there is a compelling reason not to take them. This format risks under-protection where data subjects cannot effectively exercise their rights, particularly in fast, large-scale AI contexts or when individuals are unaware that a decision has been made. Empirical context: 62.2% of the Korean public do not review privacy notices in detail; broad data collection occurs under merely formal consent; and web scraping/crawling and one-party consent in messenger contexts complicate obtaining consent at the decision stage.
- Target: PIPA regulates a “completely automated system,” whereas the GDPR targets “decisions based solely on automated processing.” The system-level focus and the modifier “completely” can exclude multi-stage profiling and triaging systems with partial human involvement from PIPA’s scope, creating a regulatory gap. Recruitment triaging illustrates the point: 62% of large Korean firms use AI in hiring and 20% rely solely on AI at the document-screening stage; under the GDPR such intermediate steps may fall under Article 22 if the human involvement is not meaningful, while under PIPA they may not count as “completely automated systems.” PIPC guidance attempts decision-level assessment, but that strains the statutory wording and raises difficulties in evaluating whether intermediate steps have “significant” legal effects.
- Content (right to explanation): PIPA explicitly creates a right to explanation (Article 37-2(2)), with details in the Enforcement Decree (Article 44-3(2)): the result, the types of major personal information used, the major criteria, and the procedures. This orients toward ex post, local explanations, while Article 37-2(4) separately imposes ex ante, global disclosure. Where effects are not “significant,” explanations can be substituted with disclosure information, potentially undermining the ability to contest decisions. Technical and practical constraints, especially for black-box AI and large models, make defining “major” data and criteria difficult. In the credit sector (CIA Article 36-2), explanation rights are clearer and more tractable, yet they were exercised only 36,224 times out of 3,809,069 score inquiries from 2018 through the first half of 2022 (under 1%; see the quick calculation below), suggesting low uptake even where impact is high.
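As an arithmetic check of that uptake figure (the counts are from the paper; the rounding is ours):

\[ \frac{36{,}224}{3{,}809{,}069} \approx 0.0095 \approx 0.95\% \]

which is consistent with the paper’s characterization of uptake as below 1%.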
Discussion
The findings show that PIPA’s approach, while inspired by GDPR, offers weaker functional protection in key scenarios. A rights-based (opt-out) format places the burden on data subjects, who often lack awareness or capacity to object in time, unlike a prohibition-based default. The system-level target and “completely automated” threshold leave gaps for multi-stage profiling and triaging, common in hiring and resource allocation, where meaningful human oversight may be limited. The right to explanation’s ex post, local orientation and the possibility to substitute explanations with general disclosures for non-significant cases may impede contestation and accountability, especially with opaque AI systems. To align with GDPR’s level of protection, the authors argue for: (1) moving toward a prohibition-based baseline given PIPA’s already narrow applicability and the “compelling reason” escape clause; (2) interpreting or amending targets from system-level to decision-level to capture multi-stage and triaging architectures and to align with the Enforcement Decree’s disclosure and explanation duties; and (3) clarifying that explanations must enable follow-on measures (objection, contestation), avoiding substitutions that deprive data subjects of actionable information. Over the longer term, sector-specific guidance and potential AI legislation coordinated with PIPA could calibrate obligations to technical realities, drawing on XAI developments while acknowledging legal-technical translation limits.
Conclusion
The study concludes that PIPA Article 37-2 and its Enforcement Decree provide less comprehensive safeguards than GDPR Article 22 in three respects: format (right to object vs general prohibition), target (system-level “completely automated” vs decision-level “solely automated”), and content (practical limits on the right to explanation). To ensure GDPR-comparable protection: a prohibition-based default should be considered given PIPA’s narrow scope; targets should be reframed or interpreted at the decision level to cover multi-stage profiling and triaging; and the right to explanation should be specified to furnish information enabling contestation and other remedies, avoiding substitution with general disclosures where this undermines effectiveness. Proposed avenues include harmonization through updated decrees and guidance, sector-specific rules (as with the CIA in credit scoring), judicial interpretation, and complementary AI legislation (akin to the EU AI Act’s high-risk regime). Future work should develop sectoral templates for explanations, bridge legal and XAI approaches, and monitor the uptake and efficacy of these rights in practice.
Limitations
The analysis is doctrinal and interpretive, relying on statutory texts, decrees, and guidance rather than empirical evaluation of outcomes across sectors. Judicial and administrative interpretations are still evolving, so conclusions depend on current guidance (e.g., PIPC 2024) that may change. Sectoral variability (e.g., credit vs hiring) limits generalization of explanation practices, and technical opacity of AI systems constrains the practicability of detailed ex post, local explanations. The paper also notes potential trade-offs with innovation and administrative burden, and recognizes that some solutions may require future legislation or sector-specific regulation beyond PIPA.