Introduction
The increasing use of artificial intelligence (AI) in decision-making necessitates specific regulation beyond general data protection law. The EU's General Data Protection Regulation (GDPR) addresses automated decisions through Article 22, which has come to serve as an international reference point. South Korea's Personal Information Protection Act (PIPA), amended in 2023 to add Article 37-2, aims to provide similar protections. However, the PIPA's framework differs from the GDPR's, raising the question of whether it offers equivalent protection of fundamental rights. This paper compares the two regulations across three aspects: the format of the right (the PIPA's right to object versus the GDPR's general prohibition), the target of regulation (the PIPA's "completely automated systems" versus the GDPR's decisions "based solely on automated processing"), and the content of the right to explanation. The paper explores these differences and proposes potential solutions to ensure adequate protection of data subjects in South Korea.
Literature Review
The paper reviews the existing literature on Article 22 of the GDPR, highlighting its varied interpretations and the influential guidelines of the Article 29 Working Party (WP29). It then traces the evolution of South Korea's personal information protection legislation, from the initial Act on the Protection of Personal Information of Public Institutions to the comprehensive PIPA, including the 2020 and 2023 amendments. The discussion emphasizes the influence of the right to informational self-determination and the notice-and-consent paradigm on the PIPA's structure. The paper also examines the relationship between the PIPA, the Act on Promotion of Information and Communications Network Utilisation and Information Protection, and the Credit Information Use and Protection Act (CIA), focusing in particular on Article 36-2 of the CIA, which introduced a right to explanation for automated evaluations in the financial sector before the PIPA was amended. Finally, the literature review situates the PIPA within the broader landscape of international data protection standards and the ongoing debate over AI regulation.
Methodology
This research employs a comparative legal analysis, focusing on a detailed examination of Article 22 of the GDPR and Article 37-2 of the PIPA. The analysis dissects the differences in wording and structure across three key aspects: the format of the right granted to data subjects (right to object versus a general prohibition on automated decisions), the target of regulation (completely automated systems versus decisions solely based on automated processing), and the content of the right to explanation. The study leverages interpretations and guidelines issued by the WP29 and the Korean Personal Information Protection Commission (PIPC) to analyze the practical implications of the legal differences. The authors utilize examples from various sectors, such as AI recruitment and credit scoring, to illustrate the practical challenges and potential regulatory gaps arising from the disparities between the GDPR and PIPA. The methodology also involves reviewing related Korean legislation, including the General Act on Public Administration (GAPA) and the Enforcement Decree of the PIPA, and relevant case law to understand the legal and practical contexts of automated decision-making in South Korea.
Key Findings
The comparative analysis reveals significant differences between the GDPR and the PIPA in how they regulate automated decisions.

First, concerning the format of the right, the PIPA grants a right to object rather than establishing a general prohibition. The authors argue that this limits protection for individuals who cannot effectively exercise their rights, particularly in the context of complex AI systems. Although the PIPA's exceptions mirror those in the GDPR, the differing framing potentially undermines the protection intended by the GDPR's general-prohibition approach.

Second, the study identifies a regulatory gap in the PIPA's target of regulation. Whereas the GDPR covers decisions "based solely on automated processing," the PIPA addresses "completely automated systems," potentially overlooking multi-stage profiling systems in which humans intervene at certain stages. This creates a vacuum in the regulation of the complex multi-stage decision-making processes prevalent in AI systems, and the analysis highlights the difficulty of defining "completely automated" in settings such as AI-driven recruitment, where automated screening may be combined with human review at later stages.

Third, regarding the right to explanation, the PIPA explicitly grants the right but leaves its content underspecified, delegating the details to the Enforcement Decree (a presidential decree). While the Enforcement Decree provides some guidance, its focus on "local" and "ex post" explanations (explanations of a single decision, given after the fact) may be insufficient for data subjects to understand the basis of automated decisions and to exercise their other rights. The study notes the practical difficulty of producing such detailed explanations for opaque AI models, and it points to a lower-than-expected rate of exercise of the right to explanation in Korea's credit-scoring sector, raising questions about the efficacy of the current framework.
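To make the "local" and "ex post" terminology concrete, below is a minimal Python sketch of our own, for illustration only; the feature names, weights, and approval threshold are hypothetical and are not taken from the paper or from any actual scoring system. It explains a single credit decision after the fact by listing each input's contribution to that decision, rather than disclosing the model's overall logic in advance, which is the kind of explanation the study suggests the Enforcement Decree's guidance contemplates.

# Illustrative sketch only: a toy credit-scoring model showing what a
# "local, ex post" explanation looks like in practice. All feature names,
# weights, and the threshold below are hypothetical.
import math

WEIGHTS = {"income": 0.8, "debt_ratio": -1.5, "late_payments": -0.9}
BIAS = 0.2
THRESHOLD = 0.5  # approve if predicted probability >= threshold

def predict(applicant: dict) -> float:
    """Return the model's approval probability for one applicant."""
    z = BIAS + sum(WEIGHTS[f] * applicant[f] for f in WEIGHTS)
    return 1 / (1 + math.exp(-z))

def local_explanation(applicant: dict) -> list:
    """Local, ex post explanation: each feature's contribution to this
    single decision, sorted by absolute impact."""
    contributions = [(f, WEIGHTS[f] * applicant[f]) for f in WEIGHTS]
    return sorted(contributions, key=lambda fc: abs(fc[1]), reverse=True)

applicant = {"income": 0.4, "debt_ratio": 0.7, "late_payments": 1.0}
p = predict(applicant)
print(f"approved: {p >= THRESHOLD} (p={p:.2f})")
for feature, contribution in local_explanation(applicant):
    print(f"{feature:>15}: {contribution:+.2f}")

A global or ex ante explanation would, by contrast, describe the model's general logic before any decision is made; the study's concern is that the narrower local, ex post form may not equip data subjects to contest a decision or exercise their other rights.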
Discussion
The findings suggest that the PIPA, although intended to provide protection comparable to the GDPR, falls short in several critical areas. The right-to-object format, the narrow scope of the "completely automated system" definition, and the underspecified right to explanation together leave data subjects' rights vulnerable. The authors argue that these limitations, while potentially conducive to AI innovation, risk undermining public trust in AI systems, and they stress the need to balance technological advancement against the safeguarding of fundamental rights. They also acknowledge the difficulty of adopting a strictly prohibition-based approach, given its potential impact on innovation and the compliance costs it would impose on personal information controllers.
Conclusion
This study concludes that the PIPA requires improvement if it is to protect fundamental rights as effectively as the GDPR. Its current approach creates regulatory gaps, particularly around multi-stage profiling systems and the meaningful exercise of the right to explanation. The authors suggest several avenues for improvement, including legislative amendments, updates to the Enforcement Decree and accompanying guidelines, and the development of interpretative frameworks. They also propose exploring complementary solutions, such as specialized AI legislation alongside the PIPA, analogous to the relationship between the GDPR and the EU AI Act, to better protect individual rights while fostering innovation.
Limitations
This study focuses solely on a legal comparison of the PIPA and GDPR, without empirical investigation into the actual application and impact of these regulations. Further research could involve analyzing real-world case studies to understand how automated decisions are made in practice and the effectiveness of current mechanisms for protecting data subjects' rights. The authors also acknowledge the dynamic nature of AI technology and the constant need for legal adaptations to keep pace with advancements in the field.