Regulating ChatGPT and other Large Generative AI Models

Computer Science

P. Hacker, A. Engel, et al.

Explore how large generative AI models like ChatGPT are reshaping our communication and creativity. This research, conducted by Philipp Hacker, Andreas Engel, and Marco Mauer, delves into the regulatory landscape for these advanced systems in light of existing EU laws. Discover a fresh perspective on building an effective framework for the future of AI regulation.

Introduction
The paper addresses how law and policy should be tailored to the capabilities and risks of large generative AI models (LGAIMs). It situates LGAIMs within ongoing regulatory debates in the EU (AI Act, AI Liability Directive, Product Liability Directive, Digital Services Act, Digital Markets Act) and argues that current frameworks are ill-prepared, focusing too much on direct AI regulation and too little on urgent content moderation concerns. The authors contend that technology-neutral regimes (e.g., the GDPR and non-discrimination law) may be better equipped to manage LGAIM risks than the technology-specific measures currently being drafted. They set out to develop appropriate concepts and duties across the AI value chain and to propose concrete policy measures, including a shift of high-risk obligations toward deployers and users, targeted upstream duties for developers, and the extension of content moderation tools to LGAIMs.
Literature Review
The paper builds on and critiques emerging literature on foundation models/LGAIMs, EU AI Act proposals for general-purpose AI systems (GPAIS), GDPR implications (including model inversion risks), and DSA-based content moderation frameworks. It references technical foundations of large models and risks (bias, hallucinations, energy use), legal analyses criticizing over-broad or impracticable GPAIS obligations, and scholarship proposing more technology-neutral enforcement. It also draws on case law (CJEU discrimination cases) and comparative regulatory tools (notice-and-action, trusted flaggers, pretrial discovery analogies) to frame gaps and solutions.
Methodology
Normative and doctrinal legal-policy analysis. The authors: (1) summarize technical characteristics and use scenarios of LGAIMs; (2) analyze the EU AI Act (Council and Parliament versions) with a focus on GPAIS/foundation models; (3) examine applicability and challenges of non-discrimination law and the GDPR to LGAIMs (including model inversion and transparency obligations); (4) assess the DSA’s scope and limitations for moderating AI-generated content; and (5) propose a structured regulatory framework (three-layer model) and concrete policies (transparency, risk management focused on applications, upstream non-discrimination data audits, DSA-style mechanisms for LGAIMs). Two illustrative use cases (sportswear design; private party invitations) clarify allocation of roles and duties across developers, deployers, professional and non-professional users, and recipients.
Key Findings
- Current AI Act GPAIS provisions are over-inclusive and impose impracticable risk management burdens on versatile LGAIMs; treating them as per se high-risk would be inefficient and anti-competitive, likely benefiting only large incumbents.
- Regulation should distinguish actors along the AI value chain (developers, deployers, professional/non-professional users, recipients) and allocate duties accordingly.
- Three-layer approach: (1) minimum standards for all LGAIMs (GDPR, non-discrimination law, selected data governance and cybersecurity, sustainability, baseline transparency, and content moderation duties); (2) high-risk obligations triggered by concrete high-risk use cases, primarily for deployers and professional users; (3) mandated collaboration across the value chain to enable compliance, including structured information sharing with safeguards and joint-and-several-style accountability mechanisms.
- Non-discrimination law can apply upstream to developers when models are built or offered for domains covered by equality law; users must monitor outputs for discriminatory harms (with expanded duties beyond the current high-risk/professional scope).
- GDPR compliance is challenged by model inversion risks and transparency requirements for both training data (Article 14) and user-provided data (Article 13); the Italian DPA's ChatGPT action underscores the need for clearer legal bases, transparency, minors' protection, and accuracy/fairness.
- The DSA does not clearly cover standalone LGAIMs; core DSA tools (notice-and-action, trusted flaggers, dispute resolution, audits) should be selectively extended to LGAIMs and their integration within platforms; watermarking and AI-content detection could support transparency and enforcement.
- Practical proposals: staged releases and regulated codes of conduct; transparency for developers/deployers (data provenance, performance, incidents, GHG emissions) and for professional users (disclosure of AI-generated/adapted content to recipients in public-facing domains); proportional upstream non-discrimination data audits; DSA-style content moderation mechanisms tailored to LGAIMs.
Discussion
Focusing high-risk obligations on concrete applications and on deployers/users directly addresses the core regulatory challenge posed by LGAIM versatility: it avoids hypothetical, unmanageable risk assessments at the model level and places responsibilities with actors controlling real-world deployments. Upstream minimum standards (data governance for bias, cybersecurity, transparency) mitigate systemic risks that are best handled during development. Expanding DSA-style content moderation tools to LGAIMs targets growing threats of disinformation and harmful speech that current platform-centric rules miss. The framework balances feasibility (to prevent consolidation around incumbents) with accountability (through collaboration, protected disclosures, and joint-and-several-type arrangements). It highlights the comparative advantages of technology-neutral laws (GDPR, anti-discrimination) and proposes limited, targeted updates to technology-specific regimes (AI Act, DSA) to keep pace with LGAIMs.
Conclusion
The paper proposes a differentiated regulatory approach for LGAIMs: (1) establish minimum standards that directly bind developers (GDPR, non-discrimination, selective data governance and cybersecurity, sustainability, and core content moderation); (2) apply full high-risk obligations only to specific high-risk applications, with duties falling primarily on deployers and professional users; (3) require structured collaboration across the AI value chain with protected information sharing and joint accountability. It also recommends detailed transparency obligations, staged release strategies, proportional upstream anti-discrimination audits, and selective extension of DSA mechanisms to LGAIMs. Overall, combining technology-neutral frameworks with targeted technology-specific adjustments can better manage LGAIM risks and foster innovation and competition. Policymakers should act quickly to adapt EU regulation to the dynamics of GPT-4 and successors.
Limitations
The study is a normative legal-policy analysis without empirical evaluation. Due to space constraints it does not fully address intellectual property law, power dynamics and political economy, deeper comparisons of technology-neutral vs. technology-specific regulation, or military applications of LGAIMs. Some proposals require further technical development (e.g., robust watermarking, detection) and institutional design (e.g., audits, trusted flaggers, protective orders).