Please use this identifier to cite or link to this item: https://hdl.handle.net/10419/300752 
Year of Publication: 
2024
Citation: 
[Journal:] Internet Policy Review [ISSN:] 2197-6775 [Volume:] 13 [Issue:] 3 [Year:] 2024 [Pages:] 1-26
Publisher: 
Alexander von Humboldt Institute for Internet and Society, Berlin
Abstract: 
This article provides an initial analysis of the EU AI Act's (AIA) approach to regulating general-purpose artificial intelligence (AI) - such as OpenAI's ChatGPT - and argues that it marks a significant shift from reactive to proactive AI governance. While this may alleviate concerns that regulators are constantly lagging behind technological developments, complex questions remain about the enforceability, democratic legitimacy, and future-proofing of the AIA. We present an interdisciplinary analysis of the relevant technological and legislative developments that ultimately led to the hybrid regulation that the AIA has become: a framework largely focused on product safety and standardisation with some elements related to the protection of fundamental rights. We analyse and discuss the legal requirements and obligations for the development and use of general-purpose AI and present the envisaged enforcement and penalty structure for the (un)lawful use of general-purpose AI in the EU. In conclusion, we argue that the AIA has significant potential to become a global benchmark for governance and regulation in this area of strategic global importance. However, its success hinges on effective enforcement, fruitful intra-European and international cooperation, and the EU's ability to adapt to the rapidly evolving AI landscape.
Subjects: 
Artificial intelligence
General-purpose AI
AI Act
European Union
AI governance
Creative Commons License: 
cc-by
Document Type: 
Article
