Please use this identifier to cite or link to this item: https://hdl.handle.net/10419/250397 
Authors: 
Year of Publication: 
2021
Citation: 
[Journal:] Internet Policy Review [ISSN:] 2197-6775 [Volume:] 10 [Issue:] 4 [Publisher:] Alexander von Humboldt Institute for Internet and Society [Place:] Berlin [Year:] 2021 [Pages:] 1-29
Publisher: 
Alexander von Humboldt Institute for Internet and Society, Berlin
Abstract: 
This paper introduces a socio-technical typology of bias in data-driven machine learning and artificial intelligence systems. The typology is linked to the conceptualisations of legal anti-discrimination regulations, so that the concept of structural inequality, and therefore of undesirable bias, is defined accordingly. By analysing the controversial Austrian "AMS algorithm" as a case study, as well as examples in the contexts of face detection, risk assessment and health care management, this paper defines the following three types of bias: firstly, purely technical bias as a systematic deviation of the datafied version of a phenomenon from reality; secondly, socio-technical bias as a systematic deviation due to structural inequalities, which must be strictly distinguished from, thirdly, societal bias, which correctly depicts the structural inequalities that prevail in society. This paper argues that a clear distinction must be made between different concepts of bias in such systems in order to analytically assess these systems and, subsequently, inform political action.
Subjects: 
Artificial intelligence
Machine learning
Bias
Persistent Identifier of the first edition: 
Creative Commons License: 
cc-by
Document Type: 
Article
