Please use this identifier to cite or link to this item: https://hdl.handle.net/10419/286927 
Year of Publication: 
2021
Citation: 
[Journal:] Business & Information Systems Engineering [ISSN:] 1867-0202 [Volume:] 64 [Issue:] 3 [Publisher:] Springer Fachmedien Wiesbaden [Place:] Wiesbaden [Year:] 2021 [Pages:] 335-348
Publisher: 
Springer Fachmedien Wiesbaden, Wiesbaden
Abstract: 
Contemporary information systems make widespread use of artificial intelligence (AI). While AI offers various benefits, it can also be subject to systematic errors, whereby people from certain groups (defined by gender, age, or other sensitive attributes) experience disparate outcomes. In many AI applications, disparate outcomes confront businesses and organizations with legal and reputational risks. To address these risks, technologies for so-called "AI fairness" have been developed, which adapt the AI such that mathematical fairness constraints are fulfilled. However, the financial costs of AI fairness are unclear. Therefore, the authors develop AI fairness for a real-world use case from e-commerce in which coupons are allocated on the basis of clickstream sessions. In this setting, the authors find that AI fairness adheres to the fairness requirements while reducing overall prediction performance only slightly. However, they also find that AI fairness results in an increase in financial cost. In this way, the paper's findings contribute to designing information systems on the basis of AI fairness.
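
As an illustration of the kind of fairness constraint and cost trade-off described in the abstract, the following Python sketch compares an unconstrained coupon-allocation policy with a fairness-constrained one under a demographic-parity criterion. All data, thresholds, and payoff figures are hypothetical and serve only to illustrate the trade-off; they are not the paper's method or results.

import numpy as np

# Hypothetical data: predicted coupon-redemption probabilities, a binary
# sensitive attribute (two groups), and realized redemptions.
rng = np.random.default_rng(0)
n = 10_000
group = rng.integers(0, 2, size=n)                   # sensitive attribute
scores = np.clip(rng.beta(2, 5, size=n) + 0.05 * group, 0, 1)
redeemed = rng.binomial(1, scores)                    # ground-truth redemptions

def parity_gap(decisions, group):
    # Demographic parity gap: difference in coupon-allocation rates.
    return abs(decisions[group == 0].mean() - decisions[group == 1].mean())

def expected_profit(decisions, redeemed, margin=10.0, coupon_cost=2.0):
    # Toy payoff: margin earned on redemptions minus cost of issued coupons.
    return margin * (decisions * redeemed).sum() - coupon_cost * decisions.sum()

# Unconstrained policy: one global threshold on the predicted score.
unconstrained = (scores >= 0.30).astype(int)

# Fairness-constrained policy (simple post-processing sketch): group-specific
# thresholds chosen so both groups receive coupons at roughly the same rate.
target_rate = unconstrained.mean()
thresholds = {g: np.quantile(scores[group == g], 1 - target_rate) for g in (0, 1)}
constrained = np.array([int(s >= thresholds[g]) for s, g in zip(scores, group)])

for name, policy in [("unconstrained", unconstrained), ("constrained", constrained)]:
    print(f"{name:>13}: parity gap = {parity_gap(policy, group):.3f}, "
          f"profit = {expected_profit(policy, redeemed):,.0f}")

Comparing the two printed lines shows the pattern the abstract reports: the constrained policy closes the allocation gap between groups while typically lowering the toy profit measure.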
Subjects: 
AI fairness
Algorithmic fairness
Fair AI
Costs
Artificial intelligence
Machine learning
Persistent Identifier of the first edition: 
Creative Commons License: 
cc-by
Document Type: 
Article
Document Version: 
Published Version
