Please use this identifier to cite or link to this item: https://hdl.handle.net/10419/299323 
Year of Publication: 2023
Series/Report no.: Queen’s Economics Department Working Paper No. 1510
Publisher: Queen's University, Department of Economics, Kingston (Ontario)
Abstract: 
Predictive AI is increasingly used to guide decisions about agents. I show that even a bias-neutral predictive AI can amplify exogenous (human) bias in settings where the predictive AI offers a cost-adjusted precision gain over unbiased predictions and the final judgments are made by biased human evaluators. In the absence of perfect and instantaneous belief updating, expected victims of bias become less likely to be saved by randomness as predictions grow more precise. If this effect dominates, aggregate discrimination can increase. Not accounting for this mechanism may result in AI being unduly blamed for creating bias.
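To make the mechanism concrete, the following is a minimal simulation sketch. It is not taken from the paper; the scoring model, parameter values, and all names in it are illustrative assumptions. A biased evaluator subtracts a fixed penalty from the predicted score of one group before applying a common approval threshold; noisy predictions occasionally "save" qualified members of that group through lucky draws, while more precise predictions remove those escapes.

import numpy as np

rng = np.random.default_rng(0)

# Stylized setup (illustrative assumptions, not the paper's model):
# each agent has a true quality; the predictive score equals quality
# plus noise whose standard deviation falls as AI precision rises.
# A biased human evaluator subtracts a fixed penalty from group B's
# score before applying the same approval threshold to both groups.

n = 200_000          # agents per group
quality = rng.normal(0.0, 1.0, size=n)  # true quality, identical across groups
threshold = 0.0      # approval cutoff applied to the evaluator's score
bias_penalty = 0.3   # evaluator's penalty applied only to group B

def approval_gap(noise_sd: float) -> float:
    """Gap in approval rates between groups A and B among agents whose
    true quality clears the threshold (i.e., among qualified agents)."""
    score_a = quality + rng.normal(0.0, noise_sd, size=n)                 # unbiased treatment
    score_b = quality + rng.normal(0.0, noise_sd, size=n) - bias_penalty  # biased treatment
    qualified = quality > threshold
    return np.mean(score_a[qualified] > threshold) - np.mean(score_b[qualified] > threshold)

# More precise predictions (smaller noise) leave fewer qualified group-B
# agents "saved" by lucky draws, so the measured gap widens.
for sd in (1.0, 0.5, 0.1):
    print(f"noise sd = {sd:.1f}  ->  approval gap among qualified agents = {approval_gap(sd):.3f}")

In this stylized setup, the approval gap among equally qualified agents widens as the prediction noise shrinks, which is the amplification channel the abstract describes: the human bias is exogenous and constant, yet aggregate discrimination rises with AI precision.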
Subjects: artificial intelligence; AI; algorithm; human-machine interactions; discrimination; bias; algorithmic bias; financial institutions
JEL: O33; J15; G2
Document Type: Working Paper
