Please use this identifier to cite or link to this item: https://hdl.handle.net/10419/249178 
Year of Publication: 
2022
Series/Report no.: 
SAFE Working Paper No. 335
Publisher: 
Leibniz Institute for Financial Research SAFE, Frankfurt a. M.
Abstract: 
In more and more situations, artificially intelligent algorithms have to model the (social) preferences of the humans on whose behalf they increasingly make decisions. They can learn these preferences through repeated observation of human behavior in social encounters. In such a context, do individuals adjust the selfishness or prosociality of their behavior when it is common knowledge that their actions produce various externalities through the training of an algorithm? In an online experiment, we let participants' choices in dictator games train an algorithm. In doing so, participants create an externality on the future decision making of an intelligent system that affects future participants. We show that individuals who are aware of the consequences of their training on the payoffs of a future generation behave more prosocially, but only when they bear the risk of being harmed themselves by future algorithmic choices. In that case, the externality of artificial intelligence training induces a significantly higher share of egalitarian decisions in the present.
Subjects: 
Artificial Intelligence
Morality
Prosociality
Generations
Externalities
JEL: 
C49
C91
D10
D62
D63
O33
Persistent Identifier of the first edition: 
Document Type: 
Working Paper
