Please use this link to cite this publication or to reference it as an Internet source: https://hdl.handle.net/10419/300973
Year of publication:
2024
Series/Report No.:
IZA Discussion Papers No. 17077
Publisher:
Institute of Labor Economics (IZA), Bonn
Abstract:
Subjective performance evaluation is an important part of hiring and promotion decisions. We combine experiments with administrative data to understand what drives gender bias in such evaluations in the technology industry. Our results highlight the role of personal interaction. Leveraging 60,000 mock video interviews on a platform for software engineers, we find that average ratings for code quality and problem solving are 12 percent of a standard deviation lower for women than for men. Half of these gaps remain unexplained when we control for automated measures of coding performance. To test for statistical and taste-based bias, we analyze two field experiments. Our first experiment shows that providing evaluators with automated performance measures does not reduce the gender gap. Our second experiment removes video interaction and compares blind to non-blind evaluations; no gender gap is present in either case. These results rule out traditional economic models of discrimination. Instead, we show that gender gaps widen with extended personal interaction and are larger for evaluators educated in regions where implicit association test scores are higher.
Keywords:
discrimination
gender
coding
experiment
information
JEL: 
C93
D83
J16
J71
M51
Document type:
Working Paper

File(s):
Size: 1.72 MB
Publications in EconStor are protected by copyright.