Please use this identifier to cite or link to this item: https://hdl.handle.net/10419/279067 
Year of Publication: 2023
Series/Report no.: IZA Discussion Papers No. 16369
Publisher: Institute of Labor Economics (IZA), Bonn
Abstract: 
We use unique data from journal submissions to identify and unpack publication bias and p-hacking. We find that initial submissions already display significant bunching of test statistics, suggesting that the distribution of published statistics cannot be fully attributed to publication bias in peer review. Desk-rejected manuscripts display greater heaping than those sent for review; that is, marginally significant results are more likely to be desk rejected. Reviewer recommendations, in contrast, are positively associated with statistical significance. Overall, the peer review process has little effect on the distribution of test statistics. Lastly, we track rejected papers and present evidence that publication bias is perhaps not as prevalent as feared.
Subjects: publication bias; p-hacking; selective reporting
JEL: A11; C13; C40
Document Type: Working Paper

Files in This Item: 1 file, 2.11 MB