Please use this identifier to cite or link to this item: https://hdl.handle.net/10419/189797 
Year of Publication: 2018
Series/Report no.: cemmap working paper No. CWP56/18
Publisher: Centre for Microdata Methods and Practice (cemmap), London
Abstract: 
Currently there is little practical advice on which treatment effect estimator to use when trying to adjust for observable differences. A recent suggestion is to compare the performance of estimators in simulations that somehow mimic the empirical context. Two ways to run such "empirical Monte Carlo studies" (EMCS) have been proposed. We show theoretically that neither is likely to be informative except under restrictive conditions that are unlikely to be satisfied in many contexts. To test empirical relevance, we also apply the approaches to a real-world setting where estimator performance is known. We find that in our setting both EMCS approaches are worse than random at selecting estimators which minimise absolute bias. They are better when selecting estimators that minimise mean squared error. However, using a simple bootstrap is at least as good and often better. For now, researchers would be best advised to use a range of estimators and compare estimates for robustness.
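The abstract refers to comparing candidate estimators in simulations that mimic the empirical context. Purely as an illustrative aid, and not as the paper's own design, the sketch below shows one way a "placebo"-style EMCS might be organised: a placebo treatment is simulated from an estimated propensity score among non-treated units, so the true effect is zero by construction, and candidate estimators are then ranked by absolute bias and mean squared error across replications. All data, estimator choices, and function names here are assumptions made for illustration.

```python
# Hypothetical sketch of a placebo-style empirical Monte Carlo study (EMCS).
# This is NOT the paper's procedure; it only illustrates the general idea of
# ranking treatment effect estimators in simulations that mimic the data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Placeholder "real" sample: covariates X, treatment D, outcome Y.
n = 2000
X = rng.normal(size=(n, 3))
D = rng.binomial(1, 1.0 / (1.0 + np.exp(-X[:, 0])))
Y = X @ np.array([1.0, 0.5, -0.5]) + rng.normal(size=n)

# Estimate the propensity score on the full sample, then keep only the
# non-treated units: any treatment simulated for them is a placebo, so the
# true average effect in the simulation world is zero by construction.
ps_model = LogisticRegression().fit(X, D)
X0, Y0 = X[D == 0], Y[D == 0]
p0 = ps_model.predict_proba(X0)[:, 1]

def reg_adjustment(x, d, y):
    # Outcome regression with a treatment dummy; its coefficient is the estimate.
    Z = np.column_stack([np.ones(len(y)), d, x])
    beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
    return beta[1]

def ipw(x, d, y):
    # Inverse probability weighting with a re-estimated propensity score.
    p = LogisticRegression().fit(x, d).predict_proba(x)[:, 1]
    return np.mean(d * y / p) - np.mean((1 - d) * y / (1 - p))

estimators = {"regression adjustment": reg_adjustment, "IPW": ipw}
draws = {name: [] for name in estimators}

for _ in range(200):                      # EMCS replications
    d_placebo = rng.binomial(1, p0)       # placebo treatment, true effect = 0
    for name, est in estimators.items():
        draws[name].append(est(X0, d_placebo, Y0))

for name, est_draws in draws.items():
    e = np.asarray(est_draws)             # true effect is zero, so e is the error
    print(f"{name}: |bias| = {abs(e.mean()):.3f}, MSE = {np.mean(e**2):.3f}")
```

The bootstrap alternative mentioned in the abstract would instead resample the original data and compare estimates across resamples; the sketch is only meant to make concrete what "comparing estimators in mimicking simulations" involves.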
Subjects: empirical Monte Carlo studies; program evaluation; selection on observables; treatment effects
JEL: C15; C21; C25; C52
Document Type: Working Paper

Files in This Item: 1 file (367.05 kB)

Items in EconStor are protected by copyright, with all rights reserved, unless otherwise indicated.