Please use this identifier to cite or link to this item: https://hdl.handle.net/10419/284130 
Year of Publication: 
2023
Series/Report no.: 
cemmap working paper No. CWP06/23
Publisher: 
Centre for Microdata Methods and Practice (cemmap), London
Abstract: 
We consider penalized extremum estimation of a high-dimensional, possibly nonlinear model that is sparse in the sense that most of its parameters are zero but some are not. We use the SCAD penalty function, which provides model-selection-consistent and oracle-efficient estimates under suitable conditions. However, asymptotic approximations based on the oracle model can be inaccurate with the sample sizes found in many applications. This paper gives conditions under which the bootstrap, based on estimates obtained through SCAD penalization with thresholding, provides asymptotic refinements of size O(n⁻²) for the error in the rejection (coverage) probability of a symmetric hypothesis test (confidence interval) and O(n⁻¹) for the error in the rejection (coverage) probability of a one-sided or equal-tailed test (confidence interval). The results of Monte Carlo experiments show that the bootstrap can provide large reductions in errors in coverage probabilities. The bootstrap is consistent, though it does not necessarily provide asymptotic refinements, even if some parameters are close but not equal to zero. Random-coefficients logit and probit models and nonlinear moment models are examples of models to which the procedure applies.
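For context, a minimal sketch of the ingredients named in the abstract, assuming the standard Fan and Li (2001) form of the SCAD penalty and a generic penalized extremum objective; the notation (sample criterion Q_n, tuning parameter lambda, constant a) is illustrative and the paper's exact formulation and regularity conditions may differ:

% Illustrative only: Q_n, lambda, and a are notation introduced here, not taken from the paper.
\[
\hat{\theta} \;=\; \arg\min_{\theta \in \mathbb{R}^{p}} \Big\{ Q_n(\theta) \;+\; \sum_{j=1}^{p} p_{\lambda}\!\left(|\theta_j|\right) \Big\},
\qquad
p_{\lambda}(t) \;=\;
\begin{cases}
\lambda t, & 0 \le t \le \lambda,\\
\dfrac{2a\lambda t - t^{2} - \lambda^{2}}{2(a-1)}, & \lambda < t \le a\lambda,\\
\dfrac{(a+1)\lambda^{2}}{2}, & t > a\lambda,
\end{cases}
\qquad a > 2 \ (\text{commonly } a = 3.7).
\]

In the usual implementation of such procedures, thresholding sets to zero any estimated coefficient whose magnitude falls below a cutoff, and the bootstrap is then applied to the resulting restricted model; the rates O(n⁻²) and O(n⁻¹) quoted in the abstract refer to the coverage or rejection errors of symmetric versus one-sided or equal-tailed bootstrap tests and confidence intervals.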
Subjects: 
extremum estimation
nonlinear models
high-dimensional inference
bootstrap-based confidence intervals
asymptotic refinements
Document Type: 
Working Paper

Files in This Item:
Size: 468.87 kB

Items in EconStor are protected by copyright, with all rights reserved, unless otherwise indicated.