Please use this identifier to cite or link to this item: https://hdl.handle.net/10419/217136 
Authors: 
Year of Publication: 
2019
Citation: 
[Journal:] Quantitative Economics [ISSN:] 1759-7331 [Volume:] 10 [Issue:] 1 [Publisher:] The Econometric Society [Place:] New Haven, CT [Year:] 2019 [Pages:] 43-65
Publisher: 
The Econometric Society, New Haven, CT
Abstract: 
Morton and Wecker (1977) stated that the value iteration algorithm solves a dynamic program's policy function faster than its value function when the limiting Markov chain is ergodic. I show that their proof is incomplete, and provide a new proof of this classic result. I use this result to accelerate the estimation of Markov decision processes and the solution of Markov perfect equilibria.
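As a rough illustration of the result described in the abstract (not code from the paper itself), the following Python sketch runs value iteration on a small, hypothetical random MDP and records when the greedy policy last changed versus when the value function converges in sup norm. On ergodic problems like this one, the policy typically stabilizes long before the value function does, which is the Morton–Wecker phenomenon the paper proves; all constants and the random MDP below are illustrative assumptions.

```python
# A minimal sketch (assumptions, not the paper's algorithm) of value
# iteration on a small random MDP, illustrating that the greedy policy
# stops changing well before the value function converges.
import numpy as np

rng = np.random.default_rng(0)
n_states, n_actions, beta = 20, 4, 0.95

# Rewards r[s, a] and transition kernels P[a, s, s'] with rows summing
# to one; dense random rows make the induced chains ergodic.
r = rng.random((n_states, n_actions))
P = rng.random((n_actions, n_states, n_states))
P /= P.sum(axis=2, keepdims=True)

v = np.zeros(n_states)
policy = np.zeros(n_states, dtype=int)
last_policy_change = 0

for k in range(1, 5001):
    # Bellman update: Q[s, a] = r[s, a] + beta * E[v(s') | s, a].
    q = r + beta * np.einsum("ast,t->sa", P, v)
    v_new, new_policy = q.max(axis=1), q.argmax(axis=1)
    if not np.array_equal(new_policy, policy):
        last_policy_change = k
    if np.max(np.abs(v_new - v)) < 1e-10:  # sup-norm stopping rule
        print(f"value function converged at iteration {k}")
        print(f"policy last changed at iteration {last_policy_change}")
        break
    v, policy = v_new, new_policy
```

Stopping once the policy stabilizes, rather than waiting for value convergence, is what makes the acceleration of nested fixed point and nested pseudo-likelihood estimation described in the abstract possible.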
Subjects: 
Markov decision process
Markov perfect equilibrium
strong convergence
relative value iteration
dynamic discrete choice
nested fixed point
nested pseudo-likelihood
JEL: 
C01
C13
C15
C61
C63
C65
Persistent Identifier of the first edition: 
Creative Commons License: 
cc-by-nc
Document Type: 
Article
