Please use this link to cite this publication or to refer to it as an internet source: https://hdl.handle.net/10419/258066
Year of publication: 
2020
Citation: 
[Journal:] Risks [ISSN:] 2227-9091 [Volume:] 8 [Issue:] 4 [Article No.:] 113 [Publisher:] MDPI [Place:] Basel [Year:] 2020 [Pages:] 1-20
Publisher: 
MDPI, Basel
Abstract: 
In traditional Reinforcement Learning (RL), agents learn to optimize actions in a dynamic context based on recursive estimation of expected values. We show that this form of machine learning fails when rewards (returns) are affected by tail risk, i.e., leptokurtosis. Here, we adapt a recent extension of RL, called distributional RL (disRL), and introduce estimation efficiency, while properly adjusting for the differential impact of outliers on the two terms of the RL prediction error in the updating equations. We show that the resulting "efficient distributional RL" (e-disRL) learns much faster, and is robust once it settles on a policy. Our paper also provides a brief, nontechnical overview of machine learning, focusing on RL.
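The abstract contrasts expected-value updating with a distributional, outlier-aware update, but it does not spell out the e-disRL equations. The sketch below is therefore only an illustration of that general contrast, not the paper's algorithm: it compares a tabular TD(0) update of an expected value with a quantile-regression style distributional update (the standard disRL idea), whose per-quantile steps are bounded and hence less distorted by leptokurtic rewards. The function names, the Student-t reward with df = 2.5, and the step-size alpha are all illustrative assumptions.

import numpy as np

# Illustrative sketch (not the paper's e-disRL update rules): standard
# expected-value TD(0) versus a quantile-based distributional update.

def td0_update(V, s, r, s_next, alpha=0.1, gamma=0.99):
    """Classical TD(0): the prediction error r + gamma*V[s'] - V[s] enters
    the update linearly, so a single extreme reward can move V[s] a lot."""
    delta = r + gamma * V[s_next] - V[s]
    V[s] += alpha * delta
    return V

def quantile_update(Z, s, r, s_next, alpha=0.1, gamma=0.99):
    """Distributional update on quantile estimates Z (n_states x n_quantiles).
    Each quantile moves by a bounded quantile-regression step, so fat-tailed
    rewards have limited influence on any single update."""
    n = Z.shape[1]
    taus = (np.arange(n) + 0.5) / n          # quantile midpoints
    targets = r + gamma * Z[s_next]          # sample Bellman targets
    for j in range(n):
        # negative gradient of the pinball loss; always bounded in (-1, 1)
        grad = np.mean(taus[j] - (targets < Z[s, j]).astype(float))
        Z[s, j] += alpha * grad
    return Z

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n_states, n_quantiles = 2, 51
    V = np.zeros(n_states)
    Z = np.zeros((n_states, n_quantiles))
    for _ in range(5000):
        # leptokurtic rewards: Student-t with 2.5 degrees of freedom
        r = rng.standard_t(df=2.5)
        V = td0_update(V, s=0, r=r, s_next=1)
        Z = quantile_update(Z, s=0, r=r, s_next=1)
    print("TD(0) value estimate:", V[0])
    print("Median of quantile estimates:", np.median(Z[0]))

Running the sketch repeatedly with different seeds shows the expected-value estimate swinging with occasional extreme draws, while the median of the quantile grid stays comparatively stable, which is the qualitative point the abstract makes about robustness to tail risk.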
Keywords: 
distributional reinforcement learning
Markov decision process
leptokurtic distribution
tail risk
efficient estimator
Persistent identifier of the first edition: 
Creative Commons license: 
cc-by
Document type: 
Article
File(s): 661.21 kB
Publications in EconStor are protected by copyright.