Please use this link to cite this publication or to refer to it as an internet source: https://hdl.handle.net/10419/239166 
Year of publication: 
2020
Citation: 
[Journal:] Journal of Risk and Financial Management [ISSN:] 1911-8074 [Volume:] 13 [Issue:] 4 [Publisher:] MDPI [Place:] Basel [Year:] 2020 [Pages:] 1-12
Publisher: 
MDPI, Basel
Abstract: 
We present a deep reinforcement learning framework for the automatic trading of contracts for difference (CfDs) on indices at high frequency. Our contribution shows that reinforcement learning agents with recurrent long short-term memory (LSTM) networks can learn from recent market history and outperform the market. Usually, such approaches depend on low latency; in a real-world example, we show that an increased model size can compensate for higher latency. As the noisy nature of economic trends complicates predictions, especially for speculative assets, our approach does not predict prices but instead uses a reinforcement learning agent to learn an overall lucrative trading policy. To this end, we simulate a virtual market environment based on historical trading data. Our environment presents a partially observable Markov decision process (POMDP) to reinforcement learners and allows the training of various strategies.
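The abstract outlines the architecture: an LSTM-based Q-network trained inside a simulated market environment that replays historical data as a POMDP. The following minimal Python/PyTorch sketch illustrates how such pieces could fit together; the network shape, the three-action set (hold/long/short), the observation window, and the log-return reward are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch (not the paper's code): an LSTM Q-network over a window of
# recent returns, plus a POMDP-style environment replaying historical prices.
# All names, hyperparameters, and the toy reward are assumptions.
import numpy as np
import torch
import torch.nn as nn

ACTIONS = ("hold", "long", "short")  # assumed discrete CfD action set

class LstmQNet(nn.Module):
    """Maps a partial observation (last `window` returns) to Q-values."""
    def __init__(self, n_features=1, hidden=64, n_actions=len(ACTIONS)):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_actions)

    def forward(self, x):                 # x: (batch, window, n_features)
        out, _ = self.lstm(x)             # recurrent pass over the window
        return self.head(out[:, -1, :])   # Q-values from the last timestep

class ReplayMarketEnv:
    """POMDP-style replay of historical prices: the agent only ever sees a
    short window of past returns, never the full market state."""
    def __init__(self, prices, window=32):
        self.returns = np.diff(np.log(prices)).astype(np.float32)
        self.window = window
        self.t = window

    def reset(self):
        self.t = self.window
        return self.returns[self.t - self.window:self.t]

    def step(self, action):               # 0=hold, 1=long, 2=short
        position = {0: 0.0, 1: 1.0, 2: -1.0}[action]
        reward = position * self.returns[self.t]   # P&L of the held position
        self.t += 1
        done = self.t >= len(self.returns)
        obs = None if done else self.returns[self.t - self.window:self.t]
        return obs, reward, done

# Greedy rollout of an (untrained) network on synthetic prices.
prices = 100.0 * np.exp(np.cumsum(np.random.randn(1000) * 1e-3))
env, net = ReplayMarketEnv(prices), LstmQNet()
obs, total, done = env.reset(), 0.0, False
with torch.no_grad():
    while not done:
        q = net(torch.tensor(obs).view(1, -1, 1))
        obs, r, done = env.step(int(q.argmax()))
        total += r
print(f"cumulative log-return: {total:.4f}")
```

In a Q-learning setup as named in the keywords, the rollout above would feed a replay buffer and the network would be updated toward bootstrapped targets; the greedy loop here only demonstrates the observation/action/reward cycle.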
Keywords: 
CfD
contract for difference
deep learning
long short-term memory
LSTM
neural networks
Q-learning
reinforcement learning
RL
Persistent identifier of the first edition: 
Creative Commons License: 
cc-by
Document type: 
Article

Publications in EconStor are protected by copyright.