Please use this identifier to cite or link to this item: https://hdl.handle.net/10419/287698 
Year of Publication: 
2021
Citation: 
[Journal:] Journal of Revenue and Pricing Management [ISSN:] 1477-657X [Volume:] 21 [Issue:] 1 [Publisher:] Palgrave Macmillan UK [Place:] London [Year:] 2021 [Pages:] 50-63
Publisher: 
Palgrave Macmillan UK, London
Abstract: 
Dynamic pricing is regarded as a way to gain an advantage over competitors in modern online markets. Recent advances in reinforcement learning (RL) have produced more capable algorithms that can be applied to pricing problems. In this paper, we study the performance of Deep Q-Networks (DQN) and Soft Actor-Critic (SAC) in different market models. We consider tractable duopoly settings, where optimal solutions derived by dynamic programming techniques can be used for verification, as well as oligopoly settings, which are usually intractable due to the curse of dimensionality. We find that both algorithms provide reasonable results, with SAC performing better than DQN. Moreover, we show that under certain conditions, RL algorithms can be forced into collusion by their competitors without direct communication.
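To make the verification idea from the abstract concrete, the following is a minimal sketch of a dynamic-programming baseline for a tractable duopoly pricing problem. It is not the model studied in the paper: the price grid, discount factor, logit-style demand function, and the competitor's undercutting rule (PRICES, GAMMA, demand_prob, competitor_reaction) are all illustrative assumptions. The sketch only shows why the duopoly case is solvable exactly: with one competitor whose price is the state, value iteration over a small state-action table yields the optimal pricing policy, which can then serve as a benchmark for learned DQN or SAC policies.

    import numpy as np

    # Hypothetical duopoly model: the state is the competitor's current
    # price, the action is our own price from a discrete grid. The
    # competitor reacts with a deterministic undercutting rule. All
    # parameters are illustrative, not taken from the paper.

    PRICES = np.arange(1.0, 10.5, 0.5)   # discrete price grid
    GAMMA = 0.95                         # discount factor

    def demand_prob(own_price, comp_price):
        """Logit-style probability of a sale at the given prices."""
        u = -own_price + 0.5 * comp_price    # cheaper than the rival sells more
        return 1.0 / (1.0 + np.exp(-u))

    def competitor_reaction(own_price):
        """Competitor undercuts our price by one grid step, bounded below."""
        idx = np.searchsorted(PRICES, own_price)
        return PRICES[max(idx - 1, 0)]

    def value_iteration(tol=1e-8):
        """Solve the pricing MDP exactly by value iteration."""
        V = np.zeros(len(PRICES))            # value per competitor-price state
        while True:
            Q = np.empty((len(PRICES), len(PRICES)))
            for s, comp_price in enumerate(PRICES):
                for a, own_price in enumerate(PRICES):
                    reward = demand_prob(own_price, comp_price) * own_price
                    nxt = np.searchsorted(PRICES, competitor_reaction(own_price))
                    Q[s, a] = reward + GAMMA * V[nxt]
            V_new = Q.max(axis=1)
            if np.max(np.abs(V_new - V)) < tol:
                return Q.argmax(axis=1), V_new
            V = V_new

    policy, value = value_iteration()
    for s, comp_price in enumerate(PRICES):
        print(f"competitor at {comp_price:4.1f} -> own price {PRICES[policy[s]]:4.1f}")

In an oligopoly, the state would have to encode every competitor's price, so the table above grows exponentially with the number of competitors; this is the curse of dimensionality that motivates the RL approaches compared in the paper.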
Subjects: 
Dynamic pricing
Competition
Reinforcement learning
E-commerce
Price collusion
Persistent Identifier of the first edition: 
Creative Commons License: 
cc-by
Document Type: 
Article
Document Version: 
Published Version
