Please use this identifier to cite or link to this item: https://hdl.handle.net/10419/94321 
Year of Publication: 
1998
Series/Report no.: 
Working Paper No. 1998-25
Publisher: 
Rutgers University, Department of Economics, New Brunswick, NJ
Abstract: 
This paper describes the results of simulation experiments performed on a suite of learning algorithms. We focus on games in network contexts. These are contexts in which (1) agents have very limited information about the game: they do not know their own (or any other agent's) payoff function and merely observe the outcomes of their own play; and (2) play can be extremely asynchronous: players update their strategies at very different rates. There are many proposed learning algorithms in the literature. We choose a small sample of such algorithms and use numerical simulation to explore the nature of asymptotic play. In particular, we explore the extent to which asymptotic play depends on three factors: limited information, asynchronous play, and the degree of responsiveness of the learning algorithm.
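The setting described in the abstract can be illustrated with a minimal sketch: two players repeatedly play a game, each observing only its own realized payoff, each revising its strategy at its own rate (asynchrony), and each updating payoff estimates with a tunable step size (responsiveness). The example game, the epsilon-greedy revision rule, and all names below are illustrative assumptions, not the paper's actual algorithms or experiments.

# Minimal sketch (assumed, not the paper's experiment) of payoff-based learning
# with asynchronous updates in a 2x2 coordination game.
import random

# Illustrative payoff matrices, indexed [row action][column action].
PAYOFFS = {
    0: [[2, 0], [0, 1]],  # row player's payoffs
    1: [[2, 0], [0, 1]],  # column player's payoffs
}

def simulate(steps=10_000, update_probs=(0.9, 0.1), responsiveness=0.1, explore=0.05):
    """update_probs controls how often each player revises (asynchrony);
    responsiveness is the step size of the payoff-estimate updates."""
    # Each player keeps an estimated value for each of its two actions;
    # it never sees a payoff function, only realized payoffs.
    values = [[0.0, 0.0], [0.0, 0.0]]
    actions = [random.randrange(2), random.randrange(2)]
    for _ in range(steps):
        for i in (0, 1):
            if random.random() < update_probs[i]:  # asynchronous revision opportunity
                if random.random() < explore:
                    actions[i] = random.randrange(2)  # occasional exploration
                else:
                    actions[i] = max((0, 1), key=lambda a: values[i][a])  # greedy choice
        for i in (0, 1):
            # Observe only one's own realized payoff from the joint outcome.
            payoff = PAYOFFS[i][actions[0]][actions[1]]
            a = actions[i]
            values[i][a] += responsiveness * (payoff - values[i][a])
    return actions

if __name__ == "__main__":
    print("asymptotic action profile:", simulate())

Varying update_probs and responsiveness in a sketch like this mimics the two dimensions the paper's simulations explore: how unevenly players revise, and how strongly recent payoffs move their estimates.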
Subjects: 
learning
JEL: 
C72
Document Type: 
Working Paper

Files in This Item:
File size: 543.78 kB