Please use this identifier to cite or link to this item: https://hdl.handle.net/10419/277755 
Year of Publication: 2023
Series/Report no.: SAFE Working Paper No. 401
Publisher: Leibniz Institute for Financial Research SAFE, Frankfurt a. M.
Abstract: 
In current discussions of large language models (LLMs) such as GPT, understanding their ability to emulate facets of human intelligence is a central question. Using behavioral economic paradigms and structural models, we investigate GPT's cooperativeness in interactions with humans and assess its rational, goal-oriented behavior. We find that GPT cooperates more than humans do and holds overly optimistic expectations about human cooperation. Intriguingly, additional analyses reveal that GPT's behavior is not random; it displays a level of goal-oriented rationality surpassing that of its human counterparts. Our findings suggest that GPT hyper-rationally aims to maximize social welfare, coupled with a drive for self-preservation. Methodologically, our research highlights how structural models, typically employed to decipher human behavior, can illuminate the rationality and goal orientation of LLMs. This opens a compelling path for future research into the intricate rationality of sophisticated yet enigmatic artificial agents.
Subjects: large language models; cooperation; goal orientation; economic rationality
Document Type: Working Paper

Files in This Item: 1 file, 981.23 kB

Items in EconStor are protected by copyright, with all rights reserved, unless otherwise indicated.