Please use this identifier to cite or link to this item: https://hdl.handle.net/10419/299987 
Year of Publication: 
2024
Series/Report no.: 
CIGI Papers No. 290
Publisher: 
Centre for International Governance Innovation (CIGI), Waterloo, ON, Canada
Abstract: 
The growing popularity of large language models (LLMs) has raised concerns about their accuracy. These chatbots can be used to provide information, but that information may be tainted by errors or by fabricated content (hallucinations) arising from problematic data sets or incorrect assumptions made by the model. The questionable results produced by chatbots have led to growing disquiet among users, developers and policy makers. The author argues that policy makers need to develop a systemic approach to address these concerns. The current piecemeal approach does not reflect the complexity of LLMs or the magnitude of the data upon which they are based; the author therefore recommends incentivizing greater transparency and accountability around data-set development.
Creative Commons License: 
cc-by
Document Type: 
Working Paper