Abstract:
In an era of rapidly advancing artificial intelligence (AI), understanding the extent to which people rely on generative AI tools such as ChatGPT is crucial. This study experimentally investigates whether people rely more on AI tools than on their human peers when assessing the authenticity of misinformation. We quantify participants' degree of reliance using the weight of reference (WOR) and decompose it into two stages with the activation-integration model. Our results indicate that participants rely more on ChatGPT than on their peers, and that this reliance is significantly influenced by the quality of the reference and by participants' prior beliefs. The proportion of real content in the statements did not affect the WOR. In addition, we find that the reference source affects both the activation and integration stages, whereas reference quality influences only the integration stage.
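
For concreteness, the WOR can be read as the fraction of the distance from a participant's initial judgment to the reference's judgment that the participant moves after seeing the reference. The formalization below is only a sketch, assuming the WOR follows the standard weight-of-advice convention; the measure's exact definition appears in the body of the paper, not in this abstract.

\[
\mathrm{WOR} \;=\; \frac{J_{\text{final}} - J_{\text{initial}}}{J_{\text{reference}} - J_{\text{initial}}},
\]
where \(J_{\text{initial}}\) is the participant's authenticity judgment before seeing the reference, \(J_{\text{reference}}\) is the judgment suggested by the reference (ChatGPT or a peer), and \(J_{\text{final}}\) is the participant's revised judgment. Under this convention, \(\mathrm{WOR}=0\) corresponds to ignoring the reference and \(\mathrm{WOR}=1\) to adopting it entirely.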