On the reliability of Large Language Models to misinformed and demographically informed prompts
Aremu, Toluwani ; Akinwehinmi, Oluwakemi ; Nwagu, Chukwuemeka ; Ahmed, Syed Ishtiaque ; Orji, Rita ; Del Amo, Pedro Arnau ; El Saddik, Abdulmotaleb
Department
Computer Vision
Type
Journal article
Date
2025
License
Language
English
Abstract
We investigate the behavior and performance of Large Language Model (LLM)-backed chatbots in addressing misinformed prompts and questions containing demographic information within the domains of Climate Change and Mental Health. Through a combination of quantitative and qualitative methods, we assess the chatbots' ability to discern the veracity of statements, their adherence to facts, and the presence of bias or misinformation in their responses. Our quantitative analysis using True/False questions reveals that these chatbots can be relied on to answer such close-ended questions correctly. However, qualitative insights gathered from domain experts show that concerns remain regarding privacy, ethical implications, and the necessity for chatbots to direct users to professional services. We conclude that while these chatbots hold significant promise, their deployment in sensitive areas necessitates careful consideration, ethical oversight, and rigorous refinement to ensure they serve as a beneficial augmentation to human expertise rather than an autonomous solution. Dataset and assessment information can be found at https://github.com/tolusophy/Edge-of-Tomorrow.
Citation
T. Aremu et al., “On the reliability of Large Language Models to misinformed and demographically informed prompts,” AI Mag, vol. 46, no. 1, p. e12208, Mar. 2025, doi: 10.1002/AAAI.12208.
Source
AI Magazine
Keywords
Large Language Models (LLMs), Misinformation, Linguistic, Demographic biases, Climate change, Mental health
Publisher
Wiley
