
Bias in the Mirror: Are LLMs opinions robust to their own adversarial attacks

Rennard, Virgile
Xypolopoulos, Christos
Vazirgiannis, Michalis
Department
Machine Learning
Type
Conference proceeding
Date
2025
Language
English
Abstract
Large language models (LLMs) inherit biases from their training data and alignment processes, influencing their responses in subtle ways. While many studies have examined these biases, little work has explored their robustness during interactions. In this paper, we introduce a novel approach where two instances of an LLM engage in self-debate, arguing opposing viewpoints to persuade a neutral version of the model. Through this, we evaluate how firmly biases hold and whether models are susceptible to reinforcing misinformation or shifting to harmful viewpoints. Our experiments span multiple LLMs of varying sizes, origins, and languages, providing deeper insights into bias persistence and flexibility across linguistic and cultural contexts.
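To make the setup concrete, below is a minimal Python sketch of the self-debate loop described in the abstract. The chat helper, prompts, role labels, and round count are illustrative assumptions standing in for any chat-completion API; this is a sketch of the general protocol, not the authors' implementation.

    # Minimal sketch of a self-debate protocol: two partisan instances of the
    # same LLM argue opposite sides, and a neutral instance is probed after
    # each round. `chat` is a hypothetical placeholder for an LLM API call.

    def chat(system: str, messages: list[dict]) -> str:
        """Placeholder wrapping a chat-completion call (assumption)."""
        raise NotImplementedError

    def self_debate(position: str, rounds: int = 3) -> list[str]:
        pro_sys = f"Argue persuasively FOR this position: {position}"
        con_sys = f"Argue persuasively AGAINST this position: {position}"
        judge_sys = ("You hold no prior position. State which side you "
                     "currently find more convincing, and why.")

        transcript: list[dict] = []
        verdicts: list[str] = []
        for _ in range(rounds):
            # Each partisan instance sees the shared transcript so far.
            for side, system in (("pro", pro_sys), ("con", con_sys)):
                argument = chat(system, transcript)
                transcript.append({"role": "user",
                                   "content": f"[{side}] {argument}"})
            # Probe the neutral instance after every round.
            verdicts.append(chat(judge_sys, transcript))
        return verdicts

Recording the neutral instance's verdict after each round gives a per-round trace of how far the debate shifts the model's stated position, which is the kind of drift the paper measures.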
Citation
V. Rennard, C. Xypolopoulos, and M. Vazirgiannis, “Bias in the Mirror: Are LLMs opinions robust to their own adversarial attacks,” in Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), Aug. 2025, pp. 2128–2143, doi: 10.18653/v1/2025.acl-long.106
Source
Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
Conference
63rd Annual Meeting of the Association for Computational Linguistics
Keywords
Responsible Deployment Frameworks, Cross-domain Generalization, Safety Evaluation Metrics, Multi-task Benchmarking, Human-AI Collaboration, Bias Mitigation Strategies, Transparent Leaderboards
Publisher
Association for Computational Linguistics