Evaluating and mitigating bias in AI-based medical text generation
Chen, Xiuying ; Wang, Tairan ; Zhou, Juexiao ; Song, Zirui ; Gao, Xin ; Zhang, Xiangliang
Department
Natural Language Processing
Type
Journal article
Date
2025
Language
English
Abstract
Artificial intelligence (AI) systems, particularly those based on deep learning models, have increasingly achieved expert-level performance in medical applications. However, there is growing concern that such AI systems may reflect and amplify human bias, reducing the quality of their performance in historically underserved populations. The fairness issue has attracted considerable research interest in the medical imaging classification field, yet it remains understudied in the text-generation domain. In this study, we investigate the fairness problem in text generation within the medical field and observe substantial performance discrepancies across races, sexes, and age groups, including intersectional groups; these discrepancies persist across model scales and evaluation metrics. To mitigate this fairness issue, we propose an algorithm that selectively optimizes the underserved groups to reduce bias. Our evaluations across multiple backbones, datasets, and modalities demonstrate that the proposed algorithm enhances fairness in text generation without compromising overall performance.
Citation
X. Chen, T. Wang, J. Zhou, Z. Song, X. Gao, and X. Zhang, “Evaluating and mitigating bias in AI-based medical text generation,” Nat Comput Sci, pp. 1–9, Apr. 2025, doi: 10.1038/s43588-025-00789-7.
Source
Nature Computational Science
Publisher
Springer Nature
