Cross-Lingual Pitfalls: Automatic Probing Cross-Lingual Weakness of Multilingual Large Language Models
Xu, Zixiang; Wang, Yanbo; Huang, Yue; Chen, Xiuying; Zhao, Jieyu; Jiang, Meng; Zhang, Xiangliang
Department
Natural Language Processing
Type
Conference proceeding
Date
2025
Language
English
Abstract
Large Language Models (LLMs) have achieved remarkable success in Natural Language Processing (NLP), yet their cross-lingual performance consistency remains a significant challenge. This paper introduces a novel methodology for efficiently identifying inherent cross-lingual weaknesses in LLMs. Our approach leverages beam search and LLM-based simulation to generate bilingual question pairs that expose performance discrepancies between English and target languages. Using this methodology, we construct a new dataset of over 6,000 bilingual pairs across 16 languages, demonstrating its effectiveness in revealing weaknesses even in state-of-the-art models. Extensive experiments show that our method precisely and cost-effectively pinpoints cross-lingual weaknesses, consistently revealing accuracy drops of over 50% in target languages across a wide range of models. Further experiments investigate the relationship between linguistic similarity and cross-lingual weaknesses, revealing that linguistically related languages share similar performance patterns and benefit from targeted post-training. Code is available at https://github.com/xzx34/CrossLingual-Pitfalls.
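For intuition, the following is a minimal Python sketch of the selection idea the abstract describes: score each candidate bilingual question pair by the accuracy gap between its English and target-language versions, then keep the highest-gap candidates, as a beam search would at each step. This is not the authors' implementation (see the linked repository for the real code); the names BilingualPair, ask_model, and beam_filter, as well as the sampling parameters, are hypothetical stand-ins.

from dataclasses import dataclass

@dataclass
class BilingualPair:
    question_en: str   # question in English
    question_tgt: str  # the same question in the target language
    answer: str        # gold answer (e.g., a multiple-choice label)

def ask_model(question: str) -> str:
    # Hypothetical stand-in for an LLM API call; replace with a real client.
    raise NotImplementedError

def accuracy_gap(pair: BilingualPair, n_samples: int = 5) -> float:
    # Estimate accuracy(English) - accuracy(target language) by repeated sampling.
    acc_en = sum(ask_model(pair.question_en) == pair.answer
                 for _ in range(n_samples)) / n_samples
    acc_tgt = sum(ask_model(pair.question_tgt) == pair.answer
                  for _ in range(n_samples)) / n_samples
    return acc_en - acc_tgt

def beam_filter(candidates: list[BilingualPair], beam_width: int = 10) -> list[BilingualPair]:
    # Keep the pairs with the largest English-to-target accuracy drop.
    return sorted(candidates, key=accuracy_gap, reverse=True)[:beam_width]

A pair that the model answers reliably in English but not in the target language gets a gap near 1.0 and survives the beam, which is exactly the kind of cross-lingual weakness the dataset is built to expose.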
Citation
Z. Xu et al., “Cross-Lingual Pitfalls: Automatic Probing Cross-Lingual Weakness of Multilingual Large Language Models,” vol. 1, pp. 8254–8284, Aug. 2025, doi: 10.18653/V1/2025.ACL-LONG.404.
Source
Proceedings of the Annual Meeting of the Association for Computational Linguistics
Conference
63rd Annual Meeting of the Association for Computational Linguistics, ACL 2025
Keywords
Multilingual Large Language Models, Cross-Lingual Consistency, Cross-Lingual Weakness, Bilingual Question Pairs, Beam Search, LLM-Based Simulation, Linguistic Similarity, Targeted Post-Training
Publisher
Association for Computational Linguistics
