Where Are We? Evaluating LLM Performance on African Languages

Adebara, Ife
Toyin, Hawau Olamide
Ghebremichael, Nahom Tesfu
Elmadany, Abdel Rahim A.
Abdul-Mageed, Muhammad
Department
Natural Language Processing
Type
Conference proceeding
Date
2025
Language
English
Abstract
Africa's rich linguistic heritage remains underrepresented in NLP, largely due to historical policies that favor foreign languages and create significant data inequities. In this paper, we integrate theoretical insights on Africa's language landscape with an empirical evaluation using SAHARA, a comprehensive benchmark we curate from large-scale, publicly accessible datasets capturing the continent's linguistic diversity. By systematically assessing the performance of leading large language models (LLMs) on SAHARA, we demonstrate how policy-induced data variations directly impact model effectiveness across African languages. Our findings reveal that while models perform reasonably well on a few languages, many Indigenous languages remain marginalized due to sparse data. Leveraging these insights, we offer actionable recommendations for policy reforms and inclusive data practices. Overall, our work underscores the urgent need for a dual approach, combining theoretical understanding with empirical evaluation, to foster linguistic diversity in AI for African communities.
Citation
I. Adebara, H. O. Toyin, N. T. Ghebremichael, A. R. A. Elmadany, and M. Abdul-Mageed, "Where Are We? Evaluating LLM Performance on African Languages," vol. 1, pp. 32704–32731, Aug. 2025, doi: 10.18653/v1/2025.acl-long.1572
Source
Proceedings of the Annual Meeting of the Association for Computational Linguistics
Conference
63rd Annual Meeting of the Association for Computational Linguistics, ACL 2025
Keywords
African Languages, Large Language Models, Benchmarking Performance, Data Resource Disparities, Multilingual NLP, Language Policy Impacts, Indigenous Language Technology, Inclusive NLP Evaluation
Publisher
Association for Computational Linguistics