MMTEB: massive multilingual text embedding benchmark
Enevoldsen, Kenneth; Chung, Isaac; Kerboua, Imene; Kardos, Marton; Mathur, Ashwin; Stap, David; Gala, Jay; Siblini, Wissam; Krzeminski, Dominik; Winata, Genta; et al.
Author
Enevoldsen, Kenneth
Chung, Isaac
Kerboua, Imene
Kardos, Marton
Mathur, Ashwin
Stap, David
Gala, Jay
Siblini, Wissam
Krzeminski, Dominik
Winata, Genta
Sturua, Saba
Utpala, Saiteja
Ciancone, Mathieu
Schaeffer, Marion
Misra, Diganta
Dhakal, Shreeya
Rystrom, Jonathan
Solomatin, Roman
Cagatan, Omer
Kundu, Akash
Bernstorff, Martin
Xiao, Shitao
Sukhlecha, Akshita
Pahwa, Bhavish
Posiwata, Rafal
GV, Kranthi Kiran
Ashraf, Shawon
Auras, Daniel
Pluster, Bjorn
Harries, Jan
Magne, Loic
Mohr, Isabelle
Zhu, Dawei
Gisserot-Boukhlef, Hippolyte
Aarsen, Tom
Kostkan, Jan
Wojtasik, Konrad
Lee, Taemin
Suppa, Marek
Zhang, Crystina
Rocca, Roberta
Hamdy, Mohammed
Michail, Andrianos
Yang, John
Faysse, Manuel
Vatolin, Aleksei
Thakur, Nandan
Dey, Manan
Vasani, Dipam
Chitale, Pranjal
Tedeschi, Simone
Tai, Nguyen
Snegirev, Artem
Hendriksen, Mariya
Gunther, Michael
Xia, Mengzhou
Shi, Weijia
Lu, Xing Han
Clive, Jordan
K, Gayatri
Anna, Maksimova
Wehrli, Silvan
Tikhonova, Maria
Panchal, Henil
Abramov, Aleksandr
Ostendorff, Malte
Liu, Zheng
Clematide, Simon
Miranda, Lester James V.
Fenogenova, Alena
Song, Guangyu
Bin Safi, Ruqiya
Li, Wen-Ding
Borghini, Alessia
Cassano, Federico
Hansen, Lasse
Hooker, Sara
Xiao, Chenghao
Adlakha, Vaibhav
Weller, Orion
Reddy, Siva
Muennighoff, Niklas
Department
Natural Language Processing
Type
Conference proceeding
Date
2025
Language
English
Abstract
Text embeddings are typically evaluated on a limited set of tasks, which are constrained by language, domain, and task diversity. To address these limitations and provide a more comprehensive evaluation, we introduce the Massive Multilingual Text Embedding Benchmark (MMTEB) - a large-scale, community-driven expansion of MTEB, covering over 500 quality-controlled evaluation tasks across 250+ languages. MMTEB includes a diverse set of challenging, novel tasks such as instruction following, long-document retrieval, and code retrieval, representing the largest multilingual collection of evaluation tasks for embedding models to date. Using this collection, we develop several highly multilingual benchmarks, which we use to evaluate a representative set of models. We find that while large language models (LLMs) with billions of parameters can achieve state-of-the-art performance on certain language subsets and task categories, the best-performing publicly available model is multilingual-e5-large-instruct with only 560 million parameters. To facilitate accessibility and reduce computational cost, we introduce a novel downsampling method based on inter-task correlation, ensuring a diverse selection while preserving relative model rankings. Furthermore, we optimize tasks such as retrieval by sampling hard negatives, creating smaller but effective splits. These optimizations allow us to introduce benchmarks that drastically reduce computational demands. For instance, our newly introduced zero-shot English benchmark maintains a similar ranking order to the full-scale version but requires only 2% of the original documents, vastly reducing the computational cost. © 2025 13th International Conference on Learning Representations, ICLR 2025. All rights reserved.
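The abstract's correlation-based downsampling idea can be illustrated with a minimal sketch. This is a hypothetical reconstruction, not the paper's actual algorithm: given a models × tasks score matrix, it greedily drops the task whose scores are most redundant (highest mean absolute correlation with the other remaining tasks) until a target task count is reached, so the surviving tasks stay diverse while the information needed to rank models is largely preserved. The function name and selection criterion are assumptions for illustration.

```python
import numpy as np

def downsample_tasks(scores: np.ndarray, n_keep: int) -> list[int]:
    """Greedy correlation-based task selection (illustrative sketch).

    scores: (n_models, n_tasks) matrix of per-task evaluation scores.
    Returns the column indices of the tasks to keep.
    """
    keep = list(range(scores.shape[1]))
    while len(keep) > n_keep:
        sub = scores[:, keep]
        # Task-task Pearson correlation across models.
        corr = np.corrcoef(sub, rowvar=False)
        np.fill_diagonal(corr, 0.0)
        # A task is redundant if it correlates strongly with the others.
        redundancy = np.abs(corr).mean(axis=1)
        # Drop the most redundant remaining task.
        keep.pop(int(np.argmax(redundancy)))
    return keep
```

With two near-duplicate tasks and one independent task, the sketch removes one of the duplicates first, which is the behavior the abstract's "diverse selection" goal calls for.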
Citation
K. Enevoldsen et al., “MMTEB: Massive Multilingual Text Embedding Benchmark,” International Conference on Representation Learning, vol. 2025, pp. 101715–101771, May 2025
Source
13th International Conference on Learning Representations, ICLR 2025
Conference
13th International Conference on Learning Representations, ICLR 2025
Publisher
International Conference on Learning Representations, ICLR
