
Statement-Tuning Enables Efficient Cross-lingual Generalization in Encoder-only Models

Elshabrawy, Ahmed
Nguyen, Thanh-Nhi
Kang, Yeeun
Feng, Lihan
Jain, Annant
Shaikh, Faadil Abdullah
Mansurov, Jonibek
Imam, Mohamed Fazli Mohamed
Ortiz-Barajas, Jesus-German
Chevi, Rendi
Aji, Alham Fikri
Abstract
Large Language Models (LLMs) excel in zero-shot and few-shot tasks, but achieving similar performance with encoder-only models like BERT and RoBERTa has been challenging due to their architecture. However, encoders offer advantages such as lower computational and memory costs. Recent work adapts them for zero-shot generalization using Statement Tuning, which reformulates tasks into finite templates. We extend this approach to multilingual NLP, exploring whether encoders can achieve zero-shot cross-lingual generalization and serve as efficient alternatives to memory-intensive LLMs for low-resource languages. Our results show that state-of-the-art encoder models generalize well across languages, rivaling multilingual LLMs while being more efficient. We also analyze multilingual Statement Tuning dataset design, efficiency gains, and language-specific generalization, contributing to more inclusive and resource-efficient NLP models. We release our code and models.
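As a rough illustration of the reformulation the abstract describes, the sketch below converts one labeled classification example into true/false statements using a fixed template. The template wording, label set, and make_statements helper are hypothetical stand-ins, not the paper's actual templates.

# Minimal sketch of Statement Tuning's task reformulation.
# The template and label set here are assumptions for illustration;
# the paper's finite templates may differ.

LABELS = ["sports", "politics", "science"]  # assumed label set
TEMPLATE = "The topic of this text is {label}. Text: {text}"  # assumed template

def make_statements(text: str, gold_label: str):
    """Turn one labeled example into (statement, truth-value) pairs.

    An encoder with a binary classification head can then be trained to
    predict whether each statement is true, so at inference time a new
    task needs only statements rather than a task-specific head.
    """
    for label in LABELS:
        statement = TEMPLATE.format(label=label, text=text)
        yield statement, int(label == gold_label)

for stmt, truth in make_statements("The striker scored twice.", "sports"):
    print(truth, stmt)

At inference, the highest-scoring statement among the candidate labels determines the prediction, which is what lets a single binary truth classifier generalize across tasks and, in the multilingual setting studied here, across languages.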
Citation
A. Elshabrawy, T.-N. Nguyen, Y. Kang, L. Feng, A. Jain, F.A. Shaikh, J. Mansurov, M.F.M. Imam, J.-G. Ortiz-Barajas, R. Chevi, A.F. Aji, "Statement-Tuning Enables Efficient Cross-lingual Generalization in Encoder-only Models," in Findings of the Association for Computational Linguistics: ACL 2025, Association for Computational Linguistics, 2025, pp. 16226-16248.
Source
Findings of the Association for Computational Linguistics: ACL 2025
Conference
Findings of the Association for Computational Linguistics: ACL 2025
Publisher
Association for Computational Linguistics