Which Word Orders Facilitate Length Generalization in LMs? An Investigation with GCG-Based Artificial Languages

El-Naggar, Nadine
Kuribayashi, Tatsuki
Briscoe, Ted
Department
Natural Language Processing
Type
Conference proceeding
Date
2025
Language
English
Abstract
Whether language models (LMs) have inductive biases that favor typologically frequent grammatical properties over rare, implausible ones has typically been investigated using artificial languages (ALs) (White and Cotterell, 2021; Kuribayashi et al., 2024). In this paper, we extend these works in two respects. First, we extend their context-free AL formalization by adopting Generalized Categorial Grammar (GCG) (Wood, 2014), which allows ALs to cover attested but previously overlooked constructions, such as unbounded dependencies and mildly context-sensitive structures. Second, our evaluation focuses on the ability of LMs to generalize to unseen, longer test sentences. Our ALs thus better capture features of natural languages, and our experimental paradigm leads to clearer conclusions: typologically plausible word orders tend to be easier for LMs to generalize to productively.
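The length-generalization paradigm the abstract describes can be illustrated with a toy sketch. The grammar below is hypothetical (a tiny context-free fragment, not the paper's GCG-based ALs), but it shows the evaluation setup: sample sentences from an artificial language, train on short sentences, and hold out strictly longer ones as the generalization test set.

```python
import random

# Hypothetical toy grammar (NOT the paper's GCG formalization):
# relative clauses make sentence length unbounded in principle.
GRAMMAR = {
    "S": [["NP", "VP"]],
    "NP": [["n"], ["n", "RC"]],   # optional relative clause -> recursion
    "RC": [["r", "NP", "v"]],
    "VP": [["v"], ["v", "NP"]],
}

def generate(symbol="S", depth=0, max_depth=6):
    """Expand a symbol into a list of terminal tokens, depth-bounded."""
    if symbol not in GRAMMAR:                 # terminal symbol
        return [symbol]
    # near the depth limit, force the shortest (first) expansion
    options = GRAMMAR[symbol] if depth < max_depth else [GRAMMAR[symbol][0]]
    rule = random.choice(options)
    return [tok for sym in rule for tok in generate(sym, depth + 1, max_depth)]

random.seed(0)
corpus = {tuple(generate()) for _ in range(2000)}

# Length-generalization split: train on short sentences,
# evaluate on strictly longer held-out ones.
train = [s for s in corpus if len(s) <= 6]
test = [s for s in corpus if len(s) > 6]
```

Varying the order of constituents within each rule (e.g. verb-initial vs. verb-final `VP`) would then let one compare how word order affects an LM's accuracy on the longer test split.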
Citation
N. El-Naggar, T. Kuribayashi, and T. Briscoe, “Which Word Orders Facilitate Length Generalization in LMs? An Investigation with GCG-Based Artificial Languages,” Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing, pp. 35587–35601, 2025, doi: 10.18653/v1/2025.emnlp-main.1803.
Source
Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing
Conference
2025 Conference on Empirical Methods in Natural Language Processing
Publisher
Association for Computational Linguistics