
From Human Judgements to Predictive Models: Unravelling Acceptability in Code-Mixed Sentences

Kodali, Prashant
Goel, Anmol
Asapu, Likhith
Bonagiri, Vamshi Krishna
Govil, Anirudh
Choudhury, Monojit
Kumaraguru, Ponnurangam
Shrivastava, Manish
Department
Natural Language Processing
Type
Journal article
Date
2025
Language
English
Abstract
Current computational approaches for analysing or generating code-mixed sentences do not explicitly model the "naturalness" or "acceptability" of code-mixed sentences, but rely on training corpora to reflect the distribution of acceptable code-mixed sentences. Modelling human judgements of the acceptability of code-mixed text can help distinguish natural code-mixed text and enable quality-controlled generation of code-mixed text. To this end, we construct Cline, a dataset containing human acceptability judgements for English-Hindi (en-hi) code-mixed text. Cline is the largest dataset of its kind, with 16,642 sentences drawn from two sources: synthetically generated code-mixed text and samples collected from online social media. Our analysis establishes that popular code-mixing metrics such as CMI, Number of Switch Points, and Burstiness, which are used to filter, curate, and compare code-mixed corpora, have low correlation with human acceptability judgements, underlining the necessity of our dataset. Experiments using Cline demonstrate that simple Multilayer Perceptron (MLP) models trained solely on code-mixing metrics as features are outperformed by fine-tuned pre-trained Multilingual Large Language Models (MLLMs). Specifically, among encoder models, XLM-RoBERTa and Bernice outperform IndicBERT across different configurations. Among encoder-decoder models, mBART performs better than mT5; however, encoder-decoder models are not able to outperform encoder-only models. Decoder-only models perform the best among all MLLMs, with Llama 3.2 3B outperforming similarly sized Qwen and Phi models. Comparison with the zero- and few-shot capabilities of ChatGPT shows that MLLMs fine-tuned on larger data outperform ChatGPT, indicating scope for improvement on code-mixed tasks. Zero-shot transfer from English-Hindi to English-Telugu acceptability judgements using our model checkpoints proves superior to random baselines, enabling application to other code-mixed language pairs and opening further avenues of research. We publicly release our human-annotated dataset, trained checkpoints, code-mixed corpus, and code for data generation and model training.
Citation
P. Kodali et al., “From Human Judgements to Predictive Models: Unravelling Acceptability in Code-Mixed Sentences,” ACM Transactions on Asian and Low-Resource Language Information Processing, vol. 24, no. 9, pp. 1–31, Sep. 2025, doi: 10.1145/3748312
Source
ACM Transactions on Asian and Low-Resource Language Information Processing
Keywords
Acceptability, Code-Mixing, English-Hindi, English-Telugu, LLMs
Publisher
Association for Computing Machinery