Training and Evaluating with Human Label Variation: An Empirical Study

Kurniawan, Kemal
Mistica, Meladel
Baldwin, Timothy
Lau, Jey Han
Department
Natural Language Processing
Type
Journal article
License
http://creativecommons.org/licenses/by/4.0/
Language
English
Abstract
Abstract Human label variation (HLV) challenges the standard assumption that a labeled instance has a single ground truth, instead embracing the natural variation in human annotation to train and evaluate models. While various training methods and metrics for HLV have been proposed, it is still unclear which methods and metrics perform best in what settings. We propose new evaluation metrics for HLV leveraging fuzzy set theory. Because these new proposed metrics are differentiable, we then in turn experiment with using these metrics as training objectives. We conduct an extensive study over 6 HLV datasets testing 14 training methods and 6 evaluation metrics. We find that training on either disaggregated annotations or soft labels performs best across metrics, outperforming training using the proposed training objectives with differentiable metrics. We also show that our proposed soft micro F1 score is one of the best metrics for HLV data.1
Citation
K. Kurniawan, M. Mistica, T. Baldwin, J.H. Lau, "Training and Evaluating with Human Label Variation: An Empirical Study," Computational Linguistics, pp. 1-27, 2026, https://doi.org/10.1162/coli.a.578.
Source
Computational Linguistics
Keywords
46 Information and Computing Sciences, 4608 Human-Centred Computing
Publisher
MIT Press