SubRegWeigh: Effective and Efficient Annotation Weighing with Subword Regularization
Tsuji, Kohei ; Hiraoka, Tatsuya ; Cheng, Yuchang ; Iwakura, Tomoya
Department
Natural Language Processing
Type
Conference proceeding
Date
2025
Language
English
Abstract
NLP datasets may still contain annotation errors, even when they are manually annotated. Researchers have attempted to develop methods to automatically reduce the adverse effect of errors in datasets. However, existing methods are time-consuming because they require many trained models to detect errors. This paper proposes a time-saving method that utilizes a tokenization technique called subword regularization to simulate multiple error detection models. Our proposed method, SubRegWeigh, can perform annotation weighting four to five times faster than the existing method. Additionally, SubRegWeigh improved performance in document classification and named entity recognition tasks. In experiments with pseudo-incorrect labels, SubRegWeigh clearly identifies pseudo-incorrect labels as annotation errors. Our code is available at https://github.com/4ldk/SubRegWeigh.
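The key idea in the abstract is that subword regularization stochastically produces different tokenizations of the same input, so one trained model fed several sampled tokenizations can stand in for several independently trained models. The following minimal sketch illustrates that behavior only; the toy vocabulary and uniform sampling are illustrative assumptions, not the paper's actual tokenizer (real systems sample from a learned BPE or unigram language model vocabulary).

```python
import random

# Toy subword vocabulary (an assumption for illustration; real subword
# regularization samples from a learned BPE/unigram vocabulary).
VOCAB = {"un", "break", "able", "unbreak", "breakable",
         "u", "n", "b", "r", "e", "a", "k", "l"}

def segmentations(word, vocab):
    """Enumerate every way to split `word` into in-vocabulary subwords."""
    if not word:
        return [[]]
    results = []
    for i in range(1, len(word) + 1):
        piece = word[:i]
        if piece in vocab:
            for rest in segmentations(word[i:], vocab):
                results.append([piece] + rest)
    return results

def sample_tokenization(word, vocab, rng):
    """Sample one segmentation uniformly (a stand-in for the
    probability-weighted sampling used in real subword regularization)."""
    return rng.choice(segmentations(word, vocab))

rng = random.Random(0)
# Repeated sampling yields distinct tokenizations of the same input;
# SubRegWeigh exploits this diversity instead of training many models.
samples = {tuple(sample_tokenization("unbreakable", VOCAB, rng))
           for _ in range(20)}
print(sorted(samples)[:3])
```

Each distinct sampled tokenization gives the model a slightly different view of the same sentence, which is the source of the ensemble-like error-detection signal described above.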
Citation
K. Tsuji, T. Hiraoka, Y. Cheng, and T. Iwakura, “SubRegWeigh: Effective and Efficient Annotation Weighing with Subword Regularization,” Proceedings - International Conference on Computational Linguistics, COLING, vol. Part, pp. 1908–1921, Jan. 2025.
Source
Proceedings - International Conference on Computational Linguistics, COLING
Conference
Keywords
Annotation errors, Subword regularization, SubRegWeigh, Natural Language Processing (NLP), Error detection
Publisher
Association for Computational Linguistics
