Spelling-out is not Straightforward: LLMs’ Capability of Tokenization from Token to Characters

Hiraoka, Tatsuya
Inui, Kentaro
Department
Natural Language Processing
Type
Conference proceeding
Date
2025
License
http://creativecommons.org/licenses/by/4.0/
Language
English
Abstract
Large language models (LLMs) can spell out tokens character by character with high accuracy, yet they struggle with more complex character-level tasks, such as identifying compositional subcomponents within tokens. In this work, we investigate how LLMs internally represent and utilize character-level information during the spelling-out process. Our analysis reveals that, although spelling out is a simple task for humans, it is not handled in a straightforward manner by LLMs. Specifically, we show that the embedding layer does not fully encode character-level information, particularly beyond the first character. As a result, LLMs rely on intermediate and higher Transformer layers to reconstruct character-level knowledge, where we observe a distinct “breakthrough” in their spelling behavior. We validate this mechanism through three complementary analyses: probing classifiers, identification of knowledge neurons, and inspection of attention weights.
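
The probing analysis mentioned in the abstract can be illustrated with a minimal sketch: train one linear classifier per layer to predict a token's first character from that layer's hidden state, and compare accuracy across layers. This is not the authors' exact setup; the model choice (gpt2), the 2,000-token sample, and the logistic-regression probe are illustrative assumptions, and the sketch relies on the HuggingFace transformers and scikit-learn libraries.

    # Minimal sketch (illustrative, not the paper's exact method):
    # probe each layer's hidden state for a token's first character.
    import torch
    from transformers import AutoModel, AutoTokenizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    model_name = "gpt2"  # hypothetical choice; any causal LM works similarly
    tok = AutoTokenizer.from_pretrained(model_name)
    model = AutoModel.from_pretrained(model_name, output_hidden_states=True)
    model.eval()

    # Collect alphabetic multi-character tokens and their first characters.
    vocab = tok.get_vocab()
    tokens = [(t, i) for t, i in vocab.items()
              if t.lstrip("Ġ").isalpha() and len(t.lstrip("Ġ")) > 1][:2000]
    ids = torch.tensor([[i] for _, i in tokens])          # shape (N, 1)
    labels = [t.lstrip("Ġ")[0].lower() for t, _ in tokens]

    with torch.no_grad():
        out = model(ids)  # hidden_states: tuple of (num_layers + 1) tensors

    # Fit one logistic-regression probe per layer; higher accuracy at a
    # layer suggests the character is linearly recoverable there.
    for layer, h in enumerate(out.hidden_states):
        X = h[:, 0, :].numpy()
        X_tr, X_te, y_tr, y_te = train_test_split(
            X, labels, test_size=0.2, random_state=0)
        acc = LogisticRegression(max_iter=1000).fit(X_tr, y_tr).score(X_te, y_te)
        print(f"layer {layer:2d}: probe accuracy {acc:.3f}")

If the paper's finding holds, such a probe should already score well at the embedding layer (layer 0) for the first character, whereas probes targeting later characters within a token would be expected to succeed only at intermediate or higher layers.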
Citation
T. Hiraoka and K. Inui, "Spelling-out is not Straightforward: LLMs’ Capability of Tokenization from Token to Characters," in Findings of EMNLP 2025, 2025, pp. 13340–13353.
Conference
EMNLP 2025 - 2025 Conference on Empirical Methods in Natural Language Processing, Findings of EMNLP 2025
Source
EMNLP 2025 - 2025 Conference on Empirical Methods in Natural Language Processing, Findings of EMNLP 2025
Publisher
Association for Computational Linguistics (ACL)