Large Language Models Are Human-Like Internally
Kuribayashi, Tatsuki ; Oseki, Yohei ; Taieb, Souhaib Ben ; Inui, Kentaro ; Baldwin, Timothy
Files
tacl.a.58.pdf
Adobe PDF, 32.18 MB
Department
Natural Language Processing
Type
Journal article
License
http://creativecommons.org/licenses/by/4.0/
Language
English
Abstract
Recent cognitive modeling studies have reported that larger language models (LMs) exhibit a poorer fit to human reading behavior (Oh and Schuler, 2023b; Shain et al., 2024; Kuribayashi et al., 2024), leading to claims of their cognitive implausibility. In this paper, we revisit this argument through the lens of mechanistic interpretability and argue that prior conclusions were skewed by an exclusive focus on the final layers of LMs. Our analysis reveals that next-word probabilities derived from internal layers of larger LMs align with human sentence processing data as well as, or better than, those from smaller LMs. This alignment holds consistently across behavioral (self-paced reading times, gaze durations, MAZE task processing times) and neurophysiological (N400 brain potentials) measures, challenging earlier mixed results and suggesting that the cognitive plausibility of larger LMs has been underestimated. Furthermore, we first identify an intriguing relationship between LM layers and human measures: earlier layers correspond more closely with fast gaze durations, while later layers better align with relatively slower signals such as N400 potentials and MAZE processing times. Our work opens new avenues for interdisciplinary research at the intersection of mechanistic interpretability and cognitive modeling.
Citation
T. Kuribayashi, Y. Oseki, S.B. Taieb, K. Inui, T. Baldwin, "Large Language Models Are Human-Like Internally," Transactions of the Association for Computational Linguistics, vol. 13, pp. 1743-1766, 2025, https://doi.org/10.1162/tacl.a.58.
Source
Transactions of the Association for Computational Linguistics
Keywords
46 Information and Computing Sciences, 4602 Artificial Intelligence, 47 Language, Communication and Culture, 4704 Linguistics
Publisher
MIT Press
