Can Input Attributions Explain Inductive Reasoning in In-Context Learning?
Ye, Mengyu; Kuribayashi, Tatsuki; Kobayashi, Goro; Suzuki, Jun
Department
Natural Language Processing
Type
Conference proceeding
Date
2025
License
http://creativecommons.org/licenses/by/4.0/
Abstract
Interpreting the internal processes of neural models has long been a challenge, and it remains relevant in the era of large language models (LLMs) and in-context learning (ICL); for example, ICL raises the new question of which of the few-shot examples contributed to identifying and solving the task. To this end, we design synthetic diagnostic tasks of inductive reasoning, inspired by generalization tests in linguistics: most in-context examples are ambiguous with respect to their underlying rule, and one critical example disambiguates the demonstrated task. The question is whether conventional input attribution (IA) methods can track such a reasoning process, i.e., identify the influential example, in ICL. Our experiments yield several practical findings; for example, a certain simple IA method performs best, and the larger the model, the harder it generally becomes to interpret ICL with gradient-based IA methods.
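To make the setup concrete, below is a minimal sketch (not the paper's actual implementation) of one gradient-based IA method of the kind the abstract refers to: gradient-times-input saliency, aggregated per in-context example, to estimate which demonstration most influenced the prediction. The model name, prompt, and toy task are illustrative placeholders.

```python
# Minimal sketch: gradient-times-input attribution over few-shot examples.
# All names below (model, examples, query) are illustrative assumptions,
# not the paper's actual setup.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # placeholder small model for the sketch
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()

# Few-shot demonstrations: most are ambiguous about the underlying rule;
# one "critical" example ("sing -> sang") disambiguates it.
examples = ["walk -> walked", "talk -> talked", "sing -> sang"]
query = "ring ->"
prompt = "\n".join(examples + [query])

input_ids = tokenizer(prompt, return_tensors="pt")["input_ids"]

# Embed tokens explicitly so gradients can flow to the input embeddings.
embeds = model.get_input_embeddings()(input_ids).detach().requires_grad_(True)
logits = model(inputs_embeds=embeds).logits[0, -1]

# Backpropagate the score of the model's top next-token prediction.
logits[logits.argmax()].backward()

# Gradient x input saliency per token position.
token_saliency = (embeds.grad[0] * embeds[0]).sum(dim=-1).abs()

# Attribute saliency to each demonstration via token-span boundaries
# (approximate: assumes prefix tokenization matches the full prompt's).
start = 0
for i, ex in enumerate(examples):
    end = len(tokenizer("\n".join(examples[: i + 1]))["input_ids"])
    score = token_saliency[start:end].sum().item()
    print(f"example {i} ({ex!r}): {score:.4f}")
    start = end
```

Under the paper's framing, one would check whether the highest-scoring demonstration is the critical, disambiguating one; this sketch only illustrates the gradient-based family of IA methods that the abstract reports becoming harder to interpret as model size grows.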
Citation
M. Ye, T. Kuribayashi, G. Kobayashi, and J. Suzuki, "Can Input Attributions Explain Inductive Reasoning in In-Context Learning?," in Findings of the Association for Computational Linguistics: ACL 2025, 2025, pp. 21199–21225.
Source
Proceedings of the Annual Meeting of the Association for Computational Linguistics
Conference
Findings of the Association for Computational Linguistics: ACL 2025
Publisher
Association for Computational Linguistics
