Investigating How Pre-training Data Leakage Affects Models’ Reproduction and Detection Capabilities
Author
Kaneko, Masahiro; Baldwin, Timothy
Department
Natural Language Processing
Type
Conference proceeding
Date
2025
Language
English
Abstract
Large Language Models (LLMs) are trained on massive web-crawled corpora that often contain personal information, copyrighted text, and benchmark datasets. The inadvertent inclusion of such material in the training data, known as data leakage, poses significant risks and can compromise the safety of LLM outputs. Despite its importance, existing studies do not examine how leaked instances in the pre-training data influence LLMs’ output and detection capabilities. In this paper, we conduct an experimental survey to elucidate the relationship between data leakage in training datasets and its effects on LLMs’ generation and detection of leaked content. Our experiments reveal that LLMs often generate outputs containing leaked information, even when there is little such data in the training dataset. Moreover, the fewer the leaked instances, the more difficult it becomes to detect the leakage. Finally, we demonstrate that enhancing leakage detection through few-shot learning can help mitigate the impact of the leakage rate in the training data on detection performance.
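The abstract mentions few-shot learning as a way to strengthen leakage detection. Below is a minimal, hypothetical Python sketch of what such few-shot detection could look like; the paper’s actual prompts, data, and experimental setup are not reproduced here, and the example passages, labels, and the query_llm placeholder are all assumptions made for illustration.

# Hypothetical sketch: few-shot prompting for leakage detection.
# The labeled examples and the query_llm backend are invented for
# illustration; they are not the prompts or models used in the paper.

FEW_SHOT_EXAMPLES = [
    ("My phone number is 555-0100 and my home address is ...", "leaked"),  # personal info
    ("The quick brown fox jumps over the lazy dog.", "clean"),
    ("Q: What is the capital of France? A: Paris [benchmark item]", "leaked"),  # benchmark data
]

def build_prompt(passage: str) -> str:
    """Assemble a few-shot classification prompt from the labeled examples."""
    parts = ["Decide whether each passage contains leaked data "
             "(personal information, copyrighted text, or benchmark items). "
             "Answer 'leaked' or 'clean'."]
    for text, label in FEW_SHOT_EXAMPLES:
        parts.append(f"Passage: {text}\nLabel: {label}")
    parts.append(f"Passage: {passage}\nLabel:")
    return "\n\n".join(parts)

def query_llm(prompt: str) -> str:
    """Placeholder for an LLM call; swap in whatever completion API is available."""
    raise NotImplementedError("plug in a real model backend here")

def detect_leakage(passage: str) -> bool:
    """Return True if the model labels the passage as leaked."""
    return query_llm(build_prompt(passage)).strip().lower().startswith("leaked")

Per the abstract, the point of the few-shot examples is to make detection less sensitive to how rarely leaked instances appear in the pre-training data.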
Citation
M. Kaneko and T. Baldwin, “Investigating How Pre-training Data Leakage Affects Models’ Reproduction and Detection Capabilities,” Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing, pp. 23556–23566, 2025, doi: 10.18653/v1/2025.emnlp-main.1201.
Source
Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing
Conference
2025 Conference on Empirical Methods in Natural Language Processing
Keywords
Neural Process Modeling, Context-aware Language Understanding, Temporal Sequence Encoding, Efficient Transformer Variants, Dynamic Attention Mechanism, Real-time NLP Systems, Cross-domain Transfer Learning, Lightweight Deployment Strategies
Publisher
Association for Computational Linguistics
