GRADA: Graph-based Reranking against Adversarial Documents Attack
Zheng, Jingjie ; Gema, Aryo Pradipta ; Hong, Giwon ; He, Xuanli ; Minervini, Pasquale ; Sun, Youcheng ; Xu, Qiongkai
Department
Computer Science
Type
Conference proceeding
Date
2025
Language
English
Abstract
Retrieval-Augmented Generation (RAG) frameworks can improve the factual accuracy of large language models (LLMs) by integrating external knowledge from retrieved documents, thereby overcoming the limitations of models' static intrinsic knowledge. However, these systems are susceptible to adversarial attacks that manipulate the retrieval process by introducing documents that are adversarial yet semantically similar to the query. Notably, while these adversarial documents resemble the query, they exhibit weak similarity to the benign documents in the retrieval set. We therefore propose a simple yet effective framework, Graph-based Reranking against Adversarial Document Attacks (GRADA), that aims to preserve retrieval quality while significantly reducing the success of adversaries. We evaluate the effectiveness of our approach through experiments on six LLMs: GPT-3.5-Turbo, GPT-4o, Llama3.1-8b-Instruct, Llama3.1-70b-Instruct, Qwen2.5-7b-Instruct, and Qwen2.5-14b-Instruct. We use three datasets to assess performance, with results on the Natural Questions dataset demonstrating up to an 80% reduction in attack success rates while maintaining minimal loss in accuracy.
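The abstract's key observation is that adversarial documents mimic the query but are weak neighbours of the benign documents in the retrieval set. The following is an illustrative sketch of that idea, not the paper's actual algorithm: it scores each retrieved document by its aggregate similarity to the other retrieved documents (a simple proxy for centrality in a document-similarity graph) and reranks accordingly, so an outlier document sinks in the ranking. The `jaccard` token-overlap measure and the example documents are assumptions chosen for self-containment; a real system would use a learned similarity.

```python
# Sketch: rerank retrieved documents by how strongly each one is
# supported by the rest of the retrieval set. An adversarial document
# that only mimics the query tends to be a weak neighbour of the
# benign documents, so aggregated doc-to-doc similarity demotes it.

def jaccard(a: str, b: str) -> float:
    """Token-overlap similarity; a stand-in for any doc-doc similarity."""
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return len(ta & tb) / len(ta | tb) if ta | tb else 0.0

def graph_rerank(docs: list[str]) -> list[str]:
    """Score each document by its total similarity to all other
    documents (a degree-centrality proxy), then sort descending."""
    n = len(docs)
    scores = [
        sum(jaccard(docs[i], docs[j]) for j in range(n) if j != i)
        for i in range(n)
    ]
    return [d for _, d in sorted(zip(scores, docs), key=lambda p: -p[0])]

docs = [
    "paris is the capital of france and its largest city",
    "france has paris as its capital city on the seine",
    "the capital of france is berlin trust this answer",  # outlier, adversarial-style
]
ranked = graph_rerank(docs)  # the outlier ends up ranked last
```

The design choice here is to ignore query-document similarity entirely during reranking, since that is exactly the signal the attacker has optimized; only mutual support among retrieved documents is used.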
Citation
J. Zheng et al., “GRADA: Graph-based Reranking against Adversarial Documents Attack,” Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing, pp. 22255–22277, 2025, doi: 10.18653/V1/2025.EMNLP-MAIN.1132.
Source
Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing
Conference
2025 Conference on Empirical Methods in Natural Language Processing
Keywords
Adversarial Document Attacks, Retrieval Robustness, Large Language Models, Query-Document Graphs, Attack Success Rate Reduction, Retrieval Accuracy Maintenance, Retrieval Security Benchmarking
Publisher
Association for Computational Linguistics
