
How to Compare Things Properly? A Study of Argument Relevance in Comparative Question Answering

Nikishina, Irina
Anwar, Saba
Dolgov, Nikolay
Manina, Maria
Ignatenko, Daria
Shelmanov, Artem O.
Biemann, Chris
Department
Natural Language Processing
Type
Conference proceeding
Date
2025
Language
English
Abstract
Comparative Question Answering (CQA) lies at the intersection of Question Answering, Argument Mining, and Summarization. It poses unique challenges due to the inherently subjective nature of many questions and the need to integrate diverse perspectives. Although the CQA task can be addressed using recently emerged instruction-following Large Language Models (LLMs), challenges such as hallucinations in their outputs and the lack of transparent argument provenance remain significant limitations. To address these challenges, we construct a manually curated dataset comprising arguments annotated with their relevance. These arguments are further used to answer comparative questions, enabling precise traceability and faithfulness. Furthermore, we define explicit criteria for an “ideal” comparison and introduce a benchmark for evaluating the outputs of various Retrieval-Augmented Generation (RAG) models with respect to argument relevance. All code and data are publicly released to support further research.
Citation
I. Nikishina et al., “How to Compare Things Properly? A Study of Argument Relevance in Comparative Question Answering,” vol. 1, pp. 15702–15720, Aug. 2025, doi: 10.18653/v1/2025.acl-long.765
Source
Proceedings of the Annual Meeting of the Association for Computational Linguistics
Conference
63rd Annual Meeting of the Association for Computational Linguistics, ACL 2025
Publisher
Association for Computational Linguistics