
PORT: Preference Optimization on Reasoning Traces

Lahlou, Salem
Abubaker, Abdalgadar
Hacid, Hakim
Department
Machine Learning
Type
Conference proceeding
Date
2025
Language
English
Abstract
Preference optimization methods have been successfully applied to improve not only the alignment of large language models (LLMs) with human values, but also specific natural language tasks such as summarization and stylistic continuations. This paper proposes applying preference optimization methods to Chain-of-Thought steps in order to improve the mathematical reasoning performance of language models. While the chosen answers are obtained from datasets that include reasoning traces, we propose two complementary schemes for generating rejected answers: weak LLM prompting, and digit corruption. Our approach leads to increased accuracy on the GSM8K and AQuA-RAT mathematical reasoning benchmarks for Falcon2-11B and Mistral-7B. Additionally, the improved abilities transfer to non-mathematical tasks, including the ARC benchmark and symbolic reasoning challenges. For example, our method yields relative accuracy increases of up to 8.47% on GSM8K and 18.73% on AQuA-RAT, without any extra annotations. This work suggests that the path towards better language reasoning abilities goes through spending resources on creating high-quality datasets of reasoning traces.
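The digit-corruption scheme described in the abstract can be sketched as follows. This is an illustrative reconstruction, not the authors' released implementation: the function name `corrupt_digits`, the corruption probability `p`, and the per-digit replacement policy are assumptions. The idea is to take a chosen (correct) reasoning trace and perturb its digits to produce a numerically wrong rejected trace for a DPO-style preference pair.

```python
import random

def corrupt_digits(trace: str, p: float = 0.3, seed: int = 0) -> str:
    """Build a 'rejected' reasoning trace by corrupting digits in a chosen trace.

    Each digit is independently replaced, with probability `p`, by a
    different random digit; all other characters are left untouched.
    Hypothetical sketch of the paper's digit-corruption scheme.
    """
    rng = random.Random(seed)  # seeded for reproducible pair generation
    out = []
    for ch in trace:
        if ch.isdigit() and rng.random() < p:
            # Replace with any digit except the original one.
            out.append(rng.choice([d for d in "0123456789" if d != ch]))
        else:
            out.append(ch)
    return "".join(out)

# Example: a GSM8K-style chain-of-thought step as the chosen answer.
chosen = "Natalia sold 48 clips in April and 48 / 2 = 24 in May, so 48 + 24 = 72."
rejected = corrupt_digits(chosen, p=0.5, seed=7)
```

Each resulting (chosen, rejected) pair could then be fed to a standard preference-optimization loss such as DPO, with the corrupted trace serving as the dispreferred completion.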
Citation
S. Lahlou, A. Abubaker, and H. Hacid, “PORT: Preference Optimization on Reasoning Traces,” vol. 1, pp. 10989–11005, Jun. 2025, doi: 10.18653/V1/2025.NAACL-LONG.549
Source
Proceedings of the 2025 Conference of the Nations of the Americas Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers)
Conference
2025 Conference of the Nations of the Americas Chapter of the Association for Computational Linguistics (NAACL)
Keywords
Preference Optimization, Reasoning Traces, Large Language Models, Chain-of-Thought, Direct Preference Optimization, Mathematical Reasoning, GSM8K, AQuA-RAT
Publisher
Association for Computational Linguistics