OptiBench Meets ReSocratic: Measure and Improve LLMs for Optimization Modeling

Yang, Zhicheng
Wang, Yiwei
Huang, Yinya
Guo, Zhijiang
Shi
Han, Xiongwei
Feng, Liang
Song, Linqi
Liang, Xiaodan
Tang, Jing
Department
Computer Vision
Type
Conference proceeding
Date
2025
Language
English
Abstract
Large language models (LLMs) have exhibited their problem-solving abilities in mathematical reasoning. Solving realistic optimization (OPT) problems in application scenarios requires advanced applied-mathematics ability. However, current OPT benchmarks, which cover only linear programming, fall far short of complex realistic situations. In this work, we propose OPTIBENCH, a benchmark for end-to-end optimization problem-solving with human-readable inputs and outputs. OPTIBENCH contains rich optimization problems, including linear and nonlinear programming with or without tabular data, which can comprehensively evaluate LLMs' solving ability. In our benchmark, LLMs are required to call a code solver to provide precise numerical answers. Furthermore, to alleviate the data scarcity for optimization problems, and to bridge the gap between small-scale open-source LLMs (e.g., Llama-3-8b) and closed-source LLMs (e.g., GPT-4), we further propose a data synthesis method named ReSocratic. Unlike general data synthesis methods that proceed from questions to answers, ReSocratic first incrementally synthesizes formatted optimization demonstrations with mathematical formulations step by step, and then back-translates the generated demonstrations into questions. Based on this, we synthesize the RESOCRATIC-29K dataset. We further conduct supervised fine-tuning with RESOCRATIC-29K on multiple open-source models. Experimental results show that RESOCRATIC-29K significantly improves the performance of open-source models. © 2025 13th International Conference on Learning Representations, ICLR 2025. All rights reserved.
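The abstract notes that, in the benchmark, an LLM must emit solver code that returns a precise numerical answer rather than answering in free text. As a minimal, dependency-free sketch of that idea (the toy problem below is assumed for illustration and is not taken from the paper, and a real submission would call an actual LP/NLP solver), an optimization question can be answered by a short program that computes the optimum exactly:

```python
# Minimal sketch: an LLM-generated "code solver" for a toy optimization
# problem. A real OptiBench answer would call a genuine solver library;
# here we use exhaustive search over a tiny integer grid to stay
# self-contained.

def solve_toy_lp():
    """Maximize 3x + 2y subject to x + y <= 4, with integers x, y >= 0."""
    best = (None, float("-inf"))
    for x in range(5):          # x in 0..4 (bounded by the constraint)
        for y in range(5):      # y in 0..4
            if x + y <= 4:      # feasibility check
                value = 3 * x + 2 * y
                if value > best[1]:
                    best = ((x, y), value)
    return best

print(solve_toy_lp())  # ((4, 0), 12)
```

The point of the code-solver requirement is that the final answer (here, the optimum 12 at (4, 0)) is computed exactly, so grading can compare numerical outputs instead of parsing natural-language reasoning.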
Citation
Z. YANG et al., “OptiBench Meets ReSocratic: Measure and Improve LLMs for Optimization Modeling,” International Conference on Representation Learning, vol. 2025, pp. 24726–24759, May 2025
Source
13th International Conference on Learning Representations, ICLR 2025
Conference
13th International Conference on Learning Representations, ICLR 2025
Publisher
International Conference on Learning Representations, ICLR