Collaborative discrete-continuous black-box prompt learning for language models
Zhang, Hualin; Zhang, Haozhen; Liu, Zhekai; Gu, Bin; Chang, Yi
Department
Machine Learning
Type
Conference proceeding
Date
2025
Language
English
Abstract
Large-scale pre-trained language models (PTMs) have demonstrated unprecedented capabilities across diverse natural language processing tasks. Adapting such models to downstream tasks is computationally intensive and time-consuming, particularly in the black-box scenarios common to Language-Model-as-a-Service (LMaaS) environments, where model parameters and gradients are inaccessible. Recently, black-box prompt learning using zeroth-order gradients has emerged as a promising approach to these challenges: it optimizes learnable continuous prompts in the embedding space, starting from randomly initialized discrete text prompts. However, this reliance on randomly initialized discrete prompts limits adaptability to diverse downstream tasks and models. To address this limitation, this paper introduces ZO-PoG, a novel framework that optimizes prompts collaboratively, combining policy-gradient optimization of the initial discrete text prompts with zeroth-order optimization of the continuous prompts in the embedding space. By optimizing discrete and continuous prompts jointly, ZO-PoG maximizes adaptability to downstream tasks, achieving superior results without direct access to the model's internals. Importantly, we establish the sub-linear convergence of ZO-PoG under mild assumptions. Experiments on a range of datasets demonstrate significant improvements over the baselines across various tasks. Our code is available at: https://github.com/zhanghualin0/ZO-PoG.
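For readers unfamiliar with the two estimator families named in the abstract, the sketch below illustrates the general pattern: a two-point zeroth-order gradient estimate updates a continuous prompt vector, while a REINFORCE-style policy-gradient update adjusts per-position categorical distributions over discrete prompt tokens, with only scalar loss values observed. The toy loss, dimensions, and all hyperparameters are illustrative assumptions; this is not the authors' ZO-PoG implementation (see the linked repository for that).

```python
# A minimal conceptual sketch of collaborative discrete-continuous black-box
# prompt optimization in the spirit of ZO-PoG as described in the abstract.
# The toy loss, dimensions, and hyperparameters are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
VOCAB, PROMPT_LEN, EMB_DIM = 50, 4, 16

# Toy stand-in for the inaccessible LMaaS objective: the service embeds the
# discrete tokens, adds the continuous prompt, and returns only a scalar loss.
target_emb = rng.normal(size=EMB_DIM)
token_table = rng.normal(size=(VOCAB, EMB_DIM))

def black_box_loss(token_ids, cont_prompt):
    pooled = token_table[token_ids].mean(axis=0) + cont_prompt
    return float(np.sum((pooled - target_emb) ** 2))

def sample_tokens(logits):
    # Per-position categorical policy over the vocabulary (softmax of logits).
    probs = np.exp(logits - logits.max(axis=1, keepdims=True))
    probs /= probs.sum(axis=1, keepdims=True)
    ids = np.array([rng.choice(VOCAB, p=p) for p in probs])
    return ids, probs

logits = np.zeros((PROMPT_LEN, VOCAB))   # discrete-prompt policy parameters
cont_prompt = np.zeros(EMB_DIM)          # continuous prompt in embedding space
mu, lr_zo, lr_pg = 1e-2, 0.02, 0.05
baseline = None

for step in range(300):
    token_ids, probs = sample_tokens(logits)

    # Zeroth-order step: two-point finite-difference gradient estimate for
    # the continuous prompt along a random Gaussian direction u.
    u = rng.normal(size=EMB_DIM)
    f_plus = black_box_loss(token_ids, cont_prompt + mu * u)
    f_base = black_box_loss(token_ids, cont_prompt)
    cont_prompt -= lr_zo * (f_plus - f_base) / mu * u

    # Policy-gradient (REINFORCE) step: reward is the negative loss; a
    # running baseline reduces the variance of the score-function estimator.
    reward = -f_base
    baseline = reward if baseline is None else 0.9 * baseline + 0.1 * reward
    grad_log = -probs                                   # d log p / d logits ...
    grad_log[np.arange(PROMPT_LEN), token_ids] += 1.0   # ... = onehot - probs
    logits += lr_pg * (reward - baseline) * grad_log

print("final loss:", black_box_loss(sample_tokens(logits)[0], cont_prompt))
```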
Citation
H. Zhang, H. Zhang, Z. Liu, B. Gu, and Y. Chang, “Collaborative Discrete-Continuous Black-Box Prompt Learning for Language Models,” in Proceedings of the 13th International Conference on Learning Representations (ICLR 2025), pp. 52865–52888, May 2025.
Source
13th International Conference on Learning Representations, ICLR 2025
Conference
13th International Conference on Learning Representations, ICLR 2025
Publisher
International Conference on Learning Representations, ICLR
