Diffusion-Enhanced Test-Time Adaptation with Text and Image Augmentation
Feng, Chun-Mei ; He, Yuanyang ; Zou, Jian ; Khan, Salman ; Xiong, Huan ; Li, Zhen ; Zuo, Wangmeng ; Goh, Rick Siow Mong ; Liu, Yong
Department
Computer Vision
Type
Journal article
Date
2025
Language
English
Abstract
Existing test-time prompt tuning (TPT) methods focus on single-modality data, primarily augmenting images and using confidence scores to filter out unreliable augmented views. However, while image generation models can produce visually diverse images, single-modality augmentation still fails to capture the comprehensive knowledge offered by different modalities. Additionally, we note that the performance of TPT-based methods drops significantly when the number of augmented images is limited, which is not unusual given the computational expense of generative augmentation. To address these issues, we introduce a novel test-time adaptation method that utilizes a pre-trained generative model for multi-modal augmentation of each test sample from unknown new domains. By combining augmented data from pre-trained vision and language models, we enhance the model's ability to adapt to unknown new test data. Additionally, to ensure that key semantics are accurately retained when generating the various visual and textual enhancements, we apply cosine similarity filtering between the logits of the augmented images and text and those of the original test data. This allows us to filter out spurious augmentations and inadequate combinations. To leverage the diverse enhancements provided by the generative model across modalities, we replace prompt tuning with an adapter for greater flexibility in utilizing text templates. Our experiments on test datasets with distribution shifts and domain gaps show that, in a zero-shot setting, our method outperforms state-of-the-art test-time prompt tuning methods with a 5.50% increase in accuracy.
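The abstract gives no implementation details, but the cosine-similarity filtering step it describes can be illustrated with a minimal PyTorch sketch. The function name filter_augmentations, the keep_ratio parameter, and the top-k selection rule below are assumptions for illustration, not the paper's actual procedure.

```python
import torch
import torch.nn.functional as F

def filter_augmentations(orig_logits, aug_logits, keep_ratio=0.5):
    """Keep the augmented views whose logits are most similar (by cosine
    similarity) to the original test sample's logits.

    orig_logits: tensor of shape [C]     -- logits for the original test sample
    aug_logits:  tensor of shape [N, C]  -- logits for N augmented image/text combinations
    keep_ratio:  fraction of augmentations to retain (hypothetical default)
    """
    # Cosine similarity between each augmented view and the original sample
    sims = F.cosine_similarity(aug_logits, orig_logits.unsqueeze(0), dim=-1)  # [N]
    # Retain the top-k augmentations most consistent with the original sample
    k = max(1, int(keep_ratio * aug_logits.size(0)))
    keep_idx = sims.topk(k).indices
    return aug_logits[keep_idx], keep_idx
```

Presumably the retained logits would then be aggregated (for example, averaged) before the adapter is updated for the test sample; the exact aggregation and threshold are not specified in the abstract.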
Citation
C.-M. Feng, Y. He, J. Zou, S. Khan, H. Xiong, Z. Li, W. Zuo, R. S. M. Goh, and Y. Liu, "Diffusion-Enhanced Test-Time Adaptation with Text and Image Augmentation," International Journal of Computer Vision, 2025.
Source
International Journal of Computer Vision
Keywords
Generative models, Multi-modal learning, Test-time adaptation
Publisher
Springer Nature
