From Small to Large: In-Context Learning as a New Paradigm for Domain Generalization

Zhou, Guanglin
Han, Zhongyi
Xie, Shaoan
Chen, Shiming
Huang, Biwei
Zhu, Liming
Yao, Lina
Khan, Salman
Department
Computer Vision
Type
Journal article
Date
2026
Language
English
Abstract
Domain generalization (DG) ensures that machine learning models remain robust against distribution shifts from source to unseen target domains. DG research has evolved from small-scale models tailored with specialized loss functions, to parameter-efficient fine-tuning of moderately large models, and now toward leveraging large multimodal models (LMMs) pre-trained on vast datasets. Despite their zero-shot capabilities, LMMs struggle to adapt to specialized scenarios (e.g., healthcare) without costly retraining. In this work, we propose ICL-DG, a novel DG framework that integrates in-context learning (ICL) to improve adaptability under distribution shifts. From a Bayesian inference perspective, we theoretically conceptualize demonstration selection as the process of providing effective conditional priors. To realize this, we introduce the class-conditioned contrastive invariance (CCI) principle, which reshapes the embedding space so that same-class samples from different domains cluster together while distinct classes remain separated. This approach enables demonstrations to be selected based on stable class-level semantics rather than domain-specific artifacts, thereby guiding LMMs under distribution shifts without parameter updates. Empirical evaluations on four benchmarks, including Camelyon17 and HAM10000, demonstrate the efficacy of ICL-DG, with improvements of 34.2% and 16.9% in 7-shot accuracy over the zero-shot baseline, respectively. These results highlight the potential of pairing ICL with invariant demonstration selection to advance LMM-based DG, particularly in high-stakes domains such as healthcare. Our code is available at: https://github.com/jameszhou-gl/ICL-DG.
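The abstract's core mechanism can be sketched in code. The following is a minimal, hypothetical illustration (not the authors' implementation) of the two ideas described: a class-conditioned contrastive loss in which positives are same-class samples regardless of domain, and demonstration selection by nearest neighbors in the resulting embedding space. All function names, the temperature value, and the NumPy-only setup are assumptions for illustration.

```python
# Hypothetical sketch of CCI-style demonstration selection: embeddings are
# shaped so same-class samples from different domains cluster together, and
# in-context demonstrations are then the query's nearest neighbors.
import numpy as np

def l2_normalize(x, axis=-1, eps=1e-12):
    x = np.asarray(x, dtype=float)
    return x / (np.linalg.norm(x, axis=axis, keepdims=True) + eps)

def cci_loss(embeddings, labels, temperature=0.1):
    """Class-conditioned contrastive loss (illustrative): positives are
    same-class samples, domain labels are ignored; all other samples in
    the batch serve as negatives."""
    z = l2_normalize(embeddings)
    labels = np.asarray(labels)
    sim = z @ z.T / temperature
    n = len(labels)
    total, count = 0.0, 0
    for i in range(n):
        pos = (labels == labels[i]) & (np.arange(n) != i)
        if not pos.any():
            continue
        others = np.delete(sim[i], i)              # exclude self-similarity
        log_den = np.log(np.exp(others).sum())     # softmax denominator
        total += np.mean(log_den - sim[i][pos])    # -log p(positive)
        count += 1
    return total / max(count, 1)

def select_demonstrations(query_emb, support_embs, k=7):
    """Pick the k support samples most similar to the query in the
    contrastively shaped embedding space."""
    q = l2_normalize(query_emb)
    s = l2_normalize(support_embs)
    return np.argsort(-(s @ q))[:k]
```

Under this sketch, a well-shaped embedding space (same class close, different classes far) yields a lower CCI loss than a shuffled labeling, and `select_demonstrations` then retrieves demonstrations by class-level similarity rather than domain identity.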
Citation
G. Zhou et al., “From Small to Large: In-Context Learning as a New Paradigm for Domain Generalization,” International Journal of Computer Vision, vol. 134, no. 1, pp. 9-, Dec. 2025, doi: 10.1007/s11263-025-02618-w
Source
International Journal of Computer Vision
Keywords
Contrastive learning, Demonstration selection, Domain generalization, In-context Learning, Large multimodal models, Smart healthcare
Publisher
Springer Nature