Cross-domain Few-shot Classification via Invariant-content Feature Reconstruction
Tian, Hongduan ; Liu, Feng ; Cheung, Kachun ; Fang, Zhen ; See, Simon Chong Wee ; Liu, Tongliang ; Han, Bo
Department
Machine Learning
Type
Journal article
Date
2026
Language
English
Abstract
In cross-domain few-shot classification (CFC), mainstream studies train a simple module (e.g., a linear transformation head) on top of a powerful pre-trained model to select or transform features (i.e., high-level semantic features) for previously unseen domains using only a few labeled training examples. These studies usually assume that high-level semantic features are shared across domains, so that simple feature selection or transformation suffices to adapt features to previously unseen domains. In this paper, however, we find that such simply transformed features are too general to fully cover the key content features of each class. We therefore propose an effective method, invariant-content feature reconstruction (IFR), which trains a simple module that simultaneously considers both high-level and fine-grained invariant-content features for previously unseen domains. Specifically, the fine-grained invariant-content features are a set of informative and discriminative features learned from the few labeled training data of tasks sampled from unseen domains; they are extracted with an attention module by retrieving features that remain invariant to style modifications across a set of pixel-level content-preserving augmentations. Extensive experiments on the Meta-Dataset benchmark show that IFR generalizes well to unseen domains, demonstrating the effectiveness of fusing high-level features with fine-grained invariant-content features. In particular, IFR improves average accuracy on unseen domains by 1.6% and 6.5% under two different cross-domain few-shot classification settings.
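The pipeline described in the abstract — a linear transformation head producing high-level features, plus attention-based retrieval of features that are stable across content-preserving augmentations — can be sketched as follows. This is a minimal illustration under assumed shapes and scaled dot-product attention; the function names, dimensions, and the concatenation-based fusion are hypothetical placeholders, not the authors' exact implementation.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def reconstruct_invariant_features(query_feats, aug_feats):
    """Attend from each original feature to its augmented views and pool
    them, emphasizing content shared across style modifications.

    query_feats: (n, d) features of the original support images
    aug_feats:   (n, m, d) features of m content-preserving augmentations
    """
    d = query_feats.shape[-1]
    # Scaled dot-product attention scores: (n, m)
    scores = np.einsum('nd,nmd->nm', query_feats, aug_feats) / np.sqrt(d)
    weights = softmax(scores, axis=-1)
    # Weighted pooling over augmented views: (n, d)
    return np.einsum('nm,nmd->nd', weights, aug_feats)

# Toy task: 4 support examples, 8-dim backbone features, 5 augmentations each
rng = np.random.default_rng(0)
feats = rng.normal(size=(4, 8))
augs = rng.normal(size=(4, 5, 8))

W = rng.normal(size=(8, 8))                 # simple linear transformation head
high_level = feats @ W                      # high-level semantic features
invariant = reconstruct_invariant_features(feats, augs)

# Fuse high-level and fine-grained invariant-content features
fused = np.concatenate([high_level, invariant], axis=-1)
print(fused.shape)  # (4, 16)
```

In practice the attention module and the head would be trained jointly on the few labeled support examples of each task; here the weights are random purely to show the data flow.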
Citation
H. Tian et al., "Cross-domain Few-shot Classification via Invariant-content Feature Reconstruction," International Journal of Computer Vision, vol. 134, no. 2, art. 54, Jan. 2026, doi: 10.1007/s11263-025-02601-5.
Source
International Journal of Computer Vision
Keywords
Cross-domain, Feature reconstruction, Few-shot classification, Invariant features
Publisher
Springer Nature
