Bridging the Gap between Object and Image-level Representations for Open-Vocabulary Detection

Bangalath, Hanoona
Department
Computer Vision
Type
Thesis
Date
2022
Language
English
Abstract
Existing open-vocabulary object detectors typically enlarge their vocabularies by leveraging different forms of weak supervision, which helps them generalize to novel objects at inference. Two popular forms of weak supervision used in open-vocabulary detection (OVD) are pretrained CLIP models and image-level supervision. We note that neither mode of supervision is optimally aligned with the detection task: CLIP is trained on image-text pairs and lacks precise object localization, while image-level supervision has been used with heuristics that do not accurately specify local object regions. In this work, we address this problem by performing object-centric alignment of the language embeddings from the CLIP model. Furthermore, we visually ground objects using only image-level supervision via a pseudo-labeling process that provides high-quality object proposals and helps expand the vocabulary during training. We establish a bridge between these two object-alignment strategies via a novel weight transfer function that aggregates their complementary strengths. In essence, the proposed model seeks to minimize the gap between object-centric and image-centric representations in the OVD setting. On the COCO benchmark, our approach achieves 36.6 AP50 on novel classes, an absolute gain of 8.2 over the previous best. On LVIS, we surpass the state-of-the-art ViLD model by 5.0 mask AP on rare categories and 3.4 overall.
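The core mechanism the abstract builds on — classifying region proposals by matching their visual embeddings against CLIP-style text embeddings of class names — can be sketched as below. This is an illustrative sketch of the general OVD recipe only, not the thesis's actual model; the function name, array shapes, and temperature value are placeholders.

```python
import numpy as np

def classify_regions(region_embs, text_embs, temperature=0.01):
    """Score region proposals against class-name text embeddings.

    region_embs: (R, D) visual embeddings of R region proposals.
    text_embs:   (C, D) text embeddings, one per class name (e.g. from
                 a CLIP text encoder with a prompt like "a photo of a {cls}").
    Returns an (R, C) matrix of class probabilities per region.
    """
    # L2-normalise both sides so the dot product is cosine similarity.
    r = region_embs / np.linalg.norm(region_embs, axis=1, keepdims=True)
    t = text_embs / np.linalg.norm(text_embs, axis=1, keepdims=True)
    logits = (r @ t.T) / temperature
    # Softmax over classes, with the usual max-subtraction for stability.
    logits -= logits.max(axis=1, keepdims=True)
    probs = np.exp(logits)
    return probs / probs.sum(axis=1, keepdims=True)

# Toy example: 2 regions scored against 3 class names in a 4-dim space.
rng = np.random.default_rng(0)
regions = rng.normal(size=(2, 4))
texts = rng.normal(size=(3, 4))
probs = classify_regions(regions, texts)
print(probs.shape)  # (2, 3); each row sums to 1
```

Because the classifier is just a similarity lookup against text embeddings, novel classes can be added at inference by embedding new class names — no retraining of the detector head is needed, which is what makes the vocabulary "open".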
Citation
H. Bangalath, "Bridging the Gap between Object and Image-level Representations for Open-Vocabulary Detection", M.S. Thesis, Computer Vision, MBZUAI, Abu Dhabi, UAE, 2022.