Depth Attention for Robust RGB Tracking
Liu, Yu ; Mahmood, Arif ; Khan, Muhammad Haris
Department
Computer Vision
Type
Conference proceeding
Date
2025
Language
English
Abstract
RGB video object tracking is a fundamental task in computer vision. Its effectiveness can be improved using depth information, particularly for handling motion-blurred targets. However, depth information is often missing from commonly used tracking benchmarks. In this work, we propose a new framework that leverages monocular depth estimation to counter the challenges of tracking targets that are out of view or affected by motion blur in RGB video sequences. Specifically, our work makes the following contributions. To the best of our knowledge, we are the first to propose a depth attention mechanism and to formulate a simple framework that allows seamless integration of depth information with state-of-the-art tracking algorithms, without requiring RGB-D cameras, elevating both accuracy and robustness. We provide extensive experiments on six challenging tracking benchmarks. Our results demonstrate that our approach provides consistent gains over several strong baselines and achieves new state-of-the-art performance. We believe that our method will open up new possibilities for more sophisticated VOT solutions in real-world scenarios. Our code and models are publicly released: https://github.com/LiuYuML/Depth-Attention.
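The abstract describes using a monocular depth estimate as an attention signal over RGB tracker features. The paper's exact formulation is not given here, so the following is only a minimal illustrative sketch: a hypothetical `depth_attention` function that normalizes a predicted depth map and uses it to reweight an RGB feature tensor. The near-foreground emphasis heuristic and all names are assumptions, not the authors' method.

```python
import numpy as np

def depth_attention(rgb_feats, depth_map, eps=1e-6):
    """Modulate RGB feature maps with a depth-derived attention map.

    rgb_feats : (C, H, W) feature tensor from an RGB tracker backbone.
    depth_map : (H, W) monocular depth estimate.
    Both inputs and the weighting scheme are illustrative assumptions.
    """
    # Normalize depth to [0, 1] so it can serve as a soft attention map.
    d = (depth_map - depth_map.min()) / (depth_map.max() - depth_map.min() + eps)
    # Emphasize nearer regions (smaller depth), a common heuristic for
    # foreground targets; this particular choice is an assumption.
    attn = 1.0 - d
    # Broadcast the (H, W) attention map over all C feature channels.
    return rgb_feats * attn[None, :, :]

feats = np.random.rand(4, 8, 8).astype(np.float32)
depth = np.random.rand(8, 8).astype(np.float32)
out = depth_attention(feats, depth)
print(out.shape)  # (4, 8, 8)
```

Because the attention weights lie in [0, 1], the modulation can only attenuate features, leaving the backbone and tracking head unchanged, which matches the abstract's claim of seamless integration with existing trackers.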
Citation
Y. Liu, A. Mahmood, and M. H. Khan, “Depth Attention for Robust RGB Tracking,” pp. 295–313, Oct. 2025, doi: 10.1007/978-981-96-0901-7_18.
Source
Computer Vision – ECCV 2024
Keywords
Monocular Depth Estimation, Multi-Modal Tracking, Single object tracking, Visual Object Tracking
Publisher
Springer Nature
