EarthDial: Turning Multi-sensory Earth Observations to Interactive Dialogues
Soni, Sagar ; Dudhane, Akshay A. ; Debary, Hiyam ; Fiaz, Mustansar ; Munir, Muhammad Akhtar ; Danish, Muhammad Sohail ; Fraccaro, Paolo ; Watson, Campbell D. ; Klein, Levente J. ; Khan, Fahad Shahbaz
Department
Computer Science
Type
Conference proceeding
Date
2025
Language
English
Abstract
Automated analysis of vast Earth observation data via interactive Vision-Language Models (VLMs) can unlock new opportunities for environmental monitoring, disaster response, and resource management. Existing generic VLMs do not perform well on remote sensing data, while recent geospatial VLMs remain restricted to a fixed resolution and a few sensor modalities. In this paper, we introduce EarthDial, a conversational assistant specifically designed for Earth Observation (EO) data, transforming complex, multi-sensory Earth observations into interactive, natural-language dialogues. EarthDial supports multi-spectral, multi-temporal, and multi-resolution imagery, enabling a wide range of remote sensing tasks, including classification, detection, captioning, question answering, visual reasoning, and visual grounding. To achieve this, we introduce an extensive instruction-tuning dataset comprising over 11.11M instruction pairs covering RGB, Synthetic Aperture Radar (SAR), and multispectral modalities such as Near-Infrared (NIR) and infrared. Furthermore, EarthDial handles bi-temporal and multi-temporal sequence analysis for applications like change detection. Our extensive experimental results on 44 downstream datasets demonstrate that EarthDial outperforms existing generic and domain-specific models, achieving better generalization across various EO tasks. Our source code and pre-trained models are available at https://github.com/hiyamdebary/EarthDial.
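Since the abstract points to released code and pre-trained models, the sketch below illustrates how such a conversational EO assistant might be queried on a single RGB scene. It is a minimal, hypothetical example only: it assumes the checkpoint is distributed as a Hugging Face-compatible repository with custom remote code, and the repository id "your-org/EarthDial", the prompt text, and the processor/generate interface are placeholders, not the authors' documented API. The GitHub repository linked above is the authoritative reference.

```python
# Hypothetical usage sketch for an EarthDial-style geospatial VLM.
# Repo id, prompt format, and processor behaviour are assumptions,
# not the authors' documented interface.
import torch
from PIL import Image
from transformers import AutoModel, AutoProcessor

MODEL_ID = "your-org/EarthDial"  # placeholder repository id

# Load processor and model; trust_remote_code is assumed because geospatial
# VLMs commonly ship custom preprocessing for multi-spectral / SAR inputs.
processor = AutoProcessor.from_pretrained(MODEL_ID, trust_remote_code=True)
model = AutoModel.from_pretrained(
    MODEL_ID, torch_dtype=torch.bfloat16, trust_remote_code=True
).eval().to("cuda")

image = Image.open("scene_rgb.tif").convert("RGB")  # single RGB tile
prompt = "Describe the land-cover types visible in this image."

# Build model inputs and generate a free-form answer (captioning / VQA style).
inputs = processor(images=image, text=prompt, return_tensors="pt").to("cuda")
with torch.no_grad():
    output_ids = model.generate(**inputs, max_new_tokens=128)
print(processor.batch_decode(output_ids, skip_special_tokens=True)[0])
```

Per the abstract, multi-spectral, SAR, and bi-temporal inputs would follow the same dialogue pattern, with modality-specific preprocessing handled before the tokens reach the language backbone; the exact input format for those modalities is defined in the released code.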
Citation
S. Soni et al., "EarthDial: Turning Multi-sensory Earth Observations to Interactive Dialogues," 2025 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Nashville, TN, USA, 2025, pp. 14303-14313, doi: 10.1109/CVPR52734.2025.01334.
Source
Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition
Conference
2025 IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2025
Keywords
Remote Sensing, VLM
Publisher
IEEE
