
Vision-Language Models for Edge Networks: A Comprehensive Survey

Sharshar, Ahmed
Khan, Latif U.
Ullah, Waseem
Guizani, Mohsen
Department
Machine Learning
Type
Journal article
Date
2025
Language
English
Abstract
Vision-Language Models (VLMs) combine visual understanding with natural language processing, enabling tasks such as image captioning, visual question answering, and video analysis. While VLMs show impressive capabilities across domains such as autonomous vehicles, smart surveillance, and healthcare, their deployment on resource-constrained edge devices remains challenging due to limits on processing power, memory, and energy. This survey explores recent advances in optimizing VLMs for edge environments, focusing on model compression techniques (pruning, quantization, and knowledge distillation) as well as specialized hardware solutions that enhance efficiency. We provide a detailed discussion of efficient training and fine-tuning methods, edge deployment challenges, and privacy considerations. Additionally, we discuss the diverse applications of lightweight VLMs across healthcare, environmental monitoring, and autonomous systems, illustrating their growing impact. By highlighting key design strategies and current challenges, and by offering recommendations for future directions, this survey aims to inspire further research into the practical deployment of VLMs, ultimately making advanced AI accessible in resource-limited settings.
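Of the compression techniques the abstract lists, quantization is the simplest to illustrate in isolation. The sketch below shows symmetric per-tensor post-training quantization of a weight array to int8, a common starting point for edge deployment; it is a minimal illustration of the general idea, not a method from the survey itself, and the function names are hypothetical.

```python
import numpy as np

def quantize_int8(w: np.ndarray):
    """Symmetric per-tensor post-training quantization to int8.

    The scale maps the largest-magnitude weight to 127, so every
    weight fits in the signed 8-bit range [-127, 127].
    """
    scale = np.max(np.abs(w)) / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover an approximate float32 tensor from int8 codes."""
    return q.astype(np.float32) * scale

# Example: a 4x memory reduction (float32 -> int8) with small error.
w = np.array([0.5, -1.27, 0.0, 1.0], dtype=np.float32)
q, s = quantize_int8(w)
w_hat = dequantize(q, s)  # close to w, within one quantization step
```

Real deployments typically refine this with per-channel scales, calibration data for activations, or quantization-aware training, which the survey's compression section covers in depth.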
Citation
A. Sharshar, L. U. Khan, W. Ullah and M. Guizani, "Vision-Language Models for Edge Networks: A Comprehensive Survey," in IEEE Internet of Things Journal, doi: 10.1109/JIOT.2025.3579032
Source
IEEE Internet of Things Journal
Keywords
Vision language models, edge computing, efficient fine-tuning, transformers, large language models
Publisher
IEEE