FlexInfer: Breaking Memory Constraint via Flexible and Efficient Offloading for On-Device LLM Inference

Authors
Du, Hongchao
Wu, Shangyu
Kharlamova, Arina
Guan, Nan
Xue, Chun Jason
Abstract
Large Language Models (LLMs) are challenging to run on-device because of their high memory demands. Traditional methods for reducing memory usage often compromise performance and lack adaptability. We propose FlexInfer, an optimized offloading framework for on-device inference that addresses these issues with techniques such as asynchronous prefetching, balanced memory locking, and flexible tensor preservation. These strategies improve memory efficiency and mitigate I/O bottlenecks, sustaining high performance within user-specified resource constraints. Experiments show that FlexInfer significantly improves throughput under limited resources, achieving up to 12.5 times better performance than existing methods and enabling the deployment of large models on resource-constrained devices.
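To make the abstract's core idea concrete, below is a minimal sketch of asynchronous prefetching for offloaded weights in a layer-by-layer inference loop: the next layer's weights are fetched from storage on a background thread while the current layer computes, hiding I/O latency behind compute. All names here (load_layer_weights, run_layer, NUM_LAYERS) are hypothetical placeholders, not FlexInfer's actual API.

from concurrent.futures import ThreadPoolExecutor

NUM_LAYERS = 32  # hypothetical model depth

def load_layer_weights(layer_idx):
    """Read one layer's weights from storage (placeholder for real I/O)."""
    return {"layer": layer_idx, "weights": b"..."}  # stand-in payload

def run_layer(hidden_state, weights):
    """Apply one transformer layer (placeholder for real compute)."""
    return hidden_state  # stand-in computation

def infer(hidden_state):
    with ThreadPoolExecutor(max_workers=1) as io_pool:
        # Begin fetching layer 0 before any compute starts.
        pending = io_pool.submit(load_layer_weights, 0)
        for i in range(NUM_LAYERS):
            weights = pending.result()  # blocks only if I/O lags compute
            # Prefetch the next layer's weights while this layer computes.
            if i + 1 < NUM_LAYERS:
                pending = io_pool.submit(load_layer_weights, i + 1)
            hidden_state = run_layer(hidden_state, weights)
    return hidden_state

In this pattern, throughput approaches the slower of per-layer compute time and per-layer I/O time, rather than their sum; the paper's other techniques (balanced memory locking and flexible tensor preservation) decide which tensors stay resident so that less I/O is needed in the first place.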
Citation
H. Du, S. Wu, A. Kharlamova, N. Guan, and C. J. Xue, “FlexInfer: Breaking Memory Constraint via Flexible and Efficient Offloading for On-Device LLM Inference,” in Proceedings of the 5th Workshop on Machine Learning and Systems (EuroMLSys 2025), Mar. 2025. doi: 10.1145/3721146.3721961.
Source
EuroMLSys 2025 - Proceedings of the 2025 5th Workshop on Machine Learning and Systems
Conference
5th Workshop on Machine Learning and Systems, EuroMLSys 2025
Keywords
LLM, offloading, on-device inference, resource-constrained devices
Publisher
Association for Computing Machinery