Advancing Elderly Care through Multimodal Fall Detection Technology
Author
Alobeidli, Hamza
Supervisor
Department
Machine Learning
Embargo End Date
2024-01-01
Type
Thesis
Date
2024
License
Language
English
Abstract
In healthcare, the detection of falls among the elderly is a pivotal area, where prompt detection and intervention can substantially reduce the risk of severe injuries. This thesis introduces a multimodal methodology that combines sensor-based analysis with vision-based observation to advance fall detection technology. Drawing on comprehensive datasets such as UPFALL, UR Fall Detection (URFD), LE2I, WISDM, and SmartFall, which span a variety of environments and activities, our approach aims to accurately capture the intricate dynamics of falls among older adults. We employ Convolutional Neural Networks (CNNs) and Long Short-Term Memory (LSTM) networks to analyze sensor data, capturing the nuanced physical movements that precede a fall. Concurrently, for vision-based detection, we leverage Vision Transformers (ViTs) to process visual data from sources such as LE2I, URFD, and UPFALL, which provide a wealth of visual cues for identifying fall-specific patterns in real-world settings. Integrating these technologies yields a comprehensive view of both the visual and physical precursors to falls, enhancing detection accuracy and reliability. By synthesizing data from both the vision and sensor modalities, our approach not only aims to detect falls more accurately but also seeks to offer contextual insights for preventive elderly care: recognizing early indicators of fall risk could facilitate timely preventative measures and contribute to the overall health and safety of the elderly population. This thesis highlights the potential of pairing CNN/LSTM models for sensor data interpretation with ViTs for vision-based tasks to develop an advanced, real-time fall detection system.
By focusing on multimodal tools for real-time fall detection, this work lays the groundwork for future research that integrates the outcomes of the individual modalities, enhancing the reliability of fall classification. The expected outcomes of this research include not only advances in elderly fall detection technology but also broader implications for improving seniors' quality of life and alleviating pressure on healthcare systems.
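To illustrate how the outcomes of the two modalities might be integrated, the sketch below shows a simple late-fusion rule: each branch (the CNN/LSTM sensor branch and the ViT vision branch) emits a per-window fall probability, and a weighted average produces the final decision. This is a hypothetical minimal example for exposition, not the fusion scheme used in the thesis; the function name, weight, and threshold are assumptions.

```python
def fuse_fall_scores(sensor_prob, vision_prob, w_sensor=0.5, threshold=0.5):
    """Late fusion of per-window fall probabilities from two branches.

    sensor_prob: probability from the sensor branch (hypothetical CNN/LSTM output)
    vision_prob: probability from the vision branch (hypothetical ViT output)
    w_sensor:    weight given to the sensor branch (vision gets 1 - w_sensor)
    Returns (fused probability, boolean fall decision).
    """
    fused = w_sensor * sensor_prob + (1.0 - w_sensor) * vision_prob
    return fused, fused >= threshold

# Example: the sensor branch is confident, the vision branch less so.
prob, is_fall = fuse_fall_scores(0.9, 0.6)
```

In practice the weight could be tuned on a validation split, or replaced by a learned fusion layer over the branch features rather than their output probabilities.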
Citation
H. Alobeidli, "Advancing Elderly Care through Multimodal Fall Detection Technology," M.S. thesis, Machine Learning, MBZUAI, Abu Dhabi, UAE, 2024.
