
BiMediX2: Bio-Medical EXpert LMM for Diverse Medical Modalities

Mullappilly, Sahal Shaji
Kurpath, Mohammed Irfan
Pieri, Sara
Alseiari, Saeed Yahya
Cholakkal, Shanavas
Aldahmani, Khaled M
Khan, Fahad Shahbaz
Anwer, Rao Muhammad
Khan, Salman
Baldwin, Timothy
Cholakkal, Hisham
Abstract
We introduce BiMediX2, a bilingual (Arabic-English) Bio-Medical EXpert Large Multimodal Model that supports text-based and image-based medical interactions. It enables multi-turn conversation in Arabic and English and supports diverse medical imaging modalities, including radiology, CT, and histology. To train BiMediX2, we curate BiMed-V, an extensive Arabic-English bilingual healthcare dataset consisting of 1.6M samples of diverse medical interactions. This dataset supports a range of medical Large Language Model (LLM) and Large Multimodal Model (LMM) tasks, including multi-turn medical conversations, report generation, and visual question answering (VQA). We also introduce BiMed-MBench, the first Arabic-English medical LMM evaluation benchmark, verified by medical experts. BiMediX2 demonstrates excellent performance across multiple medical LLM and LMM benchmarks, achieving state-of-the-art results compared to other open-source models. On BiMed-MBench, BiMediX2 outperforms existing methods by over 9% in English and more than 20% in Arabic evaluations. Additionally, it surpasses GPT-4 by approximately 9% in UPHILL factual accuracy evaluations and excels in various medical VQA, report generation, and report summarization tasks. Our trained models, instruction set, and source code are available at https://github.com/mbzuai-oryx/BiMediX2
Citation
S.S. Mullappilly, M.I. Kurpath, S. Pieri, S.Y. Alseiari, S. Cholakkal, K.M. Aldahmani, F.S. Khan, R.M. Anwer, S. Khan, T. Baldwin, H. Cholakkal, "BiMediX2: Bio-Medical EXpert LMM for Diverse Medical Modalities," in Findings of the Association for Computational Linguistics: EMNLP 2025, 2025, pp. 14051-14071.
Source
Findings of the Association for Computational Linguistics: EMNLP 2025
Publisher
Association for Computational Linguistics