Causal representation learning from multi-modal biomedical observations
Author
Eran Segal; Eric Xing; Kun Zhang; Yuewen Sun; Guangyi Chen; Zijian Li; Loka Li; Gongxu Luo; Yixuan Zhang
Department
Computational Biology
Type
Conference proceeding
Date
2025
Language
English
Abstract
Prevalent in biomedical applications (e.g., human phenotype research), multimodal datasets can provide valuable insights into the underlying physiological mechanisms. However, current machine learning (ML) models designed to analyze these datasets often lack interpretability and identifiability guarantees, which are essential for biomedical research. Recent advances in causal representation learning have shown promise in identifying interpretable latent causal variables with formal theoretical guarantees. Unfortunately, most current work on multimodal distributions either relies on restrictive parametric assumptions or yields only coarse identification results, limiting their applicability to biomedical research that favors a detailed understanding of the mechanisms. In this work, we aim to develop flexible identification conditions for multimodal data and principled methods to facilitate the understanding of biomedical datasets. Theoretically, we consider a nonparametric latent distribution (cf. parametric assumptions in previous work) that allows for causal relationships across potentially different modalities. We establish identifiability guarantees for each latent component, extending the subspace identification results from previous work. Our key theoretical contribution is the structural sparsity of causal connections between modalities, which, as we will discuss, is natural for a large collection of biomedical systems. Empirically, we present a practical framework to instantiate our theoretical insights. We demonstrate the effectiveness of our approach through extensive experiments on both numerical and synthetic datasets. Results on a real-world human phenotype dataset are consistent with established biomedical research, validating our theoretical and methodological framework. © 2025 13th International Conference on Learning Representations, ICLR 2025. All rights reserved.
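To make the abstract's setup concrete, below is a minimal sketch of the kind of multimodal latent-variable model it describes: each modality has its own encoder and decoder over a shared latent space, and a sparsity penalty on learnable latent-to-modality connection masks stands in for the structural sparsity of cross-modal causal connections that the identifiability theory exploits. This is not the authors' released implementation; the module names, dimensions, mask parameterization, and loss weighting are illustrative assumptions only.

```python
# Minimal sketch (assumptions, not the paper's code): two-modality autoencoder
# with an L1 penalty encouraging sparse latent-to-modality connections.
import torch
import torch.nn as nn
import torch.nn.functional as F


class MultimodalSparseAutoencoder(nn.Module):
    def __init__(self, dim_x1, dim_x2, dim_z=8):
        super().__init__()
        # Modality-specific encoders map each observation into a shared latent space.
        self.enc1 = nn.Sequential(nn.Linear(dim_x1, 64), nn.ReLU(), nn.Linear(64, dim_z))
        self.enc2 = nn.Sequential(nn.Linear(dim_x2, 64), nn.ReLU(), nn.Linear(64, dim_z))
        # Learnable masks play the role of the latent-to-modality connection
        # structure whose sparsity is assumed/encouraged (illustrative choice).
        self.mask1 = nn.Parameter(torch.ones(dim_z))
        self.mask2 = nn.Parameter(torch.ones(dim_z))
        # Modality-specific decoders reconstruct each observation from masked latents.
        self.dec1 = nn.Sequential(nn.Linear(dim_z, 64), nn.ReLU(), nn.Linear(64, dim_x1))
        self.dec2 = nn.Sequential(nn.Linear(dim_z, 64), nn.ReLU(), nn.Linear(64, dim_x2))

    def forward(self, x1, x2):
        # Combine the modality-wise estimates of the shared latent variables.
        z = 0.5 * (self.enc1(x1) + self.enc2(x2))
        return self.dec1(z * self.mask1), self.dec2(z * self.mask2)

    def loss(self, x1, x2, sparsity_weight=1e-2):
        x1_hat, x2_hat = self.forward(x1, x2)
        recon = F.mse_loss(x1_hat, x1) + F.mse_loss(x2_hat, x2)
        # L1 penalty on the connection masks encourages structural sparsity.
        sparsity = self.mask1.abs().sum() + self.mask2.abs().sum()
        return recon + sparsity_weight * sparsity


if __name__ == "__main__":
    model = MultimodalSparseAutoencoder(dim_x1=20, dim_x2=30)
    x1, x2 = torch.randn(16, 20), torch.randn(16, 30)
    print(model.loss(x1, x2).item())
```

In the paper itself, the sparsity constraint concerns the causal connections between latent variables across modalities and underpins component-wise identifiability guarantees; the mask-based L1 penalty above is only a simplified stand-in for that idea.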
Co-author(s)
Sun Yuewen, Kong Lingjing, Chen Guangyi, Li Loka, Luo Gongxu, Li Zijian, Zhang Yixuan, Zheng Yujia, Yang Mengyue, Stojanov Petar, Segal Eran, Xing Eric P., Zhang Kun
Citation
Y. Sun et al., “Causal Representation Learning from Multimodal Biomedical Observations,” International Conference on Learning Representations, vol. 2025, pp. 18536–18568, May 2025
Source
13th International Conference on Learning Representations, ICLR 2025
Conference
13th International Conference on Learning Representations, ICLR 2025
Publisher
International Conference on Learning Representations, ICLR