Multimodal CustOmics: A unified and interpretable multi-task deep learning framework for multimodal integrative data analysis in oncology
PMCID: PMC12173418
PMID: 40526743
DOI: 10.1371/journal.pcbi.1013012
Journal: PLoS Computational Biology
Publication Date: 2025-06-17
Authors: Benkirane H, Vakalopoulou M, Planchard D, Adam J, Olaussen K, et al.
Key Points
- Multimodal CustOmics achieved superior predictive performance across multiple cancer types, with classification AUC consistently above 98% when integrating whole slide images and multi-omics data
- Survival prediction concordance index reached up to 84.2% across eight TCGA cohorts, highlighting the method's robust predictive capabilities (a minimal C-index sketch follows this list)
- The framework emphasizes interpretability, bridging computational analysis with biological understanding by generating scores that explain model predictions at the molecular and spatial levels
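For context, the concordance index cited in the second point measures how often predicted risks rank patients consistently with their observed survival times; a C-index of 0.842 means 84.2% of comparable patient pairs are correctly ordered. The sketch below is a minimal NumPy implementation of Harrell's C-index; the function name and toy data are illustrative and not taken from the CustOmics codebase.

```python
import numpy as np

def harrell_c_index(times, events, risks):
    """Harrell's concordance index: fraction of comparable patient pairs
    whose predicted risks are ordered consistently with their observed
    survival times. Illustrative helper, not CustOmics code.

    times  : observed survival or censoring times
    events : 1 if the event (death) was observed, 0 if censored
    risks  : model-predicted risk scores (higher = shorter survival)
    """
    times, events, risks = map(np.asarray, (times, events, risks))
    concordant, comparable = 0.0, 0
    n = len(times)
    for i in range(n):
        for j in range(n):
            # A pair is comparable only if patient i had an observed
            # event strictly before patient j's time.
            if events[i] == 1 and times[i] < times[j]:
                comparable += 1
                if risks[i] > risks[j]:
                    concordant += 1
                elif risks[i] == risks[j]:
                    concordant += 0.5  # tied risks count as half
    return concordant / comparable

# Toy example: risks perfectly ordered against survival times -> 1.0
print(harrell_c_index([2, 4, 6, 8], [1, 1, 0, 1], [0.9, 0.7, 0.5, 0.1]))
```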
Summary
The Multimodal CustOmics framework is a multi-task deep learning approach that integrates multi-omics and histopathological data for precision cancer characterization. By analyzing molecular and imaging data jointly rather than in isolation, it outperforms existing methods across multiple cancer datasets, including Pan-Cancer, Breast, Colorectal, and Stomach cancer cohorts.
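While the paper defines its own encoders and fusion architecture, the general shape of such an integration can be sketched as one encoder per modality whose embeddings are fused into a joint latent representation feeding downstream task heads. The PyTorch module below is a minimal illustration under assumed input dimensions (rna_dim, cnv_dim, wsi_dim) and an assumed 33-class tumor-type head; it is not the authors' implementation.

```python
import torch
import torch.nn as nn

class MultimodalFusion(nn.Module):
    """Minimal late-fusion sketch: one encoder per data modality,
    embeddings concatenated and mapped to a shared latent space.
    Dimensions and layer choices are illustrative assumptions only."""

    def __init__(self, rna_dim=2000, cnv_dim=2000, wsi_dim=1024, latent_dim=128):
        super().__init__()
        # One small encoder per modality (RNA-seq, copy number, WSI features)
        self.rna_enc = nn.Sequential(nn.Linear(rna_dim, 256), nn.ReLU())
        self.cnv_enc = nn.Sequential(nn.Linear(cnv_dim, 256), nn.ReLU())
        self.wsi_enc = nn.Sequential(nn.Linear(wsi_dim, 256), nn.ReLU())
        # Fuse concatenated embeddings into a joint latent representation
        self.fusion = nn.Linear(3 * 256, latent_dim)
        # Example task head: tumor-type classification over 33 TCGA types
        self.classifier = nn.Linear(latent_dim, 33)

    def forward(self, rna, cnv, wsi):
        z = torch.cat(
            [self.rna_enc(rna), self.cnv_enc(cnv), self.wsi_enc(wsi)], dim=-1
        )
        latent = torch.relu(self.fusion(z))
        return self.classifier(latent), latent

model = MultimodalFusion()
logits, latent = model(torch.randn(4, 2000), torch.randn(4, 2000), torch.randn(4, 1024))
print(logits.shape, latent.shape)  # torch.Size([4, 33]) torch.Size([4, 128])
```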
The study's core innovation is a set of three interpretability scores that explain predictions at three biological levels: genes, pathways, and spatial configurations. Using a Mixture-of-Experts integration strategy, the method achieved superior classification performance, reaching up to 99.5% area under the ROC curve (AUC) in multi-omics integration, and produced more stable pathway-importance assessments than traditional attention mechanisms.
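To make the Mixture-of-Experts idea concrete: a learned gate assigns each sample a set of mixing weights over several expert networks, and those gate weights can be inspected directly, which is one reason MoE-style integration can yield more stable importance estimates than attention maps. The layer below is a generic soft-MoE sketch with assumed shapes and expert count, not the exact CustOmics architecture.

```python
import torch
import torch.nn as nn

class MixtureOfExperts(nn.Module):
    """Generic soft Mixture-of-Experts sketch: each expert maps the
    input to an output, and a learned gate produces per-sample mixing
    weights. Shapes and expert count are illustrative assumptions."""

    def __init__(self, in_dim=128, out_dim=64, n_experts=4):
        super().__init__()
        self.experts = nn.ModuleList(
            [nn.Linear(in_dim, out_dim) for _ in range(n_experts)]
        )
        self.gate = nn.Linear(in_dim, n_experts)

    def forward(self, x):
        # weights: (batch, n_experts), softmax-normalized per sample
        weights = torch.softmax(self.gate(x), dim=-1)
        # outs: (batch, n_experts, out_dim), one output per expert
        outs = torch.stack([expert(x) for expert in self.experts], dim=1)
        # Weighted sum of expert outputs; the gate weights double as a
        # simple, inspectable importance signal for each expert.
        mixed = (weights.unsqueeze(-1) * outs).sum(dim=1)
        return mixed, weights

moe = MixtureOfExperts()
y, w = moe(torch.randn(4, 128))
print(y.shape, w.shape)  # torch.Size([4, 64]) torch.Size([4, 4])
```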