This workshop is part of the MICCAI 2025 conference.
Overview
Machine learning (ML) systems are achieving remarkable performance at the cost of increased complexity. Deep neural networks, in particular, appear as black boxes, and their behavior can sometimes be unpredictable. Furthermore, more complex models are less interpretable, which may cause distrust. As these systems are pervasively introduced to critical domains, such as medical image computing and computer-assisted intervention (MICCAI), developing methodologies for explaining model predictions is imperative. Such methodologies would help physicians decide whether to trust a prediction and might help identify failure cases. Additionally, they could facilitate the deployment of such systems from a legal perspective. Ultimately, interpretability is closely related to AI safety in healthcare.
However, more work on the interpretability of ML systems is needed within MICCAI research. Besides increasing trust and acceptance by physicians, the interpretability of ML systems can be helpful during method development, for instance by inspecting whether the model learns aspects coherent with domain knowledge or by studying failure cases. It may also help reveal biases in the training data or identify the most relevant data (e.g., specific MRI sequences in multi-sequence acquisitions). This is critical since the rise of chronic conditions has led to continuous growth in the use of medical imaging, while reimbursements have been declining. Hence, interpretability can help improve the productivity of image acquisition protocols by highlighting learned features and their relationships to disease patterns.
The Workshop on Interpretability of Machine Intelligence in Medical Image Computing (iMIMIC) at MICCAI 2025 aims to introduce the challenges and opportunities related to the interpretability of ML systems in the context of MICCAI.
Scope
Interpretability can be understood as an explanation of a machine learning system and can be broadly categorized as global or local: the former explains the model and how it has learned, while the latter explains individual predictions. Visualization is often helpful in assisting the process of model interpretation, and a model's uncertainty may serve as a proxy for interpreting it by identifying difficult instances. Still, more work is needed to tackle the lack of formal and clear definitions, general approaches, and regulatory frameworks. Additionally, interpretability results often rely on comparing explanations with domain knowledge; hence, there is a need to define objective, quantitative, and systematic evaluation methodologies.
This workshop aims to foster discussion and the presentation of ideas to tackle the many challenges and identify opportunities related to the interpretability of ML systems in the context of MICCAI. Therefore, the primary purposes of this workshop are:
- To introduce the challenges and opportunities related to the interpretability of machine learning systems in the context of MICCAI. While there have been workshops on the interpretability of machine learning systems at general machine learning and AI conferences (e.g., NeurIPS, ICML), to the best of our knowledge, iMIMIC is the only workshop dedicated to the medical imaging application domain.
- To understand the state of the art of this field. This will be achieved through the submitted manuscripts and the invited keynote speakers.
- To bring together researchers in this field and discuss related issues and future work.
- To understand the implications of interpretability (or the lack thereof) of machine learning systems in the MICCAI field.
Covered topics include but are not limited to:
- Definition of interpretability in the context of medical image analysis.
- Visualization techniques useful for model interpretation in medical image analysis.
- Local explanations for model interpretability in medical image analysis.
- Interpretability methods that make use of multimodal data.
- Causal interpretability.
- Methods to improve transparency of machine learning models commonly used in medical image analysis.
- Textual explanations of model decisions in medical image analysis.
- Uncertainty quantification in the context of model interpretability.
- Quantification and measurement of interpretability.
- Legal and regulatory aspects of model interpretability in medicine.
Program
September 27th (local time)
Keynote Speaker
- Jaesik Choi, Korea Advanced Institute of Science & Technology, Republic of Korea.
Title: Recent Advances in Explainable Artificial Intelligence [PDF]
Abstract: Explainable and interpretable machine learning models and algorithms are important topics that have received growing attention from research, industry, and regulatory communities. Many advanced artificial intelligence systems are often perceived as black boxes, and many government agencies pay special attention to the topic. As an example, the EU General Data Protection Regulation (GDPR) and AI Act mandate a right to explanation from machine learning models. In this talk, I will give an overview of recent advances in explainable artificial intelligence, with a focus on medical applications. In particular, I will present how we use interpretability methods to improve the prediction of acute kidney injury (AKI).
Oral Presentations
- "Distribution-Based Masked Medical Vision-Language Model Using Structured Reports" by Shreyank Gowda et al. [PDF]
- "VLEER: Vision and Language Embeddings for Explainable Whole Slide Image Representation" by Anh Nguyen et al. [PDF]
- "Hybrid Explanation-Guided Learning for Transformer-Based Chest X-Ray Diagnosis" by Shelley Zixin Shu et al. [PDF]
- "Minimum Data, Maximum Impact: 20 annotated samples for explainable lung nodule classification" by Luisa Gallée et al. [PDF]
- "Evaluating the Explainability of Vision Transformers in Medical Imaging" by Leili Barekatain et al. [PDF]
- "A Hybrid Fully Convolutional CNN-Transformer Model for Inherently Interpretable Disease Detection from Retinal Fundus Images" by Kerol Djoumessi et al. [PDF]
- "From Explainable to Explained AI: Ideas for Falsifying and Quantifying Explanations" by Yoni Schirris et al. [PDF]
- "ProtoENet: Dynamic Prototype Learning for Inherently Interpretable Ejection Fraction Estimation in Echocardiography" by Yaganeh Ghamary et al. [PDF]
Best paper award
Congratulations to the authors of the iMIMIC 2025 best paper!
"ProtoENet: Dynamic Prototype Learning for Inherently Interpretable Ejection Fraction Estimation in Echocardiography" by Yaganeh Ghamary et al. [PDF]
Important dates
- Opening of submission system: 21 May 2025
- Paper submission due: 25 June 2025
- Reviews due: 10 July 2025
- Notification of paper decisions: 16 July 2025
- Camera-ready papers due: 15 August 2025
- Workshop: 27 September 2025
Venue
The iMIMIC workshop took place as part of the MICCAI 2025 conference, held from the 23rd to the 27th of September 2025 at the Daejeon Convention Center in Daejeon, South Korea.
More information regarding the venue can be found at the conference website.
Organizing Team
General Chairs
- Mauricio Reyes, University of Bern, Switzerland.
- Jaime Cardoso, INESC Porto, Universidade do Porto, Portugal.
- Jayashree Kalpathy-Cramer, MGH, Harvard University, USA.
- Nguyen Le Minh, Japan Advanced Institute of Science and Technology, Japan.
- Pedro Abreu, CISUC and University of Coimbra, Portugal.
- José Amorim, CISUC and University of Coimbra, Portugal.
- Wilson Silva, Utrecht University, The Netherlands.
- Mara Graziani, HES-SO Valais-Wallis, Switzerland.
- Amith Kamath, University of Bern, Switzerland.
- Hao Chen, Hong Kong University of Science and Technology, Hong Kong.
- Shangqi Gao, University of Cambridge, United Kingdom.
Sponsors
Interested in participating and being a sponsor? Email us