This workshop is part of the MICCAI 2024 conference.

Overview

Machine learning (ML) systems are achieving remarkable performance at the cost of increased complexity. Deep neural networks, in particular, often behave as black boxes whose behavior can be difficult to predict. Moreover, more complex models tend to be less interpretable, which may cause distrust. As these systems are increasingly introduced into critical domains, such as medical image computing and computer-assisted intervention (MICCAI), developing methodologies for explaining model predictions is imperative. Such methodologies would help physicians decide whether to trust a prediction and could help identify failure cases. They could also facilitate the deployment of such systems from a legal perspective. Ultimately, interpretability is closely tied to AI safety in healthcare.

However, the interpretability of ML systems remains underexplored in MICCAI research. Besides increasing trust and acceptance by physicians, interpretability can be helpful during method development, for instance, to check whether a model is learning features consistent with domain knowledge or to study its failures. It may also help reveal biases in the training data or identify the most relevant data (e.g., specific MRI sequences in multi-sequence acquisitions). This matters because the rise of chronic conditions has driven continuous growth in the use of medical imaging, while at the same time reimbursements have been declining. Interpretability can thus help improve the productivity of image acquisition protocols by highlighting learned features and their relationships to disease patterns.

The Workshop on Interpretability of Machine Intelligence in Medical Image Computing (iMIMIC) at MICCAI 2024 aims to introduce the challenges and opportunities related to the interpretability of ML systems in the context of MICCAI.

Interpretability refers to the ability to explain a machine learning system's decisions. Explanations can be broadly categorized as global or local: the former explain the model and what it has learned, while the latter explain individual predictions. Visualization often assists the process of model interpretation, and a model's uncertainty can serve as a proxy for interpreting it by identifying difficult instances. Still, more work is needed to address the lack of formal and clear definitions, general approaches, and regulatory frameworks. Additionally, interpretability results often rely on comparing explanations with domain knowledge; hence, objective, quantitative, and systematic evaluation methodologies need to be defined.
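To make the global/local distinction concrete, below is a minimal sketch of a local explanation via gradient saliency, assuming PyTorch and a pretrained torchvision classifier; the model choice and the random input are illustrative placeholders, not a method presented at the workshop.

    # Minimal gradient-saliency sketch (illustrative setup): how much does
    # each input pixel influence the model's predicted class score?
    import torch
    import torchvision.models as models

    model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
    model.eval()

    # Stand-in for a preprocessed medical image (batch, channels, H, W).
    image = torch.rand(1, 3, 224, 224, requires_grad=True)

    logits = model(image)
    score = logits[0, logits.argmax()]   # score of the predicted class
    score.backward()                     # d(score)/d(pixels)

    # Local explanation: high values mark the pixels most relevant
    # to this particular prediction.
    saliency = image.grad.abs().max(dim=1)[0]

A global explanation, by contrast, would characterize what the model has learned overall, for example by summarizing such maps across a dataset or by inspecting the learned features themselves.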

This workshop fosters discussion and the presentation of ideas to tackle these many challenges and to identify opportunities related to the interpretability of ML systems in the context of MICCAI. Its primary purposes are:

  1. To introduce the challenges and opportunities related to the interpretability of machine learning systems in the context of MICCAI. While there have been workshops on interpretability at general machine learning and AI conferences (e.g., NeurIPS, ICML), to the best of our knowledge iMIMIC is the only workshop dedicated to the medical imaging application domain.
  2. To survey the state of the art in this field, through the submitted manuscripts and the invited keynote speakers.
  3. To bring together researchers in this field and discuss open issues and future work.
  4. To understand the implications of interpretability, or the lack of it, for machine learning systems in the MICCAI field.

Covered topics include but are not limited to:

Program: October 6th (DST time)

  • 13:30 – 13:40: Opening Session
  • 13:40 – 14:25: Keynote by Michael Kampffmeyer, University of Tromsø, Norway (45 mins)
  • 14:30 – 15:30: Oral Presentations Part I (15 mins each)
  • 15:30 – 16:30: Coffee break and poster presentations
  • 16:30 – 17:15: Oral Presentations Part II (15 mins each)
  • 17:15 – 17:30: Awards and Closing Session