This workshop is part of the MICCAI 2022 conference.

The proceedings may be directly downloaded from here.

Overview

Machine learning (ML) systems are achieving remarkable performance at the cost of increased complexity. Deep neural networks, in particular, behave as black boxes, and their behaviour can sometimes be unpredictable. Furthermore, more complex models are less interpretable, which may cause distrust. As these systems are pervasively introduced to critical domains, such as medical image computing and computer assisted intervention (MICCAI), it becomes imperative to develop methodologies for explaining their predictions.

Such methodologies would help physicians decide whether they should follow/trust a prediction, and might help identify failure cases. Additionally, they could facilitate the deployment of such systems from a legal perspective. Ultimately, interpretability is closely related to AI safety in healthcare.

However, there is very limited work on the interpretability of ML systems within MICCAI research. Besides increasing trust and acceptance by physicians, interpretability of ML systems can be helpful during method development, for instance by inspecting whether the model is learning aspects coherent with domain knowledge, or by studying failures. It may also help reveal biases in the training data, or identify the most relevant data (e.g., specific MRI sequences in multi-sequence acquisitions). This is critical since the rise of chronic conditions has led to a continuous growth in the usage of medical imaging, while at the same time reimbursements have been declining. Hence, interpretability can help improve the productivity of image acquisition protocols by highlighting learned features and their relationships to disease patterns.

The Workshop on Interpretability of Machine Intelligence in Medical Image Computing (iMIMIC) at MICCAI 2022 aims at introducing the challenges & opportunities related to the topic of interpretability of ML systems in the context of MICCAI.

Interpretability can be defined as an explanation of the machine learning system. It can be broadly categorized as global or local: the former explains the model and how it learned, while the latter is concerned with explaining individual predictions. Visualization is often useful for assisting the process of model interpretation, and the model’s uncertainty may be seen as a proxy for interpreting it, by identifying difficult instances. Still, although some approaches for tackling machine learning interpretability exist, the field lacks a formal and clear definition and taxonomy, as well as general approaches and regulatory frameworks. Additionally, interpretability results often rely on comparing explanations with domain knowledge. Hence, there is a need for objective, quantitative, and systematic evaluation methodologies.
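
As an illustration of a local explanation, the short sketch below (a generic example, not drawn from any workshop contribution) computes a vanilla gradient saliency map for a single prediction; the classifier and the input image are placeholders standing in for a trained medical-imaging model and a real scan.

    # Minimal sketch of a *local* explanation: a vanilla gradient saliency map.
    # The model and the image below are placeholders; in practice they would be
    # a trained medical-image classifier and a real scan.
    import torch
    import torch.nn as nn

    model = nn.Sequential(                      # stands in for a trained network
        nn.Conv2d(1, 8, kernel_size=3, padding=1),
        nn.ReLU(),
        nn.AdaptiveAvgPool2d(1),
        nn.Flatten(),
        nn.Linear(8, 2),                        # e.g. healthy vs. pathological
    )
    model.eval()

    image = torch.rand(1, 1, 128, 128, requires_grad=True)  # e.g. one MRI slice

    # Gradient of the predicted class score with respect to the input pixels.
    scores = model(image)
    predicted_class = scores.argmax(dim=1).item()
    scores[0, predicted_class].backward()

    # Large absolute gradients mark the pixels that most influence this
    # particular prediction -- an explanation local to this one image.
    saliency = image.grad.abs().squeeze()
    print(saliency.shape)  # torch.Size([128, 128])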

This workshop aims to foster discussion, present ideas to tackle the many challenges, and identify opportunities related to the topic of interpretability of ML systems in the context of MICCAI. Therefore, the main purposes of this workshop are:

  1. To introduce the challenges/opportunities related to the topic of interpretability of machine learning systems in the context of MICCAI. While there have been workshops on the interpretability of machine learning systems at general machine learning and AI conferences (NeurIPS, ICML), to the best of our knowledge, iMIMIC is the only workshop dedicated to the medical imaging application domain.
  2. To understand the state of the art of this field. This will be achieved through the submitted manuscripts and the invited keynote speakers.
  3. To bring together researchers in this field, and to discuss related open issues and future work.
  4. To understand the implications of interpretability (or the lack thereof) of machine learning systems in the MICCAI field.

Covered topics include but are not limited to:

The program of the workshop includes keynote presentations by experts working in the field of interpretability of machine learning. A selection of submitted manuscripts will be chosen for long oral presentations (9 minutes + 3 minutes Q&A) and short oral presentations (6 minutes + 3 minutes Q&A) alongside the keynotes. Finally, we will have a group discussion, leaving room for brainstorming on the most pressing issues in the interpretability of machine intelligence in the context of MICCAI.

Preliminary program:

SGT time - September 22nd

  • 12:00: Opening Session
  • 12:05: Keynote: Ruth Fong - "Understanding Deep Neural Networks"
  • 12:40: Hanxiao Zhang et al. - "Interpretable Lung Cancer Diagnosis with Online Model Debugging" [PDF]
  • 12:52: Daehyun Cho et al. - "Do pre-processing and augmentation help explainability? A multi-seed analysis for brain age estimation" [PDF]
  • 13:04: Florian Kowarsch et al. - "Towards Self-Explainable Transformers for Cell Classification in Flow Cytometry Data" [PDF]
  • 13:16: Jiahao Lu et al. - "Reducing Annotation Need in Self-Explanatory Models for Lung Nodule Diagnosis" [PDF]
  • 13:30: Coffee Break
  • 13:40: Keynote: Alexander Binder - "Explainability beyond eyeballing heatmaps: towards model improvement using XAI" [PDF]
  • 14:10: Mara Graziani et al. - "Attention-based Interpretable Regression of Gene Expression in Histology" [PDF]
  • 14:22: Benjamin Lambert et al. - "Beyond Voxel Prediction Uncertainty: Identifying brain lesions you can trust" [PDF]
  • 14:31: Ashkan Khakzar et al. - "Interpretable Vertebral Fracture Diagnosis" [PDF]
  • 14:40: Bas H.M. van der Velden et al. - "Multi-modal volumetric concept activation to explain detection and classification of metastatic prostate cancer on PSMA-PET/CT" [PDF]
  • 14:59: Dongyang Kuang et al. - "KAM - a Kernel Attention Module for Emotion Classification with EEG Data" [PDF]
  • 15:08: Amy Rafferty et al. - "Explainable Artificial Intelligence for Breast Tumour Classification: Helpful or Harmful" [PDF]
  • 15:15: Closing Session and Best Paper Award
  • Congratulations to the authors of the iMIMIC 2022 best paper!

    "Towards Self-Explainable Transformers for Cell Classification in Flow Cytometry Data" by Florian Kowarsch et al. [PDF]

Call for Papers

Authors should prepare a manuscript of 8-10 pages, including references. The manuscript should be formatted according to the Lecture Notes in Computer Science (LNCS) style and anonymized. As in previous years, we give preference to publishing the proceedings following the MICCAI Springer publication model.

All submissions will be reviewed by 3 reviewers. Authors will be asked to disclose possible conflicts of interest, such as collaborations within the previous two years. Moreover, care will be taken to avoid assigning reviewers from the same institution as the authors. Papers will be selected based on their relevance to medical image analysis, significance of results, technical and experimental merit, and clarity of presentation. Following previous editions, we will use Microsoft’s CMT platform to conduct the review process.

We intend to join the MICCAI Satellite Events joint proceedings and publish the accepted papers as LNCS. We are also considering making pre-prints of the accepted papers publicly available.

The authors of the best paper of the workshop will receive a Best Paper award.

Click here to submit your paper.

Venue

The iMIMIC workshop will take place as part of the MICCAI 2022 conference, held between 18 and 22 September 2022 at the Resorts World Convention Centre, Singapore.

More information regarding the venue can be found at the conference website.

Sponsors

Interested in participating and being a sponsor? Email us.