This workshop is part of the MICCAI 2021 conference.
The workshop will take place in the afternoon of September 27, 2021.
Overview
Machine learning (ML) systems are achieving remarkable performance at the cost of increased complexity. Deep neural networks, in particular, behave as black boxes, and their behaviour can sometimes be unpredictable. Furthermore, more complex models tend to be less interpretable, which may cause distrust. As these systems are pervasively being introduced to critical domains, such as medical image computing and computer assisted intervention (MICCAI), it becomes imperative to develop methodologies for explaining their predictions.
Such methodologies would help physicians decide whether to follow/trust a prediction, and might help identify failure cases. Additionally, they could facilitate the deployment of such systems from a legal perspective. Ultimately, interpretability is closely related to AI safety in healthcare.
However, there is very limited work on the interpretability of ML systems within MICCAI research. Besides increasing trust and acceptance by physicians, interpretability of ML systems can be helpful during method development, for instance by inspecting whether the model is learning aspects coherent with domain knowledge, or by studying failures. It may also help reveal biases in the training data, or identify the most relevant data (e.g., specific MRI sequences in multi-sequence acquisitions). This is critical, since the rise of chronic conditions has led to continuous growth in the use of medical imaging, while reimbursements have been declining. Hence, improved productivity through the development of more efficient acquisition protocols is urgently needed.
The Workshop on Interpretability of Machine Intelligence in Medical Image Computing (iMIMIC) at MICCAI 2021 aims at introducing the challenges and opportunities related to the interpretability of ML systems in the context of MICCAI.
Scope
Interpretability can be defined as the provision of explanations for a machine learning system. It can be broadly categorized as global or local: the former explains the model and how it learned, while the latter is concerned with explaining individual predictions. Visualization is often useful for assisting the process of model interpretation. The model's uncertainty may also serve as a proxy for interpreting it, by identifying difficult instances. Still, although there are some approaches for tackling machine learning interpretability, the field lacks a formal and clear definition and taxonomy, as well as general approaches. Additionally, interpretability results often rely on comparing explanations with domain knowledge. Hence, there is a need to define objective, quantitative, and systematic evaluation methodologies.
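As a concrete illustration of a local explanation, below is a minimal sketch of a gradient-based saliency map for a single prediction. The model and input are hypothetical placeholders (an untrained ResNet-18 and a random image), not tied to any particular workshop contribution.

import torch
import torchvision.models as models

# Hypothetical stand-in for a trained medical image classifier.
model = models.resnet18(weights=None)
model.eval()

# Placeholder input image (batch of 1, 3 channels, 224x224 pixels).
image = torch.rand(1, 3, 224, 224, requires_grad=True)

# Forward pass and choice of the class to explain.
logits = model(image)
target_class = logits.argmax(dim=1).item()

# Gradient of the target-class score with respect to the input pixels.
logits[0, target_class].backward()

# Saliency map: maximum absolute gradient across channels, highlighting
# the pixels that most influence this individual prediction.
saliency = image.grad.abs().max(dim=1)[0].squeeze(0)
print(saliency.shape)  # torch.Size([224, 224])

In practice, such local explanations are typically compared against domain knowledge (e.g., whether the highlighted region coincides with the lesion), which is precisely why objective, quantitative evaluation methodologies are needed.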
This workshop aims at fostering discussion, presenting ideas to tackle the many challenges, and identifying opportunities related to the interpretability of ML systems in the context of MICCAI. The main purposes of this workshop are:
- To introduce the challenges and opportunities related to the interpretability of machine learning systems in the context of MICCAI. While there have been workshops on interpretability of machine learning systems at general machine learning and AI conferences (NIPS, ICML), to the best of our knowledge, iMIMIC is the only workshop dedicated to the medical imaging application domain.
- To understand the state of the art of this field. This will be achieved through the submitted manuscripts and the invited keynote speakers.
- To bring together researchers in this field, and to discuss related issues and future work.
- To understand the implications of interpretability (or the lack thereof) of machine learning systems in the MICCAI field.
Covered topics include but are not limited to:
- Definition of interpretability in the context of medical image analysis.
- Visualization techniques useful for model interpretation in medical image analysis.
- Local explanations for model interpretability in medical image analysis.
- Methods to improve transparency of machine learning models commonly used in medical image analysis.
- Textual explanations of model decisions in medical image analysis.
- Uncertainty quantification in the context of model interpretability.
- Quantification and measurement of interpretability.
- Legal and regulatory aspects of model interpretability in medicine.
Program
The workshop program includes keynote presentations by experts working in the field of machine learning interpretability. A selection of submitted manuscripts will be chosen for short oral presentations (10 minutes + 3 minutes Q&A) alongside the keynotes. Finally, we will have a group discussion, leaving room for brainstorming on the most pressing issues in the interpretability of machine intelligence in the context of MICCAI.
Preliminary program (UTC time), September 27, 2021.
Best paper award
Congratulations to the authors of the iMIMIC 2021 best paper!
"Visual Explanation by Unifying Adversarial Generation and Feature Importance Attributions" by Martin Charachon, Paul-Henry Cournède, Céline Hudelot and Roberto Ardon. [PDF]
Proceedings
The joint proceedings of iMIMIC and TDA4MedicalData have been published as part of the Lecture Notes in Computer Science book series (LNCS, volume 12929).
Interpretability of Machine Intelligence in Medical Image Computing, and Topological Data Analysis and Its Applications for Medical Data is available here.
Keynote speakers
- Prof. Dr. Mihaela van der Schaar, University of Cambridge, UK.
Title: Quantitative Epistemology: Conceiving a new human-machine partnership
- Been Kim, PhD, Google Brain, USA.
Title: Interpretability for philosophical and skeptical minds
- Cynthia Rudin, PhD, Duke University, USA.
Title: Interpretable Neural Networks for Computer Vision: Clinical Decisions that are Computer-Aided, not Automated
Abstract: Let us consider a difficult computer vision challenge. Would you want an algorithm to determine whether you should get a biopsy, based on an X-ray? That's usually a decision made by a radiologist, based on years of training. We know that algorithms haven't worked perfectly for a multitude of other computer vision applications, and biopsy decisions are harder than just about any other application of computer vision that we typically consider. The interesting question is whether it is possible that an algorithm could be a true partner to a physician, rather than making the decision on its own. To do this, at the very least, we would need an interpretable neural network that is as accurate as its black box counterparts. In this talk, I will discuss two approaches to interpretable neural networks: (1) case-based reasoning, where parts of images are compared to other parts of prototypical images for each class, and (2) neural disentanglement, using a technique called concept whitening. The case-based reasoning technique is strictly better than saliency maps, and the concept whitening technique provides a strict advantage over the post-hoc use of concept vectors. Here are the papers I will discuss:
- This Looks Like That: Deep Learning for Interpretable Image Recognition. NeurIPS spotlight, 2019.
- Concept Whitening for Interpretable Image Recognition. Nature Machine Intelligence, 2020.
- Stop Explaining Black Box Machine Learning Models for High Stakes Decisions and use Interpretable Models Instead. Nature Machine Intelligence, 2019.
- IAIA-BL: A Case-based Interpretable Deep Learning Model for Classification of Mass Lesions in Digital Mammography, 2021.
- Interpretable Machine Learning: Fundamental Principles and 10 Grand Challenges, 2021.
Paper submission
Authors should prepare a manuscript of 8 pages, excluding references. The manuscript should be formatted according to the Lecture Notes in Computer Science (LNCS) style. All submissions will be reviewed by 3 reviewers. The reviewing process will be single-blind. Authors will be asked to disclose possible conflicts of interest, such as collaboration within the previous two years. Papers will be selected based on their relevance to medical image analysis, significance of results, technical and experimental merit, and clarity of presentation.
We intend to join the MICCAI Satellite Events joint proceedings and publish the accepted papers in LNCS. We are also considering making pre-prints of the accepted papers publicly available.
The authors of the best paper of the workshop will receive a Best Paper award.
Click here to submit your paper.
Important dates
- Opening of submission system: Open now.
- Paper submission due: June 30, 2021.
- Reviews due: July 23, 2021.
- Notification of paper decisions: July 28, 2021.
- Camera-ready papers due: August 4, 2021.
- Workshop: September 27, 2021.
Venue
The iMIMIC workshop will take place as part of the MICCAI 2021 conference, held from September 27 to October 1, 2021, in Strasbourg, France.
More information regarding the venue can be found on the conference website.
Organizing Team
General Chairs
- Mauricio Reyes, University of Bern, Switzerland.
- Jaime Cardoso, INESC Porto, Universidade do Porto, Portugal.
- Himabindu Lakkaraju, Harvard University, USA.
- Jayashree Kalpathy-Cramer, MGH Harvard University, USA.
- Nguyen Le Minh, Japan Advanced Institute of Science and Technology, Japan.
- Pedro Abreu, CISUC and University of Coimbra, Portugal.
- Roland Wiest, University Hospital Bern, Bern, Switzerland.
- José Amorim, CISUC and University of Coimbra, Portugal.
- Wilson Silva, INESC TEC and University of Porto, Portugal.
Program Committee
- Adam Perer, Carnegie Mellon University, USA.
- Alexander Binder, University of Oslo, Norway.
- Ben Glocker, Imperial College, United Kingdom.
- Bettina Finzel, University of Bamberg, Germany.
- Bjoern Menze, TUM, Germany.
- Carlos A. Silva, University of Minho, Portugal.
- Christoph Molnar, Ludwig Maximilian University of Munich, Germany.
- Claes Nøhr Ladefoged, Rigshospitalet, Denmark.
- Dwarikanath Mahapatra, Inception Institute of AI, Abu Dhabi, UAE.
- Ender Konukoglu, ETH Zurich, Switzerland.
- George Panoutsos, University of Sheffield, United Kingdom.
- Henning Müller, HES-SO Valais-Wallis, Switzerland.
- Hrvoje Bogunovic, Medical University of Vienna, Austria.
- Isabel Rio-Torto, University of Porto, Portugal.
- Islem Rekik, Istanbul Technical University, Turkey.
- Mara Graziani, HES-SO Valais-Wallis, Switzerland.
- Nick Pawlowski, Imperial College London, United Kingdom.
- Sérgio Pereira, Lunit, South Korea.
- Ute Schmid, University of Bamberg, Germany.
- Wojciech Samek, Fraunhofer HHI, Germany.
Sponsors
Interested in participating and being a sponsor? Email us.