Web-demo Interpretability Approaches
In this section we provide a few examples of common interpretability approaches applied to a deep learning model trained to identify cardiomegaly cases from lung x-ray images.
Note: To simplify the demo, the model being interpreted was trained only to identify cardiomegaly cases; however, these interpretability approaches can be used in multi-class problems as well.
The following interpretability approaches are included as implemented in the iNNvestigate Python library.
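As a rough sketch of how such saliency maps are typically computed with iNNvestigate (the `load_trained_vgg16` and `load_xray_batch` helpers below are placeholders for the demo's own loading code, and the exact location of the softmax-stripping helper can differ between library versions):

```python
import numpy as np
import innvestigate
import innvestigate.utils as iutils

# Placeholder helpers: these stand in for the demo's own model/data loading
# code and are not part of iNNvestigate.
model = load_trained_vgg16()          # Keras model ending in a softmax
x = load_xray_batch()                 # batch of preprocessed x-ray images

# Analyzers are usually applied to the model without its final softmax;
# in iNNvestigate 1.x this helper lives in innvestigate.utils.
model_wo_sm = iutils.model_wo_softmax(model)

# Create one of the supported analyzers, e.g. gradient or an LRP variant.
analyzer = innvestigate.create_analyzer("lrp.epsilon", model_wo_sm)

# The analysis has the same shape as the input; aggregate channels and
# normalise so it can be overlaid on the x-ray as a saliency map.
saliency = analyzer.analyze(x)
saliency = saliency.sum(axis=-1)
saliency /= np.max(np.abs(saliency))
```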
Instructions:
- Select the input image and the interpretability method (middle column).
- Click on "Run Interpretation" to visualize the corresponding saliency map.
- Move the slider to change the opacity of the saliency map (a small overlay sketch is shown below).
- When selecting a new image, click on "Run Interpretation" to update the saliency map.
The top and bottom rows of the input images show cardiomegaly and non-cardiomegaly cases, respectively.
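To make the opacity behaviour concrete, here is a tiny sketch of how a saliency map can be alpha-blended over the x-ray with matplotlib; the arrays are random placeholders for a real image and saliency map:

```python
import numpy as np
import matplotlib.pyplot as plt

def show_overlay(xray, saliency, opacity=0.5):
    """Alpha-blend the saliency map over the x-ray; `opacity` plays the role of the slider."""
    plt.imshow(xray, cmap="gray")
    plt.imshow(saliency, cmap="jet", alpha=opacity)
    plt.axis("off")
    plt.show()

# Toy usage with random arrays standing in for a real x-ray and its saliency map.
rng = np.random.default_rng(0)
show_overlay(rng.random((224, 224)), rng.random((224, 224)), opacity=0.4)
```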
The VGG16 model and training dataset
The underlying model corresponds to the well-known VGG16 neural network, trained on the publicly available NIH Chest dataset.
The model was trained using Keras, and its architecture consists of 6 blocks, each containing convolutional layers with ReLU activation functions followed by a max-pooling layer. To compute the final class probabilities, the model includes two dense fully-connected layers and two dropout layers for regularization. For training, the model was initialized with weights pre-trained on ImageNet.
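For reference, a minimal sketch of how such a VGG16-based classifier might be set up in Keras is given below; the input size, dense-layer width, and dropout rates are illustrative assumptions rather than the exact values used for the demo model:

```python
from tensorflow import keras
from tensorflow.keras import layers

# VGG16 convolutional base initialised via ImageNet pre-training.
base = keras.applications.VGG16(
    weights="imagenet",
    include_top=False,
    input_shape=(224, 224, 3),   # assumed input size
)

# Classification head: two fully-connected layers and two dropout layers,
# as described above; widths and dropout rates are assumptions.
model = keras.Sequential([
    base,
    layers.Flatten(),
    layers.Dropout(0.5),
    layers.Dense(256, activation="relu"),
    layers.Dropout(0.5),
    layers.Dense(2, activation="softmax"),   # cardiomegaly vs. non-cardiomegaly
])

model.compile(
    optimizer=keras.optimizers.Adam(1e-4),
    loss="categorical_crossentropy",
    metrics=["accuracy"],
)
```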
Some notes worth discussing regarding these results:
The saliency maps suggest that, overall, the VGG16 model learned to associate cardiomegaly cases with patterns such as an enlarged left atrial appendage and an enlarged pulmonary artery (indicative of pulmonary hypertension), as in case 1.
Interestingly, in some cases the saliency maps show that the VGG16 model also learned to identify post-operative bypass clips, as patients who have undergone a bypass operation often present with cardiomegaly.
This suggests that the VGG16 model might have learned to identify cardiomegaly using different radiographic features than a radiologist would use, which in turn can help in assessing the validity of the features the trained model relies on for the task.
Can interpretability methods be used to audit any potential bias in the training data?
Current deep learning models are excellent at finding patterns in the data that correlate with a desired outcome. However, the correlation often stems from an undesired bias in the data. Under the tab "Biased Dataset", a simulation corresponding to a bias in the data is shown. A VGG16 neural network was trained with cardiomegaly images to which a square was artificially added (induced bias).
By running the different interpretability approaches, it can be seen that the saliency maps highlight pixels belonging to the artificially added square, indicating that the model learned to use it as a marker of the condition.
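A minimal sketch of how such a bias can be injected into the training images is shown below; the size, intensity, and position of the square are assumed values for illustration, not necessarily those used in the demo:

```python
import numpy as np

def add_bias_marker(image, size=20, value=1.0):
    """Paint a bright square into the top-left corner of the image.

    The size, intensity, and position are assumptions made for this sketch.
    """
    biased = image.copy()
    biased[:size, :size, ...] = value
    return biased

# Toy usage: mark only the positive-class (cardiomegaly) images so the
# square becomes a spurious, perfectly predictive feature.
rng = np.random.default_rng(0)
x_train = rng.random((8, 224, 224, 3)).astype("float32")  # dummy images
y_train = np.array([0, 1, 0, 1, 1, 0, 1, 0])              # dummy labels

x_train_biased = np.array([
    add_bias_marker(img) if label == 1 else img
    for img, label in zip(x_train, y_train)
])
```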
Big thanks to Fabio Anderegg, who produced this demo while doing an internship in our lab.