Invited talk on medical imaging interpretability

We hosted Iam Palatnik (Pontifical Catholic University of Rio de Janeiro, Brazil) at MedGIFT for an invited talk on interpretability in medical imaging. Iam shared challenges and solutions for the Local Interpretable Model-Agnostic Explanations (LIME) method. The video of his presentation is available online.

Abstract: An application of Local Interpretable Model-Agnostic Explanations (LIME) is described for two case studies: metastases and malaria classification. Some of the key challenges of using LIME for this purpose – most notably the instability of explanations – are discussed, as well as some potential solutions: EvEx, a genetic-algorithm-based solution where explanations are evolved as the average of a Pareto front, and Squaregrid, a parameterless rough approximation. The results seem to show that EvEx finds more consistent explanations for regions of high explanatory weight, and that Squaregrid could be a viable way to reduce the need for fine-tuning segmentation parameters.
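To make the segmentation sensitivity concrete, below is a minimal sketch (in Python, using the open-source `lime` package) that runs LIME on a single image with a fixed regular-grid segmentation, which is the intuition behind Squaregrid. The `model` and `image` variables are assumed to exist, and the single-scale grid is only an illustration of the idea, not the speaker's implementation (the actual method may differ, e.g. by combining several grid scales).

```python
# Minimal sketch: LIME with a fixed square-grid segmentation instead of a
# tuned superpixel algorithm. `model` (a classifier whose `predict` returns
# class probabilities for a batch of images) and `image` (an HxWx3 uint8
# array) are assumed to exist.
import numpy as np
from lime import lime_image

def grid_segmentation(image, cells=7):
    """Partition the image into a cells x cells grid of square segments."""
    h, w = image.shape[:2]
    rows = np.arange(h) * cells // h              # grid row index per pixel row
    cols = np.arange(w) * cells // w              # grid column index per pixel column
    return rows[:, None] * cells + cols[None, :]  # unique id per grid cell

explainer = lime_image.LimeImageExplainer()
explanation = explainer.explain_instance(
    image,
    classifier_fn=model.predict,   # (N, H, W, 3) -> (N, num_classes)
    top_labels=1,
    num_samples=1000,
    segmentation_fn=grid_segmentation,
)
# Keep the 5 grid cells with the highest positive explanatory weight.
img, mask = explanation.get_image_and_mask(
    explanation.top_labels[0], positive_only=True, num_features=5
)
```

Because the grid has no tunable superpixel parameters, rerunning the pipeline varies only the random sampling, which illustrates how removing segmentation choices reduces one source of explanation instability.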

Journal paper accepted in Computing and Informatics

Our paper on “Breast Histopathology with High-Performance Computing and Deep Learning” (M. Graziani et al.) has been accepted for publication in Computing and Informatics, in the special issue on Providing Computing Solutions for Exascale Challenges.

In this work, we present our modular pipeline for detecting tumorous regions in digital specimens of breast lymph nodes with deep learning models. We evaluate the challenges and benefits of training the models on millions of images using high-performance and cloud computing.

Overview of the proposed CamNet software

Invited talk of Mara Graziani on Deep Learning Interpretability at CIBM

Our PhD student Mara Graziani discussed how to define machine learning interpretability at CIBM (Center for Biomedical Imaging, Switzerland) and presented our latest applications in the histopathology domain. In particular, she covered our recent work on the “Evaluation and Comparison of CNN Visual Explanations for Histopathology” and explained how interpretability can be used proactively to improve model performance. You can watch her talk online at this link: https://www.youtube.com/watch?v=7hs21U-3hgk&feature=youtu.be

Below is the information about the presentation:

Title: A myth-busting attempt for DL interpretability: discussing taxonomies, methodologies and applications to medical imaging.

Abstract: 

Deep Learning (DL) models report almost perfect accuracy on some medical tasks, yet this performance seems to plunge in real-world practice [1]. Starting in 2009 with the generation of visualizations of deep networks [2, 3], the field of interpretability has grown and developed over the years, with the intent of understanding why such failures happen and of discovering hidden and erroneous behaviors. Several interpretability techniques were then developed to address the fundamentally incomplete problem of evaluating DL models on task performance alone [4].

While defining the key terms used in the field, I will try to bust some myths about DL interpretability: are explainable and interpretable the same thing? Is a “transparent” model an “interpretable” model? Moreover, for applications in the field of medical imaging, I will describe the risk of confirmation bias and present our work on evaluating the reliability of interpretability methods. Finally, I will give examples from our previous work on how interpretable AI can be used to improve model performance and reduce the distance between the clinical and the DL ontologies.

[1] Yune, S., et al. “Real-world performance of deep-learning-based automated detection system for intracranial hemorrhage.” 2018 SIIM Conference on Machine Intelligence in Medical Imaging: San Francisco (2018).

[2] Erhan, D., et al. “Visualizing Higher-Layer Features of a Deep Network.” (2009).

[3] Zeiler, Matthew D., and Rob Fergus. “Visualizing and understanding convolutional networks.” European conference on computer vision. Springer, Cham, 2014.

[4] Doshi-Velez, Finale, and Been Kim. “Towards A Rigorous Science of Interpretable Machine Learning.” arXiv preprint arXiv:1702.08608 (2017).

Two papers accepted at ECIR 2021

Two papers have been accepted for presentation at the 43rd European Conference on Information Retrieval (ECIR 2021), which will take place online, March 28 – April 1, 2021.

The first paper, “The 2021 ImageCLEF Benchmark: Multimedia Retrieval in Medical, Nature, Internet and Social Media Applications” by Bogdan Ionescu et al., presents an overview of the upcoming ImageCLEF 2021 campaign and is supported by the AI4Media project.

The second paper, “LifeCLEF 2021 teaser: Biodiversity Identification and Prediction Challenges” by Hervé Goëau et al., presents the different tasks of the LifeCLEF 2021 evaluation campaign, including PlantCLEF, BirdCLEF and GeoLifeCLEF.

Project accepted for the Germaine de Staël Funding Program 2021

The “Germaine de Staël” program promotes collaboration between French and Swiss researchers and research teams. The project will fund several exchanges between our group (Prof. Henning Müller) and the histopathology image analysis team at Sorbonne University (Nicolas Loménie and Camille Kurtz) to work on deep learning in digital pathology with gigapixel images of hepatic tissue: “BioGigaDeep – Apprentissage profond en pathologie digitale pour l’analyse d’images gigapixels de tissus hépatiques”.

Paper accepted in ISBI 2021

Our paper entitled “InvNet: A Deep Learning Approach to Invert Complex Deformation Fields”, by Marek Wodzinski and Henning Müller, has been accepted for presentation at ISBI 2021, the IEEE International Symposium on Biomedical Imaging, to be held virtually on April 13–16, 2021.

Visualization of the results of the proposed deep learning-based algorithm for inverting complex, nonrigid deformation fields based on a modified U-Net architecture and the symmetric inverse consistency
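For readers unfamiliar with the inverse consistency constraint mentioned in the caption, the sketch below illustrates the underlying idea in PyTorch: composing a displacement field with its predicted inverse, in both directions, should yield an approximately zero displacement. The network name `invnet`, the field shapes and the displacement convention are illustrative assumptions, not the paper's code.

```python
# A hedged sketch of symmetric inverse consistency for dense 2D displacement
# fields u, v of shape (B, 2, H, W); channel 0 holds the x-displacement and
# channel 1 the y-displacement (an assumed convention).
import torch
import torch.nn.functional as F

def warp(field, flow):
    """Sample `field` at the positions x + flow(x) (backward warping)."""
    b, _, h, w = flow.shape
    ys, xs = torch.meshgrid(
        torch.arange(h, device=flow.device, dtype=flow.dtype),
        torch.arange(w, device=flow.device, dtype=flow.dtype),
        indexing="ij",
    )
    x = xs.unsqueeze(0) + flow[:, 0]  # absolute x coordinates, (B, H, W)
    y = ys.unsqueeze(0) + flow[:, 1]  # absolute y coordinates, (B, H, W)
    # Normalize to [-1, 1] for grid_sample; last dim is (x, y).
    grid = torch.stack((2 * x / (w - 1) - 1, 2 * y / (h - 1) - 1), dim=-1)
    return F.grid_sample(field, grid, align_corners=True)

def inverse_consistency_loss(u, v):
    """If phi(x) = x + u(x) and psi(x) = x + v(x), then both compositions
    phi o psi and psi o phi should be close to the identity mapping."""
    forward = v + warp(u, v)   # displacement of phi o psi
    backward = u + warp(v, u)  # displacement of psi o phi
    return forward.abs().mean() + backward.abs().mean()

# Training sketch (hypothetical network name):
#   v = invnet(u)
#   loss = inverse_consistency_loss(u, v)
```

Penalizing both composition directions is what makes the constraint symmetric: neither u nor v is privileged as the "true" field.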

Paper accepted at the Explainable Agency in Artificial Intelligence Workshop AAAI-21

Our paper on the “Evaluation and Comparison of CNN Visual Explanations for Histopathology” by Mara Graziani, Thomas Lompech, Henning Müller and Vincent Andrearczyk has been accepted for presentation at the AAAI-21 Workshop on Explainable Agency in Artificial Intelligence.

In this work, we evaluate the alignment of XAI visualisations with cancer diagnostic procedures for breast tissue: do the visualizations highlight specific nuclei types? Visual explanations may induce confirmation bias about CNN decisions.

Quantification of CNN attention on the nuclei types, measured as the IoU (intersection over union) with nuclei-type annotations (+ link to the GitHub repository)
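As a rough illustration of how such a quantification can be computed, the sketch below thresholds a CNN attention map (e.g. a Grad-CAM heatmap) and measures its IoU against a binary mask for each nuclei type. The input names and the 50% relative threshold are illustrative assumptions, not the paper's exact protocol.

```python
# Minimal sketch: IoU between a thresholded attention map and binary
# nuclei-type masks. `heatmap` (2D float array) and `nuclei_masks`
# (dict mapping each nuclei type to a 2D boolean array) are assumed inputs.
import numpy as np

def iou(a, b):
    """Intersection over union of two boolean masks."""
    union = np.logical_or(a, b).sum()
    return np.logical_and(a, b).sum() / union if union else 0.0

def attention_iou_per_type(heatmap, nuclei_masks, rel_threshold=0.5):
    """IoU of the attended region with each nuclei-type mask."""
    attended = heatmap >= rel_threshold * heatmap.max()
    return {ntype: iou(attended, mask) for ntype, mask in nuclei_masks.items()}
```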

Paper published in Scientific Reports – Nature

Our work on “The importance of feature aggregation in radiomics: a head and neck cancer study”, conducted by Pierre Fontaine, Oscar Acosta, Joël Castelli, Renaud De Crevoisier, Henning Müller and Adrien Depeursinge, has been published in Scientific Reports.

We show that visual feature aggregation, often overlooked in radiomics studies, plays an important role in the prediction of disease characteristics.

Influence of the size and localization of the ROI for aggregating the feature maps using the average.
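To make the role of aggregation concrete, here is a minimal sketch of the common case: a voxel-wise feature map (e.g. a texture filter response) is reduced to a single scalar over a region of interest. The array names are illustrative assumptions; the choice of aggregation function and of the ROI extent both change the resulting feature value, which is the kind of influence the paper quantifies.

```python
# Minimal sketch of feature aggregation in radiomics: a voxel-wise feature
# map is summarized over a binary ROI mask before being used as a scalar
# feature in a predictive model. `feature_map` and `roi_mask` are assumed
# to be 3D numpy arrays of the same shape.
import numpy as np

def aggregate(feature_map, roi_mask, how="mean"):
    """Summarize the feature map over the voxels of the ROI."""
    values = feature_map[roi_mask > 0]
    return {"mean": values.mean(), "max": values.max(), "std": values.std()}[how]
```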

Two papers accepted in AIDP2021

We are looking forward to presenting our work at the AIDP2021 workshop at ICPR 2021. The following two papers were accepted for presentation.

  • “Classification of noisy free-text prostate cancer pathology reports using natural language processing”, by Anjani Dhrangadhariya, Sebastian Otalora, Manfredo Atzori and Henning Müller
  • “Semi-supervised learning with a teacher-student paradigm for histopathology classification: a resource to face data heterogeneity and lack of local annotations”, by Niccolò Marini, Sebastian Otalora, Henning Müller and Manfredo Atzori.

A video on our 3D printed prosthetic hand on Swiss national television

The segment “In buone mani” (“In good hands”) was presented in a science show of the Italian-language Swiss Radio and Television (RSI). The show described the ProHand and MeganePro projects, which target the development of robotic hands with advanced technologies such as 3D scanning, additive manufacturing and machine learning.

https://www.rsi.ch/la1/programmi/cultura/il-giardino-di-albert/tutti-i-servizi/In-buone-mani-13427411.html