HECKTOR test data release

The HECKTOR challenge at MICCAI 2021 is in full swing, with many active participants! The test data has just been released, and the submission website will be open from Sept. 1 to Sept. 10. We are very much looking forward to your participation!

More info here: https://www.aicrowd.com/challenges/miccai-2021-hecktor

We are very grateful to the HECKTOR 2021 sponsors:

  • Task 1 (segmentation) is sponsored by Siemens Healthineers Switzerland with a prize of 500 €.
  • Task 2 (prediction of PFS) is sponsored by Aquilab with a prize of 500 €.
  • Task 3 (prediction of PFS using ground-truth delineations of tumors) is sponsored by Bioemtech with a prize of 500 €.

Paper accepted at MICCAI 2021

Our paper “Sharpening Local Interpretable Model-agnostic Explanations for Histopathology: Improved Understandability and Reliability” by Mara Graziani, Iam Palatnik De Sousa et al. will be presented at MICCAI 2021, Sept. 27 to Oct. 1, Strasbourg, France.

In this work, we improve the application of LIME to histopathology images by leveraging nuclei annotations. The resulting visualizations reveal the sharp, focused attention of the deep classifier on the neoplastic nuclei in the dataset, an observation in line with clinical decision making.
Compared to standard LIME, our explanations show improved understandability for domain-experts, report higher stability and pass the sanity checks of consistency to data or initialization changes and sensitivity to network parameters.

Paper accepted for presentation at MIDL 2021

Our work on “Common limitations of performance metrics in biomedical image analysis” has been accepted for presentation at MIDL 2021. Conducted by a large international consortium including the BIAS initiative, the MICCAI Society’s challenge working group and the benchmarking working group of the MONAI framework, this initiative aims at generating best-practice recommendations with respect to metrics in medical image analysis.

Full paper available on arXiv: https://arxiv.org/abs/2104.05642

Paper accepted in IEEE Transactions on Image Processing

Check out our steerable detectors if you have lost your (biological) patterns in noisy backgrounds 🙂 https://ieeexplore.ieee.org/document/9406375

Learning a template from a single shot allows finding all of its occurrences, whatever their orientation, with high recall and specificity!

And it is all packaged in an open-source ImageJ plugin!

Plus open-source code on GitHub.

Joint work with Julien Fageot, Virginie Uhlmann, Zsuzsanna Püspöki, Benjamin Beck and Michael Unser.
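As a rough intuition for what orientation-invariant detection must achieve, here is a naive brute-force baseline in Python: correlate the scene with the template at many sampled angles and keep the best response. The steerable detectors in the paper avoid exactly this sweep; the pattern, noise level and angles below are invented for illustration.

```python
import numpy as np
from scipy.ndimage import rotate
from scipy.signal import fftconvolve

rng = np.random.default_rng(0)

# A small asymmetric template (an "L"-shaped pattern) to detect.
template = np.zeros((15, 15))
template[3:12, 3:6] = 1.0
template[9:12, 6:12] = 1.0

# Scene: the template rotated by 40 degrees, pasted into noise.
scene = 0.3 * rng.normal(size=(100, 100))
patch = rotate(template, 40, reshape=False)
scene[50:65, 30:45] += patch

# Brute-force sweep: cross-correlate with the template at each sampled
# angle and keep the strongest response over all angles and positions.
best = None
for angle in range(0, 360, 10):
    t = rotate(template, angle, reshape=False)
    t = t - t.mean()  # zero-mean template for a matched-filter response
    resp = fftconvolve(scene, t[::-1, ::-1], mode='same')  # = correlation
    peak = resp.max()
    if best is None or peak > best[0]:
        best = (peak, angle, np.unravel_index(resp.argmax(), resp.shape))

peak, angle, (row, col) = best
print(angle, row, col)  # detected orientation and location
```

A steerable formulation replaces the explicit rotation loop with a small set of basis filters whose responses can be combined analytically for any angle, which is what makes single-shot, orientation-agnostic detection tractable.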

HECKTOR 2021 at MICCAI starting soon

The second edition of the HECKTOR challenge will take place at MICCAI 2021, tackling Head & Neck tumor analysis in PET-CT images. Visit the challenge page for the full description: https://www.aicrowd.com/challenges/miccai-2021-hecktor

In addition to last year’s challenge, we will include more data for the segmentation task as well as a new task for the automatic prediction of patient outcome.

Evaluate your cutting-edge algorithm on 300 cases with high-quality annotations from six centers.


  • Release date of the training cases: June 1, 2021
  • Release date of the test cases: Aug. 1, 2021
  • Submission dates: open Sept. 1, 2021; close Sept. 10, 2021
  • Release date of the results: Sept. 15, 2021
  • Associated workshop days: Sept. 27, 2021 or Oct. 1, 2021

Paper on scale invariance in deep learning accepted in MAKE journal

Our work “On the Scale Invariance in State of the Art CNNs Trained on ImageNet”, by Mara Graziani, Thomas Lompech, Henning Müller, Adrien Depeursinge and Vincent Andrearczyk, was published with open-access rights in a special issue of the Machine Learning and Knowledge Extraction (MAKE) journal: https://www.mdpi.com/2504-4990/3/2/19#abstractc

By using our tool Regression Concept Vectors (pip install rcvtool), we modeled information about scale within intermediate CNN layers. Scale covariance peaks at deep layers, and invariance is learned only in the final layers.
Based on this, we designed a pruning strategy that preserves scale-covariant features. This yields better transfer-learning results for medical tasks where scale is discriminative, for example, distinguishing the magnification level of microscope images of tissue biopsies.
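The core idea of a regression concept vector, fitting a linear model from intermediate activations to a continuous concept such as scale, can be sketched on synthetic data (the features and scale values below are invented for illustration; this is not the rcvtool API):

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)

# Toy stand-in for intermediate CNN activations: 200 images, 64 features,
# each image tagged with a hypothetical continuous "scale" value.
scales = rng.uniform(1.0, 4.0, size=200)
activations = rng.normal(size=(200, 64))
# Make a few features covary with scale, mimicking a scale-covariant layer.
activations[:, :8] += scales[:, None]

# The regression concept vector is the direction in feature space that
# best predicts the concept (here, scale), fitted by least squares.
reg = LinearRegression().fit(activations, scales)
r2 = reg.score(activations, scales)  # high R^2 -> scale is linearly encoded
concept_vector = reg.coef_ / np.linalg.norm(reg.coef_)

print(f"R^2 of scale regression: {r2:.2f}")
```

Repeating this fit at every layer gives a per-layer measure of how strongly the concept is encoded, which is the kind of profile the covariance/invariance analysis above is based on.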

HECKTOR 2021 challenge accepted at MICCAI

The second edition of the HECKTOR challenge was accepted at MICCAI 2021. This challenge is designed for researchers in medical imaging to compare their algorithms for the automatic segmentation of tumors in Head and Neck cancer. In addition to last year’s challenge, we will include more data for the segmentation task as well as a new task for the automatic prediction of patient outcome.


  • Release date of the training cases: June 1, 2021
  • Release date of the test cases: Aug. 1, 2021
  • Submission dates: open Sept. 1, 2021; close Sept. 10, 2021
  • Release date of the results: Sept. 15, 2021
  • Associated workshop days: Sept. 27, 2021 or Oct. 1, 2021

More information will come soon!

Start of EU project BIGPICTURE

The kickoff meeting of BIGPICTURE took place this week. BIGPICTURE is an EU project gathering about 50 partners, including hospitals, research centers and pharmaceutical companies, with the aim of creating the largest database of histopathology images for computer-assisted cancer diagnosis and treatment planning.

Manfredo Atzori, senior researcher, and Henning Müller, professor at the Institute of Information Systems, HES-SO

Journal paper accepted in Developmental Cognitive Neuroscience

Our paper on “The development of attentional control mechanisms in multisensory environments” has been accepted for publication in Developmental Cognitive Neuroscience, Elsevier.

This work, led by Nora Turoman, Pawel J. Matusz and their colleagues, used advanced multivariate EEG analyses to understand how children develop attentional control in real-world-like settings, and when the different processes that underlie this control reach an adult-like state.

Workshop on Interpretable AI for Digital Pathology at AMLD 2021

Deep learning models may hide inherent risks: codification of biases, weak accountability and little transparency in decision-making.

Our workshop at AMLD2021 (Building Interpretable AI for Digital Pathology) offers an all-round discussion on building interpretable AI for digital pathology.

Join us to hear about visualization methods, concept attribution and interpretable graph-networks.

Invited talk on medical imaging interpretability

We hosted Iam Palatnik (Pontifical Catholic University of Rio de Janeiro, Brazil) at MedGIFT for an invited talk on interpretability in medical imaging. Iam shared challenges and solutions for the Local Interpretable Model Agnostic Explanations (LIME) method. The video of his presentation is available online.

Abstract: An application of Local Interpretable Model Agnostic Explanations (LIME) is described for two case studies: metastasis and malaria classification. Some of the key challenges of using LIME for this purpose, most notably the instability of explanations, are discussed, as well as some potential solutions: EvEx, a genetic-algorithm-based approach where explanations are evolved as the average of a Pareto front, and Squaregrid, a parameterless rough approximation. The results suggest that EvEx finds more consistent explanations for regions of high explanatory weight, and that Squaregrid could be a viable way to reduce the need for segmentation-parameter fine-tuning.
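The perturb-and-fit core of LIME that this line of work builds on can be sketched in a few lines of Python. Everything below is a toy stand-in: the "classifier" and the segment layout are invented for illustration, and a real pipeline would use superpixels of an actual image.

```python
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)

# Toy image split into 6 "superpixels"; a hypothetical classifier whose
# score depends only on segments 1 and 4 (stand-in for a trained model).
n_segments = 6

def classifier_score(mask):
    # mask[i] == 1 keeps segment i visible, 0 greys it out.
    return 0.8 * mask[1] + 0.5 * mask[4] + 0.05 * rng.normal()

# LIME core loop: sample binary perturbations, score each perturbed
# input, weight samples by proximity to the original (all-ones) mask,
# then fit a weighted linear surrogate model.
masks = rng.integers(0, 2, size=(500, n_segments))
scores = np.array([classifier_score(m) for m in masks])
distances = (n_segments - masks.sum(axis=1)) / n_segments
weights = np.exp(-(distances ** 2) / 0.25)  # exponential proximity kernel

surrogate = Ridge(alpha=1.0).fit(masks, scores, sample_weight=weights)
importance = surrogate.coef_  # per-segment explanation weights

print(np.argsort(importance)[::-1][:2])  # the two most important segments
```

The instability discussed in the talk enters through the random sampling of perturbations and the segmentation parameters, which is exactly what EvEx and Squaregrid aim to mitigate.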

Journal paper accepted in Computing and Informatics

Our paper on “Breast Histopathology with High-Performance Computing and Deep Learning” (M. Graziani et al.) has been accepted for publication in Computing and Informatics, in the special issue on Providing Computing Solutions for Exascale Challenges.

In this work, we present our modular pipeline for detecting tumorous regions in digital specimens of breast lymph nodes with deep learning models. We evaluate the challenges and benefits of training models on high-performance and cloud computing infrastructures with millions of images.

Overview of the proposed CamNet software

Invited talk of Mara Graziani on Deep Learning Interpretability at CIBM

Our PhD student Mara Graziani discussed the topic of defining machine learning interpretability at CIBM (Center for Biomedical Imaging, Switzerland). She also presented our latest applications to the histopathology domain. In particular, she covered our recent work on the “Evaluation and Comparison of CNN Visual Explanations for Histopathology”. She then explained how interpretability can be used in a proactive way to improve model performance. You can watch her talk online at this link: https://www.youtube.com/watch?v=7hs21U-3hgk&feature=youtu.be

Below, the information about the presentation:

Title: A myth-busting attempt for DL interpretability: discussing taxonomies, methodologies and applications to medical imaging.


Deep Learning (DL) models report almost perfect accuracy on some medical tasks, yet this performance seems to plunge in real-world practice [1]. Having started in 2009 with the generation of deep feature visualizations [2, 3], the field of interpretability has grown and developed over the years, with the intent of understanding why such failures happen and of discovering hidden and erroneous behaviors. Several interpretability techniques were then developed to address the fundamentally incomplete problem of evaluating DL models on task performance alone [4].

While defining the key terms used in the field, I will try to bust some myths on DL interpretability: are explainable and interpretable the same thing? Is a “transparent” model an “interpretable” model? Moreover, in the context of medical imaging applications, I will describe the risk of confirmation bias and present our work on evaluating the reliability of interpretability methods. Finally, I will bring examples from our previous works on how interpretable AI can be used to improve model performance and reduce the distance between the clinical and the DL ontologies.

[1] Yune, S., et al. “Real-world performance of deep-learning-based automated detection system for intracranial hemorrhage.” 2018 SIIM Conference on Machine Intelligence in Medical Imaging: San Francisco (2018).

[2] Erhan, D., et al. “Visualizing Higher-Layer Features of a Deep Network.” (2009).

[3] Zeiler, Matthew D., and Rob Fergus. “Visualizing and understanding convolutional networks.” European conference on computer vision. Springer, Cham, 2014.

[4] Doshi-Velez, Finale, and Been Kim. “Towards A Rigorous Science of Interpretable Machine Learning.” stat 1050 (2017): 2.

Two papers accepted at ECIR 2021

Two papers have been accepted for presentation at the 43rd European Conference on Information Retrieval (ECIR 2021), which will take place online, March 28 – April 1, 2021.

The first paper, “The 2021 ImageCLEF Benchmark: Multimedia Retrieval in Medical, Nature, Internet and Social Media Applications”, by Bogdan Ionescu et al., presents an overview of the upcoming ImageCLEF 2021 campaign and is supported by the AI4Media project.

The second paper, “LifeCLEF 2021 teaser: Biodiversity Identification and Prediction Challenges”, by Hervé Goëau et al., presents the different tasks of the LifeCLEF 2021 evaluation campaign, including PlantCLEF, BirdCLEF and GeoLifeCLEF.