Article published in npj Digital Medicine

Our paper “Unleashing the potential of digital pathology data by training computer-aided diagnosis models without human annotations”, by Niccolò Marini et al., has been published in npj Digital Medicine and is available in open access.

This work presents an approach that removes the need for human experts to annotate data: an automatic analysis of healthcare reports generates annotations that can be used to train deep learning models. A case study on the classification of colon whole slide images shows how the approach helps exploit the potential of healthcare data produced in hospital workflows.

(a) Input data from the clinical workflow. (b) Image classification pipeline. (c) The textual report pipeline automatically analyzes pathologist reports to identify meaningful concepts to be used as weak labels for the CNN.

Article on a global taxonomy of interpretable AI published in AI Review

Our work on “A global taxonomy of interpretable AI: unifying the terminology for the technical and social sciences”, by Mara Graziani et al., has been published in Artificial Intelligence Review (Springer).

On April 29th, 2021, we organized a debate to bring together researchers from multidisciplinary backgrounds to collaborate on a global definition of interpretability that may be used with high versatility in the documentation of social, cognitive, philosophical, ethical and legal concerns about AI. One of the many interesting outcomes is the definition of interpretable AI that was reached:

“An AI system is interpretable if it is possible to translate its working principles and outcomes in human-understandable language without affecting the validity of the system”.

The article is published and available online in open access.

Article published in npj Digital Medicine

Our article entitled “Unleashing the potential of digital pathology data by training computer-aided diagnosis models without human annotations”, by Niccolò Marini et al., has been published in npj Digital Medicine from Nature Portfolio.

In this work, we present an approach that removes the need for medical experts when building assisting tools for clinical decision support, paving the way for the exploitation of exa-scale sources of medical data collected worldwide from hospital workflows. 

The combination of large amounts of healthcare data with new artificial intelligence technologies is leading to the creation of new systems that assist medical experts during diagnosis.

However, this potential is not fully exploited because of a bottleneck: the need for medical experts to analyze and annotate healthcare data.

This study presents an approach that aims to remove the need for human experts to annotate data: the reports corresponding to healthcare data (e.g., CT scans, MRI, whole slide images) are automatically analyzed to create annotations that can be used to train deep learning models.

The approach involves two components: a natural language processing algorithm, SKET (Semantic Knowledge Extractor Tool), which analyzes the reports and extracts meaningful concepts, and a computer vision algorithm, a convolutional neural network trained to classify medical images using the concepts extracted by SKET as weak labels.

The code for both the computer vision algorithm and the natural language processing algorithm is available in our GitHub repositories.
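For readers curious about the mechanics, here is a toy sketch of the weak-supervision idea (hypothetical concept list and backbone; not the released SKET or CNN implementation): concepts extracted from a report become a multi-label target for the patches of the corresponding whole slide image.

```python
import torch
import torch.nn as nn
from torchvision import models

# Hypothetical concept vocabulary (illustration only, not SKET's actual ontology)
CONCEPTS = ["adenocarcinoma", "high-grade dysplasia", "low-grade dysplasia",
            "hyperplastic polyp", "normal"]

def concepts_to_weak_label(extracted_concepts):
    """Turn the concepts found in a report into a multi-label target vector."""
    target = torch.zeros(len(CONCEPTS))
    for c in extracted_concepts:
        if c in CONCEPTS:
            target[CONCEPTS.index(c)] = 1.0
    return target

# Example CNN classifier over image patches; the weak label applies to the whole case
model = models.resnet18(weights=None, num_classes=len(CONCEPTS))
criterion = nn.BCEWithLogitsLoss()  # multi-label objective
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

def training_step(patch_batch, report_concepts):
    # patch_batch: (B, 3, H, W) tensor of WSI patches from one case
    target = concepts_to_weak_label(report_concepts).expand(patch_batch.size(0), -1)
    logits = model(patch_batch)
    loss = criterion(logits, target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```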

Teaching and Awards at the VISUM 2022 summer school

Henning Müller and Mara Graziani gave lectures, hands-on sessions and mentoring at the VISion Understanding and Machine intelligence (VISUM) 2022 summer school in Porto.

  • Henning gave an excellent lecture on Explainable AI.
  • Under Mara’s mentorship, Laura O’Mahony from the University of Limerick was awarded the best mentorship pitch! Within the very short duration of the summer school, Laura and Mara came up with an interesting small project on using interpretability for model compression.
Mara presenting the on-site hands-on session for the Explainable AI track

Article published in Bioinformatics with shared ImageJ plugin

Our work on automatic template detection for microscopy images, “Steer’n’Detect: Fast 2D Template Detection with Accurate Orientation Estimation”, by V. Uhlmann, Z. Püspöki, A. Depeursinge, M. Unser, D. Sage and J. Fageot, has been accepted for publication in the Bioinformatics journal.

In this work, we introduce Steer’n’Detect, an ImageJ plugin to detect patterns of interest at any orientation with high accuracy from a single template in 2D images.
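For context, a naive baseline (not the steerable approach used by Steer’n’Detect) would sample orientations explicitly and run one normalized cross-correlation per angle, which is exactly the cost the plugin avoids; a rough sketch in Python:

```python
import numpy as np
from scipy.ndimage import rotate
from skimage.feature import match_template

def brute_force_detect(image, template, angle_step=5):
    """Naive baseline: rotate the template and keep the best correlation per pixel.

    Unlike Steer'n'Detect, which estimates orientation in a single steerable pass,
    this sweep costs one full correlation per sampled angle.
    """
    best_score = np.full(image.shape, -np.inf)
    best_angle = np.zeros(image.shape)
    for angle in range(0, 360, angle_step):
        rot = rotate(template, angle, reshape=False, mode="nearest")
        score = match_template(image, rot, pad_input=True)  # same shape as image
        mask = score > best_score
        best_score[mask] = score[mask]
        best_angle[mask] = angle
    return best_score, best_angle
```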

Detection of rod-shaped bacteria with Steer’n’Detect (synthetic image degraded by self-similar Gaussian random field background noise).
Detection of spermatozoan axoneme with Steer’n’Detect (transmission electron microscopy image) (Reused from http://www.cellimagelibrary.org/images/35970). Top left: image crop used as template; bottom left: resulting detector; right: detection results

Paper on H&N segmentation accepted at EMBC 2022

Our paper on “Segmentation and Classification of Head and Neck Nodal Metastases and Primary Tumors in PET/CT”, by V. Andrearczyk, V. Oreiller, M. Jreige, J. Castelli, J. O. Prior and A. Depeursinge, has been accepted for presentation at the IEEE International Engineering in Medicine and Biology Conference (EMBC), held in Glasgow, 11-15 July 2022.

The prediction of cancer characteristics, treatment planning and patient outcome from medical images generally requires tumor delineation. In Head and Neck cancer (H&N), the automatic segmentation and differentiation of primary Gross Tumor Volumes (GTVt) and malignant lymph nodes (GTVn) is a necessary step for large-scale radiomics studies. We developed a bi-modal 3D U-Net model to automatically segment GTVt and GTVn individually in PET/CT images. The model is trained for multi-class, multi-component segmentation on the multi-centric HECKTOR 2020 dataset.
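Schematically, and leaving aside the actual U-Net architecture and training details of the paper, the bi-modal input can be seen as co-registered PET and CT volumes stacked as two channels, with three output classes (background, GTVt, GTVn); a toy PyTorch sketch:

```python
import torch
import torch.nn as nn

class TinyBimodal3DNet(nn.Module):
    """Toy stand-in for a bi-modal 3D network: 2 input channels (PET, CT),
    3 output classes (background, GTVt, GTVn). Not the paper's architecture."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(2, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv3d(16, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv3d(16, 3, kernel_size=1),  # per-voxel class logits
        )

    def forward(self, pet, ct):
        x = torch.cat([pet, ct], dim=1)  # (B, 2, D, H, W): channel-wise fusion
        return self.net(x)               # (B, 3, D, H, W)

# Example: co-registered, resampled PET and CT patches of the same size
pet = torch.randn(1, 1, 64, 64, 64)
ct = torch.randn(1, 1, 64, 64, 64)
logits = TinyBimodal3DNet()(pet, ct)
labels = logits.argmax(dim=1)  # 0 = background, 1 = GTVt, 2 = GTVn (assumed ordering)
```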

Illustrations of 2D PET (SUV scale 0-7 mg/L) and CT slices overlaid with GTVt (blue) and GTVn (red) automatic segmentation. The corresponding DSC values are reported in the captions. The ground truth is in bright color, the prediction in dark color. (a,b) Correctly detected and segmented GTVt and GTVn; (c) one GTVn correctly segmented (right), one largely undersegmented (left); (d) GTVn misclassified as a GTVt. The standard DSC is reported for each case to evaluate the 3D segmentation of GTVt and GTVn.

HECKTOR challenge officially at MICCAI 2022

We are pleased to announce that the HECKTOR (HEad and neCK TumOR segmentation and outcome prediction in PET/CT images) challenge has been renewed for a third edition at MICCAI 2022 (September 18-22, 2022, Singapore).

The challenge was created in 2020 to provide an opportunity for comparing automated algorithms dedicated to the segmentation of head and neck primary tumors in multimodal FDG-PET/CT images. The first edition relied on a dataset of 254 patients from 5 centers and attracted about 20 challengers. In 2021, the challenge was renewed with a larger dataset (325 patients from 6 centers) and the addition of a second task, the prediction of progression-free survival (PFS). This second edition was quite successful, with twice the number of submissions (~40) for the first task and 30 for the second task [2].

This year, challengers are again invited to compete in the same tasks, although the segmentation task now includes the fully automated detection and segmentation of lymph nodes in addition to the primary tumor, and the dataset should contain more than a thousand patients collected from 9 participating centers.

For 2022, the challenge is hosted on the grand-challenge.org platform: https://hecktor.grand-challenge.org/. Registration will open soon.

The full list of MICCAI challenges is now online: https://conferences.miccai.org/2022/en/MICCAI2022-CHALLENGES.html

Paper accepted at the BioNLP workshop at ACL 2022

Our paper “DISTANT-CTO: A Zero Cost, Distantly Supervised Approach to Improve Low-Resource Entity Extraction Using Clinical Trials Literature”, by A. Dhrangadhariya and H. Müller, has been accepted for presentation at the BioNLP workshop at ACL 2022 (Dublin, 22-27 May).

The paper takes strides toward democratizing entity/span-level PICO recognition by automatically creating a large entity-annotated dataset from open-access resources. PICO extraction is a vital but laborious step in writing systematic reviews.

The approach uses gestalt pattern matching and a modified match scoring to flexibly align structured terms from http://clinicaltrials.gov onto unstructured text and select high-confidence match candidates.

The method is extensible and data-centric, which allows constant extension of the dataset and optimization of the automatic annotation method for recall-oriented PICO recognition.
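Gestalt pattern matching corresponds to the Ratcliff/Obershelp similarity available, for instance, in Python’s difflib; a simplified illustration of the alignment idea (hypothetical example, not the paper’s modified match scoring):

```python
from difflib import SequenceMatcher

def best_candidate(term, sentence, threshold=0.85):
    """Slide the structured term over the sentence and keep the window with the
    highest gestalt (Ratcliff/Obershelp) similarity, if it clears the threshold.
    Simplified illustration, not DISTANT-CTO's modified match scoring."""
    tokens = sentence.split()
    n = len(term.split())
    best = (0.0, None)
    for i in range(len(tokens) - n + 1):
        window = " ".join(tokens[i:i + n])
        ratio = SequenceMatcher(None, term.lower(), window.lower()).ratio()
        if ratio > best[0]:
            best = (ratio, (i, i + n, window))
    return best if best[0] >= threshold else (best[0], None)

# Hypothetical example: a ClinicalTrials.gov intervention term vs. report text
print(best_candidate("intravenous ketamine",
                     "Patients received intra-venous ketamine before induction."))
```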

Overview of the proposed DISTANT-CTO method

Article published in Computers in Biology and Medicine

Our work on “Grading diabetic retinopathy and prostate cancer diagnostic images with deep quantum ordinal regression”, by S. Toledo-Cortés, D. H. Useche, H. Müller and F. A. González, has been published in Computers in Biology and Medicine (Elsevier).

A new end-to-end neural probabilistic ordinal regression method is proposed to predict a probability distribution over a range of grades. The model, evaluated on prostate cancer diagnosis and diabetic retinopathy grade estimation, can also quantify its uncertainty.
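As a rough illustration of what such an output enables (invented numbers, not model output): a distribution over ordinal grades can be summarized by an expected grade and by its entropy as an uncertainty score.

```python
import numpy as np

# Invented posterior over DR grades 0-4 for one image (not the model's output)
grades = np.arange(5)
p = np.array([0.05, 0.10, 0.60, 0.20, 0.05])

expected_grade = float(np.sum(grades * p))        # 2.10: continuous point prediction
entropy = float(-np.sum(p * np.log(p + 1e-12)))   # uncertainty of the prediction
predicted_grade = int(np.argmax(p))               # 2: most probable grade
print(expected_grade, entropy, predicted_grade)
```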

Overview of the proposed approach for diabetic retinopathy (DR) grading. An Inception-V3 network was used as feature extractor for the eye fundus image. These features were the input for the regressor model, which yielded a posterior probability distribution over the DR grades.

Winner of the BraTS-Reg challenge

Congratulations to Marek Wodzinski, Niccolò Marini and Henning Müller for winning the BraTS-Reg challenge hosted at ISBI 2022.

The goal of the challenge was to propose the most accurate medical image registration method to align pre-operative to follow-up MRIs. Our team proposed a hybrid approach based on a multilevel deep network combined with iterative instance optimization.
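As a simplified illustration of the instance-optimization component (toy 2D version with an MSE similarity and a smoothness term; not the team’s actual multilevel pipeline or loss), a displacement field can be refined per image pair by plain gradient descent:

```python
import torch
import torch.nn.functional as F

def instance_optimize(moving, fixed, steps=200, lr=0.1, smooth_w=0.1):
    """Refine a dense displacement field for ONE image pair by gradient descent
    (toy 2D version with MSE + smoothness; not the BraTS-Reg winning pipeline)."""
    B, _, H, W = fixed.shape
    disp = torch.zeros(B, 2, H, W, requires_grad=True)  # displacement field
    # identity sampling grid in [-1, 1], as expected by grid_sample
    ys, xs = torch.meshgrid(torch.linspace(-1, 1, H),
                            torch.linspace(-1, 1, W), indexing="ij")
    identity = torch.stack([xs, ys], dim=-1).unsqueeze(0).expand(B, -1, -1, -1)
    opt = torch.optim.Adam([disp], lr=lr)
    for _ in range(steps):
        grid = identity + disp.permute(0, 2, 3, 1)        # (B, H, W, 2)
        warped = F.grid_sample(moving, grid, align_corners=True)
        sim = F.mse_loss(warped, fixed)                   # image similarity term
        smooth = (disp[..., 1:, :] - disp[..., :-1, :]).pow(2).mean() + \
                 (disp[..., :, 1:] - disp[..., :, :-1]).pow(2).mean()
        loss = sim + smooth_w * smooth
        opt.zero_grad()
        loss.backward()
        opt.step()
    return disp.detach()

# Hypothetical pre-operative / follow-up slices
moving, fixed = torch.rand(1, 1, 128, 128), torch.rand(1, 1, 128, 128)
displacement = instance_optimize(moving, fixed)
```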

Paper accepted for presentation at MIDL 2022

Our paper on nucleus segmentation with local rotation invariance has been accepted for poster presentation at the Medical Imaging with Deep Learning (MIDL) conference 2022. As one of the first in-person (hybrid) conferences in our field, MIDL, held in Zürich on 6-8 July 2022, will be a great opportunity to catch up with fellow researchers and discuss our work.

Illustration of the robustness of segmentation predictions with respect to input orientation. The red color map indicates the pixel-wise difference, averaged across the six pairs of 90° rotations.
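A minimal sketch of how such a robustness map can be computed, assuming a 2D segmentation model and a square input (the six pairs come from the four 90° orientations); this is an illustration, not the paper’s evaluation code:

```python
import itertools
import torch

def rotation_robustness_map(model, x):
    """Average pixel-wise difference between predictions obtained under the
    four 90-degree rotations of the input (6 pairs), mapped back to the
    original frame. x: (1, C, H, W) with H == W."""
    preds = []
    with torch.no_grad():
        for k in range(4):
            rotated = torch.rot90(x, k, dims=(2, 3))
            pred = model(rotated)
            preds.append(torch.rot90(pred, -k, dims=(2, 3)))  # rotate prediction back
    diffs = [(a - b).abs() for a, b in itertools.combinations(preds, 2)]
    return torch.stack(diffs).mean(dim=0)  # average over the 6 pairs
```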

Full paper “Multi-Organ Nucleus Segmentation Using a Locally Rotation Invariant Bispectrum U-Net“, V. Oreiller, J. Fageot, V. Andrearczyk, J. O. Prior and A. Depeursinge, available online: https://openreview.net/forum?id=paGzvj2t_x

Article on feature stability published in Nature Scientific Reports

Our work on “Assessing radiomics feature stability with simulated CT acquisitions” has been published in Nature Scientific Reports. The authors K. Flouris, O. Jimenez-del-Toro, C. Aberle, M. Bach, R. Schaer, M. M. Obmann, B. Stieltjes, H. Müller, A. Depeursinge and E. Konukoglu developed and validated a Computed Tomography (CT) simulator and showed that the variability, stability and discriminative power of the radiomics features extracted from the virtual phantom images are similar to those observed in a tandem phantom study. They also demonstrated that the simulator can be utilised to assess radiomics features’ stability and discriminative power.
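For readers unfamiliar with such stability analyses, one simple way to summarize per-feature stability across repeated (simulated) acquisitions is the coefficient of variation; the numbers below are invented and this is not the metric or data used in the paper:

```python
import numpy as np

# Invented example: 5 simulated acquisitions x 3 radiomics features
features = np.array([
    [12.1, 0.82, 340.0],
    [12.4, 0.79, 352.0],
    [11.9, 0.83, 348.0],
    [12.3, 0.80, 335.0],
    [12.0, 0.81, 341.0],
])

# Per-feature coefficient of variation across the repeated acquisitions
cv = features.std(axis=0, ddof=1) / np.abs(features.mean(axis=0))
stable = cv < 0.10  # example stability cut-off, not the study's criterion
print(cv.round(3), stable)
```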

More details in the article available online: https://www.nature.com/articles/s41598-022-08301-1

Annotated regions of interest on the anthropomorphic phantom.

HECKTOR 2021 proceedings available online

The proceedings of the HEad and neCK TumOR segmentation and outcome prediction in PET/CT images (HECKTOR) 2021 challenge, at MICCAI, are now available in a Springer LNCS book.

https://link.springer.com/book/10.1007/978-3-030-98253-9

The participants’ papers describe the various innovative methods, and our overview paper provides a description of the challenge, the data, the methods and the results.

Stay tuned for the next edition of the challenge: HECKTOR 2022!

Paper published in Clinical and Translational Radiation Oncology

In our paper “Cleaning Radiotherapy Contours for Radiomics Studies, is it Worth it? A Head and Neck Cancer Study”, published in ctRO (Elsevier), we study the benefit of delineating tumor contours specifically for radiomics analyses (automatic prognostic prediction of patients with Head and Neck cancer). The contours originating from radiotherapy often include parts of surrounding organs (e.g., trachea, bones), which impacts the extraction of visual features that characterize the tumor.

Example of VOI delineation: Radiotherapy (green), Resegmented (purple), and Dedicated (blue) contours overlaid on a fused FDG-PET/CT image. The blue contour is closer to the true volume of the primary tumor.
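As a hedged illustration of what intensity-based resegmentation could look like (assuming the “Resegmented” contour excludes voxels outside a soft-tissue Hounsfield range; the thresholds and procedure here are illustrative, not the study’s settings):

```python
import numpy as np

def resegment(ct_hu, rt_mask, hu_min=-150, hu_max=180):
    """Drop voxels of the radiotherapy contour whose CT value falls outside a
    soft-tissue range, removing e.g. air (trachea) and bone.
    Illustrative thresholds only, not the study's settings."""
    keep = (ct_hu >= hu_min) & (ct_hu <= hu_max)
    return rt_mask & keep

# Hypothetical volumes: CT in Hounsfield units and a boolean radiotherapy mask
ct_hu = np.random.randint(-1000, 1500, size=(64, 64, 64))
rt_mask = np.zeros((64, 64, 64), dtype=bool)
rt_mask[20:40, 20:40, 20:40] = True
cleaned = resegment(ct_hu, rt_mask)
```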

SwissNeuroRehab, a new model of neurorehabilitation: Flagship project funded by Innosuisse

In 2021, the Swiss innovation agency Innosuisse launched the Flagship funding scheme, aimed at stimulating systemic innovation and transdisciplinary collaboration to solve future challenges relevant to the Swiss economy and society. SwissNeuroRehab is one of the 15 projects supported out of 84 submitted, and is financed by Innosuisse with 11.2 MCHF over 5 years.

As part of a large Swiss consortium working on this project, our group at the University of Applied Sciences of Western Switzerland (HES-SO) will lead the “Data” sub-project.


SwissNeuroRehab aims at developing a novel model of neurorehabilitation along the continuum of care, from the hospital to home. In the first phase, the project will focus on stroke, traumatic brain injury and spinal cord injury. Through the partnership between university hospitals, research centers, neurorehabilitation clinics, therapists and the industry, SwissNeuroRehab will combine the best available approaches for neurorehabilitation with new digital and technological methods to create innovative and efficient therapeutic programs tailored to the individual needs of patients and their families.

More details available here (in French).

Abstracts Accepted at Conferences on Childhood Disability

We will be presenting our work at two conferences: the Swiss Academy of Childhood Disability (SACD) on 20th January 2022, and the European Academy of Childhood Disability (EACD), which will take place on 18-21 May 2022 in Barcelona.

At SACD, our paper “Serious games embedded in virtual reality as a visual rehabilitation tool for individuals with pediatric amblyopia: A protocol for a crossover randomized controlled trial”, by C. Simon-Martinez, B. Backus, B. Dornbos, M.-P. Antoniou, M. Kropp, G. Thumann, W. Bouthour, H. Steffen and P. J. Matusz, received the 2nd prize in the category of study protocols.


Award ceremony at SACD: 2nd prize in the category of study protocols

At EACD, we will present our work on “The use of DeepLabCut to detect and quantify mirror movements in children with unilateral cerebral palsy”, by B. Berclaz, H. Haberfehlner, K. Klingels, H. Feys, P. Meyns, H. Müller and C. Simon-Martinez.

At EACD, Cristina Simon-Martinez is also a co-author of 2 other abstracts, one with KU Leuven and one with the University Hospital in Bern, that were accepted as oral presentations! Meet our team there to discuss all this amazing work.