The satellite event of HECKTOR (HEad and neCK TumOR segmentation and outcome prediction) at MICCAI 2022 was a great success in terms of participation and quality.
Congratulations to all participants, and particularly to the winners:
Task 1 (segmentation) winner: Andriy Myronenko et al., team NVAUTO
Task 2 (outcome prediction) winner: Louis Rebaud et al., team LITO
Submissions have now re-opened on grand-challenge; we are looking forward to new submissions.
Find out more about the HECKTOR challenge in this amazing post.
The Semantic Knowledge Extractor Tool (SKET) is an unsupervised hybrid system that combines rule-based techniques with pre-trained machine learning models to extract key pathological concepts from diagnostic reports. SKET is a viable solution to reduce pathologists’ workload and can be used as a first, inexpensive solution to bootstrap supervised models in the absence of manual annotations.
SKET architecture.
SKET eXplained (SKET X) is a web-based system that supports pathologists and domain experts in the visual understanding of SKET predictions. With SKET X, parameters and rules can be refined over time, improving the system’s effectiveness and increasing users’ trust and confidence.
SKET X dashboard providing information about the executed SKET pipelines
Our paper on Unleashing the potential of digital pathology data by training computer-aided diagnosis models without human annotations, by Niccolò Marini et al., has been published in npj Digital Medicine and is available in open access.
This work presents an approach that removes the need for human experts to annotate data, using an automatic analysis of healthcare reports to create annotations that can be used to train deep learning models. A case study on the classification of colon whole slide images shows how the approach helps exploit the potential of healthcare data from hospital workflows.
a Input data from the clinical workflow. b Image classification pipeline. c The textual report pipeline automatically analyzes pathologists’ reports to identify meaningful concepts to be used as weak labels for the CNN.
On April 29th, 2021, we organized a debate to bring together researchers from multidisciplinary backgrounds to collaborate on a global definition of interpretability that may be used with high versatility in the documentation of social, cognitive, philosophical, ethical and legal concerns about AI. One of the many interesting outcomes is the definition of interpretable AI that was reached:
“An AI system is interpretable if it is possible to translate its working principles and outcomes in human-understandable language without affecting the validity of the system”.
The article is published and available online in open access.
Prof. Henning Müller has been named one of the ten digital shapers of Switzerland in the medical field. This nomination recognizes his and his team’s contribution as eMedics, i.e. people who use digital transformation to enhance different aspects of well-being, health and medicine.
Congratulations to the other 99 Digital Shapers for driving digital innovation and change!
In this work, we present an approach that removes the need for medical experts when building assisting tools for clinical decision support, paving the way for the exploitation of exa-scale sources of medical data collected worldwide from hospital workflows.
The combination of large amounts of healthcare data with new artificial intelligence technologies is leading to the creation of new systems that assist medical experts during diagnosis.
However, this potential is not fully exploited because of a bottleneck: the need for medical experts to analyze and annotate healthcare data.
This study presents an approach that aims to remove the need for human experts to annotate data: the reports corresponding to healthcare data (e.g. CT scans, MRIs, whole slide images) are automatically analyzed to create annotations that can be used to train deep learning models.
The approach involves two components: a natural language processing algorithm, called SKET (Semantic Knowledge Extractor Tool) that analyzes the reports and extracts meaningful concepts, and a computer vision algorithm, a convolutional neural network trained to classify medical images using the concepts extracted from SKET as weak labels.
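The weak-labeling idea can be illustrated with a minimal sketch. The concept vocabulary and the keyword lookup below are hypothetical stand-ins for SKET's actual rule-based and machine-learning extraction; they only show how report-derived concepts could become multi-label targets for a CNN.

```python
import numpy as np

# Hypothetical concept vocabulary; SKET's real concepts for the colon use case
# are defined in the paper and not reproduced here.
CONCEPTS = ["adenocarcinoma", "dysplasia", "normal tissue"]

def report_to_weak_labels(report: str) -> np.ndarray:
    """Naive stand-in for SKET: mark a concept present if it appears verbatim
    in the diagnostic report. Returns a binary weak-label vector."""
    text = report.lower()
    return np.array([float(c in text) for c in CONCEPTS])

# These weak labels would then supervise a CNN with, e.g., a per-concept
# binary cross-entropy loss (one sigmoid output per concept).
labels = report_to_weak_labels("Biopsy shows adenocarcinoma with dysplasia.")
```

In the actual pipeline, SKET's extraction is far more robust than substring matching, but the interface is the same: reports in, concept vectors out, no manual annotation required.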
Henning Müller and Mara Graziani gave lectures, hands-on and mentoring at the VISion Understanding and Machine intelligence – VISUM 2022 summer school, in Porto.
Henning gave an excellent lecture on Explainable AI.
Under Mara’s mentorship, Laura O’Mahony from the University of Limerick was awarded the best mentorship pitch! Within the very short duration of the summer school, Laura and Mara came up with an interesting small project on using interpretability for model compression.
Mara presenting the on-site hands-on session for the Explainable AI track.
Last week, we met at the Solalex hut in Gryon to enjoy a team building day, what we call Green Day. We invited our colleagues from the TranslationalML Laboratory (CHUV/UNIL) led by Jonas Richiardi. We worked together on strengthening our collaboration on Machine Learning and Medical Imaging projects, as well as identifying challenges in science, communication and research.
In this work, we introduce Steer’n’Detect, an ImageJ plugin to detect patterns of interest at any orientation with high accuracy from a single template in 2D images.
Detection of rod-shaped bacteria with Steer’n’Detect (synthetic image degraded by self-similar Gaussian random field background noise).
Detection of spermatozoan axoneme with Steer’n’Detect (transmission electron microscopy image, reused from http://www.cellimagelibrary.org/images/35970). Top left: image crop used as template; bottom left: resulting detector; right: detection results.
Our paper on “Segmentation and Classification of Head and Neck Nodal Metastases and Primary Tumors in PET/CT”, by V. Andrearczyk, V. Oreiller, M. Jreige, J.Castelli, J. O. Prior and A. Depeursinge has been accepted for presentation at the IEEE International Engineering in Medicine and Biology Conference (EMBC), held in Glasgow, 11-15 July 2022.
The prediction of cancer characteristics, treatment planning and patient outcome from medical images generally requires tumor delineation. In Head and Neck cancer (H&N), the automatic segmentation and differentiation of primary Gross Tumor Volumes (GTVt) and malignant lymph nodes (GTVn) is a necessary step for large-scale radiomics studies. We developed a bi-modal 3D U-Net model to automatically segment GTVt and GTVn individually in PET/CT images. The model is trained for multi-class, multi-component segmentation on the multi-centric HECKTOR 2020 dataset.
Illustrations of 2D PET (SUV scale 0-7 mg/L) and CT slices overlaid with GTVt (blue) and GTVn (red) automatic segmentations. The ground truth is in bright color, the prediction in dark color. Standard DSC, evaluating the 3D segmentation of GTVt and GTVn, is reported for each case. (a,b) Correctly detected and segmented GTVt and GTVn; (c) one GTVn correctly segmented (right), one largely undersegmented (left); (d) GTVn misclassified as a GTVt.
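The DSC reported in the figure is the standard Dice–Sørensen coefficient. For binary masks it reduces to twice the overlap divided by the total foreground size, as in this short sketch (the example arrays are illustrative, not data from the paper):

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice-Sørensen coefficient between two binary masks (1.0 = perfect overlap).
    Works for 2D or 3D masks alike, since everything is reduced with sums."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    denom = pred.sum() + truth.sum()
    # Convention: two empty masks are considered a perfect match.
    return 2.0 * intersection / denom if denom > 0 else 1.0

a = np.array([[1, 1, 0], [0, 1, 0]])
b = np.array([[1, 0, 0], [0, 1, 1]])
print(dice_coefficient(a, b))  # 2*2 / (3+3) ≈ 0.667
```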
We are pleased to announce that the HECKTOR (HEad and neCK TumOR segmentation and outcome prediction in PET/CT images) challenge has been renewed for a third edition at MICCAI 2022 (September 18-22, 2022, Singapore).
This challenge was created in 2020 to create an opportunity for the comparison of automated algorithms dedicated to the segmentation of head and neck primary tumors in FDG PET/CT multimodal images. This first edition relied on a dataset of 254 patients from 5 centers and attracted about 20 challengers. In 2021, the challenge was renewed with a larger dataset (325 patients from 6 centers) with the addition of a second task, the prediction of progression-free survival (PFS). This second edition was quite successful, with twice the number of submissions (~40) for the first task and 30 for the second task [2].
This year, challengers are again invited to compete in the same tasks, although the segmentation one now includes the fully automated detection and segmentation of lymph nodes in addition to the primary tumor, and the dataset should contain more than a thousand patients collected from 9 participating centers.
Our paper “DISTANT-CTO: A Zero Cost, Distantly Supervised Approach to Improve Low-Resource Entity Extraction Using Clinical Trials Literature”, by A. Dhrangadhariya and H. Müller is accepted for presentation at the BioNLP workshop at ACL2022 (Dublin, 22-27 May).
PICO extraction is a vital but tedious step in writing systematic reviews. The paper takes strides toward democratizing entity/span-level PICO recognition by automatically creating a large entity-annotated dataset using open-access resources.
The approach uses gestalt pattern matching and a modified match-scoring scheme to flexibly align structured terms from http://clinicaltrials.gov onto unstructured text and select high-confidence match candidates.
The method is extensible and data-centric, which allows for constant extension of the dataset and for optimization of the automatic annotation method for recall-oriented PICO recognition.
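Gestalt pattern matching (the Ratcliff/Obershelp algorithm) is what Python's `difflib.SequenceMatcher` implements, so the alignment idea can be sketched with the standard library. The sliding-window search and the 0.8 threshold below are illustrative choices; the paper's modified match scoring is not reproduced here.

```python
from difflib import SequenceMatcher

def best_alignment(term: str, sentence: str, threshold: float = 0.8):
    """Slide a structured registry term over a sentence and keep the window
    with the highest gestalt similarity ratio. Returns (match, score), with
    match = None when no window clears the threshold."""
    term = term.lower()
    tokens = sentence.lower().split()
    width = len(term.split())
    best_window, best_score = None, 0.0
    for i in range(len(tokens) - width + 1):
        window = " ".join(tokens[i:i + width])
        score = SequenceMatcher(None, term, window).ratio()
        if score > best_score:
            best_window, best_score = window, score
    return (best_window, best_score) if best_score >= threshold else (None, best_score)

# Fuzzy matching tolerates spelling variants between registry and text.
span, score = best_alignment("metformin", "patients received metformine daily")
```

This tolerance to surface variation is what lets structured ClinicalTrials.gov fields act as distant supervision over free text.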
A new end-to-end neural probabilistic ordinal regression method is proposed to predict a probability distribution over a range of grades. The model, evaluated on prostate cancer diagnosis and diabetic retinopathy grade estimation, can also quantify its uncertainty.
Overview of the proposed approach for diabetic retinopathy (DR) grading. An Inception-V3 network was used as feature extractor for the eye fundus image. These features were the input for the regressor model, which yielded a posterior probability distribution over the DR grades.
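One common way to turn a network output into a posterior over ordered grades is a unimodal parametric head, for instance a binomial distribution whose success probability comes from a sigmoid output. The sketch below is a generic example of this family of ordinal-regression heads, not necessarily the method used in the paper:

```python
import numpy as np
from math import comb

def ordinal_posterior(p: float, n_grades: int = 5) -> np.ndarray:
    """Map a scalar network output p in (0, 1) to a unimodal
    Binomial(n_grades - 1, p) distribution over grades 0..n_grades-1.
    The pmf sums to 1 by construction, so it is a valid posterior."""
    n = n_grades - 1
    return np.array([comb(n, k) * p**k * (1 - p)**(n - k)
                     for k in range(n_grades)])

# A low p concentrates probability mass on the mild end of the grade scale,
# and the spread of the distribution conveys the model's uncertainty.
probs = ordinal_posterior(0.3)
```

Because the distribution is unimodal by construction, nearby grades always receive similar probability, which respects the ordinal structure of clinical grading scales.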
Congratulations to Marek Wodzinski, Niccolò Marini and Henning Müller for winning the BraTS-Reg challenge hosted at ISBI 2022.
The goal of the challenge was to propose the most accurate medical image registration method to align pre-operative to follow-up MRIs. Our team proposed a hybrid approach based on a multilevel deep network combined with iterative instance optimization.
Our paper on nucleus segmentation with local rotation invariance has been accepted for poster presentation at the Medical Imaging with Deep Learning (MIDL) conference 2022. As one of the first in-person conferences (hybrid) in our field, MIDL, held in Zürich, 6 – 8 July 2022, will be a great opportunity to catch up with researchers in our field and discuss our work.
Illustration of the robustness of segmentation predictions with respect to input orientation. The red color map indicates the pixel-wise difference averaged across the six pairs of 90° rotations.
Full paper “Multi-Organ Nucleus Segmentation Using a Locally Rotation Invariant Bispectrum U-Net“, V. Oreiller, J. Fageot, V. Andrearczyk, J. O. Prior and A. Depeursinge, available online: https://openreview.net/forum?id=paGzvj2t_x
Our work on Assessing radiomics feature stability with simulated CT acquisitions has been published in Scientific Reports. The authors K. Flouris, O. Jimenez-del-Toro, C. Aberle, M. Bach, R. Schaer, M. M. Obmann, B. Stieltjes, H. Müller, A. Depeursinge and E. Konukoglu developed and validated a Computed Tomography (CT) simulator and showed that the variability, stability and discriminative power of the radiomics features extracted from the virtual phantom images are similar to those observed in a tandem phantom study. The simulator can thus be utilised to assess the stability and discriminative power of radiomics features.
The proceedings of the HEad and neCK TumOR segmentation and outcome prediction in PET/CT images (HECKTOR) 2021 challenge, at MICCAI, are now available in a Springer LNCS book.
The participants’ papers describe the various innovative methods, and our overview paper describes the challenge, the data, the methods and the results.
Stay tuned for the next edition of the challenge: HECKTOR 2022!
A randomized clinical trial (RCT), “Amblyopia and Stereoptic Games for Vision”, developed by Dr. Paul Matusz through his SNSF Ambizione career grant, has been discussed in Le Figaro.
In our paper “Cleaning Radiotherapy Contours for Radiomics Studies, is it Worth it? A Head and Neck Cancer Study“, published in ctRO (Elsevier), we study the benefit of delineating tumor contours specifically for radiomics analyses (automatic prognostic prediction of patients with Head and Neck cancer). The contours originating from radiotherapy often include parts of surrounding organs (e.g. trachea, bones) which impact the extraction of visual features that characterize the tumor.
Example of VOI delineation: Radiotherapy (green), Resegmented (purple), and Dedicated (blue) overlaid on a fused FDG-PET/CT image. The blue contour is closer to the true volume of the primary tumor.