Breaking down a deep learning model into interpretable units allows us to better understand how models store representations. However, the occurrence of polysemantic neurons, or neurons that respond to multiple unrelated features, makes interpreting individual neurons challenging. This has led to the search for meaningful directions, known as concept vectors, in activation space instead of looking at individual neurons.
In this work, we propose an interpretability method to disentangle polysemantic neurons into concept vectors: linear combinations of neurons that encapsulate distinct features. We found that monosemantic regions exist in activation space and that features are not axis-aligned. Our results suggest that exploring directions, instead of individual neurons, may lead us toward coherent fundamental units. We hope this work helps bridge the gap between understanding the fundamental units of models, an important goal of mechanistic interpretability, and concept discovery.
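The core idea of searching for meaningful directions rather than individual neurons can be illustrated with a small toy example. This is an illustrative sketch only (using PCA via SVD, not the method proposed in the paper): two latent features each load on many neurons, so no single neuron axis is monosemantic, yet two directions in activation space capture them.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy activation matrix: 200 samples x 16 neurons. Two latent "features"
# each load on many neurons, so individual neurons look polysemantic.
features = rng.random((200, 2))     # latent feature intensities
mixing = rng.random((2, 16))        # feature -> neuron loadings
activations = features @ mixing     # observed neuron activations

# Candidate concept vectors as directions in activation space (here:
# principal components via SVD). Each row of Vt is a linear combination
# of neurons rather than a single axis-aligned neuron.
centered = activations - activations.mean(axis=0)
U, S, Vt = np.linalg.svd(centered, full_matrices=False)

# The two ground-truth features span a 2-D subspace, so the first two
# directions capture essentially all of the variance.
explained = (S**2) / (S**2).sum()
print(explained[:2].sum())  # close to 1.0
```

In practice, methods for disentangling polysemanticity use richer decompositions than PCA, but the principle is the same: the interpretable unit is a direction, not a neuron.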
QuantImage v2 (QI2) is an open-source web-based platform for no-code clinical radiomics research developed in collaboration with HES-SO and CHUV. It was developed with the aim to empower physicians to play a leading role in clinical radiomics research.
To test and validate novel CT techniques, such as texture analysis in radiomics, repeat measurements are required. In this work, we analyze in detail the quantitative characteristics of an iodine-ink-based 3D-printed phantom. Design considerations and the manufacturing process are described, and potential pitfalls when using such phantoms for radiomics testing are presented.
The Springer LNCS proceedings of the Third Challenge, HEad and neCK TumOR segmentation and outcome prediction in PET/CT images (HECKTOR) 2022, Held in Conjunction with MICCAI 2022, are now published and available online.
The editors V. Andrearczyk, V. Oreiller, M. Hatt, and A. Depeursinge gathered:
23 papers from participants reporting their methods on the tasks of tumor segmentation and outcome prediction.
The first AI-Cafe of 2023 will put the ExaMode project in the spotlight on February 8th, 2023, 14-15h CEST.
Prof. Henning Müller will present “ExaMode: a practical approach towards using machine learning in histopathology”. The ExaMode project is funded by the European Commission to work on scalable solutions for AI in histopathology. The project thus follows a practical approach: defining use cases, collecting multimodal data from hospitals and then developing decision support tools. Aspects such as learning from limited data and multimodal learning will be addressed, as well as aspects of explainability when building systems that are integrated into the clinical workflow.
In this work, we present a method to improve the efficiency of color augmentation methods by increasing the reliability of the augmented samples used for training computational pathology models. During CNN training, a database including over 2 million H&E color variations collected from private and public datasets is used as a reference to discard augmented data with color distributions that do not correspond to realistic data.
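The filtering step can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: the reference histogram, the L1 distance, and the threshold value are all assumptions standing in for the real database of over 2 million H&E color variations.

```python
import numpy as np

def rgb_histogram(img, bins=8):
    """Normalized joint RGB histogram of a uint8 H x W x 3 image."""
    hist, _ = np.histogramdd(img.reshape(-1, 3),
                             bins=(bins, bins, bins),
                             range=((0, 256),) * 3)
    return hist.ravel() / hist.sum()

def is_realistic(augmented, reference_hist, threshold=0.5):
    """Keep an augmented patch only if its color distribution stays close
    to the reference distribution (here: L1 histogram distance)."""
    distance = np.abs(rgb_histogram(augmented) - reference_hist).sum()
    return distance <= threshold

# Reference built from toy "realistic" patches; in the paper this role is
# played by the large database of H&E color variations.
rng = np.random.default_rng(0)
realistic = rng.integers(100, 180, size=(50, 32, 32, 3), dtype=np.uint8)
ref_hist = np.mean([rgb_histogram(p) for p in realistic], axis=0)

ok_patch = rng.integers(100, 180, size=(32, 32, 3), dtype=np.uint8)
bad_patch = rng.integers(0, 40, size=(32, 32, 3), dtype=np.uint8)
print(is_realistic(ok_patch, ref_hist))   # True: colors match the reference
print(is_realistic(bad_patch, ref_hist))  # False: unrealistic color range
```

During training, augmented samples failing the check would simply be discarded and regenerated, so the model only ever sees color variations that resemble real stains.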
This Collection presents a series of articles describing annotated datasets of medical images and video. Data are presented without hypotheses or significant analyses, supporting uses such as benchmarking or improving machine learning algorithms. All medical specialities are considered, and data can be derived from study participants, tissue samples, electronic health records (EHRs) or other sources. All described datasets are assessed to ensure their open availability (where possible) or secure access controls (where required) via Scientific Data’s editorial and peer review processes.
Our work on “Mapping of the Upper Limb Work-Space: Benchmarking Four Wrist Smoothness Metrics”, led by Alessandro Scano, Cristina Brambilla, Henning Müller and Manfredo Atzori, has been published in the Applied Sciences journal, MDPI.
Smoothness-based models effectively describe skilled motion planning. Although smoothness is often used as a measure of motor control and to evaluate clinical pathologies, a smoothness map has so far not been available for the whole workspace of the upper limb. In this work, we provide a map of the upper limb workspace comparing four smoothness metrics: the normalized jerk, the speed metric, the spectral arc length, and the number of speed peaks.
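Three of these metrics can be sketched on a synthetic speed profile. This is an illustrative sketch using common textbook formulations (dimensionless jerk, a simplified spectral arc length, strict local-maximum peak counting); the exact parameterizations in the paper may differ.

```python
import numpy as np

def dimensionless_jerk(speed, dt):
    """Jerk-based smoothness of a speed profile (larger = less smooth)."""
    jerk = np.gradient(np.gradient(speed, dt), dt)
    T = dt * (len(speed) - 1)
    return (T**3 / np.max(speed)**2) * np.sum(jerk**2) * dt

def spectral_arc_length(speed, dt, f_cut=10.0):
    """Simplified spectral arc length: arc length of the normalized
    magnitude spectrum up to f_cut Hz (closer to 0 = smoother)."""
    n = 4 * len(speed)                      # zero-pad for a denser spectrum
    mag = np.abs(np.fft.rfft(speed, n))
    freqs = np.fft.rfftfreq(n, dt)
    sel = freqs <= f_cut
    mag, freqs = mag[sel] / mag[0], freqs[sel] / f_cut
    return -np.sum(np.sqrt(np.diff(freqs)**2 + np.diff(mag)**2))

def n_speed_peaks(speed):
    """Number of strict local maxima of the speed profile."""
    return int(np.sum((speed[1:-1] > speed[:-2]) & (speed[1:-1] > speed[2:])))

# Bell-shaped (minimum-jerk-like) speed profile vs. a jittery one.
t = np.linspace(0, 1, 200)
dt = t[1] - t[0]
smooth = 30 * t**2 * (1 - t)**2                       # single bell
jittery = smooth * (1 + 0.3 * np.sin(2 * np.pi * 8 * t))

print(n_speed_peaks(smooth), n_speed_peaks(jittery))  # 1 vs. several
```

All three metrics rank the bell-shaped profile as smoother than the jittery one, which is the sanity check one would expect before mapping them over the workspace.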
A four-page article has been published in RSIP Vision, highlighting our work on “A global taxonomy of interpretable AI”, based on an excellent interview of Mara Graziani and Vincent Andrearczyk by Ralph Anzarouth.
More details on our work “A global taxonomy of interpretable AI: unifying the terminology for the technical and social sciences” are available in the paper.
This work was made possible thanks to the large collaboration of experts in various domains: Mara Graziani, Lidia Dutkiewicz, Davide Calvaresi, José Pereira Amorim, Katerina Yordanova, Mor Vered, Rahul Nair, Pedro Henriques Abreu, Tobias Blanke, Valeria Pulignano, John O. Prior, Lode Lauwaert, Wessel Reijers, Adrien Depeursinge, Vincent Andrearczyk and Henning Müller.
Pierre Fontaine successfully defended his PhD work on “Extraction and analysis of image features within a radiomics work-flow applied to prediction in radiotherapy”. His thesis focused on how to optimally exploit regions of interest in medical images when building radiomics models for prognosis prediction. His list of publications is available online: http://publications.hevs.ch/index.php/authors/show/2474
This work was co-supervised by
Dr Oscar Acosta, and Prof. Renaud De Crevoisier, Univ Rennes, CLCC Eugene Marquis, INSERM, LTSI – UMR 1099, F-35000 Rennes, France
Prof. Adrien Depeursinge, Institute of Informatics, University of Applied Sciences Western Switzerland (HES-SO).
Valentin Oreiller successfully defended his PhD thesis on “Validating and Optimizing the Specificity of Image Biomarkers for Personalized Medicine”. In his work, supervised by Prof. Adrien Depeursinge (HES-SO) and Prof. John O. Prior (CHUV), Valentin obtained excellent achievements:
He designed directional image operators that are Locally Rotation Invariant (LRI), based on the power spectrum and bispectrum of the circular harmonics expansion for 2D images or spherical harmonics expansion for 3D images.
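The invariance property behind these operators can be illustrated in a few lines of NumPy (an illustrative toy, not the thesis operators themselves): the circular harmonics expansion of an angular profile is its DFT over the angle, a local rotation is a cyclic shift of the angular samples, and the power spectrum and bispectrum of the expansion are unchanged by that shift.

```python
import numpy as np

rng = np.random.default_rng(0)

# Angular profile of a 2D patch sampled on a ring at N angles.
N = 64
profile = rng.random(N)
rotated = np.roll(profile, 17)  # local rotation = cyclic shift of samples

# Circular harmonics expansion = DFT over the angle.
F = np.fft.fft(profile)
Fr = np.fft.fft(rotated)

# Power spectrum: |F[k]|^2 is invariant to the shift (rotation).
power = lambda f: np.abs(f)**2
assert np.allclose(power(F), power(Fr))

# Bispectrum: F[k1] F[k2] conj(F[k1+k2]) is also shift invariant,
# while additionally retaining relative phase information.
bispec = lambda f, k1, k2: f[k1] * f[k2] * np.conj(f[(k1 + k2) % N])
assert np.allclose(bispec(F, 2, 3), bispec(Fr, 2, 3))
```

The 3D case follows the same logic with spherical instead of circular harmonics, where rotations act on the expansion coefficients through Wigner matrices rather than simple phase shifts.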
He also tackled a crucial aspect of designing robust predictive models: the need for large datasets and benchmarks. An important part of his thesis was the organization of the HEad and neCK TumOR segmentation and outcome prediction in PET/CT images (HECKTOR) challenge.
The quality and reproducibility of radiomics studies are essential requirements for the standardisation of radiomics models. As data-driven respiratory gating (DDG) has recently shown superior diagnostic performance in lung cancer, we evaluated the impact of DDG on the reproducibility of radiomics features derived from [18F]-FDG PET/CT in comparison to free-breathing (FB) imaging.
This study showed that 131 out of 141 radiomics features calculated on all pulmonary lesions can be used interchangeably between DDG and FB PET/CT acquisitions. In addition, radiomics features derived from pulmonary lesions located inferior to the superior lobes, as well as from lesions of smaller size, are subject to greater variability.
Our review article “Automated Tumor Segmentation in Radiotherapy“, by Ricky Savjani et al. has been published in Seminars in Radiation Oncology. In this work, we present advances in gross tumor volume automatic segmentation made in multiple key sites: brain, head and neck, thorax, abdomen, and pelvis.
Automatic tumor segmentation can decrease clinical workload and provide consistency across clinicians and institutions for radiation treatment planning. Additionally, it can enable imaging analyses such as radiomics to construct and deploy large studies with thousands of patients.
The Semantic Knowledge Extractor Tool (SKET) is an unsupervised hybrid system that combines rule-based techniques with pre-trained machine learning models to extract key pathological concepts from diagnostic reports. SKET is a viable solution to reduce pathologists’ workload and can be used as a first, inexpensive solution to bootstrap supervised models in the absence of manual annotations.
SKET eXplained (SKET X) is a web-based system that supports pathologists and domain experts in the visual understanding of SKET predictions. With SKET X, parameters and rules can be refined over time, improving the system’s effectiveness and increasing users’ trust and confidence.
Our paper on Unleashing the potential of digital pathology data by training computer-aided diagnosis models without human annotations, by Niccolò Marini et al., has been published in npj Digital Medicine and is available in open access.
This work presents an approach that removes the need for human experts to annotate data, using an automatic analysis of healthcare reports to create automatic annotations that can be used to train deep learning models. A case study on the classification of colon whole slide images shows the benefit of the approach to best exploit the potential of healthcare data from hospital workflows.
On April 29th, 2021, we organized a debate to bring together researchers from multidisciplinary backgrounds to collaborate on a global definition of interpretability that may be used with high versatility in the documentation of social, cognitive, philosophical, ethical and legal concerns about AI. One of the many interesting outcomes is the definition of interpretable AI that was reached:
“An AI system is interpretable if it is possible to translate its working principles and outcomes in human-understandable language without affecting the validity of the system”.
The article is published and available online in open access.
Prof. Henning Müller has been named one of the ten Digital Shapers of Switzerland in the medical field. This nomination recognizes his and his team’s contribution as eMedics, i.e. people who use digital transformation to enhance different aspects of well-being, health and medicine.
Congratulations to the other 99 Digital Shapers for driving digital innovation and change!
In this work, we present an approach that removes the need for medical experts when building assisting tools for clinical decision support, paving the way for the exploitation of exa-scale sources of medical data collected worldwide from hospital workflows.
The combination of large amounts of healthcare data with new artificial intelligence technologies is leading to the creation of new systems to assist medical experts during diagnosis.
However, this potential is not fully exploited because of a bottleneck: the need for medical experts to analyze and annotate healthcare data.
This study presents an approach aiming to remove the need for human experts to annotate data: the reports corresponding to healthcare data (such as CT scans, MRI or whole slide images) are automatically analyzed to create annotations that can be used to train deep learning models.
The approach involves two components: a natural language processing algorithm, SKET (Semantic Knowledge Extractor Tool), that analyzes the reports and extracts meaningful concepts, and a computer vision algorithm, a convolutional neural network trained to classify medical images using the concepts extracted by SKET as weak labels.
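The report-to-weak-label step can be sketched as follows. This is a minimal illustrative sketch in the spirit of SKET, not its implementation: the concept names, regular-expression rules and example reports are all hypothetical, whereas the real system combines richer NLP with pre-trained models and ontologies.

```python
import re

# Hypothetical rules mapping report phrases to pathological concepts.
CONCEPT_RULES = {
    "cancer": re.compile(r"carcinoma", re.I),
    "high-grade dysplasia": re.compile(r"high[- ]grade dysplasia", re.I),
    "low-grade dysplasia": re.compile(r"low[- ]grade dysplasia", re.I),
    "hyperplastic polyp": re.compile(r"hyperplastic", re.I),
    "normal": re.compile(r"no (evidence of )?(dysplasia|malignancy)", re.I),
}

def extract_concepts(report):
    """Return the set of concepts whose rule matches the report text."""
    return {c for c, rx in CONCEPT_RULES.items() if rx.search(report)}

# Extracted concepts become multi-hot weak labels for the images linked
# to each report, which then supervise the CNN classifier.
reports = [
    "Colon biopsy: invasive adenocarcinoma.",
    "Tubular adenoma with low-grade dysplasia.",
    "Hyperplastic polyp, no evidence of malignancy.",
]
concept_list = sorted(CONCEPT_RULES)
weak_labels = [
    [int(c in extract_concepts(r)) for c in concept_list] for r in reports
]
print(weak_labels)
# → [[1, 0, 0, 0, 0], [0, 0, 0, 1, 0], [0, 0, 1, 0, 1]]
```

The labels are "weak" because they come from imperfect automatic extraction rather than expert annotation of the images themselves; the CNN must tolerate this label noise during training.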