Vincent Andrearczyk presented his research on medical imaging at the AI-Cafe, an online forum offering insights into the European AI scene. He emphasized three essential ingredients: generalizability, interpretability and interaction with clinicians. The talk is available online:
Medical imaging is an essential step in patient care, from diagnosis and treatment planning to follow-up, allowing doctors to assess organs, tissue and blood vessels non-invasively. AI capabilities to analyze medical images are extremely promising for assisting clinicians in their daily routines. This presentation introduces some of the essential ingredients for developing reliable medical imaging AI models, with a focus on generalizability, interpretability and interaction with clinicians. Generalizability refers to the capacity of the models to adapt to new, previously unseen data, for instance, images coming from a new machine or hospital. Interpretability refers to the translation of the working principles and outcomes of the models into human-understandable terms. Finally, the involvement of clinicians in all phases of model development and evaluation is crucial to ensure the utility, usability and alignment of the solutions. The talk covered all these topics and their integration in various tasks to foster patient care, with concrete examples including brain lesion management based on MRI analysis, and head and neck tumor segmentation and outcome prediction from PET/CT images.
Our review article “A Scoping Review of Interpretability and Explainability concerning Artificial Intelligence Methods in Medical Imaging”, M. Champendal, H. Müller, J. O. Prior and C. Sá dos Reis, has been published in the European Journal of Radiology.
Our study shows an increase in XAI publications, primarily applying MRI, CT, and radiography to the classification and prediction of lung and brain pathologies. Visual and numerical output formats predominate. We also show that terminology standardisation remains a challenge, as terms like “explainable” and “interpretable” are sometimes used interchangeably. More details and interesting results are available in the full paper.
In Convolutional Neural Networks (CNNs), the receptive field of individual kernels is traditionally set to very small values (e.g. 3 × 3 pixels) and grown throughout the network by cascading layers. Using large kernels instead allows the best spatial support of the kernels to be learned or adapted automatically. Obtaining an optimal receptive field with few layers is particularly relevant in applications with a limited amount of annotated training data, e.g. in medical imaging.
We show that CNNs (2D U-Nets) with large kernels outperform similar models with standard small kernels on the task of nuclei segmentation in histopathology images. We observe that the large kernels mostly capture low-frequency information, which motivates both the need for large kernels and their efficient compression via the Discrete Cosine Transform (DCT). Following this idea, we develop a U-Net model with wide, compressed DCT kernels that matches the performance and trends of the standard U-Net at reduced complexity.
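The compression idea above can be illustrated in a few lines: because a large kernel is mostly low-frequency, keeping only a small block of its DCT coefficients yields a smooth approximation with far fewer parameters. The sketch below is illustrative only (the kernel is random and the 15 × 15 size and 4 × 4 coefficient block are arbitrary assumptions, not the values used in the paper):

```python
import numpy as np

def dct_matrix(n: int) -> np.ndarray:
    """Orthonormal DCT-II basis matrix of size n x n."""
    i = np.arange(n)[:, None]
    j = np.arange(n)[None, :]
    d = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * j + 1) * i / (2 * n))
    d[0, :] /= np.sqrt(2.0)
    return d

rng = np.random.default_rng(0)
kernel = rng.standard_normal((15, 15))  # stand-in for a large learned kernel

d = dct_matrix(15)
coeffs = d @ kernel @ d.T               # 2D DCT of the kernel

k = 4                                   # keep only the low-frequency block
compressed = np.zeros_like(coeffs)
compressed[:k, :k] = coeffs[:k, :k]

# Inverse DCT: a smooth 15x15 kernel parameterized by only 16 of 225 values.
approx = d.T @ compressed @ d
print(approx.shape, np.count_nonzero(compressed))
```

In a network, one would learn the `k × k` coefficient block directly and reconstruct the wide kernel on the fly, so the parameter count grows with `k`, not with the kernel size.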
The Latsis University Prize aims to acknowledge outstanding work conducted by young researchers. Mara Graziani has been honored with this prestigious award in recognition of her remarkable contributions to the field of trustworthy AI. Her research has significantly enhanced the comprehensibility of deep learning models and their ability to generalize to unseen datasets.
Many of the current artificial intelligence (AI) algorithms operate as “black boxes,” meaning they lack transparency in explaining the reasoning behind their predictions. This lack of transparency poses significant challenges in the use and regulation of AI-based devices in high-stakes contexts.
During her Ph.D. at MedGIFT (HES-SO) and UNIGE, Mara developed several innovative methods that shed light on the inner workings of complex deep learning models. Additionally, she has made substantial progress in multi-task learning methods, guiding models to focus on essential features. This approach has proven to greatly enhance the models’ generalization capabilities and their resilience to domain shifts.
In addition to her remarkable research contributions, Mara was honored with the prize for her commitment to the AI community. In 2021, she initiated the “Introduction to Interpretable AI” expert network, aimed at fostering global discussions on deep learning interpretability. This initiative has reached hundreds of students worldwide and received support from various AI experts who have contributed through online seminars, lecture notes, and open-source code.
The aim of this study was to develop and test an artificial intelligence (AI)-based algorithm for detecting common technical errors in canine thoracic radiography. More specifically, the algorithm was designed to classify the images as correct or as having one or more of the following errors: rotation, underexposure, overexposure, incorrect limb positioning, incorrect neck positioning, blurriness, cut-off, or the presence of foreign objects or medical devices. The algorithm was able to correctly identify errors in thoracic radiographs with an overall accuracy of 81.5% in latero-lateral and 75.7% in sagittal images.
Hand grasp patterns are the result of complex kinematic-muscular coordination, and synergistic control might help reduce the dimensionality of the motor control space at the hand level. Kinematic-muscular synergies combining muscle and kinematic hand grasp data had not been investigated before. This paper provides a novel analysis of kinematic-muscular synergies from the data of 28 subjects performing up to 20 hand grasps.
The results generalize the description of muscle and hand kinematics, clarifying several limits of the field and fostering the development of applications in rehabilitation and assistive robotics.
The objective of the VIDIMU dataset is to pave the way towards affordable patient gross-motor tracking solutions for the recognition and kinematic analysis of daily-life activities in out-of-the-lab environments.
The novelty of this dataset lies in: (i) the clinical relevance of the chosen movements, (ii) the combined utilization of affordable video and custom sensors, and (iii) the implementation of state-of-the-art tools for multimodal data processing of 3D body pose tracking and motion reconstruction in a musculoskeletal model from inertial data.
We describe the second edition of the HECKTOR challenge, including the data, tasks and participation, and present various post-challenge analyses including ranking robustness, ensembles of algorithms, inter-center performance, and influence of tumor size on performance.
For participation in the latest challenge, go to this link.
This work, conducted by Federico Spagnolo and colleagues, is the result of a great collaboration between HES-SO, CHUV and UniBasel. We investigate to what extent automatic tools in multiple sclerosis management fulfill the Quantitative Neuroradiology Initiative (QNI) framework necessary to integrate automated detection and segmentation into the clinical neuroradiology workflow.
Our new dataset “Task-Based Anthropomorphic CT Phantom for Radiomics Stability and Discriminatory Power Analyses (CT-Phantom4Radiomics)” has been published on The Cancer Imaging Archive (TCIA). The aims of this dataset are to determine the stability of radiomics features against computed tomography (CT) parameter variations and to study their discriminative power concerning tissue classification using a 3D-printed CT phantom based on real patient data.
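Feature stability of this kind is often summarized with a simple spread statistic across repeated scans. The sketch below is a generic illustration (the feature values, number of inserts, number of parameter variations, and the 10% threshold are all assumptions, not values from the dataset), using the coefficient of variation as a stability index:

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical radiomics feature values for 6 phantom tissue inserts,
# each scanned under 5 CT parameter variations (rows = inserts).
feature = np.abs(rng.standard_normal((6, 5))) + 5.0

# Coefficient of variation (CoV) across scans: lower = more stable.
cov = feature.std(axis=1, ddof=1) / feature.mean(axis=1)
stable = cov < 0.10  # illustrative rule of thumb: CoV below 10% = stable
print(cov.round(3), stable)
```

A feature that stays stable across parameter variations while still differing between tissue classes is the kind the dataset is designed to identify.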
Congratulations to Nataliia Molchanova who received the Summa Cum Laude Merit Award at the recent 2023 ISMRM & ISMRT Annual Meeting & Exhibition for the abstract “Towards Informative Uncertainty Measures for MRI Segmentation in Clinical Practice: Application to Multiple Sclerosis”.
This recognition indicates an emerging interest in Safe AI supporting medical imaging analysis, highlighting an effort of the organizers to raise awareness about the limitations of deep learning and the necessity of moving towards more trustworthy, explainable, and transparent AI solutions in clinical practice.
Niccolò Marini successfully defended his PhD work on “Deep learning methods to reduce the need for annotations for the extraction of knowledge from multimodal heterogeneous medical data”. His thesis focused on:
how to alleviate the need for manual annotations to train deep learning algorithms
how to leverage color variability to improve the CNN generalization on unseen data
how to learn a more detailed whole-slide image (WSI) representation by combining multiple magnification levels
how to enrich the raw-pixel-level image representation by introducing high-level concepts from textual reports
This work was co-supervised by Prof. Henning Müller and Prof. Manfredo Atzori, Institute of Informatics, University of Applied Sciences Western Switzerland (HES-SO), and Prof. Stephane Marchand-Maillet, Department of Computer Science, University of Geneva. Prof. Anne Martel, Department of Medical Biophysics, University of Toronto, completed the committee as international expert in the domain.
Cristina Simon-Martinez was interviewed by Research Works about the RehaBot project in a very interesting podcast.
RehaBot is a chatbot connecting therapists and patients to establish tele-rehabilitation programs and quantify their outcomes. This project, led in collaboration with Mario Martinez-Zarzuela, was warmly welcomed by research and clinical partners.
Learn more about the project in the podcast available online.
The project “FishLab – Semi-autonomous system for centralised video-monitoring of fish migration in river systems” has been granted an InnoCheque funding by Innosuisse. This will support our collaboration with FishLab to develop a standardized/automated observatory of fish flows in rivers.
For this purpose, fish ladders are fitted with underwater cameras with infrared backlighting.
The end customers are dam operators, who are required to demonstrate the ecological restoration of their structures by 2030 (Federal Water Protection Act 2011).
At HES-SO, we are studying video processing for fish detection, tracking, counting, species identification and size measurement. We are also developing an interface to check and complete the automated results.
Our article “A dexterous hand prosthesis based on additive manufacturing”, M. Atzori et al., was accepted at the VIII congress of the Italian Group of Bioengineering, which will be held in June 2023.
The article presents a project, ongoing for several years, aimed at the development of low-cost, dexterous prosthetic hands for use in real-life conditions. Tests of the prosthetic hand demonstrate its dexterity and the potential of future systems based on 3D printing and machine learning. More details are available in the paper.
Breaking down a deep learning model into interpretable units allows us to better understand how models store representations. However, the occurrence of polysemantic neurons, or neurons that respond to multiple unrelated features, makes interpreting individual neurons challenging. This has led to the search for meaningful directions, known as concept vectors, in activation space instead of looking at individual neurons.
In this work, we propose an interpretability method to disentangle polysemantic neurons into concept vectors consisting of linear combinations of neurons that encapsulate distinct features. We find that monosemantic regions exist in activation space and that features are not axis-aligned. Our results suggest that exploring directions instead of neurons may lead us toward finding coherent fundamental units. We hope this work helps bridge the gap between understanding the fundamental units of models, an important goal of mechanistic interpretability, and concept discovery.
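A toy example makes the distinction between neurons and directions concrete. Below, each of eight simulated neurons responds to a mixture of two unrelated features (polysemanticity), yet a single direction in activation space, a linear combination of neurons found here by least squares, tracks one feature cleanly. This is a minimal illustration of the concept-vector idea, not the method proposed in the paper:

```python
import numpy as np

rng = np.random.default_rng(2)

# Two independent ground-truth binary features, 500 samples.
f = rng.integers(0, 2, size=(500, 2)).astype(float)

# Polysemantic layer: every one of 8 neurons fires for a mix of both features.
mixing = rng.standard_normal((2, 8))
acts = f @ mixing + 0.01 * rng.standard_normal((500, 8))

# A concept vector is a direction in activation space: the least-squares
# combination of neurons whose projection follows feature 0 only.
v, *_ = np.linalg.lstsq(acts, f[:, 0], rcond=None)
score = acts @ v  # projection of activations onto the concept direction

corr_target = np.corrcoef(score, f[:, 0])[0, 1]  # should be near 1
corr_other = np.corrcoef(score, f[:, 1])[0, 1]   # should be near 0
print(round(corr_target, 3), round(corr_other, 3))
```

No individual neuron is monosemantic here, but the direction `v` is, which is why searching activation space for directions can recover coherent units that neuron-level inspection misses.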
QuantImage v2 (QI2) is an open-source, web-based platform for no-code clinical radiomics research, developed in a collaboration between HES-SO and CHUV. It aims to empower physicians to play a leading role in clinical radiomics research.
To test and validate novel CT techniques, such as texture analysis in radiomics, repeat measurements are required. In this work, we analyze in detail the quantitative characteristics of an iodine-ink-based 3D-printed phantom. Design considerations and the manufacturing process are described, and potential pitfalls of radiomics testing with such phantoms are presented.
The Springer LNCS proceedings of the Third Challenge, HEad and neCK TumOR segmentation and outcome prediction in PET/CT images (HECKTOR) 2022, Held in Conjunction with MICCAI 2022, are now published and available online.
The editors V. Andrearczyk, V. Oreiller, M. Hatt, and A. Depeursinge gathered:
23 papers from participants reporting their methods on the tasks of tumor segmentation and outcome prediction.