Presentation of V. Andrearczyk on AI for medical imaging

Vincent Andrearczyk presented his research on medical imaging at the AI-Cafe, an online forum offering insights into the European AI scene. He emphasized three essential ingredients: generalizability, interpretability, and interaction with clinicians. The talk is available online:

Medical imaging is an essential step in patient care, from diagnosis and treatment planning to follow-up, allowing doctors to assess organs, tissues, and blood vessels non-invasively. AI capabilities to analyze medical images are extremely promising for assisting clinicians in their daily routines.
This presentation introduces some of the essential ingredients for developing reliable medical imaging AI models with a focus on generalizability, interpretability and interaction with clinicians.
Generalizability refers to the capacity of models to adapt to new, previously unseen data, for instance, images coming from a new machine or hospital. Interpretability refers to the translation of the working principles and outcomes of the models into human-understandable terms. Finally, the involvement of clinicians in all phases of model development and evaluation is crucial to ensure the utility, usability, and alignment of the solutions.
The talk covered all these topics and their integration in various tasks to foster patient care, with concrete examples including brain lesion management based on MRI analysis, and head and neck tumor segmentation and outcome prediction from PET/CT images.

Review article on XAI in medical imaging

Our review article “A Scoping Review of Interpretability and Explainability concerning Artificial Intelligence Methods in Medical Imaging”, by M. Champendal, H. Müller, J. O. Prior and C. Sá dos Reis, has been published in the European Journal of Radiology.

Our study shows the increase in XAI publications, which primarily apply MRI, CT, and radiography to classify and predict lung and brain pathologies. Visual and numerical output formats are predominantly used. We also show that terminology standardisation remains a challenge, as terms like “explainable” and “interpretable” are sometimes used interchangeably. More details and interesting results are available in the full paper.

Examples of different XAI output formats

Article on wide kernels and DCT in CNNs published in Informatics in Medicine Unlocked

Our work on “Wide kernels and their DCT compression in convolutional networks for nuclei segmentation”, by V. Andrearczyk, V. Oreiller and A. Depeursinge, has been published in Informatics in Medicine Unlocked.

In Convolutional Neural Networks (CNNs), the field of view is traditionally set to very small values (e.g. 3 × 3 pixels) for individual kernels and grown throughout the network by cascading layers. Automatically learning or adapting the best spatial support of the kernels can be done by using large kernels. Obtaining an optimal receptive field with few layers is very relevant in applications with a limited amount of annotated training data, e.g. in medical imaging.

We show that CNNs (2D U-Nets) with large kernels outperform similar models with standard small kernels on the task of nuclei segmentation in histopathology images. We observe that the large kernels mostly capture low-frequency information, which motivates their efficient compression via the Discrete Cosine Transform (DCT). Following this idea, we develop a U-Net model with wide, compressed DCT kernels that matches the performance and trends of the standard U-Net with reduced complexity, as sketched below.
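As a rough illustration of the idea (not the paper's implementation; the kernel size, initialization scale, and number of retained DCT coefficients below are assumptions), a convolution layer can parameterize each wide kernel by a few low-frequency DCT coefficients and reconstruct the full kernel on the fly:

    import math
    import torch
    import torch.nn as nn

    def dct_basis(k, n_coeffs):
        # 2D DCT-II basis for a k x k kernel, keeping only the
        # n_coeffs x n_coeffs lowest-frequency components
        i = torch.arange(k, dtype=torch.float32)
        u = torch.arange(n_coeffs, dtype=torch.float32)
        b1d = torch.cos(math.pi / k * (i[None, :] + 0.5) * u[:, None])  # (n_coeffs, k)
        return torch.einsum('ui,vj->uvij', b1d, b1d).reshape(-1, k, k)

    class DCTConv2d(nn.Module):
        def __init__(self, in_ch, out_ch, kernel_size=13, n_coeffs=3):
            super().__init__()
            self.register_buffer('basis', dct_basis(kernel_size, n_coeffs))
            # Learn only n_coeffs^2 values per kernel instead of kernel_size^2
            self.coeffs = nn.Parameter(0.01 * torch.randn(out_ch, in_ch, n_coeffs ** 2))
            self.padding = kernel_size // 2

        def forward(self, x):
            # Reconstruct the full wide kernels from the compressed coefficients
            w = torch.einsum('oic,ckl->oikl', self.coeffs, self.basis)
            return nn.functional.conv2d(x, w, padding=self.padding)

Replacing the standard 3 × 3 convolutions of a U-Net with such layers yields a large receptive field in few layers while keeping the parameter count low.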

Trained 2D kernels (a) and their averaged DCT transform (b) in a standard U-Net with a kernel size of 13. The average of the 2×8=16 sets of squared coefficients is plotted for one trained model.
Example of a wide-kernel U-Net prediction. Individual colors are used in the top row for each nucleus instance. The U-Net softmax predictions are illustrated in the second row. The predictions post-processed with the watershed algorithm are illustrated on the top row between the input and the ground truth images.

Open PhD position – Machine learning and multimedia knowledge extraction from biomedical data

We are looking for a talented PhD student in the MedGIFT research group.

The position involves working on machine learning and multimedia knowledge extraction from biomedical data.

The PhD will be part of the European Horizon research project HEREDITARY, carried out by a consortium of 18 partners from Europe and the United States of America.

More information here.

Other PhD, post-doc, and internship positions are also open; do not hesitate to contact us for more information.

Mara Graziani awarded prestigious Latsis prize at UNIGE

The Latsis University Prize aims to acknowledge outstanding work conducted by young researchers. Mara Graziani has been honored with this prestigious award in recognition of her remarkable contributions to the field of trustworthy AI. Her research has significantly enhanced the comprehensibility of deep learning models and their ability to generalize to unseen datasets.

Many of the current artificial intelligence (AI) algorithms operate as “black boxes,” meaning they lack transparency in explaining the reasoning behind their predictions. This lack of transparency poses significant challenges in the use and regulation of AI-based devices in high-stakes contexts.

During her PhD at MedGIFT (HES-SO) and UNIGE, Mara developed several innovative methods that shed light on the inner workings of complex deep learning models. Additionally, she has made substantial progress in multi-task learning methods, guiding models to focus on essential features. This approach has proven to greatly enhance the models’ generalization capabilities and their resilience to domain shifts.

In addition to her remarkable research contributions, Mara was honored with the prize for her commitment to the AI community. In 2021, she initiated the “Introduction to Interpretable AI” expert network, aimed at fostering global discussions on deep learning interpretability. This initiative has reached hundreds of students worldwide and received support from various AI experts who have contributed through online seminars, lecture notes, and open-source code.

Link to the award ceremony here (38’45”)

Mara Graziani receives the Latsis prize from UNIGE rector Yves Flückiger and vice-rector Jean-Marc Triscone

Article on canine thoracic radiographs classification published in Scientific Reports

Our recent work on “An AI-based algorithm for the automatic evaluation of image quality in canine thoracic radiographs” has been published in Scientific Reports (Nature).

The aim of this study was to develop and test an artificial intelligence (AI)-based algorithm for detecting common technical errors in canine thoracic radiography. More specifically, the algorithm was designed to classify the images as correct or as having one or more of the following errors: rotation, underexposure, overexposure, incorrect limb positioning, incorrect neck positioning, blurriness, cut-off, or the presence of foreign objects or medical devices. The algorithm correctly identified errors in thoracic radiographs with an overall accuracy of 81.5% in latero-lateral and 75.7% in sagittal images.
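While the paper's exact architecture is not described in this post, the task maps naturally onto multi-label classification, with one independent binary output per error type. A minimal sketch, assuming a ResNet backbone and an illustrative label set:

    import torch
    import torch.nn as nn
    from torchvision import models

    # Hypothetical error labels; the study's exact label set may be worded differently
    ERRORS = ['rotation', 'underexposure', 'overexposure', 'limb_positioning',
              'neck_positioning', 'blurriness', 'cut_off', 'foreign_object',
              'medical_device']

    backbone = models.resnet18(weights=None)
    backbone.fc = nn.Linear(backbone.fc.in_features, len(ERRORS))
    criterion = nn.BCEWithLogitsLoss()  # independent binary decision per error

    logits = backbone(torch.randn(4, 3, 224, 224))         # toy batch of radiographs
    loss = criterion(logits, torch.zeros(4, len(ERRORS)))  # all-correct toy targets
    flagged = torch.sigmoid(logits) > 0.5  # an image is 'correct' if no error is flagged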

Article on Kinematic-Muscular Synergies in Hand Grasp Patterns Published in IEEE Access

Our recent work “Functional synergies applied to a publicly available dataset of hand grasps show evidence of kinematic-muscular synergistic control” has been published in IEEE Access.

Hand grasp patterns are the result of complex kinematic-muscular coordination, and synergistic control might help reduce the dimensionality of the motor control space at the hand level. Kinematic-muscular synergies combining muscle and kinematic hand grasp data have not been investigated before. This paper provides a novel analysis of such synergies from kinematic-muscular data of 28 subjects performing up to 20 hand grasps.
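Synergies of this kind are commonly extracted with non-negative matrix factorization (NMF). The sketch below illustrates that general recipe on random stand-in data; the paper's exact preprocessing and factorization choices may differ:

    import numpy as np
    from sklearn.decomposition import NMF

    rng = np.random.default_rng(0)
    n_samples, n_emg, n_kin = 5000, 12, 22                 # stand-in channel counts
    emg = np.abs(rng.standard_normal((n_samples, n_emg)))  # rectified EMG envelopes
    kin = rng.random((n_samples, n_kin))                   # joint angles scaled to [0, 1]

    # Stack normalized muscle and kinematic channels so that each extracted
    # synergy couples muscle activations with hand postures
    data = np.hstack([emg / emg.max(axis=0), kin])
    model = NMF(n_components=6, init='nndsvda', max_iter=500)
    activations = model.fit_transform(data)  # synergy activations over time
    synergies = model.components_            # each row spans EMG + kinematic channels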

The results generalize the description of muscle and hand kinematics, clarifying several limitations of the field and fostering the development of applications in rehabilitation and assistive robotics.

Article published in Scientific Data (Nature)

Our latest paper published in Scientific Data (Nature), entitled “Multimodal video and IMU kinematic dataset on daily life activities using affordable devices”, by M. Martínez-Zarzuela et al., describes VIDIMU, our publicly available dataset hosted in the Zenodo repository.

The objective of the VIDIMU dataset is to pave the way towards affordable patient gross motor tracking solutions for daily-life activity recognition and kinematic analysis in out-of-the-lab environments.

The novelty of this dataset lies in: (i) the clinical relevance of the chosen movements, (ii) the combined use of affordable video and custom sensors, and (iii) the implementation of state-of-the-art tools for multimodal data processing: 3D body pose tracking from video and motion reconstruction in a musculoskeletal model from inertial data.

HECKTOR paper published in Medical Image Analysis

Our article “Automatic Head and Neck Tumor segmentation and outcome prediction relying on FDG-PET/CT images: Findings from the second edition of the HECKTOR challenge”, by V. Andrearczyk et al., has been published in Medical Image Analysis.

We describe the second edition of the HECKTOR challenge, including the data, tasks, and participation, and present various post-challenge analyses covering ranking robustness, ensembles of algorithms, inter-center performance, and the influence of tumor size on performance.
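As a flavor of the ensemble analysis, binary tumor masks from several participant algorithms can be combined by a per-voxel majority vote (the file names below are hypothetical):

    import numpy as np

    # Stack binary masks predicted by five hypothetical participant algorithms
    masks = np.stack([np.load(f'algorithm_{i}_mask.npy') for i in range(5)])
    # Keep a voxel if at least 3 of the 5 algorithms marked it as tumor
    ensemble = (masks.mean(axis=0) >= 0.5).astype(np.uint8)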

To participate in the latest edition of the challenge, follow this link.

Review article published in NeuroImage: Clinical

Our review paper “Automated MS lesion detection and segmentation in clinical workflow: a systematic review” was published in NeuroImage: Clinical.

This work, conducted by Federico Spagnolo and colleagues, is the result of a great collaboration between HES-SO, CHUV and UniBasel. We investigate to what extent automatic tools in multiple sclerosis management fulfill the Quantitative Neuroradiology Initiative (QNI) framework necessary to integrate automated detection and segmentation into the clinical neuroradiology workflow.

New CT Phantom for Radiomics dataset published on TCIA

Our new dataset “Task-Based Anthropomorphic CT Phantom for Radiomics Stability and Discriminatory Power Analyses (CT-Phantom4Radiomics)” has been published on The Cancer Imaging Archive (TCIA). The aims of this dataset are to determine the stability of radiomics features against computed tomography (CT) parameter variations and to study their discriminative power for tissue classification, using a 3D-printed CT phantom based on real patient data.
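As an illustration of the kind of stability analysis the dataset enables (the file and column names below are assumptions, not the actual QA4IQI pipeline, which is linked below), one can compute the coefficient of variation of each feature across the parameter-varied acquisitions:

    import numpy as np
    import pandas as pd

    # Hypothetical table: one row per (scan, ROI), one column per radiomics feature
    df = pd.read_csv('features.csv')
    feature_cols = [c for c in df.columns if c.startswith('original_')]

    def cov(x):
        # Coefficient of variation, guarded against near-zero means
        m = x.mean()
        return np.nan if np.isclose(m, 0) else x.std(ddof=1) / abs(m)

    # Spread of each feature across CT parameter variations, per tissue ROI;
    # features with a low CoV are considered stable
    stability = df.groupby('roi')[feature_cols].agg(cov)
    print(stability.le(0.1).mean())  # fraction of ROIs in which each feature is stable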

The dataset is available here: https://doi.org/10.7937/a1v1-rc66

The source code and instructions on how to run the full feature extraction pipeline on the TCIA dataset can be found here: https://github.com/QA4IQI/qa4iqi-extraction

More details on the QA4IQI project results: https://qa4iqi.github.io/

Nataliia Molchanova received the Summa Cum Laude Merit Award at ISMRM

Congratulations to Nataliia Molchanova who received the Summa Cum Laude Merit Award at the recent 2023 ISMRM & ISMRT Annual Meeting & Exhibition for the abstract “Towards Informative Uncertainty Measures for MRI Segmentation in Clinical Practice: Application to Multiple Sclerosis”.

This recognition reflects an emerging interest in safe AI for medical image analysis, highlighting the organizers' effort to raise awareness about the limitations of deep learning and the necessity of moving towards more trustworthy, explainable, and transparent AI solutions in clinical practice.
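For illustration, one widely used uncertainty measure in this line of work (not necessarily the awarded abstract's exact method) is the voxel-wise predictive entropy of Monte-Carlo dropout samples:

    import torch

    def mc_dropout_entropy(model, image, n_samples=20):
        # Keep dropout active at inference time (in a real model, only the
        # dropout layers should be switched to train mode)
        model.train()
        with torch.no_grad():
            probs = torch.stack([torch.sigmoid(model(image))
                                 for _ in range(n_samples)])
        p = probs.mean(dim=0)  # mean foreground probability per voxel
        eps = 1e-8
        # Binary predictive entropy: high where the MC samples disagree
        return -(p * (p + eps).log() + (1 - p) * (1 - p + eps).log())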

Nataliia Molchanova and her work on uncertainty measures for MRI segmentation

Congratulations to Dr. Niccolò Marini!

Niccolò Marini successfully defended his PhD work on “Deep learning methods to reduce the need for annotations for the extraction of knowledge from multimodal heterogeneous medical data”.
His thesis focused on:

  • how to alleviate the need for manual annotations to train deep learning algorithms
  • how to leverage color variability to improve CNN generalization on unseen data
  • how to learn a more detailed WSI representation by combining multiple magnification levels
  • how to empower the raw pixel-level image representation by introducing high-level concepts from textual reports

This work was co-supervised by Prof. Henning Müller and Prof. Manfredo Atzori, Institute of Informatics, University of Applied Sciences Western Switzerland (HES-SO), and Prof. Stéphane Marchand-Maillet, Department of Computer Science, University of Geneva. Prof. Anne Martel, Department of Medical Biophysics, University of Toronto, completed the committee as the international expert in the domain.

For more details, check his list of publications.

Interview of Cristina Simon-Martinez on the RehaBot project

Cristina Simon-Martinez was interviewed by Research Works about the RehaBot project in a very interesting podcast.

RehaBot is a chatbot connecting therapists and patients to establish tele-rehabilitation programs and quantify their outcomes. This project, led in collaboration with Mario Martinez-Zarzuela, received a warm welcome from research and clinical partners.

Learn more about the project in the podcast available online.

InnoCheque funding obtained for the FishLab project

The project “FishLab – Semi-autonomous system for centralised video-monitoring of fish migration in river systems” has been granted InnoCheque funding by Innosuisse. This will support our collaboration with FishLab to develop a standardized, automated observatory of fish flows in rivers.

For this purpose, fish ladders are fitted with underwater cameras with infrared backlighting.

The end customers are dam operators, who are required to demonstrate the ecological restoration of their structures by 2030 (Federal Water Protection Act 2011).

At HES-SO, we are studying video processing for fish detection, tracking, counting, species identification, and size measurement. We are also developing an interface to check and complete the automated results.
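As a minimal sketch of the detection step, assuming infrared-backlit footage in which fish appear as moving silhouettes (file name and thresholds are illustrative, not our production pipeline):

    import cv2

    cap = cv2.VideoCapture('fish_ladder.mp4')
    bg = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=16)

    while True:
        ok, frame = cap.read()
        if not ok:
            break
        mask = bg.apply(frame)  # moving fish stand out against the static backlight
        mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN,
                                cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5)))
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        fish = [c for c in contours if cv2.contourArea(c) > 200]  # size gate
        print(f'{len(fish)} candidate fish in this frame')
    cap.release()

Species identification and size measurement would then operate on the cropped detections.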

Paper on dexterous prosthetic hands at GNB 2023

Our article “A dexterous hand prosthesis based on additive manufacturing”, by M. Atzori et al., was accepted at the VIII Congress of the Italian Group of Bioengineering (GNB), which will be held in June 2023.

The article presents a project that has been ongoing for several years, aimed at developing low-cost, dexterous prosthetic hands for use in real-life conditions.
The tests of the prosthetic hand demonstrate its dexterity and the potential of future systems based on 3D printing and machine learning. More details are available in the paper.

3D printed robotic hand and socket.
Examples of hand grasp tests.

Paper accepted at XAI4CV, Workshop at CVPR 2023

Our work on “Disentangling Neuron Representations with Concept Vectors”, by Laura O’Mahony et al., will be presented at the 2nd Explainable AI for Computer Vision (XAI4CV) Workshop at CVPR, Vancouver, June 19, 2023.

Breaking down a deep learning model into interpretable units allows us to better understand how models store representations. However, the occurrence of polysemantic neurons, or neurons that respond to multiple unrelated features, makes interpreting individual neurons challenging. This has led to the search for meaningful directions, known as concept vectors, in activation space instead of looking at individual neurons.

In this work, we propose an interpretability method to disentangle polysemantic neurons into concept vectors consisting of linear combinations of neurons that encapsulate distinct features. We find that monosemantic regions exist in activation space and that features are not axis-aligned. Our results suggest that exploring directions, instead of individual neurons, may lead us toward finding coherent fundamental units. We hope this work helps bridge the gap between understanding the fundamental units of models, an important goal of mechanistic interpretability, and concept discovery.
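The released code (linked below) contains the actual method. Purely to illustrate the underlying idea, one could cluster a polysemantic neuron's top-activating examples in activation space and read off one unit direction per cluster:

    import numpy as np
    from sklearn.cluster import KMeans

    # Hypothetical activations of one layer: shape (n_images, n_neurons)
    acts = np.load('layer_activations.npy')
    neuron = 42  # a polysemantic neuron, e.g. firing for apples AND sports fields
    top = acts[np.argsort(acts[:, neuron])[-200:]]  # its top-activating examples

    # Split the examples into two groups and turn the centroids into unit
    # directions: one concept vector per disentangled feature
    km = KMeans(n_clusters=2, n_init=10).fit(top)
    concept_vectors = km.cluster_centers_ / np.linalg.norm(
        km.cluster_centers_, axis=1, keepdims=True)

    # Score any image against a concept by projecting onto its direction
    scores = acts @ concept_vectors.T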

The code is available on GitHub.

Great post by Laura available on Medium.

The proposed method to obtain two concept vectors from a neuron is depicted for a neuron that activates for both apples and sports fields. See full paper for details.

QuantImage v2 cloud platform described in European Radiology Experimental

QuantImage v2 (QI2) is an open-source, web-based platform for no-code clinical radiomics research, developed in a collaboration between HES-SO and CHUV with the aim of empowering physicians to play a leading role in clinical radiomics research.

The article describing the platform, “QuantImage v2: a comprehensive and integrated physician-centered cloud platform for radiomics and machine learning research”, by Daniel Abler et al., has been published in European Radiology Experimental (Springer Nature).

Physician-centered radiomics research. Physician-centered radiomics envisions medical doctors at the center of the radiomics research, development, and translation cycle

The web page describing QuantImage is available here.

Article on CT phantoms for radiomics published in Medical Physics

Our work on “3D-printed iodine-ink CT phantom for radiomics feature extraction – advantages and challenges”, by M. Bach et al., has been published in Medical Physics.

To test and validate novel CT techniques, such as texture analysis in radiomics, repeat measurements are required. In this work, we analyze in detail the quantitative characteristics of an iodine-ink-based 3D-printed phantom. Design considerations and the manufacturing process are described, and potential pitfalls of radiomics testing with such phantoms are presented.

CT datasets of the different phantom parts and overall phantom appearance. (a) Transversal CT image of the lung part, (b) the liver part, and (c) the test patterns. (d) Coronal CT image of the phantom. (e) Cuboid black phantom positioned on a board in a CT scanner.

LNCS proceedings of HECKTOR 2022 published

The Springer LNCS proceedings of the Third Challenge, HEad and neCK TumOR segmentation and outcome prediction in PET/CT images (HECKTOR) 2022, Held in Conjunction with MICCAI 2022, are now published and available online.

The editors V. Andrearczyk, V. Oreiller, M. Hatt, and A. Depeursinge gathered:

  • 23 papers from participants reporting their methods for the tasks of tumor segmentation and outcome prediction.
  • An overview paper presenting the data, participation, results, etc.

Post-challenge participation is still open on grand-challenge.