Journal article published in Neuroscience

Our recent work on “Spatial and temporal muscle synergies provide a dual characterization of low-dimensional and intermittent control of upper-limb movements”, authored by C. Brambilla, M. Atzori, H. Müller, A. d’Avella, and A. Scano, has been published in Neuroscience (Elsevier).

Some of the exciting highlights of the work:

  • Non-negative matrix factorization allows the extraction of spatial and temporal synergies (a minimal sketch follows this list).
  • Spatial and temporal synergies were extracted from two upper-limb datasets.
  • We showed the existence of EMG spatial and temporal structure with surrogate data.
  • Spatial and temporal synergies may capture different hierarchical levels of motor control.
  • The structure of temporal synergies may be related to intermittent control.
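
For illustration, here is a minimal Python sketch of spatial-synergy extraction with non-negative matrix factorization; the data, variable names and number of synergies are placeholders and do not reproduce the exact pipeline of the study (temporal synergies can be obtained with an analogous factorization over time).

import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(0)
n_samples, n_muscles, n_synergies = 1000, 12, 4

# Placeholder EMG envelopes (rectified, low-pass filtered): time samples x muscles, non-negative.
emg = np.abs(rng.standard_normal((n_samples, n_muscles)))

model = NMF(n_components=n_synergies, init="nndsvda", max_iter=500)
activations = model.fit_transform(emg)   # activation coefficients (time x synergies)
synergies = model.components_            # spatial synergies, i.e. muscle weightings (synergies x muscles)

# Variance accounted for (VAF), a common criterion for choosing the number of synergies.
reconstruction = activations @ synergies
vaf = 1.0 - np.sum((emg - reconstruction) ** 2) / np.sum(emg ** 2)
print(f"VAF with {n_synergies} spatial synergies: {vaf:.3f}")
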
Overview of the study

Henning Müller will present the ExaMode project at the AI-Cafe

The first AI-Cafe of 2023 will put the ExaMode project in the spotlight on February 8th 2023, 14-15h CEST.

Prof. Henning Müller will present “ExaMode: a practical approach towards using machine learning in histopathology”. The ExaMode project is funded by the European Commission to work on scalable solutions for AI in histopathology. The project follows a practical approach: defining use cases, collecting multimodal data from hospitals, and then developing decision support tools. Aspects such as learning from limited data and multimodal learning will be addressed, as well as explainability when building systems that are integrated into the clinical workflow.

Online registration is free: https://t.co/q8HIJDE7n5

Article on color augmentation published in Pathology Informatics

Our work on “Data-driven color augmentation for H&E stained images in computational pathology”, led by Niccolò Marini, has been published in the Elsevier Journal of Pathology Informatics.

In this work, we present a method to improve the efficiency of color augmentation by increasing the reliability of the augmented samples used for training computational pathology models. During CNN training, a database of over 2 million H&E color variations, collected from private and public datasets, is used as a reference to discard augmented data whose color distributions do not correspond to realistic samples.
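
As an intuition for this filtering step, the toy Python sketch below keeps an augmented patch only if its color statistics fall close to a reference set; the descriptor (per-channel mean and standard deviation) and the distance threshold are illustrative assumptions, not the criterion used in the paper.

import numpy as np

def color_descriptor(image: np.ndarray) -> np.ndarray:
    """Concatenate per-channel mean and standard deviation of an RGB image in [0, 1]."""
    return np.concatenate([image.mean(axis=(0, 1)), image.std(axis=(0, 1))])

def is_realistic(augmented: np.ndarray, reference_descriptors: np.ndarray,
                 threshold: float = 0.15) -> bool:
    """Accept the augmented patch if its descriptor has a close enough neighbour in the reference database."""
    distances = np.linalg.norm(reference_descriptors - color_descriptor(augmented), axis=1)
    return bool(distances.min() < threshold)

# Toy usage: reference descriptors built from patches with "realistic" color statistics.
rng = np.random.default_rng(0)
reference_descriptors = np.stack(
    [color_descriptor(rng.uniform(0.3, 0.7, size=(32, 32, 3))) for _ in range(200)])
realistic_aug = rng.uniform(0.3, 0.7, size=(32, 32, 3))      # similar statistics -> kept
unrealistic_aug = np.clip(realistic_aug * 0.2, 0.0, 1.0)     # extreme color shift -> discarded
print(is_realistic(realistic_aug, reference_descriptors),
      is_realistic(unrealistic_aug, reference_descriptors))  # True False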

The GitHub repository with the code is available online: https://lnkd.in/eY8byYcR,

and the database including stain vectors: https://lnkd.in/eNf3bnTu

Special issue on medical imaging data for digital diagnostics open for submissions

Prof. Henning Müller is co-editor of the special issue “Medical imaging data for digital diagnostics” in Nature Scientific Data, open for submissions until 20th June 2023.

This Collection presents a series of articles describing annotated datasets of medical images and video. Data are presented without hypotheses or significant analyses to support improvements such as benchmarking or improving machine learning algorithms. All medical specialities are considered and data can be derived from study participants, tissue samples, electronic health records (EHRs) or other sources. All described datasets are assessed to ensure their open availability (where possible) or secure access controls (where required) via Scientific Data’s editorial and peer review processes.

Article on upper limb motion analysis published in Applied Sciences

Our work on “Mapping of the Upper Limb Work-Space: Benchmarking Four Wrist Smoothness Metrics”, led by Alessandro Scano, Cristina Brambilla, Henning Müller and Manfredo Atzori, has been published in the Applied Sciences journal, MDPI.

Smoothness-based models effectively describe skilled motion planning. Although smoothness is often used as a measure of motor control and to evaluate clinical pathologies, a smoothness map of the whole upper-limb workspace has so far not been available. In this work, we provide a map of the upper limb workspace comparing four smoothness metrics: the normalized jerk, the speed metric, the spectral arc length, and the number of speed peaks.
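
As an illustration of how such metrics can be computed from a wrist speed profile, here is a minimal Python sketch for two of them, the number of speed peaks and a dimensionless normalized jerk; normalization conventions vary in the literature, and the choices below are illustrative rather than those of the study.

import numpy as np
from scipy.signal import find_peaks

def speed_peaks(speed: np.ndarray, prominence: float = 0.05) -> int:
    """Count local maxima of the speed profile (fewer peaks = smoother motion)."""
    peaks, _ = find_peaks(speed, prominence=prominence * speed.max())
    return len(peaks)

def normalized_jerk(speed: np.ndarray, dt: float) -> float:
    """Dimensionless squared-jerk integral of the speed profile (lower = smoother)."""
    accel = np.gradient(speed, dt)
    jerk = np.gradient(accel, dt)
    duration = dt * (len(speed) - 1)
    peak_speed = speed.max()
    return np.sum(jerk ** 2) * dt * duration ** 3 / peak_speed ** 2

# Toy example: a bell-shaped, minimum-jerk-like wrist speed profile.
t = np.linspace(0.0, 1.0, 200)
speed = np.sin(np.pi * t) ** 2
print(speed_peaks(speed), normalized_jerk(speed, dt=t[1] - t[0]))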

More details in the article available online: https://www.mdpi.com/2076-3417/12/24/12643

Scheme of the work. Upper limb movements (P2P and EXP) were performed in 5 sectors of the workspace by 15 subjects. The data were segmented into phases and the wrist kinematics and articular angles were computed. Then, four smoothness metrics based on wrist motion were computed and smoothness was compared between sectors, mapped with respect to articular angles and in the 3D workspace for each smoothness metric. Finally, the correlations between pairs of metrics were computed.

Our work on taxonomy in AI interpretability highlighted in RSIP Vision

A 4-page article has been published by RSIP Vision, highlighting our work on a global taxonomy of interpretable AI, based on an excellent interview of Mara Graziani and Vincent Andrearczyk by Ralph Anzarouth.

More details on our work “A global taxonomy of interpretable AI: unifying the terminology for the technical and social sciences” are available in the paper.

This work was made possible thanks to the large collaboration of experts in various domains: Mara Graziani, Lidia Dutkiewicz, Davide Calvaresi, José Pereira Amorim, Katerina Yordanova, Mor Vered, Rahul Nair, Pedro Henriques Abreu, Tobias Blanke, Valeria Pulignano, John O. Prior, Lode Lauwaert, Wessel Reijers, Adrien Depeursinge, Vincent Andrearczyk and Henning Müller.

Congratulations to Dr. Pierre Fontaine!

Pierre Fontaine successfully defended his PhD work on “Extraction and analysis of image features within a radiomics work-flow applied to prediction in radiotherapy”. His thesis focused on how to optimally exploit regions of interest in medical images when building radiomics models for prognosis prediction. His list of publications is available online: http://publications.hevs.ch/index.php/authors/show/2474

This work was co-supervised by

  • Dr Oscar Acosta, and Prof. Renaud De Crevoisier, Univ Rennes, CLCC Eugene Marquis, INSERM, LTSI – UMR 1099, F-35000 Rennes, France
  • Prof. Adrien Depeursinge, Institute of Informatics, University of Applied Sciences Western Switzerland (HES-SO).
Pierre Fontaine defending his PhD thesis

Congratulations to Valentin Oreiller for his PhD!

Valentin Oreiller successfully defended his PhD thesis on “Validating and Optimizing the Specificity of Image Biomarkers for Personalized Medicine”. In his work, supervised by Prof. Adrien Depeursinge (HES-SO) and Prof. John O. Prior (CHUV), Valentin achieved excellent results:

  • He designed directional image operators that are Locally Rotation Invariant (LRI), based on the power spectrum and bispectrum of the circular harmonics expansion for 2D images or the spherical harmonics expansion for 3D images (a simplified 2D sketch follows this list).
  • He also tackled a crucial aspect of designing robust predictive models: the need for large datasets and benchmarks. An important part of his thesis was the organization of the HEad and neCK tumOR segmentation and outcome prediction in PET/CT images (HECKTOR) challenge.
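
As a rough intuition for the first point, the following simplified 2D Python sketch builds a locally rotation invariant descriptor from the power spectrum of a circular harmonics expansion: image values sampled on a circle around a point are expanded with a 1D DFT over the angle, and only the squared magnitudes are kept, since rotating the neighborhood only shifts the phases. Radius, sampling and interpolation choices are illustrative, and the bispectrum and 3D spherical-harmonics cases of the thesis are not covered.

import numpy as np
from scipy.ndimage import map_coordinates, rotate

def circular_power_spectrum(image: np.ndarray, center: tuple, radius: float = 5.0,
                            n_angles: int = 32, max_order: int = 8) -> np.ndarray:
    """Rotation-invariant descriptor at `center` from a single circular ring."""
    angles = np.linspace(0.0, 2.0 * np.pi, n_angles, endpoint=False)
    rows = center[0] + radius * np.sin(angles)
    cols = center[1] + radius * np.cos(angles)
    ring = map_coordinates(image, np.vstack([rows, cols]), order=1)
    harmonics = np.fft.rfft(ring) / n_angles          # circular harmonic coefficients
    return np.abs(harmonics[: max_order + 1]) ** 2    # power spectrum: phases (hence rotation) discarded

# Toy check: the descriptor is (numerically) unchanged when the image is rotated about the point.
rng = np.random.default_rng(0)
img = rng.random((65, 65))
d0 = circular_power_spectrum(img, center=(32.0, 32.0))
d90 = circular_power_spectrum(rotate(img, 90, reshape=False, order=1), center=(32.0, 32.0))
print(np.allclose(d0, d90, atol=1e-5))  # True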

His list of publications is available online: http://publications.hevs.ch/index.php/authors/show/2489

Valentin Oreiller (right) defending his PhD thesis, supervised by Prof. Adrien Depeursinge (left) and Prof. John Prior

Article published in European Journal of Hybrid Imaging

Our work on “Reproducibility of lung cancer radiomics features extracted from data‑driven respiratory gating and free‑breathing flow imaging in [18F]‑FDG PET/CT”, by Daphné Faist et al., has been published in the Springer Nature European Journal of Hybrid Imaging.

The quality and reproducibility of radiomics studies are essential requirements for the standardisation of radiomics models. As recent data-driven respiratory gating (DDG) [18F]-FDG has shown superior diagnostic performance in lung cancer, we evaluated the impact of DDG on the reproducibility of radiomics features derived from [18F]-FDG PET/CT in comparison to free-breathing flow (FB) imaging.

This study showed that 131 out of 141 radiomics features calculated on all pulmonary lesions can be used interchangeably between DDG and FB PET/CT acquisitions. In addition, radiomics features derived from pulmonary lesions located inferior to the superior lobes, as well as from smaller lesions, are subject to greater variability.
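
For readers who want to reproduce this kind of analysis, a per-feature agreement index can be computed between paired DDG and FB measurements; the sketch below uses Lin’s concordance correlation coefficient as an assumed, illustrative choice, and the study’s exact metric and interchangeability threshold may differ.

import numpy as np

def concordance_ccc(x: np.ndarray, y: np.ndarray) -> float:
    """Lin's concordance correlation coefficient between paired measurements."""
    mx, my = x.mean(), y.mean()
    covariance = ((x - mx) * (y - my)).mean()
    return 2.0 * covariance / (x.var() + y.var() + (mx - my) ** 2)

# Toy example: one radiomics feature measured on the same lesions with both acquisitions.
rng = np.random.default_rng(0)
feature_ddg = rng.normal(10.0, 2.0, size=50)
feature_fb = feature_ddg + rng.normal(0.0, 0.3, size=50)   # small acquisition-related difference
ccc = concordance_ccc(feature_ddg, feature_fb)
print(f"CCC = {ccc:.3f} -> interchangeable if above a chosen threshold (e.g., 0.90)")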

Free-breathing flow (first row) vs data-driven respiratory gating (second row) 18F-FDG PET/CT

Review article on automated tumor segmentation in radiotherapy

Our review article “Automated Tumor Segmentation in Radiotherapy”, by Ricky Savjani et al., has been published in Seminars in Radiation Oncology. In this work, we present advances in automatic gross tumor volume segmentation across multiple key sites: brain, head and neck, thorax, abdomen, and pelvis.

Automatic tumor segmentation can reduce clinical workload and provide consistency across clinicians and institutions for radiation treatment planning. Additionally, automatic segmentation can enable imaging analyses such as radiomics to construct and deploy large studies with thousands of patients.

(A) In this review, we highlight major advances in tumor autosegmentation for 5 major clinical sites: brain, head and neck, thorax, abdomen, and pelvis. (B) Successful autosegmentation models rely on several steps including: data collection and curation, pre-processing and data ingestion, splitting datasets into train/validate/test sets, hyper-parameter optimization and tuning, architecting networks, post-processing and visualization, and aggregating outputs from ensembles of networks.

Success of HECKTOR challenge at MICCAI 2022

The satellite event of HECKTOR (HEad and neCK TumOR segmentation and outcome prediction) at MICCAI 2022 was a great success in terms of participation and quality.

Congratulations to all participants, and particularly to the winners:

  • Task 1 (segmentation) winner: Andriy Myronenko et al., team NVAUTO
  • Task 2 (outcome prediction) winner: Louis Rebaud et al., team LITO

Submissions have now re-opened on grand-challenge; we are looking forward to new submissions.

Find out more about the HECKTOR challenge in this amazing post.

New article presenting SKET – The Semantic Knowledge Extractor Tool

Our article entitled “Empowering Digital Pathology Applications through Explainable Knowledge Extraction Tools” has been published in the Journal of Pathology Informatics.

The Semantic Knowledge Extractor Tool (SKET) is an unsupervised hybrid system that combines rule-based techniques with pre-trained machine learning models to extract key pathological concepts from diagnostic reports. SKET is a viable solution to reduce pathologists’ workload and can be used as a first, inexpensive way to bootstrap supervised models in the absence of manual annotations.
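
To give a flavor of the rule-based part of such a pipeline, here is a toy Python sketch that maps free-text report snippets to a small set of pathological concepts usable as weak labels; the concept vocabulary and patterns are invented for illustration and are not SKET’s actual knowledge base or matching logic.

import re

# Invented, illustrative concept vocabulary and synonym patterns (not SKET's knowledge base).
CONCEPT_RULES = {
    "adenocarcinoma": [r"adenocarcinoma"],
    "high-grade dysplasia": [r"high[- ]grade dysplasia", r"severe dysplasia"],
    "low-grade dysplasia": [r"low[- ]grade dysplasia", r"mild dysplasia"],
    "hyperplastic polyp": [r"hyperplastic polyp"],
    "normal tissue": [r"no (evidence of )?(dysplasia|malignancy)", r"normal mucosa"],
}

def extract_concepts(report: str) -> list:
    """Return the concepts whose patterns occur in the report text."""
    text = report.lower()
    return [concept for concept, patterns in CONCEPT_RULES.items()
            if any(re.search(pattern, text) for pattern in patterns)]

report = "Colonic biopsy: hyperplastic polyp. No evidence of dysplasia or malignancy."
print(extract_concepts(report))  # ['hyperplastic polyp', 'normal tissue']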

SKET architecture.

SKET eXplained (SKET X) is a web-based system that supports pathologists and domain experts in visually understanding SKET predictions. SKET X allows parameters and rules to be refined over time, thus improving the system’s effectiveness and increasing users’ trust and confidence.

SKET X dashboard providing information about the executed SKET pipelines

Article published in npj Digital Medicine

Our paper on “Unleashing the potential of digital pathology data by training computer-aided diagnosis models without human annotations”, by Niccolò Marini et al., has been published in npj Digital Medicine and is available in open access.

This work presents an approach that removes the need for human experts to annotate data, using an automatic analysis of healthcare reports to create automatic annotations that can be used to train deep learning models. A case study on the classification of colon whole slide images shows the benefit of the approach to best exploit the potential of healthcare data from hospital workflows.

(a) Input data from the clinical workflow. (b) Image classification pipeline. (c) The textual report pipeline automatically analyzes pathologist reports to identify meaningful concepts to be used as weak labels for the CNN.

Article on a global taxonomy of interpretable AI published in AI Review

Our work on “A global taxonomy of interpretable AI: unifying the terminology for the technical and social sciences”, by Mara Graziani et al., has been published in Artificial Intelligence Review, Springer.

On April 29th, 2021, we organized a debate to bring together researchers from multidisciplinary backgrounds to collaborate on a global definition of interpretability that may be used with high versatility in the documentation of social, cognitive, philosophical, ethical and legal concerns about AI. One of the many interesting outcomes is the definition of interpretable AI that was reached:

“An AI system is interpretable if it is possible to translate its working principles and outcomes in human-understandable language without affecting the validity of the system”.

The article is published and available online in open access.

Article published in npj Digital Medicine

Our article entitled “Unleashing the potential of digital pathology data by training computer-aided diagnosis models without human annotations”, by Niccolò Marini et al., has been published in npj Digital Medicine from Nature Portfolio.

In this work, we present an approach that removes the need for medical experts when building assisting tools for clinical decision support, paving the way for the exploitation of exa-scale sources of medical data collected worldwide from hospital workflows. 

The combination of large amounts of healthcare data with new artificial intelligence technologies is leading to the creation of new systems to assist medical experts during diagnosis.

However, this potential is not fully exploited because of a bottleneck: the need for medical experts to analyze and annotate healthcare data.

This study presents an approach aiming to remove the need for human experts to annotate data: the reports corresponding to healthcare data (such as CT scans, MRI, or whole slide images) are automatically analyzed to create annotations that can be used to train deep learning models.

The approach involves two components: a natural language processing algorithm, called SKET (Semantic Knowledge Extractor Tool), that analyzes the reports and extracts meaningful concepts, and a computer vision algorithm, a convolutional neural network trained to classify medical images using the concepts extracted by SKET as weak labels.
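
Assuming the extracted concepts are encoded as multi-label binary targets, the second component could look like the following minimal PyTorch sketch (one toy training step on random tensors standing in for image patches and weak labels); the backbone, loss and shapes are illustrative, and the actual pipeline, e.g. multiple-instance learning over whole-slide images, is more involved.

import torch
import torch.nn as nn
import torchvision

n_concepts = 5                                     # e.g., the number of concepts produced by SKET
model = torchvision.models.resnet18(weights=None)  # illustrative backbone
model.fc = nn.Linear(model.fc.in_features, n_concepts)

criterion = nn.BCEWithLogitsLoss()                 # multi-label objective for weak labels
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

# One toy training step on random tensors standing in for image patches and weak labels.
images = torch.randn(8, 3, 224, 224)
weak_labels = torch.randint(0, 2, (8, n_concepts)).float()

optimizer.zero_grad()
loss = criterion(model(images), weak_labels)
loss.backward()
optimizer.step()
print(f"training loss: {loss.item():.3f}")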

The code is available in our GitHub repository for the computer vision algorithm and the natural language processing algorithm.

Teaching and Awards at the VISUM 2022 summer school

Henning Müller and Mara Graziani gave lectures, hands-on sessions and mentoring at the VISion Understanding and Machine intelligence (VISUM) 2022 summer school in Porto.

  • Henning gave an excellent lecture on Explainable AI.
  • Under Mara’s mentorship, Laura O’Mahony from the University of Limerick was awarded the best mentorship pitch! Within the very short duration of the summer school, Laura and Mara came up with an interesting small project on using interpretability for model compression.
Mara presenting the on-site hands-on session for the Explainable AI track

Article published in Bioinformatics with shared ImageJ plugin

Our work on automatic template detection for microscopy images, “Steer’n’Detect: Fast 2D Template Detection with Accurate Orientation Estimation”, by V. Uhlmann, Z. Püspöki, A. Depeursinge, M. Unser, D. Sage and J. Fageot, has been accepted for publication in the Bioinformatics journal.

In this work, we introduce Steer’n’Detect, an ImageJ plugin to detect patterns of interest at any orientation with high accuracy from a single template in 2D images.
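
To make the task concrete, here is a naive Python baseline that detects a template at an unknown orientation by brute force, correlating rotated copies of the template with the image and keeping the best angle per position; this is explicitly not the fast steerable-filter approach implemented in Steer’n’Detect, only an illustration of the problem it solves efficiently.

import numpy as np
from scipy.ndimage import rotate
from scipy.signal import fftconvolve

def match_rotated_template(image: np.ndarray, template: np.ndarray, n_angles: int = 36):
    """Return per-pixel best correlation score and the corresponding template angle (degrees)."""
    best_score = np.full(image.shape, -np.inf)
    best_angle = np.zeros(image.shape)
    for angle in np.linspace(0.0, 360.0, n_angles, endpoint=False):
        rotated = rotate(template, angle, reshape=False, order=1)
        rotated = rotated - rotated.mean()                             # zero-mean template
        score = fftconvolve(image, rotated[::-1, ::-1], mode="same")   # cross-correlation
        better = score > best_score
        best_score[better] = score[better]
        best_angle[better] = angle
    return best_score, best_angle

# Toy usage with random arrays standing in for a microscopy image and a template.
rng = np.random.default_rng(0)
image = rng.random((128, 128))
template = rng.random((15, 15))
scores, angles = match_rotated_template(image, template)
best = np.unravel_index(scores.argmax(), scores.shape)
print(best, angles[best])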

Detection of rod-shaped bacteria with Steer’n’Detect (synthetic image degraded by self-similar Gaussian random field background noise).
Detection of spermatozoan axoneme with Steer’n’Detect (transmission electron microscopy image) (Reused from http://www.cellimagelibrary.org/images/35970). Top left: image crop used as template; bottom left: resulting detector; right: detection results