Article published in Bioinformatics with shared ImageJ plugin

Our work on automatic template detection for microscopy images, “Steer’n’Detect: Fast 2D Template Detection with Accurate Orientation Estimation”, by V. Uhlmann, Z. Püspöki, A. Depeursinge, M. Unser, D. Sage and J. Fageot, has been accepted for publication in the Bioinformatics journal.

In this work, we introduce Steer’n’Detect, an ImageJ plugin to detect patterns of interest at any orientation with high accuracy from a single template in 2D images.

Detection of rod-shaped bacteria with Steer’n’Detect (synthetic image degraded by self-similar Gaussian random field background noise).
Detection of a spermatozoan axoneme with Steer’n’Detect (transmission electron microscopy image, reused from http://www.cellimagelibrary.org/images/35970). Top left: image crop used as template; bottom left: resulting detector; right: detection results.

Paper on H&N segmentation accepted at EMBC 2022

Our paper on “Segmentation and Classification of Head and Neck Nodal Metastases and Primary Tumors in PET/CT”, by V. Andrearczyk, V. Oreiller, M. Jreige, J. Castelli, J. O. Prior and A. Depeursinge, has been accepted for presentation at the IEEE International Engineering in Medicine and Biology Conference (EMBC), held in Glasgow, 11-15 July 2022.

The prediction of cancer characteristics, treatment planning and patient outcome from medical images generally requires tumor delineation. In Head and Neck (H&N) cancer, the automatic segmentation and differentiation of primary Gross Tumor Volumes (GTVt) and malignant lymph nodes (GTVn) is a necessary step for large-scale radiomics studies. We developed a bi-modal 3D U-Net model to automatically segment GTVt and GTVn individually in PET/CT images. The model is trained for multi-class and multi-component segmentation on the multi-centric HECKTOR 2020 dataset.
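Segmentation quality for GTVt and GTVn is evaluated with the Dice similarity coefficient (DSC). As a minimal sketch (not the paper's evaluation code; label values and volumes below are illustrative), a per-class DSC on labeled 3D volumes can be computed as:

```python
import numpy as np

def dice_score(pred, truth, label):
    """Dice similarity coefficient (DSC) for one class label."""
    p = (pred == label)
    t = (truth == label)
    denom = p.sum() + t.sum()
    if denom == 0:
        return 1.0  # both masks empty: perfect agreement by convention
    return 2.0 * np.logical_and(p, t).sum() / denom

# Toy 3D volumes with labels 0 (background), 1 (GTVt), 2 (GTVn)
truth = np.zeros((4, 4, 4), dtype=int)
truth[1:3, 1:3, 1:3] = 1          # 8-voxel ground-truth tumor
pred = truth.copy()
pred[1, 1, 1] = 0                 # one voxel missed by the "model"
print(dice_score(pred, truth, label=1))
```

With 7 overlapping voxels out of 7 predicted and 8 true, this yields 2*7/(7+8) ≈ 0.933.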

Illustrations of 2D PET (SUV scale 0-7 mg/L) and CT slices overlaid with GTVt (blue) and GTVn (red) automatic segmentations. The corresponding DSC are reported in the captions. The ground truth is in bright color, the prediction in dark color. (a,b) Correctly detected and segmented GTVt and GTVn; (c) One GTVn correctly segmented (right), one largely under-segmented (left); (d) A GTVn misclassified as a GTVt. The standard DSC is reported for each case to evaluate the 3D segmentation of GTVt and GTVn.

HECKTOR challenge officially at MICCAI 2022

We are pleased to announce that the HECKTOR (HEad and neCK TumOR segmentation and outcome prediction in PET/CT images) challenge has been renewed for a third edition at MICCAI 2022 (September 18-22, 2022, Singapore).

The challenge was created in 2020 to offer an opportunity to compare automated algorithms dedicated to the segmentation of head and neck primary tumors in FDG-PET/CT multimodal images. The first edition relied on a dataset of 254 patients from 5 centers and attracted about 20 challengers. In 2021, the challenge was renewed with a larger dataset (325 patients from 6 centers) and the addition of a second task, the prediction of progression-free survival (PFS). This second edition was quite successful, with twice the number of submissions for the first task (~40) and 30 for the second task [2].

This year, challengers are again invited to compete in the same tasks, although the segmentation one now includes the fully automated detection and segmentation of lymph nodes in addition to the primary tumor, and the dataset should contain more than a thousand patients collected from 9 participating centers.

For 2022, the challenge is hosted on the grand-challenge.org platform: https://hecktor.grand-challenge.org/. Registration will open soon.

The full list of MICCAI challenges is now online: https://conferences.miccai.org/2022/en/MICCAI2022-CHALLENGES.html

Paper accepted at the BioNLP workshop at ACL 2022

Our paper “DISTANT-CTO: A Zero Cost, Distantly Supervised Approach to Improve Low-Resource Entity Extraction Using Clinical Trials Literature”, by A. Dhrangadhariya and H. Müller, has been accepted for presentation at the BioNLP workshop at ACL 2022 (Dublin, 22-27 May).

The paper takes strides toward democratizing entity/span-level PICO recognition by automatically creating a large entity-annotated dataset using open-access resources. PICO extraction is a vital but tedious step in writing systematic reviews.

The approach uses gestalt pattern matching and a modified match-scoring scheme to flexibly align structured terms from http://clinicaltrials.gov onto unstructured text and select high-confidence match candidates.

The method is extensible and data-centric, which allows for continuous extension of the dataset and for optimization of the automatic annotation method toward recall-oriented PICO recognition.
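Python's standard difflib implements this kind of Ratcliff/Obershelp "gestalt" similarity. As a rough sketch only (DISTANT-CTO uses a modified scoring; the function name, sentence and threshold below are illustrative), aligning a structured term onto free text could look like:

```python
from difflib import SequenceMatcher

def best_alignment(term, text, threshold=0.85):
    """Slide the term over the text and keep the window with the best
    gestalt (Ratcliff/Obershelp) similarity ratio.
    Returns (start, end, score) or None if below the threshold."""
    term = term.lower()
    text_l = text.lower()
    n = len(term)
    best = None
    for start in range(0, max(1, len(text_l) - n + 1)):
        window = text_l[start:start + n]
        score = SequenceMatcher(None, term, window).ratio()
        if best is None or score > best[2]:
            best = (start, start + n, score)
    if best and best[2] >= threshold:
        return best
    return None

sentence = "Patients received intravenous metoprolol before surgery."
match = best_alignment("Metoprolol", sentence)
print(match)  # exact match found at characters 30-40 with score 1.0
```

Such character-level scoring is what makes the alignment tolerant to minor spelling and formatting differences between registry entries and abstract text.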

Overview of the proposed DISTANT-CTO method

Article published in Computers in Biology and Medicine

Our work on “Grading diabetic retinopathy and prostate cancer diagnostic images with deep quantum ordinal regression”, by S. Toledo-Cortés, D. H. Useche, H. Müller and F. A. González, has been published in Computers in Biology and Medicine (Elsevier).

A new end-to-end neural probabilistic ordinal regression method is proposed to predict a probability distribution over a range of grades. The model, evaluated on prostate cancer diagnosis and diabetic retinopathy grade estimation, can also quantify its uncertainty.
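For intuition only (the paper's model is quantum-inspired; this sketch is instead a classic cumulative-link ordinal head with made-up thresholds), an ordinal regressor can turn a scalar score into a distribution over ordered grades:

```python
import numpy as np

def ordinal_distribution(score, thresholds):
    """Cumulative-link ordinal model: P(grade <= k) = sigmoid(t_k - score);
    differences of adjacent cumulative probabilities give P(grade = k)."""
    t = np.asarray(thresholds, dtype=float)
    cum = 1.0 / (1.0 + np.exp(-(t - score)))   # P(grade <= k), k = 0..K-2
    cum = np.concatenate([cum, [1.0]])         # P(grade <= K-1) = 1
    probs = np.diff(np.concatenate([[0.0], cum]))
    return probs

# Five DR grades (0-4) with illustrative, increasing thresholds
probs = ordinal_distribution(score=1.2, thresholds=[-1.0, 0.5, 2.0, 3.5])
print(probs, probs.sum())
```

Because the output is a full distribution rather than a single class, its spread can serve as a simple uncertainty signal, which is the property the paper exploits.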

Overview of the proposed approach for diabetic retinopathy (DR) grading. An Inception-V3 network was used as feature extractor for the eye fundus image. These features were the input for the regressor model, which yielded a posterior probability distribution over the DR grades.

Winner of the BraTS-Reg challenge

Congratulations to Marek Wodzinski, Niccolò Marini and Henning Müller for winning the BraTS-Reg challenge hosted at ISBI 2022.

The goal of the challenge was to propose the most accurate medical image registration method to align pre-operative to follow-up MRIs. Our team proposed a hybrid approach based on a multilevel deep network combined with iterative instance optimization.
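"Instance optimization" here means refining the alignment for each image pair at test time rather than relying on a single feed-forward prediction. As a toy sketch only (the real method refines a deep network's displacement field by gradient descent; the exhaustive integer-translation search below is purely illustrative), the idea can be shown as:

```python
import numpy as np

def instance_optimize_shift(fixed, moving, max_shift=3):
    """Toy per-instance optimization: exhaustively search integer
    translations and keep the one minimizing mean squared error."""
    best = (0, 0)
    best_err = np.inf
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            shifted = np.roll(moving, (dy, dx), axis=(0, 1))
            err = np.mean((fixed - shifted) ** 2)
            if err < best_err:
                best_err, best = err, (dy, dx)
    return best

fixed = np.zeros((16, 16))
fixed[5:9, 5:9] = 1.0                              # a square "structure"
moving = np.roll(fixed, (2, -1), axis=(0, 1))      # known misalignment
print(instance_optimize_shift(fixed, moving))      # recovers (-2, 1)
```

The deep network provides a fast initial alignment; the per-pair optimization then squeezes out the remaining error, which is what makes such hybrids competitive.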

Paper accepted for presentation at MIDL 2022

Our paper on nucleus segmentation with local rotation invariance has been accepted for poster presentation at the Medical Imaging with Deep Learning (MIDL) conference 2022. As one of the first in-person (hybrid) conferences in our field, MIDL, held in Zürich, 6-8 July 2022, will be a great opportunity to catch up with researchers in our field and discuss our work.

Illustration of the robustness of segmentation predictions with respect to input orientation. The red color map indicates the pixel-wise difference averaged across the six pairs of 90° rotations.

Full paper “Multi-Organ Nucleus Segmentation Using a Locally Rotation Invariant Bispectrum U-Net”, by V. Oreiller, J. Fageot, V. Andrearczyk, J. O. Prior and A. Depeursinge, available online: https://openreview.net/forum?id=paGzvj2t_x
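The robustness measure in the figure caption, prediction differences under rotated inputs, can be sketched generically (a 2D toy with an identity "model", not the paper's exact protocol):

```python
import numpy as np

def rotation_robustness(model, image):
    """Average pixel-wise disagreement between the prediction on the
    original image and predictions on 90-degree-rotated inputs
    (rotated back before comparison)."""
    base = model(image)
    diffs = []
    for k in (1, 2, 3):
        rotated_pred = model(np.rot90(image, k))
        realigned = np.rot90(rotated_pred, -k)
        diffs.append(np.abs(base - realigned))
    return np.mean(diffs, axis=0)  # per-pixel average difference map

# A perfectly rotation-equivariant "model" (here the identity function)
# produces an all-zero difference map
img = np.random.rand(8, 8)
diff_map = rotation_robustness(lambda x: x, img)
print(diff_map.max())  # 0.0
```

A locally rotation invariant architecture aims to keep this map close to zero for real segmentation networks, not just for the trivial identity model.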

Article on feature stability published in Nature Scientific Reports

Our work on “Assessing radiomics feature stability with simulated CT acquisitions” has been published in Nature Scientific Reports. The authors, K. Flouris, O. Jimenez-del-Toro, C. Aberle, M. Bach, R. Schaer, M. M. Obmann, B. Stieltjes, H. Müller, A. Depeursinge and E. Konukoglu, developed and validated a Computed Tomography (CT) simulator and showed that the variability, stability and discriminative power of the radiomics features extracted from the virtual phantom images are similar to those observed in a tandem phantom study. They also demonstrated that the simulator can be used to assess radiomics features’ stability and discriminative power.

More details in the article available online: https://www.nature.com/articles/s41598-022-08301-1

Annotated regions of interest on the anthropomorphic phantom.

HECKTOR 2021 proceedings available online

The proceedings of the HEad and neCK TumOR segmentation and outcome prediction in PET/CT images (HECKTOR) 2021 challenge, at MICCAI, are now available in a Springer LNCS book.

https://link.springer.com/book/10.1007/978-3-030-98253-9

The participants’ papers describe the various innovative methods, and our overview paper provides a description of the challenge, the data, the methods and the results.

Stay tuned for the next edition of the challenge: HECKTOR 2022!

Paper published in Clinical and Translational Radiation Oncology

In our paper “Cleaning Radiotherapy Contours for Radiomics Studies, is it Worth it? A Head and Neck Cancer Study”, published in ctRO (Elsevier), we study the benefit of delineating tumor contours specifically for radiomics analyses (automatic prognostic prediction for patients with Head and Neck cancer). The contours originating from radiotherapy planning often include parts of surrounding organs (e.g. trachea, bones), which impacts the extraction of visual features characterizing the tumor.

Example of VOI delineation: Radiotherapy (green), Resegmented (purple) and Dedicated (blue) contours overlaid on a fused FDG-PET/CT image. The blue contour is closer to the true volume of the primary tumor.

SwissNeuroRehab, a new model of neurorehabilitation. Flagship project funded by Innosuisse

In 2021, the Swiss innovation agency Innosuisse launched the Flagship funding scheme, aimed at stimulating systemic innovation and transdisciplinary collaboration to solve future challenges relevant to the Swiss economy and society. SwissNeuroRehab is one of the 15 projects supported out of 84 submitted, funded by Innosuisse with CHF 11.2 million over 5 years.

As part of a large Swiss consortium working on this project, our group at the University of Applied Sciences of Western Switzerland (HES-SO) will lead the “Data” sub-project.


SwissNeuroRehab aims at developing a novel model of neurorehabilitation along the continuum of care, from the hospital to home. In the first phase, the project will focus on stroke, traumatic brain injury and spinal cord injury. Through the partnership between university hospitals, research centers, neurorehabilitation clinics, therapists and industry, SwissNeuroRehab will combine the best available approaches for neurorehabilitation with new digital and technological methods to create innovative and efficient therapeutic programs tailored to the individual needs of patients and their families.

More details available here (in French).

Abstracts Accepted at Conferences on Childhood Disability

We will be presenting our work at two conferences: the Swiss Academy of Childhood Disability (SACD), 20th January 2022 and the European Academy of Childhood Disability (EACD), which will take place 18-21 May 2022 in Barcelona.

At SACD, our paper “Serious games embedded in virtual reality as a visual rehabilitation tool for individuals with pediatric amblyopia: A protocol for a crossover randomized controlled trial”, by C. Simon-Martinez, B. Backus, B. Dornbos, M.-P. Antoniou, M. Kropp, G. Thumann, W. Bouthour, H. Steffen and P. J. Matusz, received the 2nd prize in the category of study protocols.


Award ceremony at SACD: 2nd prize in the category of study protocols

At EACD, we will present our work on “The use of DeepLabCut to detect and quantify mirror movements in children with unilateral cerebral palsy”, by B. Berclaz, H. Haberfehlner, K. Klingels, H. Feys, P. Meyns, H. Müller and C. Simon-Martinez.

At EACD, Cristina Simon-Martinez is also co-author of two other abstracts, one with KU Leuven and one with the University Hospital in Bern, that were accepted as oral presentations! Meet our team there to discuss all this amazing work.

Article on Hand Prostheses Control published in Frontiers in AI

Our work on “Improving Robotic Hand Prostheses Control with Eye Tracking and Computer Vision: a Multimodal Approach based on the Visuomotor Behavior of Grasping”, by M. Cognolato, M. Atzori, R. Gassert and H. Müller, was published in Frontiers in Artificial Intelligence, section Machine Learning and Artificial Intelligence.

In this paper, we investigate a multimodal approach that exploits the use of eye-hand coordination to improve the control of myoelectric hand prostheses.

Example of a typical unimodal and multimodal analysis process flow.

Full paper available open-access online: https://www.frontiersin.org/articles/10.3389/frai.2021.744476/full

Congratulations to Mara Graziani for her PhD

Mara Graziani successfully defended her PhD thesis on the interpretability of deep learning for medical image analysis. This excellent work was supervised by Prof. Stephane Marchand-Maillet (UNIGE) and Prof. Henning Müller (HES-SO, UNIGE). Prof. Mauricio Reyes (UniBE) was part of the external committee.

Mara defending her work (left) and later literally opening the black-box model with PhD gifts (right)

FishLab: An innovative video system to observe and take a census of fish populations

FishLab, led by COREALIS and our Institute of Information Systems at HES-SO, will analyze near real-time fish flows and migration using machine learning algorithms on videos. Preliminary experiments with three stations located on the Rhône and Aare rivers have already enabled improvements to the embedded system that detects and counts fish.

More information (in French or German): here

Paper on Hand Prostheses Control published in Sensors

Our paper “Questioning Domain Adaptation in Myoelectric Hand Prostheses Control: An Inter- and Intra-Subject Study”, by Giulio Marano et al., has been published in Sensors as part of the Special Issue on Biomedical Sensors for Functional Mapping.

In this study, we question the benefit of domain adaptation in transfer learning techniques (using pre-trained models obtained from prior subjects) applied to machine learning algorithms for automatic myoelectric hand prostheses control.

For more information, check the paper now available online: https://www.mdpi.com/1424-8220/21/22/7500/pdf