CLEF 2017 working notes now available online

The CLEF 2017 Working Notes have been published in CEUR-WS as volume 1866.

The CLEF 2017 conference is the eighteenth edition of the popular CLEF campaign and workshop series, which has run since 2000 and contributes to the systematic evaluation of multilingual and multimodal information access systems, primarily through experimentation on shared tasks. In 2010 CLEF was relaunched in a new format, as a conference with research presentations, panels, poster and demo sessions, and laboratory evaluation workshops. These are proposed and operated by groups of organizers volunteering their time and effort to define, promote, administer and run an evaluation activity. CLEF 2017 was hosted by the ADAPT Centre, Dublin City University and Trinity College Dublin from 11 to 14 September 2017. This year’s conference was also co-located with MediaEval, and the program included joint sessions between MediaEval and CLEF to allow for cross-fertilisation.

Medical Computer Vision 2016 proceedings available online

The proceedings of the Medical Computer Vision (MCV) 2016 workshop, held at MICCAI 2016, are now available online.

The goal of the MCV workshop is to explore the use of “big data” algorithms for harvesting, organizing and learning from large-scale medical imaging data sets and for general-purpose automatic understanding of medical images.
The BAMBI workshop aims to highlight the potential of using Bayesian or random field graphical models for advancing research in biomedical image analysis.

NVIDIA GPU grant approved for MEGANE PRO project

The NVIDIA GPU grant program awarded a Titan Xp to the MEGANE PRO project.

Hand amputations are highly impairing and can dramatically affect people’s capabilities. Man-machine interfaces that can control hand prostheses have been developed, but natural control methods are still only rarely applied in real life. The awarded GPU will allow us to advance this research field, which has high scientific and social impact. The project includes the development of highly specific data classification and fusion algorithms based on convolutional and recurrent neural networks. The algorithms will make it possible to control a 3D-printed prosthetic hand and/or a virtual-reality-based robotic hand simulator in real time.

ICORR2017 paper selected for best poster competition

The ICORR 2017 submission ‘Repeatability of grasp recognition for robotic hand prosthesis control based on sEMG data’ by Francesca Palermo et al. was selected for the RehabWeek Best Poster Competition.

The IEEE International Conference on Rehabilitation Robotics (ICORR) 2017 will take place from July 17-20 in London, UK.


Control methods based on sEMG have obtained promising results for hand prosthetics. However, control system robustness is still often inadequate and does not allow amputees to perform the large number of movements useful in everyday life. Only a few studies have analyzed the repeatability of sEMG classification of hand grasps. The main goals of this paper are to explore repeatability in sEMG data and to release a repeatability database with the recorded experiments. The data were recorded from 10 intact subjects repeating 7 grasps 12 times, twice a day for 5 days, and are publicly available on the Ninapro web page. The repeatability analysis is based on comparing movement classification accuracy across several data acquisitions and different subjects. It is performed using mean absolute value and waveform length features and a Random Forest classifier. The accuracy obtained by training and testing on acquisitions recorded at different times is on average 27.03% lower than when training and testing on the same acquisition. The results obtained by training and testing on different acquisitions suggest that previous acquisitions can be used to train the classification algorithms. The inter-subject variability is remarkable, suggesting that specific characteristics of the subjects can affect repeatability and sEMG classification accuracy. In conclusion, the results of this paper can contribute to the development of more robust control systems for hand prostheses, while the presented data allow researchers to test repeatability in further analyses.
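The two features named in the abstract, mean absolute value (MAV) and waveform length (WL), are standard sliding-window sEMG descriptors and are simple to compute. The sketch below is an illustrative reimplementation, not the code used in the paper: the window length, step size, channel count and the random stand-in signal are all assumptions, and the Random Forest step (e.g. scikit-learn’s RandomForestClassifier trained on the resulting feature matrix) is omitted.

```python
import numpy as np

def mav(window):
    # Mean Absolute Value: average of the rectified signal, per channel
    return np.mean(np.abs(window), axis=0)

def waveform_length(window):
    # Waveform Length: cumulative absolute difference between
    # consecutive samples, per channel
    return np.sum(np.abs(np.diff(window, axis=0)), axis=0)

def extract_features(signal, win_len, step):
    # Sliding-window feature extraction over a (samples, channels)
    # sEMG recording; each row of the result is [MAV..., WL...]
    feats = []
    for start in range(0, signal.shape[0] - win_len + 1, step):
        w = signal[start:start + win_len]
        feats.append(np.concatenate([mav(w), waveform_length(w)]))
    return np.array(feats)

# Hypothetical example: 2 s of 12-channel sEMG at 2 kHz (random stand-in data)
rng = np.random.default_rng(0)
emg = rng.standard_normal((4000, 12))
X = extract_features(emg, win_len=400, step=100)  # 200 ms windows, 50 ms step
print(X.shape)  # -> (37, 24): one row per window, MAV + WL per channel
```

The resulting feature matrix would then be fed, window by window, to a classifier such as a Random Forest to predict the grasp label of each window.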

VISCERAL book now available online

The book ‘Cloud-based benchmarking of medical image analysis’, related to the VISCERAL project, can now be accessed online.

This book presents the VISCERAL project benchmarks for analysis and retrieval of 3D medical images (CT and MRI) on a large scale, which used an innovative cloud-based evaluation approach where the image data were stored centrally on a cloud infrastructure and participants placed their programs in virtual machines on the cloud. The book presents the points of view of both the organizers of the VISCERAL benchmarks and the participants.

The book is divided into five parts. Part I presents the cloud-based benchmarking and Evaluation-as-a-Service paradigm that the VISCERAL benchmarks used. Part II focuses on the datasets of medical images annotated with ground truth created in VISCERAL that continue to be available for research. It also covers the practical aspects of obtaining permission to use medical data and manually annotating 3D medical images efficiently and effectively. The VISCERAL benchmarks are described in Part III, including a presentation and analysis of metrics used in evaluation of medical image analysis and search. Lastly, Parts IV and V present reports by some of the participants in the VISCERAL benchmarks, with Part IV devoted to the anatomy benchmarks and Part V to the retrieval benchmark.

This book has two main audiences: the datasets as well as the segmentation and retrieval results are of most interest to medical imaging researchers, while eScience and computational science experts benefit from the insights into using the Evaluation-as-a-Service paradigm for evaluation and benchmarking on huge amounts of data.