The Latsis University Prize recognizes outstanding work by young researchers. Mara Graziani has been honored with this prestigious award for her contributions to the field of trustworthy AI. Her research has significantly improved the comprehensibility of deep learning models and their ability to generalize to unseen datasets.
Many current artificial intelligence (AI) algorithms operate as “black boxes”: they cannot explain the reasoning behind their predictions. This lack of transparency poses significant challenges for the use and regulation of AI-based devices in high-stakes contexts.
During her Ph.D. at MedGIFT (HES-SO) and UNIGE, Mara developed several innovative methods that shed light on the inner workings of complex deep learning models. She also made substantial progress on multi-task learning methods that guide models to focus on essential features, an approach shown to greatly improve the models’ generalization and resilience to domain shifts.
In addition to her remarkable research contributions, Mara was honored with the prize for her commitment to the AI community. In 2021, she initiated the “Introduction to Interpretable AI” expert network, aimed at fostering global discussions on deep learning interpretability. This initiative has reached hundreds of students worldwide and received support from various AI experts who have contributed through online seminars, lecture notes, and open-source code.
Link to the award ceremony here (38’45”)