{"id":2163,"date":"2021-01-15T07:41:43","date_gmt":"2021-01-15T07:41:43","guid":{"rendered":"http:\/\/medgift.hevs.ch\/wordpress\/?p=2163"},"modified":"2021-01-15T07:41:46","modified_gmt":"2021-01-15T07:41:46","slug":"invited-talk-of-mara-graziani-on-deep-learning-interpretability-at-cibm","status":"publish","type":"post","link":"https:\/\/medgift.hevs.ch\/wordpress\/invited-talk-of-mara-graziani-on-deep-learning-interpretability-at-cibm\/","title":{"rendered":"Invited talk of Mara Graziani on Deep Learning Interpretability at CIBM"},"content":{"rendered":"\n<p>Our PhD student Mara Graziani discussed the topic of defining machine learning interpretability at <a href=\"https:\/\/cibm.ch\/\">CIBM<\/a> (Center for Biomedical Imaging, Switzerland). She also presented our latest applications to the histopathology domain. In particular, she covered our recent work on the &#8220;Evaluation and Comparison of CNN Visual Explanations for Histopathology&#8221;. She then explained how interpretability can be used in a proactive way to improve model performance. You can watch her talk online at this link:&nbsp;<a rel=\"noreferrer noopener\" href=\"https:\/\/www.youtube.com\/watch?v=7hs21U-3hgk&amp;feature=youtu.be\" target=\"_blank\">https:\/\/www.youtube.com\/watch?v=7hs21U-3hgk&amp;feature=youtu.be<\/a><\/p>\n\n\n\n<p>Below, the information about the presentation:<\/p>\n\n\n\n<p><strong>Title: <\/strong>A myth-busting attempt for DL interpretability: discussing taxonomies, methodologies and applications to medical imaging.<\/p>\n\n\n\n<p><strong>Abstract:&nbsp;<\/strong><\/p>\n\n\n\n<p>Deep Learning (DL) models report almost perfect accuracy on some medical tasks, though this seems to plunge in real-world practices [1]. Started in 2009 as the generation of deep visualizations [2, 3], the field of interpretability has grown and developed over the years, with the intent of understanding why such failures happen and discovering hidden and erroneous behaviors. 
Several interpretability techniques were then developed to address the fundamentally incomplete problem of evaluating DL models on task performance alone [4].&nbsp;<\/p>\n\n\n\n<p>While defining the key terms used in the field, I will try to bust some myths about DL interpretability: are &#8220;explainable&#8221; and &#8220;interpretable&#8221; the same thing? Is a &#8220;transparent&#8221; model an &#8220;interpretable&#8221; model? In addition, within applications in the field of medical imaging, I will describe the risk of confirmation bias and present our work on evaluating the reliability of interpretability methods. Finally, I will present examples from our previous work on how interpretable AI can be used to improve model performance and reduce the distance between clinical and DL ontologies.&nbsp;<\/p>\n\n\n\n<p>[1] Yune, S., et al. &#8220;Real-world performance of deep-learning-based automated detection system for intracranial hemorrhage.&#8221; <em>2018 SIIM Conference on Machine Intelligence in Medical Imaging: San Francisco<\/em> (2018).<\/p>\n\n\n\n<p>[2] Erhan, D., et al. &#8220;Visualizing Higher-Layer Features of a Deep Network.&#8221; (2009).<\/p>\n\n\n\n<p>[3] Zeiler, Matthew D., and Rob Fergus. &#8220;Visualizing and understanding convolutional networks.&#8221; <em>European Conference on Computer Vision<\/em>. Springer, Cham, 2014.<\/p>\n\n\n\n<p>[4] Doshi-Velez, Finale, and Been Kim. 
&#8220;Towards A Rigorous Science of Interpretable Machine Learning.&#8221; arXiv preprint arXiv:1702.08608 (2017).<\/p>\n\n\n\n<figure class=\"wp-block-embed-youtube wp-block-embed is-type-video is-provider-youtube wp-embed-aspect-4-3 wp-has-aspect-ratio\"><div class=\"wp-block-embed__wrapper\">\n<span class=\"embed-youtube\" style=\"text-align:center; display: block;\"><iframe loading=\"lazy\" class=\"youtube-player\" width=\"525\" height=\"296\" src=\"https:\/\/www.youtube.com\/embed\/7hs21U-3hgk?version=3&#038;rel=1&#038;showsearch=0&#038;showinfo=1&#038;iv_load_policy=1&#038;fs=1&#038;hl=en-US&#038;autohide=2&#038;wmode=transparent\" allowfullscreen=\"true\" style=\"border:0;\" sandbox=\"allow-scripts allow-same-origin allow-popups allow-presentation allow-popups-to-escape-sandbox\"><\/iframe><\/span>\n<\/div><\/figure>\n","protected":false},"excerpt":{"rendered":"<p>Our PhD student Mara Graziani discussed the topic of defining machine learning interpretability at CIBM (Center for Biomedical Imaging, Switzerland). She also presented our latest applications to the histopathology domain. In particular, she covered our recent work on the &#8220;Evaluation and Comparison of CNN Visual Explanations for Histopathology&#8221;. 
She then explained how interpretability can be &hellip; <\/p>\n<p class=\"link-more\"><a href=\"https:\/\/medgift.hevs.ch\/wordpress\/invited-talk-of-mara-graziani-on-deep-learning-interpretability-at-cibm\/\" class=\"more-link\">Continue reading<span class=\"screen-reader-text\"> &#8220;Invited talk of Mara Graziani on Deep Learning Interpretability at CIBM&#8221;<\/span><\/a><\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"closed","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"_jetpack_memberships_contains_paid_content":false,"footnotes":""},"categories":[1],"tags":[],"class_list":["post-2163","post","type-post","status-publish","format-standard","hentry","category-uncategorized"],"jetpack_featured_media_url":"","jetpack_shortlink":"https:\/\/wp.me\/p8AP2d-yT","jetpack_sharing_enabled":true,"_links":{"self":[{"href":"https:\/\/medgift.hevs.ch\/wordpress\/wp-json\/wp\/v2\/posts\/2163","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/medgift.hevs.ch\/wordpress\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/medgift.hevs.ch\/wordpress\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/medgift.hevs.ch\/wordpress\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/medgift.hevs.ch\/wordpress\/wp-json\/wp\/v2\/comments?post=2163"}],"version-history":[{"count":5,"href":"https:\/\/medgift.hevs.ch\/wordpress\/wp-json\/wp\/v2\/posts\/2163\/revisions"}],"predecessor-version":[{"id":2168,"href":"https:\/\/medgift.hevs.ch\/wordpress\/wp-json\/wp\/v2\/posts\/2163\/revisions\/2168"}],"wp:attachment":[{"href":"https:\/\/medgift.hevs.ch\/wordpress\/wp-json\/wp\/v2\/media?parent=2163"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/medgift.hevs.ch\/wordpress\/wp-json\/wp\/v2\/categories?post=2163"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/medgift.hevs.ch\/wordpress\/wp-json\/wp\/v2\/tags?post=2163"}],"cur
ies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}