Our work "A global taxonomy of interpretable AI: unifying the terminology for the technical and social sciences", by Mara Graziani et al., has been published in Artificial Intelligence Review (Springer).
On April 29th, 2021, we organized a debate that brought together researchers from multidisciplinary backgrounds to collaborate on a global definition of interpretability, one versatile enough to support the documentation of social, cognitive, philosophical, ethical, and legal concerns about AI. Among the many interesting outcomes is the definition of interpretable AI that was reached:
“An AI system is interpretable if it is possible to translate its working principles and outcomes in human-understandable language without affecting the validity of the system”.
The article is available online in open access.