XAI: explaining black-box decisions for healthcare applications

Monday, March 4, 2019

12.00 p.m.

ISI seminar room 1st floor

Cecilia Panigutti - Scuola Normale Superiore, Pisa


Today the state-of-the-art performance in classification is achieved by so-called “black boxes”, i.e., decision-making systems whose internal logic is obscure. Such models have the potential to greatly impact many industries, but their deployment in safety-critical domains is limited by their opacity. This is particularly true for the health-care system, where the adoption of machine learning models is subject to several risks and limitations due to their lack of transparency. In this talk I will describe and motivate the need for explainable AI techniques, and then I will present MARLENA, a model-agnostic method that explains multi-label black-box decisions for health-care applications.


Cecilia Panigutti is currently in her 2nd year of the Data Science Ph.D. program provided by Scuola Normale Superiore di Pisa, Università di Pisa, CNR, IMT Lucca and Scuola Superiore Sant’Anna. She graduated in Physics of Complex Systems at Università di Torino in 2016 with a thesis on machine learning approaches to predictive maintenance and, after working for one year as a junior data scientist at aizoOn technology consulting, started her Ph.D. under the supervision of Dino Pedreschi. Her research focuses on explainable machine learning techniques and their applications in the health-care system.