Counterfactual Explanations: The Future of Explainable AI?

Aviv Ben-Arie

Bio

Aviv is a Staff Data Scientist at Intuit and previously a Lead Data Scientist at PayPal. She specializes in fraud prevention and cybersecurity, and in the past worked at the Prime Minister's Office in the cybersecurity field, focusing on protocol analysis. Aviv graduated from Tel Aviv University with a double BSc in Computer Science and Life Science, specializing in Bioinformatics, and continues to collaborate with Tel Aviv University on research in Explainable AI. Aviv is a passionate volunteer (mentor and lecturer) and advocate for multiple Israeli organizations dedicated to promoting women in technology.

Abstract

The ability to explain our models' decisions is emerging as a standard requirement for models with a high impact on people's lives. This necessity poses several challenges, as most models used in industry are not inherently explainable. Today, the most popular explainability methods are SHAP and LIME, each with its own disadvantages but offering convenient APIs backed by solid mathematical foundations. In this talk, I will introduce a relatively new model explanation method: Counterfactual Explanations (CEs). CEs are explanations based on minimal changes to a model's input features that lead the model to output a different (usually the opposite) predicted class. CEs have been shown to be more intuitive for humans to comprehend and to provide actionable feedback to the end user, e.g. what a user can change in order to get a previously declined loan approved. I will review the challenges in this novel field (such as how to ensure that a CE proposes changes that are feasible), provide a bird's-eye view of the latest research, and give my perspective, based on my research in collaboration with Tel Aviv University, on the ways CEs can transform how we understand our models.
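
For readers who want a concrete picture of the idea before the talk, below is a minimal, hypothetical sketch of how a counterfactual might be found for a binary classifier. It is not the method presented in the talk and not any specific CE library: just a naive greedy search, assuming a scikit-learn-style model with numeric features, that nudges the input in small steps until the predicted class flips while keeping the total change small.

# A minimal sketch of the core idea behind counterfactual explanations.
# Assumptions: a scikit-learn-style binary classifier and purely numeric
# features; this greedy search is illustrative, not a CE library API.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Toy "loan" data: feature 0 is income, feature 1 is debt (standardized).
X = rng.normal(size=(500, 2))
y = (X[:, 0] - X[:, 1] > 0).astype(int)  # approve iff income outweighs debt
model = LogisticRegression().fit(X, y)

def find_counterfactual(x, model, step=0.05, max_iter=500):
    # Greedily take the small feature step that most increases the
    # probability of the opposite class, until the prediction flips.
    original = int(model.predict(x.reshape(1, -1))[0])
    cf = x.astype(float)
    for _ in range(max_iter):
        if int(model.predict(cf.reshape(1, -1))[0]) != original:
            return cf  # a small change that flips the decision
        candidates = []
        for i in range(cf.size):
            for delta in (step, -step):
                c = cf.copy()
                c[i] += delta
                p = model.predict_proba(c.reshape(1, -1))[0, 1 - original]
                candidates.append((p, c))
        cf = max(candidates, key=lambda t: t[0])[1]
    return None  # no counterfactual found within the step budget

x = np.array([-0.5, 0.5])  # a declined applicant: low income, high debt
cf = find_counterfactual(x, model)
print("original prediction:", model.predict(x.reshape(1, -1))[0])
print("counterfactual point:", cf)  # e.g. higher income, lower debt

A real CE method would additionally constrain the search to feasible, actionable changes (e.g. age cannot decrease) and to points that lie close to the data distribution, which is exactly the kind of challenge the talk covers.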

Planned Agenda

8:45 Reception
9:30 Opening words by WiDS TLV ambassadors Or Basson and Noah Eyal Altman
9:40 Dr. Kira Radinsky - Learning to predict the future of healthcare
10:10 Prof. Yonina Eldar - Model-Based Deep Learning: Applications to Imaging and Communications
10:40 Break
10:50 Lightning talks
12:20 Lunch & Poster session
13:20 Roundtable session & Poster session
14:05 Roundtable closure
14:20 Break
14:30 Dr. Anna Levant - 3D Metrology: Seeing the Unseen
15:00 Aviv Ben-Arie - Counterfactual Explanations: The Future of Explainable AI?
15:30 Closing remarks
15:40 End