Sheer Dangoor

Invoice Subject Line Recommender – When Behavioral Science Meets Machine Learning

Bio

Sheer is a Data Scientist at Intuit. For the past two years she has worked on fraud prevention models, and she is now part of an AI invoicing mission team. Before joining Intuit, Sheer worked at the Weizmann Institute of Science as a data scientist, leading on-demand projects, including collaborations with top Israeli universities. Sheer holds an M.Sc. in Brain Science from the Weizmann Institute of Science and a B.Sc. in Biology and Cognition with honors from the Hebrew University of Jerusalem.


Abstract

From Netflix to Spotify, the demand for personalization keeps growing, and meeting it can pay off both for the consumer and for the relationship with the brand. One of the reinforcement learning algorithms designed to tackle this problem is the Contextual Bandit. A Contextual Bandit finds the best strategy for a given user based on historical data and updates that strategy over time as the context changes. In a live field experiment with over 100K users, we explored several Contextual Bandit implementations, including an existing library and our own. By evaluating the results in an off-policy setting, we suggest that there is no single perfect solution.
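The core idea, choosing an action from a user's context and refining the policy as rewards arrive, can be sketched with a LinUCB-style bandit. This is a minimal illustration only, not the implementation evaluated in the talk: the class name, parameters, and the synthetic reward are invented for the example.

```python
import numpy as np

class LinUCB:
    """Minimal LinUCB contextual bandit: one ridge-regression model per arm.

    Sketch for illustration; a real deployment would use a tested library
    or a production-grade implementation.
    """

    def __init__(self, n_arms, n_features, alpha=1.0):
        self.alpha = alpha                                      # exploration strength
        self.A = [np.eye(n_features) for _ in range(n_arms)]    # per-arm covariance
        self.b = [np.zeros(n_features) for _ in range(n_arms)]  # reward-weighted contexts

    def select(self, x):
        """Pick the arm with the highest upper confidence bound for context x."""
        scores = []
        for A, b in zip(self.A, self.b):
            A_inv = np.linalg.inv(A)
            theta = A_inv @ b                                   # ridge estimate of weights
            scores.append(theta @ x + self.alpha * np.sqrt(x @ A_inv @ x))
        return int(np.argmax(scores))

    def update(self, arm, x, reward):
        """Fold the observed reward into the chosen arm's statistics."""
        self.A[arm] += np.outer(x, x)
        self.b[arm] += reward * x

# Synthetic usage: arm 0 pays off when the first context feature is positive.
rng = np.random.default_rng(0)
bandit = LinUCB(n_arms=2, n_features=2)
for _ in range(500):
    x = rng.normal(size=2)
    arm = bandit.select(x)
    reward = 1.0 if (arm == 0) == (x[0] > 0) else 0.0
    bandit.update(arm, x, reward)
```

The logged (context, arm, reward) triples produced by such a policy are exactly what off-policy evaluation consumes to compare candidate policies without a new live experiment.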


Planned Agenda

8:45 Reception
9:30 Opening words by WiDS TLV ambassadors Or Basson and Noah Eyal Altman
9:40 Dr. Kira Radinsky - Learning to predict the future of healthcare
10:10 Prof. Yonina Eldar - Model-Based Deep Learning: Applications to Imaging and Communications
10:40 Break
10:50 Lightning talks
12:20 Lunch & Poster session
13:20 Roundtable session & Poster session
14:05 Roundtable closure
14:20 Break
14:30 Dr. Anna Levant - 3D Metrology: Seeing the Unseen
15:00 Aviv Ben-Arie - Counterfactual Explanations: The Future of Explainable AI?
15:30 Closing remarks
15:40 End