How a Thousand Little Failures Can Turn Into Success

Chen Amiraz

Bio

Chen Amiraz is a Computer Science Ph.D. student at the Weizmann Institute of Science. Her research focuses on developing provably efficient algorithms for high-dimensional statistical learning problems. Chen holds an M.Sc. in Computer Science from the Weizmann Institute of Science (in collaboration with MIT) and a B.Sc. in Electrical Engineering from the Technion (Magna Cum Laude). She was selected for the Google Women Techmakers Program, the Intel Academic Excellence Award, and the Technion EMET Excellence Program.

Abstract

From mobile phones, through autonomous vehicles, to geographically spread-out data centers – modern distributed networks generate an abundance of data each day. Communication constraints and privacy concerns may prohibit sending the data to a central server to be jointly analyzed. This brings forth a novel challenge: how to jointly learn from data distributed across servers while keeping the communication costs low. In this talk, I’ll describe a distributed parametric estimation problem in which this goal is achievable. I’ll present a simple algorithm where each server sends only a short message to the center. While each server fails the estimation task with high probability, a central server can still learn from these messages and correctly estimate the parameter. Moreover, the total communication cost is not only lower than sending the entire distributed dataset, but even lower than just sending the data located on a single server.
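
The abstract states the phenomenon in words; the minimal simulation below sketches one way it can play out. The 1-sparse signal, the per-server argmax message, and the center's majority vote are illustrative assumptions, not necessarily the algorithm presented in the talk.

```python
import numpy as np

# Toy sketch (assumed setup, not the algorithm from the talk): K servers each
# observe n noisy samples of a d-dimensional vector whose single nonzero entry
# sits at an unknown index j_star. Each server sends only the index of the
# largest-magnitude coordinate of its local sample mean (~log2(d) bits).
# Individually these guesses are usually wrong, yet the correct index is still
# the most common message, so a majority vote at the center recovers it.

rng = np.random.default_rng(0)
d, K, n = 100, 1000, 4        # dimension, number of servers, samples per server
j_star, mu = 17, 0.75         # true support index and (deliberately weak) signal

theta = np.zeros(d)
theta[j_star] = mu

# Each server: average its n local samples, report one coordinate index.
messages = np.empty(K, dtype=int)
for k in range(K):
    local_mean = (theta + rng.normal(size=(n, d))).mean(axis=0)
    messages[k] = np.argmax(np.abs(local_mean))

# Center: majority vote over the K short messages.
votes = np.bincount(messages, minlength=d)
center_estimate = int(np.argmax(votes))

print(f"servers that guessed the right index: {np.mean(messages == j_star):.1%}")
print(f"center's majority-vote estimate: {center_estimate} (true index: {j_star})")
```

In this toy run each message is a single index out of d, roughly log2(d) ≈ 7 bits, so the K messages together are far smaller than even one server's n × d raw samples, mirroring the communication claim in the abstract.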

Planned Agenda

8:45 Reception
9:30 Opening words by WiDS TLV ambassadors Or Basson and Noah Eyal Altman
9:40 Dr. Kira Radinsky - Learning to Predict the Future of Healthcare
10:10 Prof. Yonina Eldar - Model-Based Deep Learning: Applications to Imaging and Communications
10:40 Break
10:50 Lightning talks
12:20 Lunch & Poster session
13:20 Roundtable session & Poster session
14:05 Roundtable closure
14:20 Break
14:30 Dr. Anna Levant - 3D Metrology: Seeing the Unseen
15:00 Aviv Ben-Arie - Counterfactual Explanations: The Future of Explainable AI?
15:30 Closing remarks
15:40 End