Daphna Idelson

MAFAT Radar Challenge 2020: Insights and Lessons from the Winning Entry

Bio

Ms. Daphna Idelson is an accomplished AI scientist and algorithms developer. She holds a Computer Engineering degree from the Technion – Israel Institute of Technology and has extensive industry experience in video and image processing, computer vision, deep learning algorithms, representation learning, and large-scale similarity search, among other AI areas. Ms. Idelson has worked at NeoMagic, Nice Systems, and Pixellot, and is currently the Lead AI Scientist at GSI Technology, where she applies her expertise to ground-breaking power/performance solutions with the Gemini Associative Processing Unit.

Abstract

The Israeli Ministry of Defense’s R&D Directorate (MAFAT) held an open competition in 2020 for target classification in Doppler-pulse radar signals, with the goal of distinguishing between humans and animals in radar signal segments with high accuracy. The provided data included real-world radar tracks of animal and human targets, detected by several sensors at different locations, and the requirement was to generalize correct predictions to new sensors and new locations. The radar signals are transformed into spectrograms – visual representations of the frequencies of the signals over time. While a spectrogram may be an unrecognizable image to the untrained eye, the classification task can still be approached with standard CNN models. Such signals record movement, and the differences between targets lie in the micro-Doppler changes – small changes caused mainly by movement of the limbs (arms, legs, tail), but also by background clutter such as leaves, grass, and weather – making this an exceptionally challenging task. In addition, the provided datasets for training and validation were highly imbalanced and scarce. The talk will share the challenges of working with this unique data and the core solution of our 1st-place winning entry, discussing the methods used for data balancing, augmentation, pre-processing, training-validation splits, and evaluation metrics.
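The spectrogram transform mentioned above can be sketched in a few lines. This is a minimal illustration only, not the competition pipeline: the simulated I/Q signal, the notional pulse repetition frequency, and the STFT parameters are all assumptions for demonstration; the actual MAFAT data format and processing differ.

```python
import numpy as np
from scipy import signal

# Hypothetical I/Q radar segment: a 1-second burst at an assumed
# PRF of 1000 Hz (for illustration only, not the MAFAT format).
rng = np.random.default_rng(0)
prf = 1000  # assumed pulse repetition frequency, in Hz
t = np.arange(prf) / prf

# A target with a 60 Hz bulk-Doppler line, a weak oscillating
# micro-Doppler component (e.g. limb motion), and complex noise.
x = (np.exp(2j * np.pi * 60 * t)
     + 0.2 * np.exp(2j * np.pi * (60 + 15 * np.sin(2 * np.pi * 2 * t)) * t)
     + 0.1 * (rng.standard_normal(prf) + 1j * rng.standard_normal(prf)))

# Short-time Fourier transform -> spectrogram (frequency x time image).
# Two-sided output because the input is complex-valued.
f, frames, Sxx = signal.spectrogram(x, fs=prf, nperseg=128, noverlap=96,
                                    return_onesided=False)

# Log-magnitude image of the kind typically fed to a CNN classifier.
img = 10 * np.log10(np.abs(Sxx) + 1e-12)
print(img.shape)  # (frequency bins, time frames)
```

The resulting 2-D array is simply treated as a single-channel image, which is what lets off-the-shelf CNN architectures be applied to the classification task.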

Planned Agenda

8:45 Reception
9:30 Opening words by WiDS TLV ambassadors Or Basson and Noah Eyal Altman
9:40 Dr. Kira Radinsky - Learning to predict the future of healthcare
10:10 Prof. Yonina Eldar - Model-Based Deep Learning: Applications to Imaging and Communications
10:40 Break
10:50 Lightning talks
12:20 Lunch & Poster session
13:20 Roundtable session & Poster session
14:05 Roundtable closure
14:20 Break
14:30 Dr. Anna Levant - 3D Metrology: Seeing the Unseen
15:00 Aviv Ben-Arie - Counterfactual Explanations: The Future of Explainable AI?
15:30 Closing remarks
15:40 End