After we release the first version of a model to production, our population may change in response to the actions driven by our model. In many cases, these actions affect our labels. For example, if we stop a suspicious transaction, we will never know whether it was indeed fraudulent, because it never took place. Yet we want to train a new model version that can detect those suspicious transactions as well, so that it can replace the existing one. Some form of A/B testing or a control group can be an excellent way to address this challenge; however, in many cases it is not possible due to business, ethical, technical, or other reasons. In this round table, we will discuss the different implications of this issue. We'll share our practical solutions and theoretical dreams for dealing with this challenge in our day-to-day work, including transforming part of the problem into a regression problem, using self-learning, weak supervision, and other semi-supervised methods, and investigating the trade-offs of each method.
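To make the discussion concrete, here is a minimal sketch of the self-learning (self-training) idea applied to this setting: train on transactions whose outcomes we observed, then pseudo-label the blocked transactions the model is confident about and retrain. All data and names here are hypothetical, and the confidence thresholds are illustrative, not a recommendation.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical data: transactions we let through have an observed
# fraud label; transactions we blocked have no label at all.
X_obs = rng.normal(0.0, 1.0, size=(200, 4))
y_obs = (X_obs[:, 0] + 0.5 * X_obs[:, 1] > 0.8).astype(int)  # observed labels
X_blocked = rng.normal(0.5, 1.0, size=(100, 4))              # labels unknown

# Step 1: fit only on observed outcomes.
clf = LogisticRegression().fit(X_obs, y_obs)

# Step 2: pseudo-label blocked transactions where the model is confident
# (illustrative thresholds), then refit on the augmented training set.
proba = clf.predict_proba(X_blocked)[:, 1]
confident = (proba > 0.9) | (proba < 0.1)
X_aug = np.vstack([X_obs, X_blocked[confident]])
y_aug = np.concatenate([y_obs, (proba[confident] > 0.5).astype(int)])
clf = LogisticRegression().fit(X_aug, y_aug)
```

Note the central trade-off we will debate: pseudo-labels inherit the old model's blind spots, so this loop can reinforce existing mistakes rather than correct them.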