Text generation models are usually trained with the standard maximum likelihood objective. While this generally works well, such models have also been shown to suffer from problems like copying and repetition, and in open-ended tasks they sometimes produce flat, boring outputs and even logical flaws. In this talk, I will introduce the Unlikelihood loss and show how it helps control text generation. I will present its application to a wide range of problems, from repetition and frequent-word overuse to contradictions and gender bias. Finally, I will present our use case of applying it to train an English-teaching dialog agent.
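
For reference, here is a minimal PyTorch-style sketch of the token-level unlikelihood objective (Welleck et al., 2020) that the talk builds on: alongside the usual maximum likelihood term, it pushes probability mass away from a set of negative candidate tokens. The tensor shapes, the `neg_candidates` mask, and the function name are illustrative assumptions, not the talk's actual implementation.

```python
import torch
import torch.nn.functional as F

def mle_plus_unlikelihood(logits, targets, neg_candidates, alpha=1.0):
    """Token-level MLE + unlikelihood loss (sketch).

    logits:         (batch, seq_len, vocab) model outputs
    targets:        (batch, seq_len) gold next tokens
    neg_candidates: (batch, seq_len, vocab) boolean mask of tokens to
                    penalize at each step
    alpha:          weight of the unlikelihood term
    """
    log_probs = F.log_softmax(logits, dim=-1)

    # Standard maximum likelihood term: -log p(x_t | x_<t)
    mle = F.nll_loss(log_probs.transpose(1, 2), targets)

    # Unlikelihood term: -log(1 - p(c | x_<t)) for each negative candidate c
    probs = log_probs.exp()
    one_minus_p = torch.clamp(1.0 - probs, min=1e-5)  # avoid log(0)
    ul = -(torch.log(one_minus_p) * neg_candidates.float()).sum() / targets.numel()

    return mle + alpha * ul
```

The choice of negative candidates is what adapts the loss to each problem: for example, tokens already present in the prefix discourage repetition, while over-frequent words can be penalized to flatten the output distribution.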