Series
ML@GT Seminar Series

Series Type
Event Series

Publication Search Results

  • Item
    Do GANs Actually Learn the Distribution?
    (Georgia Institute of Technology, 2018-02-22) Arora, Sanjeev
    Generative Adversarial Nets (GANs), introduced by Goodfellow et al. (2014), are a framework for training deep generative models. It involves a competition between a generator net that tries to produce realistic images and a discriminator that tries to distinguish the generator's output from real images. The framework has been applied in many settings, but it has remained open how to quantify how well it works, even though the generated images often look reasonable. In our ICML'17 paper (joint with Ge, Liang, Ma, and Zhang) we give an analysis for the case of finite discriminators and generators. On the positive side, we show the existence of an equilibrium in which the generator succeeds in fooling the discriminator. On the negative side, we show that in this equilibrium the generator produces a distribution of fairly low support. This can be seen as a failure mode of the GANs framework. In subsequent work in ICLR'18 (joint with Risteski and Zhang) we show that this failure mode arises in popular GANs frameworks, which we find learn distributions with fairly small support. We quantify this using our new "birthday paradox" test.
  • Item
    Data-Driven Dialogue Systems: Models, Algorithms, Evaluation, and Ethical Challenges
    (Georgia Institute of Technology, 2018-02-22) Pineau, Joelle
    The use of dialogue systems as a medium for human-machine interaction is an increasingly prevalent paradigm. A growing number of dialogue systems use conversation strategies that are learned from large datasets. In this talk I will review several recent dialogue models and algorithms, both discriminative and generative, and discuss new results on appropriate performance measures for such systems. Finally, I will highlight potential ethical issues that arise in dialogue systems research, including implicit biases, adversarial examples, privacy violations, and safety concerns.
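The "birthday paradox" test mentioned in the first abstract rests on a simple counting fact: if a distribution has support of size roughly N, a batch of about sqrt(N) samples already contains a duplicate with constant probability, so the batch size at which duplicates start appearing reveals the support size. A minimal sketch of that idea, using a toy uniform "generator" in place of samples from a trained GAN (the function names and the support size 10,000 are illustrative assumptions, not from the paper):

```python
import random

def has_duplicate(samples):
    # For GAN-generated images one would test for near-duplicates;
    # exact equality suffices for this toy discrete example.
    return len(set(samples)) < len(samples)

def duplicate_rate(sample_fn, batch_size, trials=200):
    """Fraction of batches containing a duplicate. By the birthday paradox,
    a batch of size s collides with probability about 1/2 once the support
    size is around s^2 / (2 ln 2)."""
    hits = sum(has_duplicate([sample_fn() for _ in range(batch_size)])
               for _ in range(trials))
    return hits / trials

random.seed(0)
# Toy stand-in for a generator: uniform over a support of size 10,000.
gen = lambda: random.randrange(10_000)

# With batch size ~sqrt(2 ln 2 * 10_000) ~ 118, the duplicate
# probability should be close to 1/2; tiny batches almost never collide.
rate_large = duplicate_rate(gen, batch_size=118)
rate_small = duplicate_rate(gen, batch_size=5)
```

Observing frequent duplicates at small batch sizes is therefore evidence that the learned distribution's support is small, which is the quantitative failure mode the abstract describes.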
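The second abstract's discussion of "appropriate performance measures" concerns automatic metrics for generated responses. Many of these are word-overlap scores; a minimal BLEU-1-style precision sketch (the function and strings below are illustrative assumptions, not a metric endorsed by the talk) shows how such a score is computed and hints at why it can be misleading for dialogue, where many valid responses share few words with the reference:

```python
def overlap_precision(response, reference):
    """Fraction of response tokens that also appear in the reference
    (a unigram-precision score in the spirit of BLEU-1)."""
    resp_tokens = response.lower().split()
    ref_tokens = set(reference.lower().split())
    if not resp_tokens:
        return 0.0
    return sum(tok in ref_tokens for tok in resp_tokens) / len(resp_tokens)

# Three of four response tokens appear in the reference -> 0.75.
score = overlap_precision("i am fine thanks", "i am doing fine")
```

A short, generic response built from common words can score well against many references while carrying little content, which is one reason such metrics correlate poorly with human judgments of dialogue quality.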