We present a new framework for understanding GAN training as alternating between density ratio
estimation and divergence minimization. This provides an interpretation of the
GAN generator objective used in practice and explains the problem of poor sample
diversity. Furthermore, we derive a family of generator objectives that target arbitrary
f-divergences without minimizing a lower bound, and use them to train generative
image models that favor either improved sample quality or greater sample diversity.
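To make the density-ratio view concrete, below is a minimal sketch of how a generator objective can target an arbitrary f-divergence through the discriminator's ratio estimate. It assumes a discriminator trained with the standard logistic loss, so that its logits approximate log p(x) - log q(x); the names (`generator_loss`, `kl`, `reverse_kl`) are illustrative assumptions, not the paper's implementation.

```python
import torch

def generator_loss(d_logits_fake, f):
    """Monte Carlo estimate of the f-divergence D_f(p || q) = E_q[f(p/q)].

    d_logits_fake: discriminator logits on generated samples x ~ q; under the
        density-ratio view these approximate log p(x) - log q(x).
    f: convex function (with f(1) = 0) defining the target f-divergence.
    """
    ratio = torch.exp(d_logits_fake)  # estimated density ratio p(x) / q(x)
    return f(ratio).mean()            # average of f(p/q) over generator samples

# Two illustrative choices of f:
kl = lambda r: r * torch.log(r)       # forward KL(p || q): mode-covering
reverse_kl = lambda r: -torch.log(r)  # reverse KL(q || p): mode-seeking

# Example usage with stand-in logits for one batch of generated samples:
logits = torch.randn(64)
loss_diversity = generator_loss(logits, kl)          # favors sample diversity
loss_quality = generator_loss(logits, reverse_kl)    # favors sample quality
```

Under this reading, a mode-seeking choice such as reverse KL trades diversity for quality, while a mode-covering choice such as forward KL does the opposite, which is one way to realize the quality/diversity trade-off described above.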