Slide #1.

Generative Adversarial Nets
ML Reading Group
Xiao Lin, Jul. 22, 2015


Slide #2.

• I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville and Y. Bengio. "Generative adversarial nets." NIPS, 2014.


Slide #3.

Overview
• Problem: generating examples adversarially
• Approach: a two-player game
• Theory: discriminative learning of a distribution
• Potential applications:
  • “Generative” neural nets
  • Improving classification performance


Slide #4.

Adversarial: a bit of background
• Visualizing HoG features (arXiv 2012)
• “Why did my detector fail?” (ICCV 2013)
• Visualizing CNNs (arXiv 2013, ECCV 2014)
• CNNs can go wildly wrong (arXiv 2013)
• Generative Adversarial Nets (NIPS 2014)
• Google Deep Dream; Facebook Eyescream (CVPR 2015, ICLR 2015)


Slide #11.

Adversarial Framework
• Discriminative model: the “police”
  • Learns to determine whether a sample comes from the generative model’s distribution or from the data distribution
• Generative model: a team of “counterfeiters” trying to produce fake currency
  • Tries to fool the discriminative model with its model distribution
• Training continues until the counterfeits are indistinguishable from the genuine articles
  • At that point, the generative model’s distribution is indistinguishable from the data distribution


Slide #12.

Related work (Table 2)
Design a model family with parameter θ; a model may then support:
• Learning: given examples, learn the model parameters
• Inference: observe part of an example, infer the rest
• Sampling: generate examples according to the model distribution
• Likelihood: given an example, compute its probability


Slide #13.

[Figure: Bayesian network with nodes Flu, Allergy, Sinus, Headache, and observed Nose=t] Slide Credit: Dhruv Batra


Slide #15.

Generalized Denoising Autoencoder
Y. Bengio, L. Yao, G. Alain and P. Vincent. “Generalized denoising auto-encoders as generative models.” NIPS, 2013.


Slide #16.

Approach: Objective


Slide #17.

Approach: Objective
• Discriminator D(x) → 0/1: outputs 1 for samples from the data and 0 for fakes from G
• Generator x = G(z): produces a fake sample x from random noise z
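Written out (this equation is from the GAN paper, not in the slide text above), the objective that D and G play over is the minimax value:

```latex
\min_G \max_D V(D, G) =
  \mathbb{E}_{x \sim p_{\mathrm{data}}(x)}\!\left[\log D(x)\right]
  + \mathbb{E}_{z \sim p_z(z)}\!\left[\log\left(1 - D(G(z))\right)\right]
```

D ascends V to label data 1 and fakes 0; G descends V so that D(G(z)) is pushed toward 1.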


Slide #18.

Approach: Optimization
[Figure: data distribution, discriminator D, and generator G]


Slide #19.

Approach: Optimization
[Figure panels: optimize D; improve G; eventually G matches the data]
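The alternation on this slide (optimize D, then improve G) can be sketched as a toy training loop. Everything below — the 1-D Gaussian data, the logistic discriminator, the shift-only generator, the weight decay added to damp oscillation, and all hyperparameters — is an illustrative assumption, not the paper’s setup:

```python
import numpy as np

# Toy GAN loop (illustrative sketch, not the paper's experiments).
# Real data: x ~ N(4, 0.5^2).  Generator: G(z) = z + theta, z ~ N(0, 1).
# Discriminator: D(x) = sigmoid(w*x + b), a logistic regressor.
rng = np.random.default_rng(0)

DATA_MEAN = 4.0
w, b = 0.0, 0.0      # discriminator parameters
theta = 0.0          # generator parameter (a shift)
lr, batch, decay = 0.05, 64, 0.1

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

for _ in range(3000):
    x_real = DATA_MEAN + 0.5 * rng.standard_normal(batch)
    x_fake = rng.standard_normal(batch) + theta

    # Optimize D: gradient ascent on E[log D(real)] + E[log(1 - D(fake))].
    # The small weight decay on w is an added stabilizer (assumption).
    d_real = sigmoid(w * x_real + b)
    d_fake = sigmoid(w * x_fake + b)
    w += lr * (np.mean((1 - d_real) * x_real)
               - np.mean(d_fake * x_fake) - decay * w)
    b += lr * (np.mean(1 - d_real) - np.mean(d_fake))

    # Improve G: gradient ascent on E[log D(G(z))] (the paper's
    # non-saturating alternative to descending log(1 - D(G(z)))).
    x_fake = rng.standard_normal(batch) + theta
    d_fake = sigmoid(w * x_fake + b)
    theta += lr * np.mean((1 - d_fake) * w)

print(f"generator shift theta = {theta:.2f}, data mean = {DATA_MEAN}")
```

Eventually theta drifts toward the data mean, at which point D can no longer tell real from fake — the “eventually” panel of the slide’s figure.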


Slide #20.

Approach: Optimization


Slide #21.

Approach: Convergence
• The best D given G lies right in the middle of the data distribution and G’s distribution
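The “right in the middle” claim is Proposition 1 of the paper: for a fixed generator G, the optimal discriminator is

```latex
D^{*}_{G}(x) = \frac{p_{\mathrm{data}}(x)}{p_{\mathrm{data}}(x) + p_{g}(x)}
```

so D* outputs the relative weight of the data density at x, and equals 1/2 wherever p_g = p_data.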


Slide #22.

Approach: Convergence
• The best G: G = data, and V = −log 4
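The −log 4 value follows from substituting the optimal discriminator into V; as shown in the paper, the resulting criterion for G is

```latex
C(G) = \max_{D} V(G, D)
     = -\log 4 + 2\,\mathrm{JSD}\!\left(p_{\mathrm{data}} \,\Vert\, p_{g}\right)
```

The Jensen–Shannon divergence is nonnegative and zero only when p_g = p_data, so the global minimum C(G) = −log 4 is attained exactly when G reproduces the data distribution.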


Slide #23.

Approach: Convergence
[Figure panels: optimize D; improve G; eventually G matches the data]


Slide #24.

Results

[Slides #25–#36: result figures]

Slide #37.

Problems
• D must be kept in sync with G
  • Training G to optimality makes all of its outputs collapse to a single point
• Tuning parameters
• The “enough capacity” assumption
• Multi-modal distributions


Slide #38.

Future work