Original: [Tensorflow version]
PyTorch implementation of various GANs.
This repository is a re-implementation of tensorflow-generative-model-collections by Hwalsuk Lee.
I tried to stay as close to tensorflow-generative-model-collections as possible, but some models differ slightly.
This repository includes code for CPU-mode PyTorch, but it is untested; I have only tested it in GPU mode.
The implementation covers the following datasets:
- MNIST
- Fashion-MNIST
- CIFAR10
- SVHN
- STL10
- LSUN-bed
List of implemented GANs (table borrowed from tensorflow-generative-model-collections)
Name | Paper Link | Value Function |
---|---|---|
GAN | Arxiv | |
LSGAN | Arxiv | |
WGAN | Arxiv | |
WGAN_GP | Arxiv | |
DRAGAN | Arxiv | |
CGAN | Arxiv | |
infoGAN | Arxiv | |
ACGAN | Arxiv | |
EBGAN | Arxiv | |
BEGAN | Arxiv | |
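The value-function column originally holds formula images that are not reproduced here. As a rough, hedged illustration of how some of these objectives differ in practice, the sketch below shows typical PyTorch formulations of the GAN, LSGAN, and WGAN losses; the function names and argument conventions are illustrative, not taken from this repository.

```python
import torch
import torch.nn.functional as F

def gan_losses(d_real_logits, d_fake_logits):
    # Vanilla GAN: binary cross-entropy on the raw discriminator logits,
    # with the usual non-saturating generator loss.
    d_loss = F.binary_cross_entropy_with_logits(d_real_logits, torch.ones_like(d_real_logits)) + \
             F.binary_cross_entropy_with_logits(d_fake_logits, torch.zeros_like(d_fake_logits))
    g_loss = F.binary_cross_entropy_with_logits(d_fake_logits, torch.ones_like(d_fake_logits))
    return d_loss, g_loss

def lsgan_losses(d_real, d_fake):
    # LSGAN: replaces the cross-entropy with a least-squares objective.
    d_loss = F.mse_loss(d_real, torch.ones_like(d_real)) + \
             F.mse_loss(d_fake, torch.zeros_like(d_fake))
    g_loss = F.mse_loss(d_fake, torch.ones_like(d_fake))
    return d_loss, g_loss

def wgan_losses(d_real, d_fake):
    # WGAN: the critic maximizes E[D(real)] - E[D(fake)] with no sigmoid output;
    # weight clipping (WGAN) or a gradient penalty (WGAN-GP) constrains the critic.
    d_loss = d_fake.mean() - d_real.mean()
    g_loss = -d_fake.mean()
    return d_loss, g_loss
```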
Variants of GAN structure (Figures are borrowed from tensorflow-generative-model-collections)
The network architecture of the generator and discriminator is exactly the same as in the infoGAN paper.
For a fair comparison of the core ideas in all GAN variants, the network architecture is kept the same across implementations, except for EBGAN and BEGAN. A small modification is made for EBGAN/BEGAN, since they adopt an auto-encoder structure for the discriminator, but I tried to keep the discriminator capacity comparable.
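As a hedged sketch of what that shared architecture looks like for 28x28 single-channel inputs, the layer sizes below follow the infoGAN paper's MNIST setup (62-dim noise, two conv/deconv layers plus two fully connected layers); the exact code in this repository may differ in details.

```python
import torch.nn as nn

class Generator(nn.Module):
    """InfoGAN-style MNIST generator: 62-dim noise z -> 1x28x28 image."""
    def __init__(self, z_dim=62):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(z_dim, 1024), nn.BatchNorm1d(1024), nn.ReLU(True),
            nn.Linear(1024, 128 * 7 * 7), nn.BatchNorm1d(128 * 7 * 7), nn.ReLU(True),
        )
        self.deconv = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, 2, 1), nn.BatchNorm2d(64), nn.ReLU(True),  # 7x7 -> 14x14
            nn.ConvTranspose2d(64, 1, 4, 2, 1), nn.Tanh(),                            # 14x14 -> 28x28
        )

    def forward(self, z):
        h = self.fc(z).view(-1, 128, 7, 7)
        return self.deconv(h)

class Discriminator(nn.Module):
    """Mirror of the generator: 1x28x28 image -> a single raw score."""
    def __init__(self):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(1, 64, 4, 2, 1), nn.LeakyReLU(0.2, True),                         # 28x28 -> 14x14
            nn.Conv2d(64, 128, 4, 2, 1), nn.BatchNorm2d(128), nn.LeakyReLU(0.2, True),  # 14x14 -> 7x7
        )
        self.fc = nn.Sequential(
            nn.Linear(128 * 7 * 7, 1024), nn.BatchNorm1d(1024), nn.LeakyReLU(0.2, True),
            nn.Linear(1024, 1),  # raw score; a sigmoid is applied or omitted depending on the loss
        )

    def forward(self, x):
        h = self.conv(x).view(-1, 128 * 7 * 7)
        return self.fc(h)
```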
The following results can be reproduced with the command:
python main.py --dataset mnist --gan_type <TYPE> --epoch 50 --batch_size 64
All results are generated from a fixed noise vector.
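A minimal sketch of what "fixed noise vector" means here: the visualization batch is sampled once before training and reused at every epoch, so the grids differ only because the generator has changed. The stand-in generator, 62-dim prior, and batch size of 64 below are assumptions for illustration.

```python
import torch
import torch.nn as nn

G = nn.Sequential(nn.Linear(62, 28 * 28), nn.Tanh())   # stand-in generator, just for this sketch
torch.manual_seed(0)                                    # optional: reproducible fixed batch
fixed_z = torch.randn(64, 62)                           # sampled once, before training starts

for epoch in range(1, 51):
    # ... one full epoch of discriminator/generator updates would go here ...
    with torch.no_grad():
        samples = G(fixed_z)   # same latent batch every epoch, so the epoch columns are directly comparable
    # the samples are then tiled into an image and written under results/ (see the folder structure below)
```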
Name | Epoch 1 | Epoch 25 | Epoch 50 | GIF |
---|---|---|---|---|
GAN | ||||
LSGAN | ||||
WGAN | ||||
WGAN_GP | ||||
DRAGAN | ||||
EBGAN | ||||
BEGAN | ||||
Each row has the same noise vector and each column has the same label condition.
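A hedged sketch of how such a grid could be assembled for a label-conditioned generator: CGAN, ACGAN, and infoGAN all take a one-hot label alongside the noise, and the stand-in generator below only assumes that interface.

```python
import torch
import torch.nn as nn

z_dim, n_classes, n_rows = 62, 10, 10

# Stand-in for a conditional generator taking (noise, one-hot label) concatenated together.
G = nn.Sequential(nn.Linear(z_dim + n_classes, 28 * 28), nn.Tanh())

z = torch.randn(n_rows, 1, z_dim).expand(n_rows, n_classes, z_dim)   # same noise along each row
y = torch.eye(n_classes).expand(n_rows, n_classes, n_classes)        # same one-hot label down each column

with torch.no_grad():
    images = G(torch.cat([z.reshape(-1, z_dim), y.reshape(-1, n_classes)], dim=1))
# `images` holds 100 samples; tiled 10x10, rows share the noise vector and columns share the class label
```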
Name | Epoch 1 | Epoch 25 | Epoch 50 | GIF |
---|---|---|---|---|
CGAN | ||||
ACGAN | ||||
infoGAN | ||||
All results have the same noise vector and label condition, but different continuous code vectors.
Name | Epoch 1 | Epoch 25 | Epoch 50 | GIF |
---|---|---|---|---|
infoGAN | ||||
Name | Loss |
---|---|
GAN | |
LSGAN | |
WGAN | |
WGAN_GP | |
DRAGAN | |
EBGAN | |
BEGAN | |
CGAN | |
ACGAN | |
infoGAN | |
The comments on the network architecture for MNIST also apply here.
Fashion-MNIST is a recently proposed dataset consisting of a training set of 60,000 examples and a test set of 10,000 examples. Each example is a 28x28 grayscale image, associated with a label from 10 classes (T-shirt/top, Trouser, Pullover, Dress, Coat, Sandal, Shirt, Sneaker, Bag, Ankle boot).
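Fashion-MNIST is distributed in the same format as MNIST, so it can be loaded the same way. A minimal sketch using torchvision is shown below; the normalization to [-1, 1] is an assumption that matches a tanh generator output, not necessarily what dataloader.py does.

```python
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

transform = transforms.Compose([
    transforms.ToTensor(),                  # scale pixels to [0, 1]
    transforms.Normalize((0.5,), (0.5,)),   # shift to [-1, 1], matching a tanh generator output
])

train_set = datasets.FashionMNIST('data/fashion-mnist', train=True,
                                  download=True, transform=transform)
loader = DataLoader(train_set, batch_size=64, shuffle=True)

images, labels = next(iter(loader))
print(images.shape, labels.shape)           # torch.Size([64, 1, 28, 28]) torch.Size([64])
```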
The following results can be reproduced with the command:
python main.py --dataset fashion-mnist --gan_type <TYPE> --epoch 50 --batch_size 64
All results are generated from a fixed noise vector.
Name | Epoch 1 | Epoch 25 | Epoch 50 | GIF |
---|---|---|---|---|
GAN | ||||
LSGAN | ||||
WGAN | ||||
WGAN_GP | ||||
DRAGAN | ||||
EBGAN | ||||
BEGAN | ||||
Each row has the same noise vector and each column has the same label condition.
Name | Epoch 1 | Epoch 25 | Epoch 50 | GIF |
---|---|---|---|---|
CGAN | ||||
ACGAN | ||||
infoGAN | ||||
- ACGAN tends to fall into mode collapse in tensorflow-generative-model-collections, but the PyTorch ACGAN does not.
All results have the same noise vector and label condition, but different continuous code vectors.
Name | Epoch 1 | Epoch 25 | Epoch 50 | GIF |
---|---|---|---|---|
infoGAN | ||||
Name | Loss |
---|---|
GAN | |
LSGAN | |
WGAN | |
WGAN_GP | |
DRAGAN | |
EBGAN | |
BEGAN | |
CGAN | |
ACGAN | |
infoGAN | |
The following shows the basic folder structure.
├── main.py # gateway
├── data
│ ├── mnist # mnist data (not included in this repo)
│ ├── ...
│ ├── ...
│ └── fashion-mnist # fashion-mnist data (not included in this repo)
│
├── GAN.py # vanilla GAN
├── utils.py # utils
├── dataloader.py # dataloader
├── models # model files to be saved here
└── results # generation results to be saved here
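main.py is the gateway that parses the command-line options shown in the commands above and dispatches to the selected model. A hypothetical sketch of that flow follows; the flag names come from the commands in this README, while the defaults, choice lists, and dispatch details are assumptions.

```python
import argparse

def parse_args():
    # Flags mirror the commands shown above; defaults and choices are assumptions.
    parser = argparse.ArgumentParser(description='PyTorch implementation of GAN collections')
    parser.add_argument('--gan_type', type=str, default='GAN',
                        choices=['GAN', 'LSGAN', 'WGAN', 'WGAN_GP', 'DRAGAN',
                                 'CGAN', 'infoGAN', 'ACGAN', 'EBGAN', 'BEGAN'])
    parser.add_argument('--dataset', type=str, default='mnist',
                        choices=['mnist', 'fashion-mnist', 'cifar10', 'svhn', 'stl10', 'lsun-bed'])
    parser.add_argument('--epoch', type=int, default=50)
    parser.add_argument('--batch_size', type=int, default=64)
    return parser.parse_args()

if __name__ == '__main__':
    args = parse_args()
    # main.py then instantiates the trainer class named by --gan_type (e.g. from GAN.py),
    # trains it for --epoch epochs, and writes checkpoints to models/ and samples to results/.
    print(args)
```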
Development environment:
- Ubuntu 16.04 LTS
- NVIDIA GTX 1080 Ti
- CUDA 9.0
- Python 3.5.2
- PyTorch 0.4.0
- torchvision 0.2.1
- numpy 1.14.3
- matplotlib 2.2.2
- imageio 2.3.0
- scipy 1.1.0
This implementation is based on tensorflow-generative-model-collections and was tested with PyTorch 0.4.0 on Ubuntu 16.04 using a GPU.