Commit: Documentation

tkarras committed Feb 1, 2021
1 parent 375a436 commit a6f45eb
Showing 11 changed files with 439 additions and 203 deletions.
30 changes: 15 additions & 15 deletions README.md
```.bash
python dataset_tool.py --source=~/downloads/afhq/train/wild --dest=~/datasets/af
python dataset_tool.py --source=~/downloads/cifar-10-python.tar.gz --dest=~/datasets/cifar10.zip
```

**LSUN**: Download the desired categories from the [LSUN project page](https://www.yf.io/p/lsun/) and convert to ZIP archive:

```.bash
python dataset_tool.py --source=~/downloads/lsun/raw/cat_lmdb --dest=~/datasets/lsuncat200k.zip \
```

The training configuration can be further customized with additional command line options:
* `--cond=1` enables class-conditional training (requires a dataset with labels).
* `--mirror=1` amplifies the dataset with x-flips. Often beneficial, even with ADA.
* `--resume=ffhq1024 --snap=10` performs transfer learning from FFHQ trained at 1024x1024.
* `--resume=~/training-runs/<NAME>/network-snapshot-<INT>.pkl` resumes a previous training run.
* `--gamma=10` overrides R1 gamma. We recommend trying a couple of different values for each new dataset.
* `--aug=ada --target=0.7` adjusts ADA target value (default: 0.6).
* `--augpipe=blit` enables pixel blitting but disables all other augmentations.
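
For example, a hypothetical invocation combining several of these options (the dataset path and option values are illustrative, not recommendations):

```.bash
python train.py --outdir=~/training-runs --data=~/datasets/mydataset.zip \
    --gpus=2 --mirror=1 --gamma=10 --aug=ada --target=0.7 --snap=10
```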
The total training time depends heavily on resolution, number of GPUs, dataset, desired quality, and hyperparameters. The table below lists expected wallclock times to reach 1000 and 25000 kimg (thousands of real images shown to the discriminator):

| Resolution | GPUs | 1000 kimg | 25000 kimg | sec/kimg | GPU mem | CPU mem
| :--------: | :--: | :-------: | :--------: | :------: | :-----: | :-----:
| 1024x1024 | 4 | 11h 36m | 12d 02h | 40.1&ndash;40.8 | 8.4 GB | 21.9 GB
| 1024x1024 | 8 | 5h 54m | 6d 03h | 20.2&ndash;20.6 | 8.3 GB | 44.7 GB

The above measurements were done using NVIDIA Tesla V100 GPUs with default settings (`--cfg=auto --aug=ada --metrics=fid50k_full`). "sec/kimg" shows the expected range of variation in raw training performance, as reported in `log.txt`. "GPU mem" and "CPU mem" show the highest observed memory consumption, excluding the peak at the beginning caused by `torch.backends.cudnn.benchmark`.
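
As a rough consistency check on the table above: at the reported ~40.5 sec/kimg for 4 GPUs at 1024x1024, 25000 kimg of raw training alone takes about 25000 &times; 40.5 s &asymp; 1.01M s &asymp; 11.7 days; the slightly longer 12d 02h total in the table reflects work beyond raw training, such as periodic metric evaluation and snapshots.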

In typical cases, 25000 kimg or more is needed to reach convergence, but the results are already quite reasonable around 5000 kimg. 1000 kimg is often enough for transfer learning, which tends to converge significantly faster. The following figure shows example convergence curves for different datasets as a function of wallclock time, using the same settings as above:

We employ the following metrics in the ADA paper. Execution time and GPU memory usage are reported for one NVIDIA Tesla V100 GPU at 1024x1024 resolution:

| Metric | Time | GPU mem | Description |
| :----- | :----: | :-----: | :---------- |
| `fid50k_full` | 13 min | 1.8 GB | Fr&eacute;chet inception distance<sup>[1]</sup> against the full dataset
| `kid50k_full` | 13 min | 1.8 GB | Kernel inception distance<sup>[2]</sup> against the full dataset
| `pr50k3_full` | 13 min | 4.1 GB | Precision and recall<sup>[3]</sup> against the full dataset
| `is50k` | 13 min | 1.8 GB | Inception score<sup>[4]</sup> for CIFAR-10

The following metrics from the [StyleGAN](https://github.com/NVlabs/stylegan) and [StyleGAN2](https://github.com/NVlabs/stylegan2) papers are also supported:

| Metric | Time | GPU mem | Description |
| :------------ | :----: | :-----: | :---------- |
| `fid50k` | 13 min | 1.8 GB | Fr&eacute;chet inception distance against 50k real images
| `kid50k` | 13 min | 1.8 GB | Kernel inception distance against 50k real images
| `pr50k3` | 13 min | 4.1 GB | Precision and recall against 50k real images
| `ppl2_wend` | 36 min | 2.4 GB | Perceptual path length<sup>[5]</sup> in W, endpoints, full image
| `ppl_zfull` | 36 min | 2.4 GB | Perceptual path length in Z, full paths, cropped image
| `ppl_wfull` | 36 min | 2.4 GB | Perceptual path length in W, full paths, cropped image
| `ppl_zend` | 36 min | 2.4 GB | Perceptual path length in Z, endpoints, cropped image
| `ppl_wend` | 36 min | 2.4 GB | Perceptual path length in W, endpoints, cropped image
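
Metrics are selected with the `--metrics` option of `train.py` documented later in this commit; for example, a hypothetical run computing both FID and KID against the full dataset (the dataset path is illustrative):

```.bash
python train.py --outdir=~/training-runs --data=~/datasets/mydataset.zip \
    --metrics=fid50k_full,kid50k_full
```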

References:
1. [GANs Trained by a Two Time-Scale Update Rule Converge to a Local Nash Equilibrium](https://arxiv.org/abs/1706.08500), Heusel et al. 2017
2. [Demystifying MMD GANs](https://arxiv.org/abs/1801.01401), Bińkowski et al. 2018
3. [Improved Precision and Recall Metric for Assessing Generative Models](https://arxiv.org/abs/1904.06991), Kynkäänniemi et al. 2019
4. [Improved Techniques for Training GANs](https://arxiv.org/abs/1606.03498), Salimans et al. 2016
5. [A Style-Based Generator Architecture for Generative Adversarial Networks](https://arxiv.org/abs/1812.04948), Karras et al. 2018
2 changes: 1 addition & 1 deletion dnnlib/__init__.py
# Copyright (c) 2021, NVIDIA CORPORATION. All rights reserved.
#
# NVIDIA CORPORATION and its licensors retain all intellectual property
# and proprietary rights in and to this software, related documentation
100 changes: 50 additions & 50 deletions docs/dataset-tool-help.txt
Usage: dataset_tool.py [OPTIONS]

Convert an image dataset into a dataset archive usable with StyleGAN2 ADA
PyTorch.

The input dataset format is guessed from the --source argument:

--source *_lmdb/ - Load LSUN dataset
--source cifar-10-python.tar.gz - Load CIFAR-10 dataset
--source path/ - Recursively load all images from path/
--source dataset.zip - Recursively load all images from dataset.zip

The output dataset format can be either an image folder or a zip archive.
Specifying the output format and path:

--dest /path/to/dir - Save output files under /path/to/dir
--dest /path/to/dataset.zip - Save output files into /path/to/dataset.zip archive

Images within the dataset archive will be stored as uncompressed PNG.

Image scale/crop and resolution requirements:

Output images must be square-shaped and they must all have the same power-
of-two dimensions.

To scale arbitrary input images to a specific width and height, use the
--width and --height options. Output resolution will be either the
original input resolution (if --width/--height was not specified) or the
one specified with --width/--height.

Use the --transform=center-crop or --transform=center-crop-wide options to
apply a center crop transform on the input image. These options should be
used with the --width and --height options. For example:

python dataset_tool.py --source LSUN/raw/cat_lmdb --dest /tmp/lsun_cat \
--transform=center-crop-wide --width 512 --height=384

Options:
--source PATH Directory or archive name for input dataset
[required]
--dest PATH Output directory or archive name for output
dataset [required]
--max-images INTEGER Output only up to `max-images` images
--resize-filter [box|lanczos] Filter to use when resizing images for
output resolution [default: lanczos]
--transform [center-crop|center-crop-wide]
Input crop/resize mode
--width INTEGER Output width
--height INTEGER Output height
--help Show this message and exit.
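
These options compose for custom folders of images; for example, a hypothetical conversion that center-crops to 256x256 and caps the output at 50k images (the paths are illustrative):

```.bash
python dataset_tool.py --source=~/downloads/my-images/ --dest=~/datasets/mydataset.zip \
    --transform=center-crop --width=256 --height=256 --max-images=50000
```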
138 changes: 69 additions & 69 deletions docs/train-help.txt
Usage: train.py [OPTIONS]

Train a GAN using the techniques described in the paper "Training
Generative Adversarial Networks with Limited Data".

Examples:

# Train with custom images using 1 GPU.
python train.py --outdir=~/training-runs --data=~/my-image-folder

# Train class-conditional CIFAR-10 using 2 GPUs.
python train.py --outdir=~/training-runs --data=~/datasets/cifar10.zip \
--gpus=2 --cfg=cifar --cond=1

# Transfer learn MetFaces from FFHQ using 4 GPUs.
python train.py --outdir=~/training-runs --data=~/datasets/metfaces.zip \
--gpus=4 --cfg=paper1024 --mirror=1 --resume=ffhq1024 --snap=10

# Reproduce original StyleGAN2 config F.
python train.py --outdir=~/training-runs --data=~/datasets/ffhq.zip \
--gpus=8 --cfg=stylegan2 --mirror=1 --aug=noaug

Base configs (--cfg):
auto Automatically select reasonable defaults based on resolution
and GPU count. Good starting point for new datasets.
stylegan2 Reproduce results for StyleGAN2 config F at 1024x1024.
paper256 Reproduce results for FFHQ and LSUN Cat at 256x256.
paper512 Reproduce results for BreCaHAD and AFHQ at 512x512.
paper1024 Reproduce results for MetFaces at 1024x1024.
cifar Reproduce results for CIFAR-10 at 32x32.

Transfer learning source networks (--resume):
ffhq256 FFHQ trained at 256x256 resolution.
ffhq512 FFHQ trained at 512x512 resolution.
ffhq1024 FFHQ trained at 1024x1024 resolution.
celebahq256 CelebA-HQ trained at 256x256 resolution.
lsundog256 LSUN Dog trained at 256x256 resolution.
<PATH or URL> Custom network pickle.

Options:
--outdir DIR Where to save the results [required]
--gpus INT Number of GPUs to use [default: 1]
--snap INT Snapshot interval [default: 50 ticks]
--metrics LIST Comma-separated list or "none" [default:
fid50k_full]
--seed INT Random seed [default: 0]
-n, --dry-run Print training options and exit
--data PATH Training data (directory or zip) [required]
--cond BOOL Train conditional model based on dataset
labels [default: false]
--subset INT Train with only N images [default: all]
--mirror BOOL Enable dataset x-flips [default: false]
--cfg [auto|stylegan2|paper256|paper512|paper1024|cifar]
Base config [default: auto]
--gamma FLOAT Override R1 gamma
--kimg INT Override training duration
--batch INT Override batch size
--aug [noaug|ada|fixed] Augmentation mode [default: ada]
--p FLOAT Augmentation probability for --aug=fixed
--target FLOAT ADA target value for --aug=ada
--augpipe [blit|geom|color|filter|noise|cutout|bg|bgc|bgcf|bgcfn|bgcfnc]
Augmentation pipeline [default: bgc]
--resume PKL Resume training [default: noresume]
--freezed INT Freeze-D [default: 0 layers]
--fp32 BOOL Disable mixed-precision training
--nhwc BOOL Use NHWC memory format with FP16
--nobench BOOL Disable cuDNN benchmarking
--workers INT Override number of DataLoader workers
--help Show this message and exit.
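
As a sanity check before launching a long run, the `--dry-run` flag prints the resolved training options without starting training; for example (paths illustrative):

```.bash
python train.py --outdir=~/training-runs --data=~/datasets/mydataset.zip \
    --gpus=2 --cfg=auto --dry-run
```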
9 changes: 7 additions & 2 deletions metrics/frechet_inception_distance.py
# distribution of this software and related documentation without an express
# license agreement from NVIDIA CORPORATION is strictly prohibited.

"""Frechet Inception Distance (FID) from the paper
"GANs trained by a two time-scale update rule converge to a local Nash
equilibrium". Matches the original implementation by Heusel et al. at
https://github.com/bioinf-jku/TTUR/blob/master/fid.py"""

import numpy as np
import scipy.linalg

from . import metric_utils

#----------------------------------------------------------------------------

def compute_fid(opts, max_real, num_gen):
    # Direct TorchScript translation of http://download.tensorflow.org/models/image/imagenet/inception-2015-12-05.tgz
    detector_url = 'https://nvlabs-fi-cdn.nvidia.com/stylegan2-ada-pytorch/pretrained/metrics/inception-2015-12-05.pt'
    detector_kwargs = dict(return_features=True) # Return raw features before the softmax layer.

    mu_real, sigma_real = metric_utils.compute_feature_stats_for_dataset(
        opts=opts, detector_url=detector_url, detector_kwargs=detector_kwargs,
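
For reference, the quantity this function goes on to compute from the real and generated feature statistics is the standard closed-form Fréchet distance between two Gaussians, per Heusel et al. (reference [1] in the README section above); a sketch in that paper's notation, matching the `mu`/`sigma` variable names here:

```latex
\mathrm{FID} = \lVert \mu_{\mathrm{real}} - \mu_{\mathrm{gen}} \rVert_2^2
             + \operatorname{Tr}\left( \Sigma_{\mathrm{real}} + \Sigma_{\mathrm{gen}}
             - 2 \left( \Sigma_{\mathrm{real}} \, \Sigma_{\mathrm{gen}} \right)^{1/2} \right)
```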