This repo contains a simple reimplementation of the Denoising Diffusion Probabilistic Models paper (Ho et al.).
Following [1], we use the Fashion MNIST dataset.
pip install -r requirements.txt
docker build -t ddpm .
docker run -it ddpm
python src/train.py
There are a couple of settings you may want to specify (see the example invocation below):
--batch_size - set depending on your available GPU memory
--num_epochs - number of epochs to train the model
--diffusion_timesteps - how many diffusion steps to make
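For example, a training run might look like the following (the values here are only illustrative, not recommended defaults):
python src/train.py --batch_size 128 --num_epochs 20 --diffusion_timesteps 300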
Load a checkpoint (check Releases).
Run the following code:
from src.diffusion import GaussianDiffusion, linear_beta_schedule
from src.unet import Unet
import torch
import matplotlib.pyplot as plt
# the model configuration must match the one used for training
unet = Unet(channels=1, dim_mults=(1, 2, 4), dim=28)
checkpoint = torch.load("<checkpoint-path>")
unet.load_state_dict(checkpoint["model_state_dict"])
unet.to("cuda:1")  # move the model to whatever device you have available

timesteps = 300
diffusion = GaussianDiffusion(noise_schedule=linear_beta_schedule, timesteps=timesteps)
# sample() returns the images from every denoising step; the last entry is the final batch
result = diffusion.sample(model=unet, image_size=28, batch_size=64, channels=1)

image_index = 8
# rescale a single sample from [-1, 1] to [0, 1] before plotting
image = (result[-1][image_index] + 1) * 0.5
plt.imshow(image.reshape(28, 28, 1), cmap="gray")
plt.show()
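If you want to look at the whole batch instead of a single image, a minimal sketch for saving an 8x8 grid of the 64 generated samples could look like the following (it assumes result[-1] is a NumPy array of shape (batch_size, channels, 28, 28) with values in [-1, 1], as implied by the indexing above):
import numpy as np

# rescale the final denoised batch from [-1, 1] to [0, 1]
samples = (np.asarray(result[-1]) + 1) * 0.5
# lay the 64 images out as an 8x8 grid of 28x28 pixels
grid = samples.reshape(8, 8, 28, 28).transpose(0, 2, 1, 3).reshape(8 * 28, 8 * 28)
plt.imshow(grid, cmap="gray")
plt.axis("off")
plt.savefig("generated_samples.png")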
You can also use DDIM sampling [4], which uses a deterministic update rule and typically needs fewer denoising steps than the ancestral DDPM sampler. To do that, modify the example in the following way:
from src.diffusion import SamplingMethod
result = diffusion.sample(
    model=unet,
    image_size=28,
    batch_size=64,
    channels=1,
    sampling_method=SamplingMethod.DDIM,
)
Fashion MNIST dataset samples
Generated samples
[1] The Annotated Diffusion Model.
[2] Denoising Diffusion Probabilistic Model, in Pytorch.