This repository contains a PyTorch implementation of "Generating Videos with Scene Dynamics" by Carl Vondrick, Hamed Pirsiavash, and Antonio Torralba, published at NIPS 2016. The model learns to generate tiny videos using adversarial networks.
I hope you find this implementation useful.
Below are some selected videos generated during training of the network. Due to hardware limitations, the generated videos are currently not high quality: processing such a large dataset of videos requires a powerful GPU, and unnoticed bugs in the implementation may also be degrading the results.
The code requires a PyTorch installation.
To train a video generator, see main.py. This script constructs the networks, starts multiple threads to load data, and trains the networks.
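To illustrate the adversarial training that main.py performs, here is a minimal sketch of one generator/discriminator update step in PyTorch. The network sizes, latent dimension, learning rates, and the tiny video shape below are illustrative assumptions, not the repository's actual architecture.

```python
import torch
import torch.nn as nn

latent_dim = 16
n_elems = 3 * 4 * 8 * 8  # a flattened tiny clip: (channels, frames, height, width)

# Toy stand-ins for the real video generator and discriminator.
generator = nn.Sequential(nn.Linear(latent_dim, n_elems), nn.Tanh())
discriminator = nn.Sequential(nn.Linear(n_elems, 1))

opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

real = torch.rand(8, n_elems)      # stand-in for a batch of real clips
noise = torch.randn(8, latent_dim)

# Discriminator step: real clips labeled 1, generated clips labeled 0.
opt_d.zero_grad()
fake = generator(noise).detach()
d_loss = bce(discriminator(real), torch.ones(8, 1)) + \
         bce(discriminator(fake), torch.zeros(8, 1))
d_loss.backward()
opt_d.step()

# Generator step: push the discriminator toward outputting 1 on fakes.
opt_g.zero_grad()
g_loss = bce(discriminator(generator(noise)), torch.ones(8, 1))
g_loss.backward()
opt_g.step()
```

The real training loop repeats these two steps over minibatches of video clips; the alternation between the two optimizers is the core of the adversarial setup.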
The training data consists of the golf videos from the original paper. Once you download the dataset, make sure you update self.data_root in data_loader.py. The dataset can be downloaded from the website listed in the Reference section.
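For orientation, the attribute to edit would sit in the loader's constructor, roughly as sketched below. Only the self.data_root attribute name comes from the repository's instructions; the class name, the example path, and the helper method are hypothetical.

```python
import os

class VideoFolderDataset:
    """Illustrative stand-in for the loader class in data_loader.py."""

    def __init__(self):
        # Point this at the directory holding the downloaded golf videos.
        self.data_root = os.path.expanduser("~/datasets/golf")

    def list_videos(self):
        # Scans data_root for clips; returns an empty list if the path
        # has not been set up yet.
        if os.path.isdir(self.data_root):
            return sorted(os.listdir(self.data_root))
        return []

ds = VideoFolderDataset()
print(ds.data_root)
```

If the path is wrong, the loader threads will find no files, so checking that list_videos() returns a non-empty list is a quick sanity test before starting a long training run.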