forked from batsa003/videogan

PyTorch implementation of VideoGAN for Generating Videos with Scene Dynamics

Lan-st/videogan

 
 


PyTorch Implementation of Generating Videos with Scene Dynamics.

This repository contains a PyTorch implementation of Generating Videos with Scene Dynamics by Carl Vondrick, Hamed Pirsiavash, and Antonio Torralba, published at NIPS 2016. The model learns to generate tiny videos using adversarial networks.
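The paper's core idea is a two-stream generator: a 3D-deconvolution stream produces a moving foreground and a blending mask, while a 2D stream produces a static background; the mask composites the two into a video. A minimal sketch of that architecture is below. The layer widths (`ngf`) and exact layer stack are illustrative assumptions, not necessarily what this repository or the paper uses verbatim.

```python
import torch
import torch.nn as nn

class VideoGenerator(nn.Module):
    """Two-stream generator sketch after Vondrick et al.: a 3D foreground
    stream plus a static 2D background, blended by a learned mask.
    Layer widths here are illustrative, not the paper's exact config."""

    def __init__(self, z_dim=100, ngf=64):
        super().__init__()
        # shared foreground trunk: z -> (ngf, 16, 32, 32) spatio-temporal features
        self.fg_trunk = nn.Sequential(
            nn.ConvTranspose3d(z_dim, ngf * 8, (2, 4, 4), 1, 0), nn.ReLU(True),
            nn.ConvTranspose3d(ngf * 8, ngf * 4, 4, 2, 1), nn.ReLU(True),
            nn.ConvTranspose3d(ngf * 4, ngf * 2, 4, 2, 1), nn.ReLU(True),
            nn.ConvTranspose3d(ngf * 2, ngf, 4, 2, 1), nn.ReLU(True),
        )
        # foreground pixels in [-1, 1] and a blending mask in [0, 1]
        self.fg_head = nn.Sequential(nn.ConvTranspose3d(ngf, 3, 4, 2, 1), nn.Tanh())
        self.mask_head = nn.Sequential(nn.ConvTranspose3d(ngf, 1, 4, 2, 1), nn.Sigmoid())
        # background stream: a single static 64x64 image, repeated over time
        self.bg_net = nn.Sequential(
            nn.ConvTranspose2d(z_dim, ngf * 8, 4, 1, 0), nn.ReLU(True),
            nn.ConvTranspose2d(ngf * 8, ngf * 4, 4, 2, 1), nn.ReLU(True),
            nn.ConvTranspose2d(ngf * 4, ngf * 2, 4, 2, 1), nn.ReLU(True),
            nn.ConvTranspose2d(ngf * 2, ngf, 4, 2, 1), nn.ReLU(True),
            nn.ConvTranspose2d(ngf, 3, 4, 2, 1), nn.Tanh(),
        )

    def forward(self, z):
        b = z.size(0)
        h = self.fg_trunk(z.view(b, -1, 1, 1, 1))
        fg = self.fg_head(h)                   # (B, 3, 32, 64, 64)
        mask = self.mask_head(h)               # (B, 1, 32, 64, 64)
        bg = self.bg_net(z.view(b, -1, 1, 1))  # (B, 3, 64, 64)
        bg = bg.unsqueeze(2)                   # broadcast over the time axis
        return mask * fg + (1 - mask) * bg     # 32-frame, 64x64 video
```

Sampling a video is then just `VideoGenerator()(torch.randn(batch, 100))`, giving a tensor of shape `(batch, 3, 32, 64, 64)` with values in `[-1, 1]`.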

I hope you find this implementation useful.

Example Generations

Below are some videos selected during training of the network. Due to hardware limitations, the generated videos are currently not high quality: processing a large dataset of videos requires a powerful GPU, and unnoticed bugs in the implementation may also be degrading the results.

Training

The code requires a PyTorch installation.

To train a generator for video, see main.py. This file constructs the networks, starts several worker processes to load data, and trains the networks.
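The load-in-parallel-and-train loop can be sketched as follows. The in-memory tensors here are a hypothetical stand-in for the real golf-video dataset (the repo's actual loader reads clips from disk), and the training step is only indicated in comments:

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

# dummy stand-in for the golf-video clips: (N, C, T, H, W) tensors in [-1, 1]
videos = torch.rand(8, 3, 32, 64, 64) * 2 - 1
loader = DataLoader(TensorDataset(videos), batch_size=4,
                    shuffle=True, num_workers=2)  # workers load batches in parallel

for (real_batch,) in loader:
    # a full training step would update the discriminator on real vs. generated
    # clips here, then update the generator to fool the discriminator
    print(real_batch.shape)  # torch.Size([4, 3, 32, 64, 64])
```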

Data

The training data consists of the golf videos from the original paper, which can be downloaded from the website listed in the Reference section. Once you have downloaded the dataset, make sure you update self.data_root in data_loader.py to point to it.
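For reference, a minimal sketch of that configuration (the class name and everything besides the `data_root` attribute are hypothetical, not the repo's actual loader):

```python
import os

class VideoFolderDataset:
    """Hypothetical loader sketch: data_loader.py keeps the dataset
    location in a data_root attribute like this one."""

    def __init__(self, data_root='/path/to/golf/videos'):
        self.data_root = data_root  # point this at your downloaded dataset
        # a real loader would index the video files here, e.g.:
        # self.files = sorted(os.listdir(self.data_root))

ds = VideoFolderDataset(data_root='/tmp/golf')
```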

Reference:

http://carlvondrick.com/tinyvideo/
