Latte: Latent Diffusion Transformer for Video Generation
Official PyTorch Implementation

arXiv · Project Page · HF Demo · Hugging Face Spaces · Slack

This repo contains PyTorch model definitions, pre-trained weights, training/sampling code and evaluation code for our paper Latte: Latent Diffusion Transformer for Video Generation.

Latte: Latent Diffusion Transformer for Video Generation
Xin Ma, Yaohui Wang*, Xinyuan Chen, Gengyun Jia, Ziwei Liu, Yuan-Fang Li, Cunjian Chen, Yu Qiao (*Corresponding Author & Project Lead)

latte-1.mp4

News

  • (🔥 New) Jul 11, 2024 💥 Latte-1 is now integrated into diffusers. Thanks to @yiyixuxu, @sayakpaul, @a-r-r-o-w and @DN6. You can easily run Latte using the following code. We also support inference with 4/8-bit quantization, which can reduce GPU memory from 17 GB to 9 GB; please refer to this tutorial for more information, and see the hedged quantization sketch after this list.
# Please update diffusers to at least 0.30.0
from diffusers import LattePipeline
from diffusers.models import AutoencoderKLTemporalDecoder
from torchvision.utils import save_image  # handy for the text-to-image case (video_length=1)
import torch
import imageio

torch.manual_seed(0)

device = "cuda" if torch.cuda.is_available() else "cpu"
video_length = 16  # 1 (text-to-image) or 16 (text-to-video)
pipe = LattePipeline.from_pretrained("maxin-cn/Latte-1", torch_dtype=torch.float16).to(device)

# Use the temporal decoder of the VAE for decoding the video latents
vae = AutoencoderKLTemporalDecoder.from_pretrained("maxin-cn/Latte-1", subfolder="vae_temporal_decoder", torch_dtype=torch.float16).to(device)
pipe.vae = vae

prompt = "a cat wearing sunglasses and working as a lifeguard at pool."
videos = pipe(prompt, video_length=video_length, output_type='pt').frames.cpu()

# Export the first sample to an mp4; with output_type='pt' the frames are float
# tensors in [0, 1] of shape (frames, channels, height, width)
frames = (videos[0].permute(0, 2, 3, 1) * 255).to(torch.uint8).numpy()
imageio.mimwrite("latte_sample.mp4", frames, fps=8)
  • (🔥 New) Jun 26, 2024 💥 Latte is supported by VideoSys, a user-friendly, high-performance infrastructure for video generation.

  • (🔥 New) May 23, 2024 💥 Latte-1 is released! The pre-trained model can be downloaded here. We support both T2V and T2I; please run bash sample/t2v.sh and bash sample/t2i.sh respectively.

  • (🔥 New) Feb 24, 2024 💥 We are very grateful that researchers and developers like our work. We will continue to update our LatteT2V model, hoping that our efforts can help the community develop. Our Latte Discord channel has been created for discussions; contributions are welcome.

  • (🔥 New) Jan 9, 2024 💥 An updated LatteT2V model initialized with PixArt-α is released; the checkpoint can be found here.

  • (🔥 New) Oct 31, 2023 💥 The training and inference code is released. All checkpoints (including FaceForensics, SkyTimelapse, UCF101, and Taichi-HD) can be found here. In addition, the LatteT2V inference code is provided.
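
To complement the Jul 11 item above, here is a minimal, hedged sketch of what 4/8-bit inference can look like: it quantizes the T5 text encoder (usually the largest memory consumer) to 8-bit with bitsandbytes and passes it into the pipeline. This is not the official tutorial, the exact memory savings depend on your setup, and it assumes diffusers>=0.30.0, transformers, accelerate, and bitsandbytes are installed.

# Hedged sketch, not the official tutorial: 8-bit T5 text encoder via bitsandbytes.
import torch
from diffusers import LattePipeline
from transformers import BitsAndBytesConfig, T5EncoderModel

text_encoder = T5EncoderModel.from_pretrained(
    "maxin-cn/Latte-1",
    subfolder="text_encoder",
    quantization_config=BitsAndBytesConfig(load_in_8bit=True),
    device_map="auto",  # bitsandbytes places the quantized weights on the GPU
)

pipe = LattePipeline.from_pretrained(
    "maxin-cn/Latte-1",
    text_encoder=text_encoder,
    torch_dtype=torch.float16,
)
pipe.to("cuda")  # moves the remaining components; the 8-bit encoder stays where it is

videos = pipe(
    "a cat wearing sunglasses and working as a lifeguard at pool.",
    video_length=16,
    output_type="pt",
).frames.cpu()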

Setup

First, download and set up the repo:

git clone https://github.com/Vchitect/Latte
cd Latte

We provide an environment.yml file that can be used to create a Conda environment. If you only want to run pre-trained models locally on CPU, you can remove the cudatoolkit and pytorch-cuda requirements from the file.

conda env create -f environment.yml
conda activate latte

Sampling

You can sample from our pre-trained Latte models with sample.py; the weights can be found here. The script has various arguments to adjust the number of sampling steps, change the classifier-free guidance scale, etc. For example, to sample from our model trained on FaceForensics, you can use:

bash sample/ffs.sh

Or, if you want to sample hundreds of videos, you can use the following script with PyTorch DDP:

bash sample/ffs_ddp.sh

If you want to try generating videos from text, just run bash sample/t2v.sh. All related checkpoints will download automatically.

If you would like to measure the quantitative metrics of your generated results, please refer to here.
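
The sampling hyperparameters used by the scripts above (number of sampling steps, classifier-free guidance scale, and so on) live in the YAML configs that those scripts point at. Below is a hedged sketch of one way to inspect or tweak them before sampling; it assumes the configs are read with OmegaConf, and the file name and key names shown are illustrative rather than the repo's actual ones.

# Hedged sketch: inspect and override sampling options in a Latte YAML config.
# The path and key names below are assumptions for illustration only.
from omegaconf import OmegaConf

cfg = OmegaConf.load("./configs/ffs/ffs_sample.yaml")  # hypothetical sample config
print(OmegaConf.to_yaml(cfg))                          # list every tunable argument

# cfg.num_sampling_steps = 50    # hypothetical key: trade quality for speed
# cfg.guidance_scale = 7.5       # hypothetical key: classifier-free guidance strength
# OmegaConf.save(cfg, "./configs/ffs/ffs_sample_fast.yaml")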

Training

We provide a training script for Latte in train.py. The structure of the datasets can be found here. This script can be used to train class-conditional and unconditional Latte models. To launch Latte (256x256) training with N GPUs on the FaceForensics dataset:

torchrun --nnodes=1 --nproc_per_node=N train.py --config ./configs/ffs/ffs_train.yaml

Or, if you have a cluster that uses Slurm, you can also train Latte using the following script:

sbatch slurm_scripts/ffs.slurm

We also provide a video-image joint training script, train_with_img.py. Like train.py, it can be used to train class-conditional and unconditional Latte models. For example, to train Latte on the FaceForensics dataset, you can use:

torchrun --nnodes=1 --nproc_per_node=N train_with_img.py --config ./configs/ffs/ffs_img_train.yaml

If you are familiar with PyTorch Lightning, you can also use the training scripts train_pl.py and train_with_img_pl.py provided by @zhang.haojie:

python train_pl.py --config ./configs/ffs/ffs_train.yaml

or

python train_with_img_pl.py --config ./configs/ffs/ffs_img_train.yaml

These scripts automatically detect the available GPUs and use distributed training.
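
The automatic device handling comes from PyTorch Lightning itself rather than anything Latte-specific. A minimal sketch (not the repo's actual code) of a Trainer configured this way:

# Minimal sketch, not the repo's code: a Lightning Trainer that picks up every
# visible GPU and switches to DDP automatically when more than one is found.
import lightning.pytorch as pl  # use `import pytorch_lightning as pl` for the 1.x package

trainer = pl.Trainer(
    accelerator="auto",  # CUDA if available, otherwise CPU
    devices="auto",      # claim all visible GPUs
    strategy="auto",     # Lightning picks DDP for multi-GPU runs
)
# trainer.fit(model, datamodule)  # model / datamodule come from the training script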

Contact Us

Yaohui Wang: wangyaohui@pjlab.org.cn
Xin Ma: xin.ma1@monash.edu

Citation

If you find this work useful for your research, please consider citing it.

@article{ma2024latte,
  title={Latte: Latent Diffusion Transformer for Video Generation},
  author={Ma, Xin and Wang, Yaohui and Jia, Gengyun and Chen, Xinyuan and Liu, Ziwei and Li, Yuan-Fang and Chen, Cunjian and Qiao, Yu},
  journal={arXiv preprint arXiv:2401.03048},
  year={2024}
}

Acknowledgments

Latte has been greatly inspired by the following amazing works and teams: DiT and PixArt-α. We thank all the contributors for open-sourcing their work.

License

The code and model weights are licensed under LICENSE.
