
CogVideo && CogVideoX GUI Improvement

Read in Chinese

Read in Japanese

Experience the CogVideoX-5B model online at 🤗 Huggingface Space or 🤖 ModelScope Space

📚 View the paper and user guide

👋 Join our WeChat and Discord

📍 Visit QingYing and API Platform to experience larger-scale commercial video generation models.

Update and News

  • 🔥🔥 News: 2024/8/27: The CogVideoX-2B model's open-source license has been changed to the Apache 2.0 License.
  • 🔥🔥 News: 2024/8/27: We have open-sourced a larger model in the CogVideoX series, CogVideoX-5B. We have significantly optimized the model's inference performance, greatly lowering the inference threshold: CogVideoX-2B can run on older GPUs such as the GTX 1080 Ti, and CogVideoX-5B on mid-range GPUs such as the RTX 3060. Please update and install the dependencies according to the requirements, and refer to the cli_demo for inference code.
  • 🔥 News: 2024/8/20: VEnhancer now supports enhancing videos generated by CogVideoX, achieving higher resolution and higher quality video rendering. We welcome you to try it out by following the tutorial.
  • 🔥 News: 2024/8/15: The SwissArmyTransformer dependency in CogVideoX has been upgraded to 0.4.12. Fine-tuning no longer requires installing SwissArmyTransformer from source. Additionally, the tiled VAE technique has been applied in the diffusers implementation. Please install the diffusers and accelerate libraries from source. Inference for CogVideoX now requires only 12GB of VRAM. The inference code has changed; please check cli_demo.
  • 🔥 News: 2024/8/12: The CogVideoX paper has been uploaded to arXiv. Feel free to check out the paper.
  • 🔥 News: 2024/8/7: CogVideoX has been integrated into diffusers version 0.30.0. Inference can now be performed on a single 3090 GPU. For more details, please refer to the code.
  • 🔥 News: 2024/8/6: We have also open-sourced the 3D Causal VAE used in CogVideoX-2B, which can reconstruct videos almost losslessly.
  • 🔥 News: 2024/8/6: We have open-sourced CogVideoX-2B, the first model in the CogVideoX series of video generation models.
  • 🌱 Source: 2022/5/19: We open-sourced CogVideo (now available on the CogVideo branch), the first open-source pretrained text-to-video model; see the ICLR'23 CogVideo paper for technical details.

More powerful models with larger parameter sizes are on the way~ Stay tuned!


Quick Start

Prompt Optimization

Before running the model, please refer to this guide to see how we use large models such as GLM-4 (or comparable products such as GPT-4) to optimize prompts. This is crucial because the model is trained on long prompts, and a good prompt directly impacts the quality of the generated video. A prompt-enhancing template to paste into LLM web interfaces can be generated in the GUI.
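As a rough illustration of that workflow, the sketch below sends a short user prompt to an LLM together with an enhancement instruction. The model name, system instruction, and use of an OpenAI-compatible client are assumptions for illustration only; the maintained templates live in the guide and the GUI.

```python
# Minimal sketch of LLM-based prompt enhancement.
# Assumptions: an OpenAI-compatible endpoint is configured (e.g. via OPENAI_API_KEY),
# and the model name and system instruction below are placeholders.
from openai import OpenAI

client = OpenAI()

SYSTEM_PROMPT = (
    "You are a prompt engineer for a text-to-video model. Rewrite the user's short "
    "prompt into one detailed paragraph describing subject, motion, camera, lighting, "
    "and style. Keep it under 226 tokens."
)

def enhance_prompt(short_prompt: str, model: str = "gpt-4o") -> str:
    """Expand a short user prompt into the long-form style CogVideoX was trained on."""
    response = client.chat.completions.create(
        model=model,
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": short_prompt},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(enhance_prompt("a dog surfing at sunset"))
```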

SAT

Please make sure your Python version is between 3.10 and 3.12, inclusive.

Follow the instructions in sat_demo, which contains the inference and fine-tuning code for the SAT weights. This code is recommended as the basis for improvements on the CogVideoX model structure, since it supports rapid prototyping and development.

Diffusers

Please make sure your Python version is between 3.10 and 3.12, inclusive.

conda create -n cog python=3.10
conda activate cog
pip install -r requirements.txt

Then follow diffusers_demo: A more detailed explanation of the inference code, mentioning the significance of common parameters.
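A minimal diffusers inference run looks roughly like the following sketch; the model id, prompt, and parameter values are illustrative, and diffusers_demo remains the maintained reference.

```python
import torch
from diffusers import CogVideoXPipeline
from diffusers.utils import export_to_video

# Load the 2B model in FP16 (BF16 is recommended for the 5B model instead).
pipe = CogVideoXPipeline.from_pretrained("THUDM/CogVideoX-2b", torch_dtype=torch.float16)

# VRAM optimizations discussed in the Data Explanation section below:
# offload idle submodules to the CPU and decode video latents in tiles.
pipe.enable_model_cpu_offload()
pipe.vae.enable_tiling()

prompt = "A panda playing guitar in a bamboo forest, cinematic lighting."
video = pipe(
    prompt=prompt,
    num_inference_steps=50,   # the speed figures below assume 50 steps
    guidance_scale=6.0,
    num_frames=49,            # 6 seconds at 8 fps, plus the initial frame
).frames[0]

export_to_video(video, "output.mp4", fps=8)
```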

Gallery

CogVideoX-5B

5b_1.mp4
5b_2.mp4
5b_3.mp4
5b_4.mp4
5b_5.mp4
5b_6.mp4
5b_7.mp4
5b_8.mp4

CogVideoX-2B

1.mp4
2.mp4
3.mp4
4.mp4

To view the prompts used for the gallery videos, please click here

Model Introduction

CogVideoX is an open-source version of the video generation model originating from QingYing. The table below displays the list of video generation models we currently offer, along with their foundational information.

| Model Name | CogVideoX-2B | CogVideoX-5B |
|---|---|---|
| Model Description | Entry-level model, balancing compatibility. Low cost for running and secondary development. | Larger model with higher video generation quality and better visual effects. |
| Inference Precision | FP16* (Recommended), BF16, FP32, FP8*, INT8, no support for INT4 | BF16 (Recommended), FP16, FP32, FP8*, INT8, no support for INT4 |
| Single GPU VRAM Consumption | FP16: 18GB using SAT / 12.5GB* using diffusers<br>INT8: 7.8GB* using diffusers with torchao | BF16: 26GB using SAT / 20.7GB* using diffusers<br>INT8: 11.4GB* using diffusers with torchao |
| Multi-GPU Inference VRAM Consumption | FP16: 10GB* using diffusers | BF16: 15GB* using diffusers |
| Inference Speed (Step = 50, FP/BF16) | Single A100: ~90 seconds<br>Single H100: ~45 seconds<br>Single 4090: ~120 seconds | Single A100: ~180 seconds<br>Single H100: ~90 seconds<br>Single 4090: ~330 seconds |
| Fine-tuning Precision | FP16 | BF16 |
| Fine-tuning VRAM Consumption (per GPU) | 47 GB (bs=1, LORA)<br>61 GB (bs=2, LORA)<br>62 GB (bs=1, SFT) | 63 GB (bs=1, LORA)<br>80 GB (bs=2, LORA)<br>75 GB (bs=1, SFT) |
| Prompt Language | English* | English* |
| Prompt Length Limit | 226 Tokens | 226 Tokens |
| Video Length | 6 Seconds | 6 Seconds |
| Frame Rate | 8 Frames per Second | 8 Frames per Second |
| Video Resolution | 720 x 480, no support for other resolutions (including fine-tuning) | 720 x 480, no support for other resolutions (including fine-tuning) |
| Positional Encoding | 3d_sincos_pos_embed | 3d_rope_pos_embed |
| Download Page (Diffusers) | 🤗 HuggingFace<br>🤖 ModelScope<br>🟣 WiseModel | 🤗 HuggingFace<br>🤖 ModelScope<br>🟣 WiseModel |
| Download Page (SAT) | SAT | SAT |

Data Explanation

  • When testing with the diffusers library, the enable_model_cpu_offload() option and pipe.vae.enable_tiling() optimization were enabled. Actual VRAM/memory usage has not been tested on devices other than NVIDIA A100 / H100; generally, this setup can be adapted to all devices with the NVIDIA Ampere architecture and above. If these optimizations are disabled, VRAM usage increases significantly, with peak VRAM approximately 3 times the value in the table.
  • When performing multi-GPU inference, the enable_model_cpu_offload() optimization needs to be disabled.
  • Using an INT8 model reduces inference speed. This trade-off accommodates GPUs with lower VRAM, allowing inference to run with minimal loss of video quality at a significantly reduced speed.
  • The 2B model is trained in FP16 precision and the 5B model in BF16 precision. It is recommended to run inference in the precision used for training.
  • PyTorchAO and Optimum-quanto can be used to quantize the text encoder, transformer, and VAE modules to lower CogVideoX's memory requirements, making it possible to run the model on a free-tier T4 Colab or on GPUs with smaller VRAM (a minimal sketch follows this list). TorchAO quantization is also fully compatible with torch.compile, which allows for much faster inference. FP8 precision requires NVIDIA H100 or newer devices and a source installation of the torch, torchao, diffusers, and accelerate Python packages. CUDA 12.4 is recommended.
  • Inference speed testing also used the VRAM optimization scheme above; without VRAM optimization, inference is about 10% faster. Only the diffusers versions of the models support quantization.
  • The model only supports English input; prompts in other languages can be translated into English during refinement with a large model.
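As a rough sketch of the torchao route mentioned above, the snippet below applies int8 weight-only quantization to the transformer before inference. The model id and parameters are illustrative; as noted, a source installation of torch, torchao, diffusers, and accelerate may be required depending on your setup.

```python
import torch
from diffusers import CogVideoXPipeline
from torchao.quantization import quantize_, int8_weight_only

# Load the 5B model in BF16, then quantize the transformer weights to int8
# to reduce peak VRAM (at the cost of slower inference, as noted above).
pipe = CogVideoXPipeline.from_pretrained("THUDM/CogVideoX-5b", torch_dtype=torch.bfloat16)
quantize_(pipe.transformer, int8_weight_only())

# Same VRAM optimizations as in the single-GPU measurements.
pipe.enable_model_cpu_offload()
pipe.vae.enable_tiling()

video = pipe(
    prompt="A lighthouse on a cliff during a storm, dramatic waves.",
    num_inference_steps=50,
    guidance_scale=6.0,
    num_frames=49,
).frames[0]
```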

Friendly Links

We warmly welcome contributions from the community and actively contribute to the open-source ecosystem. The following works have already been adapted for CogVideoX, and we invite everyone to use them:

  • Xorbits Inference: A powerful and comprehensive distributed inference framework, allowing you to easily deploy your own models or the latest cutting-edge open-source models with just one click.
  • VideoSys: VideoSys provides a user-friendly, high-performance infrastructure for video generation, with full pipeline support and continuous integration of the latest models and techniques.

Project Structure

This open-source repository will guide developers to quickly get started with the basic usage and fine-tuning examples of the CogVideoX open-source model.

Inference

To run the CogVideoX demo with local models, started from a single batch script, follow these steps:

  1. Directory Structure: Ensure your directory structure looks like this:

    CogVideo/
    ├── models/
    ├── output/
    ├── CogVideoUI/
    │   └── inference/
    │       └── streamlit_web_demo.py
    └── run_cog_video.bat
    
  2. Local Models: Place your downloaded models in the models/ directory. The script will automatically detect and list available models (a hypothetical detection sketch follows these steps).

  3. Batch Script: The run_cog_video.bat file should be in the root CogVideo/ directory. Its contents should be:

    @echo off
    title cog
    
    :: Change to the directory containing the batch file
    cd /d "%~dp0"
    
    :: Activate the Conda environment named "cog"
    call conda activate cog
    
    :: Run the Python script
    streamlit run "CogVideoUI\inference\streamlit_web_demo.py"
    
    :: Deactivate the Conda environment
    call conda deactivate
    
    pause
  4. Running the Demo:

    • Double-click the run_cog_video.bat file.
    • This will open a command prompt, activate the "cog" Conda environment, and start the Streamlit web application.
    • The application will open in your default web browser.
  5. Using the Web Interface:

    • The interface will display a list of available local models.
    • If no models are found, you'll be prompted to download one.
    • Enter your prompt and adjust settings as needed.
    • Click "Generate Video" to create your video.
  6. Output: Generated videos will be saved in the output/ directory, organized by timestamp.
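For illustration, the local model detection mentioned in step 2 can be as simple as scanning models/ for diffusers-format folders. This is a hypothetical sketch, not the exact logic in streamlit_web_demo.py, and it assumes each model folder contains a model_index.json at its root.

```python
from pathlib import Path

def list_local_models(models_dir: str = "models") -> list[str]:
    """Return subdirectories of models/ that look like diffusers checkpoints.

    Assumption: a diffusers-format model folder has a model_index.json at its root.
    """
    root = Path(models_dir)
    if not root.is_dir():
        return []
    return sorted(p.name for p in root.iterdir() if (p / "model_index.json").is_file())

if __name__ == "__main__":
    print(list_local_models())  # e.g. ['CogVideoX-2b', 'CogVideoX-5b']
```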

  • cli_demo: A more detailed explanation of the inference code, including the significance of common parameters.
  • cli_demo_quantization: Quantized model inference code that can run on devices with lower memory. You can also modify this code to support running CogVideoX models in FP8 precision.
  • diffusers_vae_demo: Code for running VAE inference separately.
  • space demo: The same GUI code as used in the Huggingface Space, with frame interpolation and super-resolution tools integrated.
  • convert_demo: How to convert user input into long-form input suitable for CogVideoX. Since CogVideoX is trained on long texts, we need to transform the input text distribution to match the training data using an LLM. The script defaults to using GLM-4, but it can be replaced with GPT, Gemini, or any other large language model.
  • gradio_web_demo: A simple Gradio web application demonstrating how to use the CogVideoX-2B / 5B model to generate videos. Similar to our Huggingface Space, you can use this script to run a simple web application for video generation.
cd inference
# For Linux and Windows users
python gradio_web_demo.py

# For macOS users with Apple Silicon (Intel is not supported); this may be about 20x slower than an RTX 4090
PYTORCH_ENABLE_MPS_FALLBACK=1 python gradio_web_demo.py

sat

  • sat_demo: Contains the inference and fine-tuning code for the SAT weights. This code is recommended as the basis for improvements on the CogVideoX model structure, since it supports rapid prototyping and development.

Tools

This folder contains some tools for model conversion / caption generation, etc.

  • convert_weight_sat2hf: Convert SAT model weights to Huggingface model weights.
  • caption_demo: Caption tool, a model that understands videos and describes them in text.

CogVideo(ICLR'23)

The official code for the paper CogVideo: Large-scale Pretraining for Text-to-Video Generation via Transformers is on the CogVideo branch.

CogVideo is able to generate relatively high-frame-rate videos. A 4-second clip of 32 frames is shown below.

High-frame-rate sample

Intro images

cogvideo.mp4

The demo for CogVideo is at https://models.aminer.cn/cogvideo, where you can get hands-on practice on text-to-video generation. The original input is in Chinese.

Citation

🌟 If you find our work helpful, please leave us a star and cite our paper.

@article{yang2024cogvideox,
  title={CogVideoX: Text-to-Video Diffusion Models with An Expert Transformer},
  author={Yang, Zhuoyi and Teng, Jiayan and Zheng, Wendi and Ding, Ming and Huang, Shiyu and Xu, Jiazheng and Yang, Yuanming and Hong, Wenyi and Zhang, Xiaohan and Feng, Guanyu and others},
  journal={arXiv preprint arXiv:2408.06072},
  year={2024}
}
@article{hong2022cogvideo,
  title={CogVideo: Large-scale Pretraining for Text-to-Video Generation via Transformers},
  author={Hong, Wenyi and Ding, Ming and Zheng, Wendi and Liu, Xinghan and Tang, Jie},
  journal={arXiv preprint arXiv:2205.15868},
  year={2022}
}

Open Source Project Plan

  • CogVideoX Model Open Source
    • CogVideoX Model Inference Example (CLI / Web Demo)
    • CogVideoX Online Experience Example (Huggingface Space)
    • CogVideoX Open Source Model API Interface Example (Huggingface)
    • CogVideoX Model Fine-Tuning Example (SAT)
    • CogVideoX Model Fine-Tuning Example (Huggingface Diffusers)
    • CogVideoX-5B Open Source (Adapted to CogVideoX-2B Suite)
    • CogVideoX Technical Report Released
    • CogVideoX Technical Explanation Video
  • CogVideoX Peripheral Tools
    • Basic Video Super-Resolution / Frame Interpolation Suite
    • Inference Framework Adaptation
    • ComfyUI Full Ecosystem Tools

We welcome your contributions! You can click here for more information.

License Agreement

The code in this repository is released under the Apache 2.0 License.

The CogVideoX-2B model (including its corresponding Transformers module and VAE module) is released under the Apache 2.0 License.

The CogVideoX-5B model (Transformers module) is released under the CogVideoX LICENSE.
