IDArb: Intrinsic Decomposition for Arbitrary Number of Input Views and Illuminations

     

Zhibing Li¹, Tong Wu¹†, Jing Tan¹, Mengchen Zhang²,³, Jiaqi Wang³, Dahua Lin¹,³†
¹The Chinese University of Hong Kong  ²Zhejiang University  ³Shanghai AI Laboratory
†: Corresponding Authors

(Teaser video: v2_1.mp4)

  • Release inference code and pretrained checkpoints.
  • Release training dataset.
  • Release training code.

News

  • [01.24] We have released the training code!
  • [12.24] We have released the complete dataset and rendering script.

Install

Our environment has been tested with CUDA 11.8 on an A100 GPU.

git clone git@github.com:Lizb6626/IDArb.git && cd IDArb
conda create -n idarb python==3.8 -y
conda activate idarb
conda install pytorch==2.2.1 torchvision==0.17.1 torchaudio==2.2.1 pytorch-cuda=11.8 -c pytorch -c nvidia
pip install -r requirements.txt
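
As a quick sanity check (not part of the official setup), you can confirm that the installed PyTorch build sees a CUDA device before running anything else:

# Print the PyTorch version and whether a CUDA device is visible
python -c "import torch; print(torch.__version__, torch.cuda.is_available())"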

Inference

Single Image Intrinsic Decomposition

python main.py --data_dir example/single --output_dir output/single --input_type single
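
To try your own photo, the sketch below assumes main.py picks up the images it finds under --data_dir (the folder and file names are hypothetical; mirror the layout of example/single if in doubt):

# Copy an image into a fresh input folder and decompose it (hypothetical paths)
mkdir -p example/my_single
cp /path/to/photo.png example/my_single/
python main.py --data_dir example/my_single --output_dir output/my_single --input_type single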

Multi-view Intrinsic Decomposition

For multi-view intrinsic decomposition, camera poses can be incorporated by enabling the --cam option.

# --num_views: number of input views

# Without camera pose information
python main.py --data_dir example/multi --output_dir output/multi --input_type multi --num_views 4

# With camera pose information
python main.py --data_dir example/multi --output_dir output/multi --input_type multi --num_views 4 --cam
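
To process several multi-view scenes in one go, a simple shell loop over per-scene folders works; the layout assumed here (one scene per subfolder of example/multi_scenes) is hypothetical:

# Run IDArb on every scene folder in turn (hypothetical layout: one scene per subfolder)
for scene in example/multi_scenes/*/; do
    name=$(basename "$scene")
    python main.py --data_dir "$scene" --output_dir "output/$name" --input_type multi --num_views 4 --cam
done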

Training

Dataset

The training data is a combination of our Arb-Objaverse dataset, ABO, and G-Objaverse. The curated version is available for download; we are also working on releasing the uncurated version, which contains renderings of 347k 3D models.

Training Script

To train the model, update the dataset_root in the configuration file configs/train.yaml. Then, run the following command:

accelerate launch --config_file configs/acc/8gpu.yaml train.py --config configs/train.yaml
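
Putting the two steps together: the sed edit below assumes dataset_root is a top-level key in configs/train.yaml, and the data path is hypothetical:

# Point the config at a local copy of the dataset, then launch 8-GPU training
sed -i 's|^dataset_root:.*|dataset_root: /data/idarb_dataset|' configs/train.yaml
accelerate launch --config_file configs/acc/8gpu.yaml train.py --config configs/train.yaml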

Acknowledgement

This project relies on many amazing repositories. Thanks to the authors for sharing their code and data.

Citation

@article{li2024idarb,
  author    = {Li, Zhibing and Wu, Tong and Tan, Jing and Zhang, Mengchen and Wang, Jiaqi and Lin, Dahua},
  title     = {IDArb: Intrinsic Decomposition for Arbitrary Number of Input Views and Illuminations},
  journal   = {arXiv preprint arXiv:2412.12083},
  year      = {2024},
}
