Yehonathan Litman, Or Patashnik, Kangle Deng, Aviral Agrawal, Rushikesh Zawar,
Fernando De la Torre, Shubham Tulsiani
```bash
conda create -n materialfusion python=3.10
conda activate materialfusion
pip install torch==2.1.2+cu118 torchvision==0.16.2+cu118 --extra-index-url https://download.pytorch.org/whl/cu118
conda install -c "nvidia/label/cuda-11.8.0" cuda-toolkit
pip install ninja git+https://github.com/NVlabs/tiny-cuda-nn/#subdirectory=bindings/torch
pip install git+https://github.com/NVlabs/nvdiffrast/
pip install imageio PyOpenGL glfw xatlas gdown wget kornia diffusers["torch"] transformers bpy numpy==1.26.4 scipy lpips
```
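As an optional sanity check before building the CUDA extensions, you can verify that the CUDA-enabled PyTorch build is active:

```bash
python -c "import torch; print(torch.__version__, torch.cuda.is_available())"
```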
BlenderVault is a curated dataset containing 11,709 synthetic Blender objects designed by artists. The objects are diverse and contain high-quality property assets that are extracted and used to generate training data for our material diffusion prior. The object files are available for download here. Due to its size, the dataset is split into 12 partitions.
[Video: blendervault_video.mp4]
We evaluate MaterialFusion and our material diffusion prior on a mixture of synthetic and real image data. To download the NeRF-Synthetic, NeRFactor, Stanford-ORB, MaterialFusion, and StableMaterial datasets, run the `data/download_datasets.py` script. The configs that correspond to each object and dataset are included in `configs/materialfusion` and are needed by MaterialFusion.
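Assuming the script takes no arguments and is run from the repository root, the download step would look like:

```bash
python data/download_datasets.py
```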
If you would like to download the datasets individually, follow the links below and put them in the `data` directory:
- NeRF-Synthetic - Extract this into `data/nerf_synthetic`.
- NeRFactor - The four datasets should be placed in the folders `data/nerfactor/hotdog_2163`, `data/nerfactor/drums_3072`, `data/nerfactor/ficus_2188`, and `data/nerfactor/lego_3072`.
- MaterialFusion Dataset - This is a dataset containing multi-view images of 9 unseen objects from BlenderVault. Extract them into `data/materialfusion_dataset`.
- StableMaterial Dataset - This dataset contains 4 images per object for 8 unseen objects from BlenderVault, for evaluating the material diffusion prior. Extract them into `data/stablematerial_dataset`.
- Stanford-ORB - Download and extract `blender_LDR.tar.gz` into `data/blender_LDR`.
The training data assumes poses are available and the background is masked out. For synthetic data, you may use the BlenderVault rendering script (which we will soon upload) or use NeRFactor's code. For real data, you can use SAM to mask out the background and process your images. You can also use COLMAP to estimate the poses.
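As a sketch of the pose-estimation step, the standard COLMAP CLI pipeline below recovers camera poses from a folder of masked images; the paths are placeholders, and you may still need to convert COLMAP's output into the NeRF-style transforms that the configs expect:

```bash
# Hedged example: vanilla COLMAP sparse reconstruction for camera poses.
colmap feature_extractor --database_path database.db --image_path images/
colmap exhaustive_matcher --database_path database.db
mkdir -p sparse
colmap mapper --database_path database.db --image_path images/ --output_path sparse/
```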
To begin training MaterialFusion:
```bash
python run_materialfusion.py --config configs/materialfusion/vault-box.json
```
Note that MaterialFusion uses a lot of memory during inverse rendering and may crash. To alleviate this, you can reduce the `batch` parameter in the config file or pass a smaller batch to StableMaterial via the `--sds_batch_limiter` flag:

```bash
python run_materialfusion.py --config configs/materialfusion/vault-box.json --sds_batch_limiter 2
```
Once MaterialFusion finishes training, the output folder will contain the recovered mesh geometry, material maps, and environment lighting. These can be loaded into Blender using `blender.py`.
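If you prefer to set things up manually instead of using `blender.py`, a minimal Python sketch along the following lines works inside Blender (e.g., run via `blender --python load_assets.py`); the asset filenames (`mesh/mesh.obj`, `mesh/texture_kd.png`) are assumptions based on nvdiffrecmc-style outputs, so adjust them to your run:

```python
# Minimal sketch: manually load a recovered mesh and albedo map into Blender.
# Filenames below are assumptions (nvdiffrecmc-style output); adjust to your run.
import bpy

out_dir = "out/vault-box"  # hypothetical output folder from a MaterialFusion run

# Import the recovered mesh (Blender 3.x+ OBJ importer).
bpy.ops.wm.obj_import(filepath=f"{out_dir}/mesh/mesh.obj")
obj = bpy.context.selected_objects[0]

# Create a node-based material and feed the recovered albedo into the BSDF.
mat = bpy.data.materials.new("materialfusion_mat")
mat.use_nodes = True
bsdf = mat.node_tree.nodes["Principled BSDF"]
tex = mat.node_tree.nodes.new("ShaderNodeTexImage")
tex.image = bpy.data.images.load(f"{out_dir}/mesh/texture_kd.png")
mat.node_tree.links.new(bsdf.inputs["Base Color"], tex.outputs["Color"])
obj.data.materials.append(mat)
```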
We also provide the inference and training code for our material diffusion prior.
The model checkpoints are on HuggingFace and will be downloaded automatically. For simplicity, we provide checkpoints for the single-view and multi-view models.
To train StableMaterial, render some data using `render_blender.py` and call:

```bash
python train_stablematerial.py --train_data_dir /path/to/blender_renders --run_name my_run --train_batch_size 4
```
StableMaterial doesn't need pose information and only assumes masked images; StableMaterial-MV requires pose information in addition to masked images. After downloading BlenderVault or Objaverse `.glb` files, you can use `render_blender.py`:

```bash
python render_blender.py --objects_path /path/to/objects
```

The resulting dataset will be stored in `blender_output/blender_renders`.
For single-view prediction (StableMaterial):

```bash
python run_stablematerial.py --data_path data/stablematerial_dataset/<object_id>/train/<object_id>
```

Results will be saved in `out/stablematerial_pred/single_view/<object_id>`.
For multi-view prediction (StableMaterial-MV):

```bash
python run_stablematerial.py --data_path data/stablematerial_dataset/<object_id> --num_views 4
```

Results will be saved in `out/stablematerial_pred/multi_view/<object_id>`.
The StableMaterial checkpoints can be downloaded with the `data/download_stablematerial_ckpts.py` script.
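Assuming the script takes no arguments, downloading both checkpoints would look like:

```bash
python data/download_stablematerial_ckpts.py
```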
- StableMaterial - Extract this into `data/stablematerial-model`.
- StableMaterial-MV - This checkpoint attends to information across views to predict materials that are consistent across multiple views. This helps with difficult views, as seen in the cup example above. Extract this into `data/stablematerial-mv-model`.
If you use any part of our work, please cite the following:

```bibtex
@inproceedings{litman2025materialfusion,
    author    = {Yehonathan Litman and Or Patashnik and Kangle Deng and Aviral Agrawal and Rushikesh Zawar and Fernando De la Torre and Shubham Tulsiani},
    title     = {MaterialFusion: Enhancing Inverse Rendering with Material Diffusion Priors},
    booktitle = {3DV},
    year      = {2025}
}
```
MaterialFusion was built on top of nvdiffrecmc; check it out!