PyTorch implementation of "Deep MVS Gone Wild", published at 3DV 2021 (Paper | website).
This repository provides the code to reproduce the experiments of the paper. It implements an extensive comparison of deep MVS architectures, training data, and supervision strategies.
If you find this repository useful for your research, please consider citing:

```
@inproceedings{darmon2021deep,
  author    = {Darmon, Fran{\c{c}}ois and
               Bascle, B{\'{e}}n{\'{e}}dicte and
               Devaux, Jean{-}Cl{\'{e}}ment and
               Monasse, Pascal and
               Aubry, Mathieu},
  title     = {Deep Multi-View Stereo gone wild},
  booktitle = {International Conference on 3D Vision},
  year      = {2021},
}
```
- Python packages: see `requirements.txt`
- Fusibile:

  ```
  git clone https://github.com/YoYo000/fusibile
  cd fusibile
  cmake .
  make
  ```

  then link the executable with `ln -s EXE ./fusibile`
- COLMAP: see the GitHub repository for installation details, then link the colmap executable with `ln -s COLMAP_DIR/build/src/exe/colmap colmap`
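Once the links are in place, a quick sanity check that both executables are reachable can save debugging later. This snippet is not part of the repository; it assumes the `fusibile` and `colmap` links were created in the current directory, as in the commands above:

```python
import shutil

# Check the symlinks created during installation (assumed to live in
# the current directory, as in the commands above).
for exe in ("./fusibile", "./colmap"):
    status = "found" if shutil.which(exe) else "not found"
    print(f"{exe}: {status}")
```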
You may find all the pretrained models here (120 MB), or alternatively you can train the models using the following instructions.
Download the following data and extract it to the folder `datasets`:
- DTU training (19 GB)
- BlendedMVS (27.5 GB)
- MegaDepth: MegaDepthV1 (199 GB) and Geometry (8 GB)
The directory structure should be as follows:

```
datasets
├─ blended
├─ dtu_train
├─ MegaDepth_v1
├─ undistorted_md_geometry
```
The data is already preprocessed for DTU and BlendedMVS. For MegaDepth, run `python preprocess.py` to generate the training data.
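Before preprocessing, the layout above can be verified with a few lines of Python (a sanity-check sketch, not part of the repository; the folder names are the ones listed above):

```python
from pathlib import Path

# Expected dataset folders, as listed in the layout above.
root = Path("datasets")
expected = ["blended", "dtu_train", "MegaDepth_v1", "undistorted_md_geometry"]

# Report any folder that is missing from the expected layout.
missing = [name for name in expected if not (root / name).is_dir()]
print("missing folders:", missing or "none")
```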
The training script is `train.py`; run `python train.py --help` for all the options. For example,

```
python train.py --architecture vis_mvsnet --dataset md --supervised --logdir best_sup --world_size 4 --batch_size 4
```

trains the best-performing setup for images in the wild, and

```
python train.py --architecture mvsnet-s --dataset md --unsupervised --upsample --occ_masking --epochs 5 --lrepochs 4:10 --logdir best_unsup --world_size 3
```

trains the best unsupervised model.
The models are saved in the folder `trained_models`.
We provide code for both depth-map evaluation and 3D reconstruction evaluation.
Download the following links and extract them to `datasets`:
- BlendedMVS (27.5 GB), same link as the BlendedMVS training data
- YFCC depth maps (1.1 GB)
- DTU MVS benchmark: create the directory `datasets/dtu_eval` and extract the following files:
  - Images (500 MB), renamed as the `images` folder
  - Ground truth (6.3 GB)
  - Evaluation files (6.3 GB); the evaluation only needs the `ObsMask` folder

  In the end the folder structure should be:

  ```
  datasets
  ├─ dtu_eval
      ├─ ObsMask
      ├─ images
      ├─ Points
      ├─ stl
  ```

- YFCC 3D reconstruction (1.5 GB)
```
python depthmap_eval.py --model MODEL --dataset DATA
```

where `MODEL` is the name of a folder in `trained_models` and `DATA` is the evaluation dataset, either `yfcc` or `blended`.
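For intuition, depth-map evaluation typically reports per-pixel error statistics over pixels with valid ground truth. Below is a minimal sketch of one common metric (mean absolute relative error) using NumPy; it is an illustration only, not the metric set or code used by `depthmap_eval.py`:

```python
import numpy as np

def abs_rel_error(pred, gt):
    """Mean absolute relative depth error over pixels with valid ground truth."""
    mask = gt > 0  # zero marks pixels without a ground-truth depth
    return np.mean(np.abs(pred[mask] - gt[mask]) / gt[mask])

# Toy 2x2 depth maps: every valid pixel is off by 10%.
gt = np.array([[1.0, 2.0], [4.0, 0.0]])
pred = np.array([[1.1, 2.2], [4.4, 3.0]])
print(abs_rel_error(pred, gt))  # ~0.1
```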
See `python reconstruction_pipeline.py --help` for a complete list of parameters for 3D reconstruction. To run the whole evaluation for a trained model with the parameters used in the paper, run

```
scripts/eval3d_dtu.sh --model MODEL (--compute_metrics)
```

for DTU evaluation, or

```
scripts/eval3d_yfcc.sh --model MODEL (--compute_metrics)
```

for YFCC 3D evaluation. The reconstructions will be located in `datasets/dtu_eval/Points` or `datasets/yfcc_data/Points`.
This repository is inspired by the MVSNet_pytorch and MVSNet repositories. We also adapted the official implementations of Vis_MVSNet and CVP_MVSNet.
Deep MVS Gone Wild. All rights reserved to Thales LAS and ENPC.
This code is freely available for academic use only and provided "as is" without any warranty.
Modifications are allowed for academic research provided that the following conditions are met:
* Redistributions of source code in any format must retain the above copyright notice and this list of conditions.
* Neither the names of Thales LAS and ENPC nor the names of their contributors may be used to endorse or promote products derived from this software without specific prior written permission.