```bash
git clone git@github.com:sword4869/nerf-pytorch.git
cd nerf-pytorch
conda create -n nerf python=3.10 -y
conda activate nerf
pip install -r requirements.txt
```
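Before training anything, it is worth confirming that the environment actually sees your GPU (this assumes `requirements.txt` installed a CUDA build of PyTorch):

```bash
python -c "import torch; print(torch.__version__, torch.cuda.is_available())"
```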
The LLFF data loader requires ImageMagick.
You will also need the LLFF code (and COLMAP) set up to compute poses if you want to run on your own real data.
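As a rough sketch of that setup on Ubuntu (the `colmap` apt package and the `imgs2poses.py` entry point come from the COLMAP and Fyusion/LLFF projects; check their READMEs for the authoritative steps):

```bash
# ImageMagick is used by the LLFF data loader to downsample images
sudo apt-get install imagemagick colmap

# LLFF's pose-estimation script wraps COLMAP
git clone https://github.com/Fyusion/LLFF.git
python LLFF/imgs2poses.py <your_scene_dir>   # expects <your_scene_dir>/images/
```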
Download the data for two example datasets: the `fern` scene from `nerf_llff_data` and the `lego` scene from `nerf_synthetic`:

```bash
# Note: this script is much slower than downloading the data directly from the link below.
bash download_example_data.sh
```
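Once the download finishes, the two example scenes should look roughly like this (the file layout below is the standard NeRF data release):

```bash
ls data/nerf_llff_data/fern    # images/ images_4/ images_8/ poses_bounds.npy ...
ls data/nerf_synthetic/lego    # train/ val/ test/ transforms_{train,val,test}.json
```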
To play with other scenes presented in the paper, download the data here. Place the downloaded dataset according to the following directory structure:
```
├── configs
│   ├── ...
│
├── data
│   ├── nerf_llff_data
│   │   ├── fern
│   │   ├── flower   # downloaded llff dataset
│   │   ├── horns    # downloaded llff dataset
│   │   └── ...
│   ├── nerf_synthetic
│   │   ├── lego
│   │   ├── ship     # downloaded synthetic dataset
│   │   └── ...
```
To train a low-res `lego` NeRF:

```bash
python run_nerf.py --config configs/lego.txt
```
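While training runs, checkpoints and copies of the run configuration accumulate under the experiment directory named by `expname` in the config (the `.tar` checkpoint naming is how versions of this codebase save weights; verify against run_nerf.py if it has changed):

```bash
ls logs/lego_test/
# 010000.tar 020000.tar ...   periodic checkpoints
# args.txt config.txt         saved copies of the run configuration
```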
After training for 100k iterations (~4 hours on a single 2080 Ti), you can find the following video at `logs/lego_test/lego_test_spiral_100000_rgb.mp4`.
To train a low-res `fern` NeRF:

```bash
python run_nerf.py --config configs/fern.txt
```
After training for 200k iterations (~8 hours on a single 2080 Ti), you can find the following videos at `logs/fern_test/render_poses_200000_rgb.mp4` and `logs/fern_test/render_poses_200000_disp.mp4`.
You can download the pre-trained models here. Place the downloaded directory in `./logs` so it can be used for testing later. See the following directory structure for an example:
```
├── logs
│   ├── <expname in config>
│   ├── flower_test   # downloaded logs
│   ├── trex_test     # downloaded logs
```
Then test a trained model by rendering without further optimization, where `{DATASET}` matches one of the configs in `configs/`:

```bash
python run_nerf.py --config configs/{DATASET}.txt --render_only
```
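For example, with the downloaded `fern` weights in place (`--render_test` renders the held-out test views instead of the spiral path in the versions of this code we have seen; verify the flag in run_nerf.py's parser):

```bash
# render the spiral path from the latest checkpoint in logs/fern_test/
python run_nerf.py --config configs/fern.txt --render_only

# render the held-out test poses instead of the spiral
python run_nerf.py --config configs/fern.txt --render_only --render_test
```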