git clone https://github.com/ChaoyueSong/REACTO.git --recursive
cd REACTO
conda env create -f environment.yml
conda activate reacto
bash scripts/install-deps.sh
Our environment is the same as Lab4D's; check here for known installation issues.
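After installation, a quick sanity check that the environment sees a CUDA-capable GPU can save debugging time later. This is a minimal sketch, assuming the conda environment provides PyTorch (which Lab4D relies on):

```python
# Minimal environment sanity check (not part of the repo).
# Assumes the conda environment installed PyTorch, as Lab4D's setup does.
import torch

print("PyTorch version:", torch.__version__)
print("CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("GPU:", torch.cuda.get_device_name(0))
```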
The preprocessed data used in the paper is available here, where you can also check the data format. To preprocess your own video, run:
# Args: sequence name, number of objects, text prompt (for segmentation; use other for non-human/non-quadruped objects), category from {human, quad, arti, other}, gpu id
python scripts/run_preprocess.py real_laptop 1 other arti "0"
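If you have several single-object videos to preprocess, you can invoke the script once per sequence with the same arguments. The sketch below simply wraps the documented command in a loop; the second sequence name is a placeholder, not a dataset that ships with the repo:

```python
# Hypothetical batch driver (not part of the repo) that re-runs the
# documented preprocessing command over several single-object sequences.
import subprocess

sequences = [
    # (sequence name, number of objects, text prompt, category, gpu id)
    ("real_laptop", 1, "other", "arti", "0"),
    ("my_own_video", 1, "other", "arti", "0"),  # placeholder sequence name
]

for seqname, num_obj, prompt, category, gpu in sequences:
    subprocess.run(
        ["python", "scripts/run_preprocess.py",
         seqname, str(num_obj), prompt, category, gpu],
        check=True,  # stop at the first failing sequence
    )
```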
Check this tutorial for more details. We modified the data processing code from Lab4D so that it can also handle videos containing multiple objects: both the text prompt and the category can be comma-separated lists, with one entry per object. For example, for a video with three objects, you can run:
python scripts/run_preprocess.py birds-over-river 3 other,other,other other,quad,human "0"
We then perform camera annotation and object tracking for each object, one at a time.
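To make the multi-object behavior concrete, here is a simplified sketch of the per-object loop described above, assuming the comma-separated prompt and category strings are split and paired positionally. The two helper functions are hypothetical stand-ins for the camera-annotation and tracking stages, not functions from the repo:

```python
# Simplified sketch of the per-object preprocessing loop (not the repo's API).
def annotate_cameras(seqname, obj_id, category, gpu):
    # Hypothetical placeholder for the camera-annotation stage.
    print(f"[{seqname}] annotating cameras for object {obj_id} ({category}) on GPU {gpu}")

def track_object(seqname, obj_id, prompt, gpu):
    # Hypothetical placeholder for the segmentation/tracking stage.
    print(f"[{seqname}] tracking object {obj_id} with prompt '{prompt}' on GPU {gpu}")

def preprocess_multi(seqname, prompts, categories, gpu="0"):
    prompts = prompts.split(",")        # e.g. "other,other,other"
    categories = categories.split(",")  # e.g. "other,quad,human"
    assert len(prompts) == len(categories), "one prompt and one category per object"

    # Camera annotation and object tracking run once per object, in order.
    for obj_id, (prompt, category) in enumerate(zip(prompts, categories)):
        annotate_cameras(seqname, obj_id, category, gpu)
        track_object(seqname, obj_id, prompt, gpu)

preprocess_multi("birds-over-river", "other,other,other", "other,quad,human")
```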
- Release the dataset and data preprocessing code.
- Release training code.
- Release the pretrained models.
If you find our work useful, please consider citing:
@inproceedings{song2024reacto,
title={REACTO: Reconstructing Articulated Objects from a Single Video},
author={Song, Chaoyue and Wei, Jiacheng and Foo, Chuan Sheng and Lin, Guosheng and Liu, Fayao},
booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
pages={5384--5395},
year={2024}
}
This code is heavily based on Lab4D. We thank the authors for their wonderful code!