DensePose aims at learning and establishing dense correspondences between image pixels and 3D object geometry for deformable objects, such as humans or animals. In this repository, we provide the code to train and evaluate DensePose R-CNN and various tools to visualize DensePose annotations and results.
There are two main paradigms used within the DensePose project.
For chart-based estimation, the 3D object mesh is split into charts, and for each pixel the model estimates a chart index I and local chart coordinates (U, V) within that chart.
Please follow the link above to find a detailed overview of the method, links to trained models along with their performance evaluation in the Model Zoo, and references to the corresponding papers.
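To make the chart-based output format concrete, here is a minimal NumPy sketch of how per-pixel (I, U, V) predictions can be read out of raw model outputs. The shapes and variable names here are illustrative assumptions, not the actual DensePose API:

```python
import numpy as np

# Illustrative shapes (not the actual DensePose API): per pixel, the model
# produces scores over K body charts plus per-chart U and V regressions.
K, H, W = 24, 4, 4                       # DensePose uses 24 charts for humans
rng = np.random.default_rng(0)
chart_scores = rng.standard_normal((K + 1, H, W))  # channel 0 = background
u_maps = rng.random((K + 1, H, W))                 # U coordinate per chart
v_maps = rng.random((K + 1, H, W))                 # V coordinate per chart

# Per-pixel chart index I: argmax over the chart scores.
I = chart_scores.argmax(axis=0)

# Gather the (U, V) local chart coordinates from the winning chart's maps.
rows, cols = np.indices((H, W))
U = u_maps[I, rows, cols]
V = v_maps[I, rows, cols]

# Each foreground pixel is now mapped to (I, U, V): a chart of the 3D mesh
# and continuous coordinates within that chart.
```

The key point is that I is a discrete classification over charts, while U and V are continuous regressions interpreted only within the selected chart.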
To establish continuous surface embeddings, the model simultaneously learns descriptors for mesh vertices and for image pixels. The embeddings are put into correspondence, from which the location of each pixel on the 3D model is derived. Please follow the link above to find a detailed overview of the method, links to trained models along with their performance evaluation in the Model Zoo, and references to the corresponding papers.
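The matching step described above can be sketched in a few lines of NumPy: assign each pixel to the mesh vertex whose learned embedding is nearest in embedding space. All names and shapes below are illustrative assumptions, not the actual DensePose CSE implementation:

```python
import numpy as np

# Illustrative sketch of the CSE correspondence step (not the actual
# DensePose API): the model learns a D-dimensional embedding per mesh
# vertex and per image pixel.
D, num_vertices, H, W = 16, 100, 4, 4
rng = np.random.default_rng(0)
vertex_emb = rng.standard_normal((num_vertices, D))  # learned vertex descriptors
pixel_emb = rng.standard_normal((H, W, D))           # learned pixel descriptors

# For each pixel, find the nearest mesh vertex in embedding space.
flat = pixel_emb.reshape(-1, D)                          # (H*W, D)
d2 = ((flat[:, None, :] - vertex_emb[None, :, :]) ** 2).sum(-1)  # (H*W, V)
vertex_ids = d2.argmin(axis=1).reshape(H, W)

# vertex_ids[y, x] is the mesh vertex corresponding to pixel (y, x),
# which gives the pixel's location on the 3D model.
```

Because the correspondence is defined through a shared embedding space rather than fixed charts, the same pixel descriptors can be matched against vertex embeddings of different meshes, which is what allows CSE to extend DensePose to new object categories.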
## Quick Start

See Getting Started for instructions on running inference and training with DensePose models.
## Model Zoo

Please check the dedicated pages for the chart-based model zoo and for the continuous surface embeddings model zoo.
## What's New

- June 2021: DensePose CSE with Cycle Losses
- March 2021: DensePose CSE (a framework to extend DensePose to various categories using 3D models) and DensePose Evolution (a framework to bootstrap DensePose on unlabeled data) released
- April 2020: DensePose Confidence Estimation and Model Zoo Improvements
## License

Detectron2 is released under the Apache 2.0 license.
## Citing DensePose

If you use DensePose in your research, please refer to the BibTeX entries for chart-based models and for continuous surface embeddings.