P3Former: Position-Guided Point Cloud Panoptic Segmentation Transformer

(main figure)

Introduction

This is an official release of Position-Guided Point Cloud Panoptic Segmentation Transformer.

Abstract

DEtection TRansformer (DETR) started a trend of using a group of learnable queries for unified visual perception. This work begins by applying this appealing paradigm to LiDAR-based point cloud segmentation and obtains a simple yet effective baseline. Although the naive adaptation obtains fair results, the instance segmentation performance is noticeably inferior to previous works. By diving into the details, we observe that instances in the sparse point clouds are small relative to the whole scene and often have similar geometry while lacking the distinctive appearance cues that support segmentation in the image domain. Considering that instances in 3D are characterized more by their positional information, we emphasize their roles during modeling and design a robust Mixed-parameterized Positional Embedding (MPE) to guide the segmentation process. It is embedded into backbone features and later guides the mask prediction and query update processes iteratively, leading to Position-Aware Segmentation (PA-Seg) and Masked Focal Attention (MFA). All these designs impel the queries to attend to specific regions and identify various instances. The method, named Position-guided Point cloud Panoptic segmentation transFormer (P3Former), outperforms previous state-of-the-art methods by 3.4% and 1.2% PQ on the SemanticKITTI and nuScenes benchmarks, respectively. The source code and models are available at https://github.com/SmartBot-PJLab/P3Former.
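For intuition, the sketch below illustrates what a mixed-parameterized positional embedding can look like in PyTorch: point coordinates are expressed in both Cartesian and polar parameterizations and projected by a small MLP into the backbone feature space. This is only an illustrative sketch, not the authors' implementation; the layer sizes, normalization, and fusion by addition are assumptions.

```python
import torch
import torch.nn as nn

class MixedPositionalEmbedding(nn.Module):
    """Illustrative sketch of a mixed-parameterized positional embedding.

    Combines Cartesian (x, y, z) and polar (rho, theta, z) parameterizations
    of LiDAR point coordinates and projects them to the backbone feature
    dimension. Layer sizes are assumptions, not the paper's values.
    """

    def __init__(self, feat_dim: int = 128, hidden_dim: int = 64):
        super().__init__()
        # 6 inputs: (x, y, z) + (rho, theta, z)
        self.mlp = nn.Sequential(
            nn.Linear(6, hidden_dim),
            nn.ReLU(inplace=True),
            nn.Linear(hidden_dim, feat_dim),
        )

    def forward(self, xyz: torch.Tensor) -> torch.Tensor:
        # xyz: (N, 3) point coordinates in the ego frame.
        x, y, z = xyz.unbind(dim=-1)
        rho = torch.sqrt(x ** 2 + y ** 2)   # radial distance
        theta = torch.atan2(y, x)           # azimuth in [-pi, pi]
        mixed = torch.stack([x, y, z, rho, theta, z], dim=-1)
        return self.mlp(mixed)              # (N, feat_dim)

# Example: embed 1024 random points and fuse with backbone features.
points = torch.randn(1024, 3)
mpe = MixedPositionalEmbedding(feat_dim=128)
pos_embed = mpe(points)                 # (1024, 128)
backbone_feats = torch.randn(1024, 128)
fused = backbone_feats + pos_embed      # position-guided features
```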

Results

SemanticKITTI test

| $\mathrm{PQ}$ | $\mathrm{PQ^{\dagger}}$ | $\mathrm{RQ}$ | $\mathrm{SQ}$ | $\mathrm{PQ}^{\mathrm{Th}}$ | $\mathrm{PQ}^{\mathrm{St}}$ | Download | Config |
| :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |
| 65.3 | 67.8 | 74.9 | 86.6 | 67.4 | 63.7 | model | config |

SemanticKITTI validation

| $\mathrm{PQ}$ | $\mathrm{PQ^{\dagger}}$ | $\mathrm{RQ}$ | $\mathrm{SQ}$ | $\mathrm{PQ}^{\mathrm{Th}}$ | $\mathrm{PQ}^{\mathrm{St}}$ | Download | Config |
| :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |
| 62.6 | 66.2 | 72.4 | 76.2 | 69.4 | 57.7 | model | config |
- Pretraining a backbone helps stabilize the training process and yields slightly better results. You can pretrain a model with config.

Installation

conda create -n p3former python==3.8 -y
conda activate p3former
pip install torch==1.10.1+cu111 torchvision==0.11.2+cu111 torchaudio==0.10.1 -f https://download.pytorch.org/whl/cu111/torch_stable.html
pip install openmim
mim install mmengine==0.7.4
mim install mmcv==2.0.0rc4
mim install mmdet==3.0.0
mim install mmdet3d==1.1.0
wget https://data.pyg.org/whl/torch-1.10.0%2Bcu113/torch_scatter-2.0.9-cp38-cp38-linux_x86_64.whl
pip install torch_scatter-2.0.9-cp38-cp38-linux_x86_64.whl
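After installing, a quick sanity check can confirm that the pinned packages import correctly and that CUDA is visible. This is a minimal sketch; the printed version strings should match the pins above.

```python
# Sanity check for the environment above.
import torch
import torch_scatter
import mmengine
import mmcv
import mmdet
import mmdet3d

print("torch:", torch.__version__, "| CUDA available:", torch.cuda.is_available())
print("torch_scatter:", torch_scatter.__version__)
print("mmengine:", mmengine.__version__)
print("mmcv:", mmcv.__version__)
print("mmdet:", mmdet.__version__)
print("mmdet3d:", mmdet3d.__version__)
```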

Usage

Data preparation

SemanticKITTI

data/
├── semantickitti
│   ├── sequences
│   │   ├── 00
│   │   │   ├── labels
│   │   │   ├── velodyne
│   │   ├── 01
│   │   ├── ...
│   ├── semantickitti_infos_train.pkl
│   ├── semantickitti_infos_val.pkl
│   ├── semantickitti_infos_test.pkl

You can generate the *.pkl info files by executing

python tools/create_data.py semantickitti --root-path data/semantickitti --out-dir data/semantickitti --extra-tag semantickitti
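To verify that the info files were generated correctly, you can inspect one with a short Python snippet. This is a sketch; the exact schema depends on the mmdet3d version, so the `data_list` key below is an assumption about the mmdet3d 1.1-style format.

```python
import pickle

# Load the validation info file produced by tools/create_data.py.
with open("data/semantickitti/semantickitti_infos_val.pkl", "rb") as f:
    infos = pickle.load(f)

# mmdet3d 1.1-style info files are typically dicts; print whatever is there.
if isinstance(infos, dict):
    print("top-level keys:", list(infos.keys()))
    samples = infos.get("data_list", [])
else:  # older formats store a plain list of per-frame dicts
    samples = infos
print("number of samples:", len(samples))
if samples:
    print("first sample keys:", list(samples[0].keys()))
```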

Training and testing

# train
sh dist_train.sh $CONFIG $GPUS

# val
sh dist_test.sh $CONFIG $CHECKPOINT $GPUS

# test
sh dist_test.sh $CONFIG $CHECKPOINT $GPUS

Citation

@article{xiao2023p3former,
    title={Position-Guided Point Cloud Panoptic Segmentation Transformer},
    author={Xiao, Zeqi and Zhang, Wenwei and Wang, Tai and Loy, Chen Change and Lin, Dahua and Pang, Jiangmiao},
    journal={arXiv preprint},
    year={2023}
}

Acknowledgements

We thank the contributors of MMDetection3D and the authors of Cylinder3D and K-Net for their great work.
