Stars
This is the implementation of the paper "PS-TTL: Prototype-based Soft-labels and Test-Time Learning for Few-shot Object Detection" (MM 2024)
This is the implementation of the paper "FSD-BEV: Foreground Self-Distillation for Multi-view 3D Object Detection" (ECCV 2024)
[Information Fusion 2024] A Survey on Occupancy Perception for Autonomous Driving: The Information Fusion Perspective
This repository contains the PyTorch implementation of the CVPR 2024 paper (Highlight), IS-Fusion: Instance-Scene Collaborative Fusion for Multimodal 3D Object Detection.
[IROS2024] Camera-Radar Fusion for BEV Map and Object Segmentation
[CVPR 2024] This repo contains the code for our paper: Rethinking Few-shot 3D Point Cloud Semantic Segmentation
MultiCorrupt: A benchmark for robust multi-modal 3D object detection, evaluating LiDAR-Camera fusion models in autonomous driving. Includes diverse corruption types (e.g., misalignment, miscalibration)
EMIFF: Enhanced Multi-scale Image Feature Fusion for Vehicle-Infrastructure Cooperative 3D Object Detection
Leveraging Anchor-based LiDAR 3D Object Detection via Point Assisted Sample Selection
Code for Eliminating Cross-modal Conflicts in BEV Space for LiDAR-Camera 3D Object Detection
[CVPR 2022] SS3D: Sparsely-supervised 3D object detection from point cloud
[ICLR2024] HEAL: An Extensible Framework for Open Heterogeneous Collaborative Perception ➡️ All You Need for Multi-Modality Collaborative Perception!
[ICLR 2024] LiDAR-PTQ: Post-Training Quantization for Point Cloud 3D Object Detection
PyTorch implementation of the paper "Towards Scenario Generalization for Vision-based Roadside 3D Object Detection"
Code for ICLR 2024: "MixSup: Mixed-grained Supervision for Label-efficient LiDAR-based 3D Object Detection"
[CVPR 2024 & NeurIPS 2024] EmbodiedScan: A Holistic Multi-Modal 3D Perception Suite Towards Embodied AI
[ICLR'24 Spotlight] Uni3D: 3D Visual Representation from BAAI