Stars
This repository is for "Robust Loop Closure by Textual Cues in Challenging Environments".
[arXiv'24] VistaDream: Sampling multiview consistent images for single-view scene reconstruction
[IEEE RA-L 2024 & ICRA'25] CoFiI2P: Coarse-to-Fine Correspondences-Based Image-to-Point Cloud Registration
CodexLabsLLC / Colosseum
Forked from microsoft/AirSim. Open source simulator for autonomous robotics built on Unreal Engine, with support for Unity
[CVPR'24] LEAP-VO: Long-term Effective Any Point Tracking for Visual Odometry
[IROS 2024] I2EKF-LO: A Dual-Iteration Extended Kalman Filter based LiDAR Odometry
Roadside GNSS Enhanced Multi-Sensor Integration for Intelligent Vehicles in Urban Areas
[CVPR2024] SchurVINS: Schur Complement-Based Lightweight Visual Inertial Navigation System
AnyLoc: Universal Visual Place Recognition (RA-L 2023)
DeepLM: Large-scale Nonlinear Least Squares on Deep Learning Frameworks using Stochastic Domain Decomposition (CVPR 2021)
Original reference implementation of "3D Gaussian Splatting for Real-Time Radiance Field Rendering"
Altium Designs for DepthAI Carrier Boards
LiDAR degeneracy dataset for LiDAR-radar-inertial fusion methods.
[ICLR 2024] AGILE3D: Attention Guided Interactive Multi-object 3D Segmentation
📍PIN-SLAM: LiDAR SLAM Using a Point-Based Implicit Neural Representation for Achieving Global Map Consistency [TRO'24]
mikh3x4 / nerf-navigation
Forked from yenchenlin/nerf-pytorch. Code for the NeRF Navigation paper. Implements a trajectory optimiser and state estimator that use NeRFs as an environment representation
A PyTorch implementation of NeRF (Neural Radiance Fields) that reproduces the results.
[IEEE RAL'24 & IROS'24] Mobile-Seed: Joint Semantic Segmentation and Boundary Detection for Mobile Robots
PatchAugNet: Patch feature augmentation-based heterogeneous point cloud place recognition in large-scale street scenes
Official implementation of the CVPR 2021 paper "Neural Deformation Graphs"
A project demonstrating how to use the cuPCL libraries.
OmniControl: Control Any Joint at Any Time for Human Motion Generation, ICLR 2024