This repository implements deep learning-based monocular Visual SLAM. The key idea is to combine a pre-trained depth estimation model with an optical flow estimation model to build the Visual SLAM front-end, and to learn camera ego-motion estimation in a self-supervised manner, similar to Monodepth2. Once the training of each module (depth, flow, pose) stabilizes, we plan to integrate SLAM back-end algorithms as well.
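To make the front-end idea concrete, below is a minimal PyTorch sketch of Monodepth2-style view-synthesis supervision: the predicted depth of the target frame and a predicted relative pose `T` warp a source frame into the target view, and the photometric error supervises both networks. The function names, the pinhole intrinsics `K`, and the plain L1 error are illustrative assumptions, not this repository's actual API.

```python
import torch
import torch.nn.functional as F


def backproject(depth, K_inv):
    """Lift every pixel into a 3D point using the predicted depth map."""
    b, _, h, w = depth.shape
    ys, xs = torch.meshgrid(
        torch.arange(h, dtype=depth.dtype, device=depth.device),
        torch.arange(w, dtype=depth.dtype, device=depth.device),
        indexing="ij",
    )
    pix = torch.stack([xs, ys, torch.ones_like(xs)])          # (3, H, W) homogeneous
    pix = pix.reshape(3, -1).unsqueeze(0).expand(b, -1, -1)   # (B, 3, H*W)
    return depth.reshape(b, 1, -1) * (K_inv @ pix)            # (B, 3, H*W) 3D points


def warp(source, target_depth, T, K, K_inv):
    """Synthesize the target view from the source frame via depth + pose.

    T is the (B, 4, 4) target-to-source relative pose; K, K_inv are the
    (3, 3) pinhole intrinsics and their inverse (assumed known).
    """
    b, _, h, w = source.shape
    points = backproject(target_depth, K_inv)                 # target camera frame
    ones = torch.ones(b, 1, points.shape[-1],
                      dtype=points.dtype, device=points.device)
    cam = (T @ torch.cat([points, ones], dim=1))[:, :3]       # source camera frame
    pix = K @ cam
    pix = pix[:, :2] / pix[:, 2:3].clamp(min=1e-6)            # perspective divide
    px = 2.0 * pix[:, 0] / (w - 1) - 1.0                      # normalize to [-1, 1]
    py = 2.0 * pix[:, 1] / (h - 1) - 1.0
    grid = torch.stack([px, py], dim=-1).reshape(b, h, w, 2)
    return F.grid_sample(source, grid, padding_mode="border", align_corners=True)


def photometric_loss(target, source, target_depth, T, K, K_inv):
    """Plain L1 reprojection error between the target frame and the
    source frame warped into the target view."""
    return (target - warp(source, target_depth, T, K, K_inv)).abs().mean()
```

Note that Monodepth2 itself uses an SSIM + L1 mix taken as a per-pixel minimum over multiple source frames, plus auto-masking for static pixels; the plain L1 above is only the simplest form of the same supervision signal.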
- Deep learning-based Visual SLAM (VO/VIO)
- Monocular Depth Estimation
- Optical Flow Estimation
- Visual Odometry (with inertial sensors)
Future work: integration of SLAM back-end algorithms.