2017-12-29 New features: Add map merge, pose graph reuse, online temporal calibration function, and support rolling shutter camera
qintonguav committed Dec 29, 2017
1 parent f5904b1 commit 96e770e
Showing 83 changed files with 4,236 additions and 4,199 deletions.
58 changes: 34 additions & 24 deletions README.md
@@ -1,7 +1,8 @@
# VINS-Mono
## A Robust and Versatile Monocular Visual-Inertial State Estimator
### 2017-12-29 New features: map merge, pose graph reuse, online temporal calibration, and rolling shutter camera support. Map reuse videos:

VINS-Mono is a real-time SLAM framework for **Monocular Visual-Inertial Systems**. It uses an optimization-based sliding window formulation for providing high-accuracy visual-inertial odometry. It features efficient IMU pre-integration with bias correction, automatic estimator initialization, online extrinsic calibration, failure detection and recovery, loop detection, and global pose graph optimization. VINS-Mono is primarily designed for state estimation and feedback control of autonomous drones, but it is also capable of providing accurate localization for AR applications. This code runs on **Linux**, and is fully integrated with **ROS**. For **iOS** mobile implementation, please go to [VINS-Mobile](https://github.com/HKUST-Aerial-Robotics/VINS-Mobile).
VINS-Mono is a real-time SLAM framework for **Monocular Visual-Inertial Systems**. It uses an optimization-based sliding window formulation to provide high-accuracy visual-inertial odometry. It features efficient IMU pre-integration with bias correction, automatic estimator initialization, online extrinsic calibration, failure detection and recovery, loop detection, global pose graph optimization, map merging, pose graph reuse, online temporal calibration, and rolling shutter camera support. VINS-Mono is primarily designed for state estimation and feedback control of autonomous drones, but it is also capable of providing accurate localization for AR applications. This code runs on **Linux**, and is fully integrated with **ROS**. For the **iOS** mobile implementation, please go to [VINS-Mobile](https://github.com/HKUST-Aerial-Robotics/VINS-Mobile).

**Authors:** [Tong Qin](https://github.com/qintony), [Peiliang Li](https://github.com/PeiliangLi), [Zhenfei Yang](https://github.com/dvorak0), and [Shaojie Shen](http://www.ece.ust.hk/ece.php/profile/facultydetail/eeshaojie) from the [HKUST Aerial Robotics Group](http://uav.ust.hk/)

@@ -47,20 +48,17 @@ alt="Mobile platform" width="240" height="180" border="10" /></a>

## 1. Prerequisites
1.1 **Ubuntu** and **ROS**
Ubuntu 14.04 16.04.
ROS Indigo, Kinetic. [ROS Installation](http://wiki.ros.org/indigo/Installation/Ubuntu)
Ubuntu 14.04 / 16.04.
ROS Indigo / Kinetic. [ROS Installation](http://wiki.ros.org/indigo/Installation/Ubuntu)
Additional ROS packages:
```
sudo apt-get install ros-YOUR_DISTRO-cv-bridge ros-YOUR_DISTRO-tf ros-YOUR_DISTRO-message-filters ros-YOUR_DISTRO-image-transport
```
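For example, on ROS Kinetic the line above becomes (substitute your own distro name otherwise):
```
sudo apt-get install ros-kinetic-cv-bridge ros-kinetic-tf ros-kinetic-message-filters ros-kinetic-image-transport
```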
If you install ROS Kinetic, please update opencv3 with
```
sudo apt-get install ros-kinetic-opencv3
```


1.2 **Ceres Solver**
Follow [Ceres Installation](http://ceres-solver.org/installation.html), remember to **make install**.
(Our testing environment: Ubuntu 14.04, ROS Indigo, OpenCV 2.4.8, Eigen 3.2.0)
(Our testing environment: Ubuntu 16.04, ROS Kinetic, OpenCV 3.3.1, Eigen 3.3.3)
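For reference, a typical from-source build looks like the sketch below; check the Ceres installation page for the required dependencies (glog, Eigen, SuiteSparse) and the authoritative steps.
```
git clone https://ceres-solver.googlesource.com/ceres-solver
cd ceres-solver
mkdir build && cd build
cmake ..
make -j4
sudo make install
```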

## 2. Build VINS-Mono on ROS
Clone the repository and catkin_make:
@@ -72,29 +70,36 @@ Clone the repository and catkin_make:
source ~/catkin_ws/devel/setup.bash
```
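For reference, a minimal sketch of the whole sequence, assuming your catkin workspace is at ~/catkin_ws:
```
cd ~/catkin_ws/src
git clone https://github.com/HKUST-Aerial-Robotics/VINS-Mono.git
cd ~/catkin_ws
catkin_make
source ~/catkin_ws/devel/setup.bash
```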

## 3. Performance on EuRoC dataset
3.1 Download [EuRoC MAV Dataset](http://projects.asl.ethz.ch/datasets/doku.php?id=kmavvisualinertialdatasets). Although it contains stereo cameras, we only use one camera.

3.2 Open three terminals, launch the vins_estimator , rviz and play the bag file respectively. Take MH_05 as example
## 3. Performance on public datasets
Download the [EuRoC MAV Dataset](http://projects.asl.ethz.ch/datasets/doku.php?id=kmavvisualinertialdatasets). Although it contains stereo cameras, we only use one camera. The system also works with the [ETH-asl cla dataset](http://robotics.ethz.ch/~asl-datasets/maplab/multi_session_mapping_CLA/bags/). We take EuRoC as an example.
### 3.1 Visual-inertial odometry and loop closure
3.1.1 Open four terminals, then launch the vins_estimator, rviz, and pose_graph, and play the bag file. Take MH_01 as an example
```
roslaunch vins_estimator euroc.launch
roslaunch vins_estimator vins_rviz.launch
rosbag play YOUR_PATH_TO_DATASET/MH_05_difficult.bag
roslaunch pose_graph euroc.launch
rosbag play YOUR_PATH_TO_DATASET/MH_01_easy.bag
```
(If you fail to open vins_rviz.launch, just open an empty rviz, then load the config file: File -> Open Config -> YOUR_VINS_FOLDER/config/vins_rviz_config.rviz)
/vins_estimator/path is the IMU center's trajectory, /vins_estimator/odometry is the IMU center's odometry and /vins_estimator/camera_pose is the camera's pose.

3.3 (Optional) Visualize ground truth. We write a naive benchmark publisher to help you visualize the ground truth. It uses a naive strategy to align VINS with ground truth. Just for visualization. not for quantitative comparison on academic publications.
3.1.2 (Optional) Visualize the ground truth. We provide a naive benchmark publisher to help you visualize the ground truth. It uses a naive strategy to align VINS with the ground truth and is intended for visualization only, not for quantitative comparison in academic publications.
```
roslaunch benchmark_publisher publish.launch sequence_name:=MH_01_easy
```
(Green line is VINS result, red line is ground truth).

3.4 (Optional) You can even run EuRoC **without extrinsic parameters** between camera and IMU. We will calibrate them online. Replace the first command with:
3.1.3 (Optional) You can even run EuRoC **without extrinsic parameters** between camera and IMU. We will calibrate them online. Replace the first command with:
```
roslaunch vins_estimator euroc_no_extrinsic_param.launch
```
That config file contains **no extrinsic parameters**. Wait a few seconds for the initial calibration. Sometimes you will not notice any difference, as the calibration is done quickly.
### 3.2 Map merge
After playing the MH_01 bag, you can continue playing the MH_02 bag, the MH_03 bag, and so on. The system will merge them according to loop closures.
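For instance, once the MH_01 bag finishes (bag names follow the EuRoC download naming):
```
rosbag play YOUR_PATH_TO_DATASET/MH_02_easy.bag
rosbag play YOUR_PATH_TO_DATASET/MH_03_medium.bag
```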
### 3.3 Map reuse
3.3.1 Map save
Set the **pose_graph_save_path** in the config file (YOUR_VINS_FOLDER/config/euroc/euroc_config.yaml). After playing the MH_01 bag, enter **s** in the vins_estimator terminal; the current pose graph will be saved.
3.3.2 Map load
Set **load_previous_pose_graph** to 1 before doing 3.1.1. The system will load the previous pose graph from **pose_graph_save_path**. Then you can play the MH_02 bag; the new sequence will be aligned to the previous pose graph.
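A sketch of the relevant entries in euroc_config.yaml; the save path below is a placeholder, so point it at any writable directory:
```
pose_graph_save_path: "/home/YOUR_NAME/output/pose_graph/" # where entering s stores the pose graph
load_previous_pose_graph: 1 # 1: load the pose graph from pose_graph_save_path at startup, 0: start fresh
```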

## 4. AR Demo
4.1 Download the [bag file](https://www.dropbox.com/s/s29oygyhwmllw9k/ar_box.bag?dl=0), which was collected at the HKUST Robotic Institute. For friends in mainland China, download from [bag file](https://pan.baidu.com/s/1geEyHNl).
@@ -111,23 +116,28 @@ We put one 0.8m x 0.8m x 0.8m virtual box in front of your view.

Suppose you are familiar with ROS and can get a camera and an IMU that publish raw metric measurements as ROS topics; then you can follow these steps to set up your device. For beginners, we highly recommend first trying out [VINS-Mobile](https://github.com/HKUST-Aerial-Robotics/VINS-Mobile) if you have an iOS device, since you don't need to set up anything.

5.1 Change to your topic name in the config file. The image should exceed 20Hz and IMU should exceed 100Hz. Both image and IMU should have the accurate time stamp.
5.1 Change the topic names in the config file to your own. The image rate should exceed 20 Hz and the IMU rate should exceed 100 Hz. Both image and IMU messages need accurate timestamps. The IMU should report absolute acceleration values, including gravity.
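As a sketch, the corresponding config entries look like this; the topic names below are the EuRoC ones, so substitute your own:
```
imu_topic: "/imu0"
image_topic: "/cam0/image_raw"
```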

5.2 Camera calibration:

We support the [pinhole model](http://docs.opencv.org/2.4.8/modules/calib3d/doc/camera_calibration_and_3d_reconstruction.html) and the [MEI model](http://www.robots.ox.ac.uk/~cmei/articles/single_viewpoint_calib_mei_07.pdf). You can calibrate your camera with any tools you like. Just write the parameters in the config file in the right format.
We support the [pinhole model](http://docs.opencv.org/2.4.8/modules/calib3d/doc/camera_calibration_and_3d_reconstruction.html) and the [MEI model](http://www.robots.ox.ac.uk/~cmei/articles/single_viewpoint_calib_mei_07.pdf). You can calibrate your camera with any tool you like; just write the parameters into the config file in the right format. If you use a rolling shutter camera, please calibrate it carefully, making sure the reprojection error is below 0.5 pixels.
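For illustration, a pinhole entry in the config file looks roughly like the sketch below; the numbers are EuRoC-like placeholders, so use your own calibration results:
```
model_type: PINHOLE
camera_name: camera
image_width: 752
image_height: 480
distortion_parameters:
   k1: -0.29   # radial distortion (placeholder values)
   k2: 0.08
   p1: 0.0     # tangential distortion
   p2: 0.0
projection_parameters:
   fx: 461.6   # focal lengths and principal point in pixels
   fy: 460.3
   cx: 363.0
   cy: 248.1
```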

5.3 **Camera-Imu extrinsic parameters**:

If you have seen the config files for the EuRoC and AR demos, you will find that we can estimate and refine the extrinsic parameters online. If you are familiar with rigid-body transformations, you can figure out the rotation and translation by eye or via hand measurements, then write these values into the config file as the initial guess; our estimator will refine them online. If you don't know anything about the camera-IMU transformation, just ignore the extrinsic parameters, set **estimate_extrinsic** to **2**, and rotate your device for a few seconds at the beginning. When the system works successfully, we save the calibration result; you can use it as the initial value next time. An example of how to set the extrinsic parameters is in [extrinsic_parameter_example](https://github.com/HKUST-Aerial-Robotics/VINS-Mono/blob/master/config/extrinsic_parameter_example.pdf).
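As a sketch, this behavior is controlled by one flag in the config file:
```
# 0: fixed extrinsic parameters (trust the values in this file)
# 1: refine the provided initial guess online
# 2: no prior at all; calibrate the camera-IMU transformation online
estimate_extrinsic: 2
```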

5.3 Camera-Imu extrinsic parameters:
5.4 **Temporal calibration**:
Most self-made visual-inertial sensor sets are unsynchronized. You can set **estimate_td** to 1 to estimate the time offset between your camera and IMU online.
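A sketch of the corresponding entries; the initial offset below is illustrative:
```
estimate_td: 1 # 1: estimate the camera-IMU time offset online
td: 0.0        # initial value of the time offset in seconds, refined online
```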

If you have seen the config files for EuRoC and AR demos, you can find that we just use coarse values. If you familiar with transformation, you can figure out the rotation and position by your eyes or via hand measurements. Then write these values into config as the initial guess. Our estimator will refine extrinsic parameters online. If you don't know anything about the camera-IMU transformation, just ignore the extrinsic parameters and set the **estimate_extrinsic** to **2**, and rotate your device set at the beginning for a few seconds. When the system works successfully, we will save the calibration result. you can use these result as initial values for next time. An example of how to set the extrinsic parameters is in[extrinsic_parameter_example](https://github.com/HKUST-Aerial-Robotics/VINS-Mono/blob/master/config/extrinsic_parameter_example.pdf)
5.5 **Rolling shutter**:
For a rolling shutter camera (carefully calibrated, reprojection error under 0.5 pixels), set **rolling_shutter** to 1. You should also set the rolling shutter readout time **rolling_shutter_tr**, taken from the sensor datasheet (usually 0-0.05 s; this is not the exposure time). Don't try a web camera; web cameras are awful for this.
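A sketch of the corresponding entries; the readout time below is illustrative, so take the real value from your sensor datasheet:
```
rolling_shutter: 1        # 1: rolling shutter camera, 0: global shutter camera
rolling_shutter_tr: 0.033 # full-frame readout time in seconds (not the exposure time)
```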

5.4 Other parameter settings: Details are included in the config file.
5.6 Other parameter settings: Details are included in the config file.

5.5 Performance on different devices:
5.7 Performance on different devices:

(global shutter camera + synchronized high-end IMU, e.g. VI-Sensor) > (global shutter camera + synchronized low-end IMU, e.g. camera+DJI A3) > (global camera + unsync high frequency IMU) > (global camera + unsync low frequency IMU) > (rolling camera + unsync low frequency IMU).
(global shutter camera + synchronized high-end IMU, e.g. VI-Sensor) > (global shutter camera + synchronized low-end IMU) > (global shutter camera + unsynchronized high-frequency IMU) > (global shutter camera + unsynchronized low-frequency IMU) > (rolling shutter camera + unsynchronized low-frequency IMU).

**DO NOT** start with a rolling shutter camera and an unsynchronized IMU (such as a DJI M100 + a Logitech web camera).

## 6. Acknowledgements
We use [ceres solver](http://ceres-solver.org/) for non-linear optimization, [DBoW2](https://github.com/dorian3d/DBoW2) for loop detection, and a generic [camera model](https://github.com/hengli/camodocal).
12 changes: 0 additions & 12 deletions ar_demo/launch/ar_A3.launch

This file was deleted.

30 changes: 15 additions & 15 deletions ar_demo/src/ar_demo_node.cpp
@@ -349,22 +349,22 @@ void draw_object(cv::Mat &AR_image)
}
}

void callback(const ImageConstPtr& img_msg, const geometry_msgs::PoseStamped::ConstPtr& pose_msg)
void callback(const ImageConstPtr& img_msg, const nav_msgs::Odometry::ConstPtr pose_msg)
{
//throw the first few unstable pose
if(img_cnt < 20)
if(img_cnt < 50)
{
img_cnt ++;
return;
}
//ROS_INFO("sync callback!");
Vector3d camera_p(pose_msg->pose.position.x,
pose_msg->pose.position.y,
pose_msg->pose.position.z);
Quaterniond camera_q(pose_msg->pose.orientation.w,
pose_msg->pose.orientation.x,
pose_msg->pose.orientation.y,
pose_msg->pose.orientation.z);
Vector3d camera_p(pose_msg->pose.pose.position.x,
pose_msg->pose.pose.position.y,
pose_msg->pose.pose.position.z);
Quaterniond camera_q(pose_msg->pose.pose.orientation.w,
pose_msg->pose.pose.orientation.x,
pose_msg->pose.pose.orientation.y,
pose_msg->pose.pose.orientation.z);

//test plane
Vector3d cam_z(0, 0, -1);
@@ -426,18 +426,18 @@ void point_callback(const sensor_msgs::PointCloudConstPtr &point_msg)
}
if (max_index == -1)
return;
int neigh_num = height_range[max_index - 1] + height_range[max_index] + height_range[max_index + 1];
double neigh_height = (height_sum[max_index - 1] + height_sum[max_index] + height_sum[max_index + 1]) / neigh_num;
//ROS_WARN("detect ground plain, height %f", neigh_height);
if (neigh_num < (int)point_msg->points.size() / 2)
int tmp_num = height_range[max_index - 1] + height_range[max_index] + height_range[max_index + 1];
double new_height = (height_sum[max_index - 1] + height_sum[max_index] + height_sum[max_index + 1]) / tmp_num;
//ROS_WARN("detect ground plain, height %f", new_height);
if (tmp_num < (int)point_msg->points.size() / 2)
{
//ROS_INFO("points not enough");
return;
}
//update height
for (int i = 0; i < cube_num; i++)
{
Cube_center[i].z() = neigh_height + box_length / 2.0;
Cube_center[i].z() = new_height + box_length / 2.0;
}
add_object();

@@ -451,7 +451,7 @@ void img_callback(const ImageConstPtr& img_msg)
else
return;
}
void pose_callback(const geometry_msgs::PoseStamped::ConstPtr& pose_msg)
void pose_callback(const nav_msgs::Odometry::ConstPtr &pose_msg)
{
if(!pose_init)
{
4 changes: 2 additions & 2 deletions benchmark_publisher/launch/publish.launch
@@ -1,13 +1,13 @@
<launch>
<!--MH_01_easy MH_02_easy MH_03_medium MH_04_difficult MH_05_difficult V1_01_easy V1_02_medium V1_03_difficult V2_01_easy V2_02_medium V2_03_difficult -->
<arg name="sequence_name" default = "MH_05_difficult" />
<arg name="sequence_name" default = "MH_01_easy" />

<node name="benchmark_publisher" pkg="benchmark_publisher" type="benchmark_publisher" output="screen">
<param name="data_name" type="string" value="$(find benchmark_publisher)/config/$(arg sequence_name)/data.csv" />
<remap from="~estimated_odometry" to="/vins_estimator/odometry" />
</node>
<!--
<node pkg="rosbag" type="play" name="player" output="log"
args="/home/tony-ws/bag/ijrr_euroc_mav_dataset/$(arg sequence_name)/$(arg sequence_name).bag -r 2" />
args="/home/tony-ws1/bag/ijrr_euroc_mav_dataset/$(arg sequence_name)/$(arg sequence_name).bag -r 2" />
-->
</launch>
30 changes: 15 additions & 15 deletions camera_model/CMakeLists.txt
@@ -34,20 +34,20 @@ include_directories(
include_directories("include")


# add_executable(Calibration
# src/intrinsic_calib.cc
# src/chessboard/Chessboard.cc
# src/calib/CameraCalibration.cc
# src/camera_models/Camera.cc
# src/camera_models/CameraFactory.cc
# src/camera_models/CostFunctionFactory.cc
# src/camera_models/PinholeCamera.cc
# src/camera_models/CataCamera.cc
# src/camera_models/EquidistantCamera.cc
# src/camera_models/ScaramuzzaCamera.cc
# src/sparse_graph/Transform.cc
# src/gpl/gpl.cc
# src/gpl/EigenQuaternionParameterization.cc)
add_executable(Calibration
src/intrinsic_calib.cc
src/chessboard/Chessboard.cc
src/calib/CameraCalibration.cc
src/camera_models/Camera.cc
src/camera_models/CameraFactory.cc
src/camera_models/CostFunctionFactory.cc
src/camera_models/PinholeCamera.cc
src/camera_models/CataCamera.cc
src/camera_models/EquidistantCamera.cc
src/camera_models/ScaramuzzaCamera.cc
src/sparse_graph/Transform.cc
src/gpl/gpl.cc
src/gpl/EigenQuaternionParameterization.cc)

add_library(camera_model
src/chessboard/Chessboard.cc
@@ -63,5 +63,5 @@ add_library(camera_model
src/gpl/gpl.cc
src/gpl/EigenQuaternionParameterization.cc)

# target_link_libraries(Calibration ${Boost_LIBRARIES} ${OpenCV_LIBS} ${CERES_LIBRARIES})
target_link_libraries(Calibration ${Boost_LIBRARIES} ${OpenCV_LIBS} ${CERES_LIBRARIES})
target_link_libraries(camera_model ${Boost_LIBRARIES} ${OpenCV_LIBS} ${CERES_LIBRARIES})