This is the official DOPE ROS package for detection and 6-DoF pose estimation of known objects from an RGB camera. The network has been trained on the following YCB objects: cracker box, sugar box, tomato soup can, mustard bottle, potted meat can, and gelatin box. For more details, see our CoRL 2018 paper and video.
Note: The instructions below refer to inference only. Training code is also provided but not supported.
1. **Set up system / Docker image**

    We have tested on Ubuntu 16.04 with ROS Kinetic, an NVIDIA Titan X, and Python 2.7. The code may work on other systems. If you do not have the full ROS install, you may need to install some packages, e.g.,

    ```
    $ apt-get install ros-kinetic-cv-bridge
    ```

    Alternatively, use the provided Docker image and skip to Step #5.
2. **Create a catkin workspace** (if you do not already have one):

    ```
    $ mkdir -p ~/catkin_ws/src   # Replace `catkin_ws` with the name of your workspace
    $ cd ~/catkin_ws/
    $ catkin_make
    ```
3. **Download the DOPE code**

    ```
    $ cd ~/catkin_ws/src
    $ git clone https://github.com/NVlabs/Deep_Object_Pose.git dope
    ```
4. **Install dependencies**

    ```
    $ cd ~/catkin_ws/src/dope
    $ pip install -r requirements.txt
    ```
5. **Build**

    ```
    $ cd ~/catkin_ws
    $ catkin_make
    ```
6. **Download the weights** and save them to the `weights` folder, i.e., `~/catkin_ws/src/dope/weights/`.
7. **Start ROS master**

    ```
    $ cd ~/catkin_ws
    $ source devel/setup.bash
    $ roscore
    ```
8. **Start camera node**

    Start the ROS driver for your camera. If you are using a webcam, try a standard driver package such as `usb_cam`.

    The camera must publish a correct `camera_info` topic so that DOPE can compute correct poses. Nearly all ROS camera drivers have a `camera_info_url` parameter where you can set the calibration info (most drivers also include a reasonable default). For details, see the camera tutorial.
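The intrinsics published on `camera_info` feed directly into the pose computation. As a rough, dependency-free illustration of why they matter, here is a minimal pinhole projection; the `project` helper and all numeric values are made up for this example and are not part of DOPE:

```python
# Minimal pinhole-camera projection: shows how the intrinsics found in a
# camera_info message (fx, fy, cx, cy) map a 3D point in the camera frame
# to pixel coordinates. All values below are illustrative, not DOPE's.

def project(point_3d, fx, fy, cx, cy):
    """Project a 3D point (meters, camera frame) to pixel coordinates."""
    x, y, z = point_3d
    u = fx * x / z + cx
    v = fy * y / z + cy
    return u, v

# A point 1 m in front of the camera, 0.5 m to the right:
u, v = project((0.5, 0.0, 1.0), fx=600.0, fy=600.0, cx=320.0, cy=240.0)
print(u, v)  # 620.0 240.0
```

If the calibration in `camera_info` is wrong, these projections no longer correspond to the actual image, and the recovered 6-DoF poses will be systematically off.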
9. **Edit config info** (if desired) in `~/catkin_ws/src/dope/config/config_pose.yaml`

    * `topic_camera`: RGB topic to listen to
    * `topic_camera_info`: camera info topic to listen to
    * `topic_publishing`: topic name for publishing
    * `input_is_rectified`: whether the input images are rectified. It is strongly suggested to use a rectified input topic.
    * `weights`: dictionary of object names and their weights path names; comment out any line to disable detection/estimation of that object
    * `dimension`: dictionary of dimensions for the objects (key values must match the `weights` names)
    * `draw_colors`: dictionary of object colors (key values must match the `weights` names)
    * `thresh_points`: confidence threshold for object detection; increase this value if you see too many false positives, reduce it if objects are not detected
    * `downscale_height`: if the input image is taller than this, scale it down to this pixel height. Very large input images eat up all the GPU memory and slow down inference. Also, DOPE works best when the object size (in pixels) matches what appeared in the training data (which is downscaled to 400 px). For these reasons, downscaling large input images to something reasonable (e.g., 400-500 px) improves memory consumption, inference speed, and recognition results.
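Because `dimension` and `draw_colors` must use the same object keys as `weights`, a quick consistency check can catch typos before launching. The sketch below mimics the structure of `config_pose.yaml`; the entries and values are illustrative placeholders, not the shipped defaults:

```python
# Sanity-check that the object keys in a DOPE-style config agree across
# sections. The dict mimics config_pose.yaml's structure; all entries and
# values here are illustrative placeholders, not the shipped defaults.
config = {
    "weights": {"cracker": "package://dope/weights/cracker_60.pth",
                "sugar":   "package://dope/weights/sugar_60.pth"},
    "dimension": {"cracker": [16.4, 21.3, 7.2],
                  "sugar":   [9.3, 17.6, 4.5]},
    "draw_colors": {"cracker": [13, 255, 128],
                    "sugar":   [255, 255, 255]},
}

objects = set(config["weights"])
for section in ("dimension", "draw_colors"):
    missing = objects - set(config[section])
    extra = set(config[section]) - objects
    # Fail loudly if any section's keys do not line up with `weights`
    assert not missing and not extra, (section, missing, extra)
print("config keys consistent for:", sorted(objects))
```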
10. **Start DOPE node**

    ```
    $ roslaunch dope dope.launch [config:=/path/to/my_config.yaml]   # Config file is optional; default is `config_pose.yaml`
    ```
11. The following ROS topics are published:

    ```
    /dope/webcam_rgb_raw         # RGB images from camera
    /dope/dimension_[obj_name]   # dimensions of object
    /dope/pose_[obj_name]        # timestamped pose of object
    /dope/rgb_points             # RGB images with detected cuboids overlaid
    ```

    Note: `[obj_name]` is in {cracker, gelatin, meat, mustard, soup, sugar}.
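A subscriber consuming `/dope/pose_[obj_name]` typically converts the pose's orientation quaternion into a rotation matrix. The following is a dependency-free sketch of that standard conversion; it is plain quaternion math, not code from DOPE, and `quat_to_matrix` is a hypothetical helper name:

```python
# Convert a unit quaternion (x, y, z, w), as found in the orientation field
# of a geometry_msgs/PoseStamped message, into a 3x3 rotation matrix.
def quat_to_matrix(x, y, z, w):
    return [
        [1 - 2*(y*y + z*z), 2*(x*y - z*w),     2*(x*z + y*w)],
        [2*(x*y + z*w),     1 - 2*(x*x + z*z), 2*(y*z - x*w)],
        [2*(x*z - y*w),     2*(y*z + x*w),     1 - 2*(x*x + y*y)],
    ]

# The identity quaternion yields the identity rotation:
R = quat_to_matrix(0.0, 0.0, 0.0, 1.0)
print(R)  # identity matrix
```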
12. **Debug in RViz**

    Run `rosrun rviz rviz`, then either:
    * `Add > Image` to view the raw RGB image or the image with cuboids overlaid
    * `Add > Pose` to view the object coordinate frame in 3D

    If you do not have a coordinate frame set up, you can run this static transformation: `rosrun tf2_ros static_transform_publisher 0 0 0 0.7071 0 0 -0.7071 world <camera_frame_id>`, where `<camera_frame_id>` is the `frame_id` of your input camera messages. Make sure that in RViz's `Global Options`, the `Fixed Frame` is set to `world`. Alternatively, you can skip the `static_transform_publisher` step and directly set the `Fixed Frame` to your `<camera_frame_id>`.
13. If `rosrun` does not find the package (`[rospack] Error: package 'dope' not found`), be sure that you called `source devel/setup.bash` as mentioned above. To find the package, run `rospack find dope`.
DOPE returns the poses of the objects in the camera coordinate frame. DOPE uses the aligned YCB models, which can be obtained using NVDU (see the `nvdu_ycb` command).
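As a sketch of what "poses in the camera coordinate frame" means in practice, the example below rotates a camera-frame point into the `world` frame using the same quaternion as the static transform shown in the RViz step (rotation (0.7071, 0, 0, -0.7071), zero translation). It is plain Python for illustration; `to_world` is a hypothetical helper, not part of DOPE:

```python
import math

# Rotation matrix for the quaternion (x, y, z, w) = (0.7071, 0, 0, -0.7071)
# used in the static_transform_publisher example: a -90 degree rotation
# about the x axis, which maps the camera's optical axis (+z) onto world +y.
s = math.sqrt(0.5)
x, y, z, w = s, 0.0, 0.0, -s
R = [
    [1 - 2*(y*y + z*z), 2*(x*y - z*w),     2*(x*z + y*w)],
    [2*(x*y + z*w),     1 - 2*(x*x + z*z), 2*(y*z - x*w)],
    [2*(x*z - y*w),     2*(y*z + x*w),     1 - 2*(x*x + y*y)],
]

def to_world(p_cam):
    """Rotate a camera-frame point into the world frame (translation is zero)."""
    return [sum(R[i][j] * p_cam[j] for j in range(3)) for i in range(3)]

# An object detected 1 m straight ahead of the camera ends up 1 m along world +y:
print(to_world([0.0, 0.0, 1.0]))  # approximately [0.0, 1.0, 0.0]
```

In a real system you would instead use `tf2` to look up the transform at the message's timestamp, but the arithmetic it performs is the same.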
If you use this tool in a research project, please cite as follows:
@inproceedings{tremblay2018corl:dope,
author = {Jonathan Tremblay and Thang To and Balakumar Sundaralingam and Yu Xiang and Dieter Fox and Stan Birchfield},
title = {Deep Object Pose Estimation for Semantic Robotic Grasping of Household Objects},
booktitle = {Conference on Robot Learning (CoRL)},
url = "https://arxiv.org/abs/1809.10790",
year = 2018
}
Copyright (C) 2018 NVIDIA Corporation. All rights reserved. Licensed under the CC BY-NC-SA 4.0 license.
Thanks to Jeffrey Smith (jeffreys@nvidia.com) for creating the Docker image.
Jonathan Tremblay (jtremblay@nvidia.com), Stan Birchfield (sbirchfield@nvidia.com)