Working with perception using MoveIt! and Gazebo

Until now we have worked only with the arm in MoveIt!. In this section, we will see how to interface 3D vision sensor data with MoveIt!. The sensor can either be simulated in Gazebo, or you can directly interface an RGB-D sensor such as the Kinect or Xtion Pro using the openni_launch package. Here we will work with the Gazebo simulation.
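The usual way to feed depth-camera data into MoveIt!'s planning scene is through an octomap updater plugin declared in a sensors YAML file that the move_group launch files load. The following is a minimal sketch of such a file; the file name (sensors_3d.yaml) and the point cloud topic (/camera/depth/points) are assumptions and must match your own camera or simulated sensor:

```yaml
# sensors_3d.yaml (hypothetical file name): loaded by the move_group node
# to build an octomap of obstacles from the depth sensor's point cloud.
sensors:
  - sensor_plugin: occupancy_map_monitor/PointCloudOctomapUpdater
    point_cloud_topic: /camera/depth/points   # assumed topic; depends on your sensor
    max_range: 5.0            # ignore points farther than 5 m
    point_subsample: 1        # use every point (increase to thin the cloud)
    padding_offset: 0.1       # padding around the robot for self-filtering
    padding_scale: 1.0
    filtered_cloud_topic: filtered_cloud
```

With this in place, obstacles seen by the sensor appear in the planning scene as an octomap, so motion plans automatically avoid them.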

We will add sensors to MoveIt! for vision-assisted pick and place. We will create a grasp table and a grasp object in Gazebo for the pick-and-place operation, using two custom models called grasp_table and grasp_object. The sample models are provided along with the chapter code and should be copied to the ~/.gazebo/models folder so that Gazebo can find them. ...
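The copy step above can be sketched as a few shell commands. The source path below is hypothetical; replace it with wherever you extracted the chapter code:

```shell
# Hypothetical location of the chapter code's model folders; adjust as needed.
SRC=~/chapter_codes/models

# Gazebo looks for local models under ~/.gazebo/models.
mkdir -p ~/.gazebo/models

# Copy each custom model, skipping any that are not found at SRC.
for model in grasp_table grasp_object; do
  if [ -d "$SRC/$model" ]; then
    cp -r "$SRC/$model" ~/.gazebo/models/
  else
    echo "Skipping $model: not found under $SRC"
  fi
done
```

After copying, the models show up in Gazebo's Insert panel, from where they can be dropped into the world.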
