Package Summary

Tags No category tags.
Version 0.3.3
License BSD
Build type CATKIN
Use RECOMMENDED

Repository Summary

Checkout URI https://github.com/turtlebot/turtlebot_arm.git
VCS Type git
VCS Version kinetic-devel
Last Updated 2020-02-06
Dev Status DEVELOPED
CI status Continuous Integration
Released UNRELEASED
Tags No category tags.
Contributing Help Wanted (0)
Good First Issues (0)
Pull Requests to Review (0)

Package Description

turtlebot_arm_object_manipulation contains a demo allowing the TurtleBot arm to recognize, pick and place objects on a level surface using interactive markers.

Additional Links

Maintainers

  • Jorge Santos

Authors

  • Jorge Santos

turtlebot_arm_object_manipulation

This package contains an experimental full overhaul of the turtlebot_arm_block_manipulation demo. The latter worked mostly with custom code written to demonstrate the TurtleBot arm:

  • A PCL-based detector of cubes of a known size, assuming a horizontal surface at a known height
  • Interactive markers for choosing the targets
  • Pick and place using “move to target pose” move group commands, as sketched below (prior to Indigo, the demo used its own controller instead of MoveIt!)
  • Basic state machine implemented as a C++ program
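
For reference, a “move to target pose” move group command boils down to something like the following moveit_commander sketch; the group name "arm" and the pose values are illustrative assumptions, not taken from this package.

#!/usr/bin/env python
# Minimal sketch of the "move to target pose" approach with moveit_commander.
# The group name "arm" and the pose values are assumptions for illustration.
import sys

import rospy
import moveit_commander
from geometry_msgs.msg import PoseStamped

moveit_commander.roscpp_initialize(sys.argv)
rospy.init_node('move_to_pose_sketch')

arm = moveit_commander.MoveGroupCommander('arm')

target = PoseStamped()
target.header.frame_id = arm.get_planning_frame()
target.pose.position.x = 0.2   # hypothetical pose over the pick surface
target.pose.position.z = 0.1
target.pose.orientation.w = 1.0

arm.set_pose_target(target)
arm.go(wait=True)              # plan and execute in a single call
arm.stop()
arm.clear_pose_targets()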

In this version, I wanted to combine existing, powerful tools to make the demo much more flexible and general (and to learn about those tools in the process!). So here we use:

  • ORK tabletop for table segmentation and for recognizing objects stored in a database. The table pose doesn’t need to be known in advance, though I still assume it is horizontal. Don’t forget to add the objects you want to recognize to the database! (see the example below)
  • The interactive markers for choosing the targets remain pretty much the same, though I now label them with the object name retrieved from the database.
  • I now use pick and place move group commands (see the first sketch after this list). The gripper’s closed position is obtained from the object mesh, assuming it can be retrieved (that’s why we start the object_recognition_ros/object_information_server).
  • More sophisticated control using a SMACH state machine. New features are:
    • The user can interact with the SMACH, as it is wrapped with an action server (see the second sketch after this list). Valid commands are: start, reset, fold [the arm] and quit
    • Fold the arm if no objects are found, to widen the camera’s field of view
    • Clear the octomap if a motion plan fails. NOTE: I removed this feature because it requires adding 3D sensors to MoveIt!, which is not the default turtlebot_arm configuration. You can take the modified SMACH from my own robot packages, thorp_smach.
    • Shut down everything on quit
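
As a rough illustration of the pick and place move group commands, picking and placing a recognized object with moveit_commander can look like the first sketch below. The group name "arm" and the object id "cube_2_5" are assumptions for the example; the actual demo supplies grasps computed from the object mesh rather than relying on a grasp planner.

# Rough sketch of pick and place via move group commands.
# "arm" and "cube_2_5" are assumptions for illustration; the real demo
# provides moveit_msgs/Grasp messages computed from the object mesh.
import sys

import rospy
import moveit_commander
from geometry_msgs.msg import PoseStamped

moveit_commander.roscpp_initialize(sys.argv)
rospy.init_node('pick_and_place_sketch')

arm = moveit_commander.MoveGroupCommander('arm')

# Pick a collision object already in the planning scene; passing no
# grasps relies on a configured grasp planner, so real code usually
# supplies explicit grasps instead.
arm.pick('cube_2_5')

# Place the object at a hypothetical pose on the table.
place_pose = PoseStamped()
place_pose.header.frame_id = arm.get_planning_frame()
place_pose.pose.position.x = 0.25
place_pose.pose.position.y = -0.10
place_pose.pose.orientation.w = 1.0
arm.place('cube_2_5', place_pose)

The SMACH wrapper’s action interface can be exercised with a plain actionlib client, roughly as in the second sketch. The action name and type below are hypothetical placeholders, not confirmed names from this package; check its action definitions for the real ones.

# Sketch of sending user commands to the SMACH action server wrapper.
# The action name 'user_commands' and the UserCommand action type are
# hypothetical placeholders.
import rospy
import actionlib
from turtlebot_arm_object_manipulation.msg import (UserCommandAction,
                                                   UserCommandGoal)

rospy.init_node('user_command_client')

client = actionlib.SimpleActionClient('user_commands', UserCommandAction)
client.wait_for_server()

# Valid commands per the demo: start, reset, fold and quit.
client.send_goal(UserCommandGoal(command='start'))
client.wait_for_result()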

See a demo video here.

Collision-free arm trajectories

turtlebot_arm_moveit_config doesn’t include 3D sensors in its MoveIt! configuration, so you will not see the octomap collision map unless you add them yourself, as explained here.
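
For reference, enabling the octomap in a MoveIt! config usually amounts to adding a sensors_3d.yaml along these lines and loading it from the sensor manager launch file. The point cloud topic and the limits below are assumptions for a typical TurtleBot camera setup, not values from turtlebot_arm_moveit_config:

# Hypothetical sensors_3d.yaml for the PointCloudOctomapUpdater plugin;
# adjust the point cloud topic and limits to your camera setup.
sensors:
  - sensor_plugin: occupancy_map_monitor/PointCloudOctomapUpdater
    point_cloud_topic: /camera/depth_registered/points
    max_range: 2.0
    point_subsample: 1
    padding_offset: 0.1
    padding_scale: 1.0
    filtered_cloud_topic: filtered_cloud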

Add objects to recognize to the database

To add the 2.5 cm cube that comes with this package, run the commands below; the first command reports the id of the newly added object, which you should use in place of YOUR_OBJECT_ID in the second:

rosrun object_recognition_core object_add.py -n cube_2_5 -d "2.5 cm side cube" --commit
rosrun object_recognition_core mesh_add.py YOUR_OBJECT_ID `rospack find turtlebot_arm_object_manipulation`/meshes/cube_2_5.stl --commit

WARNING

This demo requires some code changes that have not been released yet:

  • New planning scene interface methods, implemented in this pending PR. Until it is merged, the demo will not compile.
  • A fix for this issue with the gripper controller. You can use my fork in the meantime.
CHANGELOG

Changelog for package turtlebot_arm_object_manipulation

Forthcoming

Wiki Tutorials

This package does not provide any links to tutorials in its rosindex metadata. You can check the ROS Wiki Tutorials page for the package.

Launch files

Messages

No message files found.

Services

No service files found.

Plugins

No plugins found.

Recent questions tagged turtlebot_arm_object_manipulation at Robotics Stack Exchange