mobile_manipulation_rl_demo package from the opendr repo
Package Summary

| Field | Value |
|---|---|
| Tags | No category tags. |
| Version | 0.0.0 |
| License | Apache 2.0 |
| Build type | CATKIN |
| Use | RECOMMENDED |
Repository Summary

| Field | Value |
|---|---|
| Description | A modular, open and non-proprietary toolkit for core robotic functionalities by harnessing deep learning |
| Checkout URI | https://github.com/opendr-eu/opendr.git |
| VCS Type | git |
| VCS Version | master |
| Last Updated | 2025-01-29 |
| Dev Status | UNKNOWN |
| Released | UNRELEASED |
| Tags | deep-learning robotics |
Package Description

Maintainers
- Daniel Honerkamp
OpenDR mobile manipulation demo
Live demo of mobile manipulation using the OpenDR toolkit.
Set-up
Follow the ROS-setup described for the mobile_manipulation tool.
Running the example
Mobile manipulation tasks in the analytical environment can be run as follows:

```
python3 mobile_manipulation_demo.py --env pr2
```
Available tasks include `RndStartRndGoals`, `RestrictedWs`, `PickNPlace`, `Door`, and `Drawer`. Specific tasks can be selected with the `--eval_tasks` flag; by default, all tasks are evaluated sequentially.
The robot can be specified with the `--env` flag to choose between the PR2 and the Tiago. By default, this loads a pretrained model.
To run with controllers in Gazebo or in the real world, pass the `--eval_worlds` flag with the value `gazebo` or `real_world`. By default, the analytical (sim) environment is used. Note that running on the real robot requires the robot to be set up and goals to be specified for the end-effector motions.
For other options, see the argument flags in `mobile_manipulation_demo.py`.
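The flags described above can be summarized as a minimal `argparse` sketch. The flag names come from the text; the defaults, value handling, and choices are assumptions for illustration, not the actual implementation in `mobile_manipulation_demo.py`:

```python
# Hypothetical sketch of the demo's CLI, assuming the flag names described
# above; defaults and accepted values are illustrative assumptions.
import argparse

TASKS = ["RndStartRndGoals", "RestrictedWs", "PickNPlace", "Door", "Drawer"]

def build_parser() -> argparse.ArgumentParser:
    parser = argparse.ArgumentParser(
        description="Mobile manipulation demo (illustrative sketch)")
    # Robot selection: the text mentions the PR2 and the Tiago.
    parser.add_argument("--env", choices=["pr2", "tiago"], default="pr2")
    # Tasks to evaluate; by default, all tasks run sequentially.
    parser.add_argument("--eval_tasks", nargs="+", choices=TASKS, default=TASKS)
    # Execution backend; the analytical (sim) environment is the default.
    parser.add_argument("--eval_worlds", nargs="+",
                        choices=["sim", "gazebo", "real_world"], default=["sim"])
    return parser

if __name__ == "__main__":
    print(build_parser().parse_args())
```

For example, `--env tiago --eval_tasks Door Drawer` would evaluate only the door and drawer tasks with the Tiago in the default sim environment.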
Acknowledgement
This work has received funding from the European Union’s Horizon 2020 research and innovation programme under grant agreement No 871449 (OpenDR). This publication reflects the authors’ views only. The European Commission is not responsible for any use that may be made of the information it contains.