No version released for distro humble; showing the GitHub version. Known supported distros are highlighted in the buttons above.

Package Summary

Tags No category tags.
Version 3.0.0
License Apache License v2.0
Build type AMENT_PYTHON
Use RECOMMENDED

Repository Summary

Description A modular, open and non-proprietary toolkit for core robotic functionalities by harnessing deep learning
Checkout URI https://github.com/opendr-eu/opendr.git
VCS Type git
VCS Version master
Last Updated 2025-01-29
Dev Status UNKNOWN
Released UNRELEASED
Tags deep-learning robotics
Contributing Help Wanted (-)
Good First Issues (-)
Pull Requests to Review (-)

Package Description

OpenDR ROS2 nodes for the perception package

Additional Links

No additional links.

Maintainers

  • OpenDR Project Coordinator

Authors

No additional authors.

OpenDR Perception Package

This package contains ROS2 nodes related to the perception package of OpenDR.


Prerequisites

Before you can run any of the toolkit’s ROS2 nodes, some prerequisites need to be fulfilled:

  1. First, set up the required packages and build your workspace.
  2. (Optional for nodes with RGB input)

    For basic usage and testing, all the toolkit’s ROS2 nodes that use RGB images are set up to expect input from a basic webcam using the default package usb_cam, which is installed with OpenDR. You can run the webcam node in a new terminal:

    ros2 run usb_cam usb_cam_node_exe
    
By default, the USB cam node publishes images on `/image_raw` and the RGB input nodes subscribe to this topic if not provided with an input topic argument. 
As explained for each node below, you can modify the topics via arguments, so if you use any other node responsible for publishing images, **make sure to change the input topic accordingly.**
  3. (Optional for nodes with audio input or audiovisual input)

    For basic usage and testing, the toolkit’s ROS2 nodes that use audio as input are set up to expect input from a basic audio device using the default package audio_common, which is installed with OpenDR. You can run the audio node in a new terminal:

    ros2 launch audio_capture capture_wave.launch.xml
    
By default, the audio capture node publishes audio data on `/audio/audio` and the audio input nodes subscribe to this topic if not provided with an input topic argument. 
As explained for each node below, you can modify the topics via arguments, so if you use any other node responsible for publishing audio, **make sure to change the input topic accordingly.**

Notes

  • Display output images with rqt_image_view

    For any node that outputs images, rqt_image_view can be used to display them by running the following command:

    ros2 run rqt_image_view rqt_image_view &
    
A window will appear, where the topic that you want to view can be selected from the drop-down menu on the top-left area of the window.
Refer to each node's documentation below to find out the default output image topic, where applicable, and select it on the drop-down menu of rqt_image_view.
  • Echo node output

    All OpenDR nodes publish some kind of detection message, which can be echoed by running the following command:

    ros2 topic echo /opendr/topic_name
    
You can find the default topic name for each node in its documentation below.
  • Increase performance by disabling output

    Nodes can be modified via command-line arguments, which are presented for each node separately below. Generally, arguments let you change the input and output topics, the device the node runs on (CPU or GPU), etc. When a node publishes on several topics, where applicable, you can disable one or more of the outputs by passing `None` as the corresponding output topic. This stops publishing on that topic and skips some operations in the node, which may increase its performance.

    An example would be to disable the output annotated image topic in a node when visualization is not needed and only use the detection message in another node, thus eliminating the OpenCV operations.

  • Logging the node performance in the console

    OpenDR provides a utility performance node that logs performance messages in the console for a running node. Set the performance_topic of the node you are using and also run the performance node to get the time it takes the node to process a single input and its average speed expressed in frames per second.

  • An example diagram of OpenDR nodes running

    Face Detection ROS2 node running diagram

    • On the left, the usb_cam node can be seen, which uses a system camera to publish images on the /image_raw topic.
    • In the middle, OpenDR’s face detection node is running, taking the published image as input. By default, the node's input topic is set to /image_raw.
    • On the right, the two output topics of the face detection node can be seen. The bottom topic, /opendr/image_faces_annotated, is the annotated image, which can be easily viewed with rqt_image_view as explained earlier. The other topic, /opendr/faces, is the detection message containing detailed information on the detected faces. It can be viewed by running ros2 topic echo /opendr/faces in a terminal.
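The performance logging described in the notes above boils down to averaging per-input processing times and reporting frames per second. A minimal, self-contained sketch of that computation (plain Python, no ROS2 required; the class name and window size are illustrative, not OpenDR's actual implementation):

```python
import time
from collections import deque


class FPSMeter:
    """Tracks per-input processing time and reports average FPS,
    similar in spirit to what a performance-logging node reports."""

    def __init__(self, window=20):
        # Keep only the most recent `window` measurements.
        self.durations = deque(maxlen=window)

    def measure(self, func, *args, **kwargs):
        # Time one call to the processing function and record its duration.
        start = time.perf_counter()
        result = func(*args, **kwargs)
        self.durations.append(time.perf_counter() - start)
        return result

    @property
    def avg_seconds(self):
        return sum(self.durations) / len(self.durations)

    @property
    def fps(self):
        return 1.0 / self.avg_seconds


meter = FPSMeter()
for _ in range(5):
    meter.measure(time.sleep, 0.01)  # stand-in for processing one input
print(f"avg: {meter.avg_seconds:.3f}s, {meter.fps:.1f} FPS")
```

In a real node, `measure` would wrap the inference call for each incoming image or audio frame, and the resulting average would be published on the configured performance topic.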

RGB input nodes

Pose Estimation ROS2 Node

You can find the pose estimation ROS2 node Python script here to inspect the code and modify it as you wish to fit your needs. The node makes use of the toolkit’s pose estimation tool, whose documentation can be found here. The node publishes the detected poses in OpenDR’s 2D pose message format, which contains a list of OpenDR’s keypoint messages.
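Conceptually, the 2D pose message is a container of keypoint messages. The sketch below illustrates that relationship with plain dataclasses; the field names here are hypothetical stand-ins, not OpenDR's actual message definitions:

```python
from dataclasses import dataclass, field
from typing import List


@dataclass
class Keypoint:
    # Illustrative fields; see OpenDR's actual keypoint message for the real schema.
    kpt_id: int
    x: float
    y: float


@dataclass
class Pose2D:
    # A 2D pose is essentially a list of keypoints plus identifying metadata.
    pose_id: int
    conf: float
    keypoints: List[Keypoint] = field(default_factory=list)


pose = Pose2D(pose_id=0, conf=0.92,
              keypoints=[Keypoint(0, 120.0, 80.5), Keypoint(1, 130.0, 95.0)])
print(len(pose.keypoints))
```

A subscriber to the detections topic would iterate over `keypoints` in each received pose to get the per-joint image coordinates.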

Instructions for basic usage:

  1. Start the node responsible for publishing images. If you have a USB camera, then you can use the usb_cam_node as explained in the prerequisites above.

  2. You are then ready to start the pose detection node:

    ros2 run opendr_perception pose_estimation
    
The following optional arguments are available:
  - `-h, --help`: show a help message and exit
  - `-i or --input_rgb_image_topic INPUT_RGB_IMAGE_TOPIC`: topic name for input RGB image (default=`/image_raw`)
  - `-o or --output_rgb_image_topic OUTPUT_RGB_IMAGE_TOPIC`: topic name for output annotated RGB image, `None` to stop the node from publishing on this topic (default=`/opendr/image_pose_annotated`)
  - `-d or --detections_topic DETECTIONS_TOPIC`: topic name for detection messages, `None` to stop the node from publishing on this topic (default=`/opendr/poses`)
  - `--performance_topic PERFORMANCE_TOPIC`: topic name for performance messages (default=`None`, disabled)
  - `--device DEVICE`: device to use, either `cpu` or `cuda`; falls back to `cpu` if a GPU or CUDA is not found (default=`cuda`)
  - `--accelerate`: acceleration flag that causes pose estimation to run faster but with less accuracy
  3. Default output topics:
    • Output images: /opendr/image_pose_annotated
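A rough sketch of how this argument interface behaves, using plain argparse (a simplified reconstruction for illustration, not the node's actual source). The defaults mirror those documented above; note how the literal string `None` on the command line is interpreted as "disable this output topic":

```python
import argparse


def build_parser():
    # Defaults mirror the documented pose estimation node arguments.
    p = argparse.ArgumentParser(description="pose_estimation (sketch)")
    p.add_argument("-i", "--input_rgb_image_topic", default="/image_raw")
    p.add_argument("-o", "--output_rgb_image_topic",
                   default="/opendr/image_pose_annotated")
    p.add_argument("-d", "--detections_topic", default="/opendr/poses")
    p.add_argument("--performance_topic", default=None)
    p.add_argument("--device", default="cuda", choices=["cpu", "cuda"])
    p.add_argument("--accelerate", action="store_true")
    return p


def topic_or_none(value):
    # Passing the string "None" disables publishing on that topic.
    return None if value in (None, "None") else value


# Example: disable the annotated image output and force CPU inference.
args = build_parser().parse_args(["-o", "None", "--device", "cpu"])
print(topic_or_none(args.output_rgb_image_topic))
print(args.input_rgb_image_topic)
```

With `-o None`, the node would skip creating and publishing the annotated image while still publishing detection messages on `/opendr/poses`.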

File truncated at 100 lines; see the full file.

CHANGELOG
No CHANGELOG found.

Dependent Packages

No known dependants.

Launch files

No launch files found.

Messages

No message files found.

Services

No service files found.

Plugins

No plugins found.

Recent questions tagged opendr_perception at Robotics Stack Exchange

No version released for distro jazzy; showing the GitHub version. Known supported distros are highlighted in the buttons above.


No version released for distro kilted; showing the GitHub version. Known supported distros are highlighted in the buttons above.


No version released for distro rolling; showing the GitHub version. Known supported distros are highlighted in the buttons above.


Package Summary

Tags No category tags.
Version 3.0.0
License Apache License v2.0
Build type AMENT_PYTHON
Use RECOMMENDED

Repository Summary

Description A modular, open and non-proprietary toolkit for core robotic functionalities by harnessing deep learning
Checkout URI https://github.com/opendr-eu/opendr.git
VCS Type git
VCS Version master
Last Updated 2025-01-29
Dev Status UNKNOWN
Released UNRELEASED
Tags deep-learning robotics
Contributing Help Wanted (-)
Good First Issues (-)
Pull Requests to Review (-)

Package Description

OpenDR ROS2 nodes for the perception package

Additional Links

No additional links.

Maintainers

  • OpenDR Project Coordinator

Authors

No additional authors.

OpenDR Perception Package

This package contains ROS2 nodes related to the perception package of OpenDR.


Prerequisites

Before you can run any of the toolkit’s ROS2 nodes, some prerequisites need to be fulfilled:

  1. First of all, you need to set up the required packages and build your workspace.
  2. (Optional for nodes with RGB input)

    For basic usage and testing, all the toolkit’s ROS2 nodes that use RGB images are set up to expect input from a basic webcam using the default package usb_cam which is installed with OpenDR. You can run the webcam node in a new terminal:

    ros2 run usb_cam usb_cam_node_exe
    
By default, the USB cam node publishes images on `/image_raw` and the RGB input nodes subscribe to this topic if not provided with an input topic argument. 
As explained for each node below, you can modify the topics via arguments, so if you use any other node responsible for publishing images, **make sure to change the input topic accordingly.**
  1. (Optional for nodes with audio input or audiovisual input)

    For basic usage and testing, the toolkit’s ROS2 nodes that use audio as input are set up to expect input from a basic audio device using the default package audio_common which is installed with OpenDR. You can run the audio node in a new terminal:

    ros2 launch audio_capture capture_wave.launch.xml
    
By default, the audio capture node publishes audio data on `/audio/audio` and the audio input nodes subscribe to this topic if not provided with an input topic argument. 
As explained for each node below, you can modify the topics via arguments, so if you use any other node responsible for publishing audio, **make sure to change the input topic accordingly.**

Notes

  • Display output images with rqt_image_view

    For any node that outputs images, rqt_image_view can be used to display them by running the following command:

    ros2 run rqt_image_view rqt_image_view &
    
A window will appear, where the topic that you want to view can be selected from the drop-down menu on the top-left area of the window.
Refer to each node's documentation below to find out the default output image topic, where applicable, and select it on the drop-down menu of rqt_image_view.
  • Echo node output

    All OpenDR nodes publish some kind of detection message, which can be echoed by running the following command:

    ros2 topic echo /opendr/topic_name
    
You can find the default topic name for each node in its documentation below.
  • Increase performance by disabling output

    Optionally, nodes can be modified via command line arguments, which are presented for each node separately below. Generally, arguments let you change the input and output topics, the device the node runs on (CPU or GPU), etc. When a node publishes on several topics, where applicable, you can disable one or more of the outputs by passing `None` as the corresponding output topic. This stops publishing on that topic and skips the associated operations in the node, which can increase its performance.

    An example would be to disable the output annotated image topic in a node when visualization is not needed and only use the detection message in another node, thus eliminating the OpenCV operations.
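For illustration, the pattern behind this option can be sketched with plain argparse: the literal string `None` on the command line is treated as "disabled", and the corresponding publisher is simply never created. This is a hypothetical standalone sketch, not the actual OpenDR node source; the function name and default below mirror the pose estimation node's documented CLI.

```python
import argparse

def parse_output_topic(argv):
    """Parse the output image topic the way an OpenDR-style node treats it
    (hypothetical sketch): the string "None" means 'do not publish'."""
    parser = argparse.ArgumentParser()
    parser.add_argument("-o", "--output_rgb_image_topic",
                        default="/opendr/image_pose_annotated")
    args = parser.parse_args(argv)
    topic = args.output_rgb_image_topic
    # Returning None here signals the node to skip creating the publisher,
    # and with it the annotation (OpenCV drawing) work for that output.
    return None if topic == "None" else topic

print(parse_output_topic(["-o", "None"]))  # publishing disabled
print(parse_output_topic([]))              # default annotated-image topic
```

In the real nodes the returned value would gate the `create_publisher` call, so a disabled output costs nothing per frame.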

  • Logging the node performance in the console

    OpenDR provides a utility performance node that logs performance messages in the console for the running node. Set the `performance_topic` of the node you are using and also run the performance node to get the time it takes the node to process a single input and its average speed expressed in frames per second.
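As a rough illustration of what these performance messages convey, per-frame processing times can be averaged over a window and inverted to get frames per second. This is a conceptual standalone sketch, not the actual OpenDR performance node code; the class name and timing values are hypothetical.

```python
from collections import deque

class FPSMeter:
    """Rolling average of per-frame processing time, reported as FPS
    (hypothetical sketch of the concept behind the performance messages)."""
    def __init__(self, window=20):
        self.times = deque(maxlen=window)  # keep only the last `window` frames

    def update(self, seconds_per_frame):
        self.times.append(seconds_per_frame)

    @property
    def fps(self):
        if not self.times:
            return 0.0
        # average speed = frames observed / total time spent processing them
        return len(self.times) / sum(self.times)

meter = FPSMeter()
for t in (0.05, 0.04, 0.05, 0.06):  # e.g. measured inference times in seconds
    meter.update(t)
print(round(meter.fps, 1))  # -> 20.0
```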

  • An example diagram of OpenDR nodes running

    Face Detection ROS2 node running diagram

    • On the left, the usb_cam node can be seen, which is using a system camera to publish images on the /image_raw topic.
    • In the middle, OpenDR’s face detection node is running taking as input the published image. By default, the node has its input topic set to /image_raw.
    • To the right, the two output topics of the face detection node can be seen. The bottom topic /opendr/image_faces_annotated is the annotated image, which can be easily viewed with rqt_image_view as explained earlier. The other topic /opendr/faces is the detection message, which contains detailed information about the detected faces. This message can be easily viewed by running ros2 topic echo /opendr/faces in a terminal.

RGB input nodes

Pose Estimation ROS2 Node

You can find the pose estimation ROS2 node Python script here to inspect the code and modify it as you wish to fit your needs. The node makes use of the toolkit’s pose estimation tool, whose documentation can be found here. The node publishes the detected poses in OpenDR’s 2D pose message format, which contains a list of messages in OpenDR’s keypoint format.

Instructions for basic usage:

  1. Start the node responsible for publishing images. If you have a USB camera, then you can use the usb_cam_node as explained in the prerequisites above.

  2. You are then ready to start the pose detection node:

    ros2 run opendr_perception pose_estimation
    
The following optional arguments are available:
  - `-h, --help`: show a help message and exit
  - `-i or --input_rgb_image_topic INPUT_RGB_IMAGE_TOPIC`: topic name for input RGB image (default=`/image_raw`)
  - `-o or --output_rgb_image_topic OUTPUT_RGB_IMAGE_TOPIC`: topic name for output annotated RGB image, `None` to stop the node from publishing on this topic (default=`/opendr/image_pose_annotated`)
  - `-d or --detections_topic DETECTIONS_TOPIC`: topic name for detection messages, `None` to stop the node from publishing on this topic (default=`/opendr/poses`)
  - `--performance_topic PERFORMANCE_TOPIC`: topic name for performance messages (default=`None`, disabled)
  - `--device DEVICE`: device to use, either `cpu` or `cuda`; falls back to `cpu` if a GPU or CUDA is not found (default=`cuda`)
  - `--accelerate`: acceleration flag that makes pose estimation run faster but with lower accuracy
  3. Default output topics:
    • Output images: /opendr/image_pose_annotated

File truncated at 100 lines; see the full file.

CHANGELOG
No CHANGELOG found.

Dependant Packages

No known dependants.

Launch files

No launch files found

Messages

No message files found.

Services

No service files found

Plugins

No plugins found.

Recent questions tagged opendr_perception at Robotics Stack Exchange

No version for distro galactic showing github. Known supported distros are highlighted in the buttons above.

No version for distro iron showing github. Known supported distros are highlighted in the buttons above.

No version for distro melodic showing github. Known supported distros are highlighted in the buttons above.

No version for distro noetic showing github. Known supported distros are highlighted in the buttons above.
