ros2_benchmark repository
Repository Summary
Field | Value |
---|---|
Description | Benchmark the performance of your ROS 2 graphs |
Checkout URI | https://github.com/nvidia-isaac-ros/ros2_benchmark.git |
VCS Type | git |
VCS Version | main |
Last Updated | 2025-07-25 |
Dev Status | UNKNOWN |
Released | UNRELEASED |
Tags | benchmarking performance performance-testing ros2 ros2-humble |
Packages
Name | Version |
---|---|
ros2_benchmark | 3.2.12 |
ros2_benchmark_interfaces | 3.2.5 |
README
ros2_benchmark

Overview
Robots are real-time systems that require complex graphs of heterogeneous computation to perform perception, planning, and control. These graphs of computation need to perform work deterministically and with known latency. The computing platform has a fixed budget for heterogeneous computation (TOPS) and throughput; computation is typically performed on multiple CPUs, GPUs, and additional special-purpose, fixed-function hardware accelerators.
`ros2_benchmark` provides the tools for measuring the throughput, latency, and compute utilization of these complex graphs without altering the code under test. The results can be used to make informed design decisions on how best a robotics application can meet its real-time requirements. Results can be used to optimize system performance by tracking them over time against changes in the implementation, and in the development of program flow monitors that detect anomalies during operation of the real-time robotics application.
This tooling allows for realistic assessments of robotics application performance under load, including message transport costs in RCL, for practical benchmarking indicative of your real-world results. Message transport costs can be measured intra-process or inter-process, including DDS overhead, with support for type adaptation. This tooling does not require modification of the graph of nodes under test to measure results, allowing both open source and proprietary solutions to be measured with the same tools in a non-intrusive way. Input for benchmarking is standardized with available rosbag datasets accompanying this package.
Designed for local developer use or for CI/CD platforms, these tools can be containerized to run on cloud-native platforms such as Kubernetes. The tools are commercially hardened over tens of thousands of runs. We use this nightly on 7 hardware platforms spanning `aarch64` and `x86_64` architectures, on multiple graph configurations.

`ros2_benchmark` uses the benchmark controller to orchestrate the data loader, playback, and monitor nodes to perform benchmark runs and calculate performance results into a benchmark report.
The data loader node fetches input data from rosbag. Input data is pre-processed using a configurable graph of nodes and buffered into memory in the playback node, which supports a plug-in for type adaptation. The graph under benchmark runs unmodified, with the playback node controlling the input data rate and the monitor node measuring the output it receives.
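While a benchmark is running, the assembled graph can be observed from a second terminal with the standard ROS 2 CLI; this is only an illustrative check, and the node and topic names below are placeholders rather than the exact names created by ros2_benchmark:

```bash
# Observe the benchmark graph without modifying it (names are placeholders).
ros2 node list                    # controller, data loader, playback, monitor,
                                  # and graph-under-test nodes
ros2 topic list                   # topics wiring the playback node into the graph under test
ros2 topic hz /benchmark/input    # rough check of the playback publishing rate (placeholder topic)
```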

`ros2_benchmark` loads data from rosbag(s), performs any data pre-processing using a graph of ROS nodes, and buffers the input data for benchmarking. When measuring peak throughput, the auto finder runs the graph under benchmark at multiple publisher rates to find the maximum publisher rate with less than 5% of messages dropped through the graph; otherwise, it uses the specified fixed publishing rate or the timing from the rosbag.
The graph under benchmark is measured multiple times, and the calculated results are collected in a benchmark report.
Latest Update
Update 2023-11-16: Add support for live benchmark mode
Supported Platforms
This package is designed and tested to be compatible with aarch64 and x86_64 platforms running ROS 2 Humble.
Platform hardware | Platform software | ROS Version |
---|---|---|
aarch64, x86_64 | Ubuntu 20.04+ | ROS 2 Humble |
Note: `ros2_benchmark` has been tested on multiple computing platforms, including the Intel NUC Core i7 11th Gen and Jetson Orin.
Quickstart
To learn to use `ros2_benchmark`, start by running a sample benchmark. Follow the steps below to start measuring the performance of an AprilTag node with `ros2_benchmark`.
- Install ROS 2 Humble natively (see the official ROS 2 installation guide) or launch the official Docker container with ROS 2 Humble pre-installed:
docker run -it ros:humble
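If you take the container route, it can help to mount a host directory as the workspace so the sources cloned in the following steps persist outside the container. This is an optional convenience, and the mount path is only an illustration:

```bash
# Optional: run the container with a host workspace mounted (illustrative path).
docker run -it -v ~/ros_ws:/root/ros_ws ros:humble
```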
- Set up convenience environment variables and install tools.
export R2B_WS_HOME=~/ros_ws && \
export ROS2_BENCHMARK_OVERRIDE_ASSETS_ROOT=$R2B_WS_HOME/src/ros2_benchmark/assets && \
sudo apt-get update && sudo apt-get install -y git git-lfs wget
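These exports only apply to the current shell; if you expect to continue in new terminals later, you can persist them (a minimal sketch, assuming bash is your shell):

```bash
# Optional: persist the variables for future shells (bash assumed).
echo 'export R2B_WS_HOME=~/ros_ws' >> ~/.bashrc
echo 'export ROS2_BENCHMARK_OVERRIDE_ASSETS_ROOT=$R2B_WS_HOME/src/ros2_benchmark/assets' >> ~/.bashrc
```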
- Clone this repository along with an available implementation of AprilTag detection and install dependencies.
mkdir -p $R2B_WS_HOME/src && cd $R2B_WS_HOME/src && \
git clone https://github.com/NVIDIA-ISAAC-ROS/ros2_benchmark.git && \
git clone https://github.com/christianrauch/apriltag_ros.git && \
cd $R2B_WS_HOME && \
sudo apt-get update && \
rosdep update && rosdep install -i -r --from-paths src --rosdistro humble -y
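As an optional sanity check (not part of the original instructions), rosdep can re-scan the workspace and report any system dependencies that are still missing:

```bash
# Optional: verify that all declared dependencies are satisfied.
cd $R2B_WS_HOME && rosdep check --from-paths src --ignore-src --rosdistro humble
```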
- Clone and build the `image_proc` package with a patch to fix incompatible QoS settings.
```bash
cd $R2B_WS_HOME/src &&
git clone https://github.com/ros-perception/vision_opencv.git && cd vision_opencv && git checkout humble &&
cd $R2B_WS_HOME/src &&
git clone https://github.com/ros-perception/image_pipeline.git && cd image_pipeline && git checkout humble &&
git config user.email "benchmarking@ros2_benchmark.com" && git config user.name "ROS 2 Developer" &&
wget https://raw.githubusercontent.com/NVIDIA-ISAAC-ROS/ros2_benchmark/main/resources/patch/resize_qos_profile.patch &&
git apply resize_qos_profile.patch &&
cd $R2B_WS_HOME &&
sudo apt-get update &&
rosdep update && rosdep install -i -r --from-paths src --rosdistro humble -y && \
```
(File truncated at 100 lines; see the full file in the repository.)
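The remaining quickstart steps are cut off above; consult the full README for the exact commands. As a rough sketch only, a ROS 2 workspace like this is typically finished by building with colcon and launching one of the provided benchmark scripts with `launch_test` (the script path below is a placeholder, not a verified path):

```bash
# Hedged sketch of the usual final steps; see the full README for the exact
# commands and the real benchmark script path.
cd $R2B_WS_HOME
colcon build --symlink-install
source install/setup.bash

# Placeholder path: substitute one of the benchmark scripts shipped with
# ros2_benchmark.
launch_test src/ros2_benchmark/scripts/<benchmark_script>.py
```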
CONTRIBUTING
Isaac ROS Contribution Rules
Any contribution that you make to this repository will be under the Apache 2 License, as dictated by that license:
5. Submission of Contributions. Unless You explicitly state otherwise, any Contribution intentionally submitted for inclusion in the Work by You to the Licensor shall be under the terms and conditions of this License, without any additional terms or conditions. Notwithstanding the above, nothing herein shall supersede or modify the terms of any separate license agreement you may have executed with Licensor regarding such Contributions.
Contributors must sign off each commit by adding a `Signed-off-by: ...` line to commit messages to certify that they have the right to submit the code they are contributing to the project according to the Developer Certificate of Origin (DCO).