Package Summary

Tags No category tags.
Version 0.1.0
License BSD 3.0
Build type AMENT_CMAKE
Use RECOMMENDED

Repository Summary

Description Framework to evaluate performance of ROS 2
Checkout URI https://github.com/irobot-ros/ros2-performance.git
VCS Type git
VCS Version rolling
Last Updated 2025-04-10
Dev Status UNKNOWN
Released UNRELEASED
Tags benchmark performance cpp ros2
Contributing Help Wanted (-)
Good First Issues (-)
Pull Requests to Review (-)

Package Description

Benchmark applications to test ros2 performance

Additional Links

No additional links.

Maintainers

  • Juan Oxoby

Authors

  • Juan Oxoby

Benchmark Application

This folder contains a benchmark application to test the performance of a ROS2 system.

To run the benchmark, supply a .json topology file, specifying a complete ROS2 system, to irobot_benchmark. The application will load the complete ROS2 system from the topology file and will begin passing messages between the different nodes.
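The page does not reproduce a topology file inline, so the following is an illustrative sketch only: the field names (nodes, node_name, publishers, subscribers, topic_name, msg_type, period_ms) are assumptions, and the .json files shipped in the topology folder are the authoritative reference for the real schema.

```python
import json

# Hypothetical minimal topology: one publisher node and one subscriber node.
# All field names here are guesses; consult the .json files in the topology
# folder for the schema actually understood by irobot_benchmark.
topology = {
    "nodes": [
        {
            "node_name": "talker",
            "publishers": [
                {"topic_name": "chatter", "msg_type": "stamped10b", "period_ms": 100}
            ],
        },
        {
            "node_name": "listener",
            "subscribers": [
                {"topic_name": "chatter", "msg_type": "stamped10b"}
            ],
        },
    ]
}

with open("my_topology.json", "w") as f:
    json.dump(topology, f, indent=2)
```

A file of this shape would then be passed as the first argument to ./irobot_benchmark.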

For the duration of the test, statistical data will be collected: resource usage (CPU utilization and RAM consumption) and message latencies.

After the user-specified duration, the application will output the results as human-readable, space-delimited log files. Comma-delimited output is also available via a flag to irobot_benchmark.

Topologies

Multiple topologies are provided in the topology folder. Two examples are Sierra Nevada and Mont Blanc. Sierra Nevada is a light 10-node system, while Mont Blanc is a heavier, more complex 20-node system.

Usage

Follow the instructions for building the performance_test framework.

First, source the environment:

source performances_ws/install/local_setup.bash

Example run:

cd performances_ws/install/lib/irobot_benchmark
./irobot_benchmark topology/sierra_nevada.json -t 60 --ipc on

This runs Sierra Nevada for 60 seconds with intra-process communication (IPC) enabled. For more options, run ./irobot_benchmark --help.

Output

After running the application, a folder will be created in the current working directory, containing four files:

  • latency_all.txt
  • latency_total.txt
  • resources.txt
  • events.txt

Benchmark results

The following are sample files obtained by running Sierra Nevada on a Raspberry Pi 3.

latency_all.txt:

node           topic          size[b]   received[#]    late[#]   too_late[#]    lost[#]   mean[us]  sd[us]    min[us]   max[us]   freq[hz]  duration[s]
lyon           amazon         36        12001          11        0              0         602       145       345       4300      100       120
hamburg        danube         8         12001          15        0              0         796       233       362       5722      100       120
hamburg        ganges         16        12001          10        0              0         557       119       302       4729      100       120
hamburg        nile           16        12001          18        0              0         658       206       300       5258      100       120
hamburg        tigris         16        12000          17        0              0         736       225       310       5994      100       120
osaka          parana         12        12001          32        0              0         636       236       346       4343      100       120
mandalay       danube         8         12001          16        0              0         791       189       418       6991      100       120
mandalay       salween        48        1201           1         0              0         663       297       391       6911      10        120
ponce          danube         8         12001          15        0              0         882       203       437       7270      100       120
ponce          missouri       10000     1201           0         0              0         881       245       434       3664      10        120
ponce          volga          8         241            0         0              0         954       586       413       4010      2         120
barcelona      mekong         100       241            0         0              0         844       297       425       2074      2         120
georgetown     lena           50        1201           1         0              0         707       302       368       8392      10        120
geneva         congo          16        1201           1         0              0         691       298       353       7218      10        120
geneva         danube         8         12001          26        0              0         1008      227       480       7025      100       120
geneva         parana         12        12001          40        0              0         760       275       368       4351      100       120
arequipa       arkansas       16        1201           1         2              0         810       1079      379       37064     10        120
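Because the log is whitespace-delimited with a single header row, it is easy to post-process. The snippet below is a small sketch (not part of the package) that parses rows like the ones above into dictionaries, assuming the layout shown:

```python
def parse_latency_log(text):
    """Parse a whitespace-delimited latency_all.txt into a list of dicts."""
    lines = [ln for ln in text.strip().splitlines() if ln.strip()]
    header = lines[0].split()
    rows = []
    for ln in lines[1:]:
        row = dict(zip(header, ln.split()))
        for key in header[2:]:  # every column after node and topic is numeric
            row[key] = float(row[key])
        rows.append(row)
    return rows

sample = """\
node   topic   size[b]  received[#]  mean[us]
lyon   amazon  36       12001        602
"""
rows = parse_latency_log(sample)
print(rows[0]["mean[us]"])  # 602.0
```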

latency_total.txt:

received[#]    mean[us]  late[#]   late[%]   too_late[#]    too_late[%]    lost[#]   lost[%]
126496         744       204       0.1613    2              0.001581       0         0

Messages are classified according to their latency:

  • A message is classified as too_late when its latency is greater than min(period, 50ms), where period is the publishing period of that particular topic.
  • A message is classified as late if it’s not classified as too_late but its latency is greater than min(0.2*period, 5ms).
  • The idea is that a real system could still work with a few late messages but not too_late messages.
  • Note that there are CLI options to change these thresholds (for more info: ./irobot_benchmark --help).
  • A lost message is a message that never arrived.
    • A lost message is detected when the subscriber receives a message with a tracking number greater than the one expected.
    • The assumption here is that the messages always arrive in chronological order, i.e., a message A sent before a message B will either arrive before B or get lost, but will never arrive after B.
  • The rest of the messages are classified as on_time.
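The default thresholds above translate directly into code. This is a small sketch of the classification logic as described, not the package's actual implementation; all values are in microseconds:

```python
def classify(latency_us, period_us):
    """Classify a received message as on_time, late, or too_late."""
    too_late_threshold = min(period_us, 50_000)   # min(period, 50 ms)
    late_threshold = min(0.2 * period_us, 5_000)  # min(0.2*period, 5 ms)
    if latency_us > too_late_threshold:
        return "too_late"
    if latency_us > late_threshold:
        return "late"
    return "on_time"

# A 100 Hz topic has a 10 ms (10,000 us) period, so late kicks in above 2 ms.
print(classify(600, 10_000))     # on_time
print(classify(3_000, 10_000))   # late
print(classify(12_000, 10_000))  # too_late
```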

[The original README includes an ASCII diagram titled "Message classifications by their latency", illustrating the on_time/late/too_late thresholds; it did not survive extraction.]


CHANGELOG
No CHANGELOG found.

Launch files

No launch files found.

Messages

No message files found.

Services

No service files found.

Plugins

No plugins found.

Recent questions tagged irobot_benchmark at Robotics Stack Exchange

No version for distro jazzy showing github. Known supported distros are highlighted in the buttons above.

Package Summary

Tags No category tags.
Version 0.1.0
License BSD 3.0
Build type AMENT_CMAKE
Use RECOMMENDED

Repository Summary

Description Framework to evaluate peformance of ROS 2
Checkout URI https://github.com/irobot-ros/ros2-performance.git
VCS Type git
VCS Version rolling
Last Updated 2025-04-10
Dev Status UNKNOWN
Released UNRELEASED
Tags benchmark performance cpp ros2
Contributing Help Wanted (-)
Good First Issues (-)
Pull Requests to Review (-)

Package Description

Benchmark applications to test ros2 performance

Additional Links

No additional links.

Maintainers

  • Juan Oxoby

Authors

  • Juan Oxoby

Benchmark Application

This folder contains a benchmark application to test the performance of a ROS2 system.

To run the benchmark, supply a .json topology file, specifying a complete ROS2 system, to irobot_benchmark. The application will load the complete ROS2 system from the topology file and will begin passing messages between the different nodes.

For the duration of the test, statistical data will be collected, the usage of resources (CPU utilization and RAM consumption) and message latencies.

After the user-specified duration of time, the application will output the results as human-readable, space-delimited log files. Comma-delimited output is also available, specified via a flag to irobot_benchmark

Topologies

Multiple topologies are provided in the topology folder. Two examples are Sierra Nevada and Mont Blanc. Sierra Nevada is light 10-node system while Mont Blanc is a heavier and more complex 20-node system.

Usage

Follow the instructions for building the performance_test framework.

First, source the environment:

source performances_ws/install/local_setup.bash

Example run:

cd performances_ws/install/lib/irobot_benchmark
./irobot_benchmark topology/sierra_nevada.json -t 60 --ipc on

This will run Sierra Nevada for 60 seconds and with Intra-Process-Communication activated. For more options, run ./irobot_benchmark --help.

Output

After running the application, a folder will be created in the current working directory along with four different files inside it:

  • latency_all.txt
  • latency_total.txt
  • resources.txt
  • events.txt

Benchmark results

The following are sample files that have been obtained running Sierra Nevada on a RaspberryPi 3.

latency_all.txt:

node           topic          size[b]   received[#]    late[#]   too_late[#]    lost[#]   mean[us]  sd[us]    min[us]   max[us]   freq[hz]  duration[s]
lyon           amazon         36        12001          11        0              0         602       145       345       4300      100       120
hamburg        danube         8         12001          15        0              0         796       233       362       5722      100       120
hamburg        ganges         16        12001          10        0              0         557       119       302       4729      100       120
hamburg        nile           16        12001          18        0              0         658       206       300       5258      100       120
hamburg        tigris         16        12000          17        0              0         736       225       310       5994      100       120
osaka          parana         12        12001          32        0              0         636       236       346       4343      100       120
mandalay       danube         8         12001          16        0              0         791       189       418       6991      100       120
mandalay       salween        48        1201           1         0              0         663       297       391       6911      10        120
ponce          danube         8         12001          15        0              0         882       203       437       7270      100       120
ponce          missouri       10000     1201           0         0              0         881       245       434       3664      10        120
ponce          volga          8         241            0         0              0         954       586       413       4010      2         120
barcelona      mekong         100       241            0         0              0         844       297       425       2074      2         120
georgetown     lena           50        1201           1         0              0         707       302       368       8392      10        120
geneva         congo          16        1201           1         0              0         691       298       353       7218      10        120
geneva         danube         8         12001          26        0              0         1008      227       480       7025      100       120
geneva         parana         12        12001          40        0              0         760       275       368       4351      100       120
arequipa       arkansas       16        1201           1         2              0         810       1079      379       37064     10        120

latency_total.txt:

received[#]    mean[us]  late[#]   late[%]   too_late[#]    too_late[%]    lost[#]   lost[%]
126496         744       204       0.1613    2              0.001581       0         0

There are different message classifications depending on their latency.

  • A message is classified as too_late when its latency is greater than min(period, 50ms), where period is the publishing period of that particular topic.
  • A message is classified as late if it’s not classified as too_late but its latency is greater than min(0.2*period, 5ms).
  • The idea is that a real system could still work with a few late messages but not too_late messages.
  • Note that there are cli options to change these thresholds (for more info: ./irobot_benchmark --help).
  • A lost message is a message that never arrived.
    • A lost message is detected when the subscriber receives a message with a tracking number greater than the one expected.
    • The assumption here is that the messages always arrive in chronological order, i.e., a message A sent before a message B will either arrive before B or get lost, but will never arrive after B.
  • The rest of the messages are classified as on_time.

``` Message classifications by their latency

    • + | | | | | | | | | | | | | | | +——————————-+——————————-+

File truncated at 100 lines see the full file

CHANGELOG
No CHANGELOG found.

Launch files

No launch files found

Messages

No message files found.

Services

No service files found

Plugins

No plugins found.

Recent questions tagged irobot_benchmark at Robotics Stack Exchange

No version for distro kilted showing github. Known supported distros are highlighted in the buttons above.

Package Summary

Tags No category tags.
Version 0.1.0
License BSD 3.0
Build type AMENT_CMAKE
Use RECOMMENDED

Repository Summary

Description Framework to evaluate peformance of ROS 2
Checkout URI https://github.com/irobot-ros/ros2-performance.git
VCS Type git
VCS Version rolling
Last Updated 2025-04-10
Dev Status UNKNOWN
Released UNRELEASED
Tags benchmark performance cpp ros2
Contributing Help Wanted (-)
Good First Issues (-)
Pull Requests to Review (-)

Package Description

Benchmark applications to test ros2 performance

Additional Links

No additional links.

Maintainers

  • Juan Oxoby

Authors

  • Juan Oxoby

Benchmark Application

This folder contains a benchmark application to test the performance of a ROS2 system.

To run the benchmark, supply a .json topology file, specifying a complete ROS2 system, to irobot_benchmark. The application will load the complete ROS2 system from the topology file and will begin passing messages between the different nodes.

For the duration of the test, statistical data will be collected, the usage of resources (CPU utilization and RAM consumption) and message latencies.

After the user-specified duration of time, the application will output the results as human-readable, space-delimited log files. Comma-delimited output is also available, specified via a flag to irobot_benchmark

Topologies

Multiple topologies are provided in the topology folder. Two examples are Sierra Nevada and Mont Blanc. Sierra Nevada is light 10-node system while Mont Blanc is a heavier and more complex 20-node system.

Usage

Follow the instructions for building the performance_test framework.

First, source the environment:

source performances_ws/install/local_setup.bash

Example run:

cd performances_ws/install/lib/irobot_benchmark
./irobot_benchmark topology/sierra_nevada.json -t 60 --ipc on

This will run Sierra Nevada for 60 seconds and with Intra-Process-Communication activated. For more options, run ./irobot_benchmark --help.

Output

After running the application, a folder will be created in the current working directory along with four different files inside it:

  • latency_all.txt
  • latency_total.txt
  • resources.txt
  • events.txt

Benchmark results

The following are sample files that have been obtained running Sierra Nevada on a RaspberryPi 3.

latency_all.txt:

node           topic          size[b]   received[#]    late[#]   too_late[#]    lost[#]   mean[us]  sd[us]    min[us]   max[us]   freq[hz]  duration[s]
lyon           amazon         36        12001          11        0              0         602       145       345       4300      100       120
hamburg        danube         8         12001          15        0              0         796       233       362       5722      100       120
hamburg        ganges         16        12001          10        0              0         557       119       302       4729      100       120
hamburg        nile           16        12001          18        0              0         658       206       300       5258      100       120
hamburg        tigris         16        12000          17        0              0         736       225       310       5994      100       120
osaka          parana         12        12001          32        0              0         636       236       346       4343      100       120
mandalay       danube         8         12001          16        0              0         791       189       418       6991      100       120
mandalay       salween        48        1201           1         0              0         663       297       391       6911      10        120
ponce          danube         8         12001          15        0              0         882       203       437       7270      100       120
ponce          missouri       10000     1201           0         0              0         881       245       434       3664      10        120
ponce          volga          8         241            0         0              0         954       586       413       4010      2         120
barcelona      mekong         100       241            0         0              0         844       297       425       2074      2         120
georgetown     lena           50        1201           1         0              0         707       302       368       8392      10        120
geneva         congo          16        1201           1         0              0         691       298       353       7218      10        120
geneva         danube         8         12001          26        0              0         1008      227       480       7025      100       120
geneva         parana         12        12001          40        0              0         760       275       368       4351      100       120
arequipa       arkansas       16        1201           1         2              0         810       1079      379       37064     10        120

latency_total.txt:

received[#]    mean[us]  late[#]   late[%]   too_late[#]    too_late[%]    lost[#]   lost[%]
126496         744       204       0.1613    2              0.001581       0         0

There are different message classifications depending on their latency.

  • A message is classified as too_late when its latency is greater than min(period, 50ms), where period is the publishing period of that particular topic.
  • A message is classified as late if it’s not classified as too_late but its latency is greater than min(0.2*period, 5ms).
  • The idea is that a real system could still work with a few late messages but not too_late messages.
  • Note that there are cli options to change these thresholds (for more info: ./irobot_benchmark --help).
  • A lost message is a message that never arrived.
    • A lost message is detected when the subscriber receives a message with a tracking number greater than the one expected.
    • The assumption here is that the messages always arrive in chronological order, i.e., a message A sent before a message B will either arrive before B or get lost, but will never arrive after B.
  • The rest of the messages are classified as on_time.

``` Message classifications by their latency

    • + | | | | | | | | | | | | | | | +——————————-+——————————-+

File truncated at 100 lines see the full file

CHANGELOG
No CHANGELOG found.

Launch files

No launch files found

Messages

No message files found.

Services

No service files found

Plugins

No plugins found.

Recent questions tagged irobot_benchmark at Robotics Stack Exchange

No version for distro rolling showing github. Known supported distros are highlighted in the buttons above.

Package Summary

Tags No category tags.
Version 0.1.0
License BSD 3.0
Build type AMENT_CMAKE
Use RECOMMENDED

Repository Summary

Description Framework to evaluate peformance of ROS 2
Checkout URI https://github.com/irobot-ros/ros2-performance.git
VCS Type git
VCS Version rolling
Last Updated 2025-04-10
Dev Status UNKNOWN
Released UNRELEASED
Tags benchmark performance cpp ros2
Contributing Help Wanted (-)
Good First Issues (-)
Pull Requests to Review (-)

Package Description

Benchmark applications to test ros2 performance

Additional Links

No additional links.

Maintainers

  • Juan Oxoby

Authors

  • Juan Oxoby

Benchmark Application

This folder contains a benchmark application to test the performance of a ROS2 system.

To run the benchmark, supply a .json topology file, specifying a complete ROS2 system, to irobot_benchmark. The application will load the complete ROS2 system from the topology file and will begin passing messages between the different nodes.

For the duration of the test, statistical data will be collected, the usage of resources (CPU utilization and RAM consumption) and message latencies.

After the user-specified duration of time, the application will output the results as human-readable, space-delimited log files. Comma-delimited output is also available, specified via a flag to irobot_benchmark

Topologies

Multiple topologies are provided in the topology folder. Two examples are Sierra Nevada and Mont Blanc. Sierra Nevada is light 10-node system while Mont Blanc is a heavier and more complex 20-node system.

Usage

Follow the instructions for building the performance_test framework.

First, source the environment:

source performances_ws/install/local_setup.bash

Example run:

cd performances_ws/install/lib/irobot_benchmark
./irobot_benchmark topology/sierra_nevada.json -t 60 --ipc on

This will run Sierra Nevada for 60 seconds and with Intra-Process-Communication activated. For more options, run ./irobot_benchmark --help.

Output

After running the application, a folder will be created in the current working directory along with four different files inside it:

  • latency_all.txt
  • latency_total.txt
  • resources.txt
  • events.txt

Benchmark results

The following are sample files that have been obtained running Sierra Nevada on a RaspberryPi 3.

latency_all.txt:

node           topic          size[b]   received[#]    late[#]   too_late[#]    lost[#]   mean[us]  sd[us]    min[us]   max[us]   freq[hz]  duration[s]
lyon           amazon         36        12001          11        0              0         602       145       345       4300      100       120
hamburg        danube         8         12001          15        0              0         796       233       362       5722      100       120
hamburg        ganges         16        12001          10        0              0         557       119       302       4729      100       120
hamburg        nile           16        12001          18        0              0         658       206       300       5258      100       120
hamburg        tigris         16        12000          17        0              0         736       225       310       5994      100       120
osaka          parana         12        12001          32        0              0         636       236       346       4343      100       120
mandalay       danube         8         12001          16        0              0         791       189       418       6991      100       120
mandalay       salween        48        1201           1         0              0         663       297       391       6911      10        120
ponce          danube         8         12001          15        0              0         882       203       437       7270      100       120
ponce          missouri       10000     1201           0         0              0         881       245       434       3664      10        120
ponce          volga          8         241            0         0              0         954       586       413       4010      2         120
barcelona      mekong         100       241            0         0              0         844       297       425       2074      2         120
georgetown     lena           50        1201           1         0              0         707       302       368       8392      10        120
geneva         congo          16        1201           1         0              0         691       298       353       7218      10        120
geneva         danube         8         12001          26        0              0         1008      227       480       7025      100       120
geneva         parana         12        12001          40        0              0         760       275       368       4351      100       120
arequipa       arkansas       16        1201           1         2              0         810       1079      379       37064     10        120

latency_total.txt:

received[#]    mean[us]  late[#]   late[%]   too_late[#]    too_late[%]    lost[#]   lost[%]
126496         744       204       0.1613    2              0.001581       0         0

There are different message classifications depending on their latency.

  • A message is classified as too_late when its latency is greater than min(period, 50ms), where period is the publishing period of that particular topic.
  • A message is classified as late if it’s not classified as too_late but its latency is greater than min(0.2*period, 5ms).
  • The idea is that a real system could still work with a few late messages but not too_late messages.
  • Note that there are cli options to change these thresholds (for more info: ./irobot_benchmark --help).
  • A lost message is a message that never arrived.
    • A lost message is detected when the subscriber receives a message with a tracking number greater than the one expected.
    • The assumption here is that the messages always arrive in chronological order, i.e., a message A sent before a message B will either arrive before B or get lost, but will never arrive after B.
  • The rest of the messages are classified as on_time.

``` Message classifications by their latency

    • + | | | | | | | | | | | | | | | +——————————-+——————————-+

File truncated at 100 lines see the full file

CHANGELOG
No CHANGELOG found.

Launch files

No launch files found

Messages

No message files found.

Services

No service files found

Plugins

No plugins found.

Recent questions tagged irobot_benchmark at Robotics Stack Exchange

Package Summary

Tags No category tags.
Version 0.1.0
License BSD 3.0
Build type AMENT_CMAKE
Use RECOMMENDED

Repository Summary

Description Framework to evaluate peformance of ROS 2
Checkout URI https://github.com/irobot-ros/ros2-performance.git
VCS Type git
VCS Version rolling
Last Updated 2025-04-10
Dev Status UNKNOWN
Released UNRELEASED
Tags benchmark performance cpp ros2
Contributing Help Wanted (-)
Good First Issues (-)
Pull Requests to Review (-)

Package Description

Benchmark applications to test ros2 performance

Additional Links

No additional links.

Maintainers

  • Juan Oxoby

Authors

  • Juan Oxoby

Benchmark Application

This folder contains a benchmark application to test the performance of a ROS2 system.

To run the benchmark, supply a .json topology file, specifying a complete ROS2 system, to irobot_benchmark. The application will load the complete ROS2 system from the topology file and will begin passing messages between the different nodes.

For the duration of the test, statistical data will be collected, the usage of resources (CPU utilization and RAM consumption) and message latencies.

After the user-specified duration of time, the application will output the results as human-readable, space-delimited log files. Comma-delimited output is also available, specified via a flag to irobot_benchmark

Topologies

Multiple topologies are provided in the topology folder. Two examples are Sierra Nevada and Mont Blanc. Sierra Nevada is light 10-node system while Mont Blanc is a heavier and more complex 20-node system.

Usage

Follow the instructions for building the performance_test framework.

First, source the environment:

source performances_ws/install/local_setup.bash

Example run:

cd performances_ws/install/lib/irobot_benchmark
./irobot_benchmark topology/sierra_nevada.json -t 60 --ipc on

This will run Sierra Nevada for 60 seconds and with Intra-Process-Communication activated. For more options, run ./irobot_benchmark --help.

Output

After running the application, a folder will be created in the current working directory along with four different files inside it:

  • latency_all.txt
  • latency_total.txt
  • resources.txt
  • events.txt

Benchmark results

The following are sample files that have been obtained running Sierra Nevada on a RaspberryPi 3.

latency_all.txt:

node           topic          size[b]   received[#]    late[#]   too_late[#]    lost[#]   mean[us]  sd[us]    min[us]   max[us]   freq[hz]  duration[s]
lyon           amazon         36        12001          11        0              0         602       145       345       4300      100       120
hamburg        danube         8         12001          15        0              0         796       233       362       5722      100       120
hamburg        ganges         16        12001          10        0              0         557       119       302       4729      100       120
hamburg        nile           16        12001          18        0              0         658       206       300       5258      100       120
hamburg        tigris         16        12000          17        0              0         736       225       310       5994      100       120
osaka          parana         12        12001          32        0              0         636       236       346       4343      100       120
mandalay       danube         8         12001          16        0              0         791       189       418       6991      100       120
mandalay       salween        48        1201           1         0              0         663       297       391       6911      10        120
ponce          danube         8         12001          15        0              0         882       203       437       7270      100       120
ponce          missouri       10000     1201           0         0              0         881       245       434       3664      10        120
ponce          volga          8         241            0         0              0         954       586       413       4010      2         120
barcelona      mekong         100       241            0         0              0         844       297       425       2074      2         120
georgetown     lena           50        1201           1         0              0         707       302       368       8392      10        120
geneva         congo          16        1201           1         0              0         691       298       353       7218      10        120
geneva         danube         8         12001          26        0              0         1008      227       480       7025      100       120
geneva         parana         12        12001          40        0              0         760       275       368       4351      100       120
arequipa       arkansas       16        1201           1         2              0         810       1079      379       37064     10        120

latency_total.txt:

received[#]    mean[us]  late[#]   late[%]   too_late[#]    too_late[%]    lost[#]   lost[%]
126496         744       204       0.1613    2              0.001581       0         0

Messages are classified into different categories depending on their latency:

  • A message is classified as too_late when its latency is greater than min(period, 50ms), where period is the publishing period of that particular topic.
  • A message is classified as late if it is not too_late but its latency is greater than min(0.2 * period, 5ms).
  • The idea is that a real system could still work with a few late messages, but not with too_late messages.
  • Note that there are CLI options to change these thresholds (for more info, run ./irobot_benchmark --help).
  • A lost message is a message that never arrived.
    • A lost message is detected when the subscriber receives a message with a tracking number greater than the one expected.
    • The assumption is that messages always arrive in chronological order: a message A sent before a message B will either arrive before B or get lost, but will never arrive after B.
  • All remaining messages are classified as on_time.
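The rules above can be sketched in a few lines. This is an illustrative reimplementation of the default thresholds, not the actual irobot_benchmark source; the function names are hypothetical:

```python
# Default thresholds as described in the README bullets.
LATE_ABS_US = 5_000        # 5 ms absolute cap for "late"
TOO_LATE_ABS_US = 50_000   # 50 ms absolute cap for "too_late"
LATE_FRACTION = 0.2        # 20% of the topic's publishing period

def classify(latency_us, period_us):
    """Classify a received message by its latency (all values in microseconds)."""
    if latency_us > min(period_us, TOO_LATE_ABS_US):
        return "too_late"
    if latency_us > min(LATE_FRACTION * period_us, LATE_ABS_US):
        return "late"
    return "on_time"

def detect_lost(expected_tracking, received_tracking):
    """Messages are assumed to arrive in order, so receiving a tracking
    number greater than expected means the skipped messages were lost."""
    return max(0, received_tracking - expected_tracking)

# Example: a 100 Hz topic has a 10 ms (10_000 us) period, so
# late threshold     = min(0.2 * 10_000, 5_000) = 2_000 us
# too_late threshold = min(10_000, 50_000)      = 10_000 us
print(classify(744, 10_000))     # → on_time
print(classify(3_000, 10_000))   # → late
print(classify(12_000, 10_000))  # → too_late
print(detect_lost(10, 12))       # → 2 messages lost
```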

[ASCII diagram: message classifications by their latency — garbled in extraction, not reproduced here]

File truncated at 100 lines; see the full file.

CHANGELOG
No CHANGELOG found.

Launch files

No launch files found.

Messages

No message files found.

Services

No service files found.

Plugins

No plugins found.




