autoware_reference_system package from the reference-system repo (autoware_reference_system, reference_interfaces, reference_system)
Package Summary
Tags | No category tags. |
Version | 1.1.0 |
License | Apache License 2.0 |
Build type | AMENT_CMAKE |
Use | RECOMMENDED |
Repository Summary
Description | A reference system that simulates real-world systems in order to more fairly compare various configurations of executors and other settings |
Checkout URI | https://github.com/ros-realtime/reference-system.git |
VCS Type | git |
VCS Version | main |
Last Updated | 2023-09-17 |
Dev Status | UNKNOWN |
Released | UNRELEASED |
Tags | middleware cpp ros2 |
Maintainers
- Evan Flynn
Authors
- Christian Eltzschig
Profiling executors using the Autoware reference system
Introduction
This tutorial incorporates the open-sourced autoware_reference_system and can be used to fairly and repeatably test the performance of the various executors available within the greater ROS 2 community.
The example simulates a real-world scenario, Autoware.Auto and its LiDAR data pipeline, that can be used to evaluate executor performance. To this end, the example comes with built-in performance measurements that make it easy to compare executor implementations in a repeatable way.
Quick Start
Some tools are provided to automate and standardize the report generation process for this autoware_reference_system.
First, install and build the dependencies:
python3 -m pip install psrecord bokeh # optional dependency: networkx
cd workspace
colcon build --packages-up-to autoware_reference_system
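Once the workspace is built and sourced, the individual reference-system executables can also be launched directly, outside the benchmark harness. This is a sketch; the executable name is taken from the benchmark example later in this README, so substitute the executor variant you actually want to profile:

```shell
# Source the workspace overlay first (path assumes the build above was run
# from the workspace root)
source install/setup.bash

# Run one executor configuration directly; stop it with Ctrl+C
ros2 run autoware_reference_system autoware_default_multithreaded
```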
The easiest way to run the benchmarks is through the ctest interface. Rebuild the package with the RUN_BENCHMARK option and run colcon test:
colcon build --packages-select autoware_reference_system \
--cmake-force-configure --cmake-args -DRUN_BENCHMARK=ON
colcon test --packages-select autoware_reference_system
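To watch the benchmark output while the tests execute, colcon's standard console_direct+ event handler can be added (a generic colcon option, not specific to this package):

```shell
# Stream test output directly to the console instead of buffering it
colcon test --packages-select autoware_reference_system \
  --event-handlers console_direct+
```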
After the tests have run, reports can be found as .html files in $ROS_HOME/benchmark_autoware_reference_system/<timestamp> ($ROS_HOME defaults to ~/.ros).
The symlink $ROS_HOME/benchmark_autoware_reference_system/latest always points to the latest results. Detailed reports for individual test runs can be found in subdirectories of the form <duration>/<middleware>/<executable>.
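The generated reports can be listed through the latest symlink, honoring the $ROS_HOME fallback described above:

```shell
# ROS_HOME falls back to ~/.ros when unset
ROS_HOME="${ROS_HOME:-$HOME/.ros}"

# List the summary reports and per-configuration subdirectories
ls "$ROS_HOME/benchmark_autoware_reference_system/latest"
```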
More details on all the supported CMake arguments can be found in the supported CMake arguments section below.
By default, the tests use the default ROS 2 middleware set for the system. To run the tests for all available RMWs, add the -DALL_RMWS=ON CMake argument to the colcon build step.
The test duration can be configured through the RUN_TIMES variable in CMakeLists.txt. A separate set of tests is created for each chosen runtime.
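Combining the options above, a build that enables the benchmarks for every available RMW could look like the following sketch (both flags are the ones described in this README):

```shell
# Force a CMake reconfigure so the new cache options take effect
colcon build --packages-select autoware_reference_system \
  --cmake-force-configure \
  --cmake-args -DRUN_BENCHMARK=ON -DALL_RMWS=ON

# Then run the benchmark test suite
colcon test --packages-select autoware_reference_system
```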
Test Results and Reports
Reports are automatically generated depending on which tests are run. The main test directory ($ROS_HOME/benchmark_autoware_reference_system/latest by default) contains the summary reports, which aggregate metrics across all tested configurations.
Below this main test directory, each tested configuration has a subdirectory of the form <duration>/<middleware>/<executable name>. This directory contains the raw trace data and additional per-test reports in .html format.
Tweaking the benchmark setup
To get more fine-grained control over the benchmarking process, invoke the benchmark script directly. To get a summary of the available options, call:
python3 $(ros2 pkg prefix --share autoware_reference_system)/scripts/benchmark.py --help
As an example, to run all benchmarks whose names start with autoware_ (this includes the autoware_default_multithreaded benchmark) for 15 seconds, run:
python3 $(ros2 pkg prefix --share autoware_reference_system)/scripts/benchmark.py \
15 'autoware_*'
The --logdir option can be used to store the measurement results and reports in a custom directory, without adding a timestamp. Note that this may overwrite existing measurement results in the same directory.
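For example, to write the results into a fixed directory (the directory name here is hypothetical; --logdir itself is listed by the script's --help):

```shell
# Run the 15-second autoware_* benchmarks, writing results to ./benchmark_results
# instead of a timestamped directory under $ROS_HOME
python3 $(ros2 pkg prefix --share autoware_reference_system)/scripts/benchmark.py \
  15 'autoware_*' --logdir ./benchmark_results
```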
Key Performance Indicators (KPIs)
The performance measurement evaluates the executor using the following metrics. In general, a lower value is better for each KPI.
- CPU utilization: In general, lower CPU utilization is better, since it enables you to choose a smaller CPU or to host more functionality on a larger CPU.
- Memory utilization: In general, lower memory utilization is better, since it enables you to choose smaller memory or leaves more space for other things.
- Number of dropped sensor samples in transform nodes: The nodes in the reference system always use the most recent sensor data (i.e., a history depth of 1).
(File truncated at 100 lines; see the full file.)
Changelog for package autoware_reference_system
v1.1.0
- Add Iron ROS 2 distribution
- Remove EoL distributions Foxy and Galactic
- Remove legacy hack for rosidl_generator_py
v1.0.0
- Add first changelog
- Bump version of reference_system packages to 1.0.0
- Skip callback group exe if distro is Foxy
- Update reference_system docs, logic in various places
- Migrate benchmark scripts to python
- clean up reporting code, adjust title and label sizes for figures in reports
- [91] add unit and integration tests for the reference system, fix some bugs found by tests
- Added note on super user privileges.
- Adding autoware_default_prioritized and autoware_default_cbg only to test set if super user rights available. Signed-off-by: Ralph Lange <ralph.lange@de.bosch.com>
- Fixed uncrustify finding. Signed-off-by: Ralph Lange <ralph.lange@de.bosch.com>
- Do not exit but print warning if thread prioritization fails. Signed-off-by: Ralph Lange <ralph.lange@de.bosch.com>
- add skip_tracing cmake arg to readme
- update memory individual report text sizes as well
- increase label sizes for figures
- Under Foxy, exclude executable using callback-group interface of Executor. Signed-off-by: Ralph Lange <ralph.lange@de.bosch.com>
- Added executables for prioritized and callback-group-level Executor. Signed-off-by: Ralph Lange <ralph.lange@de.bosch.com>
- default to not run benchmark tests
- Make cpu benchmark timings consistent
- switch to use cmake options
- patch version bump to include mutex for prints
- remove extra line
- fix flake8 errors
- return none not no
- handle case where log file line is incomplete
- initial release for each package
- sort axis labels along with data for latency plots
- only run tests for 5s by default
- update dependency list, add warnings to test section
- update node graph
- clean up reports
- add behavior planner jitter
- use candlesticks to show min, max, and std dev
- add std trace type, generate summary report
- fix dropped message count for now
- apply feedback from pr
- fix flake8 errors
- create node graph from list of tuples
- fix flake8 errors
- rebase, refactor report gen, fix dropped msg count
- clean up report generation code
- add prototype latency figure to report
(File truncated at 100 lines; see the full file.)
![]() |
autoware_reference_system package from reference-system repoautoware_reference_system reference_interfaces reference_system |
ROS Distro
|
Package Summary
Tags | No category tags. |
Version | 1.1.0 |
License | Apache License 2.0 |
Build type | AMENT_CMAKE |
Use | RECOMMENDED |
Repository Summary
Description | A reference system that simulates real-world systems in order to more fairly compare various configurations of executors and other settings |
Checkout URI | https://github.com/ros-realtime/reference-system.git |
VCS Type | git |
VCS Version | main |
Last Updated | 2023-09-17 |
Dev Status | UNKNOWN |
Released | UNRELEASED |
Tags | middleware cpp ros2 |
Contributing |
Help Wanted (-)
Good First Issues (-) Pull Requests to Review (-) |
Package Description
Additional Links
Maintainers
- Evan Flynn
Authors
- Christian Eltzschig
Profiling executors using the Autoware reference system
Introduction
This tutorial incorporates the open-sourced autoware_reference_system
and can be used to fairly and repeatably test
the performance of the various executors available within the greater ROS 2 community.
The example simulates a real world scenario, Autoware.Auto and its LiDAR data pipeline, that can be used to evaluate the performance of the executor. To this end, the example comes with built-in performance measurements that make it easy to compare the performance between executor implementations in a repeatable way.
Quick Start
Some tools are provided in order to automate and standardize the report generation process for this
autoware_reference_system
.
First, install and build the dependencies
python3 -m pip install psrecord bokeh # optional dependency: networkx
cd workspace
colcon build --packages-up-to autoware_reference_system
The easiest way to run the benchmarks is through the ctest
interface. Rebuild the package
with the RUN_BENCHMARK
option and run colcon test
:
colcon build --packages-select autoware_reference_system \
--cmake-force-configure --cmake-args -DRUN_BENCHMARK=ON
colcon test --packages-select autoware_reference_system
After the tests have run, reports can be found as .html
files in
$ROS_HOME/benchmark_autoware_reference_system/<timestamp>
($ROS_HOME
defaults to ~/.ros
).
The symlink $ROS_HOME/benchmark_autoware_reference_system/latest
always points to the latest
results. Detailed reports to individual test runs can be found in subdirectories of the form
<duration>/<middleware>/<executable>
.
More details on all the supported CMake arguments can be found in the supported CMake argument section below.
By default the tests uses the default ROS 2 middleware set for the system.
To run the tests for all available RMWs, add the
-DALL_RMWS=ON
CMake argument to the colcon build
step.
The test duration can be configured through the RUN_TIMES
variable in CMakelists.txt
.
A separate set of tests is created for each chosen runtime.
Test Results and Reports
Reports are automatically generated depending on which tests are run. The main test directory
($ROS_HOME/benchmark_autoware_reference_system/latest
by default) contains the summary
reports,
which aggregate metrics across all tested configurations.
Below this main test directory, each tested configuration has a subdirectory of the form
<duration>/<middleware>/<executable name>
. This directory contains the raw trace data and
additional per-test reports in .html
format.
Tweaking the benchmark setup
To get more fine-grained control over the benchmarking process invoke the benchmark script directly. To get a summary of the available options, call
python3 $(ros2 pkg prefix --share autoware_reference_system)/scripts/benchmark.py --help
As an example, to run all benchmarks starting with autoware_
and the
autoware_default_multithreaded
benchmark for 15 seconds run
python3 $(ros2 pkg prefix --share autoware_reference_system)/scripts/benchmark.py \
15 'autoware_*'
The --logdir
option can be used to store the measurement results and reports in a custom
directory, without adding a timestamp. Note that this may overwrite existing measurement
results in the same directory.
Key Performance Indicators (KPIs)
The performance measurement evaluates the executor using the following metrics. In general, the lowest value within each KPI is considered to be the better performance.
-
CPU utilization
- In general a lower CPU utilization is better since it enables you to choose a smaller CPU or have more functionality on a larger CPU for other things.
-
Memory utilization
- In general a lower memory utilization is better since it enables you to choose a smaller memory or have more space for other things
-
Number of dropped sensor samples in transform nodes
- The nodes in the reference system always use the most recent sensor data (i.e., use a history depth of 1)
File truncated at 100 lines see the full file
Changelog for package autoware_reference_system
v1.1.0
- Add Iron ROS 2 distribution
- Remove EoL distributions Foxy and Galactic
- Remove legacy hack for rosidl_geneartor_py
v1.0.0
-
Add first changelog
-
Bump version of reference_system packages to 1.0.0
-
Skip callback group exe if distro is Foxy
-
Update reference_system docs, logic in various places
-
Migrate benchmark scripts to python
-
clean up reporting code, adjust title and label sizes for figures in reports
-
[91] add unit and integration tests for the reference system, fix some bugs found by tests
-
Added note on super user privileges.
-
Adding autoware_default_prioritized and autoware_default_cbg only to test set if super user rights available. Signed-off-by: Ralph Lange <<ralph.lange@de.bosch.com>>
-
Fixed uncrustify finding. Signed-off-by: Ralph Lange <<ralph.lange@de.bosch.com>>
-
Do not exit but print warning if thread prioritization fails. Signed-off-by: Ralph Lange <<ralph.lange@de.bosch.com>>
-
add skip_tracing cmake arg to readme
-
update memory individual report text sizes as well
-
increase label sizes for figures
-
Under Foxy, exclude executable using callback-group interface of Executor. Signed-off-by: Ralph Lange <<ralph.lange@de.bosch.com>>
-
Added executables for prioritized and callback-group-level Executor. Signed-off-by: Ralph Lange <<ralph.lange@de.bosch.com>>
-
default to not run benchmark tests
-
Make cpu benchmark timings consistent
-
switch to use cmake options
-
patch version bump to include mutex for prints
-
remove extra line
-
fix flake8 errors
-
return none not no
-
handle case where log file line is incomplete
-
initial release for each package
-
sort axis labels along with data for latency plots
-
only run tests for 5s by default
-
update dependency list, add warnings to test section
-
update node graph
-
clean up reports
-
add behavior planner jitter
-
use candlesticks to show min, max, and std dev
-
add std trace type, generate summary report
-
fix dropped message count for now
-
apply feedback from pr
-
fix flake8 errors
-
create node graph from list of tuples
-
fix flake8 errors
-
rebase, refactor report gen, fix dropped msg count
-
clean up report generation code
-
add prototype latency figure to report
File truncated at 100 lines see the full file
Package Dependencies
System Dependencies
Dependant Packages
Launch files
Messages
Services
Plugins
Recent questions tagged autoware_reference_system at Robotics Stack Exchange
![]() |
autoware_reference_system package from reference-system repoautoware_reference_system reference_interfaces reference_system |
ROS Distro
|
Package Summary
Tags | No category tags. |
Version | 1.1.0 |
License | Apache License 2.0 |
Build type | AMENT_CMAKE |
Use | RECOMMENDED |
Repository Summary
Description | A reference system that simulates real-world systems in order to more fairly compare various configurations of executors and other settings |
Checkout URI | https://github.com/ros-realtime/reference-system.git |
VCS Type | git |
VCS Version | main |
Last Updated | 2023-09-17 |
Dev Status | UNKNOWN |
Released | UNRELEASED |
Tags | middleware cpp ros2 |
Contributing |
Help Wanted (-)
Good First Issues (-) Pull Requests to Review (-) |
Package Description
Additional Links
Maintainers
- Evan Flynn
Authors
- Christian Eltzschig
Profiling executors using the Autoware reference system
Introduction
This tutorial incorporates the open-sourced autoware_reference_system
and can be used to fairly and repeatably test
the performance of the various executors available within the greater ROS 2 community.
The example simulates a real world scenario, Autoware.Auto and its LiDAR data pipeline, that can be used to evaluate the performance of the executor. To this end, the example comes with built-in performance measurements that make it easy to compare the performance between executor implementations in a repeatable way.
Quick Start
Some tools are provided in order to automate and standardize the report generation process for this
autoware_reference_system
.
First, install and build the dependencies
python3 -m pip install psrecord bokeh # optional dependency: networkx
cd workspace
colcon build --packages-up-to autoware_reference_system
The easiest way to run the benchmarks is through the ctest
interface. Rebuild the package
with the RUN_BENCHMARK
option and run colcon test
:
colcon build --packages-select autoware_reference_system \
--cmake-force-configure --cmake-args -DRUN_BENCHMARK=ON
colcon test --packages-select autoware_reference_system
After the tests have run, reports can be found as .html
files in
$ROS_HOME/benchmark_autoware_reference_system/<timestamp>
($ROS_HOME
defaults to ~/.ros
).
The symlink $ROS_HOME/benchmark_autoware_reference_system/latest
always points to the latest
results. Detailed reports to individual test runs can be found in subdirectories of the form
<duration>/<middleware>/<executable>
.
More details on all the supported CMake arguments can be found in the supported CMake argument section below.
By default the tests uses the default ROS 2 middleware set for the system.
To run the tests for all available RMWs, add the
-DALL_RMWS=ON
CMake argument to the colcon build
step.
The test duration can be configured through the RUN_TIMES
variable in CMakelists.txt
.
A separate set of tests is created for each chosen runtime.
Test Results and Reports
Reports are automatically generated depending on which tests are run. The main test directory
($ROS_HOME/benchmark_autoware_reference_system/latest
by default) contains the summary
reports,
which aggregate metrics across all tested configurations.
Below this main test directory, each tested configuration has a subdirectory of the form
<duration>/<middleware>/<executable name>
. This directory contains the raw trace data and
additional per-test reports in .html
format.
Tweaking the benchmark setup
To get more fine-grained control over the benchmarking process invoke the benchmark script directly. To get a summary of the available options, call
python3 $(ros2 pkg prefix --share autoware_reference_system)/scripts/benchmark.py --help
As an example, to run all benchmarks starting with autoware_
and the
autoware_default_multithreaded
benchmark for 15 seconds run
python3 $(ros2 pkg prefix --share autoware_reference_system)/scripts/benchmark.py \
15 'autoware_*'
The --logdir
option can be used to store the measurement results and reports in a custom
directory, without adding a timestamp. Note that this may overwrite existing measurement
results in the same directory.
Key Performance Indicators (KPIs)
The performance measurement evaluates the executor using the following metrics. In general, the lowest value within each KPI is considered to be the better performance.
-
CPU utilization
- In general a lower CPU utilization is better since it enables you to choose a smaller CPU or have more functionality on a larger CPU for other things.
-
Memory utilization
- In general a lower memory utilization is better since it enables you to choose a smaller memory or have more space for other things
-
Number of dropped sensor samples in transform nodes
- The nodes in the reference system always use the most recent sensor data (i.e., use a history depth of 1)
File truncated at 100 lines see the full file
Changelog for package autoware_reference_system
v1.1.0
- Add Iron ROS 2 distribution
- Remove EoL distributions Foxy and Galactic
- Remove legacy hack for rosidl_geneartor_py
v1.0.0
-
Add first changelog
-
Bump version of reference_system packages to 1.0.0
-
Skip callback group exe if distro is Foxy
-
Update reference_system docs, logic in various places
-
Migrate benchmark scripts to python
-
clean up reporting code, adjust title and label sizes for figures in reports
-
[91] add unit and integration tests for the reference system, fix some bugs found by tests
-
Added note on super user privileges.
-
Adding autoware_default_prioritized and autoware_default_cbg only to test set if super user rights available. Signed-off-by: Ralph Lange <<ralph.lange@de.bosch.com>>
-
Fixed uncrustify finding. Signed-off-by: Ralph Lange <<ralph.lange@de.bosch.com>>
-
Do not exit but print warning if thread prioritization fails. Signed-off-by: Ralph Lange <<ralph.lange@de.bosch.com>>
-
add skip_tracing cmake arg to readme
-
update memory individual report text sizes as well
-
increase label sizes for figures
-
Under Foxy, exclude executable using callback-group interface of Executor. Signed-off-by: Ralph Lange <<ralph.lange@de.bosch.com>>
-
Added executables for prioritized and callback-group-level Executor. Signed-off-by: Ralph Lange <<ralph.lange@de.bosch.com>>
-
default to not run benchmark tests
-
Make cpu benchmark timings consistent
-
switch to use cmake options
-
patch version bump to include mutex for prints
-
remove extra line
-
fix flake8 errors
-
return none not no
-
handle case where log file line is incomplete
-
initial release for each package
-
sort axis labels along with data for latency plots
-
only run tests for 5s by default
-
update dependency list, add warnings to test section
-
update node graph
-
clean up reports
-
add behavior planner jitter
-
use candlesticks to show min, max, and std dev
-
add std trace type, generate summary report
-
fix dropped message count for now
-
apply feedback from pr
-
fix flake8 errors
-
create node graph from list of tuples
-
fix flake8 errors
-
rebase, refactor report gen, fix dropped msg count
-
clean up report generation code
-
add prototype latency figure to report
File truncated at 100 lines see the full file
Package Dependencies
System Dependencies
Dependant Packages
Launch files
Messages
Services
Plugins
Recent questions tagged autoware_reference_system at Robotics Stack Exchange
![]() |
autoware_reference_system package from reference-system repoautoware_reference_system reference_interfaces reference_system |
ROS Distro
|
Package Summary
Tags | No category tags. |
Version | 1.1.0 |
License | Apache License 2.0 |
Build type | AMENT_CMAKE |
Use | RECOMMENDED |
Repository Summary
Description | A reference system that simulates real-world systems in order to more fairly compare various configurations of executors and other settings |
Checkout URI | https://github.com/ros-realtime/reference-system.git |
VCS Type | git |
VCS Version | main |
Last Updated | 2023-09-17 |
Dev Status | UNKNOWN |
Released | UNRELEASED |
Tags | middleware cpp ros2 |
Contributing |
Help Wanted (-)
Good First Issues (-) Pull Requests to Review (-) |
Package Description
Additional Links
Maintainers
- Evan Flynn
Authors
- Christian Eltzschig
Profiling executors using the Autoware reference system
Introduction
This tutorial incorporates the open-sourced autoware_reference_system
and can be used to fairly and repeatably test
the performance of the various executors available within the greater ROS 2 community.
The example simulates a real world scenario, Autoware.Auto and its LiDAR data pipeline, that can be used to evaluate the performance of the executor. To this end, the example comes with built-in performance measurements that make it easy to compare the performance between executor implementations in a repeatable way.
Quick Start
Some tools are provided in order to automate and standardize the report generation process for this
autoware_reference_system
.
First, install and build the dependencies
python3 -m pip install psrecord bokeh # optional dependency: networkx
cd workspace
colcon build --packages-up-to autoware_reference_system
The easiest way to run the benchmarks is through the ctest
interface. Rebuild the package
with the RUN_BENCHMARK
option and run colcon test
:
colcon build --packages-select autoware_reference_system \
--cmake-force-configure --cmake-args -DRUN_BENCHMARK=ON
colcon test --packages-select autoware_reference_system
After the tests have run, reports can be found as .html
files in
$ROS_HOME/benchmark_autoware_reference_system/<timestamp>
($ROS_HOME
defaults to ~/.ros
).
The symlink $ROS_HOME/benchmark_autoware_reference_system/latest
always points to the latest
results. Detailed reports to individual test runs can be found in subdirectories of the form
<duration>/<middleware>/<executable>
.
More details on all the supported CMake arguments can be found in the supported CMake argument section below.
By default the tests uses the default ROS 2 middleware set for the system.
To run the tests for all available RMWs, add the
-DALL_RMWS=ON
CMake argument to the colcon build
step.
The test duration can be configured through the RUN_TIMES
variable in CMakelists.txt
.
A separate set of tests is created for each chosen runtime.
Test Results and Reports
Reports are automatically generated depending on which tests are run. The main test directory
($ROS_HOME/benchmark_autoware_reference_system/latest
by default) contains the summary
reports,
which aggregate metrics across all tested configurations.
Below this main test directory, each tested configuration has a subdirectory of the form
<duration>/<middleware>/<executable name>
. This directory contains the raw trace data and
additional per-test reports in .html
format.
Tweaking the benchmark setup
To get more fine-grained control over the benchmarking process invoke the benchmark script directly. To get a summary of the available options, call
python3 $(ros2 pkg prefix --share autoware_reference_system)/scripts/benchmark.py --help
As an example, to run all benchmarks starting with autoware_
and the
autoware_default_multithreaded
benchmark for 15 seconds run
python3 $(ros2 pkg prefix --share autoware_reference_system)/scripts/benchmark.py \
15 'autoware_*'
The --logdir
option can be used to store the measurement results and reports in a custom
directory, without adding a timestamp. Note that this may overwrite existing measurement
results in the same directory.
Key Performance Indicators (KPIs)
The performance measurement evaluates the executor using the following metrics. In general, the lowest value within each KPI is considered to be the better performance.
-
CPU utilization
- In general a lower CPU utilization is better since it enables you to choose a smaller CPU or have more functionality on a larger CPU for other things.
-
Memory utilization
- In general a lower memory utilization is better since it enables you to choose a smaller memory or have more space for other things
-
Number of dropped sensor samples in transform nodes
- The nodes in the reference system always use the most recent sensor data (i.e., use a history depth of 1)
File truncated at 100 lines see the full file
Changelog for package autoware_reference_system
v1.1.0
- Add Iron ROS 2 distribution
- Remove EoL distributions Foxy and Galactic
- Remove legacy hack for rosidl_geneartor_py
v1.0.0
-
Add first changelog
-
Bump version of reference_system packages to 1.0.0
-
Skip callback group exe if distro is Foxy
-
Update reference_system docs, logic in various places
-
Migrate benchmark scripts to python
-
clean up reporting code, adjust title and label sizes for figures in reports
-
[91] add unit and integration tests for the reference system, fix some bugs found by tests
-
Added note on super user privileges.
-
Adding autoware_default_prioritized and autoware_default_cbg only to test set if super user rights available. Signed-off-by: Ralph Lange <<ralph.lange@de.bosch.com>>
-
Fixed uncrustify finding. Signed-off-by: Ralph Lange <<ralph.lange@de.bosch.com>>
-
Do not exit but print warning if thread prioritization fails. Signed-off-by: Ralph Lange <<ralph.lange@de.bosch.com>>
-
add skip_tracing cmake arg to readme
-
update memory individual report text sizes as well
-
increase label sizes for figures
-
Under Foxy, exclude executable using callback-group interface of Executor. Signed-off-by: Ralph Lange <<ralph.lange@de.bosch.com>>
-
Added executables for prioritized and callback-group-level Executor. Signed-off-by: Ralph Lange <<ralph.lange@de.bosch.com>>
-
default to not run benchmark tests
-
Make cpu benchmark timings consistent
-
switch to use cmake options
-
patch version bump to include mutex for prints
-
remove extra line
-
fix flake8 errors
-
return none not no
-
handle case where log file line is incomplete
-
initial release for each package
-
sort axis labels along with data for latency plots
-
only run tests for 5s by default
-
update dependency list, add warnings to test section
-
update node graph
-
clean up reports
-
add behavior planner jitter
-
use candlesticks to show min, max, and std dev
-
add std trace type, generate summary report
-
fix dropped message count for now
-
apply feedback from pr
-
fix flake8 errors
-
create node graph from list of tuples
-
fix flake8 errors
-
rebase, refactor report gen, fix dropped msg count
-
clean up report generation code
-
add prototype latency figure to report
File truncated at 100 lines see the full file
Package Dependencies
System Dependencies
Dependant Packages
Launch files
Messages
Services
Plugins
Recent questions tagged autoware_reference_system at Robotics Stack Exchange
![]() |
autoware_reference_system package from reference-system repoautoware_reference_system reference_interfaces reference_system |
ROS Distro
|
Package Summary
Tags | No category tags. |
Version | 1.1.0 |
License | Apache License 2.0 |
Build type | AMENT_CMAKE |
Use | RECOMMENDED |
Repository Summary
Description | A reference system that simulates real-world systems in order to more fairly compare various configurations of executors and other settings |
Checkout URI | https://github.com/ros-realtime/reference-system.git |
VCS Type | git |
VCS Version | main |
Last Updated | 2023-09-17 |
Dev Status | UNKNOWN |
Released | UNRELEASED |
Tags | middleware cpp ros2 |
Contributing |
Help Wanted (-)
Good First Issues (-) Pull Requests to Review (-) |
Package Description
Additional Links
Maintainers
- Evan Flynn
Authors
- Christian Eltzschig
Profiling executors using the Autoware reference system
Introduction
This tutorial incorporates the open-sourced autoware_reference_system
and can be used to fairly and repeatably test
the performance of the various executors available within the greater ROS 2 community.
The example simulates a real world scenario, Autoware.Auto and its LiDAR data pipeline, that can be used to evaluate the performance of the executor. To this end, the example comes with built-in performance measurements that make it easy to compare the performance between executor implementations in a repeatable way.
Quick Start
Some tools are provided in order to automate and standardize the report generation process for this
autoware_reference_system
.
First, install and build the dependencies
python3 -m pip install psrecord bokeh # optional dependency: networkx
cd workspace
colcon build --packages-up-to autoware_reference_system
The easiest way to run the benchmarks is through the ctest
interface. Rebuild the package
with the RUN_BENCHMARK
option and run colcon test
:
colcon build --packages-select autoware_reference_system \
--cmake-force-configure --cmake-args -DRUN_BENCHMARK=ON
colcon test --packages-select autoware_reference_system
After the tests have run, reports can be found as .html
files in
$ROS_HOME/benchmark_autoware_reference_system/<timestamp>
($ROS_HOME
defaults to ~/.ros
).
The symlink $ROS_HOME/benchmark_autoware_reference_system/latest
always points to the latest
results. Detailed reports to individual test runs can be found in subdirectories of the form
<duration>/<middleware>/<executable>
.
More details on all the supported CMake arguments can be found in the supported CMake arguments section below.

By default, the tests use the default ROS 2 middleware set for the system. To run the tests for all available RMWs, add the `-DALL_RMWS=ON` CMake argument to the `colcon build` step. The test duration can be configured through the `RUN_TIMES` variable in `CMakeLists.txt`. A separate set of tests is created for each chosen runtime.
Test Results and Reports
Reports are automatically generated depending on which tests are run. The main test directory (`$ROS_HOME/benchmark_autoware_reference_system/latest` by default) contains the summary reports, which aggregate metrics across all tested configurations. Below this main test directory, each tested configuration has a subdirectory of the form `<duration>/<middleware>/<executable name>`. This directory contains the raw trace data and additional per-test reports in `.html` format.
Tweaking the benchmark setup
To get more fine-grained control over the benchmarking process, invoke the benchmark script directly. To get a summary of the available options, call:

python3 $(ros2 pkg prefix --share autoware_reference_system)/scripts/benchmark.py --help

As an example, to run all benchmarks whose names start with `autoware_` (including the `autoware_default_multithreaded` benchmark) for 15 seconds, run:

python3 $(ros2 pkg prefix --share autoware_reference_system)/scripts/benchmark.py \
    15 'autoware_*'

The `--logdir` option can be used to store the measurement results and reports in a custom directory, without adding a timestamp. Note that this may overwrite existing measurement results in the same directory.
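The quoting of `'autoware_*'` suggests the benchmark-name argument follows shell-glob semantics (an assumption, since the script's matching code is not shown here). Under that assumption, the standard `fnmatch` module can preview which executables a pattern would select:

```python
import fnmatch

# Illustrative subset of executable names mentioned elsewhere on this page;
# this is not the complete set shipped with the package.
executables = [
    "autoware_default_multithreaded",
    "autoware_default_prioritized",
    "autoware_default_cbg",
]

selected = fnmatch.filter(executables, "autoware_*")   # matches all three
multi = fnmatch.filter(executables, "*multithreaded")  # matches only one
```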
Key Performance Indicators (KPIs)
The performance measurement evaluates the executor using the following metrics. In general, the lowest value within each KPI indicates the better performance.

- CPU utilization: in general, a lower CPU utilization is better, since it enables you to choose a smaller CPU or to run additional functionality on a larger one.
- Memory utilization: in general, a lower memory utilization is better, since it enables you to choose a smaller memory or leaves more space for other things.
- Number of dropped sensor samples in transform nodes: the nodes in the reference system always use the most recent sensor data (i.e., a history depth of 1), so a sample that is overwritten before it is processed counts as dropped.
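The dropped-samples KPI follows from the history depth of 1: a sample that is overwritten before the subscriber processes it is lost. A minimal sketch of that bookkeeping (hypothetical code, not the package's actual tracing logic):

```python
def count_dropped(published, processed):
    """With a QoS history depth of 1, every published sample that is
    never processed was overwritten in the queue and counts as dropped."""
    return len(set(published) - set(processed))

# 10 sensor samples published; the transform node only kept up with 7 of them
dropped = count_dropped(range(10), [0, 1, 3, 5, 6, 8, 9])  # -> 3
```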
(File truncated at 100 lines; see the full file.)
Changelog for package autoware_reference_system
v1.1.0
- Add Iron ROS 2 distribution
- Remove EoL distributions Foxy and Galactic
- Remove legacy hack for rosidl_generator_py
v1.0.0
- Add first changelog
- Bump version of reference_system packages to 1.0.0
- Skip callback group exe if distro is Foxy
- Update reference_system docs, logic in various places
- Migrate benchmark scripts to python
- clean up reporting code, adjust title and label sizes for figures in reports
- [91] add unit and integration tests for the reference system, fix some bugs found by tests
- Added note on super user privileges.
- Adding autoware_default_prioritized and autoware_default_cbg only to test set if super user rights available. Signed-off-by: Ralph Lange <ralph.lange@de.bosch.com>
- Fixed uncrustify finding. Signed-off-by: Ralph Lange <ralph.lange@de.bosch.com>
- Do not exit but print warning if thread prioritization fails. Signed-off-by: Ralph Lange <ralph.lange@de.bosch.com>
- add skip_tracing cmake arg to readme
- update memory individual report text sizes as well
- increase label sizes for figures
- Under Foxy, exclude executable using callback-group interface of Executor. Signed-off-by: Ralph Lange <ralph.lange@de.bosch.com>
- Added executables for prioritized and callback-group-level Executor. Signed-off-by: Ralph Lange <ralph.lange@de.bosch.com>
- default to not run benchmark tests
- Make cpu benchmark timings consistent
- switch to use cmake options
- patch version bump to include mutex for prints
- remove extra line
- fix flake8 errors
- return none not no
- handle case where log file line is incomplete
- initial release for each package
- sort axis labels along with data for latency plots
- only run tests for 5s by default
- update dependency list, add warnings to test section
- update node graph
- clean up reports
- add behavior planner jitter
- use candlesticks to show min, max, and std dev
- add std trace type, generate summary report
- fix dropped message count for now
- apply feedback from pr
- fix flake8 errors
- create node graph from list of tuples
- fix flake8 errors
- rebase, refactor report gen, fix dropped msg count
- clean up report generation code
- add prototype latency figure to report
(File truncated at 100 lines; see the full file.)