Repository Summary
| Description | A testing library and CLI for replaying ROS nodes. |
| Checkout URI | https://github.com/polymathrobotics/replay_testing.git |
| VCS Type | git |
| VCS Version | main |
| Last Updated | 2025-10-22 |
| Dev Status | DEVELOPED |
| Released | RELEASED |
| Tags | No category tags. |
| Contributing | Help Wanted (-) Good First Issues (-) Pull Requests to Review (-) |
Packages
| Name | Version |
|---|---|
| replay_testing | 0.0.3 |
README
Replay Testing
A ROS 2-based framework for configuring, authoring, and running replay tests.
Features include:
- MCAP replay and automatic recording of assets for offline review
- Baked-in unittest support for MCAP assertions
- Parametric sweeps
- Easy-to-use CMake for running in CI
- Lightweight CLI for running quickly
What is Replay Testing?
Replay testing is simply a way to replay previously recorded data into your own set of ROS nodes. When you are iterating on a piece of code, it is typically much easier to develop on your local machine than on the robot. So if you can record that data on-robot first and then replay it locally, you get the best of both worlds!
All robotics developers use replay testing in one form or another. This package simply wraps the common conventions into an easy-to-use executable.
Release Status
| Distro | Dev | Doc | Src | Ubuntu x64 |
|---|---|---|---|---|
| Rolling | | | | |
| Kilted | | | | |
| Jazzy | | | | |
| Humble | | | | |
Usage
CLI
```sh
ros2 run replay_testing replay_test [REPLAY_TEST_PATH]
```
Run @analyze only on a previous run:
```sh
ros2 run replay_testing replay_test [REPLAY_TEST_PATH] --analyze [RUN_ID]
```
For other args:
```sh
ros2 run replay_testing replay_test --help
```
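As a concrete sketch, an invocation might look like this (the test path and run ID below are hypothetical placeholders, and the run ID format is an assumption):
```sh
# Run all phases of a replay test (hypothetical path)
ros2 run replay_testing replay_test src/my_pkg/test/my_replay_test.py

# Re-run only the @analyze phase against a previous run (hypothetical run ID)
ros2 run replay_testing replay_test src/my_pkg/test/my_replay_test.py --analyze 42f3c9
```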
colcon test and CMake
This package exposes a CMake helper, add_replay_test, that you can use to run replay tests as part of your own package’s testing pipeline.
To use:
```cmake
find_package(replay_testing REQUIRED)

# ...

if(BUILD_TESTING)
  add_replay_test([REPLAY_TEST_PATH])
endif()
```
If you’ve set up your CI to persist artifact paths under test_results, you should see a *.xunit.xml file produced for the REPLAY_TEST_PATH you provided.
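Once registered, the replay test runs under colcon like any other test. A minimal sketch, assuming your package is named my_package (a hypothetical placeholder):
```sh
# Build, then run the package's tests, including the registered replay test
colcon test --packages-select my_package

# Summarize results, including the generated xunit output
colcon test-result --verbose
```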
Authoring Replay Tests
Each replay test is authored in its own file, such as my_replay_test.py. We expose a set of Python decorators that you use to wrap each class in your test.
Replay testing has three distinct phases, all of which are required to run a replay test:
Filter Fixtures @fixtures
For collecting and preparing your fixtures to be run against your launch specification. Duties include:
- Provides a mechanism for specifying your input fixtures (e.g. lidar_data.mcap). If you want to store your MCAPs outside of source control, see Storing MCAP below.
- Filters out any expected output topics that will be produced by the run step.
- Produces a filtered_fixture.mcap asset that is used against the run step.
- Asserts that the specified input topics are present.
- (Eventually) Provides ways to make your old data forwards compatible with updates to your robotics stack.
Here is how you use it:
```python
@fixtures.parameterize([LocalFixture(path="/tmp/mcap/my_data.mcap")])
class FilterFixtures:
    required_input_topics = ["/vehicle/cmd_vel"]
    expected_output_topics = ["/user/cmd_vel"]
```
Run @run
Specify a launch description that will run against the replayed fixture. Usage:
```python
from launch import LaunchDescription

@run.default()
class Run:
    def generate_launch_description(self) -> LaunchDescription:
        # Populate with your nodes and launch actions
        return LaunchDescription([...])
```
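For a more concrete sketch, here is a hypothetical Run phase that launches the node under test with the standard launch_ros API. The package and executable names are placeholders, and the import path for the @run decorator is not shown in the truncated README, so it is omitted here:
```python
from launch import LaunchDescription
from launch_ros.actions import Node

@run.default()
class Run:
    def generate_launch_description(self) -> LaunchDescription:
        return LaunchDescription([
            # Hypothetical node under test: subscribes to /vehicle/cmd_vel
            # (replayed from the fixture) and publishes /user/cmd_vel
            Node(
                package="my_package",        # hypothetical package name
                executable="cmd_vel_relay",  # hypothetical executable
                name="cmd_vel_relay",
            ),
        ])
```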
File truncated at 100 lines; see the full file.