Package Summary
| Field | Value |
| --- | --- |
| Tags | No category tags. |
| Version | 0.47.0 |
| License | Apache License 2.0 |
| Build type | AMENT_CMAKE |
| Use | RECOMMENDED |
Repository Summary
| Field | Value |
| --- | --- |
| Description | |
| Checkout URI | https://github.com/autowarefoundation/autoware_universe.git |
| VCS Type | git |
| VCS Version | main |
| Last Updated | 2025-08-16 |
| Dev Status | UNKNOWN |
| Released | UNRELEASED |
| Tags | planner ros calibration self-driving-car autonomous-driving autonomous-vehicles ros2 3d-map autoware |
Maintainers
- Fumiya Watanabe
- Kosuke Takeuchi
- Kotaro Uetake
- Kyoichi Sugahara
- Yoshi Ri
- Junya Sasaki
Authors
- Kosuke Takeuchi
Perception Evaluator
A node for evaluating the output of perception systems.
Purpose
This module evaluates how accurately perception results are generated, without requiring annotations. Because it compares the current state against results from a few seconds in the past, it can confirm perception performance while running online.
Inner-workings / Algorithms
The evaluated metrics are as follows:
- predicted_path_deviation
- predicted_path_deviation_variance
- lateral_deviation
- yaw_deviation
- yaw_rate
- total_objects_count
- average_objects_count
- interval_objects_count
Predicted Path Deviation / Predicted Path Deviation Variance
Compare the predicted path of past objects with their actual traveled path to determine the deviation for MOVING OBJECTS. For each object, calculate the mean distance between the predicted path points and the corresponding points on the actual path, up to the specified time step. In other words, this calculates the Average Displacement Error (ADE). The target object to be evaluated is the object from $T_N$ seconds ago, where $T_N$ is the maximum value of the prediction time horizon $[T_1, T_2, …, T_N]$.
> [!NOTE]
> The object from $T_N$ seconds ago is the target object for all metrics. This is to unify the time of the target object across metrics.
- $n_{points}$ : Number of points in the predicted path
- $T$ : Time horizon for prediction evaluation.
- $dt$ : Time interval of the predicted path
- $d_i$ : Distance between the predicted path and the actual traveled path at path point $i$
- $ADE$ : Mean deviation of the predicted path for the target object.
- $Var$ : Variance of the predicted path deviation for the target object.
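With these definitions, the per-object metrics can be written as follows. This is a reconstruction of the formulas omitted from this page, assuming the standard Average Displacement Error definition:

$$
n_{points} = \frac{T}{dt}, \qquad
ADE = \frac{1}{n_{points}} \sum_{i=1}^{n_{points}} d_i, \qquad
Var = \frac{1}{n_{points}} \sum_{i=1}^{n_{points}} \left( d_i - ADE \right)^2
$$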
The final predicted path deviation metrics are calculated by averaging the mean deviation of the predicted path for all objects of the same class, and then calculating the mean, maximum, and minimum values of the mean deviation.
- $n_{objects}$ : Number of objects
- $ADE_{mean}$ : Mean deviation of the predicted path through all objects
- $ADE_{max}$ : Maximum deviation of the predicted path through all objects
- $ADE_{min}$ : Minimum deviation of the predicted path through all objects
- $Var_{mean}$ : Mean variance of the predicted path deviation through all objects
- $Var_{max}$ : Maximum variance of the predicted path deviation through all objects
- $Var_{min}$ : Minimum variance of the predicted path deviation through all objects
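Under the same assumptions, the class-level aggregates are simply the mean, maximum, and minimum of the per-object values. Writing $ADE_j$ for object $j$:

$$
ADE_{mean} = \frac{1}{n_{objects}} \sum_{j=1}^{n_{objects}} ADE_j, \qquad
ADE_{max} = \max_{j} ADE_j, \qquad
ADE_{min} = \min_{j} ADE_j
$$

and analogously $Var_{mean}$, $Var_{max}$, and $Var_{min}$ from the per-object variances $Var_j$.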
The actual metric name is determined by the object class and time horizon. For example: `predicted_path_deviation_variance_CAR_5.00`.
Lateral Deviation
Calculates the lateral deviation between the smoothed traveled trajectory and the perceived position to evaluate the stability of lateral position recognition for MOVING OBJECTS. The smoothed traveled trajectory is obtained by applying a centered moving average filter whose window size is specified by the parameter `smoothing_window_size`. The lateral deviation is then calculated by comparing the smoothed traveled trajectory with the perceived position of the past object whose timestamp is $T=T_N$ seconds ago. For stopped objects, the smoothed traveled trajectory is unstable, so this metric is not calculated.
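For illustration, the smoothing step could be implemented roughly as in the sketch below. This is a hypothetical helper, not the package's actual code, and it assumes `smoothing_window_size` denotes the total number of samples in the centered window:

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

struct Point2d
{
  double x;
  double y;
};

// Centered moving average over a recorded object trajectory.
// Near the ends of the trajectory the window is truncated, so the first and
// last points are averaged over fewer samples.
std::vector<Point2d> smooth_trajectory(
  const std::vector<Point2d> & path, std::size_t smoothing_window_size)
{
  std::vector<Point2d> smoothed;
  smoothed.reserve(path.size());
  const std::size_t half = smoothing_window_size / 2;
  for (std::size_t i = 0; i < path.size(); ++i) {
    const std::size_t begin = (i >= half) ? i - half : 0;
    const std::size_t end = std::min(path.size(), i + half + 1);
    Point2d mean{0.0, 0.0};
    for (std::size_t j = begin; j < end; ++j) {
      mean.x += path[j].x;
      mean.y += path[j].y;
    }
    const double n = static_cast<double>(end - begin);
    smoothed.push_back({mean.x / n, mean.y / n});
  }
  return smoothed;
}
```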
Yaw Deviation
Calculates the deviation between the recognized yaw angle of a past object and the yaw azimuth angle of the smoothed traveled trajectory for MOVING OBJECTS. The smoothed traveled trajectory is obtained by applying a centered moving average filter whose window size is specified by the parameter `smoothing_window_size`. The yaw deviation is then calculated by comparing the yaw azimuth angle of the smoothed traveled trajectory with the perceived orientation of the past object whose timestamp is $T=T_N$ seconds ago.
For stopped objects, the smoothed traveled trajectory is unstable, so this metric is not calculated.
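Expressed as a formula (a hedged reconstruction; the sign convention and angle normalization are assumptions, where $\psi_{perceived}$ is the recognized yaw and $\psi_{traj}$ is the azimuth of the smoothed trajectory):

$$
\theta_{dev} = \mathrm{normalize}\bigl(\psi_{perceived}(t) - \psi_{traj}(t)\bigr), \qquad t = t_{now} - T_N
$$

with $\mathrm{normalize}(\cdot)$ wrapping the angle into $[-\pi, \pi)$.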
Yaw Rate
Calculates the yaw rate of an object based on the change in its yaw angle from the previous time step. It is evaluated for STATIONARY OBJECTS and assesses the stability of yaw rate recognition. The yaw rate is calculated by comparing the yaw angle of the past object with the yaw angle of the object received in the previous cycle. Here, $t_2$ is the timestamp that is $T_N$ seconds ago.
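A minimal sketch of the corresponding calculation, assuming the rate is taken as the absolute normalized yaw difference divided by the elapsed time between the two samples:

$$
\dot{\psi} = \frac{\bigl|\mathrm{normalize}\bigl(\psi(t_2) - \psi(t_1)\bigr)\bigr|}{t_2 - t_1}
$$

where $t_1$ is the timestamp of the object received in the previous cycle and $t_2$ is the timestamp $T_N$ seconds before the current time.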
Object Counts
Counts the number of detections for each object class within the specified detection range. These metrics are measured for the most recent objects, not for past objects.
File truncated at 100 lines; see the full file.
Changelog for package autoware_perception_online_evaluator
0.47.0 (2025-08-11)
- feat(perception_online_evaluator): add functionality to publish perception analytics info (#11089)
  - feat: add functionality to calculate perception metrics for MOB in autoware_perception_online_evaluator; chore: configure settings for mob metrics calculation
  - feat: change implementation from one topic per metric to all metrics published in one metric for better management by metric agent; refactor: rename FrameMetrics member to clarify variable meaning; refactor: use array/vector instead of unordered_map for FrameMetrics for better performance; chore: remap published topic name to match msg conventions
  - fix: unittest error
  - style(pre-commit): autofix
  - refactor: replace MOB keyword with generalized expression of perception analytics
  - chore: improve comment
  - refactor: add a new autoware_perception_analytics_publisher_node to publish perception analytics info instead of using previous autoware_perception_online_evaluator_node; chore: modify default launch setting to match the refactoring
  - style(pre-commit): autofix
  - fix: add initialization for `latencies_`; fix: use tf of objects timestamp instead of latest; feat: use ConstSharedPtr to avoid repeated copy of large message in `PerceptionAnalyticsCalculator::setPredictedObjects`
  - Co-authored-by: Jian Kang <jian.kang@tier4.jp>, pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
- style(pre-commit): update to clang-format-20 (#11088) Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
- Contributors: Kang, Mete Fatih Cırıt
0.46.0 (2025-06-20)
0.45.0 (2025-05-22)
0.44.2 (2025-06-10)
0.44.1 (2025-05-01)
0.44.0 (2025-04-18)
0.43.0 (2025-03-21)
- Merge remote-tracking branch 'origin/main' into chore/bump-version-0.43
- chore: rename from `autoware.universe` to `autoware_universe` (#10306)
- Contributors: Hayato Mizushima, Yutaka Kondo
0.42.0 (2025-03-03)
- Merge remote-tracking branch 'origin/main' into tmp/bot/bump_version_base
- feat(autoware_utils): replace autoware_universe_utils with autoware_utils (#10191)
- chore: refine maintainer list (#10110)
  - chore: remove Miura from maintainer
  - chore: add Taekjin-san to perception_utils package maintainer
- feat(autoware_vehicle_info_utils): replace autoware_universe_utils with autoware_utils (#10167)
- Contributors: Fumiya Watanabe, Ryohsuke Mitsudome, Shunsuke Miura, 心刚
0.41.2 (2025-02-19)
- chore: bump version to 0.41.1 (#10088)
- Contributors: Ryohsuke Mitsudome
0.41.1 (2025-02-10)
0.41.0 (2025-01-29)
File truncated at 100 lines; see the full file.
Package Dependencies
System Dependencies
| Name |
| --- |
| eigen |
Dependant Packages
Launch files
- launch/perception_analytics_publisher.launch.xml
  - input/objects [default: /perception/object_recognition/objects]
- launch/perception_online_evaluator.launch.xml
  - input/objects [default: /perception/object_recognition/objects]
Messages
Services
Plugins
Package Summary
Tags | No category tags. |
Version | 0.47.0 |
License | Apache License 2.0 |
Build type | AMENT_CMAKE |
Use | RECOMMENDED |
Repository Summary
Description | |
Checkout URI | https://github.com/autowarefoundation/autoware_universe.git |
VCS Type | git |
VCS Version | main |
Last Updated | 2025-08-16 |
Dev Status | UNKNOWN |
Released | UNRELEASED |
Tags | planner ros calibration self-driving-car autonomous-driving autonomous-vehicles ros2 3d-map autoware |
Contributing |
Help Wanted (-)
Good First Issues (-) Pull Requests to Review (-) |
Package Description
Additional Links
Maintainers
- Fumiya Watanabe
- Kosuke Takeuchi
- Kotaro Uetake
- Kyoichi Sugahara
- Yoshi Ri
- Junya Sasaki
Authors
- Kosuke Takeuchi
Perception Evaluator
A node for evaluating the output of perception systems.
Purpose
This module allows for the evaluation of how accurately perception results are generated without the need for annotations. It is capable of confirming performance and can evaluate results from a few seconds prior, enabling online execution.
Inner-workings / Algorithms
The evaluated metrics are as follows:
- predicted_path_deviation
- predicted_path_deviation_variance
- lateral_deviation
- yaw_deviation
- yaw_rate
- total_objects_count
- average_objects_count
- interval_objects_count
Predicted Path Deviation / Predicted Path Deviation Variance
Compare the predicted path of past objects with their actual traveled path to determine the deviation for MOVING OBJECTS. For each object, calculate the mean distance between the predicted path points and the corresponding points on the actual path, up to the specified time step. In other words, this calculates the Average Displacement Error (ADE). The target object to be evaluated is the object from $T_N$ seconds ago, where $T_N$ is the maximum value of the prediction time horizon $[T_1, T_2, …, T_N]$.
[!NOTE] The object from $T_N$ seconds ago is the target object for all metrics. This is to unify the time of the target object across metrics.
- $n_{points}$ : Number of points in the predicted path
- $T$ : Time horizon for prediction evaluation.
- $dt$ : Time interval of the predicted path
- $d_i$ : Distance between the predicted path and the actual traveled path at path point $i$
- $ADE$ : Mean deviation of the predicted path for the target object.
- $Var$ : Variance of the predicted path deviation for the target object.
The final predicted path deviation metrics are calculated by averaging the mean deviation of the predicted path for all objects of the same class, and then calculating the mean, maximum, and minimum values of the mean deviation.
- $n_{objects}$ : Number of objects
- $ADE_{mean}$ : Mean deviation of the predicted path through all objects
- $ADE_{max}$ : Maximum deviation of the predicted path through all objects
- $ADE_{min}$ : Minimum deviation of the predicted path through all objects
- $Var_{mean}$ : Mean variance of the predicted path deviation through all objects
- $Var_{max}$ : Maximum variance of the predicted path deviation through all objects
- $Var_{min}$ : Minimum variance of the predicted path deviation through all objects
The actual metric name is determined by the object class and time horizon. For example, predicted_path_deviation_variance_CAR_5.00
Lateral Deviation
Calculates lateral deviation between the smoothed traveled trajectory and the perceived position to evaluate the stability of lateral position recognition for MOVING OBJECTS. The smoothed traveled trajectory is calculated by applying a centered moving average filter whose window size is specified by the parameter smoothing_window_size
. The lateral deviation is calculated by comparing the smoothed traveled trajectory with the perceived position of the past object whose timestamp is $T=T_n$ seconds ago. For stopped objects, the smoothed traveled trajectory is unstable, so this metric is not calculated.
Yaw Deviation
Calculates the deviation between the recognized yaw angle of an past object and the yaw azimuth angle of the smoothed traveled trajectory for MOVING OBJECTS. The smoothed traveled trajectory is calculated by applying a centered moving average filter whose window size is specified by the parameter smoothing_window_size
. The yaw deviation is calculated by comparing the yaw azimuth angle of smoothed traveled trajectory with the perceived orientation of the past object whose timestamp is $T=T_n$ seconds ago.
For stopped objects, the smoothed traveled trajectory is unstable, so this metric is not calculated.
Yaw Rate
Calculates the yaw rate of an object based on the change in yaw angle from the previous time step. It is evaluated for STATIONARY OBJECTS and assesses the stability of yaw rate recognition. The yaw rate is calculated by comparing the yaw angle of the past object with the yaw angle of the object received in the previous cycle. Here, t2 is the timestamp that is $T_n$ seconds ago.
Object Counts
Counts the number of detections for each object class within the specified detection range. These metrics are measured for the most recent object not past objects.
File truncated at 100 lines see the full file
Changelog for package autoware_perception_online_evaluator
0.47.0 (2025-08-11)
-
feat(perception_online_evaluator): add functionality to publish perception analytics info (#11089)
* feat: add functionality to calculate perception metrics for MOB in autoware_perception_online_evaluator chore: configure settings for mob metrics calculation
* feat: change implementation from one topic per metric to all metrics published in one metric for better management by metric agent refactor: rename FrameMetrics member to clarify variable meaning refactor: use array/vector instead of unorder_map for FrameMetrics for better performance chore: remap published topic name to match msg conventions
- fix: unittest error
- style(pre-commit): autofix
- refactor: replace MOB keyword with generalized expression of perception analytics
- chore: improve comment
* refactor: add a new autoware_perception_analytics_publisher_node to publish perception analytics info instead of using previous autoware_perception_online_evaluator_node chore: modify default launch setting to match the refactoring
- style(pre-commit): autofix
* fix: add initialization for [latencies_]{.title-ref} fix: use tf of objects timestamp instead of latest feat: use ConstSharedPtr to avoid repeated copy of large message in [PerceptionAnalyticsCalculator::setPredictedObjects]{.title-ref} ---------Co-authored-by: Jian Kang <<jian.kang@tier4.jp>> Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
-
style(pre-commit): update to clang-format-20 (#11088) Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
-
Contributors: Kang, Mete Fatih Cırıt
0.46.0 (2025-06-20)
0.45.0 (2025-05-22)
0.44.2 (2025-06-10)
0.44.1 (2025-05-01)
0.44.0 (2025-04-18)
0.43.0 (2025-03-21)
- Merge remote-tracking branch 'origin/main' into chore/bump-version-0.43
- chore: rename from [autoware.universe]{.title-ref} to [autoware_universe]{.title-ref} (#10306)
- Contributors: Hayato Mizushima, Yutaka Kondo
0.42.0 (2025-03-03)
-
Merge remote-tracking branch 'origin/main' into tmp/bot/bump_version_base
-
feat(autoware_utils): replace autoware_universe_utils with autoware_utils (#10191)
-
chore: refine maintainer list (#10110)
- chore: remove Miura from maintainer
* chore: add Taekjin-san to perception_utils package maintainer ---------
-
feat(autoware_vehicle_info_utils): replace autoware_universe_utils with autoware_utils (#10167)
-
Contributors: Fumiya Watanabe, Ryohsuke Mitsudome, Shunsuke Miura, 心刚
0.41.2 (2025-02-19)
- chore: bump version to 0.41.1 (#10088)
- Contributors: Ryohsuke Mitsudome
0.41.1 (2025-02-10)
0.41.0 (2025-01-29)
File truncated at 100 lines see the full file
Package Dependencies
System Dependencies
Name |
---|
eigen |
Dependant Packages
Launch files
- launch/perception_analytics_publisher.launch.xml
-
- input/objects [default: /perception/object_recognition/objects]
- launch/perception_online_evaluator.launch.xml
-
- input/objects [default: /perception/object_recognition/objects]
Messages
Services
Plugins
Recent questions tagged autoware_perception_online_evaluator at Robotics Stack Exchange
Package Summary
Tags | No category tags. |
Version | 0.47.0 |
License | Apache License 2.0 |
Build type | AMENT_CMAKE |
Use | RECOMMENDED |
Repository Summary
Description | |
Checkout URI | https://github.com/autowarefoundation/autoware_universe.git |
VCS Type | git |
VCS Version | main |
Last Updated | 2025-08-16 |
Dev Status | UNKNOWN |
Released | UNRELEASED |
Tags | planner ros calibration self-driving-car autonomous-driving autonomous-vehicles ros2 3d-map autoware |
Contributing |
Help Wanted (-)
Good First Issues (-) Pull Requests to Review (-) |
Package Description
Additional Links
Maintainers
- Fumiya Watanabe
- Kosuke Takeuchi
- Kotaro Uetake
- Kyoichi Sugahara
- Yoshi Ri
- Junya Sasaki
Authors
- Kosuke Takeuchi
Perception Evaluator
A node for evaluating the output of perception systems.
Purpose
This module allows for the evaluation of how accurately perception results are generated without the need for annotations. It is capable of confirming performance and can evaluate results from a few seconds prior, enabling online execution.
Inner-workings / Algorithms
The evaluated metrics are as follows:
- predicted_path_deviation
- predicted_path_deviation_variance
- lateral_deviation
- yaw_deviation
- yaw_rate
- total_objects_count
- average_objects_count
- interval_objects_count
Predicted Path Deviation / Predicted Path Deviation Variance
Compare the predicted path of past objects with their actual traveled path to determine the deviation for MOVING OBJECTS. For each object, calculate the mean distance between the predicted path points and the corresponding points on the actual path, up to the specified time step. In other words, this calculates the Average Displacement Error (ADE). The target object to be evaluated is the object from $T_N$ seconds ago, where $T_N$ is the maximum value of the prediction time horizon $[T_1, T_2, …, T_N]$.
[!NOTE] The object from $T_N$ seconds ago is the target object for all metrics. This is to unify the time of the target object across metrics.
- $n_{points}$ : Number of points in the predicted path
- $T$ : Time horizon for prediction evaluation.
- $dt$ : Time interval of the predicted path
- $d_i$ : Distance between the predicted path and the actual traveled path at path point $i$
- $ADE$ : Mean deviation of the predicted path for the target object.
- $Var$ : Variance of the predicted path deviation for the target object.
The final predicted path deviation metrics are calculated by averaging the mean deviation of the predicted path for all objects of the same class, and then calculating the mean, maximum, and minimum values of the mean deviation.
- $n_{objects}$ : Number of objects
- $ADE_{mean}$ : Mean deviation of the predicted path through all objects
- $ADE_{max}$ : Maximum deviation of the predicted path through all objects
- $ADE_{min}$ : Minimum deviation of the predicted path through all objects
- $Var_{mean}$ : Mean variance of the predicted path deviation through all objects
- $Var_{max}$ : Maximum variance of the predicted path deviation through all objects
- $Var_{min}$ : Minimum variance of the predicted path deviation through all objects
The actual metric name is determined by the object class and time horizon. For example, predicted_path_deviation_variance_CAR_5.00
Lateral Deviation
Calculates lateral deviation between the smoothed traveled trajectory and the perceived position to evaluate the stability of lateral position recognition for MOVING OBJECTS. The smoothed traveled trajectory is calculated by applying a centered moving average filter whose window size is specified by the parameter smoothing_window_size
. The lateral deviation is calculated by comparing the smoothed traveled trajectory with the perceived position of the past object whose timestamp is $T=T_n$ seconds ago. For stopped objects, the smoothed traveled trajectory is unstable, so this metric is not calculated.
Yaw Deviation
Calculates the deviation between the recognized yaw angle of an past object and the yaw azimuth angle of the smoothed traveled trajectory for MOVING OBJECTS. The smoothed traveled trajectory is calculated by applying a centered moving average filter whose window size is specified by the parameter smoothing_window_size
. The yaw deviation is calculated by comparing the yaw azimuth angle of smoothed traveled trajectory with the perceived orientation of the past object whose timestamp is $T=T_n$ seconds ago.
For stopped objects, the smoothed traveled trajectory is unstable, so this metric is not calculated.
Yaw Rate
Calculates the yaw rate of an object based on the change in yaw angle from the previous time step. It is evaluated for STATIONARY OBJECTS and assesses the stability of yaw rate recognition. The yaw rate is calculated by comparing the yaw angle of the past object with the yaw angle of the object received in the previous cycle. Here, t2 is the timestamp that is $T_n$ seconds ago.
Object Counts
Counts the number of detections for each object class within the specified detection range. These metrics are measured for the most recent object not past objects.
File truncated at 100 lines see the full file
Changelog for package autoware_perception_online_evaluator
0.47.0 (2025-08-11)
-
feat(perception_online_evaluator): add functionality to publish perception analytics info (#11089)
* feat: add functionality to calculate perception metrics for MOB in autoware_perception_online_evaluator chore: configure settings for mob metrics calculation
* feat: change implementation from one topic per metric to all metrics published in one metric for better management by metric agent refactor: rename FrameMetrics member to clarify variable meaning refactor: use array/vector instead of unorder_map for FrameMetrics for better performance chore: remap published topic name to match msg conventions
- fix: unittest error
- style(pre-commit): autofix
- refactor: replace MOB keyword with generalized expression of perception analytics
- chore: improve comment
* refactor: add a new autoware_perception_analytics_publisher_node to publish perception analytics info instead of using previous autoware_perception_online_evaluator_node chore: modify default launch setting to match the refactoring
- style(pre-commit): autofix
* fix: add initialization for [latencies_]{.title-ref} fix: use tf of objects timestamp instead of latest feat: use ConstSharedPtr to avoid repeated copy of large message in [PerceptionAnalyticsCalculator::setPredictedObjects]{.title-ref} ---------Co-authored-by: Jian Kang <<jian.kang@tier4.jp>> Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
-
style(pre-commit): update to clang-format-20 (#11088) Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
-
Contributors: Kang, Mete Fatih Cırıt
0.46.0 (2025-06-20)
0.45.0 (2025-05-22)
0.44.2 (2025-06-10)
0.44.1 (2025-05-01)
0.44.0 (2025-04-18)
0.43.0 (2025-03-21)
- Merge remote-tracking branch 'origin/main' into chore/bump-version-0.43
- chore: rename from [autoware.universe]{.title-ref} to [autoware_universe]{.title-ref} (#10306)
- Contributors: Hayato Mizushima, Yutaka Kondo
0.42.0 (2025-03-03)
-
Merge remote-tracking branch 'origin/main' into tmp/bot/bump_version_base
-
feat(autoware_utils): replace autoware_universe_utils with autoware_utils (#10191)
-
chore: refine maintainer list (#10110)
- chore: remove Miura from maintainer
* chore: add Taekjin-san to perception_utils package maintainer ---------
-
feat(autoware_vehicle_info_utils): replace autoware_universe_utils with autoware_utils (#10167)
-
Contributors: Fumiya Watanabe, Ryohsuke Mitsudome, Shunsuke Miura, 心刚
0.41.2 (2025-02-19)
- chore: bump version to 0.41.1 (#10088)
- Contributors: Ryohsuke Mitsudome
0.41.1 (2025-02-10)
0.41.0 (2025-01-29)
File truncated at 100 lines see the full file
Package Dependencies
System Dependencies
Name |
---|
eigen |
Dependant Packages
Launch files
- launch/perception_analytics_publisher.launch.xml
-
- input/objects [default: /perception/object_recognition/objects]
- launch/perception_online_evaluator.launch.xml
-
- input/objects [default: /perception/object_recognition/objects]
Messages
Services
Plugins
Recent questions tagged autoware_perception_online_evaluator at Robotics Stack Exchange
Package Summary
Tags | No category tags. |
Version | 0.47.0 |
License | Apache License 2.0 |
Build type | AMENT_CMAKE |
Use | RECOMMENDED |
Repository Summary
Description | |
Checkout URI | https://github.com/autowarefoundation/autoware_universe.git |
VCS Type | git |
VCS Version | main |
Last Updated | 2025-08-16 |
Dev Status | UNKNOWN |
Released | UNRELEASED |
Tags | planner ros calibration self-driving-car autonomous-driving autonomous-vehicles ros2 3d-map autoware |
Contributing |
Help Wanted (-)
Good First Issues (-) Pull Requests to Review (-) |
Package Description
Additional Links
Maintainers
- Fumiya Watanabe
- Kosuke Takeuchi
- Kotaro Uetake
- Kyoichi Sugahara
- Yoshi Ri
- Junya Sasaki
Authors
- Kosuke Takeuchi
Perception Evaluator
A node for evaluating the output of perception systems.
Purpose
This module allows for the evaluation of how accurately perception results are generated without the need for annotations. It is capable of confirming performance and can evaluate results from a few seconds prior, enabling online execution.
Inner-workings / Algorithms
The evaluated metrics are as follows:
- predicted_path_deviation
- predicted_path_deviation_variance
- lateral_deviation
- yaw_deviation
- yaw_rate
- total_objects_count
- average_objects_count
- interval_objects_count
Predicted Path Deviation / Predicted Path Deviation Variance
Compare the predicted path of past objects with their actual traveled path to determine the deviation for MOVING OBJECTS. For each object, calculate the mean distance between the predicted path points and the corresponding points on the actual path, up to the specified time step. In other words, this calculates the Average Displacement Error (ADE). The target object to be evaluated is the object from $T_N$ seconds ago, where $T_N$ is the maximum value of the prediction time horizon $[T_1, T_2, …, T_N]$.
[!NOTE] The object from $T_N$ seconds ago is the target object for all metrics. This is to unify the time of the target object across metrics.
- $n_{points}$ : Number of points in the predicted path
- $T$ : Time horizon for prediction evaluation.
- $dt$ : Time interval of the predicted path
- $d_i$ : Distance between the predicted path and the actual traveled path at path point $i$
- $ADE$ : Mean deviation of the predicted path for the target object.
- $Var$ : Variance of the predicted path deviation for the target object.
The final predicted path deviation metrics are calculated by averaging the mean deviation of the predicted path for all objects of the same class, and then calculating the mean, maximum, and minimum values of the mean deviation.
- $n_{objects}$ : Number of objects
- $ADE_{mean}$ : Mean deviation of the predicted path through all objects
- $ADE_{max}$ : Maximum deviation of the predicted path through all objects
- $ADE_{min}$ : Minimum deviation of the predicted path through all objects
- $Var_{mean}$ : Mean variance of the predicted path deviation through all objects
- $Var_{max}$ : Maximum variance of the predicted path deviation through all objects
- $Var_{min}$ : Minimum variance of the predicted path deviation through all objects
The actual metric name is determined by the object class and time horizon. For example, predicted_path_deviation_variance_CAR_5.00
Lateral Deviation
Calculates lateral deviation between the smoothed traveled trajectory and the perceived position to evaluate the stability of lateral position recognition for MOVING OBJECTS. The smoothed traveled trajectory is calculated by applying a centered moving average filter whose window size is specified by the parameter smoothing_window_size
. The lateral deviation is calculated by comparing the smoothed traveled trajectory with the perceived position of the past object whose timestamp is $T=T_n$ seconds ago. For stopped objects, the smoothed traveled trajectory is unstable, so this metric is not calculated.
Yaw Deviation
Calculates the deviation between the recognized yaw angle of an past object and the yaw azimuth angle of the smoothed traveled trajectory for MOVING OBJECTS. The smoothed traveled trajectory is calculated by applying a centered moving average filter whose window size is specified by the parameter smoothing_window_size
. The yaw deviation is calculated by comparing the yaw azimuth angle of smoothed traveled trajectory with the perceived orientation of the past object whose timestamp is $T=T_n$ seconds ago.
For stopped objects, the smoothed traveled trajectory is unstable, so this metric is not calculated.
Yaw Rate
Calculates the yaw rate of an object based on the change in yaw angle from the previous time step. It is evaluated for STATIONARY OBJECTS and assesses the stability of yaw rate recognition. The yaw rate is calculated by comparing the yaw angle of the past object with the yaw angle of the object received in the previous cycle. Here, t2 is the timestamp that is $T_n$ seconds ago.
Object Counts
Counts the number of detections for each object class within the specified detection range. These metrics are measured for the most recent object not past objects.
File truncated at 100 lines see the full file
Changelog for package autoware_perception_online_evaluator
0.47.0 (2025-08-11)
-
feat(perception_online_evaluator): add functionality to publish perception analytics info (#11089)
* feat: add functionality to calculate perception metrics for MOB in autoware_perception_online_evaluator chore: configure settings for mob metrics calculation
* feat: change implementation from one topic per metric to all metrics published in one metric for better management by metric agent refactor: rename FrameMetrics member to clarify variable meaning refactor: use array/vector instead of unorder_map for FrameMetrics for better performance chore: remap published topic name to match msg conventions
- fix: unittest error
- style(pre-commit): autofix
- refactor: replace MOB keyword with generalized expression of perception analytics
- chore: improve comment
* refactor: add a new autoware_perception_analytics_publisher_node to publish perception analytics info instead of using previous autoware_perception_online_evaluator_node chore: modify default launch setting to match the refactoring
- style(pre-commit): autofix
* fix: add initialization for [latencies_]{.title-ref} fix: use tf of objects timestamp instead of latest feat: use ConstSharedPtr to avoid repeated copy of large message in [PerceptionAnalyticsCalculator::setPredictedObjects]{.title-ref} ---------Co-authored-by: Jian Kang <<jian.kang@tier4.jp>> Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
-
style(pre-commit): update to clang-format-20 (#11088) Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
-
Contributors: Kang, Mete Fatih Cırıt
0.46.0 (2025-06-20)
0.45.0 (2025-05-22)
0.44.2 (2025-06-10)
0.44.1 (2025-05-01)
0.44.0 (2025-04-18)
0.43.0 (2025-03-21)
- Merge remote-tracking branch 'origin/main' into chore/bump-version-0.43
- chore: rename from [autoware.universe]{.title-ref} to [autoware_universe]{.title-ref} (#10306)
- Contributors: Hayato Mizushima, Yutaka Kondo
0.42.0 (2025-03-03)
-
Merge remote-tracking branch 'origin/main' into tmp/bot/bump_version_base
-
feat(autoware_utils): replace autoware_universe_utils with autoware_utils (#10191)
-
chore: refine maintainer list (#10110)
- chore: remove Miura from maintainer
* chore: add Taekjin-san to perception_utils package maintainer ---------
-
feat(autoware_vehicle_info_utils): replace autoware_universe_utils with autoware_utils (#10167)
-
Contributors: Fumiya Watanabe, Ryohsuke Mitsudome, Shunsuke Miura, 心刚
0.41.2 (2025-02-19)
- chore: bump version to 0.41.1 (#10088)
- Contributors: Ryohsuke Mitsudome
0.41.1 (2025-02-10)
0.41.0 (2025-01-29)
File truncated at 100 lines see the full file
Package Dependencies
System Dependencies
Name |
---|
eigen |
Dependant Packages
Launch files
- launch/perception_analytics_publisher.launch.xml
-
- input/objects [default: /perception/object_recognition/objects]
- launch/perception_online_evaluator.launch.xml
-
- input/objects [default: /perception/object_recognition/objects]
Messages
Services
Plugins
Recent questions tagged autoware_perception_online_evaluator at Robotics Stack Exchange
Package Summary
Tags | No category tags. |
Version | 0.47.0 |
License | Apache License 2.0 |
Build type | AMENT_CMAKE |
Use | RECOMMENDED |
Repository Summary
Description | |
Checkout URI | https://github.com/autowarefoundation/autoware_universe.git |
VCS Type | git |
VCS Version | main |
Last Updated | 2025-08-16 |
Dev Status | UNKNOWN |
Released | UNRELEASED |
Tags | planner ros calibration self-driving-car autonomous-driving autonomous-vehicles ros2 3d-map autoware |
Contributing |
Help Wanted (-)
Good First Issues (-) Pull Requests to Review (-) |
Package Description
Additional Links
Maintainers
- Fumiya Watanabe
- Kosuke Takeuchi
- Kotaro Uetake
- Kyoichi Sugahara
- Yoshi Ri
- Junya Sasaki
Authors
- Kosuke Takeuchi
Perception Evaluator
A node for evaluating the output of perception systems.
Purpose
This module allows for the evaluation of how accurately perception results are generated without the need for annotations. It is capable of confirming performance and can evaluate results from a few seconds prior, enabling online execution.
Inner-workings / Algorithms
The evaluated metrics are as follows:
- predicted_path_deviation
- predicted_path_deviation_variance
- lateral_deviation
- yaw_deviation
- yaw_rate
- total_objects_count
- average_objects_count
- interval_objects_count
Predicted Path Deviation / Predicted Path Deviation Variance
Compare the predicted path of past objects with their actual traveled path to determine the deviation for MOVING OBJECTS. For each object, calculate the mean distance between the predicted path points and the corresponding points on the actual path, up to the specified time step. In other words, this calculates the Average Displacement Error (ADE). The target object to be evaluated is the object from $T_N$ seconds ago, where $T_N$ is the maximum value of the prediction time horizon $[T_1, T_2, …, T_N]$.
[!NOTE] The object from $T_N$ seconds ago is the target object for all metrics. This is to unify the time of the target object across metrics.
- $n_{points}$ : Number of points in the predicted path
- $T$ : Time horizon for prediction evaluation.
- $dt$ : Time interval of the predicted path
- $d_i$ : Distance between the predicted path and the actual traveled path at path point $i$
- $ADE$ : Mean deviation of the predicted path for the target object.
- $Var$ : Variance of the predicted path deviation for the target object.
The final predicted path deviation metrics are calculated by averaging the mean deviation of the predicted path for all objects of the same class, and then calculating the mean, maximum, and minimum values of the mean deviation.
- $n_{objects}$ : Number of objects
- $ADE_{mean}$ : Mean deviation of the predicted path through all objects
- $ADE_{max}$ : Maximum deviation of the predicted path through all objects
- $ADE_{min}$ : Minimum deviation of the predicted path through all objects
- $Var_{mean}$ : Mean variance of the predicted path deviation through all objects
- $Var_{max}$ : Maximum variance of the predicted path deviation through all objects
- $Var_{min}$ : Minimum variance of the predicted path deviation through all objects
The actual metric name is determined by the object class and time horizon. For example, predicted_path_deviation_variance_CAR_5.00
Lateral Deviation
Calculates lateral deviation between the smoothed traveled trajectory and the perceived position to evaluate the stability of lateral position recognition for MOVING OBJECTS. The smoothed traveled trajectory is calculated by applying a centered moving average filter whose window size is specified by the parameter smoothing_window_size
. The lateral deviation is calculated by comparing the smoothed traveled trajectory with the perceived position of the past object whose timestamp is $T=T_n$ seconds ago. For stopped objects, the smoothed traveled trajectory is unstable, so this metric is not calculated.
Yaw Deviation
Calculates the deviation between the recognized yaw angle of an past object and the yaw azimuth angle of the smoothed traveled trajectory for MOVING OBJECTS. The smoothed traveled trajectory is calculated by applying a centered moving average filter whose window size is specified by the parameter smoothing_window_size
. The yaw deviation is calculated by comparing the yaw azimuth angle of smoothed traveled trajectory with the perceived orientation of the past object whose timestamp is $T=T_n$ seconds ago.
For stopped objects, the smoothed traveled trajectory is unstable, so this metric is not calculated.
Yaw Rate
Calculates the yaw rate of an object based on the change in yaw angle from the previous time step. It is evaluated for STATIONARY OBJECTS and assesses the stability of yaw rate recognition. The yaw rate is calculated by comparing the yaw angle of the past object with the yaw angle of the object received in the previous cycle. Here, t2 is the timestamp that is $T_n$ seconds ago.
Object Counts
Counts the number of detections for each object class within the specified detection range. These metrics are measured for the most recent object not past objects.
File truncated at 100 lines see the full file
Changelog for package autoware_perception_online_evaluator
0.47.0 (2025-08-11)
-
feat(perception_online_evaluator): add functionality to publish perception analytics info (#11089)
* feat: add functionality to calculate perception metrics for MOB in autoware_perception_online_evaluator chore: configure settings for mob metrics calculation
* feat: change implementation from one topic per metric to all metrics published in one metric for better management by metric agent refactor: rename FrameMetrics member to clarify variable meaning refactor: use array/vector instead of unorder_map for FrameMetrics for better performance chore: remap published topic name to match msg conventions
- fix: unittest error
- style(pre-commit): autofix
- refactor: replace MOB keyword with generalized expression of perception analytics
- chore: improve comment
* refactor: add a new autoware_perception_analytics_publisher_node to publish perception analytics info instead of using previous autoware_perception_online_evaluator_node chore: modify default launch setting to match the refactoring
- style(pre-commit): autofix
* fix: add initialization for [latencies_]{.title-ref} fix: use tf of objects timestamp instead of latest feat: use ConstSharedPtr to avoid repeated copy of large message in [PerceptionAnalyticsCalculator::setPredictedObjects]{.title-ref} ---------Co-authored-by: Jian Kang <<jian.kang@tier4.jp>> Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
-
style(pre-commit): update to clang-format-20 (#11088) Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
-
Contributors: Kang, Mete Fatih Cırıt
0.46.0 (2025-06-20)
0.45.0 (2025-05-22)
0.44.2 (2025-06-10)
0.44.1 (2025-05-01)
0.44.0 (2025-04-18)
0.43.0 (2025-03-21)
- Merge remote-tracking branch 'origin/main' into chore/bump-version-0.43
- chore: rename from [autoware.universe]{.title-ref} to [autoware_universe]{.title-ref} (#10306)
- Contributors: Hayato Mizushima, Yutaka Kondo
0.42.0 (2025-03-03)
-
Merge remote-tracking branch 'origin/main' into tmp/bot/bump_version_base
-
feat(autoware_utils): replace autoware_universe_utils with autoware_utils (#10191)
-
chore: refine maintainer list (#10110)
- chore: remove Miura from maintainer
* chore: add Taekjin-san to perception_utils package maintainer ---------
-
feat(autoware_vehicle_info_utils): replace autoware_universe_utils with autoware_utils (#10167)
-
Contributors: Fumiya Watanabe, Ryohsuke Mitsudome, Shunsuke Miura, 心刚
0.41.2 (2025-02-19)
- chore: bump version to 0.41.1 (#10088)
- Contributors: Ryohsuke Mitsudome
0.41.1 (2025-02-10)
0.41.0 (2025-01-29)
File truncated at 100 lines see the full file
Package Dependencies
System Dependencies
Name |
---|
eigen |
Dependant Packages
Launch files
- launch/perception_analytics_publisher.launch.xml
-
- input/objects [default: /perception/object_recognition/objects]
- launch/perception_online_evaluator.launch.xml
-
- input/objects [default: /perception/object_recognition/objects]
Messages
Services
Plugins
Recent questions tagged autoware_perception_online_evaluator at Robotics Stack Exchange
Package Summary
Tags | No category tags. |
Version | 0.47.0 |
License | Apache License 2.0 |
Build type | AMENT_CMAKE |
Use | RECOMMENDED |
Repository Summary
Description | |
Checkout URI | https://github.com/autowarefoundation/autoware_universe.git |
VCS Type | git |
VCS Version | main |
Last Updated | 2025-08-16 |
Dev Status | UNKNOWN |
Released | UNRELEASED |
Tags | planner ros calibration self-driving-car autonomous-driving autonomous-vehicles ros2 3d-map autoware |
Contributing |
Help Wanted (-)
Good First Issues (-) Pull Requests to Review (-) |
Package Description
Additional Links
Maintainers
- Fumiya Watanabe
- Kosuke Takeuchi
- Kotaro Uetake
- Kyoichi Sugahara
- Yoshi Ri
- Junya Sasaki
Authors
- Kosuke Takeuchi
Perception Evaluator
A node for evaluating the output of perception systems.
Purpose
This module allows for the evaluation of how accurately perception results are generated without the need for annotations. It is capable of confirming performance and can evaluate results from a few seconds prior, enabling online execution.
Inner-workings / Algorithms
The evaluated metrics are as follows:
- predicted_path_deviation
- predicted_path_deviation_variance
- lateral_deviation
- yaw_deviation
- yaw_rate
- total_objects_count
- average_objects_count
- interval_objects_count
Predicted Path Deviation / Predicted Path Deviation Variance
Compare the predicted path of past objects with their actual traveled path to determine the deviation for MOVING OBJECTS. For each object, calculate the mean distance between the predicted path points and the corresponding points on the actual path, up to the specified time step. In other words, this calculates the Average Displacement Error (ADE). The target object to be evaluated is the object from $T_N$ seconds ago, where $T_N$ is the maximum value of the prediction time horizon $[T_1, T_2, …, T_N]$.
[!NOTE] The object from $T_N$ seconds ago is the target object for all metrics. This is to unify the time of the target object across metrics.
- $n_{points}$ : Number of points in the predicted path
- $T$ : Time horizon for prediction evaluation.
- $dt$ : Time interval of the predicted path
- $d_i$ : Distance between the predicted path and the actual traveled path at path point $i$
- $ADE$ : Mean deviation of the predicted path for the target object.
- $Var$ : Variance of the predicted path deviation for the target object.
The final predicted path deviation metrics are calculated by averaging the mean deviation of the predicted path for all objects of the same class, and then calculating the mean, maximum, and minimum values of the mean deviation.
- $n_{objects}$ : Number of objects
- $ADE_{mean}$ : Mean deviation of the predicted path through all objects
- $ADE_{max}$ : Maximum deviation of the predicted path through all objects
- $ADE_{min}$ : Minimum deviation of the predicted path through all objects
- $Var_{mean}$ : Mean variance of the predicted path deviation through all objects
- $Var_{max}$ : Maximum variance of the predicted path deviation through all objects
- $Var_{min}$ : Minimum variance of the predicted path deviation through all objects
The actual metric name is determined by the object class and time horizon. For example, predicted_path_deviation_variance_CAR_5.00
Lateral Deviation
Calculates lateral deviation between the smoothed traveled trajectory and the perceived position to evaluate the stability of lateral position recognition for MOVING OBJECTS. The smoothed traveled trajectory is calculated by applying a centered moving average filter whose window size is specified by the parameter smoothing_window_size
. The lateral deviation is calculated by comparing the smoothed traveled trajectory with the perceived position of the past object whose timestamp is $T=T_n$ seconds ago. For stopped objects, the smoothed traveled trajectory is unstable, so this metric is not calculated.
Yaw Deviation
Calculates the deviation between the recognized yaw angle of an past object and the yaw azimuth angle of the smoothed traveled trajectory for MOVING OBJECTS. The smoothed traveled trajectory is calculated by applying a centered moving average filter whose window size is specified by the parameter smoothing_window_size
. The yaw deviation is calculated by comparing the yaw azimuth angle of smoothed traveled trajectory with the perceived orientation of the past object whose timestamp is $T=T_n$ seconds ago.
For stopped objects, the smoothed traveled trajectory is unstable, so this metric is not calculated.
Yaw Rate
Calculates the yaw rate of an object based on the change in yaw angle from the previous time step. It is evaluated for STATIONARY OBJECTS and assesses the stability of yaw rate recognition. The yaw rate is calculated by comparing the yaw angle of the past object with the yaw angle of the object received in the previous cycle. Here, t2 is the timestamp that is $T_n$ seconds ago.
Object Counts
Counts the number of detections for each object class within the specified detection range. These metrics are measured for the most recent object not past objects.
File truncated at 100 lines see the full file
Changelog for package autoware_perception_online_evaluator
0.47.0 (2025-08-11)
-
feat(perception_online_evaluator): add functionality to publish perception analytics info (#11089)
* feat: add functionality to calculate perception metrics for MOB in autoware_perception_online_evaluator chore: configure settings for mob metrics calculation
* feat: change implementation from one topic per metric to all metrics published in one metric for better management by metric agent refactor: rename FrameMetrics member to clarify variable meaning refactor: use array/vector instead of unorder_map for FrameMetrics for better performance chore: remap published topic name to match msg conventions
- fix: unittest error
- style(pre-commit): autofix
- refactor: replace MOB keyword with generalized expression of perception analytics
- chore: improve comment
* refactor: add a new autoware_perception_analytics_publisher_node to publish perception analytics info instead of using previous autoware_perception_online_evaluator_node chore: modify default launch setting to match the refactoring
- style(pre-commit): autofix
* fix: add initialization for [latencies_]{.title-ref} fix: use tf of objects timestamp instead of latest feat: use ConstSharedPtr to avoid repeated copy of large message in [PerceptionAnalyticsCalculator::setPredictedObjects]{.title-ref} ---------Co-authored-by: Jian Kang <<jian.kang@tier4.jp>> Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
-
style(pre-commit): update to clang-format-20 (#11088) Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
-
Contributors: Kang, Mete Fatih Cırıt
0.46.0 (2025-06-20)
0.45.0 (2025-05-22)
0.44.2 (2025-06-10)
0.44.1 (2025-05-01)
0.44.0 (2025-04-18)
0.43.0 (2025-03-21)
- Merge remote-tracking branch 'origin/main' into chore/bump-version-0.43
- chore: rename from [autoware.universe]{.title-ref} to [autoware_universe]{.title-ref} (#10306)
- Contributors: Hayato Mizushima, Yutaka Kondo
0.42.0 (2025-03-03)
-
Merge remote-tracking branch 'origin/main' into tmp/bot/bump_version_base
-
feat(autoware_utils): replace autoware_universe_utils with autoware_utils (#10191)
-
chore: refine maintainer list (#10110)
- chore: remove Miura from maintainer
* chore: add Taekjin-san to perception_utils package maintainer ---------
-
feat(autoware_vehicle_info_utils): replace autoware_universe_utils with autoware_utils (#10167)
-
Contributors: Fumiya Watanabe, Ryohsuke Mitsudome, Shunsuke Miura, 心刚
0.41.2 (2025-02-19)
- chore: bump version to 0.41.1 (#10088)
- Contributors: Ryohsuke Mitsudome
0.41.1 (2025-02-10)
0.41.0 (2025-01-29)
File truncated at 100 lines see the full file
Package Dependencies
System Dependencies
Name |
---|
eigen |
Dependant Packages
Launch files
- launch/perception_analytics_publisher.launch.xml
-
- input/objects [default: /perception/object_recognition/objects]
- launch/perception_online_evaluator.launch.xml
-
- input/objects [default: /perception/object_recognition/objects]
Messages
Services
Plugins
Recent questions tagged autoware_perception_online_evaluator at Robotics Stack Exchange
Package Summary
Tags | No category tags. |
Version | 0.47.0 |
License | Apache License 2.0 |
Build type | AMENT_CMAKE |
Use | RECOMMENDED |
Repository Summary
Description | |
Checkout URI | https://github.com/autowarefoundation/autoware_universe.git |
VCS Type | git |
VCS Version | main |
Last Updated | 2025-08-16 |
Dev Status | UNKNOWN |
Released | UNRELEASED |
Tags | planner ros calibration self-driving-car autonomous-driving autonomous-vehicles ros2 3d-map autoware |
Contributing |
Help Wanted (-)
Good First Issues (-) Pull Requests to Review (-) |
Package Description
Additional Links
Maintainers
- Fumiya Watanabe
- Kosuke Takeuchi
- Kotaro Uetake
- Kyoichi Sugahara
- Yoshi Ri
- Junya Sasaki
Authors
- Kosuke Takeuchi
Perception Evaluator
A node for evaluating the output of perception systems.
Purpose
This module allows for the evaluation of how accurately perception results are generated without the need for annotations. It is capable of confirming performance and can evaluate results from a few seconds prior, enabling online execution.
Inner-workings / Algorithms
The evaluated metrics are as follows:
- predicted_path_deviation
- predicted_path_deviation_variance
- lateral_deviation
- yaw_deviation
- yaw_rate
- total_objects_count
- average_objects_count
- interval_objects_count
Predicted Path Deviation / Predicted Path Deviation Variance
Compare the predicted path of past objects with their actual traveled path to determine the deviation for MOVING OBJECTS. For each object, calculate the mean distance between the predicted path points and the corresponding points on the actual path, up to the specified time step. In other words, this calculates the Average Displacement Error (ADE). The target object to be evaluated is the object from $T_N$ seconds ago, where $T_N$ is the maximum value of the prediction time horizon $[T_1, T_2, …, T_N]$.
[!NOTE] The object from $T_N$ seconds ago is the target object for all metrics. This is to unify the time of the target object across metrics.
- $n_{points}$ : Number of points in the predicted path
- $T$ : Time horizon for prediction evaluation.
- $dt$ : Time interval of the predicted path
- $d_i$ : Distance between the predicted path and the actual traveled path at path point $i$
- $ADE$ : Mean deviation of the predicted path for the target object.
- $Var$ : Variance of the predicted path deviation for the target object.
The final predicted path deviation metrics are calculated by averaging the mean deviation of the predicted path for all objects of the same class, and then calculating the mean, maximum, and minimum values of the mean deviation.
- $n_{objects}$ : Number of objects
- $ADE_{mean}$ : Mean deviation of the predicted path through all objects
- $ADE_{max}$ : Maximum deviation of the predicted path through all objects
- $ADE_{min}$ : Minimum deviation of the predicted path through all objects
- $Var_{mean}$ : Mean variance of the predicted path deviation through all objects
- $Var_{max}$ : Maximum variance of the predicted path deviation through all objects
- $Var_{min}$ : Minimum variance of the predicted path deviation through all objects
The actual metric name is determined by the object class and time horizon. For example, predicted_path_deviation_variance_CAR_5.00
Lateral Deviation
Calculates lateral deviation between the smoothed traveled trajectory and the perceived position to evaluate the stability of lateral position recognition for MOVING OBJECTS. The smoothed traveled trajectory is calculated by applying a centered moving average filter whose window size is specified by the parameter smoothing_window_size
. The lateral deviation is calculated by comparing the smoothed traveled trajectory with the perceived position of the past object whose timestamp is $T=T_n$ seconds ago. For stopped objects, the smoothed traveled trajectory is unstable, so this metric is not calculated.
Yaw Deviation
Calculates the deviation between the recognized yaw angle of an past object and the yaw azimuth angle of the smoothed traveled trajectory for MOVING OBJECTS. The smoothed traveled trajectory is calculated by applying a centered moving average filter whose window size is specified by the parameter smoothing_window_size
. The yaw deviation is calculated by comparing the yaw azimuth angle of smoothed traveled trajectory with the perceived orientation of the past object whose timestamp is $T=T_n$ seconds ago.
For stopped objects, the smoothed traveled trajectory is unstable, so this metric is not calculated.
Yaw Rate
Calculates the yaw rate of an object based on the change in yaw angle from the previous time step. It is evaluated for STATIONARY OBJECTS and assesses the stability of yaw rate recognition. The yaw rate is calculated by comparing the yaw angle of the past object with the yaw angle of the object received in the previous cycle. Here, t2 is the timestamp that is $T_n$ seconds ago.
Object Counts
Counts the number of detections for each object class within the specified detection range. These metrics are measured for the most recent object not past objects.
File truncated at 100 lines see the full file
Changelog for package autoware_perception_online_evaluator
0.47.0 (2025-08-11)
-
feat(perception_online_evaluator): add functionality to publish perception analytics info (#11089)
* feat: add functionality to calculate perception metrics for MOB in autoware_perception_online_evaluator chore: configure settings for mob metrics calculation
* feat: change implementation from one topic per metric to all metrics published in one metric for better management by metric agent refactor: rename FrameMetrics member to clarify variable meaning refactor: use array/vector instead of unorder_map for FrameMetrics for better performance chore: remap published topic name to match msg conventions
- fix: unittest error
- style(pre-commit): autofix
- refactor: replace MOB keyword with generalized expression of perception analytics
- chore: improve comment
* refactor: add a new autoware_perception_analytics_publisher_node to publish perception analytics info instead of using previous autoware_perception_online_evaluator_node chore: modify default launch setting to match the refactoring
- style(pre-commit): autofix
* fix: add initialization for [latencies_]{.title-ref} fix: use tf of objects timestamp instead of latest feat: use ConstSharedPtr to avoid repeated copy of large message in [PerceptionAnalyticsCalculator::setPredictedObjects]{.title-ref} ---------Co-authored-by: Jian Kang <<jian.kang@tier4.jp>> Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
-
style(pre-commit): update to clang-format-20 (#11088) Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
-
Contributors: Kang, Mete Fatih Cırıt
0.46.0 (2025-06-20)
0.45.0 (2025-05-22)
0.44.2 (2025-06-10)
0.44.1 (2025-05-01)
0.44.0 (2025-04-18)
0.43.0 (2025-03-21)
- Merge remote-tracking branch 'origin/main' into chore/bump-version-0.43
- chore: rename from [autoware.universe]{.title-ref} to [autoware_universe]{.title-ref} (#10306)
- Contributors: Hayato Mizushima, Yutaka Kondo
0.42.0 (2025-03-03)
- Merge remote-tracking branch 'origin/main' into tmp/bot/bump_version_base
- feat(autoware_utils): replace autoware_universe_utils with autoware_utils (#10191)
- chore: refine maintainer list (#10110)
  - chore: remove Miura from maintainer
  - chore: add Taekjin-san to perception_utils package maintainer
- feat(autoware_vehicle_info_utils): replace autoware_universe_utils with autoware_utils (#10167)
- Contributors: Fumiya Watanabe, Ryohsuke Mitsudome, Shunsuke Miura, 心刚
0.41.2 (2025-02-19)
- chore: bump version to 0.41.1 (#10088)
- Contributors: Ryohsuke Mitsudome
0.41.1 (2025-02-10)
0.41.0 (2025-01-29)
Package Dependencies
System Dependencies
Name |
---|
eigen |
Dependant Packages
Launch files
- launch/perception_analytics_publisher.launch.xml
  - input/objects [default: /perception/object_recognition/objects]
- launch/perception_online_evaluator.launch.xml
  - input/objects [default: /perception/object_recognition/objects]
Messages
Services
Plugins