Package Summary
Tags | No category tags. |
Version | 0.0.1 |
License | BSD |
Build type | AMENT_CMAKE |
Use | RECOMMENDED |
Repository Summary
Description | sensor calibration tools for autonomous driving and robotics |
Checkout URI | https://github.com/tier4/calibrationtools.git |
VCS Type | git |
VCS Version | tier4/universe |
Last Updated | 2025-07-31 |
Dev Status | UNKNOWN |
Released | UNRELEASED |
Tags | computer-vision camera-calibration calibration autonomous-driving ros2 autoware sensor-calibration lidar-calibration robotics |
Contributing |
Help Wanted (-)
Good First Issues (-) Pull Requests to Review (-) |
Package Description
Additional Links
Maintainers
- Kenzo Lobos Tsunekawa
Authors
marker_radar_lidar_calibrator
A tutorial for this calibrator can be found here
Purpose
The package `marker_radar_lidar_calibrator` performs extrinsic calibration between radar and 3d lidar sensors used in autonomous driving and robotics.
Currently, the calibrator only supports radars whose detection interface provides distance and azimuth angle but not elevation angle. For example, ARS408 radars can be calibrated with this tool. Also, note that the 3d lidar must have a high enough resolution to produce several returns on the radar reflector (the calibration target).
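Since such radars report only range and azimuth, each detection can be interpreted as a point on the sensor's x-y plane. The following minimal sketch illustrates that conversion (it is an illustration only, not code from this package):

```python
import math

def radar_detection_to_xy(distance_m: float, azimuth_rad: float) -> tuple[float, float]:
    """Project a (range, azimuth) radar detection onto the radar's x-y plane (z is unknown)."""
    return distance_m * math.cos(azimuth_rad), distance_m * math.sin(azimuth_rad)

# Example: a detection 10 m away, 5 degrees to the left of the radar's x axis.
x, y = radar_detection_to_xy(10.0, math.radians(5.0))
```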
Inner-workings / Algorithms
The calibrator computes the center of the reflectors from the pointcloud and pairs them to the radar objects/tracks. Afterwards, both an SVD-based and a yaw-only rotation estimation algorithm are applied to these matched points to estimate the rigid transformation between sensors.
Due to the complexity of the problem, the process is split into the following steps: constructing a background model, extracting the foreground to detect reflectors, matching and filtering the lidar and radar detections, and estimating the rigid transformation between the radar and lidar sensors.
In what follows, we explain each step, putting emphasis on the parts that the user must take into consideration to use this package effectively.
*Note: although the radar can provide detections and/or objects/tracks, we treat both as points in this package, and as such may refer to the radar pointcloud when needed.
Step 1: Background model construction
Detecting corner reflectors in an unknown environment, without imposing impractical restrictions on the reflectors themselves, the operators, or the environment, is a challenging problem. From the perspective of the lidar, radar reflectors may be confused with the floor or other metallic objects, and from the radar’s perspective, although corner reflectors are detected by the sensor (the user must confirm this themselves before attempting to use this tool!), other objects are also detected, with no practical way to tell them apart most of the time.
For these reasons, we avoid addressing the full problem and instead leverage background models. To do this, the user must first present the sensors with an environment containing no radar reflectors and no dynamic objects (mainly people) in the space that is to be used for calibration. The tool collects data for a set period of time or until no new information arrives. For each modality, this data is then turned into voxels, and the space covered by each occupied voxel is marked as background in the following steps.
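Conceptually, the background model can be pictured as a set of occupied voxel indices per sensor. A minimal sketch, assuming plain numpy pointclouds and a hypothetical voxel size (not the package's actual data structures or parameters):

```python
import numpy as np

def build_background_model(clouds: list[np.ndarray], voxel_size: float = 0.2) -> set[tuple[int, int, int]]:
    """Return the set of voxel indices occupied by background-only scans.

    Each cloud is an (N, 3) array of xyz points collected while the scene
    contains no reflectors and no people.
    """
    background: set[tuple[int, int, int]] = set()
    for cloud in clouds:
        keys = np.floor(cloud / voxel_size).astype(int)  # voxel index of each point
        background.update(map(tuple, keys))              # mark those voxels as background
    return background
```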
Step 2: Foreground extraction and reflector detection
Once the background models for both sensors have been prepared, new data is filtered using the background models so that only the foreground remains.
Before placing radar reflectors, the foreground data should ideally be empty, and once they are placed, only the reflectors and the people holding them should appear as foreground. In practice, however, even small variations in the load of the vehicle can cause ground points to escape the background model and be marked as foreground (a phenomenon exclusive to the lidar). To address this issue, we also employ a RANSAC-based ground segmentation algorithm to prevent these ground points from being processed in downstream steps.
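As an illustration of this stage, the sketch below filters a lidar cloud against a voxel background set like the one built above and then drops residual ground points with a simple RANSAC plane fit; the function names and thresholds are hypothetical, not the package's:

```python
import numpy as np

def extract_foreground(cloud: np.ndarray, background: set, voxel_size: float = 0.2) -> np.ndarray:
    """Keep only points whose voxel was never marked as background."""
    keys = np.floor(cloud / voxel_size).astype(int)
    mask = np.array([tuple(k) not in background for k in keys])
    return cloud[mask]

def remove_ground_ransac(points: np.ndarray, iters: int = 200, dist_thr: float = 0.05) -> np.ndarray:
    """Fit a plane with RANSAC and return only the points off that plane."""
    best_inliers = np.zeros(len(points), dtype=bool)
    rng = np.random.default_rng(0)
    for _ in range(iters):
        sample = points[rng.choice(len(points), 3, replace=False)]
        normal = np.cross(sample[1] - sample[0], sample[2] - sample[0])
        norm = np.linalg.norm(normal)
        if norm < 1e-9:
            continue                                   # degenerate sample, skip
        normal /= norm
        d = -normal.dot(sample[0])
        inliers = np.abs(points @ normal + d) < dist_thr
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    return points[~best_inliers]                       # keep the non-ground points
```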
All foreground radar objects are automatically categorized as potential reflector detections. For foreground lidar points, however, the reflector detection process involves more steps:
- We first apply a clustering algorithm on the lidar foreground points and discard clusters whose number of points falls below a predefined threshold.
- We then compute the highest point of each cluster and discard the cluster if that point exceeds `reflector_max_height`. This is required to discard the clusters corresponding to the operators (we assume the operators are taller than the reflectors).
- Finally, we average all points within a `reflector_radius` of the highest point to estimate the center point of the reflector (see the sketch below).
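A minimal sketch of these three rules, assuming the clusters are given as numpy arrays and using the `reflector_max_height` and `reflector_radius` parameter names with hypothetical default values:

```python
import numpy as np

def detect_reflector_centers(clusters, min_points=10,
                             reflector_max_height=1.2, reflector_radius=0.1):
    """Estimate one center point per plausible reflector cluster."""
    centers = []
    for cluster in clusters:                           # cluster: (N, 3) array of xyz points
        if len(cluster) < min_points:
            continue                                   # too few points to be a reflector
        top = cluster[np.argmax(cluster[:, 2])]        # highest point of the cluster
        if top[2] > reflector_max_height:
            continue                                   # likely an operator, not a reflector
        near_top = cluster[np.linalg.norm(cluster - top, axis=1) < reflector_radius]
        centers.append(near_top.mean(axis=0))          # average the points near the top
    return centers
```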
The following images illustrate the background construction and foreground extraction processes, respectively. Although only the radar pre-processing is shown, the process is the same for the lidar.
During background model construction (left image), the blue voxels (presented as a 2d grid for visualization purposes) are marked as background since sensor data is present in those voxels.
Once background model construction finishes and the foreground extraction process begins (right image), only points that fall outside previously background-marked voxels are considered foreground. In this example, the points hitting the corner reflector and a human are marked as foreground (note that those points’ voxels, here marked in green, are disjoint from those of the background).
Background model construction | Foreground extraction
Step 3: Matching and filtering
The output of the previous step consists of two lists of potential radar reflector candidates, one per sensor. However, it is not possible to directly match points between these lists, and both are expected to contain a high number of false positives.
To address this issue, we rely on a heuristic that leverages the accuracy of the initial calibration. Usually, robot/vehicle CAD designs allow an initial calibration with an accuracy of a few centimeters/degrees, and direct sensor calibration is only used to refine it.
Using the initial radar-lidar calibration, we project each lidar corner reflector candidate into radar coordinates and, for each candidate, compute the closest candidate from the other modality. We consider as real radar-lidar reflector pairs those candidates that are mutually each other's closest match.
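A minimal sketch of this mutual-nearest-neighbor test, assuming the lidar candidates have already been projected into the radar frame and using a hypothetical distance gate:

```python
import numpy as np

def mutual_nearest_matches(lidar_pts: np.ndarray, radar_pts: np.ndarray, max_dist: float = 0.5):
    """Return index pairs (i, j) that are each other's closest candidate and close enough."""
    dists = np.linalg.norm(lidar_pts[:, None, :] - radar_pts[None, :, :], axis=2)
    matches = []
    for i in range(len(lidar_pts)):
        j = int(np.argmin(dists[i]))                   # closest radar candidate to lidar i
        if int(np.argmin(dists[:, j])) == i and dists[i, j] < max_dist:
            matches.append((i, j))                     # mutually closest -> accept the pair
    return matches
```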
Matches using this heuristic can still contain incorrect pairs and false positives, which is why we employ a Kalman filter to both improve the estimations and check for temporal consistency (false positives are not usually consistent in time).
Once a match’s estimate converges (based on a covariance criterion), it is added to the calibration list.
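As an illustration, the following constant-position Kalman filter tracks a matched reflector and flags convergence when the covariance trace falls below a threshold; the noise values and threshold are hypothetical, not the package's defaults:

```python
import numpy as np

class TrackedReflector:
    """Constant-position Kalman filter for one matched reflector position."""

    def __init__(self, first_obs: np.ndarray, obs_noise: float = 0.05):
        self.x = first_obs.copy()                # estimated reflector position (3,)
        self.P = np.eye(3) * 1.0                 # estimate covariance
        self.R = np.eye(3) * obs_noise**2        # observation noise covariance

    def update(self, obs: np.ndarray) -> None:
        K = self.P @ np.linalg.inv(self.P + self.R)   # Kalman gain
        self.x = self.x + K @ (obs - self.x)          # correct the estimate
        self.P = (np.eye(3) - K) @ self.P             # shrink the covariance

    def converged(self, trace_threshold: float = 0.01) -> bool:
        return float(np.trace(self.P)) < trace_threshold
```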
Step 4: Rigid transformation estimation
After matching detection pairs, we apply rigid transformation estimation algorithms to those pairs to estimate the transformation between the radar and lidar sensors. We currently support two algorithms: a 2d SVD-based method and a yaw-only rotation method.
2d SVD-based method
In this method, we reduce the problem to a 2d transformation estimation, since radar detections lack a z component (elevation is fixed to zero).
However, because lidar detections are in the lidar frame, which likely involves a 3d transformation (non-zero roll and/or pitch) to the radar frame, we first transform the lidar detections to a frame dubbed the `radar parallel` frame and then set their z component to zero. The `radar parallel` frame has only a 2d transformation (x, y, yaw) relative to the radar frame. By dropping the z component we explicitly give up on computing a 3d pose, which is not possible due to the nature of the radar anyway.
In autonomous vehicles, radars are mounted in a way designed to minimize pitch and roll angles, maximizing their performance and measurement range. This means the radar sensors are aligned as parallel as possible to the ground plane, making the `base_link` a suitable choice for the `radar parallel` frame.
*Note: this assumes that the lidar-to-radar-parallel-frame transformation is either hardcoded or previously calibrated.*
Next, we apply the SVD-based rigid transformation estimation algorithm between the lidar detections in the radar parallel frame and the radar detections in the radar frame. This allows us to estimate the transformation between the lidar and radar by multiplying the radar-to-radar-parallel transformation (calibrated) with the radar-parallel-to-lidar transformation (known beforehand). The SVD-based algorithm, provided by PCL, leverages SVD to find the optimal rotation component and then computes the translation component from that rotation.
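The core of this step is the standard SVD-based (Kabsch) rigid alignment. A minimal 2d sketch of that computation, shown as an illustration of the technique rather than the package's PCL-based implementation:

```python
import numpy as np

def estimate_rigid_2d(src: np.ndarray, dst: np.ndarray):
    """Find R (2x2) and t (2,) minimizing ||R @ src_i + t - dst_i|| over matched pairs."""
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)  # centroids
    H = (src - src_c).T @ (dst - dst_c)                # 2x2 cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T                                     # optimal rotation
    if np.linalg.det(R) < 0:                           # guard against a reflection solution
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = dst_c - R @ src_c                              # translation follows from the rotation
    return R, t
```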
Yaw-only rotation method
This method, on the other hand, uses the initial radar-to-lidar transformation to express the lidar detections in the radar frame. We then compute the average yaw angle difference over all pairs, considering only the yaw rotation between the lidar and radar detections in the radar frame, to estimate a yaw-only rotation in the radar frame. Finally, we estimate the transformation between the lidar and radar by multiplying this yaw-only rotation with the initial radar-to-lidar transformation.
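A minimal sketch of the yaw-only estimate, assuming both matched point sets are already expressed in the radar frame:

```python
import numpy as np

def estimate_yaw_only(lidar_in_radar: np.ndarray, radar_pts: np.ndarray) -> float:
    """Return the average yaw offset (radians) between matched detections in the radar frame."""
    yaw_lidar = np.arctan2(lidar_in_radar[:, 1], lidar_in_radar[:, 0])   # bearing of each lidar point
    yaw_radar = np.arctan2(radar_pts[:, 1], radar_pts[:, 0])             # bearing of each radar point
    diff = np.arctan2(np.sin(yaw_radar - yaw_lidar), np.cos(yaw_radar - yaw_lidar))  # wrap to [-pi, pi]
    return float(diff.mean())
```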
Generally, the 2d SVD-based method is preferred when valid; otherwise, the yaw-only rotation method is used as the calibration output.
Diagram
Below, you can see how the algorithm is implemented in the `marker_radar_lidar_calibrator` package.
ROS Interfaces
Input
Name | Type | Description |
File truncated at 100 lines; see the full file.
Package Dependencies
System Dependencies
| Name |
| --- |
| eigen |
Dependant Packages
Launch files
- launch/calibrator.launch.xml
- ns [default: ]
- calibration_service_name [default: calibrate_radar_lidar]
- rviz [default: true]
- radar_parallel_frame [default: front_unit_base_link]
- input_lidar_pointcloud [default: /sensing/lidar/front_lower/pointcloud_raw]
- input_radar_objects [default: /sensing/radar/front_center/objects_raw]
- use_lidar_initial_crop_box_filter [default: false]
- lidar_initial_crop_box_min_x [default: -50.0]
- lidar_initial_crop_box_min_y [default: -50.0]
- lidar_initial_crop_box_min_z [default: -50.0]
- lidar_initial_crop_box_max_x [default: 50.0]
- lidar_initial_crop_box_max_y [default: 50.0]
- lidar_initial_crop_box_max_z [default: 50.0]
- use_radar_initial_crop_box_filter [default: false]
- radar_initial_crop_box_min_x [default: -50.0]
- radar_initial_crop_box_min_y [default: -50.0]
- radar_initial_crop_box_min_z [default: -50.0]
- radar_initial_crop_box_max_x [default: 50.0]
- radar_initial_crop_box_max_y [default: 50.0]
- radar_initial_crop_box_max_z [default: 50.0]
Messages
Services
Plugins
Package Summary
Tags | No category tags. |
Version | 0.0.1 |
License | BSD |
Build type | AMENT_CMAKE |
Use | RECOMMENDED |
Repository Summary
Description | sensor calibration tools for autonomous driving and robotics |
Checkout URI | https://github.com/tier4/calibrationtools.git |
VCS Type | git |
VCS Version | tier4/universe |
Last Updated | 2025-07-31 |
Dev Status | UNKNOWN |
Released | UNRELEASED |
Tags | computer-vision camera-calibration calibration autonomous-driving ros2 autoware sensor-calibration lidar-calibration robtics |
Contributing |
Help Wanted (-)
Good First Issues (-) Pull Requests to Review (-) |
Package Description
Additional Links
Maintainers
- Kenzo Lobos Tsunekawa
Authors
marker_radar_lidar_calibrator
A tutorial for this calibrator can be found here
Purpose
The package marker_radar_lidar_calibrator
performs extrinsic calibration between radar and 3d lidar sensors used in autonomous driving and robotics.
Currently, the calibrator only supports radars whose detection interface includes distance and azimuth angle, but do not offer elevation angle. For example, ARS408 radars can be calibrated with this tool. Also, note that the 3d lidar should have a high enough resolution to present several returns on the radar reflector (calibration target).
Inner-workings / Algorithms
The calibrator computes the center of the reflectors from the pointcloud and pairs them to the radar objects/tracks. Afterwards, both an SVD-based and a yaw-only rotation estimation algorithm are applied to these matched points to estimate the rigid transformation between sensors.
Due to the complexity of the problem, the process in split in the following steps: constructing a background model, extracting the foreground to detect reflectors, matching and filtering the lidar and radar detections, and estimating the rigid transformation between the radar and lidar sensors.
In what follows, we proceed to explain each step, making a point to put emphasis on the parts that the user must take into consideration to use phis package effectively.
*Note: although the radar can provide either detections and/or objects/tracks, we treat them as points in this package, and as such may refer to the radar pointcloud when needed.
Step 1: Background model construction
Detecting corner reflectors in an unknown environment, without imposing impractical restrictions on the reflectors themselves, the operators, or the environment, it a challenging problem. From the perspective of the lidar, radar reflectors may be confused with the floor or other metallic objects, and from the radar’s perspective, although corner reflectors are detected by the sensor (the user must confirm it themselves before attempting to use this tool!), other objects are also detected, with no practical way to tell them apart most of the time.
For these reasons, we avoid addressing the full problem an instead leverage the use of background models. To do this, the user must first present the sensors an environment with no radar reflectors nor any dynamic objects (mostly persons) in the space that is to be used for calibration. The tool will collect data for a set period of time or until there is no new information. For each modality, this data is then turned into voxels, marking the space of each occupied voxel as background
in the following steps.
Step 2: Foreground extraction and reflector detection
Once the background models for both sensors have been prepared, new data gets filtered using the background models to leave only the foreground.
Before placing radar reflectors, the foreground data should ideally be empty, and once placing them, only the reflectors and the people holding them should appear as foreground. In practice, however, even small variations in the load of the vehicle can cause ground points to escape the background models and be marked as foreground (a phenomenon exclusive to the lidars). To address this issue, we also employ a RANSAC-based ground segmentation algorithm to avoid these ground points being processed in downstream steps.
All foreground radar objects are automatically categorized as potential reflector detections. For foreground lidar points, however, the reflector detection process involves more steps:
- We first apply a clustering algorithm on the lidar foreground points and discard clusters with a number of points below a predefined threshold.
- Compute the highest point of each cluster and discard it if the highest point exceeds
reflector_max_height
. This is required to discard the clusters corresponding to users (we assume the operators are taller than the reflectors). - Finally, we average all points within a
reflector_radius
from the highest point to estimate the center point of the reflector.
The following images illustrate the background construction and foreground extraction process respectively. Although the radar pre-processing is presented, the process remains the same for the lidar.
During background model construction (left image), the blue voxels (presented as 2d grid for visualization purposes) are marked as background since sensor data is present in said voxels.
Once background model construction finishes and the foreground extraction process begins (right image), only points that fall outside previous background-marked voxels are considered as foreground. In this example, the points hitting the corner reflector and a human are marked as foreground (note that those points’ voxels, here marked in green are disjoint with those of the background).
Background model construction. |
Foreground extraction |
Step 3: Matching and filtering
The output of the previous step consists of two lists of points of potentials radar reflector candidates for each sensor. However, it is not possible to directly match points among these lists, and they are expected to contain a high number of false positives on both sensors.
To address this issue, we rely on a heuristic that leverages the accuracy of initial calibration. Usually, robot/vehicle CAD designs allow an initial calibration with an accuracy of a few centimeters/degrees, and direct sensor calibration is only used to refine it.
Using the initial radar-lidar calibration, we project each lidar corner reflector candidate into the radar coordinates and for each candidate we compute the closest candidate from the other modality. We consider real radar-lidar pairs of corner reflectors those pairs who are mutually their closest candidate.
Matches using this heuristic can still contain incorrect pairs and false positives, which is why we employ a Kalman filter to both improve the estimations and check for temporal consistency (false positives are not usually consistent in time).
Once matches’ estimations converge (using a covariance matrix criteria), they are added to the calibration list.
Step 4: Rigid transformation estimation
After matching detection pairs, we apply rigid transformation estimation algorithms to those pairs to estimate the transformation between the radar and lidar sensors. We currently support two algorithms: a 2d SVD-based method and a yaw-only rotation method.
2d SVD-based method
In this method, we reduce the problem to a 2d transformation estimation since radar detections lack a z component (elevation is fixed to zero).
However, because lidar detections are in the lidar frame and likely involve a 3d transformation (non-zero roll and\or pitch) to the radar frame, we transform the lidar detections to a frame dubbed the radar parallel
frame and then set their z component to zero. The radar parallel
frame has only a 2d transformation (x, y, yaw) relative to the radar frame. By dropping the z-component we explicitly give up on computing a 3D pose, which was not possible due to the nature of the radar.
In autonomous vehicles, radars are mounted in a way designed to minimize pitch and roll angles, maximizing their performance and measurement range. This means the radar sensors are aligned as parallel as possible to the ground plane, making the base_link
a suitable choice for the radar parallel
frame.
**Note: this assumes that the lidar to radar parallel
frame is either hardcoded or previously calibrated
Next, we apply the SVD-based rigid transformation estimation algorithm between the lidar detections in the radar parallel frame and the radar detections in the radar frame. This allows us to estimate the transformation between the lidar and radar by multiplying the radar-to-radar-parallel transformation (calibrated) with the radar-parallel-to-lidar transformation (known before-handed). The SVD-based algorithm, provided by PCL, leverages SVD to find the optimal rotation component and then computes the translation component based on the rotation.
Yaw-only rotation method
This method, on the other hand, utilizes the initial radar-to-lidar transformation to calculate lidar detections in the radar frame. We then calculate the average yaw angle difference of all pairs, considering only yaw rotation between the lidar and radar detections in the radar frame, to estimate a yaw-only rotation transformation in the radar frame. Finally, we estimate the transformation between the lidar and radar by multiplying the yaw-only rotation transformation with the initial radar-to-lidar transformation.
Generally, the 2d SVD-based method is preferred when valid; otherwise, the yaw-only rotation method is used as the calibration output.
Diagram
Below, you can see how the algorithm is implemented in the marker_radar_lidar_calibrator
package.
ROS Interfaces
Input
Name | Type | Description |
File truncated at 100 lines see the full file
Package Dependencies
System Dependencies
Name |
---|
eigen |
Dependant Packages
Launch files
- launch/calibrator.launch.xml
-
- ns [default: ]
- calibration_service_name [default: calibrate_radar_lidar]
- rviz [default: true]
- radar_parallel_frame [default: front_unit_base_link]
- input_lidar_pointcloud [default: /sensing/lidar/front_lower/pointcloud_raw]
- input_radar_objects [default: /sensing/radar/front_center/objects_raw]
- use_lidar_initial_crop_box_filter [default: false]
- lidar_initial_crop_box_min_x [default: -50.0]
- lidar_initial_crop_box_min_y [default: -50.0]
- lidar_initial_crop_box_min_z [default: -50.0]
- lidar_initial_crop_box_max_x [default: 50.0]
- lidar_initial_crop_box_max_y [default: 50.0]
- lidar_initial_crop_box_max_z [default: 50.0]
- use_radar_initial_crop_box_filter [default: false]
- radar_initial_crop_box_min_x [default: -50.0]
- radar_initial_crop_box_min_y [default: -50.0]
- radar_initial_crop_box_min_z [default: -50.0]
- radar_initial_crop_box_max_x [default: 50.0]
- radar_initial_crop_box_max_y [default: 50.0]
- radar_initial_crop_box_max_z [default: 50.0]
Messages
Services
Plugins
Recent questions tagged marker_radar_lidar_calibrator at Robotics Stack Exchange
Package Summary
Tags | No category tags. |
Version | 0.0.1 |
License | BSD |
Build type | AMENT_CMAKE |
Use | RECOMMENDED |
Repository Summary
Description | sensor calibration tools for autonomous driving and robotics |
Checkout URI | https://github.com/tier4/calibrationtools.git |
VCS Type | git |
VCS Version | tier4/universe |
Last Updated | 2025-07-31 |
Dev Status | UNKNOWN |
Released | UNRELEASED |
Tags | computer-vision camera-calibration calibration autonomous-driving ros2 autoware sensor-calibration lidar-calibration robtics |
Contributing |
Help Wanted (-)
Good First Issues (-) Pull Requests to Review (-) |
Package Description
Additional Links
Maintainers
- Kenzo Lobos Tsunekawa
Authors
marker_radar_lidar_calibrator
A tutorial for this calibrator can be found here
Purpose
The package marker_radar_lidar_calibrator
performs extrinsic calibration between radar and 3d lidar sensors used in autonomous driving and robotics.
Currently, the calibrator only supports radars whose detection interface includes distance and azimuth angle, but do not offer elevation angle. For example, ARS408 radars can be calibrated with this tool. Also, note that the 3d lidar should have a high enough resolution to present several returns on the radar reflector (calibration target).
Inner-workings / Algorithms
The calibrator computes the center of the reflectors from the pointcloud and pairs them to the radar objects/tracks. Afterwards, both an SVD-based and a yaw-only rotation estimation algorithm are applied to these matched points to estimate the rigid transformation between sensors.
Due to the complexity of the problem, the process in split in the following steps: constructing a background model, extracting the foreground to detect reflectors, matching and filtering the lidar and radar detections, and estimating the rigid transformation between the radar and lidar sensors.
In what follows, we proceed to explain each step, making a point to put emphasis on the parts that the user must take into consideration to use phis package effectively.
*Note: although the radar can provide either detections and/or objects/tracks, we treat them as points in this package, and as such may refer to the radar pointcloud when needed.
Step 1: Background model construction
Detecting corner reflectors in an unknown environment, without imposing impractical restrictions on the reflectors themselves, the operators, or the environment, it a challenging problem. From the perspective of the lidar, radar reflectors may be confused with the floor or other metallic objects, and from the radar’s perspective, although corner reflectors are detected by the sensor (the user must confirm it themselves before attempting to use this tool!), other objects are also detected, with no practical way to tell them apart most of the time.
For these reasons, we avoid addressing the full problem an instead leverage the use of background models. To do this, the user must first present the sensors an environment with no radar reflectors nor any dynamic objects (mostly persons) in the space that is to be used for calibration. The tool will collect data for a set period of time or until there is no new information. For each modality, this data is then turned into voxels, marking the space of each occupied voxel as background
in the following steps.
Step 2: Foreground extraction and reflector detection
Once the background models for both sensors have been prepared, new data gets filtered using the background models to leave only the foreground.
Before placing radar reflectors, the foreground data should ideally be empty, and once placing them, only the reflectors and the people holding them should appear as foreground. In practice, however, even small variations in the load of the vehicle can cause ground points to escape the background models and be marked as foreground (a phenomenon exclusive to the lidars). To address this issue, we also employ a RANSAC-based ground segmentation algorithm to avoid these ground points being processed in downstream steps.
All foreground radar objects are automatically categorized as potential reflector detections. For foreground lidar points, however, the reflector detection process involves more steps:
- We first apply a clustering algorithm on the lidar foreground points and discard clusters with a number of points below a predefined threshold.
- Compute the highest point of each cluster and discard it if the highest point exceeds
reflector_max_height
. This is required to discard the clusters corresponding to users (we assume the operators are taller than the reflectors). - Finally, we average all points within a
reflector_radius
from the highest point to estimate the center point of the reflector.
The following images illustrate the background construction and foreground extraction process respectively. Although the radar pre-processing is presented, the process remains the same for the lidar.
During background model construction (left image), the blue voxels (presented as 2d grid for visualization purposes) are marked as background since sensor data is present in said voxels.
Once background model construction finishes and the foreground extraction process begins (right image), only points that fall outside previous background-marked voxels are considered as foreground. In this example, the points hitting the corner reflector and a human are marked as foreground (note that those points’ voxels, here marked in green are disjoint with those of the background).
Background model construction. |
Foreground extraction |
Step 3: Matching and filtering
The output of the previous step consists of two lists of points of potentials radar reflector candidates for each sensor. However, it is not possible to directly match points among these lists, and they are expected to contain a high number of false positives on both sensors.
To address this issue, we rely on a heuristic that leverages the accuracy of initial calibration. Usually, robot/vehicle CAD designs allow an initial calibration with an accuracy of a few centimeters/degrees, and direct sensor calibration is only used to refine it.
Using the initial radar-lidar calibration, we project each lidar corner reflector candidate into the radar coordinates and for each candidate we compute the closest candidate from the other modality. We consider real radar-lidar pairs of corner reflectors those pairs who are mutually their closest candidate.
Matches using this heuristic can still contain incorrect pairs and false positives, which is why we employ a Kalman filter to both improve the estimations and check for temporal consistency (false positives are not usually consistent in time).
Once matches’ estimations converge (using a covariance matrix criteria), they are added to the calibration list.
Step 4: Rigid transformation estimation
After matching detection pairs, we apply rigid transformation estimation algorithms to those pairs to estimate the transformation between the radar and lidar sensors. We currently support two algorithms: a 2d SVD-based method and a yaw-only rotation method.
2d SVD-based method
In this method, we reduce the problem to a 2d transformation estimation since radar detections lack a z component (elevation is fixed to zero).
However, because lidar detections are in the lidar frame and likely involve a 3d transformation (non-zero roll and\or pitch) to the radar frame, we transform the lidar detections to a frame dubbed the radar parallel
frame and then set their z component to zero. The radar parallel
frame has only a 2d transformation (x, y, yaw) relative to the radar frame. By dropping the z-component we explicitly give up on computing a 3D pose, which was not possible due to the nature of the radar.
In autonomous vehicles, radars are mounted in a way designed to minimize pitch and roll angles, maximizing their performance and measurement range. This means the radar sensors are aligned as parallel as possible to the ground plane, making the base_link
a suitable choice for the radar parallel
frame.
**Note: this assumes that the lidar to radar parallel
frame is either hardcoded or previously calibrated
Next, we apply the SVD-based rigid transformation estimation algorithm between the lidar detections in the radar parallel frame and the radar detections in the radar frame. This allows us to estimate the transformation between the lidar and radar by multiplying the radar-to-radar-parallel transformation (calibrated) with the radar-parallel-to-lidar transformation (known before-handed). The SVD-based algorithm, provided by PCL, leverages SVD to find the optimal rotation component and then computes the translation component based on the rotation.
Yaw-only rotation method
This method, on the other hand, utilizes the initial radar-to-lidar transformation to calculate lidar detections in the radar frame. We then calculate the average yaw angle difference of all pairs, considering only yaw rotation between the lidar and radar detections in the radar frame, to estimate a yaw-only rotation transformation in the radar frame. Finally, we estimate the transformation between the lidar and radar by multiplying the yaw-only rotation transformation with the initial radar-to-lidar transformation.
Generally, the 2d SVD-based method is preferred when valid; otherwise, the yaw-only rotation method is used as the calibration output.
Diagram
Below, you can see how the algorithm is implemented in the marker_radar_lidar_calibrator
package.
ROS Interfaces
Input
Name | Type | Description |
File truncated at 100 lines see the full file
Package Dependencies
System Dependencies
Name |
---|
eigen |
Dependant Packages
Launch files
- launch/calibrator.launch.xml
-
- ns [default: ]
- calibration_service_name [default: calibrate_radar_lidar]
- rviz [default: true]
- radar_parallel_frame [default: front_unit_base_link]
- input_lidar_pointcloud [default: /sensing/lidar/front_lower/pointcloud_raw]
- input_radar_objects [default: /sensing/radar/front_center/objects_raw]
- use_lidar_initial_crop_box_filter [default: false]
- lidar_initial_crop_box_min_x [default: -50.0]
- lidar_initial_crop_box_min_y [default: -50.0]
- lidar_initial_crop_box_min_z [default: -50.0]
- lidar_initial_crop_box_max_x [default: 50.0]
- lidar_initial_crop_box_max_y [default: 50.0]
- lidar_initial_crop_box_max_z [default: 50.0]
- use_radar_initial_crop_box_filter [default: false]
- radar_initial_crop_box_min_x [default: -50.0]
- radar_initial_crop_box_min_y [default: -50.0]
- radar_initial_crop_box_min_z [default: -50.0]
- radar_initial_crop_box_max_x [default: 50.0]
- radar_initial_crop_box_max_y [default: 50.0]
- radar_initial_crop_box_max_z [default: 50.0]
Messages
Services
Plugins
Recent questions tagged marker_radar_lidar_calibrator at Robotics Stack Exchange
Package Summary
Tags | No category tags. |
Version | 0.0.1 |
License | BSD |
Build type | AMENT_CMAKE |
Use | RECOMMENDED |
Repository Summary
Description | sensor calibration tools for autonomous driving and robotics |
Checkout URI | https://github.com/tier4/calibrationtools.git |
VCS Type | git |
VCS Version | tier4/universe |
Last Updated | 2025-07-31 |
Dev Status | UNKNOWN |
Released | UNRELEASED |
Tags | computer-vision camera-calibration calibration autonomous-driving ros2 autoware sensor-calibration lidar-calibration robtics |
Contributing |
Help Wanted (-)
Good First Issues (-) Pull Requests to Review (-) |
Package Description
Additional Links
Maintainers
- Kenzo Lobos Tsunekawa
Authors
marker_radar_lidar_calibrator
A tutorial for this calibrator can be found here
Purpose
The package marker_radar_lidar_calibrator
performs extrinsic calibration between radar and 3d lidar sensors used in autonomous driving and robotics.
Currently, the calibrator only supports radars whose detection interface includes distance and azimuth angle, but do not offer elevation angle. For example, ARS408 radars can be calibrated with this tool. Also, note that the 3d lidar should have a high enough resolution to present several returns on the radar reflector (calibration target).
Inner-workings / Algorithms
The calibrator computes the center of the reflectors from the pointcloud and pairs them to the radar objects/tracks. Afterwards, both an SVD-based and a yaw-only rotation estimation algorithm are applied to these matched points to estimate the rigid transformation between sensors.
Due to the complexity of the problem, the process in split in the following steps: constructing a background model, extracting the foreground to detect reflectors, matching and filtering the lidar and radar detections, and estimating the rigid transformation between the radar and lidar sensors.
In what follows, we proceed to explain each step, making a point to put emphasis on the parts that the user must take into consideration to use phis package effectively.
*Note: although the radar can provide either detections and/or objects/tracks, we treat them as points in this package, and as such may refer to the radar pointcloud when needed.
Step 1: Background model construction
Detecting corner reflectors in an unknown environment, without imposing impractical restrictions on the reflectors themselves, the operators, or the environment, it a challenging problem. From the perspective of the lidar, radar reflectors may be confused with the floor or other metallic objects, and from the radar’s perspective, although corner reflectors are detected by the sensor (the user must confirm it themselves before attempting to use this tool!), other objects are also detected, with no practical way to tell them apart most of the time.
For these reasons, we avoid addressing the full problem an instead leverage the use of background models. To do this, the user must first present the sensors an environment with no radar reflectors nor any dynamic objects (mostly persons) in the space that is to be used for calibration. The tool will collect data for a set period of time or until there is no new information. For each modality, this data is then turned into voxels, marking the space of each occupied voxel as background
in the following steps.
Step 2: Foreground extraction and reflector detection
Once the background models for both sensors have been prepared, new data gets filtered using the background models to leave only the foreground.
Before placing radar reflectors, the foreground data should ideally be empty, and once placing them, only the reflectors and the people holding them should appear as foreground. In practice, however, even small variations in the load of the vehicle can cause ground points to escape the background models and be marked as foreground (a phenomenon exclusive to the lidars). To address this issue, we also employ a RANSAC-based ground segmentation algorithm to avoid these ground points being processed in downstream steps.
All foreground radar objects are automatically categorized as potential reflector detections. For foreground lidar points, however, the reflector detection process involves more steps:
- We first apply a clustering algorithm on the lidar foreground points and discard clusters with a number of points below a predefined threshold.
- Compute the highest point of each cluster and discard it if the highest point exceeds
reflector_max_height
. This is required to discard the clusters corresponding to users (we assume the operators are taller than the reflectors). - Finally, we average all points within a
reflector_radius
from the highest point to estimate the center point of the reflector.
The following images illustrate the background construction and foreground extraction process respectively. Although the radar pre-processing is presented, the process remains the same for the lidar.
During background model construction (left image), the blue voxels (presented as 2d grid for visualization purposes) are marked as background since sensor data is present in said voxels.
Once background model construction finishes and the foreground extraction process begins (right image), only points that fall outside previous background-marked voxels are considered as foreground. In this example, the points hitting the corner reflector and a human are marked as foreground (note that those points’ voxels, here marked in green are disjoint with those of the background).
Background model construction. |
Foreground extraction |
Step 3: Matching and filtering
The output of the previous step consists of two lists of points of potentials radar reflector candidates for each sensor. However, it is not possible to directly match points among these lists, and they are expected to contain a high number of false positives on both sensors.
To address this issue, we rely on a heuristic that leverages the accuracy of initial calibration. Usually, robot/vehicle CAD designs allow an initial calibration with an accuracy of a few centimeters/degrees, and direct sensor calibration is only used to refine it.
Using the initial radar-lidar calibration, we project each lidar corner reflector candidate into the radar coordinates and for each candidate we compute the closest candidate from the other modality. We consider real radar-lidar pairs of corner reflectors those pairs who are mutually their closest candidate.
Matches using this heuristic can still contain incorrect pairs and false positives, which is why we employ a Kalman filter to both improve the estimations and check for temporal consistency (false positives are not usually consistent in time).
Once matches’ estimations converge (using a covariance matrix criteria), they are added to the calibration list.
Step 4: Rigid transformation estimation
After matching detection pairs, we apply rigid transformation estimation algorithms to those pairs to estimate the transformation between the radar and lidar sensors. We currently support two algorithms: a 2d SVD-based method and a yaw-only rotation method.
2d SVD-based method
In this method, we reduce the problem to a 2d transformation estimation since radar detections lack a z component (elevation is fixed to zero).
However, because lidar detections are in the lidar frame and likely involve a 3d transformation (non-zero roll and\or pitch) to the radar frame, we transform the lidar detections to a frame dubbed the radar parallel
frame and then set their z component to zero. The radar parallel
frame has only a 2d transformation (x, y, yaw) relative to the radar frame. By dropping the z-component we explicitly give up on computing a 3D pose, which was not possible due to the nature of the radar.
In autonomous vehicles, radars are mounted in a way designed to minimize pitch and roll angles, maximizing their performance and measurement range. This means the radar sensors are aligned as parallel as possible to the ground plane, making the base_link
a suitable choice for the radar parallel
frame.
**Note: this assumes that the lidar to radar parallel
frame is either hardcoded or previously calibrated
Next, we apply the SVD-based rigid transformation estimation algorithm between the lidar detections in the radar parallel frame and the radar detections in the radar frame. This allows us to estimate the transformation between the lidar and radar by multiplying the radar-to-radar-parallel transformation (calibrated) with the radar-parallel-to-lidar transformation (known before-handed). The SVD-based algorithm, provided by PCL, leverages SVD to find the optimal rotation component and then computes the translation component based on the rotation.
Yaw-only rotation method
This method, on the other hand, utilizes the initial radar-to-lidar transformation to calculate lidar detections in the radar frame. We then calculate the average yaw angle difference of all pairs, considering only yaw rotation between the lidar and radar detections in the radar frame, to estimate a yaw-only rotation transformation in the radar frame. Finally, we estimate the transformation between the lidar and radar by multiplying the yaw-only rotation transformation with the initial radar-to-lidar transformation.
Generally, the 2d SVD-based method is preferred when valid; otherwise, the yaw-only rotation method is used as the calibration output.
Diagram
Below, you can see how the algorithm is implemented in the marker_radar_lidar_calibrator
package.
ROS Interfaces
Input
Name | Type | Description |
File truncated at 100 lines see the full file
Package Dependencies
System Dependencies
Name |
---|
eigen |
Dependant Packages
Launch files
- launch/calibrator.launch.xml
-
- ns [default: ]
- calibration_service_name [default: calibrate_radar_lidar]
- rviz [default: true]
- radar_parallel_frame [default: front_unit_base_link]
- input_lidar_pointcloud [default: /sensing/lidar/front_lower/pointcloud_raw]
- input_radar_objects [default: /sensing/radar/front_center/objects_raw]
- use_lidar_initial_crop_box_filter [default: false]
- lidar_initial_crop_box_min_x [default: -50.0]
- lidar_initial_crop_box_min_y [default: -50.0]
- lidar_initial_crop_box_min_z [default: -50.0]
- lidar_initial_crop_box_max_x [default: 50.0]
- lidar_initial_crop_box_max_y [default: 50.0]
- lidar_initial_crop_box_max_z [default: 50.0]
- use_radar_initial_crop_box_filter [default: false]
- radar_initial_crop_box_min_x [default: -50.0]
- radar_initial_crop_box_min_y [default: -50.0]
- radar_initial_crop_box_min_z [default: -50.0]
- radar_initial_crop_box_max_x [default: 50.0]
- radar_initial_crop_box_max_y [default: 50.0]
- radar_initial_crop_box_max_z [default: 50.0]
Messages
Services
Plugins
Recent questions tagged marker_radar_lidar_calibrator at Robotics Stack Exchange
Package Summary
Tags | No category tags. |
Version | 0.0.1 |
License | BSD |
Build type | AMENT_CMAKE |
Use | RECOMMENDED |
Repository Summary
Description | sensor calibration tools for autonomous driving and robotics |
Checkout URI | https://github.com/tier4/calibrationtools.git |
VCS Type | git |
VCS Version | tier4/universe |
Last Updated | 2025-07-31 |
Dev Status | UNKNOWN |
Released | UNRELEASED |
Tags | computer-vision camera-calibration calibration autonomous-driving ros2 autoware sensor-calibration lidar-calibration robtics |
Contributing |
Help Wanted (-)
Good First Issues (-) Pull Requests to Review (-) |
Package Description
Additional Links
Maintainers
- Kenzo Lobos Tsunekawa
Authors
marker_radar_lidar_calibrator
A tutorial for this calibrator can be found here
Purpose
The package marker_radar_lidar_calibrator
performs extrinsic calibration between radar and 3d lidar sensors used in autonomous driving and robotics.
Currently, the calibrator only supports radars whose detection interface includes distance and azimuth angle, but do not offer elevation angle. For example, ARS408 radars can be calibrated with this tool. Also, note that the 3d lidar should have a high enough resolution to present several returns on the radar reflector (calibration target).
Inner-workings / Algorithms
The calibrator computes the center of the reflectors from the pointcloud and pairs them to the radar objects/tracks. Afterwards, both an SVD-based and a yaw-only rotation estimation algorithm are applied to these matched points to estimate the rigid transformation between sensors.
Due to the complexity of the problem, the process in split in the following steps: constructing a background model, extracting the foreground to detect reflectors, matching and filtering the lidar and radar detections, and estimating the rigid transformation between the radar and lidar sensors.
In what follows, we proceed to explain each step, making a point to put emphasis on the parts that the user must take into consideration to use phis package effectively.
*Note: although the radar can provide either detections and/or objects/tracks, we treat them as points in this package, and as such may refer to the radar pointcloud when needed.
Step 1: Background model construction
Detecting corner reflectors in an unknown environment, without imposing impractical restrictions on the reflectors themselves, the operators, or the environment, it a challenging problem. From the perspective of the lidar, radar reflectors may be confused with the floor or other metallic objects, and from the radar’s perspective, although corner reflectors are detected by the sensor (the user must confirm it themselves before attempting to use this tool!), other objects are also detected, with no practical way to tell them apart most of the time.
For these reasons, we avoid addressing the full problem an instead leverage the use of background models. To do this, the user must first present the sensors an environment with no radar reflectors nor any dynamic objects (mostly persons) in the space that is to be used for calibration. The tool will collect data for a set period of time or until there is no new information. For each modality, this data is then turned into voxels, marking the space of each occupied voxel as background
in the following steps.
Step 2: Foreground extraction and reflector detection
Once the background models for both sensors have been prepared, new data gets filtered using the background models to leave only the foreground.
Before placing radar reflectors, the foreground data should ideally be empty, and once placing them, only the reflectors and the people holding them should appear as foreground. In practice, however, even small variations in the load of the vehicle can cause ground points to escape the background models and be marked as foreground (a phenomenon exclusive to the lidars). To address this issue, we also employ a RANSAC-based ground segmentation algorithm to avoid these ground points being processed in downstream steps.
All foreground radar objects are automatically categorized as potential reflector detections. For foreground lidar points, however, the reflector detection process involves more steps:
- We first apply a clustering algorithm on the lidar foreground points and discard clusters with a number of points below a predefined threshold.
- Compute the highest point of each cluster and discard it if the highest point exceeds
reflector_max_height
. This is required to discard the clusters corresponding to users (we assume the operators are taller than the reflectors). - Finally, we average all points within a
reflector_radius
from the highest point to estimate the center point of the reflector.
The following images illustrate the background construction and foreground extraction process respectively. Although the radar pre-processing is presented, the process remains the same for the lidar.
During background model construction (left image), the blue voxels (presented as 2d grid for visualization purposes) are marked as background since sensor data is present in said voxels.
Once background model construction finishes and the foreground extraction process begins (right image), only points that fall outside previous background-marked voxels are considered as foreground. In this example, the points hitting the corner reflector and a human are marked as foreground (note that those points’ voxels, here marked in green are disjoint with those of the background).
Background model construction. |
Foreground extraction |
Step 3: Matching and filtering
The output of the previous step consists of two lists of points of potentials radar reflector candidates for each sensor. However, it is not possible to directly match points among these lists, and they are expected to contain a high number of false positives on both sensors.
To address this issue, we rely on a heuristic that leverages the accuracy of initial calibration. Usually, robot/vehicle CAD designs allow an initial calibration with an accuracy of a few centimeters/degrees, and direct sensor calibration is only used to refine it.
Using the initial radar-lidar calibration, we project each lidar corner reflector candidate into the radar coordinates and for each candidate we compute the closest candidate from the other modality. We consider real radar-lidar pairs of corner reflectors those pairs who are mutually their closest candidate.
Matches using this heuristic can still contain incorrect pairs and false positives, which is why we employ a Kalman filter to both improve the estimations and check for temporal consistency (false positives are not usually consistent in time).
Once matches’ estimations converge (using a covariance matrix criteria), they are added to the calibration list.
Step 4: Rigid transformation estimation
After matching detection pairs, we apply rigid transformation estimation algorithms to those pairs to estimate the transformation between the radar and lidar sensors. We currently support two algorithms: a 2d SVD-based method and a yaw-only rotation method.
2d SVD-based method
In this method, we reduce the problem to a 2d transformation estimation since radar detections lack a z component (elevation is fixed to zero).
However, because lidar detections are in the lidar frame and likely involve a 3d transformation (non-zero roll and\or pitch) to the radar frame, we transform the lidar detections to a frame dubbed the radar parallel
frame and then set their z component to zero. The radar parallel
frame has only a 2d transformation (x, y, yaw) relative to the radar frame. By dropping the z-component we explicitly give up on computing a 3D pose, which was not possible due to the nature of the radar.
In autonomous vehicles, radars are mounted in a way designed to minimize pitch and roll angles, maximizing their performance and measurement range. This means the radar sensors are aligned as parallel as possible to the ground plane, making the base_link
a suitable choice for the radar parallel
frame.
**Note: this assumes that the lidar to radar parallel
frame is either hardcoded or previously calibrated
Next, we apply the SVD-based rigid transformation estimation algorithm between the lidar detections in the radar parallel frame and the radar detections in the radar frame. This allows us to estimate the transformation between the lidar and radar by multiplying the radar-to-radar-parallel transformation (calibrated) with the radar-parallel-to-lidar transformation (known before-handed). The SVD-based algorithm, provided by PCL, leverages SVD to find the optimal rotation component and then computes the translation component based on the rotation.
Yaw-only rotation method
This method, on the other hand, utilizes the initial radar-to-lidar transformation to calculate lidar detections in the radar frame. We then calculate the average yaw angle difference of all pairs, considering only yaw rotation between the lidar and radar detections in the radar frame, to estimate a yaw-only rotation transformation in the radar frame. Finally, we estimate the transformation between the lidar and radar by multiplying the yaw-only rotation transformation with the initial radar-to-lidar transformation.
Generally, the 2d SVD-based method is preferred when valid; otherwise, the yaw-only rotation method is used as the calibration output.
Diagram
Below, you can see how the algorithm is implemented in the marker_radar_lidar_calibrator
package.
ROS Interfaces
Input
Name | Type | Description |
File truncated at 100 lines see the full file
Package Dependencies
System Dependencies
Name |
---|
eigen |
Dependant Packages
Launch files
- launch/calibrator.launch.xml
-
- ns [default: ]
- calibration_service_name [default: calibrate_radar_lidar]
- rviz [default: true]
- radar_parallel_frame [default: front_unit_base_link]
- input_lidar_pointcloud [default: /sensing/lidar/front_lower/pointcloud_raw]
- input_radar_objects [default: /sensing/radar/front_center/objects_raw]
- use_lidar_initial_crop_box_filter [default: false]
- lidar_initial_crop_box_min_x [default: -50.0]
- lidar_initial_crop_box_min_y [default: -50.0]
- lidar_initial_crop_box_min_z [default: -50.0]
- lidar_initial_crop_box_max_x [default: 50.0]
- lidar_initial_crop_box_max_y [default: 50.0]
- lidar_initial_crop_box_max_z [default: 50.0]
- use_radar_initial_crop_box_filter [default: false]
- radar_initial_crop_box_min_x [default: -50.0]
- radar_initial_crop_box_min_y [default: -50.0]
- radar_initial_crop_box_min_z [default: -50.0]
- radar_initial_crop_box_max_x [default: 50.0]
- radar_initial_crop_box_max_y [default: 50.0]
- radar_initial_crop_box_max_z [default: 50.0]
All foreground radar objects are automatically categorized as potential reflector detections. For foreground lidar points, however, the reflector detection process involves more steps:
- We first apply a clustering algorithm to the lidar foreground points and discard clusters whose number of points is below a predefined threshold.
- We then compute the highest point of each cluster and discard the cluster if this point exceeds `reflector_max_height`. This is required to discard clusters corresponding to the operators (we assume the operators are taller than the reflectors).
- Finally, we average all points within `reflector_radius` of the highest point to estimate the center point of the reflector, as sketched in the example below.
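To make the pipeline above concrete, here is a minimal C++/PCL sketch of the lidar-side reflector detection. It assumes the foreground is available as a PCL point cloud; the function name, clustering parameters, and thresholds are illustrative and not taken from the package itself.

```cpp
// Sketch only: cluster the lidar foreground, filter by height, and average
// points around each cluster's highest point to get reflector centers.
#include <pcl/point_cloud.h>
#include <pcl/point_types.h>
#include <pcl/search/kdtree.h>
#include <pcl/segmentation/extract_clusters.h>
#include <Eigen/Core>
#include <vector>

std::vector<Eigen::Vector3f> detectReflectorCenters(
  const pcl::PointCloud<pcl::PointXYZ>::Ptr & foreground,
  double cluster_tolerance, int min_cluster_size,
  double reflector_max_height, double reflector_radius)
{
  // 1) Cluster the foreground points and drop clusters that are too small.
  pcl::search::KdTree<pcl::PointXYZ>::Ptr tree(new pcl::search::KdTree<pcl::PointXYZ>());
  tree->setInputCloud(foreground);
  std::vector<pcl::PointIndices> clusters;
  pcl::EuclideanClusterExtraction<pcl::PointXYZ> ec;
  ec.setClusterTolerance(cluster_tolerance);
  ec.setMinClusterSize(min_cluster_size);
  ec.setSearchMethod(tree);
  ec.setInputCloud(foreground);
  ec.extract(clusters);

  std::vector<Eigen::Vector3f> centers;
  for (const auto & indices : clusters) {
    // 2) Find the highest point; discard clusters taller than the reflector.
    const pcl::PointXYZ * top = nullptr;
    for (int i : indices.indices) {
      const auto & p = foreground->points[i];
      if (!top || p.z > top->z) top = &p;
    }
    if (!top || top->z > reflector_max_height) continue;

    // 3) Average the points within reflector_radius of the highest point.
    Eigen::Vector3f sum = Eigen::Vector3f::Zero();
    int n = 0;
    for (int i : indices.indices) {
      const auto & p = foreground->points[i];
      if ((p.getVector3fMap() - top->getVector3fMap()).norm() <= reflector_radius) {
        sum += p.getVector3fMap();
        ++n;
      }
    }
    if (n > 0) centers.push_back(sum / static_cast<float>(n));
  }
  return centers;
}
```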
The following images illustrate the background construction and foreground extraction processes. Although they show the radar pre-processing, the process is the same for the lidar.
During background model construction (left image), the blue voxels (drawn as a 2d grid for visualization purposes) are marked as background, since sensor data is present in those voxels.
Once background model construction finishes and the foreground extraction process begins (right image), only points that fall outside the previously background-marked voxels are considered foreground. In this example, the points hitting the corner reflector and a human are marked as foreground (note that those points’ voxels, here marked in green, are disjoint from those of the background).
Background model construction (left) | Foreground extraction (right)
Step 3: Matching and filtering
The output of the previous step consists of two lists of potential radar reflector candidates, one per sensor. However, it is not possible to match points between these lists directly, and both lists are expected to contain a high number of false positives.
To address this issue, we rely on a heuristic that leverages the accuracy of the initial calibration. Usually, robot/vehicle CAD designs allow an initial calibration with an accuracy of a few centimeters/degrees, and direct sensor calibration is only used to refine it.
Using the initial radar-lidar calibration, we project each lidar corner reflector candidate into radar coordinates and, for each candidate, compute the closest candidate from the other modality. Pairs that are mutually each other’s closest candidate are considered real radar-lidar reflector pairs.
Matches produced by this heuristic can still contain incorrect pairs and false positives, which is why we employ a Kalman filter to both improve the estimations and check for temporal consistency (false positives are usually not consistent over time).
Once a match’s estimation converges (using a covariance matrix criterion), it is added to the calibration list.
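As an illustration of the mutual-closest-candidate heuristic described above (before the Kalman filtering stage), the following C++ sketch matches lidar candidates, already projected into the radar frame, against radar detections. The function name and the `max_match_distance` gate are hypothetical, not part of the package's API.

```cpp
// Sketch only: keep (lidar, radar) pairs that are mutually each other's
// closest candidate and closer than a maximum matching distance.
#include <Eigen/Core>
#include <limits>
#include <utility>
#include <vector>

std::vector<std::pair<int, int>> matchCandidates(
  const std::vector<Eigen::Vector3d> & lidar_in_radar,
  const std::vector<Eigen::Vector3d> & radar_detections,
  double max_match_distance)
{
  auto closest = [](const Eigen::Vector3d & p, const std::vector<Eigen::Vector3d> & candidates) {
    int best = -1;
    double best_d = std::numeric_limits<double>::max();
    for (int i = 0; i < static_cast<int>(candidates.size()); ++i) {
      const double d = (candidates[i] - p).norm();
      if (d < best_d) { best_d = d; best = i; }
    }
    return std::make_pair(best, best_d);
  };

  std::vector<std::pair<int, int>> matches;  // (lidar index, radar index)
  for (int i = 0; i < static_cast<int>(lidar_in_radar.size()); ++i) {
    auto [j, d] = closest(lidar_in_radar[i], radar_detections);
    if (j < 0 || d > max_match_distance) continue;
    // Keep the pair only if the lidar candidate is also the radar detection's closest.
    auto [i_back, d_back] = closest(radar_detections[j], lidar_in_radar);
    if (i_back == i) matches.emplace_back(i, j);
  }
  return matches;
}
```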
Step 4: Rigid transformation estimation
After matching detection pairs, we apply rigid transformation estimation algorithms to those pairs to estimate the transformation between the radar and lidar sensors. We currently support two algorithms: a 2d SVD-based method and a yaw-only rotation method.
2d SVD-based method
In this method, we reduce the problem to a 2d transformation estimation since radar detections lack a z component (elevation is fixed to zero).
However, because lidar detections are in the lidar frame and generally involve a 3d transformation (non-zero roll and/or pitch) to the radar frame, we first transform the lidar detections into a frame dubbed the `radar parallel` frame and then set their z component to zero. The `radar parallel` frame has only a 2d transformation (x, y, yaw) relative to the radar frame. By dropping the z component we explicitly give up on computing a 3d pose, which is not possible anyway due to the nature of the radar.
In autonomous vehicles, radars are mounted in a way that minimizes their pitch and roll angles, maximizing their performance and measurement range. This means the radar sensors are aligned as parallel as possible to the ground plane, making `base_link` a suitable choice for the `radar parallel` frame.
**Note:** this assumes that the lidar to `radar parallel` frame transformation is either hardcoded or previously calibrated.
Next, we apply the SVD-based rigid transformation estimation algorithm between the lidar detections in the `radar parallel` frame and the radar detections in the radar frame. This allows us to estimate the transformation between the lidar and radar by multiplying the radar-to-radar-parallel transformation (calibrated) with the radar-parallel-to-lidar transformation (known beforehand). The SVD-based algorithm, provided by PCL, leverages SVD to find the optimal rotation component and then computes the translation component based on that rotation.
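The following sketch shows how such an estimation could look with PCL's `TransformationEstimationSVD`, assuming the matched detections are stored as two index-aligned point clouds. Frame names follow the text above, but the function and variable names are illustrative rather than the package's actual code.

```cpp
// Sketch only: estimate the radar <- radar_parallel transform from paired,
// z-flattened detections, then compose with the known radar_parallel <- lidar
// transform to obtain radar <- lidar.
#include <pcl/point_cloud.h>
#include <pcl/point_types.h>
#include <pcl/registration/transformation_estimation_svd.h>
#include <Eigen/Core>

Eigen::Matrix4f estimateRadarToLidar(
  pcl::PointCloud<pcl::PointXYZ> lidar_in_radar_parallel,   // copied: z is zeroed below
  const pcl::PointCloud<pcl::PointXYZ> & radar_detections,  // z is already zero for the radar
  const Eigen::Matrix4f & radar_parallel_from_lidar)        // known beforehand
{
  for (auto & p : lidar_in_radar_parallel.points) p.z = 0.0f;

  pcl::registration::TransformationEstimationSVD<pcl::PointXYZ, pcl::PointXYZ> estimator;
  Eigen::Matrix4f radar_from_radar_parallel = Eigen::Matrix4f::Identity();
  // Finds T such that radar_detections ~= T * lidar_in_radar_parallel (index-wise pairs).
  estimator.estimateRigidTransformation(
    lidar_in_radar_parallel, radar_detections, radar_from_radar_parallel);

  // radar <- lidar = (radar <- radar_parallel) * (radar_parallel <- lidar)
  return radar_from_radar_parallel * radar_parallel_from_lidar;
}
```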
Yaw-only rotation method
This method instead uses the initial radar-to-lidar transformation to express the lidar detections in the radar frame. We then compute the average yaw angle difference over all pairs of lidar and radar detections in the radar frame to estimate a yaw-only rotation in that frame. Finally, we estimate the transformation between the lidar and radar by multiplying this yaw-only rotation with the initial radar-to-lidar transformation.
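A minimal sketch of the yaw-only idea, using a circular mean of the per-pair yaw differences, could look as follows. The function name and the assumption of index-aligned inputs are ours, not the package's.

```cpp
// Sketch only: average the per-pair yaw differences with a circular mean and
// build a yaw-only correction; the final estimate would be
// correction * initial_radar_from_lidar.
#include <Eigen/Geometry>
#include <cmath>
#include <vector>

Eigen::Isometry3d estimateYawOnlyCorrection(
  const std::vector<Eigen::Vector3d> & lidar_in_radar,
  const std::vector<Eigen::Vector3d> & radar_detections)  // paired by index
{
  double sum_sin = 0.0, sum_cos = 0.0;
  for (std::size_t i = 0; i < lidar_in_radar.size(); ++i) {
    const double yaw_lidar = std::atan2(lidar_in_radar[i].y(), lidar_in_radar[i].x());
    const double yaw_radar = std::atan2(radar_detections[i].y(), radar_detections[i].x());
    const double delta = yaw_radar - yaw_lidar;
    sum_sin += std::sin(delta);
    sum_cos += std::cos(delta);
  }
  const double mean_delta = std::atan2(sum_sin, sum_cos);  // circular mean of differences

  Eigen::Isometry3d correction = Eigen::Isometry3d::Identity();
  correction.rotate(Eigen::AngleAxisd(mean_delta, Eigen::Vector3d::UnitZ()));
  return correction;
}
```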
Generally, the 2d SVD-based method is preferred when valid; otherwise, the yaw-only rotation method is used as the calibration output.
Diagram
Below, you can see how the algorithm is implemented in the `marker_radar_lidar_calibrator` package.
ROS Interfaces
Input
Name | Type | Description |
File truncated at 100 lines; see the full file.
Package Dependencies
System Dependencies
Name |
---|
eigen |
Dependant Packages
Launch files
- launch/calibrator.launch.xml
-
- ns [default: ]
- calibration_service_name [default: calibrate_radar_lidar]
- rviz [default: true]
- radar_parallel_frame [default: front_unit_base_link]
- input_lidar_pointcloud [default: /sensing/lidar/front_lower/pointcloud_raw]
- input_radar_objects [default: /sensing/radar/front_center/objects_raw]
- use_lidar_initial_crop_box_filter [default: false]
- lidar_initial_crop_box_min_x [default: -50.0]
- lidar_initial_crop_box_min_y [default: -50.0]
- lidar_initial_crop_box_min_z [default: -50.0]
- lidar_initial_crop_box_max_x [default: 50.0]
- lidar_initial_crop_box_max_y [default: 50.0]
- lidar_initial_crop_box_max_z [default: 50.0]
- use_radar_initial_crop_box_filter [default: false]
- radar_initial_crop_box_min_x [default: -50.0]
- radar_initial_crop_box_min_y [default: -50.0]
- radar_initial_crop_box_min_z [default: -50.0]
- radar_initial_crop_box_max_x [default: 50.0]
- radar_initial_crop_box_max_y [default: 50.0]
- radar_initial_crop_box_max_z [default: 50.0]
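Assuming a standard ROS 2 setup and that the default topic and frame names above match your vehicle, the calibrator can typically be started with a command along the lines of `ros2 launch marker_radar_lidar_calibrator calibrator.launch.xml radar_parallel_frame:=base_link input_lidar_pointcloud:=/sensing/lidar/front_lower/pointcloud_raw input_radar_objects:=/sensing/radar/front_center/objects_raw`, overriding any of the listed arguments as needed.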