Package Summary
| Tags | No category tags. |
| Version | 0.0.1 |
| License | BSD |
| Build type | AMENT_CMAKE |
| Use | RECOMMENDED |
Repository Summary
| Description | sensor calibration tools for autonomous driving and robotics |
| Checkout URI | https://github.com/tier4/calibrationtools.git |
| VCS Type | git |
| VCS Version | tier4/universe |
| Last Updated | 2025-07-31 |
| Dev Status | UNKNOWN |
| Released | UNRELEASED |
| Tags | computer-vision camera-calibration calibration autonomous-driving ros2 autoware sensor-calibration lidar-calibration robotics |
Package Description
Additional Links
Maintainers
- Kenzo Lobos Tsunekawa
Authors
mapping_based_calibrator
A tutorial for this calibrator can be found here.
Purpose
The package `mapping_based_calibrator` performs extrinsic calibration between 3d lidar sensors, as well as the (partial) calibration between a single 3d lidar and the `base_link`.
Inner-workings / Algorithms
lidar-lidar calibration
The calibrator is designed to estimate the transformation between multiple lidar sensors. It does so by moving the robot/vehicle, creating a trajectory in which lidars observe the same features, and formulating the calibration problem as a pointcloud registration one.
One of the lidars is used to map the trajectory (denoted as the mapping lidar), and once a map has been created with this lidar, the other lidars (denoted as calibration lidars) are registered against an augmented mapping lidar pointcloud (built through the mapping process), which is equivalent to lidar-lidar calibration between the mapping lidar and each of the calibration lidars individually.
The calibration process encompasses three main steps: constructing a map with the mapping lidar by moving the vehicle, preprocessing and selecting the calibration data, and finally performing lidar-lidar calibration via pointcloud registration.
General notes about the environment, trajectory, and sensors used:
- The environment should contain features appropriate for mapping (e.g., an open space with no walls is inadequate).
- The data from the lidars needs to be synchronized since we pair and interpolate data from different sensors.
- Since lidar scans get distorted with the vehicle’s movement, the trajectory followed by the vehicle should be as slow and continuous as possible. Failure to do this has a detrimental impact on the calibration process.
- In addition to mapping, the different lidars must observe common, highly distinctive features to perform pointcloud registration among them. Good examples are objects that lack symmetry and have clear 3d shapes (as opposed to 2d objects like walls).
- The mapping lidar is usually chosen as the one with the highest resolution, range, and field of view.
- The resolution and range of the lidars used have a great impact on how or whether this method can be used. We do not make guarantees about any set of combinations, and in most cases, parameters will need to be modified to maintain good performance.
Note: although this package can perform calibration between the mapping lidar and several calibration lidars, the documentation will assume only one calibration lidar is used. In the presence of multiple calibration lidars, the process is carried out for each of them independently and in parallel.
Step 1: Map construction
As mentioned in the previous section, one of the lidars is termed the mapping lidar (set in the launchers via the `mapping_pointcloud` argument), and while the robot/vehicle moves, its data is used to construct a map.
The mapping process is implemented via direct pointcloud registration between individual scans of the mapping lidar using either NDT [1] or GICP [2] (the algorithm can be set in the launcher). The output of this step is a series of registered pointclouds (each a raw pointcloud and its pose in the map) dubbed frames (or keyframes).
However, not all pointclouds coming from the mapping lidar are used in the map creation, since there is a chance of data redundancy, which is known to hinder both data processing and the registration process itself. For this reason, we apply the following rules when mapping (a minimal sketch follows the list):
- An incoming lidar scan is compared against an aggregated pointcloud of the latest `local_map_num_keyframes` keyframes.
- Keyframes are lidar scans sampled uniformly every `new_keyframe_min_distance` meters.
- Incoming scans that are not deemed keyframes are saved as frames if their distance to the latest accepted frame is over `new_frame_min_distance` meters. Otherwise, the incoming scan is discarded.
- If the vehicle stops (and this fact is detected), a special stopped frame is saved, since still data is useful for calibration.
- If the trajectory followed by the frames is deemed non-continuous (e.g., high accelerations or data loss), the frame at which this fact is detected is deemed a lost frame, and new incoming scans will not be compared against this or previous frames (essentially restarting the mapping process). Note: although this is not acceptable in normal mapping applications, for calibration purposes we only need sequences of registered pointclouds, so it is still allowed. However, whenever possible, the user should restart the mapping process upon identifying this issue.
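To make the bookkeeping concrete, here is a minimal Python sketch of such a frame classifier. The threshold names mirror the parameters mentioned above, but the values and the function signature are hypothetical (the actual package is implemented in C++):

```python
import numpy as np

# Hypothetical thresholds; the names mirror the parameters mentioned above.
NEW_KEYFRAME_MIN_DISTANCE = 1.0    # [m]
NEW_FRAME_MIN_DISTANCE = 0.05      # [m]
LOST_FRAME_MAX_ACCELERATION = 8.0  # [m/s^2]

def classify_scan(position, last_keyframe_pos, last_frame_pos, acceleration, stopped):
    """Decide how to treat an incoming registered scan (illustrative only)."""
    if acceleration > LOST_FRAME_MAX_ACCELERATION:
        return "lost frame"     # trajectory deemed non-continuous; mapping restarts here
    if stopped:
        return "stopped frame"  # still data, useful for calibration
    if np.linalg.norm(position - last_keyframe_pos) >= NEW_KEYFRAME_MIN_DISTANCE:
        return "keyframe"       # uniform sampling every new_keyframe_min_distance meters
    if np.linalg.norm(position - last_frame_pos) >= NEW_FRAME_MIN_DISTANCE:
        return "frame"
    return "discard"            # too close to the latest accepted frame
```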
Step 2: Calibration data preparation
The data required for calibration is created throughout the mapping process and right before the calibration itself. In particular, the mapping and calibration lidars are expected to have different timestamps, so they cannot be directly registered. Additionally, the mapping process produces a great number of potential combinations of pointclouds to register, so the data best suited for calibration needs to be chosen.
Data interpolation
As explained in the previous section, pointclouds from the mapping lidar and the calibration lidar have different timestamps, which makes direct registration unfeasible. To address this problem, whenever a keyframe from the mapping lidar is generated, the temporally closest calibration lidar pointcloud is associated with it, and the pose of the mapping lidar pointcloud is interpolated to the stamp of the calibration lidar pointcloud using the map (the frames adjacent to the keyframe).
However, the interpolation is only an approximation and its use induces an interpolation error that can be detrimental to calibration. For this reason, interpolation statistics like the interpolation time, distance, angle, and estimated dynamics are computed.
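Conceptually, the interpolation amounts to linear interpolation of the position and spherical linear interpolation (slerp) of the rotation between the two bracketing map poses, plus the statistics described above. The following numpy/scipy sketch is illustrative only, not the package's actual code:

```python
import numpy as np
from scipy.spatial.transform import Rotation, Slerp

def interpolate_pose(t, t0, t1, p0, p1, q0, q1):
    """Interpolate a pose at stamp t between map poses at stamps t0 and t1.

    p0/p1 are xyz positions, q0/q1 are quaternions (x, y, z, w). Illustrative only.
    """
    alpha = (t - t0) / (t1 - t0)
    position = (1.0 - alpha) * np.asarray(p0) + alpha * np.asarray(p1)
    rotation = Slerp([t0, t1], Rotation.from_quat([q0, q1]))([t])[0]
    # Statistics similar to the ones described above, used later to filter frames:
    interpolation_time = min(t - t0, t1 - t)
    distance = alpha * np.linalg.norm(np.asarray(p1) - np.asarray(p0))
    angle = (Rotation.from_quat(q1) * Rotation.from_quat(q0).inv()).magnitude()
    return position, rotation, (interpolation_time, distance, angle)
```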
The output of this step is a list of what we call calibration frames, each consisting of the mapping lidar keyframe, the calibration lidar pointcloud, the interpolated pose, and the interpolation statistics.
Data selection
At this point, we have obtained a series of calibration frames that can be used to perform lidar calibration. However, their contents could have little to no useful information (calibration-wise), their data could be compromised due to incorrect mapping, or their interpolation error could be non-negligible. For these reasons, we select the calibration data using the following criteria (a sketch of the information-based selection follows the list):
- All calibration frames "close" to lost frames are discarded. The term "close" in this context refers to the fact that the frames near the mapping lidar keyframe of the calibration frame are used to augment said pointcloud. This step makes sure no invalid data is used (mapping-wise).
- The interpolation statistics are used to discard calibration frames: high interpolation times, distances, angles, speeds, and accelerations are not accepted (thresholds are set via parameters).
- Calibration frames have varying levels of "information" in them, and in some cases that information may not be useful for calibration. To select the frames best suited for calibration information-wise, the following criteria are used:
  - Principal Component Analysis (PCA) is applied to the calibration lidar pointcloud of each calibration frame. In this context, the higher the smallest PCA component is, the better suited a pointcloud is for calibration.
  - Then, the calibration frames are sorted in descending order and greedily added to the final calibration set until a maximum budget is reached.
  - However, a calibration frame will be skipped if another one near it has already been added (using distance criteria in the map).
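The following numpy sketch illustrates the PCA scoring and greedy selection, under the assumption that each candidate carries its pointcloud and its position in the map. The `max_frames` and `min_distance` defaults loosely mirror `lidar_calibration_max_frames` and `calibration_min_distance_between_frames`; everything else is hypothetical:

```python
import numpy as np

def smallest_pca_eigenvalue(points):
    """Smallest eigenvalue of the pointcloud covariance: higher means the cloud
    spreads in all three directions, which is better for registration."""
    centered = points - points.mean(axis=0)
    cov = centered.T @ centered / len(points)
    return np.linalg.eigvalsh(cov)[0]  # eigvalsh returns eigenvalues in ascending order

def select_calibration_frames(candidates, max_frames=7, min_distance=1.5):
    """Greedily pick the best-scored frames, skipping ones too close in the map.

    candidates: list of (points: Nx3 array, map_position: length-3 array).
    """
    scored = sorted(candidates, key=lambda c: smallest_pca_eigenvalue(c[0]), reverse=True)
    selected = []
    for points, position in scored:
        if len(selected) >= max_frames:
            break  # maximum budget reached
        if any(np.linalg.norm(position - sel_pos) < min_distance
               for _, sel_pos in selected):
            continue  # a nearby frame was already added
        selected.append((points, position))
    return selected
```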
Data preprocessing
When doing source-to-target pointcloud registration, all points in the source pointcloud are projected into the target one, and each source point forms a pair with its closest target point. In the case of sparse pointclouds from lidar scans, this causes convergence issues that are very common in algorithms like ICP and still cause problems in others like GICP.
For this reason, instead of registering the calibration lidar points into the mapping lidar ones directly, we first augment the mapping lidar pointclouds with their neighbors in the map within a vicinity. This augmented pointcloud has a very high number of points, which makes pointcloud registration intractable. To solve this, we apply voxel subsampling before pointcloud registration.
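As an illustration, voxel subsampling can be sketched in a few lines of numpy by averaging the points that fall into each occupied voxel (the leaf size is a hypothetical value, not a package parameter):

```python
import numpy as np

def voxel_subsample(points, voxel_size=0.1):
    """Replace all points inside each occupied voxel by their centroid.

    points: Nx3 array; voxel_size is a hypothetical leaf size in meters.
    """
    keys = np.floor(points / voxel_size).astype(np.int64)
    _, inverse, counts = np.unique(keys, axis=0, return_inverse=True, return_counts=True)
    centroids = np.zeros((len(counts), 3))
    np.add.at(centroids, inverse, points)  # sum the points of each voxel
    return centroids / counts[:, None]     # divide by the voxel populations
```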
Step 3: Pointcloud registration
Lidar-to-lidar calibration is solved implicitly via the pointcloud registration of calibration lidar pointclouds into the augmented mapping lidar pointclouds. Each pair of pointclouds produces a registered pose, essentially a candidate calibration pose. Among all of the resulting poses, the one that presents the lowest overall error (source-to-target error across all calibration frames) is chosen as the output calibration result.
However, as registration algorithms are very sensitive to their initial guess and parameters, we use multiple registrators (ICP, GICP, and NDT with different parameters) in a sequential fashion similar to an ensemble, using the best calibration pose so far as the initial guess at every step.
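The ensemble logic reduces to a simple loop: run each registrator seeded with the best pose found so far and keep whichever result lowers the overall error. A schematic sketch, where each `register` callable stands in for an ICP/GICP/NDT run with a particular parameter set (a hypothetical interface, not the package's API):

```python
import numpy as np

def ensemble_registration(registrators, source, target, initial_guess):
    """Run each registrator seeded with the best pose so far; keep the best result.

    registrators: callables (source, target, guess) -> (pose, error), standing in
    for ICP/GICP/NDT runs with different parameters. Illustrative interface only.
    """
    best_pose, best_error = initial_guess, np.inf
    for register in registrators:
        pose, error = register(source, target, best_pose)
        if error < best_error:  # lower source-to-target error wins
            best_pose, best_error = pose, error
    return best_pose, best_error
```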
In addition to calibrating using each calibration frame independently, we also use Batched ICP, which allows us to perform ICP using all the calibration frames of each lidar simultaneously.
base-lidar calibration
In addition to lidar-lidar calibration, we can also utilize the map generated by the mapping lidar to partially calibrate the transformation between the mapping lidar and the `base_link`. This is possible if the assumption that the area around the vehicle forms a plane holds true.
Step 1: Map construction
The first step of base-lidar calibration is identical to Step 1 of lidar-lidar calibration.
Step 2: Extract ground plane from the pointcloud
After constructing the map and computing the augmented pointcloud from the mapping lidar (identical to Step 2), a RANSAC-based plane estimation algorithm is used to extract the ground plane pointcloud and its mathematical model.
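A minimal numpy sketch of RANSAC plane estimation follows; the defaults loosely mirror the `base_lidar_max_iterations` and `base_lidar_max_inlier_distance` launch parameters, but the implementation itself is illustrative:

```python
import numpy as np

def ransac_plane(points, iterations=500, inlier_distance=0.03, rng=None):
    """Estimate a ground plane (n, d) with n.p + d = 0 via RANSAC.

    Defaults loosely mirror base_lidar_max_iterations and
    base_lidar_max_inlier_distance; illustrative only.
    """
    rng = rng or np.random.default_rng()
    best_model, best_inliers = None, None
    for _ in range(iterations):
        sample = points[rng.choice(len(points), size=3, replace=False)]
        normal = np.cross(sample[1] - sample[0], sample[2] - sample[0])
        norm = np.linalg.norm(normal)
        if norm < 1e-9:
            continue  # degenerate (collinear) sample
        normal /= norm
        d = -normal @ sample[0]
        inliers = np.abs(points @ normal + d) < inlier_distance
        if best_inliers is None or inliers.sum() > best_inliers.sum():
            best_model, best_inliers = (normal, d), inliers
    return best_model, best_inliers
```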
Step 3: Estimate transformation
To estimate the transformation between the mapping lidar and the `base_link`, the tool needs to calculate the transformation between the lidar and the ground pose, as well as the transformation between the ground pose and the `base_link`.
The transformation between the lidar and the ground pose is calculated using the normal vector and a point on the ground plane, both obtained in the previous step. To estimate the transformation between the ground pose and the `base_link`, the tool first determines the initial ground-pose-to-base-link transformation using the initial lidar-to-base-link and lidar-to-ground-pose transformations. Then, the tool projects this initial ground-pose-to-base-link transformation onto the xy plane to estimate the transformation between the ground pose and the `base_link`. The final lidar-to-`base_link` pose can be obtained by composing the previous poses.
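The geometry of this step can be sketched as follows: build a lidar-to-ground pose whose z axis is the plane normal, and flatten a pose onto the xy plane by keeping only x, y, and yaw. This is an illustrative numpy sketch with hypothetical function names:

```python
import numpy as np

def lidar_to_ground_pose(normal, point_on_plane):
    """4x4 pose whose z axis is the plane normal and whose origin lies on the plane."""
    z = normal / np.linalg.norm(normal)
    x = np.cross([0.0, 1.0, 0.0], z)  # any vector orthogonal to z works as x
    x /= np.linalg.norm(x)
    y = np.cross(z, x)
    pose = np.eye(4)
    pose[:3, :3] = np.stack([x, y, z], axis=1)
    pose[:3, 3] = point_on_plane
    return pose

def project_to_xy_plane(pose):
    """Keep only x, y, and yaw of a 4x4 pose; z, roll, and pitch are zeroed."""
    yaw = np.arctan2(pose[1, 0], pose[0, 0])
    projected = np.eye(4)
    projected[:2, :2] = [[np.cos(yaw), -np.sin(yaw)],
                         [np.sin(yaw),  np.cos(yaw)]]
    projected[:2, 3] = pose[:2, 3]
    return projected
```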
ROS Interfaces
File truncated at 100 lines; see the full file.
Package Dependencies
System Dependencies
Dependant Packages
Launch files
- launch/calibrator.launch.xml
- ns [default: ]
- rviz [default: true]
- calibration_service_name [default: extrinsic_calibration]
- calibrate_base_frame [default: false]
- base_frame
- mapping_pointcloud
- detected_objects [default: detected_objects]
- predicted_objects [default: predicted_objects]
- calibration_camera_info_topics
- calibration_image_topics
- calibration_pointcloud_topics
- calibration_camera_optical_link_frames
- calibration_lidar_frames
- mapping_lidar_frame
- mapping_registrator [default: gicp]
- local_map_num_keyframes [default: 15]
- dense_pointcloud_num_keyframes [default: 10]
- mapping_max_frames [default: 5000]
- mapping_min_range [default: 1.0]
- mapping_max_range [default: 100.0]
- marker_size [default: 10.0]
- mapper_resolution [default: 1.0]
- mapper_step_size [default: 0.05]
- mapper_max_iterations [default: 500]
- mapper_epsilon [default: 0.001]
- mapper_num_threads [default: 12]
- mapper_max_correspondence_distance [default: 0.1]
- lidar_calibration_max_frames [default: 7]
- camera_calibration_max_frames [default: 1]
- lost_frame_max_angle_diff [default: 25.0]
- lost_frame_interpolation_error [default: 0.05]
- lost_frame_max_acceleration [default: 8.0]
- min_calibration_range [default: 1.5]
- max_calibration_range [default: 80.0]
- calibration_min_pca_eigenvalue [default: 0.02]
- calibration_eval_max_corr_distance [default: 0.2]
- solver_iterations [default: 100]
- calibration_skip_keyframes [default: 3]
- lidar_calibration_min_frames [default: 1]
- calibration_use_only_last_frames [default: false]
- crop_z_calibration_pointclouds [default: false]
- crop_z_calibration_pointclouds_value [default: 4.0]
- base_lidar_crop_box_min_x [default: -5.0]
- base_lidar_crop_box_min_y [default: -5.0]
- base_lidar_crop_box_min_z [default: -5.0]
- base_lidar_crop_box_max_x [default: 10.0]
- base_lidar_crop_box_max_y [default: 5.0]
- base_lidar_crop_box_max_z [default: 5.0]
- base_lidar_min_plane_points_percentage [default: 10.0]
- base_lidar_max_inlier_distance [default: 0.03]
- base_lidar_min_plane_points [default: 500]
- base_lidar_max_cos_distance [default: 0.2]
- base_lidar_max_iterations [default: 500]
- base_lidar_overwrite_xy_yaw [default: false]
- calibration_min_distance_between_frames [default: 1.5]
- use_rosbag [default: true]
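For reference, an invocation of this launch file might look as follows; the topic and frame values are placeholders to be adapted to the actual sensor setup, and additional arguments without defaults may also need to be provided:

```bash
ros2 launch mapping_based_calibrator calibrator.launch.xml \
  mapping_pointcloud:=/sensing/lidar/top/pointcloud \
  mapping_lidar_frame:=lidar_top \
  calibration_pointcloud_topics:=/sensing/lidar/front/pointcloud \
  calibration_lidar_frames:=lidar_front \
  base_frame:=base_link \
  mapping_registrator:=gicp
```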
Messages
Services
Plugins
Recent questions tagged mapping_based_calibrator at Robotics Stack Exchange
Package Summary
Tags | No category tags. |
Version | 0.0.1 |
License | BSD |
Build type | AMENT_CMAKE |
Use | RECOMMENDED |
Repository Summary
Description | sensor calibration tools for autonomous driving and robotics |
Checkout URI | https://github.com/tier4/calibrationtools.git |
VCS Type | git |
VCS Version | tier4/universe |
Last Updated | 2025-07-31 |
Dev Status | UNKNOWN |
Released | UNRELEASED |
Tags | computer-vision camera-calibration calibration autonomous-driving ros2 autoware sensor-calibration lidar-calibration robtics |
Contributing |
Help Wanted (-)
Good First Issues (-) Pull Requests to Review (-) |
Package Description
Additional Links
Maintainers
- Kenzo Lobos Tsunekawa
Authors
mapping_based_calibrator
A tutorial for this calibrator can be found here
Purpose
The package mapping_based_calibrator
performs extrinsic calibration between 3d lidar sensors, as well as the (partial) calibration between a single 3d lidar and base_link
.
Inner-workings / Algorithms
lidar-lidar calibration
The calibrator is designed to estimate the transformation between multiple lidar sensors. It does so by moving the robot/vehicle, creating a trajectory in which lidars observe the same features, and formulating the calibration problem as a pointcloud registration one.
One of the lidars is used to map the trajectory (denoted as the mapping lidar
), and once a map created with this lidar, the other lidars (denoted as calibration lidars
) are registered against an augmented mapping lidar
pointcloud (through the mapping process), which is equivalent to lidar-lidar calibration (between the mapping lidar
and each of the calibration lidars
individually).
The calibration process encompasses three main steps: constructing a map with the mapping lidar
via moving the vehicle, preprocessing and selecting the calibration data, and finally performing lidar-lidar calibration via pointcloud registration.
General notes about the environment, trajectory, and sensors used:
- The environment should contain features appropriate for mapping (e.g., an open space with no walls is inadequate).
- The data from the lidars needs to be synchronized since we pair and interpolate data from different sensors.
- Since lidar scans get distorted with the vehicle’s movement, the trajectory followed by the vehicle should be as slow and continuous as possible. Failure to do this has a detrimental impact on the calibration process.
- In addition to mapping, the different lidars must observe common, highly distinctive features to perform pointcloud registration among them. Good examples are objects with a lack of symmetry, and clear 3d shapes (as opposed to 2d objects like walls).
- The mapping lidar is usually chosen as the one with the highest resolution, range, and field of view.
- The resolution and range of the lidars used have a great impact on how or whether this method can be used. We do not make guarantees about any set of combinations, and in most cases, parameters will need to be modified to maintain a good performance.
Note: although this package can perform calibration between the mapping lidar
and several calibration lidars
, the documentation will assume only one calibration lidar
is used. In the presence of multiple calibration lidars
, the process is done in parallel in an independent fashion.
Step 1: Map construction
As mentioned in the previous section, one of the lidars is termed the mapping lidar
(set in the launchers via the mapping_pointcloud
argument), and while the robot/vehicle moves, its data is used to construct a map.
The mapping process is implemented via direct pointcloud registration between individual scans of the mapping lidar
using either NDT[1] or GICP[2] (the algorithm can be set in the launcher). The output of this step is a series of registered pointclouds (raw pointcloud and its pose in the map) dubbed frames
(or keyframes
).
However, not all pointclouds coming from the mapping lidar
are used in the map creation, since there is a chance of data redundancy, which is known to difficult data processing and the registration process itself. For this reason, we consider the following rules when mapping:
- An incoming lidar scan is compared against an aggregated pointcloud of the latest
local_map_num_keyframes
keyframes
. -
keyframes
are lidar scans sampled uniformly everynew_keyframe_min_distance
meters. - Incoming lidars that are not deemed
keyframes
, are saved asframes
if their distance to the latest acceptedframe
is overnew_frame_min_distance
meters. Otherwise, the incoming scan is discarded. - If the vehicle stops (and this fact is detected), a special
stopped frame
is saved, since this data is useful for calibration (still data). - If the trajectory followed by the
frames
is deemed non-continuous (e.g., high accelerations or data loss), theframe
at which this fact is detected is deemed alost frame
and the new incoming scan will not compare against this or previous frames (essentially restarting the mapping process). Note: although in normal mapping applications this is not acceptable, for calibration purposes we only need sequences of registered pointclouds so this is still allowed. However, whenever possible the user should restart the mapping process if he identifies this issue.
Step 2: Calibration data preparation
The data required for calibration is created throughout the mapping process and right before the calibration itself. In particular, the mapping and calibration lidar are expected to have different timestamps so they can not be directly registered. Additionally, the mapping process produces a great amount of potential combinations of pointclouds to register, so the data best suited for calibration needs to be chosen.
Data interpolation
As explained in the previous section, pointclouds from the mapping lidar
and calibration lidar
have different timestamps which makes registration directly unfeasible. To address this problem, whenever a keyframe
from the mapping lidar
is generated, the temporally closest calibration lidar
pointcloud is associated to it, and the pose of the mapping lidar
pointcloud is interpolated to the stamp of the calibration lidar
pointcloud using the map (adjacent frames to the keyframe
).
However, the interpolation is only an approximation and its use induces an interpolation error that can be detrimental to calibration. For this reason, interpolation statistics like the interpolation time, distance, angle, and estimated dynamics are computed.
The output of this step is a list of what we call calibration frames
, consisting of the mapping lidar
keyframe
, the calibration lidar
pointcloud, the interpolated pose, and the interpolation statistics.
Data selection
At this point, we have obtained a series of calibration frames
that can be used to perform lidar calibration. However, their contents could have little to no useful information (calibration-wise), their data could be compromised due to an incorrect mapping, or their interpolation error could be non-negligible. For these reasons, we select the calibration data using the following criteria:
- All
calibration frames
“close” tolost frames
are discarded. The term “close” in this context refers to the fact that theframes
near thecalibration lidar
keyframe
from thecalibration frame
are used to augment said pointcloud. This step makes sure no invalid data is used (mapping-wise). - The interpolation statistics are used to discard
calibration frames
. High interpolation times, distances, angles, speed, and acceleration are not accepted (thresholds are set via parameters). -
calibration frames
have varying levels of “information” in them, and in some cases, that information may not be useful for calibration. To select the frames more suited for calibration information-wise, the following criteria are used:- The Principal Component Analysis (PCA) is applied to the
calibration lidar
pointcloud of thecalibration frames
. In this context, the higher the smallest component of PCA is, the more suited a pointcloud is for calibration. - Then, the
calibration frames
are sorted in descending order and they are greedily added to the final calibration set until a maximum budget is reached. - However,
calibration frames
will be skipped if another one near it has already been added (using distance criteria in the map).
- The Principal Component Analysis (PCA) is applied to the
Data preprocessing
When doing source-to-target pointcloud registration, all points in the source pointcloud are projected into the target one, and each source point forms a pair with its closest target one. In the case of sparse pointclouds from lidar scans, this causes convergence issues that are very common in the case of algorithms like ICP
and still cause problems on others like GICP
.
For this reason, instead of registering the calibration lidar
points into the mapping lidar
ones, we first augment the mapping lidar
pointclouds with their neighbors in the map within a vicinity. This augmented pointcloud has a very high number of points, which makes pointcloud registration intractable. To solve this, we use voxel subsampling before pointcloud registration.
Step 3: Pointcloud registration
Lidar-to-lidar calibration is solved implicitly via the pointcloud registration of calibration lidar
pointclouds into the augmented mapping lidar
pointclouds. Each pair of pointclouds produces a registered pose, essentially the calibration pose. Among all of these resulting poses, the one that presents a lower overall error (source to target error among all calibration frames
) is the one chosen as the output calibration result.
However, as registration algorithms are very sensitive to their initial guess and parameters, we use multiple registrators (ICP
, GICP
, and NDT
with different parameters) in a sequential fashion similar to an ensemble, using as the initial guess at every step the best calibration pose so far.
In addition to calibrating using calibration frame
independently, we also use Batched ICP
, which allows us to perform ICP using all the calibration frames
of each lidar simultaneously.
base-lidar calibration
In addition of lidar-lidar calibration, we can also utilize the map generated by the mapping lidar
to partially calibrate the transformation between the mapping lidar
and the base_link
. This possible if the assumption that the area around of the vehicle forms a plane holds true.
Step 1: Map construction
The first step of base-lidar calibration is identical to the Step 1 of lidar-lidar calibration
.
Step 2: Extract ground plane from the pointcloud
After constructing the map, and computing the augmented pointcloud from mapping lidar
, which is identical to the Step 2, a RANSAC-based plane estimation algorithm is used to extract the ground plane pointcloud and its mathematical model.
Step 3: Estimate transformation
To estimate the transformation between the mapping lidar
and the base_link
, the tool needs to calculate the transformation between the lidar and the ground pose, as well as the transformation between the ground pose and the base_link
.
The transformation between the lidar and the ground pose is calculated by utilizing the normal vector and a point on the ground plane, both obtained in the last step. To estimate the transformation between the ground pose and the base_link
, the tool first determines the initial ground-pose-to-base-link using the initial lidar-to-base-link and lidar-to-ground-pose transformations. Then, the tool projects this initial ground-pose-to-base-link transformation onto the xy plane to estimate the transformation between the ground pose and the base_link
. The final lidar to base_link
pose can be obtained by composing the previous poses.
ROS Interfaces
File truncated at 100 lines see the full file
Package Dependencies
System Dependencies
Dependant Packages
Launch files
- launch/calibrator.launch.xml
-
- ns [default: ]
- rviz [default: true]
- calibration_service_name [default: extrinsic_calibration]
- calibrate_base_frame [default: false]
- base_frame
- mapping_pointcloud
- detected_objects [default: detected_objects]
- predicted_objects [default: predicted_objects]
- calibration_camera_info_topics
- calibration_image_topics
- calibration_pointcloud_topics
- calibration_camera_optical_link_frames
- calibration_lidar_frames
- mapping_lidar_frame
- mapping_registrator [default: gicp]
- local_map_num_keyframes [default: 15]
- dense_pointcloud_num_keyframes [default: 10]
- mapping_max_frames [default: 5000]
- mapping_min_range [default: 1.0]
- mapping_max_range [default: 100.0]
- marker_size [default: 10.0]
- mapper_resolution [default: 1.0]
- mapper_step_size [default: 0.05]
- mapper_max_iterations [default: 500]
- mapper_epsilon [default: 0.001]
- mapper_num_threads [default: 12]
- mapper_max_correspondence_distance [default: 0.1]
- lidar_calibration_max_frames [default: 7]
- camera_calibration_max_frames [default: 1]
- lost_frame_max_angle_diff [default: 25.0]
- lost_frame_interpolation_error [default: 0.05]
- lost_frame_max_acceleration [default: 8.0]
- min_calibration_range [default: 1.5]
- max_calibration_range [default: 80.0]
- calibration_min_pca_eigenvalue [default: 0.02]
- calibration_eval_max_corr_distance [default: 0.2]
- solver_iterations [default: 100]
- calibration_skip_keyframes [default: 3]
- lidar_calibration_min_frames [default: 1]
- calibration_use_only_last_frames [default: false]
- crop_z_calibration_pointclouds [default: false]
- crop_z_calibration_pointclouds_value [default: 4.0]
- base_lidar_crop_box_min_x [default: -5.0]
- base_lidar_crop_box_min_y [default: -5.0]
- base_lidar_crop_box_min_z [default: -5.0]
- base_lidar_crop_box_max_x [default: 10.0]
- base_lidar_crop_box_max_y [default: 5.0]
- base_lidar_crop_box_max_z [default: 5.0]
- base_lidar_min_plane_points_percentage [default: 10.0]
- base_lidar_max_inlier_distance [default: 0.03]
- base_lidar_min_plane_points [default: 500]
- base_lidar_max_cos_distance [default: 0.2]
- base_lidar_max_iterations [default: 500]
- base_lidar_overwrite_xy_yaw [default: false]
- calibration_min_distance_between_frames [default: 1.5]
- use_rosbag [default: true]
Messages
Services
Plugins
Recent questions tagged mapping_based_calibrator at Robotics Stack Exchange
Package Summary
Tags | No category tags. |
Version | 0.0.1 |
License | BSD |
Build type | AMENT_CMAKE |
Use | RECOMMENDED |
Repository Summary
Description | sensor calibration tools for autonomous driving and robotics |
Checkout URI | https://github.com/tier4/calibrationtools.git |
VCS Type | git |
VCS Version | tier4/universe |
Last Updated | 2025-07-31 |
Dev Status | UNKNOWN |
Released | UNRELEASED |
Tags | computer-vision camera-calibration calibration autonomous-driving ros2 autoware sensor-calibration lidar-calibration robtics |
Contributing |
Help Wanted (-)
Good First Issues (-) Pull Requests to Review (-) |
Package Description
Additional Links
Maintainers
- Kenzo Lobos Tsunekawa
Authors
mapping_based_calibrator
A tutorial for this calibrator can be found here
Purpose
The package mapping_based_calibrator
performs extrinsic calibration between 3d lidar sensors, as well as the (partial) calibration between a single 3d lidar and base_link
.
Inner-workings / Algorithms
lidar-lidar calibration
The calibrator is designed to estimate the transformation between multiple lidar sensors. It does so by moving the robot/vehicle, creating a trajectory in which lidars observe the same features, and formulating the calibration problem as a pointcloud registration one.
One of the lidars is used to map the trajectory (denoted as the mapping lidar
), and once a map created with this lidar, the other lidars (denoted as calibration lidars
) are registered against an augmented mapping lidar
pointcloud (through the mapping process), which is equivalent to lidar-lidar calibration (between the mapping lidar
and each of the calibration lidars
individually).
The calibration process encompasses three main steps: constructing a map with the mapping lidar
via moving the vehicle, preprocessing and selecting the calibration data, and finally performing lidar-lidar calibration via pointcloud registration.
General notes about the environment, trajectory, and sensors used:
- The environment should contain features appropriate for mapping (e.g., an open space with no walls is inadequate).
- The data from the lidars needs to be synchronized since we pair and interpolate data from different sensors.
- Since lidar scans get distorted with the vehicle’s movement, the trajectory followed by the vehicle should be as slow and continuous as possible. Failure to do this has a detrimental impact on the calibration process.
- In addition to mapping, the different lidars must observe common, highly distinctive features to perform pointcloud registration among them. Good examples are objects with a lack of symmetry, and clear 3d shapes (as opposed to 2d objects like walls).
- The mapping lidar is usually chosen as the one with the highest resolution, range, and field of view.
- The resolution and range of the lidars used have a great impact on how or whether this method can be used. We do not make guarantees about any set of combinations, and in most cases, parameters will need to be modified to maintain a good performance.
Note: although this package can perform calibration between the mapping lidar
and several calibration lidars
, the documentation will assume only one calibration lidar
is used. In the presence of multiple calibration lidars
, the process is done in parallel in an independent fashion.
Step 1: Map construction
As mentioned in the previous section, one of the lidars is termed the mapping lidar
(set in the launchers via the mapping_pointcloud
argument), and while the robot/vehicle moves, its data is used to construct a map.
The mapping process is implemented via direct pointcloud registration between individual scans of the mapping lidar
using either NDT[1] or GICP[2] (the algorithm can be set in the launcher). The output of this step is a series of registered pointclouds (raw pointcloud and its pose in the map) dubbed frames
(or keyframes
).
However, not all pointclouds coming from the mapping lidar
are used in the map creation, since there is a chance of data redundancy, which is known to difficult data processing and the registration process itself. For this reason, we consider the following rules when mapping:
- An incoming lidar scan is compared against an aggregated pointcloud of the latest
local_map_num_keyframes
keyframes
. -
keyframes
are lidar scans sampled uniformly everynew_keyframe_min_distance
meters. - Incoming lidars that are not deemed
keyframes
, are saved asframes
if their distance to the latest acceptedframe
is overnew_frame_min_distance
meters. Otherwise, the incoming scan is discarded. - If the vehicle stops (and this fact is detected), a special
stopped frame
is saved, since this data is useful for calibration (still data). - If the trajectory followed by the
frames
is deemed non-continuous (e.g., high accelerations or data loss), theframe
at which this fact is detected is deemed alost frame
and the new incoming scan will not compare against this or previous frames (essentially restarting the mapping process). Note: although in normal mapping applications this is not acceptable, for calibration purposes we only need sequences of registered pointclouds so this is still allowed. However, whenever possible the user should restart the mapping process if he identifies this issue.
Step 2: Calibration data preparation
The data required for calibration is created throughout the mapping process and right before the calibration itself. In particular, the mapping and calibration lidar are expected to have different timestamps so they can not be directly registered. Additionally, the mapping process produces a great amount of potential combinations of pointclouds to register, so the data best suited for calibration needs to be chosen.
Data interpolation
As explained in the previous section, pointclouds from the mapping lidar
and calibration lidar
have different timestamps which makes registration directly unfeasible. To address this problem, whenever a keyframe
from the mapping lidar
is generated, the temporally closest calibration lidar
pointcloud is associated to it, and the pose of the mapping lidar
pointcloud is interpolated to the stamp of the calibration lidar
pointcloud using the map (adjacent frames to the keyframe
).
However, the interpolation is only an approximation and its use induces an interpolation error that can be detrimental to calibration. For this reason, interpolation statistics like the interpolation time, distance, angle, and estimated dynamics are computed.
The output of this step is a list of what we call calibration frames
, consisting of the mapping lidar
keyframe
, the calibration lidar
pointcloud, the interpolated pose, and the interpolation statistics.
Data selection
At this point, we have obtained a series of calibration frames
that can be used to perform lidar calibration. However, their contents could have little to no useful information (calibration-wise), their data could be compromised due to an incorrect mapping, or their interpolation error could be non-negligible. For these reasons, we select the calibration data using the following criteria:
- All
calibration frames
“close” tolost frames
are discarded. The term “close” in this context refers to the fact that theframes
near thecalibration lidar
keyframe
from thecalibration frame
are used to augment said pointcloud. This step makes sure no invalid data is used (mapping-wise). - The interpolation statistics are used to discard
calibration frames
. High interpolation times, distances, angles, speed, and acceleration are not accepted (thresholds are set via parameters). -
calibration frames
have varying levels of “information” in them, and in some cases, that information may not be useful for calibration. To select the frames more suited for calibration information-wise, the following criteria are used:- The Principal Component Analysis (PCA) is applied to the
calibration lidar
pointcloud of thecalibration frames
. In this context, the higher the smallest component of PCA is, the more suited a pointcloud is for calibration. - Then, the
calibration frames
are sorted in descending order and they are greedily added to the final calibration set until a maximum budget is reached. - However,
calibration frames
will be skipped if another one near it has already been added (using distance criteria in the map).
- The Principal Component Analysis (PCA) is applied to the
Data preprocessing
When doing source-to-target pointcloud registration, all points in the source pointcloud are projected into the target one, and each source point forms a pair with its closest target one. In the case of sparse pointclouds from lidar scans, this causes convergence issues that are very common in the case of algorithms like ICP
and still cause problems on others like GICP
.
For this reason, instead of registering the calibration lidar
points into the mapping lidar
ones, we first augment the mapping lidar
pointclouds with their neighbors in the map within a vicinity. This augmented pointcloud has a very high number of points, which makes pointcloud registration intractable. To solve this, we use voxel subsampling before pointcloud registration.
Step 3: Pointcloud registration
Lidar-to-lidar calibration is solved implicitly via the pointcloud registration of calibration lidar
pointclouds into the augmented mapping lidar
pointclouds. Each pair of pointclouds produces a registered pose, essentially the calibration pose. Among all of these resulting poses, the one that presents a lower overall error (source to target error among all calibration frames
) is the one chosen as the output calibration result.
However, as registration algorithms are very sensitive to their initial guess and parameters, we use multiple registrators (ICP
, GICP
, and NDT
with different parameters) in a sequential fashion similar to an ensemble, using as the initial guess at every step the best calibration pose so far.
In addition to calibrating using calibration frame
independently, we also use Batched ICP
, which allows us to perform ICP using all the calibration frames
of each lidar simultaneously.
base-lidar calibration
In addition of lidar-lidar calibration, we can also utilize the map generated by the mapping lidar
to partially calibrate the transformation between the mapping lidar
and the base_link
. This possible if the assumption that the area around of the vehicle forms a plane holds true.
Step 1: Map construction
The first step of base-lidar calibration is identical to the Step 1 of lidar-lidar calibration
.
Step 2: Extract ground plane from the pointcloud
After constructing the map, and computing the augmented pointcloud from mapping lidar
, which is identical to the Step 2, a RANSAC-based plane estimation algorithm is used to extract the ground plane pointcloud and its mathematical model.
Step 3: Estimate transformation
To estimate the transformation between the mapping lidar
and the base_link
, the tool needs to calculate the transformation between the lidar and the ground pose, as well as the transformation between the ground pose and the base_link
.
The transformation between the lidar and the ground pose is calculated by utilizing the normal vector and a point on the ground plane, both obtained in the last step. To estimate the transformation between the ground pose and the base_link
, the tool first determines the initial ground-pose-to-base-link using the initial lidar-to-base-link and lidar-to-ground-pose transformations. Then, the tool projects this initial ground-pose-to-base-link transformation onto the xy plane to estimate the transformation between the ground pose and the base_link
. The final lidar to base_link
pose can be obtained by composing the previous poses.
ROS Interfaces
File truncated at 100 lines see the full file
Package Dependencies
System Dependencies
Dependant Packages
Launch files
- launch/calibrator.launch.xml
-
- ns [default: ]
- rviz [default: true]
- calibration_service_name [default: extrinsic_calibration]
- calibrate_base_frame [default: false]
- base_frame
- mapping_pointcloud
- detected_objects [default: detected_objects]
- predicted_objects [default: predicted_objects]
- calibration_camera_info_topics
- calibration_image_topics
- calibration_pointcloud_topics
- calibration_camera_optical_link_frames
- calibration_lidar_frames
- mapping_lidar_frame
- mapping_registrator [default: gicp]
- local_map_num_keyframes [default: 15]
- dense_pointcloud_num_keyframes [default: 10]
- mapping_max_frames [default: 5000]
- mapping_min_range [default: 1.0]
- mapping_max_range [default: 100.0]
- marker_size [default: 10.0]
- mapper_resolution [default: 1.0]
- mapper_step_size [default: 0.05]
- mapper_max_iterations [default: 500]
- mapper_epsilon [default: 0.001]
- mapper_num_threads [default: 12]
- mapper_max_correspondence_distance [default: 0.1]
- lidar_calibration_max_frames [default: 7]
- camera_calibration_max_frames [default: 1]
- lost_frame_max_angle_diff [default: 25.0]
- lost_frame_interpolation_error [default: 0.05]
- lost_frame_max_acceleration [default: 8.0]
- min_calibration_range [default: 1.5]
- max_calibration_range [default: 80.0]
- calibration_min_pca_eigenvalue [default: 0.02]
- calibration_eval_max_corr_distance [default: 0.2]
- solver_iterations [default: 100]
- calibration_skip_keyframes [default: 3]
- lidar_calibration_min_frames [default: 1]
- calibration_use_only_last_frames [default: false]
- crop_z_calibration_pointclouds [default: false]
- crop_z_calibration_pointclouds_value [default: 4.0]
- base_lidar_crop_box_min_x [default: -5.0]
- base_lidar_crop_box_min_y [default: -5.0]
- base_lidar_crop_box_min_z [default: -5.0]
- base_lidar_crop_box_max_x [default: 10.0]
- base_lidar_crop_box_max_y [default: 5.0]
- base_lidar_crop_box_max_z [default: 5.0]
- base_lidar_min_plane_points_percentage [default: 10.0]
- base_lidar_max_inlier_distance [default: 0.03]
- base_lidar_min_plane_points [default: 500]
- base_lidar_max_cos_distance [default: 0.2]
- base_lidar_max_iterations [default: 500]
- base_lidar_overwrite_xy_yaw [default: false]
- calibration_min_distance_between_frames [default: 1.5]
- use_rosbag [default: true]
Messages
Services
Plugins
Recent questions tagged mapping_based_calibrator at Robotics Stack Exchange
Package Summary
Tags | No category tags. |
Version | 0.0.1 |
License | BSD |
Build type | AMENT_CMAKE |
Use | RECOMMENDED |
Repository Summary
Description | sensor calibration tools for autonomous driving and robotics |
Checkout URI | https://github.com/tier4/calibrationtools.git |
VCS Type | git |
VCS Version | tier4/universe |
Last Updated | 2025-07-31 |
Dev Status | UNKNOWN |
Released | UNRELEASED |
Tags | computer-vision camera-calibration calibration autonomous-driving ros2 autoware sensor-calibration lidar-calibration robtics |
Contributing |
Help Wanted (-)
Good First Issues (-) Pull Requests to Review (-) |
Package Description
Additional Links
Maintainers
- Kenzo Lobos Tsunekawa
Authors
mapping_based_calibrator
A tutorial for this calibrator can be found here
Purpose
The package mapping_based_calibrator
performs extrinsic calibration between 3d lidar sensors, as well as the (partial) calibration between a single 3d lidar and base_link
.
Inner-workings / Algorithms
lidar-lidar calibration
The calibrator is designed to estimate the transformation between multiple lidar sensors. It does so by moving the robot/vehicle, creating a trajectory in which lidars observe the same features, and formulating the calibration problem as a pointcloud registration one.
One of the lidars is used to map the trajectory (denoted as the mapping lidar
), and once a map created with this lidar, the other lidars (denoted as calibration lidars
) are registered against an augmented mapping lidar
pointcloud (through the mapping process), which is equivalent to lidar-lidar calibration (between the mapping lidar
and each of the calibration lidars
individually).
The calibration process encompasses three main steps: constructing a map with the mapping lidar
via moving the vehicle, preprocessing and selecting the calibration data, and finally performing lidar-lidar calibration via pointcloud registration.
General notes about the environment, trajectory, and sensors used:
- The environment should contain features appropriate for mapping (e.g., an open space with no walls is inadequate).
- The data from the lidars needs to be synchronized since we pair and interpolate data from different sensors.
- Since lidar scans get distorted with the vehicle’s movement, the trajectory followed by the vehicle should be as slow and continuous as possible. Failure to do this has a detrimental impact on the calibration process.
- In addition to mapping, the different lidars must observe common, highly distinctive features to perform pointcloud registration among them. Good examples are objects with a lack of symmetry, and clear 3d shapes (as opposed to 2d objects like walls).
- The mapping lidar is usually chosen as the one with the highest resolution, range, and field of view.
- The resolution and range of the lidars used have a great impact on how or whether this method can be used. We do not make guarantees about any set of combinations, and in most cases, parameters will need to be modified to maintain a good performance.
Note: although this package can perform calibration between the mapping lidar
and several calibration lidars
, the documentation will assume only one calibration lidar
is used. In the presence of multiple calibration lidars
, the process is done in parallel in an independent fashion.
Step 1: Map construction
As mentioned in the previous section, one of the lidars is termed the mapping lidar
(set in the launchers via the mapping_pointcloud
argument), and while the robot/vehicle moves, its data is used to construct a map.
The mapping process is implemented via direct pointcloud registration between individual scans of the mapping lidar
using either NDT[1] or GICP[2] (the algorithm can be set in the launcher). The output of this step is a series of registered pointclouds (raw pointcloud and its pose in the map) dubbed frames
(or keyframes
).
However, not all pointclouds coming from the mapping lidar
are used in the map creation, since there is a chance of data redundancy, which is known to difficult data processing and the registration process itself. For this reason, we consider the following rules when mapping:
- An incoming lidar scan is compared against an aggregated pointcloud of the latest
local_map_num_keyframes
keyframes
. -
keyframes
are lidar scans sampled uniformly everynew_keyframe_min_distance
meters. - Incoming lidars that are not deemed
keyframes
, are saved asframes
if their distance to the latest acceptedframe
is overnew_frame_min_distance
meters. Otherwise, the incoming scan is discarded. - If the vehicle stops (and this fact is detected), a special
stopped frame
is saved, since this data is useful for calibration (still data). - If the trajectory followed by the
frames
is deemed non-continuous (e.g., high accelerations or data loss), theframe
at which this fact is detected is deemed alost frame
and the new incoming scan will not compare against this or previous frames (essentially restarting the mapping process). Note: although in normal mapping applications this is not acceptable, for calibration purposes we only need sequences of registered pointclouds so this is still allowed. However, whenever possible the user should restart the mapping process if he identifies this issue.
Step 2: Calibration data preparation
The data required for calibration is created throughout the mapping process and right before the calibration itself. In particular, the mapping and calibration lidar are expected to have different timestamps so they can not be directly registered. Additionally, the mapping process produces a great amount of potential combinations of pointclouds to register, so the data best suited for calibration needs to be chosen.
Data interpolation
As explained in the previous section, pointclouds from the mapping lidar
and calibration lidar
have different timestamps which makes registration directly unfeasible. To address this problem, whenever a keyframe
from the mapping lidar
is generated, the temporally closest calibration lidar
pointcloud is associated to it, and the pose of the mapping lidar
pointcloud is interpolated to the stamp of the calibration lidar
pointcloud using the map (adjacent frames to the keyframe
).
However, the interpolation is only an approximation and its use induces an interpolation error that can be detrimental to calibration. For this reason, interpolation statistics like the interpolation time, distance, angle, and estimated dynamics are computed.
The output of this step is a list of what we call calibration frames
, consisting of the mapping lidar
keyframe
, the calibration lidar
pointcloud, the interpolated pose, and the interpolation statistics.
Data selection
At this point, we have obtained a series of calibration frames
that can be used to perform lidar calibration. However, their contents could have little to no useful information (calibration-wise), their data could be compromised due to an incorrect mapping, or their interpolation error could be non-negligible. For these reasons, we select the calibration data using the following criteria:
- All
calibration frames
“close” tolost frames
are discarded. The term “close” in this context refers to the fact that theframes
near thecalibration lidar
keyframe
from thecalibration frame
are used to augment said pointcloud. This step makes sure no invalid data is used (mapping-wise). - The interpolation statistics are used to discard
calibration frames
. High interpolation times, distances, angles, speed, and acceleration are not accepted (thresholds are set via parameters). -
calibration frames
have varying levels of “information” in them, and in some cases, that information may not be useful for calibration. To select the frames more suited for calibration information-wise, the following criteria are used:- The Principal Component Analysis (PCA) is applied to the
calibration lidar
pointcloud of thecalibration frames
. In this context, the higher the smallest component of PCA is, the more suited a pointcloud is for calibration. - Then, the
calibration frames
are sorted in descending order and they are greedily added to the final calibration set until a maximum budget is reached. - However,
calibration frames
will be skipped if another one near it has already been added (using distance criteria in the map).
- The Principal Component Analysis (PCA) is applied to the
Data preprocessing
When doing source-to-target pointcloud registration, all points in the source pointcloud are projected into the target one, and each source point forms a pair with its closest target one. In the case of sparse pointclouds from lidar scans, this causes convergence issues that are very common in the case of algorithms like ICP
and still cause problems on others like GICP
.
For this reason, instead of registering the calibration lidar
points into the mapping lidar
ones, we first augment the mapping lidar
pointclouds with their neighbors in the map within a vicinity. This augmented pointcloud has a very high number of points, which makes pointcloud registration intractable. To solve this, we use voxel subsampling before pointcloud registration.
Step 3: Pointcloud registration
Lidar-to-lidar calibration is solved implicitly via the pointcloud registration of calibration lidar
pointclouds into the augmented mapping lidar
pointclouds. Each pair of pointclouds produces a registered pose, essentially the calibration pose. Among all of these resulting poses, the one that presents a lower overall error (source to target error among all calibration frames
) is the one chosen as the output calibration result.
However, as registration algorithms are very sensitive to their initial guess and parameters, we use multiple registrators (ICP
, GICP
, and NDT
with different parameters) in a sequential fashion similar to an ensemble, using as the initial guess at every step the best calibration pose so far.
In addition to calibrating using calibration frame
independently, we also use Batched ICP
, which allows us to perform ICP using all the calibration frames
of each lidar simultaneously.
base-lidar calibration
In addition of lidar-lidar calibration, we can also utilize the map generated by the mapping lidar
to partially calibrate the transformation between the mapping lidar
and the base_link
. This possible if the assumption that the area around of the vehicle forms a plane holds true.
Step 1: Map construction
The first step of base-lidar calibration is identical to the Step 1 of lidar-lidar calibration
.
Step 2: Extract ground plane from the pointcloud
After constructing the map, and computing the augmented pointcloud from mapping lidar
, which is identical to the Step 2, a RANSAC-based plane estimation algorithm is used to extract the ground plane pointcloud and its mathematical model.
Step 3: Estimate transformation
To estimate the transformation between the mapping lidar
and the base_link
, the tool needs to calculate the transformation between the lidar and the ground pose, as well as the transformation between the ground pose and the base_link
.
The transformation between the lidar and the ground pose is calculated by utilizing the normal vector and a point on the ground plane, both obtained in the last step. To estimate the transformation between the ground pose and the base_link
, the tool first determines the initial ground-pose-to-base-link using the initial lidar-to-base-link and lidar-to-ground-pose transformations. Then, the tool projects this initial ground-pose-to-base-link transformation onto the xy plane to estimate the transformation between the ground pose and the base_link
. The final lidar to base_link
pose can be obtained by composing the previous poses.
ROS Interfaces
File truncated at 100 lines see the full file
Package Dependencies
System Dependencies
Dependant Packages
Launch files
- launch/calibrator.launch.xml
-
- ns [default: ]
- rviz [default: true]
- calibration_service_name [default: extrinsic_calibration]
- calibrate_base_frame [default: false]
- base_frame
- mapping_pointcloud
- detected_objects [default: detected_objects]
- predicted_objects [default: predicted_objects]
- calibration_camera_info_topics
- calibration_image_topics
- calibration_pointcloud_topics
- calibration_camera_optical_link_frames
- calibration_lidar_frames
- mapping_lidar_frame
- mapping_registrator [default: gicp]
- local_map_num_keyframes [default: 15]
- dense_pointcloud_num_keyframes [default: 10]
- mapping_max_frames [default: 5000]
- mapping_min_range [default: 1.0]
- mapping_max_range [default: 100.0]
- marker_size [default: 10.0]
- mapper_resolution [default: 1.0]
- mapper_step_size [default: 0.05]
- mapper_max_iterations [default: 500]
- mapper_epsilon [default: 0.001]
- mapper_num_threads [default: 12]
- mapper_max_correspondence_distance [default: 0.1]
- lidar_calibration_max_frames [default: 7]
- camera_calibration_max_frames [default: 1]
- lost_frame_max_angle_diff [default: 25.0]
- lost_frame_interpolation_error [default: 0.05]
- lost_frame_max_acceleration [default: 8.0]
- min_calibration_range [default: 1.5]
- max_calibration_range [default: 80.0]
- calibration_min_pca_eigenvalue [default: 0.02]
- calibration_eval_max_corr_distance [default: 0.2]
- solver_iterations [default: 100]
- calibration_skip_keyframes [default: 3]
- lidar_calibration_min_frames [default: 1]
- calibration_use_only_last_frames [default: false]
- crop_z_calibration_pointclouds [default: false]
- crop_z_calibration_pointclouds_value [default: 4.0]
- base_lidar_crop_box_min_x [default: -5.0]
- base_lidar_crop_box_min_y [default: -5.0]
- base_lidar_crop_box_min_z [default: -5.0]
- base_lidar_crop_box_max_x [default: 10.0]
- base_lidar_crop_box_max_y [default: 5.0]
- base_lidar_crop_box_max_z [default: 5.0]
- base_lidar_min_plane_points_percentage [default: 10.0]
- base_lidar_max_inlier_distance [default: 0.03]
- base_lidar_min_plane_points [default: 500]
- base_lidar_max_cos_distance [default: 0.2]
- base_lidar_max_iterations [default: 500]
- base_lidar_overwrite_xy_yaw [default: false]
- calibration_min_distance_between_frames [default: 1.5]
- use_rosbag [default: true]
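For reference, a hypothetical invocation of this launcher could look like the following. All argument values are placeholders that must match the actual vehicle configuration, and the camera-related arguments (which also lack defaults) are omitted here for brevity:

```
ros2 launch mapping_based_calibrator calibrator.launch.xml \
  base_frame:=base_link \
  mapping_pointcloud:=/sensing/lidar/top/pointcloud \
  mapping_lidar_frame:=lidar_top \
  calibration_pointcloud_topics:=/sensing/lidar/front/pointcloud \
  calibration_lidar_frames:=lidar_front \
  calibrate_base_frame:=true
```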
Messages
Services
Plugins
Step 1: Map construction
As mentioned in the previous section, one of the lidars is termed the mapping lidar
(set in the launchers via the mapping_pointcloud
argument), and while the robot/vehicle moves, its data is used to construct a map.
The mapping process is implemented via direct pointcloud registration between individual scans of the mapping lidar, using either NDT [1] or GICP [2] (the algorithm can be set in the launcher). The output of this step is a series of registered pointclouds (each raw pointcloud and its pose in the map), dubbed frames (or keyframes).
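As an illustration, scan-to-local-map registration could look like the following minimal sketch, here using Open3D's GICP purely as a stand-in for the package's PCL-based NDT/GICP registrators (the function name, correspondence threshold, and use of Open3D are assumptions, not the package's API):

```python
import numpy as np
import open3d as o3d

def register_scan(scan: o3d.geometry.PointCloud,
                  local_map: o3d.geometry.PointCloud,
                  initial_guess: np.ndarray) -> np.ndarray:
    """Register one mapping-lidar scan against the aggregated local map.

    Returns the 4x4 pose of the scan in the map frame; the initial guess is
    typically the pose of the previously registered frame.
    """
    result = o3d.pipelines.registration.registration_generalized_icp(
        scan, local_map,
        0.5,            # max correspondence distance [m], illustrative value
        initial_guess,  # 4x4 homogeneous transform
    )
    return result.transformation
```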
However, not all pointclouds coming from the mapping lidar are used in the map creation: redundant data is known to hinder both data processing and the registration process itself. For this reason, we apply the following rules when mapping (a minimal sketch of this logic follows the list):

- An incoming lidar scan is compared against an aggregated pointcloud of the latest local_map_num_keyframes keyframes.
- keyframes are lidar scans sampled uniformly every new_keyframe_min_distance meters.
- Incoming scans that are not deemed keyframes are saved as frames if their distance to the latest accepted frame is over new_frame_min_distance meters. Otherwise, the incoming scan is discarded.
- If the vehicle stops (and this fact is detected), a special stopped frame is saved, since still data is useful for calibration.
- If the trajectory followed by the frames is deemed non-continuous (e.g., high accelerations or data loss), the frame at which this fact is detected is marked as a lost frame, and new incoming scans are not compared against it or previous frames (essentially restarting the mapping process). Note: although this is not acceptable in normal mapping applications, calibration only requires sequences of registered pointclouds, so it is still allowed. Nevertheless, whenever possible, the user should restart the mapping process upon identifying this issue.
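The frame/keyframe decision can be summarized with the following sketch (pure Python; the class and its method names are illustrative assumptions, and the stopped frame and lost frame cases are omitted for brevity):

```python
import numpy as np

class FrameClassifier:
    """Decides whether an incoming scan becomes a keyframe, a frame, or is dropped."""

    def __init__(self, new_keyframe_min_distance: float, new_frame_min_distance: float):
        self.new_keyframe_min_distance = new_keyframe_min_distance
        self.new_frame_min_distance = new_frame_min_distance
        self.last_keyframe_position = None
        self.last_frame_position = None

    def classify(self, position: np.ndarray) -> str:
        """Classify a scan by the distance its pose has moved in the map."""
        if (self.last_keyframe_position is None or
                np.linalg.norm(position - self.last_keyframe_position)
                >= self.new_keyframe_min_distance):
            self.last_keyframe_position = position
            self.last_frame_position = position
            return "keyframe"
        if (np.linalg.norm(position - self.last_frame_position)
                >= self.new_frame_min_distance):
            self.last_frame_position = position
            return "frame"
        return "discard"  # redundant scan
```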
Step 2: Calibration data preparation
The data required for calibration is created throughout the mapping process and right before the calibration itself. In particular, the mapping and calibration lidars have different timestamps, so their pointclouds cannot be registered directly. Additionally, the mapping process produces a large number of potential pointcloud combinations to register, so the data best suited for calibration needs to be chosen.
Data interpolation
As explained in the previous section, pointclouds from the mapping lidar and the calibration lidar have different timestamps, which makes direct registration infeasible. To address this problem, whenever a keyframe from the mapping lidar is generated, the temporally closest calibration lidar pointcloud is associated with it, and the pose of the mapping lidar pointcloud is interpolated to the stamp of the calibration lidar pointcloud using the map (the frames adjacent to the keyframe).
However, the interpolation is only an approximation, and its use induces an interpolation error that can be detrimental to calibration. For this reason, interpolation statistics such as the interpolation time, distance, angle, and estimated dynamics are computed.
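A pose interpolation in the spirit of this step could look as follows (a minimal sketch assuming linear interpolation for translation and spherical linear interpolation (slerp) for rotation; the package's exact scheme may differ):

```python
import numpy as np
from scipy.spatial.transform import Rotation, Slerp

def interpolate_pose(t: float, t0: float, pose0: np.ndarray,
                     t1: float, pose1: np.ndarray) -> np.ndarray:
    """Interpolate a 4x4 pose at time t, with t0 <= t <= t1."""
    alpha = (t - t0) / (t1 - t0)
    # Translation: linear interpolation between the two frame positions.
    translation = (1.0 - alpha) * pose0[:3, 3] + alpha * pose1[:3, 3]
    # Rotation: slerp between the two frame orientations.
    rotations = Rotation.from_matrix(np.stack([pose0[:3, :3], pose1[:3, :3]]))
    rotation = Slerp([t0, t1], rotations)([t])
    pose = np.eye(4)
    pose[:3, :3] = rotation.as_matrix()[0]
    pose[:3, 3] = translation
    return pose
```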
The output of this step is a list of what we call calibration frames, each consisting of the mapping lidar keyframe, the calibration lidar pointcloud, the interpolated pose, and the interpolation statistics.
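Conceptually, a calibration frame bundles these elements (a hypothetical sketch; the field names are illustrative, not the package's actual types):

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class CalibrationFrame:
    mapping_keyframe: np.ndarray        # (N, 3) mapping lidar keyframe points
    calibration_pointcloud: np.ndarray  # (M, 3) calibration lidar points
    interpolated_pose: np.ndarray       # 4x4 mapping lidar pose at the calibration stamp
    interpolation_time: float           # stamp difference between the two clouds [s]
    interpolation_distance: float       # translation covered during interpolation [m]
    interpolation_angle: float          # rotation covered during interpolation [rad]
```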
Data selection
At this point, we have obtained a series of calibration frames that can be used to perform lidar calibration. However, their contents may hold little to no useful information (calibration-wise), their data could be compromised due to incorrect mapping, or their interpolation error could be non-negligible. For these reasons, we select the calibration data using the following criteria (a sketch of the information-based criterion follows the list):

- All calibration frames "close" to lost frames are discarded. The term "close" in this context refers to the fact that the frames near the keyframe of the calibration frame are used to augment said pointcloud. This step makes sure no invalid data is used (mapping-wise).
- The interpolation statistics are used to discard calibration frames: high interpolation times, distances, angles, speeds, and accelerations are not accepted (thresholds are set via parameters).
- calibration frames have varying levels of "information" in them, and in some cases that information may not be useful for calibration. To select the frames best suited for calibration information-wise, the following criteria are used:
  - Principal Component Analysis (PCA) is applied to the calibration lidar pointcloud of each calibration frame. In this context, the higher the smallest PCA component is, the better suited a pointcloud is for calibration.
  - The calibration frames are then sorted by this value in descending order and greedily added to the final calibration set until a maximum budget is reached.
  - However, a calibration frame is skipped if another one near it has already been added (using a distance criterion in the map).
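The information criterion and greedy selection can be sketched as follows (the default values echo calibration_min_pca_eigenvalue, calibration_min_distance_between_frames, and lidar_calibration_max_frames from the launch file; the helper functions themselves are illustrative assumptions, not the package's code):

```python
import numpy as np

def smallest_pca_eigenvalue(points: np.ndarray) -> float:
    """Smallest eigenvalue of the 3x3 covariance of an (N, 3) pointcloud."""
    centered = points - points.mean(axis=0)
    covariance = centered.T @ centered / len(points)
    return float(np.linalg.eigvalsh(covariance)[0])  # eigenvalues in ascending order

def select_calibration_frames(pointclouds, positions, min_eigenvalue=0.02,
                              min_distance=1.5, max_frames=7):
    """Greedily pick the most informative frames, skipping nearby duplicates."""
    scores = np.array([smallest_pca_eigenvalue(p) for p in pointclouds])
    selected = []
    for i in np.argsort(scores)[::-1]:  # descending information score
        if len(selected) >= max_frames:
            break
        if scores[i] < min_eigenvalue:
            break  # all remaining frames score even lower
        if any(np.linalg.norm(positions[i] - positions[j]) < min_distance
               for j in selected):
            continue  # a nearby frame was already selected
        selected.append(i)
    return selected
```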
Data preprocessing
When doing source-to-target pointcloud registration, all points in the source pointcloud are projected into the target one, and each source point forms a pair with its closest target point. With sparse pointclouds from lidar scans, this causes convergence issues that are very common in algorithms like ICP and still cause problems in others like GICP.
For this reason, instead of registering the calibration lidar points directly against the mapping lidar ones, we first augment the mapping lidar pointclouds with their neighbors in the map within a vicinity. This augmented pointcloud has a very high number of points, which would make pointcloud registration intractable, so we apply voxel subsampling before registration.
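Voxel subsampling itself can be sketched in a few lines (a minimal numpy version that keeps one point per occupied voxel; the package's actual implementation and voxel size may differ):

```python
import numpy as np

def voxel_subsample(points: np.ndarray, voxel_size: float) -> np.ndarray:
    """Keep one representative point per occupied voxel of an (N, 3) cloud."""
    voxel_indices = np.floor(points / voxel_size).astype(np.int64)
    # np.unique over rows yields the index of the first point in each voxel.
    _, keep = np.unique(voxel_indices, axis=0, return_index=True)
    return points[np.sort(keep)]
```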
Step 3: Pointcloud registration
Lidar-to-lidar calibration is solved implicitly via the registration of the calibration lidar pointclouds against the augmented mapping lidar pointclouds. Each pair of pointclouds produces a registered pose, which is essentially a candidate calibration pose. Among all of the resulting poses, the one that presents the lowest overall error (source-to-target error across all calibration frames) is chosen as the output calibration result.
However, as registration algorithms are very sensitive to their initial guess and parameters, we use multiple registrators (ICP, GICP, and NDT with different parameters) in a sequential fashion, similar to an ensemble, using the best calibration pose so far as the initial guess at every step.
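The ensemble can be sketched as follows (the registrator callables and the error evaluator are hypothetical placeholders for the package's PCL-based registrators and its source-to-target error metric):

```python
import numpy as np

def ensemble_registration(registrators, evaluate_error, calibration_frames,
                          initial_guess: np.ndarray):
    """Chain several registrators, keeping the pose with the lowest overall error.

    Each registrator is a callable (frames, init_pose) -> 4x4 pose, and
    evaluate_error(frames, pose) computes the source-to-target error across
    all calibration frames. Both are placeholders for the package's internals.
    """
    best_pose = initial_guess
    best_error = evaluate_error(calibration_frames, best_pose)
    for register in registrators:  # e.g., ICP, GICP, and NDT variants
        candidate = register(calibration_frames, best_pose)  # best pose so far as init
        error = evaluate_error(calibration_frames, candidate)
        if error < best_error:
            best_pose, best_error = candidate, error
    return best_pose, best_error
```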
In addition to calibrating with each calibration frame independently, we also use Batched ICP, which allows us to perform ICP using all the calibration frames of each lidar simultaneously.
base-lidar calibration
In addition to lidar-lidar calibration, we can also utilize the map generated by the mapping lidar to partially calibrate the transformation between the mapping lidar and the base_link. This is possible if the assumption that the area around the vehicle forms a plane holds true.
Step 1: Map construction
The first step of base-lidar calibration is identical to Step 1 of lidar-lidar calibration.
Step 2: Extract ground plane from the pointcloud
After constructing the map and computing the augmented pointcloud from the mapping lidar (identical to Step 2 of lidar-lidar calibration), a RANSAC-based plane estimation algorithm is used to extract the ground plane pointcloud and its mathematical model.
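A minimal RANSAC plane fit in this spirit (the iteration count and inlier threshold mirror base_lidar_max_iterations and base_lidar_max_inlier_distance from the launch file; the function itself is an illustrative assumption, not the package's implementation):

```python
import numpy as np

def ransac_plane(points: np.ndarray, max_iterations: int = 500,
                 max_inlier_distance: float = 0.03):
    """Fit a plane (unit normal n, offset d with n.p + d = 0) to an (N, 3) cloud."""
    rng = np.random.default_rng()
    best_inliers, best_model = np.array([], dtype=int), None
    for _ in range(max_iterations):
        # Hypothesize a plane from three random points.
        p0, p1, p2 = points[rng.choice(len(points), size=3, replace=False)]
        normal = np.cross(p1 - p0, p2 - p0)
        norm = np.linalg.norm(normal)
        if norm < 1e-9:
            continue  # degenerate (collinear) sample
        normal /= norm
        d = -normal @ p0
        # Score the hypothesis by its inlier count.
        distances = np.abs(points @ normal + d)
        inliers = np.flatnonzero(distances < max_inlier_distance)
        if len(inliers) > len(best_inliers):
            best_inliers, best_model = inliers, (normal, d)
    return best_model, best_inliers
```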
Step 3: Estimate transformation
To estimate the transformation between the mapping lidar and the base_link, the tool needs to calculate the transformation between the lidar and the ground pose, as well as the transformation between the ground pose and the base_link.
The transformation between the lidar and the ground pose is calculated using the normal vector and a point on the ground plane, both obtained in the previous step. To estimate the transformation between the ground pose and the base_link, the tool first determines the initial ground-pose-to-base-link transformation from the initial lidar-to-base-link and lidar-to-ground-pose transformations. Then, the tool projects this initial transformation onto the xy plane to obtain the estimated ground-pose-to-base-link transformation. The final lidar-to-base_link pose is obtained by composing the previous poses.
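As an illustration, the lidar-to-ground pose can be constructed from the estimated plane model by aligning the frame's z-axis with the plane normal (a sketch under the assumptions of the RANSAC model above; the helper is hypothetical):

```python
import numpy as np

def lidar_to_ground_pose(normal: np.ndarray, point_on_plane: np.ndarray) -> np.ndarray:
    """4x4 pose of a ground-aligned frame expressed in the lidar frame.

    The frame's z-axis is the plane normal; x and y are chosen arbitrarily
    within the plane. Only z, roll, and pitch are observable from the plane
    alone, which is why this base_link calibration is partial.
    """
    z = normal / np.linalg.norm(normal)
    # Pick any vector not parallel to z to span the plane.
    helper = np.array([1.0, 0.0, 0.0]) if abs(z[0]) < 0.9 else np.array([0.0, 1.0, 0.0])
    x = np.cross(helper, z)
    x /= np.linalg.norm(x)
    y = np.cross(z, x)  # completes a right-handed orthonormal basis
    pose = np.eye(4)
    pose[:3, :3] = np.column_stack([x, y, z])
    pose[:3, 3] = point_on_plane
    return pose
```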
ROS Interfaces
File truncated at 100 lines; see the full file.
Package Dependencies
System Dependencies
Dependant Packages
Launch files
- launch/calibrator.launch.xml
- ns [default: ]
- rviz [default: true]
- calibration_service_name [default: extrinsic_calibration]
- calibrate_base_frame [default: false]
- base_frame
- mapping_pointcloud
- detected_objects [default: detected_objects]
- predicted_objects [default: predicted_objects]
- calibration_camera_info_topics
- calibration_image_topics
- calibration_pointcloud_topics
- calibration_camera_optical_link_frames
- calibration_lidar_frames
- mapping_lidar_frame
- mapping_registrator [default: gicp]
- local_map_num_keyframes [default: 15]
- dense_pointcloud_num_keyframes [default: 10]
- mapping_max_frames [default: 5000]
- mapping_min_range [default: 1.0]
- mapping_max_range [default: 100.0]
- marker_size [default: 10.0]
- mapper_resolution [default: 1.0]
- mapper_step_size [default: 0.05]
- mapper_max_iterations [default: 500]
- mapper_epsilon [default: 0.001]
- mapper_num_threads [default: 12]
- mapper_max_correspondence_distance [default: 0.1]
- lidar_calibration_max_frames [default: 7]
- camera_calibration_max_frames [default: 1]
- lost_frame_max_angle_diff [default: 25.0]
- lost_frame_interpolation_error [default: 0.05]
- lost_frame_max_acceleration [default: 8.0]
- min_calibration_range [default: 1.5]
- max_calibration_range [default: 80.0]
- calibration_min_pca_eigenvalue [default: 0.02]
- calibration_eval_max_corr_distance [default: 0.2]
- solver_iterations [default: 100]
- calibration_skip_keyframes [default: 3]
- lidar_calibration_min_frames [default: 1]
- calibration_use_only_last_frames [default: false]
- crop_z_calibration_pointclouds [default: false]
- crop_z_calibration_pointclouds_value [default: 4.0]
- base_lidar_crop_box_min_x [default: -5.0]
- base_lidar_crop_box_min_y [default: -5.0]
- base_lidar_crop_box_min_z [default: -5.0]
- base_lidar_crop_box_max_x [default: 10.0]
- base_lidar_crop_box_max_y [default: 5.0]
- base_lidar_crop_box_max_z [default: 5.0]
- base_lidar_min_plane_points_percentage [default: 10.0]
- base_lidar_max_inlier_distance [default: 0.03]
- base_lidar_min_plane_points [default: 500]
- base_lidar_max_cos_distance [default: 0.2]
- base_lidar_max_iterations [default: 500]
- base_lidar_overwrite_xy_yaw [default: false]
- calibration_min_distance_between_frames [default: 1.5]
- use_rosbag [default: true]