Package Summary

Tags | No category tags.
Version | 1.12.0
License | Apache 2
Build type | CATKIN
Use | RECOMMENDED

Repository Summary

Description | autoware.ai perf
Checkout URI | https://github.com/is-whale/autoware_learn.git
VCS Type | git
VCS Version | 1.14
Last Updated | 2025-03-14
Dev Status | UNKNOWN
Released | UNRELEASED
Tags | No category tags.
Maintainers
- Abraham Monrroy
- Jacob Lambert
Authors
- Jacob Lambert
- Abraham Monrroy
Autoware Camera-LiDAR Calibration Package
How to calibrate
Camera-LiDAR calibration is performed in two steps:
- Obtain camera intrinsics
- Obtain camera-LiDAR extrinsics
Camera intrinsic calibration
The intrinsics are obtained using the autoware_camera_calibration
script, which is a fork of the official ROS calibration tool.
How to launch
- In a sourced terminal:
rosrun autoware_camera_lidar_calibrator cameracalibrator.py --square SQUARE_SIZE --size MxN image:=/image_topic
- Play a rosbag or stream from a camera in the selected topic name.
- Move the checkerboard around within the field of view of the camera until the bars turn green.
- Press the CALIBRATE button. The output and result of the calibration will be shown in the terminal.
- Press the SAVE button. A file named YYYYmmdd_HHMM_autoware_camera_calibration.yaml will be saved in your home directory.

This file contains the intrinsic calibration required to rectify the image.
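For example, for a checkerboard with 8x6 inner corners and 10 cm squares published on /camera0/image_raw (illustrative values; substitute your own board dimensions and image topic):
rosrun autoware_camera_lidar_calibrator cameracalibrator.py --square 0.10 --size 8x6 image:=/camera0/image_raw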
Parameters available
Flag | Parameter | Type | Description
---|---|---|---
--square | SQUARE_SIZE | double | Defines the size of a checkerboard square in meters.
--size | MxN | string | Defines the layout size of the checkerboard (inner corner count).
image:= | image | string | Topic name of the camera image source topic in raw format (color or b&w).
--min_samples | min_samples | integer | Defines the minimum number of samples required to allow calibration.
--detection | engine | string | Chessboard detection engine; default cv2, alternatively matlab.
For extra details please visit: http://www.ros.org/wiki/camera_calibration
Matlab checkerboard detection engine (beta)
This node additionally supports the Matlab engine for chessboard detection, which is faster and more robust than the OpenCV implementation.
- Go to the Matlab Python setup path /PATH/TO/MATLAB/R201XY/extern/engines/python.
- Run python setup.py install to set up the Matlab bindings.
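For example (hypothetical install path; adjust to your actual Matlab release, and use sudo or a virtualenv as needed):
cd /usr/local/MATLAB/R2018b/extern/engines/python
python setup.py install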
To use this engine, add --detection matlab to the list of arguments, i.e.:
rosrun autoware_camera_lidar_calibrator cameracalibrator.py --detection matlab --square SQUARE_SIZE --size MxN image:=/image_topic
Camera-LiDAR extrinsic calibration
Camera-LiDAR extrinsic calibration is performed by clicking on corresponding points in the image and the point cloud.
This node uses clicked_point and screenpoint from the rviz and image_view2 packages, respectively.
How to launch
- Perform the intrinsic camera calibration using the intrinsic calibration tool described above (resulting in the file YYYYmmdd_HHMM_autoware_camera_calibration.yaml).
- In a sourced terminal:
roslaunch autoware_camera_lidar_calibrator camera_lidar_calibration.launch intrinsics_file:=/PATH/TO/YYYYmmdd_HHMM_autoware_camera_calibration.yaml image_src:=/image
- An image viewer will be displayed.
- Open Rviz and show the point cloud and the correct fixed frame.
- Observe the image and the point cloud simultaneously.
- Find a point within the image that you can match to a corresponding point within the point cloud.
- Click on the pixel of the point in the image.
- Click on the corresponding 3D point in Rviz using the Publish Point tool.
- Repeat this with at least 9 different points.
- Once finished, a file named YYYYmmdd_HHMM_autoware_lidar_camera_calibration.yaml will be saved in your home directory.
This file can be used with Autoware’s Calibration Publisher to publish and register the transformation between the LiDAR and camera. The file contains both the intrinsic and extrinsic parameters.
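Conceptually, estimating the extrinsics from these clicked 2D-3D correspondences is a Perspective-n-Point (PnP) problem. The sketch below is not the node's actual implementation; it only illustrates, with synthetic correspondences and an assumed intrinsic matrix, how at least nine image/point-cloud pairs determine a camera-LiDAR transform using OpenCV:

```python
import numpy as np
import cv2

# Assumed camera intrinsics from the intrinsic calibration step (illustrative values).
camera_matrix = np.array([[800.0,   0.0, 640.0],
                          [  0.0, 800.0, 360.0],
                          [  0.0,   0.0,   1.0]])
dist_coeffs = np.zeros(5)  # pretend the image is already undistorted

# Nine 3D points as clicked in the point cloud (synthetic values here) and a
# ground-truth pose used only to generate matching pixel coordinates.
object_points = np.array([[ 1.0,  0.5,  5.0], [-1.0,  0.5,  5.0], [ 0.0, -0.5,  6.0],
                          [ 2.0,  1.0,  7.0], [-2.0,  1.0,  7.0], [ 0.5, -1.0,  8.0],
                          [ 1.5, -0.8,  6.5], [-0.5,  0.3,  9.0], [ 0.8, -0.2, 10.0]])
rvec_true = np.array([0.02, -0.01, 0.03])
tvec_true = np.array([0.10, -0.20, 0.50])
image_points, _ = cv2.projectPoints(object_points, rvec_true, tvec_true,
                                    camera_matrix, dist_coeffs)

# Solve the PnP problem: pose of the point-cloud frame relative to the camera.
ok, rvec, tvec = cv2.solvePnP(object_points, image_points, camera_matrix, dist_coeffs)
rotation, _ = cv2.Rodrigues(rvec)
extrinsic = np.eye(4)
extrinsic[:3, :3] = rotation
extrinsic[:3, 3] = tvec.ravel()
print("Estimated LiDAR-to-camera transform:\n", extrinsic)
```

More clicked points spread across the image and at different depths generally make this estimate better conditioned, which is why the procedure asks for at least 9 correspondences.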
Parameters available
Parameter | Type | Description
---|---|---
image_src | string | Topic name of the camera image source topic. Default: /image_raw.
camera_id | string | If working with more than one camera, set this to the correct camera namespace, i.e. /camera0.
intrinsics_file | string | Path to the camera intrinsics calibration YAML file obtained in the previous step.
compressed_stream | bool | If set to true, a node to convert the image from a compressed stream to an uncompressed one will be launched.
Camera-LiDAR calibration example
To test the calibration results, the generated YAML file can be used with the Calibration Publisher and then the Points Image node in the Sensing tab.
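As a quick sanity check outside the runtime manager, the output file can also be inspected programmatically. The snippet below is a sketch that assumes OpenCV-style keys (CameraExtrinsicMat, CameraMat, DistCoeff) in the generated file and uses a hypothetical file name; verify both against your own output before relying on it:

```python
import cv2

# Hypothetical file name; use the actual YYYYmmdd_HHMM_... file from your home directory.
fs = cv2.FileStorage("20190101_1200_autoware_lidar_camera_calibration.yaml",
                     cv2.FILE_STORAGE_READ)
extrinsic = fs.getNode("CameraExtrinsicMat").mat()  # assumed key: 4x4 LiDAR/camera transform
camera_matrix = fs.getNode("CameraMat").mat()       # assumed key: 3x3 intrinsic matrix
dist_coeffs = fs.getNode("DistCoeff").mat()         # assumed key: distortion coefficients
fs.release()

print("Extrinsic matrix:\n", extrinsic)
print("Camera matrix:\n", camera_matrix)
print("Distortion coefficients:", dist_coeffs.ravel())
```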
Notes
This calibration tool assumes that the Velodyne LiDAR is mounted with its default axis convention:
- X axis points to the front
- Y axis points to the left
- Z axis points upwards
Changelog for package autoware_camera_lidar_calibrator

1.11.0 (2019-03-21)
- [fix] Install commands for all the packages (#1861)
  - Initial fixes to detection, sensing, semantics and utils
  - Fixing wrong filename on install command
  - Fixes to install commands
  - Hokuyo fix name
  - Fix obj db
  - Obj db include fixes
  - End of final cleaning sweep
  - Incorrect command order in runtime manager
  - Param tempfile not required by runtime_manager
  - Fixes to runtime manager install commands
  - Remove devel directory from catkin, if any
  - Updated launch files for robosense
  - Updated robosense
  - Fix/add missing install (#1977)
  - Added launch install to lidar_kf_contour_track
  - Added install to op_global_planner
  - Added install to way_planner
  - Added install to op_local_planner
  - Added install to op_simulation_package
  - Added install to op_utilities
  - Added install to sync
  - Improved installation script for pointgrey packages
  - Fixed nodelet error for gmsl cameras
  - Use install space in catkin as well
  - Add install to catkin
  - Fix install directives (#1990)
  - Fixed installation path
  - Fixed params installation path
  - Fixed cfg installation path
  - Delete cache on colcon_release
- Fix package name and dependency (#1914)
- Fix license notice in corresponding package.xml
- Contributors: Abraham Monrroy Cano, Akihito Ohsato, amc-nu

1.10.0 (2019-01-17)
- Fixes for catkin_make
- Switch to Apache 2 license (develop branch) (#1741)
  - Switch to Apache 2: replace BSD-3 license header with Apache 2 and reassign copyright to the Autoware Foundation.
  - Update license on Python files
  - Update copyright years
  - Add #ifndef/define _POINTS_IMAGE_H_
  - Updated license comment
- Use colcon as the build tool (#1704)
  - Switch to colcon as the build tool instead of catkin
  - Added cmake-target
  - Added note about the second colcon call
  - Added warning about catkin* scripts being deprecated
  - Fix COLCON_OPTS
  - Added install targets
  - Update Docker image tags
- Message packages fixes
- Fix missing dependency
- Feature/perception visualization cleanup (#1648)
  - Initial commit for visualization package

(Changelog truncated at 100 lines; see the full file in the repository.)
Package Dependencies
System Dependencies
Name |
---|
qtbase5-dev |
libqt5-core |
Launch files
- launch/camera_lidar_calibration.launch
  Example: roslaunch autoware_camera_lidar_calibrator camera_lidar_calibration.launch intrinsics_file:=/home/ne0/Desktop/calib_heat_camera1_rear_center_fisheye.yaml compressed_stream:=True camera_id:=camera1
  Arguments:
  - image_src [default: /image_raw]
  - camera_info_src [default: /camera_info]
  - camera_id [default: /]
  - intrinsics_file
  - compressed_stream [default: false]
  - target_frame [default: velodyne]
  - camera_frame [default: camera]
-
Fixed nodelet error for gmsl cameras
-
USe install space in catkin as well
-
add install to catkin
-
Fix install directives (#1990)
-
Fixed installation path
-
Fixed params installation path
-
Fixed cfg installation path
- Delete cache on colcon_release
-
- Fix package name and dependency (#1914)
- Fix license notice in corresponding package.xml
- Contributors: Abraham Monrroy Cano, Akihito Ohsato, amc-nu
1.10.0 (2019-01-17)
- Fixes for catkin_make
- Switch to Apache 2 license (develop branch)
(#1741)
- Switch to Apache 2
* Replace BSD-3 license header with Apache 2 and reassign copyright to the Autoware Foundation.
- Update license on Python files
- Update copyright years
- Add #ifndef/define _POINTS_IMAGE_H_
- Updated license comment
- Use colcon as the build tool
(#1704)
- Switch to colcon as the build tool instead of catkin
- Added cmake-target
- Added note about the second colcon call
- Added warning about catkin* scripts being deprecated
- Fix COLCON_OPTS
- Added install targets
- Update Docker image tags
- Message packages fixes
- Fix missing dependency
- Feature/perception visualization cleanup
(#1648)
-
- Initial commit for visualization package
-
File truncated at 100 lines see the full file
Package Dependencies
System Dependencies
Name |
---|
qtbase5-dev |
libqt5-core |
Dependant Packages
Launch files
- launch/camera_lidar_calibration.launch
- roslaunch autoware_camera_lidar_calibrator camera_lidar_calibration.launch intrinsics_file:=/home/ne0/Desktop/calib_heat_camera1_rear_center_fisheye.yaml compressed_stream:=True camera_id:=camera1
-
- image_src [default: /image_raw]
- camera_info_src [default: /camera_info]
- camera_id [default: /]
- intrinsics_file
- compressed_stream [default: false]
- target_frame [default: velodyne]
- camera_frame [default: camera]
Messages
Services
Plugins
Recent questions tagged autoware_camera_lidar_calibrator at Robotics Stack Exchange
Package Summary
Tags | No category tags. |
Version | 1.12.0 |
License | Apache 2 |
Build type | CATKIN |
Use | RECOMMENDED |
Repository Summary
Description | autoware.ai perf |
Checkout URI | https://github.com/is-whale/autoware_learn.git |
VCS Type | git |
VCS Version | 1.14 |
Last Updated | 2025-03-14 |
Dev Status | UNKNOWN |
Released | UNRELEASED |
Tags | No category tags. |
Contributing |
Help Wanted (-)
Good First Issues (-) Pull Requests to Review (-) |
Package Description
Additional Links
Maintainers
- Abraham Monrroy
- Jacob Lambert
Authors
- Jacob Lambert
- Abraham Monrroy
Autoware Camera-LiDAR Calibration Package
How to calibrate
Camera-LiDAR calibration is performed in two steps:
- Obtain camera intrinsics
- Obtain camera-LiDAR extrinsics
Camera intrinsic calibration
The intrinsics are obtained using the autoware_camera_calibration
script, which is a fork of the official ROS calibration tool.
How to launch
- In a sourced terminal:
rosrun autoware_camera_lidar_calibrator cameracalibrator.py --square SQUARE_SIZE --size MxN image:=/image_topic
- Play a rosbag or stream from a camera in the selected topic name.
- Move the checkerboard around within the field of view of the camera until the bars turn green.
- Press the
CALIBRATE
button. - The output and result of the calibration will be shown in the terminal.
- Press the
SAVE
button. - A file will be saved in your home directory with the name
YYYYmmdd_HHMM_autoware_camera_calibration.yaml
.
This file will contain the intrinsic calibration to rectify the image.
Parameters available
Flag| Parameter| Type| Description|
—–|———-|—–|——–
–square|SQUARE_SIZE
|double |Defines the size of the checkerboard square in meters.|
–size|MxN
|string |Defines the layout size of the checkerboard (inner size).|
image:=|image
|string |Topic name of the camera image source topic in raw
format (color or b&w).|
–min_samples|min_samples
|integer |Defines the minimum number of samples required to allow calibration.|
–detection|engine
|string|Chessboard detection engine, default cv2
or matlab
|
For extra details please visit: http://www.ros.org/wiki/camera_calibration
Matlab checkerboard detection engine (beta)
This node additionally supports the Matlab engine for chessboard detection, which is faster and more robust than the OpenCV implementation.
- Go to the Matlab python setup path
/PATH/TO/MATLAB/R201XY/extern/engines/python
. - Run
python setup.py install
to setup Matlab bindings.
To use this engine, add --detection matlab
to the list of arguments, i.e.
rosrun autoware_camera_lidar_calibrator cameracalibrator.py --detection matlab --square SQUARE_SIZE --size MxN image:=/image_topic
Camera-LiDAR extrinsic calibration
Camera-LiDAR extrinsic calibration is performed by clicking on corresponding points in the image and the point cloud.
This node uses clicked_point
and screenpoint
from the rviz
and image_view2
packages respectively.
How to launch
- Perform the intrinsic camera calibration using camera intrinsic calibration tool described above (resulting in the file
YYYYmmdd_HHMM_autoware_camera_calibration.yaml
). - In a sourced terminal:
roslaunch autoware_camera_lidar_calibrator camera_lidar_calibration.launch intrinsics_file:=/PATH/TO/YYYYmmdd_HHMM_autoware_camera_calibration.yaml image_src:=/image
- An image viewer will be displayed.
- Open Rviz and show the point cloud and the correct fixed frame.
- Observe the image and the point cloud simultaneously.
- Find a point within the image that you can match to a corresponding point within the point cloud.
- Click on the pixel of the point in the image.
- Click on the corresponding 3D point in Rviz using the Publish Point tool.
- Repeat this with at least 9 different points.
- Once finished, a file will be saved in your home directory with the name
YYYYmmdd_HHMM_autoware_lidar_camera_calibration.yaml
.
This file can be used with Autoware’s Calibration Publisher to publish and register the transformation between the LiDAR and camera. The file contains both the intrinsic and extrinsic parameters.
Parameters available
Parameter | Type | Description | |
---|---|---|---|
image_src |
string | Topic name of the camera image source topic. Default: /image_raw . |
|
camera_id |
string | If working with more than one camera, set this to the correct camera namespace, i.e. /camera0 . |
|
intrinsics_file |
string | Topic name of the camera image source topic in raw format (color or b&w). |
|
compressed_stream |
bool | If set to true, a node to convert the image from a compressed stream to an uncompressed one will be launched. |
Camera-LiDAR calibration example
To test the calibration results, the generated yaml file can be used in the Calibration Publisher
and then the Points Image
in the Sensing tab.
Notes
This calibration tool assumes that the Velodyne is installed with the default order of axes for the Velodyne sensor.
- X axis points to the front
- Y axis points to the left
- Z axis points upwards
Changelog for package autoware_camera_lidar_calibrator
1.11.0 (2019-03-21)
- [fix] Install commands for all the packages
(#1861)
-
Initial fixes to detection, sensing, semantics and utils
-
fixing wrong filename on install command
-
Fixes to install commands
-
Hokuyo fix name
-
Fix obj db
-
Obj db include fixes
-
End of final cleaning sweep
-
Incorrect command order in runtime manager
-
Param tempfile not required by runtime_manager
-
- Fixes to runtime manager install commands
-
Remove devel directory from catkin, if any
-
Updated launch files for robosense
-
Updated robosense
-
Fix/add missing install (#1977)
-
Added launch install to lidar_kf_contour_track
-
Added install to op_global_planner
-
Added install to way_planner
-
Added install to op_local_planner
-
Added install to op_simulation_package
-
Added install to op_utilities
-
Added install to sync
-
- Improved installation script for pointgrey packages
-
Fixed nodelet error for gmsl cameras
-
USe install space in catkin as well
-
add install to catkin
-
Fix install directives (#1990)
-
Fixed installation path
-
Fixed params installation path
-
Fixed cfg installation path
- Delete cache on colcon_release
-
- Fix package name and dependency (#1914)
- Fix license notice in corresponding package.xml
- Contributors: Abraham Monrroy Cano, Akihito Ohsato, amc-nu
1.10.0 (2019-01-17)
- Fixes for catkin_make
- Switch to Apache 2 license (develop branch)
(#1741)
- Switch to Apache 2
* Replace BSD-3 license header with Apache 2 and reassign copyright to the Autoware Foundation.
- Update license on Python files
- Update copyright years
- Add #ifndef/define _POINTS_IMAGE_H_
- Updated license comment
- Use colcon as the build tool
(#1704)
- Switch to colcon as the build tool instead of catkin
- Added cmake-target
- Added note about the second colcon call
- Added warning about catkin* scripts being deprecated
- Fix COLCON_OPTS
- Added install targets
- Update Docker image tags
- Message packages fixes
- Fix missing dependency
- Feature/perception visualization cleanup
(#1648)
-
- Initial commit for visualization package
-
File truncated at 100 lines see the full file
Package Dependencies
System Dependencies
Name |
---|
qtbase5-dev |
libqt5-core |
Dependant Packages
Launch files
- launch/camera_lidar_calibration.launch
- roslaunch autoware_camera_lidar_calibrator camera_lidar_calibration.launch intrinsics_file:=/home/ne0/Desktop/calib_heat_camera1_rear_center_fisheye.yaml compressed_stream:=True camera_id:=camera1
-
- image_src [default: /image_raw]
- camera_info_src [default: /camera_info]
- camera_id [default: /]
- intrinsics_file
- compressed_stream [default: false]
- target_frame [default: velodyne]
- camera_frame [default: camera]