Package Summary
| Version | 0.50.0 |
| License | Apache License 2.0 |
| Build type | AMENT_CMAKE |
| Use | RECOMMENDED |
Repository Summary
| Description | |
| Checkout URI | https://github.com/autowarefoundation/autoware_universe.git |
| VCS Type | git |
| VCS Version | main |
| Last Updated | 2026-02-25 |
| Dev Status | UNKNOWN |
| Released | UNRELEASED |
| Contributing | Help Wanted (-) Good First Issues (-) Pull Requests to Review (-) |
Package Description
Maintainers
- Kenzo Lobos-Tsunekawa
- Amadeusz Szymko
- Kotaro Uetake
- Masato Saeki
- Kok Seang Tan
Authors
autoware_bevfusion
Purpose
The autoware_bevfusion package is used for 3D object detection based on lidar or camera-lidar fusion.
Inner-workings / Algorithms
This package implements a TensorRT-powered inference node for BEVFusion [1]. The sparse convolution backend is spconv, which Autoware installs automatically in its setup script. If needed, it can also be built and installed manually by following its build instructions.
Inputs / Outputs
Input
| Name | Type | Description |
|---|---|---|
| ~/input/pointcloud | sensor_msgs::msg::PointCloud2 | Input pointcloud topic. |
| ~/input/image* | sensor_msgs::msg::Image | Input image topics. |
| ~/input/camera_info* | sensor_msgs::msg::CameraInfo | Input camera info topics. |
Output
| Name | Type | Description |
|---|---|---|
| ~/output/objects | autoware_perception_msgs::msg::DetectedObjects | Detected objects. |
| debug/cyclic_time_ms | tier4_debug_msgs::msg::Float64Stamped | Cyclic time (ms). |
| debug/pipeline_latency_ms | tier4_debug_msgs::msg::Float64Stamped | Pipeline latency time (ms). |
| debug/processing_time/preprocess_ms | tier4_debug_msgs::msg::Float64Stamped | Preprocess time (ms). |
| debug/processing_time/inference_ms | tier4_debug_msgs::msg::Float64Stamped | Inference time (ms). |
| debug/processing_time/postprocess_ms | tier4_debug_msgs::msg::Float64Stamped | Postprocess time (ms). |
| debug/processing_time/total_ms | tier4_debug_msgs::msg::Float64Stamped | Total processing time (ms). |
Parameters
BEVFusion node
{{ json_to_markdown("perception/autoware_bevfusion/schema/bevfusion.schema.json") }}
BEVFusion model
{{ json_to_markdown("perception/autoware_bevfusion/schema/ml_package_bevfusion.schema.json") }}
Detection class remapper
{{ json_to_markdown("perception/autoware_bevfusion/schema/detection_class_remapper.schema.json") }}
The build_only option
The autoware_bevfusion node has a build_only option that builds the TensorRT engine file from the specified ONNX file and then exits:
ros2 launch autoware_bevfusion bevfusion.launch.xml build_only:=true
The log_level option
The default logging severity level for autoware_bevfusion is info. For debugging, the severity level can be decreased via the log_level parameter:
ros2 launch autoware_bevfusion bevfusion.launch.xml log_level:=debug
Assumptions / Known limits
This node assumes that the input pointcloud follows the PointXYZIRC layout defined in autoware_point_types.
Trained Models
The ONNX and config files can be downloaded from the following links and must be placed inside $(env HOME)/autoware_data/bevfusion:
- lidar-only model:
- camera-lidar model:
- class remapper
The model was trained on TIER IV's internal database (~35k lidar frames) for 30 epochs.
Changelog
References/External links
[1] Zhijian Liu, Haotian Tang, Alexander Amini, Xinyu Yang, Huizi Mao, Daniela Rus, and Song Han. “BEVFusion: Multi-Task Multi-Sensor Fusion with Unified Bird’s-Eye View Representation.” 2023 International Conference on Robotics and Automation.
(Optional) Future extensions / Unimplemented parts
Although this node can perform camera-lidar fusion, it is the first method in Autoware to use both images and lidar for inference, so the package structure and its full integration into the Autoware pipeline are left for future work. In its current form, it can be used without any changes as a lidar-based detector.
Changelog for package autoware_bevfusion
0.50.0 (2026-02-14)
-
Merge remote-tracking branch 'origin/main' into humble
-
feat(autoware_bevfusion): update nvcc flags (#12045) Co-authored-by: Kotaro Uetake <60615504+ktro2828@users.noreply.github.com>
-
feat(BEVFusion): move cuda stream creation to the beginning of BEVFusionTRT initialization (#11967)
- move cuda stream init before init
* Remove empty lines ---------
-
fix(bevfusion): suppress -Werror for precomputed_features.cpp (#11959)
-
fix(autoware_bevfusion): restore spconv in cmakelists (#11953)
-
chore(autoware_bevfusion): remove cudnn dependency (#11887)
-
Contributors: Amadeusz Szymko, Kok Seang Tan, Mete Fatih Cırıt, Ryohsuke Mitsudome
0.49.0 (2025-12-30)
-
Merge remote-tracking branch 'origin/main' into prepare-0.49.0-changelog
-
feat(autoware_bevfusion): separate image backbone from fusion model and add lidar intensity option (#11468)
- image backbone building
- inference running without error
- working bevfusion-cl
- style(pre-commit): autofix
- removed unnecessary changes
- style(pre-commit): autofix
- made requested changes
- style(pre-commit): autofix
- updated memcopy for img_matrices
- fix parameter names and defaults
- style(pre-commit): autofix
- fixed complile time issues
- refactor pre-process method
- refactored node code
- style(pre-commit): autofix
- refactor init method
- style(pre-commit): autofix
- split node code
- style(pre-commit): autofix
- helper code complexity refactor
- fix lint error
- style(pre-commit): autofix
- update schema params
* suppress clang changes ---------Co-authored-by: pre-commit-ci-lite[bot] <117423508+pre-commit-ci-lite[bot]@users.noreply.github.com>
-
Contributors: Ryohsuke Mitsudome, Samrat Thapa
0.48.0 (2025-11-18)
0.47.1 (2025-08-14)
0.47.0 (2025-08-11)
- style(pre-commit): update to clang-format-20 (#11088) Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
- Contributors: Mete Fatih Cırıt
0.46.0 (2025-06-20)
-
Merge remote-tracking branch 'upstream/main' into tmp/TaikiYamada/bump_version_base
-
fix(autoware_bevfusion): fix clang-tidy errors by removing unused fields (#10850)
- fix clang-tidy errors by removing unused fields
* style(pre-commit): autofix ---------Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
-
fix(cmake): update spconv availability messages to use STATUS and WAR… (#10690)
Package Dependencies
System Dependencies
Dependent Packages
| Name | Deps |
|---|---|
| tier4_perception_launch | |
Launch files
- launch/bevfusion.launch.xml
-
- input/pointcloud [default: /sensing/lidar/concatenated/pointcloud]
- output/objects [default: objects]
- data_path [default: $(env HOME)/autoware_data]
- model_name [default: bevfusion_lidar]
- model_path [default: $(var data_path)/bevfusion]
- model_param_path [default: $(find-pkg-share autoware_bevfusion)/config/$(var model_name).param.yaml]
- ml_package_param_path [default: $(var model_path)/ml_package_$(var model_name).param.yaml]
- class_remapper_param_path [default: $(var model_path)/detection_class_remapper.param.yaml]
- common_param_path [default: $(find-pkg-share autoware_bevfusion)/config/common_bevfusion.param.yaml]
- build_only [default: false]
- log_level [default: info]
- use_pointcloud_container [default: false]
- pointcloud_container_name [default: pointcloud_container]
- use_decompress [default: false]
- camera_info0 [default: /sensing/camera/camera0/camera_info]
- camera_info1 [default: /sensing/camera/camera1/camera_info]
- camera_info2 [default: /sensing/camera/camera2/camera_info]
- camera_info3 [default: /sensing/camera/camera3/camera_info]
- camera_info4 [default: /sensing/camera/camera4/camera_info]
- camera_info5 [default: /sensing/camera/camera5/camera_info]
- image0 [default: /sensing/camera/camera0/image_rect_color]
- image1 [default: /sensing/camera/camera1/image_rect_color]
- image2 [default: /sensing/camera/camera2/image_rect_color]
- image3 [default: /sensing/camera/camera3/image_rect_color]
- image4 [default: /sensing/camera/camera4/image_rect_color]
- image5 [default: /sensing/camera/camera5/image_rect_color]
- decompressor_param_file [default: $(find-pkg-share autoware_image_transport_decompressor)/config/image_transport_decompressor.param.yaml]
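The arguments above can also be overridden when including the launch file from a higher-level launch description. A minimal sketch, using only the default values listed above (adjust names and values to your setup):

```xml
<include file="$(find-pkg-share autoware_bevfusion)/launch/bevfusion.launch.xml">
  <!-- lidar-only model; switch model_name to use a camera-lidar variant -->
  <arg name="model_name" value="bevfusion_lidar"/>
  <arg name="input/pointcloud" value="/sensing/lidar/concatenated/pointcloud"/>
  <arg name="build_only" value="false"/>
</include>
```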
Messages
Services
Plugins
Package Summary
| Version | 0.50.0 |
| License | Apache License 2.0 |
| Build type | AMENT_CMAKE |
| Use | RECOMMENDED |
Repository Summary
| Description | |
| Checkout URI | https://github.com/autowarefoundation/autoware_universe.git |
| VCS Type | git |
| VCS Version | main |
| Last Updated | 2026-02-25 |
| Dev Status | UNKNOWN |
| Released | UNRELEASED |
| Contributing |
Help Wanted (-)
Good First Issues (-) Pull Requests to Review (-) |
Package Description
Maintainers
- Kenzo Lobos-Tsunekawa
- Amadeusz Szymko
- Kotaro Uetake
- Masato Saeki
- Kok Seang Tan
Authors
autoware_bevfusion
Purpose
The autoware_bevfusion package is used for 3D object detection based on lidar or camera-lidar fusion.
Inner-workings / Algorithms
This package implements a TensorRT powered inference node for BEVFusion [1]. The sparse convolution backend corresponds to spconv. Autoware installs it automatically in its setup script. If needed, the user can also build it and install it following the following instructions.
Inputs / Outputs
Input
| Name | Type | Description |
|---|---|---|
~/input/pointcloud |
sensor_msgs::msg::PointCloud2 |
Input pointcloud topics. |
~/input/image* |
sensor_msgs::msg::Image |
Input image topics. |
~/input/camera_info* |
sensor_msgs::msg::CameraInfo |
Input camera info topics. |
Output
| Name | Type | Description |
|---|---|---|
~/output/objects |
autoware_perception_msgs::msg::DetectedObjects |
Detected objects. |
debug/cyclic_time_ms |
tier4_debug_msgs::msg::Float64Stamped |
Cyclic time (ms). |
debug/pipeline_latency_ms |
tier4_debug_msgs::msg::Float64Stamped |
Pipeline latency time (ms). |
debug/processing_time/preprocess_ms |
tier4_debug_msgs::msg::Float64Stamped |
Preprocess (ms). |
debug/processing_time/inference_ms |
tier4_debug_msgs::msg::Float64Stamped |
Inference time (ms). |
debug/processing_time/postprocess_ms |
tier4_debug_msgs::msg::Float64Stamped |
Postprocess time (ms). |
debug/processing_time/total_ms |
tier4_debug_msgs::msg::Float64Stamped |
Total processing time (ms). |
Parameters
BEVFusion node
{{ json_to_markdown(“perception/autoware_bevfusion/schema/bevfusion.schema.json”) }}
BEVFusion model
{{ json_to_markdown(“perception/autoware_bevfusion/schema/ml_package_bevfusion.schema.json”) }}
Detection class remapper
{{ json_to_markdown(“perception/autoware_bevfusion/schema/detection_class_remapper.schema.json”) }}
The build_only option
The autoware_bevfusion node has a build_only option to build the TensorRT engine file from the specified ONNX file, after which the program exits.
ros2 launch autoware_bevfusion bevfusion.launch.xml build_only:=true
The log_level option
The default logging severity level for autoware_bevfusion is info. For debugging purposes, the developer may decrease severity level using log_level parameter:
ros2 launch autoware_bevfusion bevfusion.launch.xml log_level:=debug
Assumptions / Known limits
This node assumes that the input pointcloud follows the PointXYZIRC layout defined in autoware_point_types.
Trained Models
You can download the onnx and config files in the following links.
The files need to be placed inside $(env HOME)/autoware_data/bevfusion
- lidar-only model:
- camera-lidar model:
- class remapper
The model was trained in TIER IV’s internal database (~35k lidar frames) for 30 epochs.
Changelog
References/External links
[1] Zhijian Liu, Haotian Tang, Alexander Amini, Xinyu Yang, Huizi Mao, Daniela Rus, and Song Han. “BEVFusion: Multi-Task Multi-Sensor Fusion with Unified Bird’s-Eye View Representation.” 2023 International Conference on Robotics and Automation.
(Optional) Future extensions / Unimplemented parts
Although this node can perform camera-lidar fusion, as it is the first method in autoware to actually use images and lidars for inference, the package structure and its full integration in the autoware pipeline are left for future work. In the current structure, it can be employed without any changes as a lidar-based detector.
Changelog for package autoware_bevfusion
0.50.0 (2026-02-14)
-
Merge remote-tracking branch 'origin/main' into humble
-
feat(autoware_bevfusion): update nvcc flags (#12045) Co-authored-by: Kotaro Uetake <<60615504+ktro2828@users.noreply.github.com>>
-
feat(BEVFusion): move cuda stream creation to the beginning of BEVFusionTRT initialization (#11967)
- move cuda stream init before init
* Remove empty lines ---------
-
fix(bevfusion): suppress -Werror for precomputed_features.cpp (#11959)
-
fix(autoware_bevfusion): restore spconv in cmakelists (#11953)
-
chore(autoware_bevfusion): remove cudnn dependency (#11887)
-
Contributors: Amadeusz Szymko, Kok Seang Tan, Mete Fatih Cırıt, Ryohsuke Mitsudome
0.49.0 (2025-12-30)
-
Merge remote-tracking branch 'origin/main' into prepare-0.49.0-changelog
-
feat(autoware_bevfusion): separate image backbone from fusion model and add lidar intensity option (#11468)
- image backbone building
- inference running without error
- working bevfusion-cl
- style(pre-commit): autofix
- removed unnecessary changes
- style(pre-commit): autofix
- made requested changes
- style(pre-commit): autofix
- updated memcopy for img_matrices
- fix parameter names and defaults
- style(pre-commit): autofix
- fixed complile time issues
- refactor pre-process method
- refactored node code
- style(pre-commit): autofix
- refactor init method
- style(pre-commit): autofix
- split node code
- style(pre-commit): autofix
- helper code complexity refactor
- fix lint error
- style(pre-commit): autofix
- update schema params
* suppress clang changes ---------Co-authored-by: pre-commit-ci-lite[bot] <117423508+pre-commit-ci-lite[bot]@users.noreply.github.com>
-
Contributors: Ryohsuke Mitsudome, Samrat Thapa
0.48.0 (2025-11-18)
0.47.1 (2025-08-14)
0.47.0 (2025-08-11)
- style(pre-commit): update to clang-format-20 (#11088) Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
- Contributors: Mete Fatih Cırıt
0.46.0 (2025-06-20)
-
Merge remote-tracking branch 'upstream/main' into tmp/TaikiYamada/bump_version_base
-
fix(autoware_bevfusion): fix clang-tidy errors by removing unused fields (#10850)
- fix clang-tidy errors by removing unused fields
* style(pre-commit): autofix ---------Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
-
fix(cmake): update spconv availability messages to use STATUS and WAR… (#10690)
File truncated at 100 lines see the full file
Package Dependencies
System Dependencies
Dependant Packages
| Name | Deps |
|---|---|
| tier4_perception_launch |
Launch files
- launch/bevfusion.launch.xml
-
- input/pointcloud [default: /sensing/lidar/concatenated/pointcloud]
- output/objects [default: objects]
- data_path [default: $(env HOME)/autoware_data]
- model_name [default: bevfusion_lidar]
- model_path [default: $(var data_path)/bevfusion]
- model_param_path [default: $(find-pkg-share autoware_bevfusion)/config/$(var model_name).param.yaml]
- ml_package_param_path [default: $(var model_path)/ml_package_$(var model_name).param.yaml]
- class_remapper_param_path [default: $(var model_path)/detection_class_remapper.param.yaml]
- common_param_path [default: $(find-pkg-share autoware_bevfusion)/config/common_bevfusion.param.yaml]
- build_only [default: false]
- log_level [default: info]
- use_pointcloud_container [default: false]
- pointcloud_container_name [default: pointcloud_container]
- use_decompress [default: false]
- camera_info0 [default: /sensing/camera/camera0/camera_info]
- camera_info1 [default: /sensing/camera/camera1/camera_info]
- camera_info2 [default: /sensing/camera/camera2/camera_info]
- camera_info3 [default: /sensing/camera/camera3/camera_info]
- camera_info4 [default: /sensing/camera/camera4/camera_info]
- camera_info5 [default: /sensing/camera/camera5/camera_info]
- image0 [default: /sensing/camera/camera0/image_rect_color]
- image1 [default: /sensing/camera/camera1/image_rect_color]
- image2 [default: /sensing/camera/camera2/image_rect_color]
- image3 [default: /sensing/camera/camera3/image_rect_color]
- image4 [default: /sensing/camera/camera4/image_rect_color]
- image5 [default: /sensing/camera/camera5/image_rect_color]
- decompressor_param_file [default: $(find-pkg-share autoware_image_transport_decompressor)/config/image_transport_decompressor.param.yaml]
Messages
Services
Plugins
Recent questions tagged autoware_bevfusion at Robotics Stack Exchange
Package Summary
| Version | 0.50.0 |
| License | Apache License 2.0 |
| Build type | AMENT_CMAKE |
| Use | RECOMMENDED |
Repository Summary
| Description | |
| Checkout URI | https://github.com/autowarefoundation/autoware_universe.git |
| VCS Type | git |
| VCS Version | main |
| Last Updated | 2026-02-25 |
| Dev Status | UNKNOWN |
| Released | UNRELEASED |
| Contributing |
Help Wanted (-)
Good First Issues (-) Pull Requests to Review (-) |
Package Description
Maintainers
- Kenzo Lobos-Tsunekawa
- Amadeusz Szymko
- Kotaro Uetake
- Masato Saeki
- Kok Seang Tan
Authors
autoware_bevfusion
Purpose
The autoware_bevfusion package is used for 3D object detection based on lidar or camera-lidar fusion.
Inner-workings / Algorithms
This package implements a TensorRT powered inference node for BEVFusion [1]. The sparse convolution backend corresponds to spconv. Autoware installs it automatically in its setup script. If needed, the user can also build it and install it following the following instructions.
Inputs / Outputs
Input
| Name | Type | Description |
|---|---|---|
~/input/pointcloud |
sensor_msgs::msg::PointCloud2 |
Input pointcloud topics. |
~/input/image* |
sensor_msgs::msg::Image |
Input image topics. |
~/input/camera_info* |
sensor_msgs::msg::CameraInfo |
Input camera info topics. |
Output
| Name | Type | Description |
|---|---|---|
~/output/objects |
autoware_perception_msgs::msg::DetectedObjects |
Detected objects. |
debug/cyclic_time_ms |
tier4_debug_msgs::msg::Float64Stamped |
Cyclic time (ms). |
debug/pipeline_latency_ms |
tier4_debug_msgs::msg::Float64Stamped |
Pipeline latency time (ms). |
debug/processing_time/preprocess_ms |
tier4_debug_msgs::msg::Float64Stamped |
Preprocess (ms). |
debug/processing_time/inference_ms |
tier4_debug_msgs::msg::Float64Stamped |
Inference time (ms). |
debug/processing_time/postprocess_ms |
tier4_debug_msgs::msg::Float64Stamped |
Postprocess time (ms). |
debug/processing_time/total_ms |
tier4_debug_msgs::msg::Float64Stamped |
Total processing time (ms). |
Parameters
BEVFusion node
{{ json_to_markdown(“perception/autoware_bevfusion/schema/bevfusion.schema.json”) }}
BEVFusion model
{{ json_to_markdown(“perception/autoware_bevfusion/schema/ml_package_bevfusion.schema.json”) }}
Detection class remapper
{{ json_to_markdown(“perception/autoware_bevfusion/schema/detection_class_remapper.schema.json”) }}
The build_only option
The autoware_bevfusion node has a build_only option to build the TensorRT engine file from the specified ONNX file, after which the program exits.
ros2 launch autoware_bevfusion bevfusion.launch.xml build_only:=true
The log_level option
The default logging severity level for autoware_bevfusion is info. For debugging purposes, the developer may decrease severity level using log_level parameter:
ros2 launch autoware_bevfusion bevfusion.launch.xml log_level:=debug
Assumptions / Known limits
This node assumes that the input pointcloud follows the PointXYZIRC layout defined in autoware_point_types.
Trained Models
You can download the onnx and config files in the following links.
The files need to be placed inside $(env HOME)/autoware_data/bevfusion
- lidar-only model:
- camera-lidar model:
- class remapper
The model was trained in TIER IV’s internal database (~35k lidar frames) for 30 epochs.
Changelog
References/External links
[1] Zhijian Liu, Haotian Tang, Alexander Amini, Xinyu Yang, Huizi Mao, Daniela Rus, and Song Han. “BEVFusion: Multi-Task Multi-Sensor Fusion with Unified Bird’s-Eye View Representation.” 2023 International Conference on Robotics and Automation.
(Optional) Future extensions / Unimplemented parts
Although this node can perform camera-lidar fusion, as it is the first method in autoware to actually use images and lidars for inference, the package structure and its full integration in the autoware pipeline are left for future work. In the current structure, it can be employed without any changes as a lidar-based detector.
Changelog for package autoware_bevfusion
0.50.0 (2026-02-14)
-
Merge remote-tracking branch 'origin/main' into humble
-
feat(autoware_bevfusion): update nvcc flags (#12045) Co-authored-by: Kotaro Uetake <<60615504+ktro2828@users.noreply.github.com>>
-
feat(BEVFusion): move cuda stream creation to the beginning of BEVFusionTRT initialization (#11967)
- move cuda stream init before init
* Remove empty lines ---------
-
fix(bevfusion): suppress -Werror for precomputed_features.cpp (#11959)
-
fix(autoware_bevfusion): restore spconv in cmakelists (#11953)
-
chore(autoware_bevfusion): remove cudnn dependency (#11887)
-
Contributors: Amadeusz Szymko, Kok Seang Tan, Mete Fatih Cırıt, Ryohsuke Mitsudome
0.49.0 (2025-12-30)
-
Merge remote-tracking branch 'origin/main' into prepare-0.49.0-changelog
-
feat(autoware_bevfusion): separate image backbone from fusion model and add lidar intensity option (#11468)
- image backbone building
- inference running without error
- working bevfusion-cl
- style(pre-commit): autofix
- removed unnecessary changes
- style(pre-commit): autofix
- made requested changes
- style(pre-commit): autofix
- updated memcopy for img_matrices
- fix parameter names and defaults
- style(pre-commit): autofix
- fixed complile time issues
- refactor pre-process method
- refactored node code
- style(pre-commit): autofix
- refactor init method
- style(pre-commit): autofix
- split node code
- style(pre-commit): autofix
- helper code complexity refactor
- fix lint error
- style(pre-commit): autofix
- update schema params
* suppress clang changes ---------Co-authored-by: pre-commit-ci-lite[bot] <117423508+pre-commit-ci-lite[bot]@users.noreply.github.com>
-
Contributors: Ryohsuke Mitsudome, Samrat Thapa
0.48.0 (2025-11-18)
0.47.1 (2025-08-14)
0.47.0 (2025-08-11)
- style(pre-commit): update to clang-format-20 (#11088) Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
- Contributors: Mete Fatih Cırıt
0.46.0 (2025-06-20)
-
Merge remote-tracking branch 'upstream/main' into tmp/TaikiYamada/bump_version_base
-
fix(autoware_bevfusion): fix clang-tidy errors by removing unused fields (#10850)
- fix clang-tidy errors by removing unused fields
* style(pre-commit): autofix ---------Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
-
fix(cmake): update spconv availability messages to use STATUS and WAR… (#10690)
File truncated at 100 lines see the full file
Package Dependencies
System Dependencies
Dependant Packages
| Name | Deps |
|---|---|
| tier4_perception_launch |
Launch files
- launch/bevfusion.launch.xml
-
- input/pointcloud [default: /sensing/lidar/concatenated/pointcloud]
- output/objects [default: objects]
- data_path [default: $(env HOME)/autoware_data]
- model_name [default: bevfusion_lidar]
- model_path [default: $(var data_path)/bevfusion]
- model_param_path [default: $(find-pkg-share autoware_bevfusion)/config/$(var model_name).param.yaml]
- ml_package_param_path [default: $(var model_path)/ml_package_$(var model_name).param.yaml]
- class_remapper_param_path [default: $(var model_path)/detection_class_remapper.param.yaml]
- common_param_path [default: $(find-pkg-share autoware_bevfusion)/config/common_bevfusion.param.yaml]
- build_only [default: false]
- log_level [default: info]
- use_pointcloud_container [default: false]
- pointcloud_container_name [default: pointcloud_container]
- use_decompress [default: false]
- camera_info0 [default: /sensing/camera/camera0/camera_info]
- camera_info1 [default: /sensing/camera/camera1/camera_info]
- camera_info2 [default: /sensing/camera/camera2/camera_info]
- camera_info3 [default: /sensing/camera/camera3/camera_info]
- camera_info4 [default: /sensing/camera/camera4/camera_info]
- camera_info5 [default: /sensing/camera/camera5/camera_info]
- image0 [default: /sensing/camera/camera0/image_rect_color]
- image1 [default: /sensing/camera/camera1/image_rect_color]
- image2 [default: /sensing/camera/camera2/image_rect_color]
- image3 [default: /sensing/camera/camera3/image_rect_color]
- image4 [default: /sensing/camera/camera4/image_rect_color]
- image5 [default: /sensing/camera/camera5/image_rect_color]
- decompressor_param_file [default: $(find-pkg-share autoware_image_transport_decompressor)/config/image_transport_decompressor.param.yaml]
Messages
Services
Plugins
Recent questions tagged autoware_bevfusion at Robotics Stack Exchange
Package Summary
| Version | 0.50.0 |
| License | Apache License 2.0 |
| Build type | AMENT_CMAKE |
| Use | RECOMMENDED |
Repository Summary
| Description | |
| Checkout URI | https://github.com/autowarefoundation/autoware_universe.git |
| VCS Type | git |
| VCS Version | main |
| Last Updated | 2026-02-25 |
| Dev Status | UNKNOWN |
| Released | UNRELEASED |
| Contributing |
Help Wanted (-)
Good First Issues (-) Pull Requests to Review (-) |
Package Description
Maintainers
- Kenzo Lobos-Tsunekawa
- Amadeusz Szymko
- Kotaro Uetake
- Masato Saeki
- Kok Seang Tan
Authors
autoware_bevfusion
Purpose
The autoware_bevfusion package is used for 3D object detection based on lidar or camera-lidar fusion.
Inner-workings / Algorithms
This package implements a TensorRT powered inference node for BEVFusion [1]. The sparse convolution backend corresponds to spconv. Autoware installs it automatically in its setup script. If needed, the user can also build it and install it following the following instructions.
Inputs / Outputs
Input
| Name | Type | Description |
|---|---|---|
~/input/pointcloud |
sensor_msgs::msg::PointCloud2 |
Input pointcloud topics. |
~/input/image* |
sensor_msgs::msg::Image |
Input image topics. |
~/input/camera_info* |
sensor_msgs::msg::CameraInfo |
Input camera info topics. |
Output
| Name | Type | Description |
|---|---|---|
~/output/objects |
autoware_perception_msgs::msg::DetectedObjects |
Detected objects. |
debug/cyclic_time_ms |
tier4_debug_msgs::msg::Float64Stamped |
Cyclic time (ms). |
debug/pipeline_latency_ms |
tier4_debug_msgs::msg::Float64Stamped |
Pipeline latency time (ms). |
debug/processing_time/preprocess_ms |
tier4_debug_msgs::msg::Float64Stamped |
Preprocess (ms). |
debug/processing_time/inference_ms |
tier4_debug_msgs::msg::Float64Stamped |
Inference time (ms). |
debug/processing_time/postprocess_ms |
tier4_debug_msgs::msg::Float64Stamped |
Postprocess time (ms). |
debug/processing_time/total_ms |
tier4_debug_msgs::msg::Float64Stamped |
Total processing time (ms). |
Parameters
BEVFusion node
{{ json_to_markdown(“perception/autoware_bevfusion/schema/bevfusion.schema.json”) }}
BEVFusion model
{{ json_to_markdown(“perception/autoware_bevfusion/schema/ml_package_bevfusion.schema.json”) }}
Detection class remapper
{{ json_to_markdown(“perception/autoware_bevfusion/schema/detection_class_remapper.schema.json”) }}
The build_only option
The autoware_bevfusion node has a build_only option to build the TensorRT engine file from the specified ONNX file, after which the program exits.
ros2 launch autoware_bevfusion bevfusion.launch.xml build_only:=true
The log_level option
The default logging severity level for autoware_bevfusion is info. For debugging purposes, the developer may decrease severity level using log_level parameter:
ros2 launch autoware_bevfusion bevfusion.launch.xml log_level:=debug
Assumptions / Known limits
This node assumes that the input pointcloud follows the PointXYZIRC layout defined in autoware_point_types.
Trained Models
You can download the onnx and config files in the following links.
The files need to be placed inside $(env HOME)/autoware_data/bevfusion
- lidar-only model:
- camera-lidar model:
- class remapper
The model was trained in TIER IV’s internal database (~35k lidar frames) for 30 epochs.
Changelog
References/External links
[1] Zhijian Liu, Haotian Tang, Alexander Amini, Xinyu Yang, Huizi Mao, Daniela Rus, and Song Han. “BEVFusion: Multi-Task Multi-Sensor Fusion with Unified Bird’s-Eye View Representation.” 2023 International Conference on Robotics and Automation.
(Optional) Future extensions / Unimplemented parts
Although this node can perform camera-lidar fusion, as it is the first method in autoware to actually use images and lidars for inference, the package structure and its full integration in the autoware pipeline are left for future work. In the current structure, it can be employed without any changes as a lidar-based detector.
Changelog for package autoware_bevfusion
0.50.0 (2026-02-14)
-
Merge remote-tracking branch 'origin/main' into humble
-
feat(autoware_bevfusion): update nvcc flags (#12045) Co-authored-by: Kotaro Uetake <<60615504+ktro2828@users.noreply.github.com>>
-
feat(BEVFusion): move cuda stream creation to the beginning of BEVFusionTRT initialization (#11967)
- move cuda stream init before init
* Remove empty lines ---------
-
fix(bevfusion): suppress -Werror for precomputed_features.cpp (#11959)
-
fix(autoware_bevfusion): restore spconv in cmakelists (#11953)
-
chore(autoware_bevfusion): remove cudnn dependency (#11887)
-
Contributors: Amadeusz Szymko, Kok Seang Tan, Mete Fatih Cırıt, Ryohsuke Mitsudome
0.49.0 (2025-12-30)
-
Merge remote-tracking branch 'origin/main' into prepare-0.49.0-changelog
-
feat(autoware_bevfusion): separate image backbone from fusion model and add lidar intensity option (#11468)
- image backbone building
- inference running without error
- working bevfusion-cl
- style(pre-commit): autofix
- removed unnecessary changes
- style(pre-commit): autofix
- made requested changes
- style(pre-commit): autofix
- updated memcopy for img_matrices
- fix parameter names and defaults
- style(pre-commit): autofix
- fixed complile time issues
- refactor pre-process method
- refactored node code
- style(pre-commit): autofix
- refactor init method
- style(pre-commit): autofix
- split node code
- style(pre-commit): autofix
- helper code complexity refactor
- fix lint error
- style(pre-commit): autofix
- update schema params
* suppress clang changes ---------Co-authored-by: pre-commit-ci-lite[bot] <117423508+pre-commit-ci-lite[bot]@users.noreply.github.com>
-
Contributors: Ryohsuke Mitsudome, Samrat Thapa
0.48.0 (2025-11-18)
0.47.1 (2025-08-14)
0.47.0 (2025-08-11)
- style(pre-commit): update to clang-format-20 (#11088) Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
- Contributors: Mete Fatih Cırıt
0.46.0 (2025-06-20)
-
Merge remote-tracking branch 'upstream/main' into tmp/TaikiYamada/bump_version_base
-
fix(autoware_bevfusion): fix clang-tidy errors by removing unused fields (#10850)
- fix clang-tidy errors by removing unused fields
* style(pre-commit): autofix ---------Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
-
fix(cmake): update spconv availability messages to use STATUS and WAR… (#10690)
File truncated at 100 lines see the full file
Package Dependencies
System Dependencies
Dependant Packages
| Name | Deps |
|---|---|
| tier4_perception_launch |
Launch files
- launch/bevfusion.launch.xml
-
- input/pointcloud [default: /sensing/lidar/concatenated/pointcloud]
- output/objects [default: objects]
- data_path [default: $(env HOME)/autoware_data]
- model_name [default: bevfusion_lidar]
- model_path [default: $(var data_path)/bevfusion]
- model_param_path [default: $(find-pkg-share autoware_bevfusion)/config/$(var model_name).param.yaml]
- ml_package_param_path [default: $(var model_path)/ml_package_$(var model_name).param.yaml]
- class_remapper_param_path [default: $(var model_path)/detection_class_remapper.param.yaml]
- common_param_path [default: $(find-pkg-share autoware_bevfusion)/config/common_bevfusion.param.yaml]
- build_only [default: false]
- log_level [default: info]
- use_pointcloud_container [default: false]
- pointcloud_container_name [default: pointcloud_container]
- use_decompress [default: false]
- camera_info0 [default: /sensing/camera/camera0/camera_info]
- camera_info1 [default: /sensing/camera/camera1/camera_info]
- camera_info2 [default: /sensing/camera/camera2/camera_info]
- camera_info3 [default: /sensing/camera/camera3/camera_info]
- camera_info4 [default: /sensing/camera/camera4/camera_info]
- camera_info5 [default: /sensing/camera/camera5/camera_info]
- image0 [default: /sensing/camera/camera0/image_rect_color]
- image1 [default: /sensing/camera/camera1/image_rect_color]
- image2 [default: /sensing/camera/camera2/image_rect_color]
- image3 [default: /sensing/camera/camera3/image_rect_color]
- image4 [default: /sensing/camera/camera4/image_rect_color]
- image5 [default: /sensing/camera/camera5/image_rect_color]
- decompressor_param_file [default: $(find-pkg-share autoware_image_transport_decompressor)/config/image_transport_decompressor.param.yaml]
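Any of the arguments above can be overridden on the command line as `name:=value` pairs. As an illustrative sketch (not part of the package), the snippet below composes such a `ros2 launch` invocation; the argument names are taken from the list above, while the chosen override values are examples only:

```python
# Compose a `ros2 launch` command line for bevfusion.launch.xml.
# Argument names come from the launch-argument list above; the override
# values below are illustrative examples, not recommendations.
overrides = {
    "build_only": "true",        # build the TensorRT engine, then exit
    "log_level": "debug",        # more verbose node logging
    "model_name": "bevfusion_lidar",
}
cmd = ["ros2", "launch", "autoware_bevfusion", "bevfusion.launch.xml"]
cmd += [f"{name}:={value}" for name, value in overrides.items()]
print(" ".join(cmd))
```

Running the printed command requires a sourced ROS 2 workspace with `autoware_bevfusion` built and the model files present under `data_path`.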
Messages
Services
Plugins
Package Summary
| Version | 0.50.0 |
| License | Apache License 2.0 |
| Build type | AMENT_CMAKE |
| Use | RECOMMENDED |
Repository Summary
| Description | |
| Checkout URI | https://github.com/autowarefoundation/autoware_universe.git |
| VCS Type | git |
| VCS Version | main |
| Last Updated | 2026-02-25 |
| Dev Status | UNKNOWN |
| Released | UNRELEASED |
| Contributing |
Help Wanted (-)
Good First Issues (-) Pull Requests to Review (-) |
Package Description
Maintainers
- Kenzo Lobos-Tsunekawa
- Amadeusz Szymko
- Kotaro Uetake
- Masato Saeki
- Kok Seang Tan
Authors
autoware_bevfusion
Purpose
The autoware_bevfusion package is used for 3D object detection based on lidar or camera-lidar fusion.
Inner-workings / Algorithms
This package implements a TensorRT powered inference node for BEVFusion [1]. The sparse convolution backend corresponds to spconv. Autoware installs it automatically in its setup script. If needed, the user can also build it and install it following the following instructions.
Inputs / Outputs
Input
| Name | Type | Description |
|---|---|---|
~/input/pointcloud |
sensor_msgs::msg::PointCloud2 |
Input pointcloud topics. |
~/input/image* |
sensor_msgs::msg::Image |
Input image topics. |
~/input/camera_info* |
sensor_msgs::msg::CameraInfo |
Input camera info topics. |
Output
| Name | Type | Description |
|---|---|---|
~/output/objects |
autoware_perception_msgs::msg::DetectedObjects |
Detected objects. |
debug/cyclic_time_ms |
tier4_debug_msgs::msg::Float64Stamped |
Cyclic time (ms). |
debug/pipeline_latency_ms |
tier4_debug_msgs::msg::Float64Stamped |
Pipeline latency time (ms). |
debug/processing_time/preprocess_ms |
tier4_debug_msgs::msg::Float64Stamped |
Preprocess (ms). |
debug/processing_time/inference_ms |
tier4_debug_msgs::msg::Float64Stamped |
Inference time (ms). |
debug/processing_time/postprocess_ms |
tier4_debug_msgs::msg::Float64Stamped |
Postprocess time (ms). |
debug/processing_time/total_ms |
tier4_debug_msgs::msg::Float64Stamped |
Total processing time (ms). |
Parameters
BEVFusion node
{{ json_to_markdown(“perception/autoware_bevfusion/schema/bevfusion.schema.json”) }}
BEVFusion model
{{ json_to_markdown(“perception/autoware_bevfusion/schema/ml_package_bevfusion.schema.json”) }}
Detection class remapper
{{ json_to_markdown(“perception/autoware_bevfusion/schema/detection_class_remapper.schema.json”) }}
The build_only option
The autoware_bevfusion node has a build_only option to build the TensorRT engine file from the specified ONNX file, after which the program exits.
ros2 launch autoware_bevfusion bevfusion.launch.xml build_only:=true
The log_level option
The default logging severity level for autoware_bevfusion is info. For debugging purposes, the developer may decrease severity level using log_level parameter:
ros2 launch autoware_bevfusion bevfusion.launch.xml log_level:=debug
Assumptions / Known limits
This node assumes that the input pointcloud follows the PointXYZIRC layout defined in autoware_point_types.
Trained Models
You can download the onnx and config files in the following links.
The files need to be placed inside $(env HOME)/autoware_data/bevfusion
- lidar-only model:
- camera-lidar model:
- class remapper
The model was trained in TIER IV’s internal database (~35k lidar frames) for 30 epochs.
Changelog
References/External links
[1] Zhijian Liu, Haotian Tang, Alexander Amini, Xinyu Yang, Huizi Mao, Daniela Rus, and Song Han. “BEVFusion: Multi-Task Multi-Sensor Fusion with Unified Bird’s-Eye View Representation.” 2023 International Conference on Robotics and Automation.
(Optional) Future extensions / Unimplemented parts
Although this node can perform camera-lidar fusion, as it is the first method in autoware to actually use images and lidars for inference, the package structure and its full integration in the autoware pipeline are left for future work. In the current structure, it can be employed without any changes as a lidar-based detector.
Changelog for package autoware_bevfusion
0.50.0 (2026-02-14)
-
Merge remote-tracking branch 'origin/main' into humble
-
feat(autoware_bevfusion): update nvcc flags (#12045) Co-authored-by: Kotaro Uetake <<60615504+ktro2828@users.noreply.github.com>>
-
feat(BEVFusion): move cuda stream creation to the beginning of BEVFusionTRT initialization (#11967)
- move cuda stream init before init
* Remove empty lines ---------
-
fix(bevfusion): suppress -Werror for precomputed_features.cpp (#11959)
-
fix(autoware_bevfusion): restore spconv in cmakelists (#11953)
-
chore(autoware_bevfusion): remove cudnn dependency (#11887)
-
Contributors: Amadeusz Szymko, Kok Seang Tan, Mete Fatih Cırıt, Ryohsuke Mitsudome
0.49.0 (2025-12-30)
- Merge remote-tracking branch 'origin/main' into prepare-0.49.0-changelog
- feat(autoware_bevfusion): separate image backbone from fusion model and add lidar intensity option (#11468)
  - image backbone building
  - inference running without error
  - working bevfusion-cl
  - style(pre-commit): autofix
  - removed unnecessary changes
  - style(pre-commit): autofix
  - made requested changes
  - style(pre-commit): autofix
  - updated memcopy for img_matrices
  - fix parameter names and defaults
  - style(pre-commit): autofix
  - fixed complile time issues
  - refactor pre-process method
  - refactored node code
  - style(pre-commit): autofix
  - refactor init method
  - style(pre-commit): autofix
  - split node code
  - style(pre-commit): autofix
  - helper code complexity refactor
  - fix lint error
  - style(pre-commit): autofix
  - update schema params
  - suppress clang changes
  Co-authored-by: pre-commit-ci-lite[bot] <117423508+pre-commit-ci-lite[bot]@users.noreply.github.com>
- Contributors: Ryohsuke Mitsudome, Samrat Thapa
0.48.0 (2025-11-18)
0.47.1 (2025-08-14)
0.47.0 (2025-08-11)
- style(pre-commit): update to clang-format-20 (#11088) Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
- Contributors: Mete Fatih Cırıt
0.46.0 (2025-06-20)
- Merge remote-tracking branch 'upstream/main' into tmp/TaikiYamada/bump_version_base
- fix(autoware_bevfusion): fix clang-tidy errors by removing unused fields (#10850)
  - fix clang-tidy errors by removing unused fields
  - style(pre-commit): autofix
  Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
- fix(cmake): update spconv availability messages to use STATUS and WAR… (#10690)
Changelog truncated at 100 lines; see the repository for the full file.
Package Dependencies
System Dependencies
Dependent Packages
| Name | Deps |
|---|---|
| tier4_perception_launch | |
Launch files
- launch/bevfusion.launch.xml
- input/pointcloud [default: /sensing/lidar/concatenated/pointcloud]
- output/objects [default: objects]
- data_path [default: $(env HOME)/autoware_data]
- model_name [default: bevfusion_lidar]
- model_path [default: $(var data_path)/bevfusion]
- model_param_path [default: $(find-pkg-share autoware_bevfusion)/config/$(var model_name).param.yaml]
- ml_package_param_path [default: $(var model_path)/ml_package_$(var model_name).param.yaml]
- class_remapper_param_path [default: $(var model_path)/detection_class_remapper.param.yaml]
- common_param_path [default: $(find-pkg-share autoware_bevfusion)/config/common_bevfusion.param.yaml]
- build_only [default: false]
- log_level [default: info]
- use_pointcloud_container [default: false]
- pointcloud_container_name [default: pointcloud_container]
- use_decompress [default: false]
- camera_info0 [default: /sensing/camera/camera0/camera_info]
- camera_info1 [default: /sensing/camera/camera1/camera_info]
- camera_info2 [default: /sensing/camera/camera2/camera_info]
- camera_info3 [default: /sensing/camera/camera3/camera_info]
- camera_info4 [default: /sensing/camera/camera4/camera_info]
- camera_info5 [default: /sensing/camera/camera5/camera_info]
- image0 [default: /sensing/camera/camera0/image_rect_color]
- image1 [default: /sensing/camera/camera1/image_rect_color]
- image2 [default: /sensing/camera/camera2/image_rect_color]
- image3 [default: /sensing/camera/camera3/image_rect_color]
- image4 [default: /sensing/camera/camera4/image_rect_color]
- image5 [default: /sensing/camera/camera5/image_rect_color]
- decompressor_param_file [default: $(find-pkg-share autoware_image_transport_decompressor)/config/image_transport_decompressor.param.yaml]
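As a sketch, the launch file above can be included from a user launch file with some of its arguments overridden; the topic name below is illustrative, not a recommended default:

```xml
<!-- Hedged example: include bevfusion.launch.xml and override a few of the
     arguments listed above. The pointcloud topic here is illustrative. -->
<launch>
  <include file="$(find-pkg-share autoware_bevfusion)/launch/bevfusion.launch.xml">
    <arg name="input/pointcloud" value="/sensing/lidar/top/pointcloud"/>
    <arg name="model_name" value="bevfusion_lidar"/>
    <arg name="build_only" value="false"/>
  </include>
</launch>
```

Any argument not overridden keeps the default shown in the list above.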
Parameters
BEVFusion node
{{ json_to_markdown("perception/autoware_bevfusion/schema/bevfusion.schema.json") }}
BEVFusion model
{{ json_to_markdown("perception/autoware_bevfusion/schema/ml_package_bevfusion.schema.json") }}
Detection class remapper
{{ json_to_markdown("perception/autoware_bevfusion/schema/detection_class_remapper.schema.json") }}
The build_only option
The autoware_bevfusion node has a build_only option to build the TensorRT engine file from the specified ONNX file, after which the program exits.
ros2 launch autoware_bevfusion bevfusion.launch.xml build_only:=true
The log_level option
The default logging severity level for autoware_bevfusion is info. For debugging purposes, the developer may decrease the severity level using the log_level parameter:
ros2 launch autoware_bevfusion bevfusion.launch.xml log_level:=debug
Assumptions / Known limits
This node assumes that the input pointcloud follows the PointXYZIRC layout defined in autoware_point_types.
Trained Models
You can download the ONNX and config files from the following links.
The files need to be placed inside $(env HOME)/autoware_data/bevfusion.
- lidar-only model:
- camera-lidar model:
- class remapper
The model was trained on TIER IV's internal database (~35k lidar frames) for 30 epochs.