Package Summary

| Field | Value |
| --- | --- |
| Tags | No category tags. |
| Version | 0.1.0 |
| License | Apache License 2.0 |
| Build type | AMENT_CMAKE |
| Use | RECOMMENDED |
Repository Summary

| Field | Value |
| --- | --- |
| Description | |
| Checkout URI | https://github.com/autowarefoundation/autoware_universe.git |
| VCS Type | git |
| VCS Version | main |
| Last Updated | 2025-09-28 |
| Dev Status | UNKNOWN |
| Released | UNRELEASED |
| Tags | planner ros calibration self-driving-car autonomous-driving autonomous-vehicles ros2 3d-map autoware |
Maintainers
- Selventhiran Rengaraj
- Ramaseshan Subramanian
- Naveen Sathiyaseelan
- Dhinesh Panneerselvam
- Rahul Gandhi Sundar
Authors
tensorrt_bevformer
Purpose
The core algorithm, BEVFormer, unifies multi-view camera images into the bird's-eye-view (BEV) perspective for 3D object detection with temporal fusion.
Inner-workings / Algorithms
Cite
- Zhiqi Li, et al., “BEVFormer: Learning Bird’s-Eye-View Representation from Multi-Camera Images via Spatiotemporal Transformers” [ref]
- This node is ported and adapted for Autoware from Multicoreware’s BEVFormer ROS 2 C++ repository.
Inputs / Outputs
Inputs
| Name | Type | Description |
| --- | --- | --- |
| `~/input/topic_img_front_left` | `sensor_msgs::msg::Image` | input front_left camera image |
| `~/input/topic_img_front` | `sensor_msgs::msg::Image` | input front camera image |
| `~/input/topic_img_front_right` | `sensor_msgs::msg::Image` | input front_right camera image |
| `~/input/topic_img_back_left` | `sensor_msgs::msg::Image` | input back_left camera image |
| `~/input/topic_img_back` | `sensor_msgs::msg::Image` | input back camera image |
| `~/input/topic_img_back_right` | `sensor_msgs::msg::Image` | input back_right camera image |
| `~/input/topic_img_front_left/camera_info` | `sensor_msgs::msg::CameraInfo` | input front_left camera parameters |
| `~/input/topic_img_front/camera_info` | `sensor_msgs::msg::CameraInfo` | input front camera parameters |
| `~/input/topic_img_front_right/camera_info` | `sensor_msgs::msg::CameraInfo` | input front_right camera parameters |
| `~/input/topic_img_back_left/camera_info` | `sensor_msgs::msg::CameraInfo` | input back_left camera parameters |
| `~/input/topic_img_back/camera_info` | `sensor_msgs::msg::CameraInfo` | input back camera parameters |
| `~/input/topic_img_back_right/camera_info` | `sensor_msgs::msg::CameraInfo` | input back_right camera parameters |
| `~/input/can_bus` | `autoware_localization_msgs::msg::KinematicState` | CAN bus data for ego-motion |
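Each `CameraInfo` topic carries the 3x3 intrinsic matrix `K` that allows 3D points (for example, BEV query positions) to be projected into the corresponding camera image. As an illustration only, not the node's actual code, and with made-up intrinsic values:

```python
import numpy as np

def project_to_image(point_xyz, K):
    """Project a 3D point in the camera frame onto the image plane
    using the 3x3 intrinsic matrix K from sensor_msgs/CameraInfo."""
    p = K @ np.asarray(point_xyz, dtype=float)
    if p[2] <= 0.0:
        return None  # point is behind the camera
    return p[:2] / p[2]  # pixel coordinates (u, v)

# Hypothetical intrinsics: fx = fy = 1000, principal point (800, 450)
K = np.array([[1000.0,    0.0, 800.0],
              [   0.0, 1000.0, 450.0],
              [   0.0,    0.0,   1.0]])

uv = project_to_image([2.0, 0.5, 10.0], K)  # a point 10 m in front of the camera
```

The actual node works with the real calibration published on each `camera_info` topic rather than fixed values like these.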
Outputs
| Name | Type | Description |
| --- | --- | --- |
| `~/output_boxes` | `autoware_perception_msgs::msg::DetectedObjects` | detected objects |
| `~/output_bboxes` | `visualization_msgs::msg::MarkerArray` | detected objects for nuScenes visualization |
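The `MarkerArray` output renders the detected 3D boxes. As a rough sketch of the underlying geometry only (not the node's implementation; `box_corners` and its argument order are hypothetical), the eight corners of a yawed box can be computed as:

```python
import numpy as np

def box_corners(x, y, z, length, width, height, yaw):
    """Return the 8 corners of a 3D bounding box centered at (x, y, z),
    rotated by yaw about the vertical axis -- the geometry a MarkerArray
    visualization typically draws as line segments."""
    l, w, h = length / 2.0, width / 2.0, height / 2.0
    # All sign combinations of the half-extents give the 8 local corners.
    corners = np.array([[sx * l, sy * w, sz * h]
                        for sx in (-1, 1) for sy in (-1, 1) for sz in (-1, 1)])
    c, s = np.cos(yaw), np.sin(yaw)
    rot = np.array([[c, -s, 0.0],
                    [s,  c, 0.0],
                    [0.0, 0.0, 1.0]])
    return corners @ rot.T + np.array([x, y, z])

# A 4 m x 2 m x 1.5 m box centered at (10, 2, 0.5) with no yaw
corners = box_corners(10.0, 2.0, 0.5, 4.0, 2.0, 1.5, 0.0)
```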
How to Use the TensorRT BEVFormer Node
Prerequisites
- TensorRT 10.8.0.43
- CUDA 12.4
- cuDNN 8.9.2
Trained Model
Download the `bevformer_small.onnx` model to `$HOME/autoware_data/tensorrt_bevformer`.
Note: The BEVFormer model was trained on the nuScenes dataset for 24 epochs with temporal fusion enabled.
Test TensorRT BEVFormer Node with nuScenes
- Integrate this package into your autoware_universe/perception directory.

- To play a ROS 2 bag of nuScenes data, clone the dataset bridge:

```bash
cd autoware/src
git clone -b feature/bevformer-integration https://github.com/naveen-mcw/ros2_dataset_bridge.git
cd ..
```

Note: The `feature/bevformer-integration` branch provides the data required by BEVFormer.

Download nuScenes dataset and canbus data here.

- Open and edit the launch file to set dataset paths/configs:

```bash
nano src/ros2_dataset_bridge/launch/nuscenes_launch.xml
```

Update as needed:

```xml
<arg name="NUSCENES_DIR" default="<nuScenes_dataset_path>"/>
<arg name="NUSCENES_CAN_BUS_DIR" default="<can_bus_path>"/>
<arg name="NUSCENES_VER" default="v1.0-trainval"/>
<arg name="UPDATE_FREQUENCY" default="10.0"/>
```
- Build the autoware_tensorrt_bevformer and ros2_dataset_bridge packages:

```bash
# Build ros2_dataset_bridge
colcon build --packages-up-to ros2_dataset_bridge

# Build autoware_tensorrt_bevformer
colcon build --packages-up-to autoware_tensorrt_bevformer
```
Package Dependencies
System Dependencies
| Name |
| --- |
| eigen |
| libopencv-dev |
Dependent Packages
Launch files
- launch/bevformer.launch.xml
- input/img_front_left [default: /nuscenes/CAM_FRONT_LEFT/image]
- input/img_front [default: /nuscenes/CAM_FRONT/image]
- input/img_front_right [default: /nuscenes/CAM_FRONT_RIGHT/image]
- input/img_back_left [default: /nuscenes/CAM_BACK_LEFT/image]
- input/img_back [default: /nuscenes/CAM_BACK/image]
- input/img_back_right [default: /nuscenes/CAM_BACK_RIGHT/image]
- input/can_bus [default: /nuscenes/can_bus]
- output_boxes [default: ~/output_boxes]
- output_bboxes [default: ~/output/debug/markers/bounding_boxes]
- input/img_front_left/camera_info [default: /nuscenes/CAM_FRONT_LEFT/camera_info]
- input/img_front/camera_info [default: /nuscenes/CAM_FRONT/camera_info]
- input/img_front_right/camera_info [default: /nuscenes/CAM_FRONT_RIGHT/camera_info]
- input/img_back_left/camera_info [default: /nuscenes/CAM_BACK_LEFT/camera_info]
- input/img_back/camera_info [default: /nuscenes/CAM_BACK/camera_info]
- input/img_back_right/camera_info [default: /nuscenes/CAM_BACK_RIGHT/camera_info]
- data_path [default: $(env HOME)/autoware_data/tensorrt_bevformer]
- onnx_file [default: $(var data_path)/bevformer_small.onnx]
- engine_file [default: $(var data_path)/bevformer_small.engine]
- auto_convert [default: true]
- precision [default: fp16]
- debug_mode [default: false]
- workspace_size [default: 4096]
- model_name [default: bevformer_small]
- param_file [default: $(find-pkg-share autoware_tensorrt_bevformer)/config/bevformer.param.yaml]
- plugin_path [default: ]
Messages
Services
Plugins
Package Summary
Tags | No category tags. |
Version | 0.1.0 |
License | Apache License 2.0 |
Build type | AMENT_CMAKE |
Use | RECOMMENDED |
Repository Summary
Description | |
Checkout URI | https://github.com/autowarefoundation/autoware_universe.git |
VCS Type | git |
VCS Version | main |
Last Updated | 2025-09-28 |
Dev Status | UNKNOWN |
Released | UNRELEASED |
Tags | planner ros calibration self-driving-car autonomous-driving autonomous-vehicles ros2 3d-map autoware |
Contributing |
Help Wanted (-)
Good First Issues (-) Pull Requests to Review (-) |
Package Description
Additional Links
Maintainers
- Selventhiran Rengaraj
- Ramaseshan Subramanian
- Naveen Sathiyaseelan
- Dhinesh Panneerselvam
- Rahul Gandhi Sundar
Authors
tensorrt_bevformer
Purpose
The core algorithm, named BEVFormer
, unifies multi-view images into the BEV perspective for 3D object detection tasks with temporal fusion.
Inner-workings / Algorithms
Cite
- Zhicheng Wang, et al., “BEVFormer: Incorporating Transformers for Multi-Camera 3D Detection” [ref]
- This node is ported and adapted for Autoware from Multicoreware’s BEVFormer ROS2 C++ repository.
Inputs / Outputs
Inputs
Name | Type | Description |
---|---|---|
~/input/topic_img_front_left |
sensor_msgs::msg::Image |
input front_left camera image |
~/input/topic_img_front |
sensor_msgs::msg::Image |
input front camera image |
~/input/topic_img_front_right |
sensor_msgs::msg::Image |
input front_right camera image |
~/input/topic_img_back_left |
sensor_msgs::msg::Image |
input back_left camera image |
~/input/topic_img_back |
sensor_msgs::msg::Image |
input back camera image |
~/input/topic_img_back_right |
sensor_msgs::msg::Image |
input back_right camera image |
~/input/topic_img_front_left/camera_info |
sensor_msgs::msg::CameraInfo |
input front_left camera parameters |
~/input/topic_img_front/camera_info |
sensor_msgs::msg::CameraInfo |
input front camera parameters |
~/input/topic_img_front_right/camera_info |
sensor_msgs::msg::CameraInfo |
input front_right camera parameters |
~/input/topic_img_back_left/camera_info |
sensor_msgs::msg::CameraInfo |
input back_left camera parameters |
~/input/topic_img_back/camera_info |
sensor_msgs::msg::CameraInfo |
input back camera parameters |
~/input/topic_img_back_right/camera_info |
sensor_msgs::msg::CameraInfo |
input back_right camera parameters |
~/input/can_bus |
autoware_localization_msgs::msg::KinematicState |
CAN bus data for ego-motion |
Outputs
Name | Type | Description |
---|---|---|
~/output_boxes |
autoware_perception_msgs::msg::DetectedObjects |
detected objects |
~/output_bboxes |
visualization_msgs::msg::MarkerArray |
detected objects for nuScenes visualization |
How to Use Tensorrt BEVFormer Node
Prerequisites
- TensorRT 10.8.0.43
- CUDA 12.4
- cuDNN 8.9.2
Trained Model
Download the bevformer_small.onnx
model to:
$HOME/autoware_data/tensorrt_bevformer
Note: The BEVFormer model was trained on the nuScenes dataset for 24 epochs with temporal fusion enabled.
Test TensorRT BEVFormer Node with nuScenes
-
Integrate this package into your autoware_universe/perception directory.
-
To play ROS 2 bag of nuScenes data:
cd autoware/src
git clone -b feature/bevformer-integration https://github.com/naveen-mcw/ros2_dataset_bridge.git
cd ..
Note: The
feature/bevformer-integration
branch provides required data for the BEVFormer.
Download nuScenes dataset and canbus data here.
Open and edit the launch file to set dataset paths/configs:
nano src/ros2_dataset_bridge/launch/nuscenes_launch.xml
Update as needed:
<arg name="NUSCENES_DIR" default="<nuScenes_dataset_path>"/>
<arg name="NUSCENES_CAN_BUS_DIR" default="<can_bus_path>"/>
<arg name="NUSCENES_VER" default="v1.0-trainval"/>
<arg name="UPDATE_FREQUENCY" default="10.0"/>
- Build the autoware_tensorrt_bevformer and ros2_dataset_bridge packages
```bash # Build ros2_dataset_bridge
colcon build –packages-up-to ros2_dataset_bridge
# Build autoware_tensorrt_bevformer
colcon build –packages-up-to autoware_tensorrt_bevformer
File truncated at 100 lines see the full file
Package Dependencies
System Dependencies
Name |
---|
eigen |
libopencv-dev |
Dependant Packages
Launch files
- launch/bevformer.launch.xml
-
- input/img_front_left [default: /nuscenes/CAM_FRONT_LEFT/image]
- input/img_front [default: /nuscenes/CAM_FRONT/image]
- input/img_front_right [default: /nuscenes/CAM_FRONT_RIGHT/image]
- input/img_back_left [default: /nuscenes/CAM_BACK_LEFT/image]
- input/img_back [default: /nuscenes/CAM_BACK/image]
- input/img_back_right [default: /nuscenes/CAM_BACK_RIGHT/image]
- input/can_bus [default: /nuscenes/can_bus]
- output_boxes [default: ~/output_boxes]
- output_bboxes [default: ~/output/debug/markers/bounding_boxes]
- input/img_front_left/camera_info [default: /nuscenes/CAM_FRONT_LEFT/camera_info]
- input/img_front/camera_info [default: /nuscenes/CAM_FRONT/camera_info]
- input/img_front_right/camera_info [default: /nuscenes/CAM_FRONT_RIGHT/camera_info]
- input/img_back_left/camera_info [default: /nuscenes/CAM_BACK_LEFT/camera_info]
- input/img_back/camera_info [default: /nuscenes/CAM_BACK/camera_info]
- input/img_back_right/camera_info [default: /nuscenes/CAM_BACK_RIGHT/camera_info]
- data_path [default: $(env HOME)/autoware_data/tensorrt_bevformer]
- onnx_file [default: $(var data_path)/bevformer_small.onnx]
- engine_file [default: $(var data_path)/bevformer_small.engine]
- auto_convert [default: true]
- precision [default: fp16]
- debug_mode [default: false]
- workspace_size [default: 4096]
- model_name [default: bevformer_small]
- param_file [default: $(find-pkg-share autoware_tensorrt_bevformer)/config/bevformer.param.yaml]
- plugin_path [default: ]
Messages
Services
Plugins
Recent questions tagged autoware_tensorrt_bevformer at Robotics Stack Exchange
Package Summary
Tags | No category tags. |
Version | 0.1.0 |
License | Apache License 2.0 |
Build type | AMENT_CMAKE |
Use | RECOMMENDED |
Repository Summary
Description | |
Checkout URI | https://github.com/autowarefoundation/autoware_universe.git |
VCS Type | git |
VCS Version | main |
Last Updated | 2025-09-28 |
Dev Status | UNKNOWN |
Released | UNRELEASED |
Tags | planner ros calibration self-driving-car autonomous-driving autonomous-vehicles ros2 3d-map autoware |
Contributing |
Help Wanted (-)
Good First Issues (-) Pull Requests to Review (-) |
Package Description
Additional Links
Maintainers
- Selventhiran Rengaraj
- Ramaseshan Subramanian
- Naveen Sathiyaseelan
- Dhinesh Panneerselvam
- Rahul Gandhi Sundar
Authors
tensorrt_bevformer
Purpose
The core algorithm, named BEVFormer
, unifies multi-view images into the BEV perspective for 3D object detection tasks with temporal fusion.
Inner-workings / Algorithms
Cite
- Zhicheng Wang, et al., “BEVFormer: Incorporating Transformers for Multi-Camera 3D Detection” [ref]
- This node is ported and adapted for Autoware from Multicoreware’s BEVFormer ROS2 C++ repository.
Inputs / Outputs
Inputs
Name | Type | Description |
---|---|---|
~/input/topic_img_front_left |
sensor_msgs::msg::Image |
input front_left camera image |
~/input/topic_img_front |
sensor_msgs::msg::Image |
input front camera image |
~/input/topic_img_front_right |
sensor_msgs::msg::Image |
input front_right camera image |
~/input/topic_img_back_left |
sensor_msgs::msg::Image |
input back_left camera image |
~/input/topic_img_back |
sensor_msgs::msg::Image |
input back camera image |
~/input/topic_img_back_right |
sensor_msgs::msg::Image |
input back_right camera image |
~/input/topic_img_front_left/camera_info |
sensor_msgs::msg::CameraInfo |
input front_left camera parameters |
~/input/topic_img_front/camera_info |
sensor_msgs::msg::CameraInfo |
input front camera parameters |
~/input/topic_img_front_right/camera_info |
sensor_msgs::msg::CameraInfo |
input front_right camera parameters |
~/input/topic_img_back_left/camera_info |
sensor_msgs::msg::CameraInfo |
input back_left camera parameters |
~/input/topic_img_back/camera_info |
sensor_msgs::msg::CameraInfo |
input back camera parameters |
~/input/topic_img_back_right/camera_info |
sensor_msgs::msg::CameraInfo |
input back_right camera parameters |
~/input/can_bus |
autoware_localization_msgs::msg::KinematicState |
CAN bus data for ego-motion |
Outputs
Name | Type | Description |
---|---|---|
~/output_boxes |
autoware_perception_msgs::msg::DetectedObjects |
detected objects |
~/output_bboxes |
visualization_msgs::msg::MarkerArray |
detected objects for nuScenes visualization |
How to Use Tensorrt BEVFormer Node
Prerequisites
- TensorRT 10.8.0.43
- CUDA 12.4
- cuDNN 8.9.2
Trained Model
Download the bevformer_small.onnx
model to:
$HOME/autoware_data/tensorrt_bevformer
Note: The BEVFormer model was trained on the nuScenes dataset for 24 epochs with temporal fusion enabled.
Test TensorRT BEVFormer Node with nuScenes
-
Integrate this package into your autoware_universe/perception directory.
-
To play ROS 2 bag of nuScenes data:
cd autoware/src
git clone -b feature/bevformer-integration https://github.com/naveen-mcw/ros2_dataset_bridge.git
cd ..
Note: The
feature/bevformer-integration
branch provides required data for the BEVFormer.
Download nuScenes dataset and canbus data here.
Open and edit the launch file to set dataset paths/configs:
nano src/ros2_dataset_bridge/launch/nuscenes_launch.xml
Update as needed:
<arg name="NUSCENES_DIR" default="<nuScenes_dataset_path>"/>
<arg name="NUSCENES_CAN_BUS_DIR" default="<can_bus_path>"/>
<arg name="NUSCENES_VER" default="v1.0-trainval"/>
<arg name="UPDATE_FREQUENCY" default="10.0"/>
- Build the autoware_tensorrt_bevformer and ros2_dataset_bridge packages
```bash # Build ros2_dataset_bridge
colcon build –packages-up-to ros2_dataset_bridge
# Build autoware_tensorrt_bevformer
colcon build –packages-up-to autoware_tensorrt_bevformer
File truncated at 100 lines see the full file
Package Dependencies
System Dependencies
Name |
---|
eigen |
libopencv-dev |
Dependant Packages
Launch files
- launch/bevformer.launch.xml
-
- input/img_front_left [default: /nuscenes/CAM_FRONT_LEFT/image]
- input/img_front [default: /nuscenes/CAM_FRONT/image]
- input/img_front_right [default: /nuscenes/CAM_FRONT_RIGHT/image]
- input/img_back_left [default: /nuscenes/CAM_BACK_LEFT/image]
- input/img_back [default: /nuscenes/CAM_BACK/image]
- input/img_back_right [default: /nuscenes/CAM_BACK_RIGHT/image]
- input/can_bus [default: /nuscenes/can_bus]
- output_boxes [default: ~/output_boxes]
- output_bboxes [default: ~/output/debug/markers/bounding_boxes]
- input/img_front_left/camera_info [default: /nuscenes/CAM_FRONT_LEFT/camera_info]
- input/img_front/camera_info [default: /nuscenes/CAM_FRONT/camera_info]
- input/img_front_right/camera_info [default: /nuscenes/CAM_FRONT_RIGHT/camera_info]
- input/img_back_left/camera_info [default: /nuscenes/CAM_BACK_LEFT/camera_info]
- input/img_back/camera_info [default: /nuscenes/CAM_BACK/camera_info]
- input/img_back_right/camera_info [default: /nuscenes/CAM_BACK_RIGHT/camera_info]
- data_path [default: $(env HOME)/autoware_data/tensorrt_bevformer]
- onnx_file [default: $(var data_path)/bevformer_small.onnx]
- engine_file [default: $(var data_path)/bevformer_small.engine]
- auto_convert [default: true]
- precision [default: fp16]
- debug_mode [default: false]
- workspace_size [default: 4096]
- model_name [default: bevformer_small]
- param_file [default: $(find-pkg-share autoware_tensorrt_bevformer)/config/bevformer.param.yaml]
- plugin_path [default: ]
Messages
Services
Plugins
Recent questions tagged autoware_tensorrt_bevformer at Robotics Stack Exchange
Package Summary
Tags | No category tags. |
Version | 0.1.0 |
License | Apache License 2.0 |
Build type | AMENT_CMAKE |
Use | RECOMMENDED |
Repository Summary
Description | |
Checkout URI | https://github.com/autowarefoundation/autoware_universe.git |
VCS Type | git |
VCS Version | main |
Last Updated | 2025-09-28 |
Dev Status | UNKNOWN |
Released | UNRELEASED |
Tags | planner ros calibration self-driving-car autonomous-driving autonomous-vehicles ros2 3d-map autoware |
Contributing |
Help Wanted (-)
Good First Issues (-) Pull Requests to Review (-) |
Package Description
Additional Links
Maintainers
- Selventhiran Rengaraj
- Ramaseshan Subramanian
- Naveen Sathiyaseelan
- Dhinesh Panneerselvam
- Rahul Gandhi Sundar
Authors
tensorrt_bevformer
Purpose
The core algorithm, named BEVFormer
, unifies multi-view images into the BEV perspective for 3D object detection tasks with temporal fusion.
Inner-workings / Algorithms
Cite
- Zhicheng Wang, et al., “BEVFormer: Incorporating Transformers for Multi-Camera 3D Detection” [ref]
- This node is ported and adapted for Autoware from Multicoreware’s BEVFormer ROS2 C++ repository.
Inputs / Outputs
Inputs
Name | Type | Description |
---|---|---|
~/input/topic_img_front_left |
sensor_msgs::msg::Image |
input front_left camera image |
~/input/topic_img_front |
sensor_msgs::msg::Image |
input front camera image |
~/input/topic_img_front_right |
sensor_msgs::msg::Image |
input front_right camera image |
~/input/topic_img_back_left |
sensor_msgs::msg::Image |
input back_left camera image |
~/input/topic_img_back |
sensor_msgs::msg::Image |
input back camera image |
~/input/topic_img_back_right |
sensor_msgs::msg::Image |
input back_right camera image |
~/input/topic_img_front_left/camera_info |
sensor_msgs::msg::CameraInfo |
input front_left camera parameters |
~/input/topic_img_front/camera_info |
sensor_msgs::msg::CameraInfo |
input front camera parameters |
~/input/topic_img_front_right/camera_info |
sensor_msgs::msg::CameraInfo |
input front_right camera parameters |
~/input/topic_img_back_left/camera_info |
sensor_msgs::msg::CameraInfo |
input back_left camera parameters |
~/input/topic_img_back/camera_info |
sensor_msgs::msg::CameraInfo |
input back camera parameters |
~/input/topic_img_back_right/camera_info |
sensor_msgs::msg::CameraInfo |
input back_right camera parameters |
~/input/can_bus |
autoware_localization_msgs::msg::KinematicState |
CAN bus data for ego-motion |
Outputs
Name | Type | Description |
---|---|---|
~/output_boxes |
autoware_perception_msgs::msg::DetectedObjects |
detected objects |
~/output_bboxes |
visualization_msgs::msg::MarkerArray |
detected objects for nuScenes visualization |
How to Use Tensorrt BEVFormer Node
Prerequisites
- TensorRT 10.8.0.43
- CUDA 12.4
- cuDNN 8.9.2
Trained Model
Download the bevformer_small.onnx
model to:
$HOME/autoware_data/tensorrt_bevformer
Note: The BEVFormer model was trained on the nuScenes dataset for 24 epochs with temporal fusion enabled.
Test TensorRT BEVFormer Node with nuScenes
-
Integrate this package into your autoware_universe/perception directory.
-
To play ROS 2 bag of nuScenes data:
cd autoware/src
git clone -b feature/bevformer-integration https://github.com/naveen-mcw/ros2_dataset_bridge.git
cd ..
Note: The
feature/bevformer-integration
branch provides required data for the BEVFormer.
Download nuScenes dataset and canbus data here.
Open and edit the launch file to set dataset paths/configs:
nano src/ros2_dataset_bridge/launch/nuscenes_launch.xml
Update as needed:
<arg name="NUSCENES_DIR" default="<nuScenes_dataset_path>"/>
<arg name="NUSCENES_CAN_BUS_DIR" default="<can_bus_path>"/>
<arg name="NUSCENES_VER" default="v1.0-trainval"/>
<arg name="UPDATE_FREQUENCY" default="10.0"/>
- Build the autoware_tensorrt_bevformer and ros2_dataset_bridge packages
```bash # Build ros2_dataset_bridge
colcon build –packages-up-to ros2_dataset_bridge
# Build autoware_tensorrt_bevformer
colcon build –packages-up-to autoware_tensorrt_bevformer
File truncated at 100 lines see the full file
Package Dependencies
System Dependencies
Name |
---|
eigen |
libopencv-dev |
Dependant Packages
Launch files
- launch/bevformer.launch.xml
-
- input/img_front_left [default: /nuscenes/CAM_FRONT_LEFT/image]
- input/img_front [default: /nuscenes/CAM_FRONT/image]
- input/img_front_right [default: /nuscenes/CAM_FRONT_RIGHT/image]
- input/img_back_left [default: /nuscenes/CAM_BACK_LEFT/image]
- input/img_back [default: /nuscenes/CAM_BACK/image]
- input/img_back_right [default: /nuscenes/CAM_BACK_RIGHT/image]
- input/can_bus [default: /nuscenes/can_bus]
- output_boxes [default: ~/output_boxes]
- output_bboxes [default: ~/output/debug/markers/bounding_boxes]
- input/img_front_left/camera_info [default: /nuscenes/CAM_FRONT_LEFT/camera_info]
- input/img_front/camera_info [default: /nuscenes/CAM_FRONT/camera_info]
- input/img_front_right/camera_info [default: /nuscenes/CAM_FRONT_RIGHT/camera_info]
- input/img_back_left/camera_info [default: /nuscenes/CAM_BACK_LEFT/camera_info]
- input/img_back/camera_info [default: /nuscenes/CAM_BACK/camera_info]
- input/img_back_right/camera_info [default: /nuscenes/CAM_BACK_RIGHT/camera_info]
- data_path [default: $(env HOME)/autoware_data/tensorrt_bevformer]
- onnx_file [default: $(var data_path)/bevformer_small.onnx]
- engine_file [default: $(var data_path)/bevformer_small.engine]
- auto_convert [default: true]
- precision [default: fp16]
- debug_mode [default: false]
- workspace_size [default: 4096]
- model_name [default: bevformer_small]
- param_file [default: $(find-pkg-share autoware_tensorrt_bevformer)/config/bevformer.param.yaml]
- plugin_path [default: ]
Messages
Services
Plugins
Recent questions tagged autoware_tensorrt_bevformer at Robotics Stack Exchange
Package Summary
Tags | No category tags. |
Version | 0.1.0 |
License | Apache License 2.0 |
Build type | AMENT_CMAKE |
Use | RECOMMENDED |
Repository Summary
Description | |
Checkout URI | https://github.com/autowarefoundation/autoware_universe.git |
VCS Type | git |
VCS Version | main |
Last Updated | 2025-09-28 |
Dev Status | UNKNOWN |
Released | UNRELEASED |
Tags | planner ros calibration self-driving-car autonomous-driving autonomous-vehicles ros2 3d-map autoware |
Contributing |
Help Wanted (-)
Good First Issues (-) Pull Requests to Review (-) |
Package Description
Additional Links
Maintainers
- Selventhiran Rengaraj
- Ramaseshan Subramanian
- Naveen Sathiyaseelan
- Dhinesh Panneerselvam
- Rahul Gandhi Sundar
Authors
tensorrt_bevformer
Purpose
The core algorithm, named BEVFormer
, unifies multi-view images into the BEV perspective for 3D object detection tasks with temporal fusion.
Inner-workings / Algorithms
Cite
- Zhicheng Wang, et al., “BEVFormer: Incorporating Transformers for Multi-Camera 3D Detection” [ref]
- This node is ported and adapted for Autoware from Multicoreware’s BEVFormer ROS2 C++ repository.
Inputs / Outputs
Inputs
Name | Type | Description |
---|---|---|
~/input/topic_img_front_left |
sensor_msgs::msg::Image |
input front_left camera image |
~/input/topic_img_front |
sensor_msgs::msg::Image |
input front camera image |
~/input/topic_img_front_right |
sensor_msgs::msg::Image |
input front_right camera image |
~/input/topic_img_back_left |
sensor_msgs::msg::Image |
input back_left camera image |
~/input/topic_img_back |
sensor_msgs::msg::Image |
input back camera image |
~/input/topic_img_back_right |
sensor_msgs::msg::Image |
input back_right camera image |
~/input/topic_img_front_left/camera_info |
sensor_msgs::msg::CameraInfo |
input front_left camera parameters |
~/input/topic_img_front/camera_info |
sensor_msgs::msg::CameraInfo |
input front camera parameters |
~/input/topic_img_front_right/camera_info |
sensor_msgs::msg::CameraInfo |
input front_right camera parameters |
~/input/topic_img_back_left/camera_info |
sensor_msgs::msg::CameraInfo |
input back_left camera parameters |
~/input/topic_img_back/camera_info |
sensor_msgs::msg::CameraInfo |
input back camera parameters |
~/input/topic_img_back_right/camera_info |
sensor_msgs::msg::CameraInfo |
input back_right camera parameters |
~/input/can_bus |
autoware_localization_msgs::msg::KinematicState |
CAN bus data for ego-motion |
Outputs
Name | Type | Description |
---|---|---|
~/output_boxes |
autoware_perception_msgs::msg::DetectedObjects |
detected objects |
~/output_bboxes |
visualization_msgs::msg::MarkerArray |
detected objects for nuScenes visualization |
How to Use Tensorrt BEVFormer Node
Prerequisites
- TensorRT 10.8.0.43
- CUDA 12.4
- cuDNN 8.9.2
Trained Model
Download the bevformer_small.onnx
model to:
$HOME/autoware_data/tensorrt_bevformer
Note: The BEVFormer model was trained on the nuScenes dataset for 24 epochs with temporal fusion enabled.
Test TensorRT BEVFormer Node with nuScenes
-
Integrate this package into your autoware_universe/perception directory.
-
To play ROS 2 bag of nuScenes data:
cd autoware/src
git clone -b feature/bevformer-integration https://github.com/naveen-mcw/ros2_dataset_bridge.git
cd ..
Note: The
feature/bevformer-integration
branch provides required data for the BEVFormer.
Download nuScenes dataset and canbus data here.
Open and edit the launch file to set dataset paths/configs:
nano src/ros2_dataset_bridge/launch/nuscenes_launch.xml
Update as needed:
<arg name="NUSCENES_DIR" default="<nuScenes_dataset_path>"/>
<arg name="NUSCENES_CAN_BUS_DIR" default="<can_bus_path>"/>
<arg name="NUSCENES_VER" default="v1.0-trainval"/>
<arg name="UPDATE_FREQUENCY" default="10.0"/>
- Build the autoware_tensorrt_bevformer and ros2_dataset_bridge packages
```bash # Build ros2_dataset_bridge
colcon build –packages-up-to ros2_dataset_bridge
# Build autoware_tensorrt_bevformer
colcon build –packages-up-to autoware_tensorrt_bevformer
File truncated at 100 lines see the full file
Package Dependencies
System Dependencies
Name |
---|
eigen |
libopencv-dev |
Dependant Packages
Launch files
- launch/bevformer.launch.xml
-
- input/img_front_left [default: /nuscenes/CAM_FRONT_LEFT/image]
- input/img_front [default: /nuscenes/CAM_FRONT/image]
- input/img_front_right [default: /nuscenes/CAM_FRONT_RIGHT/image]
- input/img_back_left [default: /nuscenes/CAM_BACK_LEFT/image]
- input/img_back [default: /nuscenes/CAM_BACK/image]
- input/img_back_right [default: /nuscenes/CAM_BACK_RIGHT/image]
- input/can_bus [default: /nuscenes/can_bus]
- output_boxes [default: ~/output_boxes]
- output_bboxes [default: ~/output/debug/markers/bounding_boxes]
- input/img_front_left/camera_info [default: /nuscenes/CAM_FRONT_LEFT/camera_info]
- input/img_front/camera_info [default: /nuscenes/CAM_FRONT/camera_info]
- input/img_front_right/camera_info [default: /nuscenes/CAM_FRONT_RIGHT/camera_info]
- input/img_back_left/camera_info [default: /nuscenes/CAM_BACK_LEFT/camera_info]
- input/img_back/camera_info [default: /nuscenes/CAM_BACK/camera_info]
- input/img_back_right/camera_info [default: /nuscenes/CAM_BACK_RIGHT/camera_info]
- data_path [default: $(env HOME)/autoware_data/tensorrt_bevformer]
- onnx_file [default: $(var data_path)/bevformer_small.onnx]
- engine_file [default: $(var data_path)/bevformer_small.engine]
- auto_convert [default: true]
- precision [default: fp16]
- debug_mode [default: false]
- workspace_size [default: 4096]
- model_name [default: bevformer_small]
- param_file [default: $(find-pkg-share autoware_tensorrt_bevformer)/config/bevformer.param.yaml]
- plugin_path [default: ]
Messages
Services
Plugins
Recent questions tagged autoware_tensorrt_bevformer at Robotics Stack Exchange
Package Summary
Tags | No category tags. |
Version | 0.1.0 |
License | Apache License 2.0 |
Build type | AMENT_CMAKE |
Use | RECOMMENDED |
Repository Summary
Description | |
Checkout URI | https://github.com/autowarefoundation/autoware_universe.git |
VCS Type | git |
VCS Version | main |
Last Updated | 2025-09-28 |
Dev Status | UNKNOWN |
Released | UNRELEASED |
Tags | planner ros calibration self-driving-car autonomous-driving autonomous-vehicles ros2 3d-map autoware |
Contributing |
Help Wanted (-)
Good First Issues (-) Pull Requests to Review (-) |
Package Description
Additional Links
Maintainers
- Selventhiran Rengaraj
- Ramaseshan Subramanian
- Naveen Sathiyaseelan
- Dhinesh Panneerselvam
- Rahul Gandhi Sundar
Authors
tensorrt_bevformer
Purpose
The core algorithm, named BEVFormer
, unifies multi-view images into the BEV perspective for 3D object detection tasks with temporal fusion.
Inner-workings / Algorithms
Cite
- Zhicheng Wang, et al., “BEVFormer: Incorporating Transformers for Multi-Camera 3D Detection” [ref]
- This node is ported and adapted for Autoware from Multicoreware’s BEVFormer ROS2 C++ repository.
Inputs / Outputs
Inputs
Name | Type | Description |
---|---|---|
~/input/topic_img_front_left |
sensor_msgs::msg::Image |
input front_left camera image |
~/input/topic_img_front |
sensor_msgs::msg::Image |
input front camera image |
~/input/topic_img_front_right |
sensor_msgs::msg::Image |
input front_right camera image |
~/input/topic_img_back_left |
sensor_msgs::msg::Image |
input back_left camera image |
~/input/topic_img_back |
sensor_msgs::msg::Image |
input back camera image |
~/input/topic_img_back_right |
sensor_msgs::msg::Image |
input back_right camera image |
~/input/topic_img_front_left/camera_info |
sensor_msgs::msg::CameraInfo |
input front_left camera parameters |
~/input/topic_img_front/camera_info |
sensor_msgs::msg::CameraInfo |
input front camera parameters |
~/input/topic_img_front_right/camera_info |
sensor_msgs::msg::CameraInfo |
input front_right camera parameters |
~/input/topic_img_back_left/camera_info |
sensor_msgs::msg::CameraInfo |
input back_left camera parameters |
~/input/topic_img_back/camera_info |
sensor_msgs::msg::CameraInfo |
input back camera parameters |
~/input/topic_img_back_right/camera_info |
sensor_msgs::msg::CameraInfo |
input back_right camera parameters |
~/input/can_bus |
autoware_localization_msgs::msg::KinematicState |
CAN bus data for ego-motion |
Outputs
Name | Type | Description |
---|---|---|
~/output_boxes |
autoware_perception_msgs::msg::DetectedObjects |
detected objects |
~/output_bboxes |
visualization_msgs::msg::MarkerArray |
detected objects for nuScenes visualization |
How to Use Tensorrt BEVFormer Node
Prerequisites
- TensorRT 10.8.0.43
- CUDA 12.4
- cuDNN 8.9.2
Trained Model
Download the bevformer_small.onnx
model to:
$HOME/autoware_data/tensorrt_bevformer
Note: The BEVFormer model was trained on the nuScenes dataset for 24 epochs with temporal fusion enabled.
Test TensorRT BEVFormer Node with nuScenes

- Integrate this package into your `autoware_universe/perception` directory.

- Clone the dataset bridge used to play a ROS 2 bag of nuScenes data:

```bash
cd autoware/src
git clone -b feature/bevformer-integration https://github.com/naveen-mcw/ros2_dataset_bridge.git
cd ..
```

Note: The `feature/bevformer-integration` branch provides the data required by BEVFormer.

- Download the nuScenes dataset and CAN bus data here.

- Open and edit the launch file to set the dataset paths/configs:

```bash
nano src/ros2_dataset_bridge/launch/nuscenes_launch.xml
```

Update as needed:

```xml
<arg name="NUSCENES_DIR" default="<nuScenes_dataset_path>"/>
<arg name="NUSCENES_CAN_BUS_DIR" default="<can_bus_path>"/>
<arg name="NUSCENES_VER" default="v1.0-trainval"/>
<arg name="UPDATE_FREQUENCY" default="10.0"/>
```
- Build the autoware_tensorrt_bevformer and ros2_dataset_bridge packages:

```bash
# Build ros2_dataset_bridge
colcon build --packages-up-to ros2_dataset_bridge

# Build autoware_tensorrt_bevformer
colcon build --packages-up-to autoware_tensorrt_bevformer
```
Package Dependencies
System Dependencies
| Name |
| --- |
| eigen |
| libopencv-dev |
Dependent Packages
Launch files
- launch/bevformer.launch.xml
- input/img_front_left [default: /nuscenes/CAM_FRONT_LEFT/image]
- input/img_front [default: /nuscenes/CAM_FRONT/image]
- input/img_front_right [default: /nuscenes/CAM_FRONT_RIGHT/image]
- input/img_back_left [default: /nuscenes/CAM_BACK_LEFT/image]
- input/img_back [default: /nuscenes/CAM_BACK/image]
- input/img_back_right [default: /nuscenes/CAM_BACK_RIGHT/image]
- input/can_bus [default: /nuscenes/can_bus]
- output_boxes [default: ~/output_boxes]
- output_bboxes [default: ~/output/debug/markers/bounding_boxes]
- input/img_front_left/camera_info [default: /nuscenes/CAM_FRONT_LEFT/camera_info]
- input/img_front/camera_info [default: /nuscenes/CAM_FRONT/camera_info]
- input/img_front_right/camera_info [default: /nuscenes/CAM_FRONT_RIGHT/camera_info]
- input/img_back_left/camera_info [default: /nuscenes/CAM_BACK_LEFT/camera_info]
- input/img_back/camera_info [default: /nuscenes/CAM_BACK/camera_info]
- input/img_back_right/camera_info [default: /nuscenes/CAM_BACK_RIGHT/camera_info]
- data_path [default: $(env HOME)/autoware_data/tensorrt_bevformer]
- onnx_file [default: $(var data_path)/bevformer_small.onnx]
- engine_file [default: $(var data_path)/bevformer_small.engine]
- auto_convert [default: true]
- precision [default: fp16]
- debug_mode [default: false]
- workspace_size [default: 4096]
- model_name [default: bevformer_small]
- param_file [default: $(find-pkg-share autoware_tensorrt_bevformer)/config/bevformer.param.yaml]
- plugin_path [default: ]
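As an illustrative sketch (not from this package's documentation), the defaults above could be overridden when including the launch file from another launch description:

```xml
<!-- Hypothetical include overriding two defaults of bevformer.launch.xml -->
<include file="$(find-pkg-share autoware_tensorrt_bevformer)/launch/bevformer.launch.xml">
  <arg name="precision" value="fp32"/>   <!-- default: fp16 -->
  <arg name="debug_mode" value="true"/>  <!-- default: false -->
</include>
```

Individual arguments can also be set on the command line, e.g. `ros2 launch autoware_tensorrt_bevformer bevformer.launch.xml precision:=fp32`.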
Messages
Services
Plugins