Package Summary
Tags | No category tags. |
Version | 0.1.0 |
License | Apache License 2.0 |
Build type | AMENT_CMAKE |
Use | RECOMMENDED |
Repository Summary
Description | |
Checkout URI | https://github.com/autowarefoundation/autoware_universe.git |
VCS Type | git |
VCS Version | main |
Last Updated | 2025-09-26 |
Dev Status | UNKNOWN |
Released | UNRELEASED |
Tags | planner ros calibration self-driving-car autonomous-driving autonomous-vehicles ros2 3d-map autoware |
Package Description
Additional Links
Maintainers
- samrat
Authors
autoware_camera_streampetr
Purpose
The autoware_camera_streampetr package is used for 3D object detection based on camera images only.
Inner-workings / Algorithms
This package implements a TensorRT-powered inference node for StreamPETR [1]. It is the first camera-only 3D object detection node in Autoware.
The node is optimized for multi-camera systems in which the camera topics are published sequentially rather than all at once. It takes advantage of this by preprocessing each image (resize, crop, normalize) as it arrives and storing the result directly in GPU memory, so that the delay caused by preprocessing is minimized.
```text
Topic for image_i arrives
        |
        v
Is image distorted?  -- Yes -->  Undistort
        | No                        |
        v <-------------------------'
Load image into GPU memory
        |
        v
Preprocess image (scale & crop ROI & normalize)
        |
        v
Store in GPU memory binding location for model input
        |
        |   [The steps above run per image: in parallel if
        |    multithreading is on, otherwise sequentially in
        |    FIFO order.]
        v
Is image the `anchor_image`?  -- No -->  (Wait)
        | Yes
        |   [If multithreading is on, image updates are temporarily
        |    frozen until the steps below complete.]
        v
Are all images synced within `max_time_difference`?  -- No -->  (Sync failed! Skip prediction)
        | Yes
        v
Perform model forward pass
        |
        v
Postprocess (NMS + ROS2 format)
        |
        v
Publish predictions
```
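The anchor-camera gating shown above can be illustrated with a small, self-contained sketch. The class and function names below (`CameraSync`, `update_image`) are hypothetical, not the node's actual API: each camera slot remembers the stamp of its latest preprocessed image, and only an update from the anchor camera that finds all stamps within `max_time_difference` triggers a forward pass.

```cpp
#include <algorithm>
#include <cstddef>
#include <cstdio>
#include <optional>
#include <vector>

// Illustrative sketch only: each camera slot stores the stamp of its latest
// preprocessed image; when the anchor camera updates, we check whether all
// cameras are synced within `max_time_difference` before running inference.
class CameraSync
{
public:
  CameraSync(std::size_t num_cameras, std::size_t anchor_index, double max_time_difference_s)
  : stamps_(num_cameras), anchor_index_(anchor_index), max_diff_(max_time_difference_s)
  {
  }

  // Returns true when this update came from the anchor camera and all images
  // fall within the allowed time window, i.e. a forward pass should run.
  bool update_image(std::size_t camera_index, double stamp_s)
  {
    stamps_[camera_index] = stamp_s;
    if (camera_index != anchor_index_) {
      return false;  // non-anchor cameras only refresh their slot
    }
    double min_stamp = stamp_s;
    double max_stamp = stamp_s;
    for (const auto & s : stamps_) {
      if (!s) {
        return false;  // some camera has not delivered an image yet
      }
      min_stamp = std::min(min_stamp, *s);
      max_stamp = std::max(max_stamp, *s);
    }
    return (max_stamp - min_stamp) <= max_diff_;  // sync check
  }

private:
  std::vector<std::optional<double>> stamps_;
  std::size_t anchor_index_;
  double max_diff_;
};

int main()
{
  CameraSync sync(3 /*cameras*/, 0 /*anchor camera*/, 0.1 /*max_time_difference [s]*/);
  sync.update_image(1, 10.02);
  sync.update_image(2, 10.05);
  // Anchor camera arrives last; all stamps lie within 0.1 s, so inference runs.
  std::printf("run inference: %s\n", sync.update_image(0, 10.04) ? "yes" : "no");
  return 0;
}
```

In the actual node the preprocessed tensors live in the TensorRT input bindings; only the timing and gating logic is sketched here.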
Inputs / Outputs
Input
Name | Type | Description |
---|---|---|
`~/input/camera*/image` | `sensor_msgs::msg::Image` or `sensor_msgs::msg::CompressedImage` | Input image topics (supports both compressed and uncompressed). |
`~/input/camera*/camera_info` | `sensor_msgs::msg::CameraInfo` | Input camera info topics, for camera parameters. |
Output
Name | Type | Description | RTX 3090 Latency (ms) |
---|---|---|---|
`~/output/objects` | `autoware_perception_msgs::msg::DetectedObjects` | Detected objects. | — |
`latency/preprocess` | `autoware_internal_debug_msgs::msg::Float64Stamped` | Preprocessing time per image (ms). | 3.25 |
`latency/total` | `autoware_internal_debug_msgs::msg::Float64Stamped` | Total processing time (ms): preprocessing + inference + postprocessing. | 26.04 |
`latency/inference` | `autoware_internal_debug_msgs::msg::Float64Stamped` | Total inference time (ms). | 22.13 |
`latency/inference/backbone` | `autoware_internal_debug_msgs::msg::Float64Stamped` | Backbone inference time (ms). | 16.21 |
`latency/inference/ptshead` | `autoware_internal_debug_msgs::msg::Float64Stamped` | Points head inference time (ms). | 5.45 |
`latency/inference/pos_embed` | `autoware_internal_debug_msgs::msg::Float64Stamped` | Position embedding inference time (ms). | 0.40 |
`latency/inference/postprocess` | `autoware_internal_debug_msgs::msg::Float64Stamped` | NMS, filtering, and conversion of network predictions to the Autoware format (ms). | 0.40 |
`latency/cycle_time_ms` | `autoware_internal_debug_msgs::msg::Float64Stamped` | Time between two consecutive predictions (ms). | 110.65 |
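All `latency/*` topics above use the standard `autoware_internal_debug_msgs::msg::Float64Stamped` message, so they can be watched with an ordinary subscriber. A minimal monitor sketch follows; the relative topic name and namespace handling are assumptions, so remap as needed for your setup.

```cpp
#include <memory>

#include <autoware_internal_debug_msgs/msg/float64_stamped.hpp>
#include <rclcpp/rclcpp.hpp>

// Minimal monitor that prints the end-to-end latency reported by the node.
class LatencyMonitor : public rclcpp::Node
{
public:
  LatencyMonitor() : Node("streampetr_latency_monitor")
  {
    using autoware_internal_debug_msgs::msg::Float64Stamped;
    sub_ = create_subscription<Float64Stamped>(
      "latency/total", rclcpp::QoS(10),
      [this](const Float64Stamped::ConstSharedPtr msg) {
        // Float64Stamped carries a stamp plus a single float64 value (milliseconds here).
        RCLCPP_INFO(get_logger(), "total latency: %.2f ms", msg->data);
      });
  }

private:
  rclcpp::Subscription<autoware_internal_debug_msgs::msg::Float64Stamped>::SharedPtr sub_;
};

int main(int argc, char ** argv)
{
  rclcpp::init(argc, argv);
  rclcpp::spin(std::make_shared<LatencyMonitor>());
  rclcpp::shutdown();
  return 0;
}
```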
Parameters
StreamPETR node
The autoware_camera_streampetr node has various parameters for configuration (an illustrative declaration sketch follows this list):
Model Parameters
- `model_params.backbone_path`: Path to the backbone ONNX model
- `model_params.head_path`: Path to the head ONNX model
- `model_params.position_embedding_path`: Path to the position embedding ONNX model
- `model_params.fp16_mode`: Enable FP16 inference mode
- `model_params.use_temporal`: Enable temporal modeling
- `model_params.input_image_height`: Input image height for preprocessing
- `model_params.input_image_width`: Input image width for preprocessing
- `model_params.class_names`: List of detection class names
- `model_params.num_proposals`: Number of object proposals
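As an illustration of how the `model_params.*` entries above map onto ROS 2 parameter declarations, here is a hedged sketch; the struct, the default values, and the class-name list are placeholders rather than the package's actual code or configuration.

```cpp
#include <cstdint>
#include <string>
#include <vector>

#include <rclcpp/rclcpp.hpp>

// Illustrative only: declaring and reading the model_params.* entries listed
// above in a ROS 2 node. Defaults are placeholders, not the package's values.
struct StreamPetrModelParams
{
  explicit StreamPetrModelParams(rclcpp::Node & node)
  {
    backbone_path = node.declare_parameter<std::string>("model_params.backbone_path", "");
    head_path = node.declare_parameter<std::string>("model_params.head_path", "");
    position_embedding_path =
      node.declare_parameter<std::string>("model_params.position_embedding_path", "");
    fp16_mode = node.declare_parameter<bool>("model_params.fp16_mode", true);
    use_temporal = node.declare_parameter<bool>("model_params.use_temporal", true);
    input_image_height = node.declare_parameter<int64_t>("model_params.input_image_height", 512);
    input_image_width = node.declare_parameter<int64_t>("model_params.input_image_width", 1408);
    class_names = node.declare_parameter<std::vector<std::string>>(
      "model_params.class_names", std::vector<std::string>{"CAR", "PEDESTRIAN", "BICYCLE"});
    num_proposals = node.declare_parameter<int64_t>("model_params.num_proposals", 0);
  }

  std::string backbone_path;
  std::string head_path;
  std::string position_embedding_path;
  bool fp16_mode{};
  bool use_temporal{};
  int64_t input_image_height{};
  int64_t input_image_width{};
  std::vector<std::string> class_names;
  int64_t num_proposals{};
};
```

In practice these values are supplied through the `.param.yaml` file referenced by the launch arguments listed under "Launch files" below.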
File truncated at 100 lines; see the full file.
Package Dependencies
System Dependencies
Name |
---|
eigen |
Dependant Packages
Launch files
- launch/tensorrt_stream_petr.launch.xml
  - build_only [default: false]
  - debug_mode [default: true]
  - param_path [default: $(find-pkg-share autoware_camera_streampetr)/config/tensorrt_stream_petr.param.yaml]
  - model_path [default: $(env HOME)/autoware_data/camera_streampetr]
  - is_compressed_image [default: false]
- launch/tensorrt_stream_petr_compressed.launch.xml
  - build_only [default: false]
  - debug_mode [default: true]
  - param_path [default: $(find-pkg-share autoware_camera_streampetr)/config/tensorrt_stream_petr.param.yaml]
  - model_path [default: $(env HOME)/autoware_data/camera_streampetr]
  - is_compressed_image [default: true]
Messages
Services
Plugins