
autoware_image_projection_based_fusion package from autoware_universe repo



Package Summary

Tags: No category tags.
Version: 0.47.0
License: Apache License 2.0
Build type: AMENT_CMAKE
Use: RECOMMENDED

Repository Summary

Checkout URI: https://github.com/autowarefoundation/autoware_universe.git
VCS Type: git
VCS Version: main
Last Updated: 2025-08-16
Dev Status: UNKNOWN
Released: UNRELEASED
Tags: planner ros calibration self-driving-car autonomous-driving autonomous-vehicles ros2 3d-map autoware

Package Description

The autoware_image_projection_based_fusion package

Additional Links

No additional links.

Maintainers

  • Yukihiro Saito
  • Yoshi Ri
  • Dai Nguyen
  • Kotaro Uetake
  • Tao Zhong
  • Taekjin Lee

Authors

No additional authors.

autoware_image_projection_based_fusion

Purpose

The autoware_image_projection_based_fusion package enhances obstacle detection accuracy by integrating image-based and LiDAR-based perception. It fuses 2D detections from images (bounding boxes or segmentation masks) with 3D obstacle representations such as point clouds, clusters, or bounding boxes, refining obstacle classification and detection for autonomous driving applications.

Fusion algorithms

The package provides multiple fusion algorithms, each designed for specific use cases. Below are the different fusion methods along with their descriptions and detailed documentation links:

| Fusion Name | Description | Detail |
| --- | --- | --- |
| roi_cluster_fusion | Assigns classification labels to LiDAR-detected clusters by matching them with Regions of Interest (ROIs) from a 2D object detector. | link |
| roi_detected_object_fusion | Updates the classification labels of detected objects using ROI information from a 2D object detector. | link |
| pointpainting_fusion | Augments the point cloud by painting each point with additional information from the ROIs of a 2D object detector. The enriched point cloud is then processed by a 3D object detector for improved accuracy. | link |
| roi_pointcloud_fusion | Matches point clouds with ROIs from a 2D object detector to detect unknown-labeled objects. | link |
| segmentation_pointcloud_fusion | Filters out points that belong to less relevant regions, as defined by a 2D semantic or instance segmentation model. | link |

Inner Workings / Algorithms

[Figure: fusion_algorithm]

The fusion process operates on two primary types of input data:

  • Msg3d: This includes 3D data such as point clouds, bounding boxes, or clusters from LiDAR.
  • RoIs (Regions of Interest): These are 2D detections or proposals from camera-based perception modules, such as object detection bounding boxes.

Both inputs come with timestamps, which are crucial for synchronization and fusion. Since sensors operate at different frequencies and may experience network delays, a systematic approach is needed to handle their arrival, align their timestamps, and ensure reliable fusion.

The following steps describe how the node processes these inputs, synchronizes them, and performs multi-sensor fusion.

Step 1: Matching and Creating a Collector

When a Msg3d or a set of RoIs arrives, its timestamp is checked and an offset is subtracted to determine the reference timestamp. The node then searches for an existing collector with the same reference timestamp (see the sketch after this list).

  • If a matching collector is found, the incoming data is added to it.
  • If no matching collector exists, a new collector is created and initialized with the reference timestamp.
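The following is a minimal sketch of that matching step. The type and field names (FusionCollector, reference_timestamp, the tolerance check) are illustrative stand-ins under stated assumptions, not the package's actual internals:

```cpp
#include <cmath>
#include <list>
#include <memory>

// Illustrative sketch of Step 1 (hypothetical types; not the package's API).
// A collector groups the msg3d and RoIs that share one reference timestamp.
struct FusionCollector
{
  double reference_timestamp{};  // seconds, after offset subtraction
  // collected msg3d / RoIs would be stored here
};

std::shared_ptr<FusionCollector> match_or_create_collector(
  std::list<std::shared_ptr<FusionCollector>> & collectors,
  const double msg_stamp, const double offset, const double tolerance)
{
  // Subtract the per-source offset to get the reference timestamp.
  const double reference = msg_stamp - offset;

  // Reuse an existing collector whose reference timestamp is close enough.
  for (const auto & collector : collectors) {
    if (std::abs(collector->reference_timestamp - reference) < tolerance) {
      return collector;
    }
  }

  // Otherwise create a new collector initialized with this timestamp.
  auto created = std::make_shared<FusionCollector>();
  created->reference_timestamp = reference;
  collectors.push_back(created);
  return created;
}
```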

Step 2: Triggering the Timer

Once a collector is created, a countdown timer is started. The timeout duration depends on which message type arrived first and is defined by either msg3d_timeout_sec for msg3d or rois_timeout_sec for RoIs.

The collector will attempt to fuse the collected 3D and 2D data either:

  • When both Msg3d and RoI data are available, or
  • When the timer expires.

If no Msg3d is received before the timer expires, the collector will discard the data without performing fusion.
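The timeout rule can be summarized in a short sketch. The parameter names are from this package; the surrounding code is hypothetical:

```cpp
// Hypothetical helper: choose the countdown duration when a collector is
// created, based on which message type initialized it.
enum class FirstArrival { kMsg3d, kRois };

double select_timeout_sec(
  const FirstArrival first, const double msg3d_timeout_sec, const double rois_timeout_sec)
{
  // msg3d first: only RoIs are still missing, so keep the wait short.
  // RoIs first: wait up to rois_timeout_sec for the essential msg3d input.
  return (first == FirstArrival::kMsg3d) ? msg3d_timeout_sec : rois_timeout_sec;
}
```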

Step 3: Fusion Process

The fusion process consists of three main stages:

  1. Preprocessing – Preparing the input data for fusion.
  2. Fusion – Aligning and merging RoIs with the 3D point cloud.
  3. Postprocessing – Refining the fused output based on the algorithm’s requirements.

The specific operations performed during these stages may vary depending on the type of fusion being applied.
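Conceptually, each fusion algorithm plugs into the pipeline through three hooks. The interface below is a simplified sketch under that assumption, not the package's actual class hierarchy:

```cpp
// Simplified three-stage fusion interface (hypothetical types and names).
struct Msg3d {};    // stand-in for the 3D input (point cloud, clusters, ...)
struct RoisMsg {};  // stand-in for one camera's 2D RoIs
struct Output {};   // stand-in for the fused result

class FusionAlgorithm
{
public:
  virtual ~FusionAlgorithm() = default;
  virtual void preprocess(Msg3d & msg3d) = 0;                  // stage 1
  virtual void fuse(const RoisMsg & rois, Msg3d & msg3d) = 0;  // stage 2, per camera
  virtual Output postprocess(Msg3d & msg3d) = 0;               // stage 3
};
```

For example, roi_cluster_fusion would assign labels to clusters during the fuse stage, while pointpainting_fusion paints points there and runs its 3D detector as part of postprocessing.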

Step 4: Publishing the Fused Result

After the fusion process is completed, the fused output is published. The collector is then reset to an idle state, ready to process the next incoming message.

The figure below shows how the input data is fused in different scenarios.

[Figure: roi_sync_image2]

Parameters

All of the fusion nodes share the common parameters described below:

{{ json_to_markdown("perception/autoware_image_projection_based_fusion/schema/fusion_common.schema.json") }}

Parameter Settings

Timeout

The order in which RoIs or the msg3d message arrives at the fusion node depends on your system and sensor configuration. Since the primary goal is to fuse 2D RoIs with msg3d data, msg3d is essential for processing.

If RoIs arrive earlier, they must wait until msg3d is received. You can adjust the waiting time using the rois_timeout_sec parameter.

If msg3d arrives first, the fusion process should proceed as quickly as possible, so the waiting time for msg3d (msg3d_timeout_sec) should be kept minimal.
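For reference, here is a minimal sketch of reading these two timeouts with the standard rclcpp parameter API. The node and the default values are hypothetical; the shipped defaults live in fusion_common.param.yaml:

```cpp
#include <rclcpp/rclcpp.hpp>

// Hypothetical node that declares the two timeout parameters discussed above.
class FusionTimeoutDemo : public rclcpp::Node
{
public:
  FusionTimeoutDemo() : rclcpp::Node("fusion_timeout_demo")
  {
    // Keep this small: once msg3d is in, fusion should start promptly.
    msg3d_timeout_sec_ = declare_parameter<double>("msg3d_timeout_sec", 0.05);
    // Upper bound on how long early-arriving RoIs wait for msg3d.
    rois_timeout_sec_ = declare_parameter<double>("rois_timeout_sec", 0.5);
  }

private:
  double msg3d_timeout_sec_;
  double rois_timeout_sec_;
};
```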

RoIs Offsets

The offset between each camera and the LiDAR is determined by their shutter timing. To ensure accurate fusion, users must understand the timing offset between the RoIs and msg3d. Once this offset is known, it should be specified in the parameter rois_timestamp_offsets.

In the figure below, the LiDAR completes a full scan from the rear in 100 milliseconds. When the LiDAR scan reaches the area where the camera is facing, the camera is triggered, capturing an image with a corresponding timestamp. The rois_timestamp_offsets can then be calculated by subtracting the LiDAR header timestamp from the camera header timestamp. As a result, the rois_timestamp_offsets would be [0.059, 0.010, 0.026, 0.042, 0.076, 0.093].

[Figure: lidar_camera_sync]
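A worked instance of that subtraction, using the first camera from the example offsets above (the timestamps themselves are illustrative):

```cpp
// rois_timestamp_offsets entry for one camera:
// camera header timestamp minus LiDAR (msg3d) header timestamp, in seconds.
const double lidar_stamp = 100.000;   // msg3d header stamp
const double camera_stamp = 100.059;  // this camera's RoIs header stamp
const double offset = camera_stamp - lidar_stamp;  // -> 0.059
// Repeating this for each camera yields the full parameter, e.g.
// rois_timestamp_offsets: [0.059, 0.010, 0.026, 0.042, 0.076, 0.093]
```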

To check the header timestamps of the msg3d and RoIs topics, you can run:

ros2 topic echo [topic] --field header

Matching Strategies

We provide two matching strategies for different scenarios:

File truncated at 100 lines; see the full file.

CHANGELOG

Changelog for package autoware_image_projection_based_fusion

0.47.0 (2025-08-11)

  • chore(image_projection_based_fusion): add initializing status log (#11112)

    • chore(image_projection_based_fusion): add initializing status log

• chore: change to warning

  • style(pre-commit): update to clang-format-20 (#11088) Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>

  • fix(roi_cluster_fusion): fix bug in debug mode (#11054)

    • fix(roi_cluster_fusion): fix bug in debug mode
    • chore: refactor
    • chore: docs

• fix debug iou

  • fix(tier4_perception_launch): add one more camera fusion (#10973)

    • fix(tier4_perception_launch): add one more camera fusion
    • fix: missing launch
    • feat(detection.launch): add support for additional camera inputs (camera8)

• fix: missing launch param (Co-authored-by: Taekjin LEE <taekjin.lee@tier4.jp>)

  • fix(image_projection_based_fusion): loosen rois_number check (#10924)

  • feat(autoware_lidar_centerpoint): add class-wise confidence thresholds to CenterPoint (#10881)

    • Add PreprocessCuda to CenterPoint
    • style(pre-commit): autofix
    • style(pre-commit): autofix
    • Add intensity preprocessing
    • style(pre-commit): autofix
    • Fix config_.point_feature_size_ typo
    • style(pre-commit): autofix
    • Fix point typo
    • style(pre-commit): autofix
    • Change score_threshold to score_thresholds
    • Use <autoware/cuda_utils/cuda_utils.hpp> for clear_async
    • Rename pre_ptr_ to pre_proc_ptr_
    • Remove unused getCacheSize() and getIdx
    • Use template in generateVoxels_random_kernel instead
    • style(pre-commit): autofix
    • Remove references in generateVoxels_random_kernel
    • Remove references in generateVoxels_random_kernel
    • style(pre-commit): autofix
    • Remove generateIntensityFeatures_kernel and add the case of 11 to ENCODER_IN_FEATURE_SIZE for generateFeatures_kernel
    • style(pre-commit): autofix
    • Add class-wise confidence thresholds to CenterPoint
    • style(pre-commit): autofix
    • Remov empty line changes
    • Update score_threshold to score_thresholds in REAMME
    • style(pre-commit): autofix
    • Change score_thresholds from pass by value to pass by reference
    • style(pre-commit): autofix
    • Add information about class names in scehema
    • Change vector<double> to vector<float>
    • Remove thrust and add stream_ to PostProcessCUDA
    • style(pre-commit): autofix
    • Fix incorrect initialization of score_thresholds_ vector
    • Fix postprocess CudaMemCpy error
    • Fix postprocess score_thresholds_d_ptr_ typing error
    • Fix score_thresholds typing in node.cpp
    • Static casting params.score_thresholds vector
    • style(pre-commit): autofix
    • Update perception/autoware_lidar_centerpoint/src/node.cpp
    • Update perception/autoware_lidar_centerpoint/include/autoware/lidar_centerpoint/centerpoint_config.hpp
    • Update centerpoint_config.hpp
    • Update node.cpp
    • Update score_thresholds_ to double since ros2 supports only double instead of float
    • style(pre-commit): autofix
    • Fix cuda memory and revert double score_thresholds_ to float score_thresholds_

• style(pre-commit): autofix (Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>, Taekjin LEE <technolojin@gmail.com>)

  • Contributors: Kok Seang Tan, Mete Fatih Cırıt, badai nguyen

File truncated at 100 lines; see the full file.

Launch files

  • launch/pointpainting_fusion.launch.xml
      • input/rois_number [default: 6]
      • input/rois0 [default: rois0]
      • input/camera_info0 [default: /camera_info0]
      • input/rois1 [default: rois1]
      • input/camera_info1 [default: /camera_info1]
      • input/rois2 [default: rois2]
      • input/camera_info2 [default: /camera_info2]
      • input/rois3 [default: rois3]
      • input/camera_info3 [default: /camera_info3]
      • input/rois4 [default: rois4]
      • input/camera_info4 [default: /camera_info4]
      • input/rois5 [default: rois5]
      • input/camera_info5 [default: /camera_info5]
      • input/rois6 [default: rois6]
      • input/camera_info6 [default: /camera_info6]
      • input/rois7 [default: rois7]
      • input/camera_info7 [default: /camera_info7]
      • input/rois8 [default: rois8]
      • input/camera_info8 [default: /camera_info8]
      • input/pointcloud [default: /sensing/lidar/top/rectified/pointcloud]
      • output/objects [default: objects]
      • data_path [default: $(env HOME)/autoware_data]
      • model_name [default: pointpainting]
      • model_path [default: $(var data_path)/image_projection_based_fusion]
      • model_param_path [default: $(find-pkg-share autoware_image_projection_based_fusion)/config/pointpainting.param.yaml]
      • ml_package_param_path [default: $(var model_path)/$(var model_name)_ml_package.param.yaml]
      • class_remapper_param_path [default: $(var model_path)/detection_class_remapper.param.yaml]
      • sync_param_path [default: $(find-pkg-share autoware_image_projection_based_fusion)/config/fusion_common.param.yaml]
      • common_param_path [default: $(find-pkg-share autoware_image_projection_based_fusion)/config/pointpainting_common.param.yaml]
      • build_only [default: false]
      • use_pointcloud_container [default: false]
      • pointcloud_container_name [default: pointcloud_container]
      • input_rois_number [default: $(var input/rois_number)]
      • input/image0 [default: /image_raw0]
      • input/image1 [default: /image_raw1]
      • input/image2 [default: /image_raw2]
      • input/image3 [default: /image_raw3]
      • input/image4 [default: /image_raw4]
      • input/image5 [default: /image_raw5]
      • input/image6 [default: /image_raw6]
      • input/image7 [default: /image_raw7]
      • input/image8 [default: /image_raw8]
  • launch/roi_cluster_fusion.launch.xml
      • input/rois_number [default: 6]
      • input/rois0 [default: rois0]
      • input/camera_info0 [default: /camera_info0]
      • input/rois1 [default: rois1]
      • input/camera_info1 [default: /camera_info1]
      • input/rois2 [default: rois2]
      • input/camera_info2 [default: /camera_info2]
      • input/rois3 [default: rois3]
      • input/camera_info3 [default: /camera_info3]
      • input/rois4 [default: rois4]
      • input/camera_info4 [default: /camera_info4]
      • input/rois5 [default: rois5]
      • input/camera_info5 [default: /camera_info5]
      • input/rois6 [default: rois6]
      • input/camera_info6 [default: /camera_info6]
      • input/rois7 [default: rois7]
      • input/camera_info7 [default: /camera_info7]
      • input/rois8 [default: rois8]
      • input/camera_info8 [default: /camera_info8]
      • input/clusters [default: clusters]
      • output/clusters [default: labeled_clusters]
      • param_path [default: $(find-pkg-share autoware_image_projection_based_fusion)/config/roi_cluster_fusion.param.yaml]
      • sync_param_path [default: $(find-pkg-share autoware_image_projection_based_fusion)/config/fusion_common.param.yaml]
      • input/image0 [default: /image_raw0]
      • input/image1 [default: /image_raw1]
      • input/image2 [default: /image_raw2]
      • input/image3 [default: /image_raw3]
      • input/image4 [default: /image_raw4]
      • input/image5 [default: /image_raw5]
      • input/image6 [default: /image_raw6]
      • input/image7 [default: /image_raw7]
      • input/image8 [default: /image_raw8]
  • launch/roi_detected_object_fusion.launch.xml
      • input/rois_number [default: 6]
      • input/rois0 [default: rois0]
      • input/camera_info0 [default: /camera_info0]
      • input/rois1 [default: rois1]
      • input/camera_info1 [default: /camera_info1]
      • input/rois2 [default: rois2]
      • input/camera_info2 [default: /camera_info2]
      • input/rois3 [default: rois3]
      • input/camera_info3 [default: /camera_info3]
      • input/rois4 [default: rois4]
      • input/camera_info4 [default: /camera_info4]
      • input/rois5 [default: rois5]
      • input/camera_info5 [default: /camera_info5]
      • input/rois6 [default: rois6]
      • input/camera_info6 [default: /camera_info6]
      • input/rois7 [default: rois7]
      • input/camera_info7 [default: /camera_info7]
      • input/rois8 [default: rois8]
      • input/camera_info8 [default: /camera_info8]
      • input/objects [default: objects]
      • output/objects [default: fused_objects]
      • param_path [default: $(find-pkg-share autoware_image_projection_based_fusion)/config/roi_detected_object_fusion.param.yaml]
      • sync_param_path [default: $(find-pkg-share autoware_image_projection_based_fusion)/config/fusion_common.param.yaml]
      • input_rois_number [default: $(var input/rois_number)]
      • input/image0 [default: /image_raw0]
      • input/image1 [default: /image_raw1]
      • input/image2 [default: /image_raw2]
      • input/image3 [default: /image_raw3]
      • input/image4 [default: /image_raw4]
      • input/image5 [default: /image_raw5]
      • input/image6 [default: /image_raw6]
      • input/image7 [default: /image_raw7]
      • input/image8 [default: /image_raw8]
  • launch/roi_pointcloud_fusion.launch.xml
      • pointcloud_container_name [default: pointcloud_container]
      • input/rois_number [default: 6]
      • input/rois0 [default: rois0]
      • input/camera_info0 [default: /camera_info0]
      • input/rois1 [default: rois1]
      • input/camera_info1 [default: /camera_info1]
      • input/rois2 [default: rois2]
      • input/camera_info2 [default: /camera_info2]
      • input/rois3 [default: rois3]
      • input/camera_info3 [default: /camera_info3]
      • input/rois4 [default: rois4]
      • input/camera_info4 [default: /camera_info4]
      • input/rois5 [default: rois5]
      • input/camera_info5 [default: /camera_info5]
      • input/rois6 [default: rois6]
      • input/camera_info6 [default: /camera_info6]
      • input/rois7 [default: rois7]
      • input/camera_info7 [default: /camera_info7]
      • input/rois8 [default: rois8]
      • input/camera_info8 [default: /camera_info8]
      • input/pointcloud [default: /perception/object_recognition/detection/pointcloud_map_filtered/pointcloud]
      • output/clusters [default: output/clusters]
      • debug/clusters [default: roi_pointcloud_fusion/debug/clusters]
      • param_path [default: $(find-pkg-share autoware_image_projection_based_fusion)/config/roi_pointcloud_fusion.param.yaml]
      • sync_param_path [default: $(find-pkg-share autoware_image_projection_based_fusion)/config/fusion_common.param.yaml]
      • input_rois_number [default: $(var input/rois_number)]
      • input/image0 [default: /image_raw0]
      • input/image1 [default: /image_raw1]
      • input/image2 [default: /image_raw2]
      • input/image3 [default: /image_raw3]
      • input/image4 [default: /image_raw4]
      • input/image5 [default: /image_raw5]
      • input/image6 [default: /image_raw6]
      • input/image7 [default: /image_raw7]
      • input/image8 [default: /image_raw8]
  • launch/segmentation_pointcloud_fusion.launch.xml
      • input/camera_number [default: 1]
      • input/mask0 [default: /perception/object_recognition/detection/mask0]
      • input/camera_info0 [default: /sensing/camera/camera0/camera_info]
      • input/mask1 [default: /perception/object_recognition/detection/mask1]
      • input/camera_info1 [default: /sensing/camera/camera1/camera_info]
      • input/mask2 [default: /perception/object_recognition/detection/mask2]
      • input/camera_info2 [default: /sensing/camera/camera2/camera_info]
      • input/mask3 [default: /perception/object_recognition/detection/mask3]
      • input/camera_info3 [default: /sensing/camera/camera3/camera_info]
      • input/mask4 [default: /perception/object_recognition/detection/mask4]
      • input/camera_info4 [default: /sensing/camera/camera4/camera_info]
      • input/mask5 [default: /perception/object_recognition/detection/mask5]
      • input/camera_info5 [default: /sensing/camera/camera5/camera_info]
      • input/mask6 [default: /perception/object_recognition/detection/mask6]
      • input/camera_info6 [default: /sensing/camera/camera6/camera_info]
      • input/mask7 [default: /perception/object_recognition/detection/mask7]
      • input/camera_info7 [default: /sensing/camera/camera7/camera_info]
      • input/mask8 [default: /perception/object_recognition/detection/mask8]
      • input/camera_info8 [default: /sensing/camera/camera8/camera_info]
      • input/pointcloud [default: /sensing/lidar/top/outlier_filtered/pointcloud]
      • output/pointcloud [default: output/pointcloud]
      • sync_param_path [default: $(find-pkg-share autoware_image_projection_based_fusion)/config/fusion_common.param.yaml]
      • semantic_segmentation_based_filter_param_path [default: $(find-pkg-share autoware_image_projection_based_fusion)/config/segmentation_pointcloud_fusion.param.yaml]
      • input/image0 [default: /image_raw0]
      • input/image1 [default: /image_raw1]
      • input/image2 [default: /image_raw2]
      • input/image3 [default: /image_raw3]
      • input/image4 [default: /image_raw4]
      • input/image5 [default: /image_raw5]
      • input/image6 [default: /image_raw6]
      • input/image7 [default: /image_raw7]
      • input/image8 [default: /image_raw8]

Messages

No message files found.

Services

No service files found.

Plugins

No plugins found.


ROS Distro
github

Package Summary

Tags No category tags.
Version 0.47.0
License Apache License 2.0
Build type AMENT_CMAKE
Use RECOMMENDED

Repository Summary

Checkout URI https://github.com/autowarefoundation/autoware_universe.git
VCS Type git
VCS Version main
Last Updated 2025-08-16
Dev Status UNKNOWN
Released UNRELEASED
Tags planner ros calibration self-driving-car autonomous-driving autonomous-vehicles ros2 3d-map autoware

Package Description

The autoware_image_projection_based_fusion package

Additional Links

No additional links.

Maintainers

  • Yukihiro Saito
  • Yoshi Ri
  • Dai Nguyen
  • Kotaro Uetake
  • Tao Zhong
  • Taekjin Lee

Authors

No additional authors.

autoware_image_projection_based_fusion

Purpose

The autoware_image_projection_based_fusion package enhances obstacle detection accuracy by integrating image-based and LiDAR-based perception. It fuses detections from 2D images (such as bounding boxes or segmentation masks) with 3D obstacle representations such as point clouds, clusters, or bounding boxes. This fusion helps refine obstacle classification and detection in autonomous driving applications.

Fusion algorithms

The package provides multiple fusion algorithms, each designed for a specific use case. The available methods are summarized below; each has its own detailed documentation page:

  • roi_cluster_fusion: Assigns classification labels to LiDAR-detected clusters by matching them with regions of interest (RoIs) from a 2D object detector.
  • roi_detected_object_fusion: Updates the classification labels of detected objects using RoI information from a 2D object detector.
  • pointpainting_fusion: Augments the point cloud by painting each point with additional information from the RoIs of a 2D object detector; the enriched point cloud is then processed by a 3D object detector for improved accuracy.
  • roi_pointcloud_fusion: Matches point cloud clusters with RoIs from a 2D object detector to detect objects with unknown labels.
  • segmentation_pointcloud_fusion: Filters out points belonging to less relevant regions, as defined by a 2D semantic or instance segmentation model.

Inner Workings / Algorithms

[Figure: fusion_algorithm]

The fusion process operates on two primary types of input data:

  • Msg3d: This includes 3D data such as point clouds, bounding boxes, or clusters from LiDAR.
  • RoIs (Regions of Interest): These are 2D detections or proposals from camera-based perception modules, such as object detection bounding boxes.

Both inputs come with timestamps, which are crucial for synchronization and fusion. Since sensors operate at different frequencies and may experience network delays, a systematic approach is needed to handle their arrival, align their timestamps, and ensure reliable fusion.

The following steps describe how the node processes these inputs, synchronizes them, and performs multi-sensor fusion.

Step 1: Matching and Creating a Collector

When a Msg3d or a set of RoIs arrives, its timestamp is checked, and an offset is subtracted to determine the reference timestamp. The node then searches for an existing collector with the same reference timestamp.

  • If a matching collector is found, the incoming data is added to it.
  • If no matching collector exists, a new collector is created and initialized with the reference timestamp.
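
To make this step concrete, here is a minimal Python sketch of the collector matching, assuming an in-memory list of collectors and a fixed matching tolerance; `Collector`, `find_or_create_collector`, and `MATCH_TOLERANCE_SEC` are illustrative names for this sketch, not the package's actual C++ API.

```python
from dataclasses import dataclass, field

MATCH_TOLERANCE_SEC = 0.001  # assumed tolerance for reference-timestamp matching

@dataclass
class Collector:
    reference_timestamp: float                # arrival stamp minus the source offset
    msg3d: object = None                      # 3D input (point cloud, clusters, ...)
    rois: dict = field(default_factory=dict)  # camera id -> RoIs message

collectors: list = []

def find_or_create_collector(stamp: float, offset: float) -> Collector:
    """Subtract the per-source offset, then look for a collector whose
    reference timestamp matches within the tolerance."""
    reference = stamp - offset
    for collector in collectors:
        if abs(collector.reference_timestamp - reference) < MATCH_TOLERANCE_SEC:
            return collector                  # matching collector found: add data to it
    collector = Collector(reference_timestamp=reference)
    collectors.append(collector)              # no match: create and initialize a new one
    return collector

# Example: RoIs from camera 2 with an assumed 26 ms shutter offset
collector = find_or_create_collector(stamp=1723791000.126, offset=0.026)
collector.rois[2] = "rois_msg_placeholder"
```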

Step 2: Triggering the Timer

Once a collector is created, a countdown timer is started. The timeout duration depends on which message type arrived first and is defined by either msg3d_timeout_sec for msg3d or rois_timeout_sec for RoIs.

The collector will attempt to fuse the collected 3D and 2D data either:

  • When both Msg3d and RoI data are available, or
  • When the timer expires.

If no Msg3d is received before the timer expires, the collector will discard the data without performing fusion.
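
The timer logic can be sketched as follows; the parameter names msg3d_timeout_sec and rois_timeout_sec come from the text above, while the concrete values and the use of plain `threading.Timer` (rather than the node's ROS timer) are assumptions for illustration.

```python
import threading

# Assumed values; the real ones come from fusion_common.param.yaml.
MSG3D_TIMEOUT_SEC = 0.05   # msg3d arrived first: fuse as soon as possible
ROIS_TIMEOUT_SEC = 0.5     # RoIs arrived first: allow time for msg3d to arrive

def on_timeout(collector: dict) -> None:
    """Timer callback: fusion requires msg3d; without it, discard the RoIs."""
    if collector.get("msg3d") is not None:
        print("timer expired with msg3d present: fuse what was collected")
    else:
        print("msg3d never arrived: discard collected RoIs without fusion")

def start_collector_timer(first_message_type: str, collector: dict) -> threading.Timer:
    """Pick the timeout based on which message type created the collector."""
    timeout = MSG3D_TIMEOUT_SEC if first_message_type == "msg3d" else ROIS_TIMEOUT_SEC
    timer = threading.Timer(timeout, on_timeout, args=(collector,))
    timer.start()
    return timer

# If both msg3d and all RoIs arrive before the deadline, fusion runs early
# and the timer is cancelled:
timer = start_collector_timer("rois", {"rois": {0: "rois_msg"}})
timer.cancel()
```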

Step 3: Fusion Process

The fusion process consists of three main stages:

  1. Preprocessing – Preparing the input data for fusion.
  2. Fusion – Aligning and merging RoIs with the 3D point cloud.
  3. Postprocessing – Refining the fused output based on the algorithm’s requirements.

The specific operations performed during these stages may vary depending on the type of fusion being applied.
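
A skeleton of the three stages, written as a minimal Python sketch; the hook names below are assumptions for illustration and do not mirror the package's C++ class interface.

```python
class FusionAlgorithm:
    """Skeleton of the three fusion stages; concrete algorithms override fuse()."""

    def preprocess(self, msg3d, rois):
        # Stage 1: prepare the inputs, e.g. transform RoIs into a common frame.
        return msg3d, rois

    def fuse(self, msg3d, rois):
        # Stage 2: align and merge the RoIs with the 3D data.
        raise NotImplementedError

    def postprocess(self, fused):
        # Stage 3: refine the fused output (filtering, relabeling, ...).
        return fused

    def run(self, msg3d, rois):
        msg3d, rois = self.preprocess(msg3d, rois)
        return self.postprocess(self.fuse(msg3d, rois))
```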

Step 4: Publishing the Fused Result

After the fusion process is completed, the fused output is published. The collector is then reset to an idle state, ready to process the next incoming message.
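
A brief sketch of this publish-and-reset step; `publisher` and the `fused_output` attribute are illustrative stand-ins for the node's actual ROS 2 publisher and internal state.

```python
def publish_and_reset(collector, publisher):
    """Publish the fused output, then return the collector to an idle state."""
    publisher.publish(collector.fused_output)  # publish the fused result
    collector.msg3d = None                     # reset to idle, ready for reuse
    collector.rois.clear()
    collector.fused_output = None
```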

The figure below shows how the input data is fused in different scenarios.

[Figure: roi_sync_image2]

Parameters

All of the fusion nodes share the common parameters described below:

{{ json_to_markdown("perception/autoware_image_projection_based_fusion/schema/fusion_common.schema.json") }}

Parameter Settings

Timeout

The order in which RoIs or the msg3d message arrives at the fusion node depends on your system and sensor configuration. Since the primary goal is to fuse 2D RoIs with msg3d data, msg3d is essential for processing.

If RoIs arrive earlier, they must wait until msg3d is received. You can adjust the waiting time using the rois_timeout_sec parameter.

If msg3d arrives first, the fusion process should proceed as quickly as possible, so the waiting time for msg3d (msg3d_timeout_sec) should be kept minimal.

RoIs Offsets

The offset between each camera and the LiDAR is determined by their shutter timing. To ensure accurate fusion, users must understand the timing offset between the RoIs and msg3d. Once this offset is known, it should be specified in the parameter rois_timestamp_offsets.

In the figure below, the LiDAR completes a full scan from the rear in 100 milliseconds. When the LiDAR scan reaches the area where the camera is facing, the camera is triggered, capturing an image with a corresponding timestamp. The rois_timestamp_offsets can then be calculated by subtracting the LiDAR header timestamp from the camera header timestamp. As a result, the rois_timestamp_offsets would be [0.059, 0.010, 0.026, 0.042, 0.076, 0.093].

[Figure: lidar_camera_sync]
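
The arithmetic is simple subtraction per camera; the worked example below reproduces the offsets quoted above, with illustrative absolute stamp values assumed for this sketch.

```python
# LiDAR header stamp and per-camera header stamps in seconds (illustrative
# numbers chosen to reproduce the offsets quoted above).
lidar_stamp = 100.000
camera_stamps = [100.059, 100.010, 100.026, 100.042, 100.076, 100.093]

rois_timestamp_offsets = [round(c - lidar_stamp, 3) for c in camera_stamps]
print(rois_timestamp_offsets)  # [0.059, 0.01, 0.026, 0.042, 0.076, 0.093]
```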

To check the header timestamps of the msg3d and RoIs topics, you can run:

ros2 topic echo <topic> --field header

Matching Strategies

We provide two matching strategies for different scenarios:

(File truncated at 100 lines; see the full file.)

CHANGELOG

Changelog for package autoware_image_projection_based_fusion

0.47.0 (2025-08-11)

  • chore(image_projection_based_fusion): add initializing status log (#11112)

    • chore(image_projection_based_fusion): add initializing status log

    * chore: change to warning ---------

  • style(pre-commit): update to clang-format-20 (#11088) Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>

  • fix(roi_cluster_fusion): fix bug in debug mode (#11054)

    • fix(roi_cluster_fusion): fix bug in debug mode
    • chore: refactor
    • chore: docs

    * fix debug iou ---------

  • fix(tier4_perception_launch): add one more camera fusion (#10973)

    • fix(tier4_perception_launch): add one more camera fusion
    • fix: missing launch
    • feat(detection.launch): add support for additional camera inputs (camera8)

    * fix: missing launch param ---------Co-authored-by: Taekjin LEE <<taekjin.lee@tier4.jp>>

  • fix(image_projection_based_fusion): loosen rois_number check (#10924)

  • feat(autoware_lidar_centerpoint): add class-wise confidence thresholds to CenterPoint (#10881)

    • Add PreprocessCuda to CenterPoint
    • style(pre-commit): autofix
    • style(pre-commit): autofix
    • Add intensity preprocessing
    • style(pre-commit): autofix
    • Fix config_.point_feature_size_ typo
    • style(pre-commit): autofix
    • Fix point typo
    • style(pre-commit): autofix
    • Change score_threshold to score_thresholds
    • Use <autoware/cuda_utils/cuda_utils.hpp> for clear_async
    • Rename pre_ptr_ to pre_proc_ptr_
    • Remove unused getCacheSize() and getIdx
    • Use template in generateVoxels_random_kernel instead
    • style(pre-commit): autofix
    • Remove references in generateVoxels_random_kernel
    • Remove references in generateVoxels_random_kernel
    • style(pre-commit): autofix
    • Remove generateIntensityFeatures_kernel and add the case of 11 to ENCODER_IN_FEATURE_SIZE for generateFeatures_kernel
    • style(pre-commit): autofix
    • Add class-wise confidence thresholds to CenterPoint
    • style(pre-commit): autofix
    • Remov empty line changes
    • Update score_threshold to score_thresholds in REAMME
    • style(pre-commit): autofix
    • Change score_thresholds from pass by value to pass by reference
    • style(pre-commit): autofix
    • Add information about class names in scehema
    • Change vector<double> to vector<float>
    • Remove thrust and add stream_ to PostProcessCUDA
    • style(pre-commit): autofix
    • Fix incorrect initialization of score_thresholds_ vector
    • Fix postprocess CudaMemCpy error
    • Fix postprocess score_thresholds_d_ptr_ typing error
    • Fix score_thresholds typing in node.cpp
    • Static casting params.score_thresholds vector
    • style(pre-commit): autofix
    • Update perception/autoware_lidar_centerpoint/src/node.cpp
    • Update perception/autoware_lidar_centerpoint/include/autoware/lidar_centerpoint/centerpoint_config.hpp
    • Update centerpoint_config.hpp
    • Update node.cpp
    • Update score_thresholds_ to double since ros2 supports only double instead of float
    • style(pre-commit): autofix
    • Fix cuda memory and revert double score_thresholds_ to float score_thresholds_

    * style(pre-commit): autofix ---------Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com> Co-authored-by: Taekjin LEE <<technolojin@gmail.com>>

  • Contributors: Kok Seang Tan, Mete Fatih Cırıt, badai nguyen

(File truncated at 100 lines; see the full file.)

Launch files

  • launch/pointpainting_fusion.launch.xml
      • input/rois_number [default: 6]
      • input/rois0 [default: rois0]
      • input/camera_info0 [default: /camera_info0]
      • input/rois1 [default: rois1]
      • input/camera_info1 [default: /camera_info1]
      • input/rois2 [default: rois2]
      • input/camera_info2 [default: /camera_info2]
      • input/rois3 [default: rois3]
      • input/camera_info3 [default: /camera_info3]
      • input/rois4 [default: rois4]
      • input/camera_info4 [default: /camera_info4]
      • input/rois5 [default: rois5]
      • input/camera_info5 [default: /camera_info5]
      • input/rois6 [default: rois6]
      • input/camera_info6 [default: /camera_info6]
      • input/rois7 [default: rois7]
      • input/camera_info7 [default: /camera_info7]
      • input/rois8 [default: rois8]
      • input/camera_info8 [default: /camera_info8]
      • input/pointcloud [default: /sensing/lidar/top/rectified/pointcloud]
      • output/objects [default: objects]
      • data_path [default: $(env HOME)/autoware_data]
      • model_name [default: pointpainting]
      • model_path [default: $(var data_path)/image_projection_based_fusion]
      • model_param_path [default: $(find-pkg-share autoware_image_projection_based_fusion)/config/pointpainting.param.yaml]
      • ml_package_param_path [default: $(var model_path)/$(var model_name)_ml_package.param.yaml]
      • class_remapper_param_path [default: $(var model_path)/detection_class_remapper.param.yaml]
      • sync_param_path [default: $(find-pkg-share autoware_image_projection_based_fusion)/config/fusion_common.param.yaml]
      • common_param_path [default: $(find-pkg-share autoware_image_projection_based_fusion)/config/pointpainting_common.param.yaml]
      • build_only [default: false]
      • use_pointcloud_container [default: false]
      • pointcloud_container_name [default: pointcloud_container]
      • input_rois_number [default: $(var input/rois_number)]
      • input/image0 [default: /image_raw0]
      • input/image1 [default: /image_raw1]
      • input/image2 [default: /image_raw2]
      • input/image3 [default: /image_raw3]
      • input/image4 [default: /image_raw4]
      • input/image5 [default: /image_raw5]
      • input/image6 [default: /image_raw6]
      • input/image7 [default: /image_raw7]
      • input/image8 [default: /image_raw8]
  • launch/roi_cluster_fusion.launch.xml
      • input/rois_number [default: 6]
      • input/rois0 [default: rois0]
      • input/camera_info0 [default: /camera_info0]
      • input/rois1 [default: rois1]
      • input/camera_info1 [default: /camera_info1]
      • input/rois2 [default: rois2]
      • input/camera_info2 [default: /camera_info2]
      • input/rois3 [default: rois3]
      • input/camera_info3 [default: /camera_info3]
      • input/rois4 [default: rois4]
      • input/camera_info4 [default: /camera_info4]
      • input/rois5 [default: rois5]
      • input/camera_info5 [default: /camera_info5]
      • input/rois6 [default: rois6]
      • input/camera_info6 [default: /camera_info6]
      • input/rois7 [default: rois7]
      • input/camera_info7 [default: /camera_info7]
      • input/rois8 [default: rois8]
      • input/camera_info8 [default: /camera_info8]
      • input/clusters [default: clusters]
      • output/clusters [default: labeled_clusters]
      • param_path [default: $(find-pkg-share autoware_image_projection_based_fusion)/config/roi_cluster_fusion.param.yaml]
      • sync_param_path [default: $(find-pkg-share autoware_image_projection_based_fusion)/config/fusion_common.param.yaml]
      • input/image0 [default: /image_raw0]
      • input/image1 [default: /image_raw1]
      • input/image2 [default: /image_raw2]
      • input/image3 [default: /image_raw3]
      • input/image4 [default: /image_raw4]
      • input/image5 [default: /image_raw5]
      • input/image6 [default: /image_raw6]
      • input/image7 [default: /image_raw7]
      • input/image8 [default: /image_raw8]
  • launch/roi_detected_object_fusion.launch.xml
      • input/rois_number [default: 6]
      • input/rois0 [default: rois0]
      • input/camera_info0 [default: /camera_info0]
      • input/rois1 [default: rois1]
      • input/camera_info1 [default: /camera_info1]
      • input/rois2 [default: rois2]
      • input/camera_info2 [default: /camera_info2]
      • input/rois3 [default: rois3]
      • input/camera_info3 [default: /camera_info3]
      • input/rois4 [default: rois4]
      • input/camera_info4 [default: /camera_info4]
      • input/rois5 [default: rois5]
      • input/camera_info5 [default: /camera_info5]
      • input/rois6 [default: rois6]
      • input/camera_info6 [default: /camera_info6]
      • input/rois7 [default: rois7]
      • input/camera_info7 [default: /camera_info7]
      • input/rois8 [default: rois8]
      • input/camera_info8 [default: /camera_info8]
      • input/objects [default: objects]
      • output/objects [default: fused_objects]
      • param_path [default: $(find-pkg-share autoware_image_projection_based_fusion)/config/roi_detected_object_fusion.param.yaml]
      • sync_param_path [default: $(find-pkg-share autoware_image_projection_based_fusion)/config/fusion_common.param.yaml]
      • input_rois_number [default: $(var input/rois_number)]
      • input/image0 [default: /image_raw0]
      • input/image1 [default: /image_raw1]
      • input/image2 [default: /image_raw2]
      • input/image3 [default: /image_raw3]
      • input/image4 [default: /image_raw4]
      • input/image5 [default: /image_raw5]
      • input/image6 [default: /image_raw6]
      • input/image7 [default: /image_raw7]
      • input/image8 [default: /image_raw8]
  • launch/roi_pointcloud_fusion.launch.xml
      • pointcloud_container_name [default: pointcloud_container]
      • input/rois_number [default: 6]
      • input/rois0 [default: rois0]
      • input/camera_info0 [default: /camera_info0]
      • input/rois1 [default: rois1]
      • input/camera_info1 [default: /camera_info1]
      • input/rois2 [default: rois2]
      • input/camera_info2 [default: /camera_info2]
      • input/rois3 [default: rois3]
      • input/camera_info3 [default: /camera_info3]
      • input/rois4 [default: rois4]
      • input/camera_info4 [default: /camera_info4]
      • input/rois5 [default: rois5]
      • input/camera_info5 [default: /camera_info5]
      • input/rois6 [default: rois6]
      • input/camera_info6 [default: /camera_info6]
      • input/rois7 [default: rois7]
      • input/camera_info7 [default: /camera_info7]
      • input/rois8 [default: rois8]
      • input/camera_info8 [default: /camera_info8]
      • input/pointcloud [default: /perception/object_recognition/detection/pointcloud_map_filtered/pointcloud]
      • output/clusters [default: output/clusters]
      • debug/clusters [default: roi_pointcloud_fusion/debug/clusters]
      • param_path [default: $(find-pkg-share autoware_image_projection_based_fusion)/config/roi_pointcloud_fusion.param.yaml]
      • sync_param_path [default: $(find-pkg-share autoware_image_projection_based_fusion)/config/fusion_common.param.yaml]
      • input_rois_number [default: $(var input/rois_number)]
      • input/image0 [default: /image_raw0]
      • input/image1 [default: /image_raw1]
      • input/image2 [default: /image_raw2]
      • input/image3 [default: /image_raw3]
      • input/image4 [default: /image_raw4]
      • input/image5 [default: /image_raw5]
      • input/image6 [default: /image_raw6]
      • input/image7 [default: /image_raw7]
      • input/image8 [default: /image_raw8]
  • launch/segmentation_pointcloud_fusion.launch.xml
      • input/camera_number [default: 1]
      • input/mask0 [default: /perception/object_recognition/detection/mask0]
      • input/camera_info0 [default: /sensing/camera/camera0/camera_info]
      • input/mask1 [default: /perception/object_recognition/detection/mask1]
      • input/camera_info1 [default: /sensing/camera/camera1/camera_info]
      • input/mask2 [default: /perception/object_recognition/detection/mask2]
      • input/camera_info2 [default: /sensing/camera/camera2/camera_info]
      • input/mask3 [default: /perception/object_recognition/detection/mask3]
      • input/camera_info3 [default: /sensing/camera/camera3/camera_info]
      • input/mask4 [default: /perception/object_recognition/detection/mask4]
      • input/camera_info4 [default: /sensing/camera/camera4/camera_info]
      • input/mask5 [default: /perception/object_recognition/detection/mask5]
      • input/camera_info5 [default: /sensing/camera/camera5/camera_info]
      • input/mask6 [default: /perception/object_recognition/detection/mask6]
      • input/camera_info6 [default: /sensing/camera/camera6/camera_info]
      • input/mask7 [default: /perception/object_recognition/detection/mask7]
      • input/camera_info7 [default: /sensing/camera/camera7/camera_info]
      • input/mask8 [default: /perception/object_recognition/detection/mask8]
      • input/camera_info8 [default: /sensing/camera/camera8/camera_info]
      • input/pointcloud [default: /sensing/lidar/top/outlier_filtered/pointcloud]
      • output/pointcloud [default: output/pointcloud]
      • sync_param_path [default: $(find-pkg-share autoware_image_projection_based_fusion)/config/fusion_common.param.yaml]
      • semantic_segmentation_based_filter_param_path [default: $(find-pkg-share autoware_image_projection_based_fusion)/config/segmentation_pointcloud_fusion.param.yaml]
      • input/image0 [default: /image_raw0]
      • input/image1 [default: /image_raw1]
      • input/image2 [default: /image_raw2]
      • input/image3 [default: /image_raw3]
      • input/image4 [default: /image_raw4]
      • input/image5 [default: /image_raw5]
      • input/image6 [default: /image_raw6]
      • input/image7 [default: /image_raw7]
      • input/image8 [default: /image_raw8]
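
For example, a launch file can be started with its arguments overridden on the command line, assuming the package has been built and the workspace sourced:

ros2 launch autoware_image_projection_based_fusion pointpainting_fusion.launch.xml input/rois_number:=6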

Messages

No message files found.

Services

No service files found.

Plugins

No plugins found.

No version for distro rolling showing github. Known supported distros are highlighted in the buttons above.
Package symbol

autoware_image_projection_based_fusion package from autoware_universe repo

autoware_agnocast_wrapper autoware_auto_common autoware_boundary_departure_checker autoware_component_interface_specs_universe autoware_component_interface_tools autoware_component_interface_utils autoware_cuda_dependency_meta autoware_fake_test_node autoware_glog_component autoware_goal_distance_calculator autoware_grid_map_utils autoware_path_distance_calculator autoware_polar_grid autoware_time_utils autoware_traffic_light_recognition_marker_publisher autoware_traffic_light_utils autoware_universe_utils tier4_api_utils autoware_autonomous_emergency_braking autoware_collision_detector autoware_control_command_gate autoware_control_performance_analysis autoware_control_validator autoware_external_cmd_selector autoware_joy_controller autoware_lane_departure_checker autoware_mpc_lateral_controller autoware_obstacle_collision_checker autoware_operation_mode_transition_manager autoware_pid_longitudinal_controller autoware_predicted_path_checker autoware_pure_pursuit autoware_shift_decider autoware_smart_mpc_trajectory_follower autoware_stop_mode_operator autoware_trajectory_follower_base autoware_trajectory_follower_node autoware_vehicle_cmd_gate autoware_control_evaluator autoware_kinematic_evaluator autoware_localization_evaluator autoware_perception_online_evaluator autoware_planning_evaluator autoware_scenario_simulator_v2_adapter autoware_diagnostic_graph_test_examples tier4_autoware_api_launch tier4_control_launch tier4_localization_launch tier4_map_launch tier4_perception_launch tier4_planning_launch tier4_sensing_launch tier4_simulator_launch tier4_system_launch tier4_vehicle_launch autoware_geo_pose_projector autoware_ar_tag_based_localizer autoware_landmark_manager autoware_lidar_marker_localizer autoware_localization_error_monitor autoware_pose2twist autoware_pose_covariance_modifier autoware_pose_estimator_arbiter autoware_pose_instability_detector yabloc_common yabloc_image_processing yabloc_monitor yabloc_particle_filter yabloc_pose_initializer autoware_map_tf_generator autoware_bevfusion autoware_bytetrack autoware_cluster_merger autoware_compare_map_segmentation autoware_crosswalk_traffic_light_estimator autoware_detected_object_feature_remover autoware_detected_object_validation autoware_detection_by_tracker autoware_elevation_map_loader autoware_euclidean_cluster autoware_ground_segmentation autoware_image_projection_based_fusion autoware_lidar_apollo_instance_segmentation autoware_lidar_centerpoint autoware_lidar_transfusion autoware_map_based_prediction autoware_multi_object_tracker autoware_object_merger autoware_object_range_splitter autoware_object_sorter autoware_object_velocity_splitter autoware_occupancy_grid_map_outlier_filter autoware_probabilistic_occupancy_grid_map autoware_radar_fusion_to_detected_object autoware_radar_object_tracker autoware_radar_tracks_msgs_converter autoware_raindrop_cluster_filter autoware_shape_estimation autoware_simpl_prediction autoware_simple_object_merger autoware_tensorrt_bevdet autoware_tensorrt_classifier autoware_tensorrt_common autoware_tensorrt_plugins autoware_tensorrt_yolox autoware_tracking_object_merger autoware_traffic_light_arbiter autoware_traffic_light_category_merger autoware_traffic_light_classifier autoware_traffic_light_fine_detector autoware_traffic_light_map_based_detector autoware_traffic_light_multi_camera_fusion autoware_traffic_light_occlusion_predictor autoware_traffic_light_selector autoware_traffic_light_visualization perception_utils autoware_costmap_generator autoware_diffusion_planner 
autoware_external_velocity_limit_selector autoware_freespace_planner autoware_freespace_planning_algorithms autoware_hazard_lights_selector autoware_mission_planner_universe autoware_path_optimizer autoware_path_smoother autoware_remaining_distance_time_calculator autoware_rtc_interface autoware_scenario_selector autoware_surround_obstacle_checker autoware_behavior_path_avoidance_by_lane_change_module autoware_behavior_path_bidirectional_traffic_module autoware_behavior_path_dynamic_obstacle_avoidance_module autoware_behavior_path_external_request_lane_change_module autoware_behavior_path_goal_planner_module autoware_behavior_path_lane_change_module autoware_behavior_path_planner autoware_behavior_path_planner_common autoware_behavior_path_sampling_planner_module autoware_behavior_path_side_shift_module autoware_behavior_path_start_planner_module autoware_behavior_path_static_obstacle_avoidance_module autoware_behavior_velocity_blind_spot_module autoware_behavior_velocity_crosswalk_module autoware_behavior_velocity_detection_area_module autoware_behavior_velocity_intersection_module autoware_behavior_velocity_no_drivable_lane_module autoware_behavior_velocity_no_stopping_area_module autoware_behavior_velocity_occlusion_spot_module autoware_behavior_velocity_rtc_interface autoware_behavior_velocity_run_out_module autoware_behavior_velocity_speed_bump_module autoware_behavior_velocity_template_module autoware_behavior_velocity_traffic_light_module autoware_behavior_velocity_virtual_traffic_light_module autoware_behavior_velocity_walkway_module autoware_motion_velocity_boundary_departure_prevention_module autoware_motion_velocity_dynamic_obstacle_stop_module autoware_motion_velocity_obstacle_cruise_module autoware_motion_velocity_obstacle_slow_down_module autoware_motion_velocity_obstacle_velocity_limiter_module autoware_motion_velocity_out_of_lane_module autoware_motion_velocity_road_user_stop_module autoware_motion_velocity_run_out_module autoware_planning_validator autoware_planning_validator_intersection_collision_checker autoware_planning_validator_latency_checker autoware_planning_validator_rear_collision_checker autoware_planning_validator_test_utils autoware_planning_validator_trajectory_checker autoware_bezier_sampler autoware_frenet_planner autoware_path_sampler autoware_sampler_common autoware_cuda_pointcloud_preprocessor autoware_cuda_utils autoware_image_diagnostics autoware_image_transport_decompressor autoware_imu_corrector autoware_pcl_extensions autoware_pointcloud_preprocessor autoware_radar_objects_adapter autoware_radar_scan_to_pointcloud2 autoware_radar_static_pointcloud_filter autoware_radar_threshold_filter autoware_radar_tracks_noise_filter autoware_livox_tag_filter autoware_carla_interface autoware_dummy_perception_publisher autoware_fault_injection autoware_learning_based_vehicle_model autoware_simple_planning_simulator autoware_vehicle_door_simulator tier4_dummy_object_rviz_plugin autoware_bluetooth_monitor autoware_command_mode_decider autoware_command_mode_decider_plugins autoware_command_mode_switcher autoware_command_mode_switcher_plugins autoware_command_mode_types autoware_component_monitor autoware_component_state_monitor autoware_adapi_visualizers autoware_automatic_pose_initializer autoware_default_adapi_universe autoware_diagnostic_graph_aggregator autoware_diagnostic_graph_utils autoware_dummy_diag_publisher autoware_dummy_infrastructure autoware_duplicated_node_checker autoware_hazard_status_converter autoware_mrm_comfortable_stop_operator 
autoware_mrm_emergency_stop_operator autoware_mrm_handler autoware_pipeline_latency_monitor autoware_processing_time_checker autoware_system_monitor autoware_topic_relay_controller autoware_topic_state_monitor autoware_velodyne_monitor reaction_analyzer autoware_accel_brake_map_calibrator autoware_external_cmd_converter autoware_raw_vehicle_cmd_converter autoware_steer_offset_estimator autoware_bag_time_manager_rviz_plugin autoware_traffic_light_rviz_plugin tier4_adapi_rviz_plugin tier4_camera_view_rviz_plugin tier4_control_mode_rviz_plugin tier4_datetime_rviz_plugin tier4_perception_rviz_plugin tier4_planning_factor_rviz_plugin tier4_state_rviz_plugin tier4_system_rviz_plugin tier4_traffic_light_rviz_plugin tier4_vehicle_rviz_plugin

ROS Distro
github

Package Summary

Tags No category tags.
Version 0.47.0
License Apache License 2.0
Build type AMENT_CMAKE
Use RECOMMENDED

Repository Summary

Description
Checkout URI https://github.com/autowarefoundation/autoware_universe.git
VCS Type git
VCS Version main
Last Updated 2025-08-16
Dev Status UNKNOWN
Released UNRELEASED
Tags planner ros calibration self-driving-car autonomous-driving autonomous-vehicles ros2 3d-map autoware
Contributing Help Wanted (-)
Good First Issues (-)
Pull Requests to Review (-)

Package Description

The autoware_image_projection_based_fusion package

Additional Links

No additional links.

Maintainers

  • Yukihiro Saito
  • Yoshi Ri
  • Dai Nguyen
  • Kotaro Uetake
  • Tao Zhong
  • Taekjin Lee

Authors

No additional authors.

autoware_image_projection_based_fusion

Purpose

The autoware_image_projection_based_fusion package is designed to enhance obstacle detection accuracy by integrating information from both image-based and LiDAR-based perception. It fuses detected obstacles — such as bounding boxes or segmentation — from 2D images with 3D point clouds or other obstacle representations, including bounding boxes, clusters, or segmentation. This fusion helps to refine obstacle classification and detection in autonomous driving applications.

Fusion algorithms

The package provides multiple fusion algorithms, each designed for specific use cases. Below are the different fusion methods along with their descriptions and detailed documentation links:

Fusion Name Description Detail
roi_cluster_fusion Assigns classification labels to LiDAR-detected clusters by matching them with Regions of Interest (ROIs) from a 2D object detector. link
roi_detected_object_fusion Updates classification labels of detected objects using ROI information from a 2D object detector. link
pointpainting_fusion Augments the point cloud by painting each point with additional information from ROIs of a 2D object detector. The enriched point cloud is then processed by a 3D object detector for improved accuracy. link
roi_pointcloud_fusion Matching pointcloud with ROIs from a 2D object detector to detect unknown-labeled objects. link
segmentation_pointcloud_fusion Filtering pointcloud that are belong to less interesting region which is defined by semantic or instance segmentation by 2D image segmentation. link

Inner Workings / Algorithms

fusion_algorithm

The fusion process operates on two primary types of input data:

  • Msg3d: This includes 3D data such as point clouds, bounding boxes, or clusters from LiDAR.
  • RoIs (Regions of Interest): These are 2D detections or proposals from camera-based perception modules, such as object detection bounding boxes.

Both inputs come with timestamps, which are crucial for synchronization and fusion. Since sensors operate at different frequencies and may experience network delays, a systematic approach is needed to handle their arrival, align their timestamps, and ensure reliable fusion.

The following steps describe how the node processes these inputs, synchronizes them, and performs multi-sensor fusion.

Step 1: Matching and Creating a Collector

When a Msg3d or a set of RoIs arrives, its timestamp is checked, and an offset is subtracted to determine the reference timestamp. The node then searches for an existing collector with the same reference timestamp.

  • If a matching collector is found, the incoming data is added to it.
  • If no matching collector exists, a new collector is created and initialized with the reference timestamp.

Step 2: Triggering the Timer

Once a collector is created, a countdown timer is started. The timeout duration depends on which message type arrived first and is defined by either msg3d_timeout_sec for msg3d or rois_timeout_sec for RoIs.

The collector will attempt to fuse the collected 3D and 2D data either:

  • When both Msg3d and RoI data are available, or
  • When the timer expires.

If no Msg3d is received before the timer expires, the collector will discard the data without performing fusion.

Step 3: Fusion Process

The fusion process consists of three main stages:

  1. Preprocessing – Preparing the input data for fusion.
  2. Fusion – Aligning and merging RoIs with the 3D point cloud.
  3. Postprocessing – Refining the fused output based on the algorithm’s requirements.

The specific operations performed during these stages may vary depending on the type of fusion being applied.

Step 4: Publishing the Fused Result

After the fusion process is completed, the fused output is published. The collector is then reset to an idle state, ready to process the next incoming message.

The figure below shows how the input data is fused in different scenarios. roi_sync_image2

Parameters

All of the fusion nodes have the common parameters described in the following

{{ json_to_markdown(“perception/autoware_image_projection_based_fusion/schema/fusion_common.schema.json”) }}

Parameter Settings

Timeout

The order in which RoIs or the msg3d message arrives at the fusion node depends on your system and sensor configuration. Since the primary goal is to fuse 2D RoIs with msg3d data, msg3d is essential for processing.

If RoIs arrive earlier, they must wait until msg3d is received. You can adjust the waiting time using the rois_timeout_sec parameter.

If msg3d arrives first, the fusion process should proceed as quickly as possible, so the waiting time for msg3d (msg3d_timeout_sec) should be kept minimal.

RoIs Offsets

The offset between each camera and the LiDAR is determined by their shutter timing. To ensure accurate fusion, users must understand the timing offset between the RoIs and msg3d. Once this offset is known, it should be specified in the parameter rois_timestamp_offsets.

In the figure below, the LiDAR completes a full scan from the rear in 100 milliseconds. When the LiDAR scan reaches the area where the camera is facing, the camera is triggered, capturing an image with a corresponding timestamp. The rois_timestamp_offsets can then be calculated by subtracting the LiDAR header timestamp from the camera header timestamp. As a result, the rois_timestamp_offsets would be [0.059, 0.010, 0.026, 0.042, 0.076, 0.093].

lidar_camera_sync

To check the header timestamp of the msg3d and RoIs, user can easily run

ros2 echo [topic] --header field

Matching Strategies

We provide two matching strategies for different scenarios:

File truncated at 100 lines see the full file

CHANGELOG

Changelog for package autoware_image_projection_based_fusion

0.47.0 (2025-08-11)

  • chore(image_projection_based_fusion): add initializing status log (#11112)

    • chore(image_projection_based_fusion): add initializing status log

    * chore: change to warning ---------

  • style(pre-commit): update to clang-format-20 (#11088) Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>

  • fix(roi_cluster_fusion): fix bug in debug mode (#11054)

    • fix(roi_cluster_fusion): fix bug in debug mode
    • chore: refactor
    • chore: docs

    * fix debug iou ---------

  • fix(tier4_perception_launch): add one more camera fusion (#10973)

    • fix(tier4_perception_launch): add one more camera fusion
    • fix: missing launch
    • feat(detection.launch): add support for additional camera inputs (camera8)

    * fix: missing launch param ---------Co-authored-by: Taekjin LEE <<taekjin.lee@tier4.jp>>

  • fix(image_projection_based_fusion): loosen rois_number check (#10924)

  • feat(autoware_lidar_centerpoint): add class-wise confidence thresholds to CenterPoint (#10881)

    • Add PreprocessCuda to CenterPoint
    • style(pre-commit): autofix
    • style(pre-commit): autofix
    • Add intensity preprocessing
    • style(pre-commit): autofix
    • Fix config_.point_feature_size_ typo
    • style(pre-commit): autofix
    • Fix point typo
    • style(pre-commit): autofix
    • Change score_threshold to score_thresholds
    • Use <autoware/cuda_utils/cuda_utils.hpp> for clear_async
    • Rename pre_ptr_ to pre_proc_ptr_
    • Remove unused getCacheSize() and getIdx
    • Use template in generateVoxels_random_kernel instead
    • style(pre-commit): autofix
    • Remove references in generateVoxels_random_kernel
    • Remove references in generateVoxels_random_kernel
    • style(pre-commit): autofix
    • Remove generateIntensityFeatures_kernel and add the case of 11 to ENCODER_IN_FEATURE_SIZE for generateFeatures_kernel
    • style(pre-commit): autofix
    • Add class-wise confidence thresholds to CenterPoint
    • style(pre-commit): autofix
    • Remov empty line changes
    • Update score_threshold to score_thresholds in REAMME
    • style(pre-commit): autofix
    • Change score_thresholds from pass by value to pass by reference
    • style(pre-commit): autofix
    • Add information about class names in scehema
    • Change vector<double> to vector<float>
    • Remove thrust and add stream_ to PostProcessCUDA
    • style(pre-commit): autofix
    • Fix incorrect initialization of score_thresholds_ vector
    • Fix postprocess CudaMemCpy error
    • Fix postprocess score_thresholds_d_ptr_ typing error
    • Fix score_thresholds typing in node.cpp
    • Static casting params.score_thresholds vector
    • style(pre-commit): autofix
    • Update perception/autoware_lidar_centerpoint/src/node.cpp
    • Update perception/autoware_lidar_centerpoint/include/autoware/lidar_centerpoint/centerpoint_config.hpp
    • Update centerpoint_config.hpp
    • Update node.cpp
    • Update score_thresholds_ to double since ros2 supports only double instead of float
    • style(pre-commit): autofix
    • Fix cuda memory and revert double score_thresholds_ to float score_thresholds_

    * style(pre-commit): autofix ---------Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com> Co-authored-by: Taekjin LEE <<technolojin@gmail.com>>

  • Contributors: Kok Seang Tan, Mete Fatih Cırıt, badai nguyen

File truncated at 100 lines see the full file

Launch files

  • launch/pointpainting_fusion.launch.xml
      • input/rois_number [default: 6]
      • input/rois0 [default: rois0]
      • input/camera_info0 [default: /camera_info0]
      • input/rois1 [default: rois1]
      • input/camera_info1 [default: /camera_info1]
      • input/rois2 [default: rois2]
      • input/camera_info2 [default: /camera_info2]
      • input/rois3 [default: rois3]
      • input/camera_info3 [default: /camera_info3]
      • input/rois4 [default: rois4]
      • input/camera_info4 [default: /camera_info4]
      • input/rois5 [default: rois5]
      • input/camera_info5 [default: /camera_info5]
      • input/rois6 [default: rois6]
      • input/camera_info6 [default: /camera_info6]
      • input/rois7 [default: rois7]
      • input/camera_info7 [default: /camera_info7]
      • input/rois8 [default: rois8]
      • input/camera_info8 [default: /camera_info8]
      • input/pointcloud [default: /sensing/lidar/top/rectified/pointcloud]
      • output/objects [default: objects]
      • data_path [default: $(env HOME)/autoware_data]
      • model_name [default: pointpainting]
      • model_path [default: $(var data_path)/image_projection_based_fusion]
      • model_param_path [default: $(find-pkg-share autoware_image_projection_based_fusion)/config/pointpainting.param.yaml]
      • ml_package_param_path [default: $(var model_path)/$(var model_name)_ml_package.param.yaml]
      • class_remapper_param_path [default: $(var model_path)/detection_class_remapper.param.yaml]
      • sync_param_path [default: $(find-pkg-share autoware_image_projection_based_fusion)/config/fusion_common.param.yaml]
      • common_param_path [default: $(find-pkg-share autoware_image_projection_based_fusion)/config/pointpainting_common.param.yaml]
      • build_only [default: false]
      • use_pointcloud_container [default: false]
      • pointcloud_container_name [default: pointcloud_container]
      • input_rois_number [default: $(var input/rois_number)]
      • input/image0 [default: /image_raw0]
      • input/image1 [default: /image_raw1]
      • input/image2 [default: /image_raw2]
      • input/image3 [default: /image_raw3]
      • input/image4 [default: /image_raw4]
      • input/image5 [default: /image_raw5]
      • input/image6 [default: /image_raw6]
      • input/image7 [default: /image_raw7]
      • input/image8 [default: /image_raw8]
  • launch/roi_cluster_fusion.launch.xml
      • input/rois_number [default: 6]
      • input/rois0 [default: rois0]
      • input/camera_info0 [default: /camera_info0]
      • input/rois1 [default: rois1]
      • input/camera_info1 [default: /camera_info1]
      • input/rois2 [default: rois2]
      • input/camera_info2 [default: /camera_info2]
      • input/rois3 [default: rois3]
      • input/camera_info3 [default: /camera_info3]
      • input/rois4 [default: rois4]
      • input/camera_info4 [default: /camera_info4]
      • input/rois5 [default: rois5]
      • input/camera_info5 [default: /camera_info5]
      • input/rois6 [default: rois6]
      • input/camera_info6 [default: /camera_info6]
      • input/rois7 [default: rois7]
      • input/camera_info7 [default: /camera_info7]
      • input/rois8 [default: rois8]
      • input/camera_info8 [default: /camera_info8]
      • input/clusters [default: clusters]
      • output/clusters [default: labeled_clusters]
      • param_path [default: $(find-pkg-share autoware_image_projection_based_fusion)/config/roi_cluster_fusion.param.yaml]
      • sync_param_path [default: $(find-pkg-share autoware_image_projection_based_fusion)/config/fusion_common.param.yaml]
      • input/image0 [default: /image_raw0]
      • input/image1 [default: /image_raw1]
      • input/image2 [default: /image_raw2]
      • input/image3 [default: /image_raw3]
      • input/image4 [default: /image_raw4]
      • input/image5 [default: /image_raw5]
      • input/image6 [default: /image_raw6]
      • input/image7 [default: /image_raw7]
      • input/image8 [default: /image_raw8]
  • launch/roi_detected_object_fusion.launch.xml
      • input/rois_number [default: 6]
      • input/rois0 [default: rois0]
      • input/camera_info0 [default: /camera_info0]
      • input/rois1 [default: rois1]
      • input/camera_info1 [default: /camera_info1]
      • input/rois2 [default: rois2]
      • input/camera_info2 [default: /camera_info2]
      • input/rois3 [default: rois3]
      • input/camera_info3 [default: /camera_info3]
      • input/rois4 [default: rois4]
      • input/camera_info4 [default: /camera_info4]
      • input/rois5 [default: rois5]
      • input/camera_info5 [default: /camera_info5]
      • input/rois6 [default: rois6]
      • input/camera_info6 [default: /camera_info6]
      • input/rois7 [default: rois7]
      • input/camera_info7 [default: /camera_info7]
      • input/rois8 [default: rois8]
      • input/camera_info8 [default: /camera_info8]
      • input/objects [default: objects]
      • output/objects [default: fused_objects]
      • param_path [default: $(find-pkg-share autoware_image_projection_based_fusion)/config/roi_detected_object_fusion.param.yaml]
      • sync_param_path [default: $(find-pkg-share autoware_image_projection_based_fusion)/config/fusion_common.param.yaml]
      • input_rois_number [default: $(var input/rois_number)]
      • input/image0 [default: /image_raw0]
      • input/image1 [default: /image_raw1]
      • input/image2 [default: /image_raw2]
      • input/image3 [default: /image_raw3]
      • input/image4 [default: /image_raw4]
      • input/image5 [default: /image_raw5]
      • input/image6 [default: /image_raw6]
      • input/image7 [default: /image_raw7]
      • input/image8 [default: /image_raw8]
  • launch/roi_pointcloud_fusion.launch.xml
      • pointcloud_container_name [default: pointcloud_container]
      • input/rois_number [default: 6]
      • input/rois0 [default: rois0]
      • input/camera_info0 [default: /camera_info0]
      • input/rois1 [default: rois1]
      • input/camera_info1 [default: /camera_info1]
      • input/rois2 [default: rois2]
      • input/camera_info2 [default: /camera_info2]
      • input/rois3 [default: rois3]
      • input/camera_info3 [default: /camera_info3]
      • input/rois4 [default: rois4]
      • input/camera_info4 [default: /camera_info4]
      • input/rois5 [default: rois5]
      • input/camera_info5 [default: /camera_info5]
      • input/rois6 [default: rois6]
      • input/camera_info6 [default: /camera_info6]
      • input/rois7 [default: rois7]
      • input/camera_info7 [default: /camera_info7]
      • input/rois8 [default: rois8]
      • input/camera_info8 [default: /camera_info8]
      • input/pointcloud [default: /perception/object_recognition/detection/pointcloud_map_filtered/pointcloud]
      • output/clusters [default: output/clusters]
      • debug/clusters [default: roi_pointcloud_fusion/debug/clusters]
      • param_path [default: $(find-pkg-share autoware_image_projection_based_fusion)/config/roi_pointcloud_fusion.param.yaml]
      • sync_param_path [default: $(find-pkg-share autoware_image_projection_based_fusion)/config/fusion_common.param.yaml]
      • input_rois_number [default: $(var input/rois_number)]
      • input/image0 [default: /image_raw0]
      • input/image1 [default: /image_raw1]
      • input/image2 [default: /image_raw2]
      • input/image3 [default: /image_raw3]
      • input/image4 [default: /image_raw4]
      • input/image5 [default: /image_raw5]
      • input/image6 [default: /image_raw6]
      • input/image7 [default: /image_raw7]
      • input/image8 [default: /image_raw8]
  • launch/segmentation_pointcloud_fusion.launch.xml
      • input/camera_number [default: 1]
      • input/mask0 [default: /perception/object_recognition/detection/mask0]
      • input/camera_info0 [default: /sensing/camera/camera0/camera_info]
      • input/mask1 [default: /perception/object_recognition/detection/mask1]
      • input/camera_info1 [default: /sensing/camera/camera1/camera_info]
      • input/mask2 [default: /perception/object_recognition/detection/mask2]
      • input/camera_info2 [default: /sensing/camera/camera2/camera_info]
      • input/mask3 [default: /perception/object_recognition/detection/mask3]
      • input/camera_info3 [default: /sensing/camera/camera3/camera_info]
      • input/mask4 [default: /perception/object_recognition/detection/mask4]
      • input/camera_info4 [default: /sensing/camera/camera4/camera_info]
      • input/mask5 [default: /perception/object_recognition/detection/mask5]
      • input/camera_info5 [default: /sensing/camera/camera5/camera_info]
      • input/mask6 [default: /perception/object_recognition/detection/mask6]
      • input/camera_info6 [default: /sensing/camera/camera6/camera_info]
      • input/mask7 [default: /perception/object_recognition/detection/mask7]
      • input/camera_info7 [default: /sensing/camera/camera7/camera_info]
      • input/mask8 [default: /perception/object_recognition/detection/mask8]
      • input/camera_info8 [default: /sensing/camera/camera8/camera_info]
      • input/pointcloud [default: /sensing/lidar/top/outlier_filtered/pointcloud]
      • output/pointcloud [default: output/pointcloud]
      • sync_param_path [default: $(find-pkg-share autoware_image_projection_based_fusion)/config/fusion_common.param.yaml]
      • semantic_segmentation_based_filter_param_path [default: $(find-pkg-share autoware_image_projection_based_fusion)/config/segmentation_pointcloud_fusion.param.yaml]
      • input/image0 [default: /image_raw0]
      • input/image1 [default: /image_raw1]
      • input/image2 [default: /image_raw2]
      • input/image3 [default: /image_raw3]
      • input/image4 [default: /image_raw4]
      • input/image5 [default: /image_raw5]
      • input/image6 [default: /image_raw6]
      • input/image7 [default: /image_raw7]
      • input/image8 [default: /image_raw8]

Messages

No message files found.

Services

No service files found

Plugins

No plugins found.

Recent questions tagged autoware_image_projection_based_fusion at Robotics Stack Exchange

Package symbol

autoware_image_projection_based_fusion package from autoware_universe repo


Package Summary

Tags: No category tags.
Version: 0.47.0
License: Apache License 2.0
Build type: AMENT_CMAKE
Use: RECOMMENDED

Repository Summary

Checkout URI: https://github.com/autowarefoundation/autoware_universe.git
VCS Type: git
VCS Version: main
Last Updated: 2025-08-16
Dev Status: UNKNOWN
Released: UNRELEASED
Tags: planner ros calibration self-driving-car autonomous-driving autonomous-vehicles ros2 3d-map autoware

Package Description

The autoware_image_projection_based_fusion package

Additional Links

No additional links.

Maintainers

  • Yukihiro Saito
  • Yoshi Ri
  • Dai Nguyen
  • Kotaro Uetake
  • Tao Zhong
  • Taekjin Lee

Authors

No additional authors.

autoware_image_projection_based_fusion

Purpose

The autoware_image_projection_based_fusion package enhances obstacle detection accuracy by integrating image-based and LiDAR-based perception. It fuses 2D image detections, such as bounding boxes or segmentation masks, with 3D obstacle representations such as point clouds, clusters, or bounding boxes. This fusion helps refine obstacle classification and detection in autonomous driving applications.

Fusion algorithms

The package provides multiple fusion algorithms, each designed for specific use cases. Below are the different fusion methods along with their descriptions and detailed documentation links:

Fusion Name Description Detail
roi_cluster_fusion Assigns classification labels to LiDAR-detected clusters by matching them with Regions of Interest (ROIs) from a 2D object detector. link
roi_detected_object_fusion Updates classification labels of detected objects using ROI information from a 2D object detector. link
pointpainting_fusion Augments the point cloud by painting each point with additional information from ROIs of a 2D object detector. The enriched point cloud is then processed by a 3D object detector for improved accuracy. link
roi_pointcloud_fusion Matches the point cloud against ROIs from a 2D object detector to detect unknown-labeled objects. link
segmentation_pointcloud_fusion Filters out points that belong to less relevant regions, as defined by semantic or instance masks from a 2D image segmentation model. link

Inner Workings / Algorithms

[Figure: fusion_algorithm]

The fusion process operates on two primary types of input data:

  • Msg3d: This includes 3D data such as point clouds, bounding boxes, or clusters from LiDAR.
  • RoIs (Regions of Interest): These are 2D detections or proposals from camera-based perception modules, such as object detection bounding boxes.

Both inputs come with timestamps, which are crucial for synchronization and fusion. Since sensors operate at different frequencies and may experience network delays, a systematic approach is needed to handle their arrival, align their timestamps, and ensure reliable fusion.

The following steps describe how the node processes these inputs, synchronizes them, and performs multi-sensor fusion.

Step 1: Matching and Creating a Collector

When a Msg3d or a set of RoIs arrives, its timestamp is checked, and an offset is subtracted to determine the reference timestamp. The node then searches for an existing collector with the same reference timestamp, as sketched in the example after the list below.

  • If a matching collector is found, the incoming data is added to it.
  • If no matching collector exists, a new collector is created and initialized with the reference timestamp.
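
To make this concrete, here is a minimal sketch of the timestamp-keyed lookup in C++. The Collector struct, the find_or_create_collector function, and the 1 ms tolerance are illustrative assumptions, not the package's actual implementation.

#include <cmath>
#include <cstdio>
#include <list>
#include <memory>

// Hypothetical collector accumulating one msg3d and several RoI sets that
// share a reference timestamp (names are illustrative, not the real API).
struct Collector
{
  double reference_timestamp;  // seconds
};

std::list<std::shared_ptr<Collector>> collectors;

// Subtract the per-source offset from the header stamp to get the reference
// timestamp, then reuse a collector matching within a small tolerance, or
// create and register a new one.
std::shared_ptr<Collector> find_or_create_collector(
  double header_stamp, double offset, double tolerance = 1e-3)
{
  const double reference = header_stamp - offset;
  for (const auto & c : collectors) {
    if (std::fabs(c->reference_timestamp - reference) < tolerance) {
      return c;  // matching collector found: incoming data joins it
    }
  }
  auto created = std::make_shared<Collector>(Collector{reference});
  collectors.push_back(created);  // no match: new collector is initialized
  return created;
}

int main()
{
  // A msg3d stamped at t = 10.000 s (offset 0) and an RoI set stamped at
  // t = 10.059 s with offset 0.059 s resolve to the same collector.
  auto a = find_or_create_collector(10.000, 0.0);
  auto b = find_or_create_collector(10.059, 0.059);
  std::printf("same collector: %s\n", (a == b) ? "yes" : "no");
  return 0;
}

Because each arrival subtracts its own offset first, a msg3d and an RoI set captured at different wall-clock times can still land in the same collector.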

Step 2: Triggering the Timer

Once a collector is created, a countdown timer is started. The timeout duration depends on which message type arrived first and is defined by either msg3d_timeout_sec for msg3d or rois_timeout_sec for RoIs.

The collector will attempt to fuse the collected 3D and 2D data either:

  • When both Msg3d and RoI data are available, or
  • When the timer expires.

If no Msg3d is received before the timer expires, the collector will discard the data without performing fusion.
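
In code form, this trigger logic amounts to a pair of small predicates. The sketch below reuses the documented parameter names msg3d_timeout_sec and rois_timeout_sec; the function names and flags are hypothetical.

// Pick the countdown duration based on which message type created the
// collector (parameter names are taken from this documentation).
double select_timeout_sec(
  bool created_by_msg3d, double msg3d_timeout_sec, double rois_timeout_sec)
{
  return created_by_msg3d ? msg3d_timeout_sec : rois_timeout_sec;
}

// Fusion runs when the set is complete, or when the timer fires with msg3d
// present; RoIs without msg3d are discarded (illustrative logic only).
bool should_fuse(bool has_msg3d, bool has_all_rois, bool timer_expired)
{
  if (has_msg3d && has_all_rois) return true;   // complete set: fuse now
  if (timer_expired && has_msg3d) return true;  // timed out: fuse what we have
  return false;  // still waiting, or timed out without msg3d (discard)
}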

Step 3: Fusion Process

The fusion process consists of three main stages:

  1. Preprocessing – Preparing the input data for fusion.
  2. Fusion – Aligning and merging RoIs with the 3D point cloud.
  3. Postprocessing – Refining the fused output based on the algorithm’s requirements.

The specific operations performed during these stages may vary depending on the type of fusion being applied.
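
Viewed as code, this is a fixed three-stage template whose stages each algorithm overrides. A hedged C++ sketch with hypothetical class names, not the package's real class hierarchy:

// Each concrete fusion method (e.g. roi_cluster_fusion) supplies its own
// stage implementations behind a shared skeleton (illustrative only).
class FusionPipeline
{
public:
  virtual ~FusionPipeline() = default;

  void run()
  {
    preprocess();   // 1. prepare the input data for fusion
    fuse();         // 2. align and merge RoIs with the 3D data
    postprocess();  // 3. refine the fused output as the algorithm requires
  }

protected:
  virtual void preprocess() = 0;
  virtual void fuse() = 0;
  virtual void postprocess() = 0;
};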

Step 4: Publishing the Fused Result

After the fusion process is completed, the fused output is published. The collector is then reset to an idle state, ready to process the next incoming message.

The figure below shows how the input data is fused in different scenarios. [Figure: roi_sync_image2]

Parameters

All fusion nodes share the common parameters described below:

{{ json_to_markdown("perception/autoware_image_projection_based_fusion/schema/fusion_common.schema.json") }}

Parameter Settings

Timeout

The order in which the RoIs and msg3d messages arrive at the fusion node depends on your system and sensor configuration. Since the primary goal is to fuse 2D RoIs with msg3d data, msg3d is essential for processing.

If RoIs arrive earlier, they must wait until msg3d is received. You can adjust the waiting time using the rois_timeout_sec parameter.

If msg3d arrives first, the fusion process should proceed as quickly as possible, so the waiting time for msg3d (msg3d_timeout_sec) should be kept minimal.
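
Both timeouts live in the common synchronization parameter file. Assuming a standard setup, you could point a node at a tuned copy via the sync_param_path argument that the launch files listed later in this page expose; the path here is a placeholder:

ros2 launch autoware_image_projection_based_fusion roi_cluster_fusion.launch.xml sync_param_path:=/path/to/tuned_fusion_common.param.yaml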

RoIs Offsets

The offset between each camera and the LiDAR is determined by their shutter timing. To ensure accurate fusion, users must understand the timing offset between the RoIs and msg3d. Once this offset is known, it should be specified in the parameter rois_timestamp_offsets.

In the figure below, the LiDAR completes a full scan from the rear in 100 milliseconds. When the LiDAR scan reaches the area where the camera is facing, the camera is triggered, capturing an image with a corresponding timestamp. The rois_timestamp_offsets can then be calculated by subtracting the LiDAR header timestamp from the camera header timestamp. As a result, the rois_timestamp_offsets would be [0.059, 0.010, 0.026, 0.042, 0.076, 0.093].

[Figure: lidar_camera_sync]
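
The arithmetic itself is a per-camera subtraction. The sketch below reproduces the offsets quoted above from illustrative header stamps (the values are chosen to match the figure, not measured data):

#include <cstdio>
#include <vector>

int main()
{
  // LiDAR header timestamp of the scan, taken here as t = 0.000 s.
  const double lidar_stamp = 0.000;
  // Illustrative camera header timestamps for the six cameras.
  const std::vector<double> camera_stamps = {
    0.059, 0.010, 0.026, 0.042, 0.076, 0.093};

  // rois_timestamp_offsets[i] = camera_stamp[i] - lidar_stamp
  for (const double camera_stamp : camera_stamps) {
    std::printf("%.3f ", camera_stamp - lidar_stamp);
  }
  std::printf("\n");  // prints: 0.059 0.010 0.026 0.042 0.076 0.093
  return 0;
}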

To check the header timestamps of the msg3d and RoIs topics, you can run:

ros2 topic echo [topic] --field header

Matching Strategies

We provide two matching strategies for different scenarios:

File truncated at 100 lines; see the full file.

CHANGELOG

Changelog for package autoware_image_projection_based_fusion

0.47.0 (2025-08-11)

  • chore(image_projection_based_fusion): add initializing status log (#11112)

    • chore(image_projection_based_fusion): add initializing status log

    • chore: change to warning

  • style(pre-commit): update to clang-format-20 (#11088) Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>

  • fix(roi_cluster_fusion): fix bug in debug mode (#11054)

    • fix(roi_cluster_fusion): fix bug in debug mode
    • chore: refactor
    • chore: docs

    • fix debug iou

  • fix(tier4_perception_launch): add one more camera fusion (#10973)

    • fix(tier4_perception_launch): add one more camera fusion
    • fix: missing launch
    • feat(detection.launch): add support for additional camera inputs (camera8)

    • fix: missing launch param (Co-authored-by: Taekjin LEE <taekjin.lee@tier4.jp>)

  • fix(image_projection_based_fusion): loosen rois_number check (#10924)

  • feat(autoware_lidar_centerpoint): add class-wise confidence thresholds to CenterPoint (#10881)

    • Add PreprocessCuda to CenterPoint
    • style(pre-commit): autofix
    • style(pre-commit): autofix
    • Add intensity preprocessing
    • style(pre-commit): autofix
    • Fix config_.point_feature_size_ typo
    • style(pre-commit): autofix
    • Fix point typo
    • style(pre-commit): autofix
    • Change score_threshold to score_thresholds
    • Use <autoware/cuda_utils/cuda_utils.hpp> for clear_async
    • Rename pre_ptr_ to pre_proc_ptr_
    • Remove unused getCacheSize() and getIdx
    • Use template in generateVoxels_random_kernel instead
    • style(pre-commit): autofix
    • Remove references in generateVoxels_random_kernel
    • Remove references in generateVoxels_random_kernel
    • style(pre-commit): autofix
    • Remove generateIntensityFeatures_kernel and add the case of 11 to ENCODER_IN_FEATURE_SIZE for generateFeatures_kernel
    • style(pre-commit): autofix
    • Add class-wise confidence thresholds to CenterPoint
    • style(pre-commit): autofix
    • Remov empty line changes
    • Update score_threshold to score_thresholds in REAMME
    • style(pre-commit): autofix
    • Change score_thresholds from pass by value to pass by reference
    • style(pre-commit): autofix
    • Add information about class names in scehema
    • Change vector<double> to vector<float>
    • Remove thrust and add stream_ to PostProcessCUDA
    • style(pre-commit): autofix
    • Fix incorrect initialization of score_thresholds_ vector
    • Fix postprocess CudaMemCpy error
    • Fix postprocess score_thresholds_d_ptr_ typing error
    • Fix score_thresholds typing in node.cpp
    • Static casting params.score_thresholds vector
    • style(pre-commit): autofix
    • Update perception/autoware_lidar_centerpoint/src/node.cpp
    • Update perception/autoware_lidar_centerpoint/include/autoware/lidar_centerpoint/centerpoint_config.hpp
    • Update centerpoint_config.hpp
    • Update node.cpp
    • Update score_thresholds_ to double since ros2 supports only double instead of float
    • style(pre-commit): autofix
    • Fix cuda memory and revert double score_thresholds_ to float score_thresholds_

    • style(pre-commit): autofix (Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>, Taekjin LEE <technolojin@gmail.com>)

  • Contributors: Kok Seang Tan, Mete Fatih Cırıt, badai nguyen

File truncated at 100 lines; see the full file.

Launch files

  • launch/pointpainting_fusion.launch.xml
      • input/rois_number [default: 6]
      • input/rois0 [default: rois0]
      • input/camera_info0 [default: /camera_info0]
      • input/rois1 [default: rois1]
      • input/camera_info1 [default: /camera_info1]
      • input/rois2 [default: rois2]
      • input/camera_info2 [default: /camera_info2]
      • input/rois3 [default: rois3]
      • input/camera_info3 [default: /camera_info3]
      • input/rois4 [default: rois4]
      • input/camera_info4 [default: /camera_info4]
      • input/rois5 [default: rois5]
      • input/camera_info5 [default: /camera_info5]
      • input/rois6 [default: rois6]
      • input/camera_info6 [default: /camera_info6]
      • input/rois7 [default: rois7]
      • input/camera_info7 [default: /camera_info7]
      • input/rois8 [default: rois8]
      • input/camera_info8 [default: /camera_info8]
      • input/pointcloud [default: /sensing/lidar/top/rectified/pointcloud]
      • output/objects [default: objects]
      • data_path [default: $(env HOME)/autoware_data]
      • model_name [default: pointpainting]
      • model_path [default: $(var data_path)/image_projection_based_fusion]
      • model_param_path [default: $(find-pkg-share autoware_image_projection_based_fusion)/config/pointpainting.param.yaml]
      • ml_package_param_path [default: $(var model_path)/$(var model_name)_ml_package.param.yaml]
      • class_remapper_param_path [default: $(var model_path)/detection_class_remapper.param.yaml]
      • sync_param_path [default: $(find-pkg-share autoware_image_projection_based_fusion)/config/fusion_common.param.yaml]
      • common_param_path [default: $(find-pkg-share autoware_image_projection_based_fusion)/config/pointpainting_common.param.yaml]
      • build_only [default: false]
      • use_pointcloud_container [default: false]
      • pointcloud_container_name [default: pointcloud_container]
      • input_rois_number [default: $(var input/rois_number)]
      • input/image0 [default: /image_raw0]
      • input/image1 [default: /image_raw1]
      • input/image2 [default: /image_raw2]
      • input/image3 [default: /image_raw3]
      • input/image4 [default: /image_raw4]
      • input/image5 [default: /image_raw5]
      • input/image6 [default: /image_raw6]
      • input/image7 [default: /image_raw7]
      • input/image8 [default: /image_raw8]
  • launch/roi_cluster_fusion.launch.xml
      • input/rois_number [default: 6]
      • input/rois0 [default: rois0]
      • input/camera_info0 [default: /camera_info0]
      • input/rois1 [default: rois1]
      • input/camera_info1 [default: /camera_info1]
      • input/rois2 [default: rois2]
      • input/camera_info2 [default: /camera_info2]
      • input/rois3 [default: rois3]
      • input/camera_info3 [default: /camera_info3]
      • input/rois4 [default: rois4]
      • input/camera_info4 [default: /camera_info4]
      • input/rois5 [default: rois5]
      • input/camera_info5 [default: /camera_info5]
      • input/rois6 [default: rois6]
      • input/camera_info6 [default: /camera_info6]
      • input/rois7 [default: rois7]
      • input/camera_info7 [default: /camera_info7]
      • input/rois8 [default: rois8]
      • input/camera_info8 [default: /camera_info8]
      • input/clusters [default: clusters]
      • output/clusters [default: labeled_clusters]
      • param_path [default: $(find-pkg-share autoware_image_projection_based_fusion)/config/roi_cluster_fusion.param.yaml]
      • sync_param_path [default: $(find-pkg-share autoware_image_projection_based_fusion)/config/fusion_common.param.yaml]
      • input/image0 [default: /image_raw0]
      • input/image1 [default: /image_raw1]
      • input/image2 [default: /image_raw2]
      • input/image3 [default: /image_raw3]
      • input/image4 [default: /image_raw4]
      • input/image5 [default: /image_raw5]
      • input/image6 [default: /image_raw6]
      • input/image7 [default: /image_raw7]
      • input/image8 [default: /image_raw8]
  • launch/roi_detected_object_fusion.launch.xml
      • input/rois_number [default: 6]
      • input/rois0 [default: rois0]
      • input/camera_info0 [default: /camera_info0]
      • input/rois1 [default: rois1]
      • input/camera_info1 [default: /camera_info1]
      • input/rois2 [default: rois2]
      • input/camera_info2 [default: /camera_info2]
      • input/rois3 [default: rois3]
      • input/camera_info3 [default: /camera_info3]
      • input/rois4 [default: rois4]
      • input/camera_info4 [default: /camera_info4]
      • input/rois5 [default: rois5]
      • input/camera_info5 [default: /camera_info5]
      • input/rois6 [default: rois6]
      • input/camera_info6 [default: /camera_info6]
      • input/rois7 [default: rois7]
      • input/camera_info7 [default: /camera_info7]
      • input/rois8 [default: rois8]
      • input/camera_info8 [default: /camera_info8]
      • input/objects [default: objects]
      • output/objects [default: fused_objects]
      • param_path [default: $(find-pkg-share autoware_image_projection_based_fusion)/config/roi_detected_object_fusion.param.yaml]
      • sync_param_path [default: $(find-pkg-share autoware_image_projection_based_fusion)/config/fusion_common.param.yaml]
      • input_rois_number [default: $(var input/rois_number)]
      • input/image0 [default: /image_raw0]
      • input/image1 [default: /image_raw1]
      • input/image2 [default: /image_raw2]
      • input/image3 [default: /image_raw3]
      • input/image4 [default: /image_raw4]
      • input/image5 [default: /image_raw5]
      • input/image6 [default: /image_raw6]
      • input/image7 [default: /image_raw7]
      • input/image8 [default: /image_raw8]
  • launch/roi_pointcloud_fusion.launch.xml
      • pointcloud_container_name [default: pointcloud_container]
      • input/rois_number [default: 6]
      • input/rois0 [default: rois0]
      • input/camera_info0 [default: /camera_info0]
      • input/rois1 [default: rois1]
      • input/camera_info1 [default: /camera_info1]
      • input/rois2 [default: rois2]
      • input/camera_info2 [default: /camera_info2]
      • input/rois3 [default: rois3]
      • input/camera_info3 [default: /camera_info3]
      • input/rois4 [default: rois4]
      • input/camera_info4 [default: /camera_info4]
      • input/rois5 [default: rois5]
      • input/camera_info5 [default: /camera_info5]
      • input/rois6 [default: rois6]
      • input/camera_info6 [default: /camera_info6]
      • input/rois7 [default: rois7]
      • input/camera_info7 [default: /camera_info7]
      • input/rois8 [default: rois8]
      • input/camera_info8 [default: /camera_info8]
      • input/pointcloud [default: /perception/object_recognition/detection/pointcloud_map_filtered/pointcloud]
      • output/clusters [default: output/clusters]
      • debug/clusters [default: roi_pointcloud_fusion/debug/clusters]
      • param_path [default: $(find-pkg-share autoware_image_projection_based_fusion)/config/roi_pointcloud_fusion.param.yaml]
      • sync_param_path [default: $(find-pkg-share autoware_image_projection_based_fusion)/config/fusion_common.param.yaml]
      • input_rois_number [default: $(var input/rois_number)]
      • input/image0 [default: /image_raw0]
      • input/image1 [default: /image_raw1]
      • input/image2 [default: /image_raw2]
      • input/image3 [default: /image_raw3]
      • input/image4 [default: /image_raw4]
      • input/image5 [default: /image_raw5]
      • input/image6 [default: /image_raw6]
      • input/image7 [default: /image_raw7]
      • input/image8 [default: /image_raw8]
  • launch/segmentation_pointcloud_fusion.launch.xml
      • input/camera_number [default: 1]
      • input/mask0 [default: /perception/object_recognition/detection/mask0]
      • input/camera_info0 [default: /sensing/camera/camera0/camera_info]
      • input/mask1 [default: /perception/object_recognition/detection/mask1]
      • input/camera_info1 [default: /sensing/camera/camera1/camera_info]
      • input/mask2 [default: /perception/object_recognition/detection/mask2]
      • input/camera_info2 [default: /sensing/camera/camera2/camera_info]
      • input/mask3 [default: /perception/object_recognition/detection/mask3]
      • input/camera_info3 [default: /sensing/camera/camera3/camera_info]
      • input/mask4 [default: /perception/object_recognition/detection/mask4]
      • input/camera_info4 [default: /sensing/camera/camera4/camera_info]
      • input/mask5 [default: /perception/object_recognition/detection/mask5]
      • input/camera_info5 [default: /sensing/camera/camera5/camera_info]
      • input/mask6 [default: /perception/object_recognition/detection/mask6]
      • input/camera_info6 [default: /sensing/camera/camera6/camera_info]
      • input/mask7 [default: /perception/object_recognition/detection/mask7]
      • input/camera_info7 [default: /sensing/camera/camera7/camera_info]
      • input/mask8 [default: /perception/object_recognition/detection/mask8]
      • input/camera_info8 [default: /sensing/camera/camera8/camera_info]
      • input/pointcloud [default: /sensing/lidar/top/outlier_filtered/pointcloud]
      • output/pointcloud [default: output/pointcloud]
      • sync_param_path [default: $(find-pkg-share autoware_image_projection_based_fusion)/config/fusion_common.param.yaml]
      • semantic_segmentation_based_filter_param_path [default: $(find-pkg-share autoware_image_projection_based_fusion)/config/segmentation_pointcloud_fusion.param.yaml]
      • input/image0 [default: /image_raw0]
      • input/image1 [default: /image_raw1]
      • input/image2 [default: /image_raw2]
      • input/image3 [default: /image_raw3]
      • input/image4 [default: /image_raw4]
      • input/image5 [default: /image_raw5]
      • input/image6 [default: /image_raw6]
      • input/image7 [default: /image_raw7]
      • input/image8 [default: /image_raw8]

Messages

No message files found.

Services

No service files found.

Plugins

No plugins found.

Recent questions tagged autoware_image_projection_based_fusion at Robotics Stack Exchange

No version for distro galactic showing github. Known supported distros are highlighted in the buttons above.
Package symbol

autoware_image_projection_based_fusion package from autoware_universe repo

autoware_agnocast_wrapper autoware_auto_common autoware_boundary_departure_checker autoware_component_interface_specs_universe autoware_component_interface_tools autoware_component_interface_utils autoware_cuda_dependency_meta autoware_fake_test_node autoware_glog_component autoware_goal_distance_calculator autoware_grid_map_utils autoware_path_distance_calculator autoware_polar_grid autoware_time_utils autoware_traffic_light_recognition_marker_publisher autoware_traffic_light_utils autoware_universe_utils tier4_api_utils autoware_autonomous_emergency_braking autoware_collision_detector autoware_control_command_gate autoware_control_performance_analysis autoware_control_validator autoware_external_cmd_selector autoware_joy_controller autoware_lane_departure_checker autoware_mpc_lateral_controller autoware_obstacle_collision_checker autoware_operation_mode_transition_manager autoware_pid_longitudinal_controller autoware_predicted_path_checker autoware_pure_pursuit autoware_shift_decider autoware_smart_mpc_trajectory_follower autoware_stop_mode_operator autoware_trajectory_follower_base autoware_trajectory_follower_node autoware_vehicle_cmd_gate autoware_control_evaluator autoware_kinematic_evaluator autoware_localization_evaluator autoware_perception_online_evaluator autoware_planning_evaluator autoware_scenario_simulator_v2_adapter autoware_diagnostic_graph_test_examples tier4_autoware_api_launch tier4_control_launch tier4_localization_launch tier4_map_launch tier4_perception_launch tier4_planning_launch tier4_sensing_launch tier4_simulator_launch tier4_system_launch tier4_vehicle_launch autoware_geo_pose_projector autoware_ar_tag_based_localizer autoware_landmark_manager autoware_lidar_marker_localizer autoware_localization_error_monitor autoware_pose2twist autoware_pose_covariance_modifier autoware_pose_estimator_arbiter autoware_pose_instability_detector yabloc_common yabloc_image_processing yabloc_monitor yabloc_particle_filter yabloc_pose_initializer autoware_map_tf_generator autoware_bevfusion autoware_bytetrack autoware_cluster_merger autoware_compare_map_segmentation autoware_crosswalk_traffic_light_estimator autoware_detected_object_feature_remover autoware_detected_object_validation autoware_detection_by_tracker autoware_elevation_map_loader autoware_euclidean_cluster autoware_ground_segmentation autoware_image_projection_based_fusion autoware_lidar_apollo_instance_segmentation autoware_lidar_centerpoint autoware_lidar_transfusion autoware_map_based_prediction autoware_multi_object_tracker autoware_object_merger autoware_object_range_splitter autoware_object_sorter autoware_object_velocity_splitter autoware_occupancy_grid_map_outlier_filter autoware_probabilistic_occupancy_grid_map autoware_radar_fusion_to_detected_object autoware_radar_object_tracker autoware_radar_tracks_msgs_converter autoware_raindrop_cluster_filter autoware_shape_estimation autoware_simpl_prediction autoware_simple_object_merger autoware_tensorrt_bevdet autoware_tensorrt_classifier autoware_tensorrt_common autoware_tensorrt_plugins autoware_tensorrt_yolox autoware_tracking_object_merger autoware_traffic_light_arbiter autoware_traffic_light_category_merger autoware_traffic_light_classifier autoware_traffic_light_fine_detector autoware_traffic_light_map_based_detector autoware_traffic_light_multi_camera_fusion autoware_traffic_light_occlusion_predictor autoware_traffic_light_selector autoware_traffic_light_visualization perception_utils autoware_costmap_generator autoware_diffusion_planner 
autoware_external_velocity_limit_selector autoware_freespace_planner autoware_freespace_planning_algorithms autoware_hazard_lights_selector autoware_mission_planner_universe autoware_path_optimizer autoware_path_smoother autoware_remaining_distance_time_calculator autoware_rtc_interface autoware_scenario_selector autoware_surround_obstacle_checker autoware_behavior_path_avoidance_by_lane_change_module autoware_behavior_path_bidirectional_traffic_module autoware_behavior_path_dynamic_obstacle_avoidance_module autoware_behavior_path_external_request_lane_change_module autoware_behavior_path_goal_planner_module autoware_behavior_path_lane_change_module autoware_behavior_path_planner autoware_behavior_path_planner_common autoware_behavior_path_sampling_planner_module autoware_behavior_path_side_shift_module autoware_behavior_path_start_planner_module autoware_behavior_path_static_obstacle_avoidance_module autoware_behavior_velocity_blind_spot_module autoware_behavior_velocity_crosswalk_module autoware_behavior_velocity_detection_area_module autoware_behavior_velocity_intersection_module autoware_behavior_velocity_no_drivable_lane_module autoware_behavior_velocity_no_stopping_area_module autoware_behavior_velocity_occlusion_spot_module autoware_behavior_velocity_rtc_interface autoware_behavior_velocity_run_out_module autoware_behavior_velocity_speed_bump_module autoware_behavior_velocity_template_module autoware_behavior_velocity_traffic_light_module autoware_behavior_velocity_virtual_traffic_light_module autoware_behavior_velocity_walkway_module autoware_motion_velocity_boundary_departure_prevention_module autoware_motion_velocity_dynamic_obstacle_stop_module autoware_motion_velocity_obstacle_cruise_module autoware_motion_velocity_obstacle_slow_down_module autoware_motion_velocity_obstacle_velocity_limiter_module autoware_motion_velocity_out_of_lane_module autoware_motion_velocity_road_user_stop_module autoware_motion_velocity_run_out_module autoware_planning_validator autoware_planning_validator_intersection_collision_checker autoware_planning_validator_latency_checker autoware_planning_validator_rear_collision_checker autoware_planning_validator_test_utils autoware_planning_validator_trajectory_checker autoware_bezier_sampler autoware_frenet_planner autoware_path_sampler autoware_sampler_common autoware_cuda_pointcloud_preprocessor autoware_cuda_utils autoware_image_diagnostics autoware_image_transport_decompressor autoware_imu_corrector autoware_pcl_extensions autoware_pointcloud_preprocessor autoware_radar_objects_adapter autoware_radar_scan_to_pointcloud2 autoware_radar_static_pointcloud_filter autoware_radar_threshold_filter autoware_radar_tracks_noise_filter autoware_livox_tag_filter autoware_carla_interface autoware_dummy_perception_publisher autoware_fault_injection autoware_learning_based_vehicle_model autoware_simple_planning_simulator autoware_vehicle_door_simulator tier4_dummy_object_rviz_plugin autoware_bluetooth_monitor autoware_command_mode_decider autoware_command_mode_decider_plugins autoware_command_mode_switcher autoware_command_mode_switcher_plugins autoware_command_mode_types autoware_component_monitor autoware_component_state_monitor autoware_adapi_visualizers autoware_automatic_pose_initializer autoware_default_adapi_universe autoware_diagnostic_graph_aggregator autoware_diagnostic_graph_utils autoware_dummy_diag_publisher autoware_dummy_infrastructure autoware_duplicated_node_checker autoware_hazard_status_converter autoware_mrm_comfortable_stop_operator 
autoware_mrm_emergency_stop_operator autoware_mrm_handler autoware_pipeline_latency_monitor autoware_processing_time_checker autoware_system_monitor autoware_topic_relay_controller autoware_topic_state_monitor autoware_velodyne_monitor reaction_analyzer autoware_accel_brake_map_calibrator autoware_external_cmd_converter autoware_raw_vehicle_cmd_converter autoware_steer_offset_estimator autoware_bag_time_manager_rviz_plugin autoware_traffic_light_rviz_plugin tier4_adapi_rviz_plugin tier4_camera_view_rviz_plugin tier4_control_mode_rviz_plugin tier4_datetime_rviz_plugin tier4_perception_rviz_plugin tier4_planning_factor_rviz_plugin tier4_state_rviz_plugin tier4_system_rviz_plugin tier4_traffic_light_rviz_plugin tier4_vehicle_rviz_plugin

ROS Distro
github

Package Summary

Tags No category tags.
Version 0.47.0
License Apache License 2.0
Build type AMENT_CMAKE
Use RECOMMENDED

Repository Summary

Description
Checkout URI https://github.com/autowarefoundation/autoware_universe.git
VCS Type git
VCS Version main
Last Updated 2025-08-16
Dev Status UNKNOWN
Released UNRELEASED
Tags planner ros calibration self-driving-car autonomous-driving autonomous-vehicles ros2 3d-map autoware
Contributing Help Wanted (-)
Good First Issues (-)
Pull Requests to Review (-)

Package Description

The autoware_image_projection_based_fusion package

Additional Links

No additional links.

Maintainers

  • Yukihiro Saito
  • Yoshi Ri
  • Dai Nguyen
  • Kotaro Uetake
  • Tao Zhong
  • Taekjin Lee

Authors

No additional authors.

autoware_image_projection_based_fusion

Purpose

The autoware_image_projection_based_fusion package is designed to enhance obstacle detection accuracy by integrating information from both image-based and LiDAR-based perception. It fuses detected obstacles — such as bounding boxes or segmentation — from 2D images with 3D point clouds or other obstacle representations, including bounding boxes, clusters, or segmentation. This fusion helps to refine obstacle classification and detection in autonomous driving applications.

Fusion algorithms

The package provides multiple fusion algorithms, each designed for specific use cases. Below are the different fusion methods along with their descriptions and detailed documentation links:

Fusion Name Description Detail
roi_cluster_fusion Assigns classification labels to LiDAR-detected clusters by matching them with Regions of Interest (ROIs) from a 2D object detector. link
roi_detected_object_fusion Updates classification labels of detected objects using ROI information from a 2D object detector. link
pointpainting_fusion Augments the point cloud by painting each point with additional information from ROIs of a 2D object detector. The enriched point cloud is then processed by a 3D object detector for improved accuracy. link
roi_pointcloud_fusion Matching pointcloud with ROIs from a 2D object detector to detect unknown-labeled objects. link
segmentation_pointcloud_fusion Filtering pointcloud that are belong to less interesting region which is defined by semantic or instance segmentation by 2D image segmentation. link

Inner Workings / Algorithms

fusion_algorithm

The fusion process operates on two primary types of input data:

  • Msg3d: This includes 3D data such as point clouds, bounding boxes, or clusters from LiDAR.
  • RoIs (Regions of Interest): These are 2D detections or proposals from camera-based perception modules, such as object detection bounding boxes.

Both inputs come with timestamps, which are crucial for synchronization and fusion. Since sensors operate at different frequencies and may experience network delays, a systematic approach is needed to handle their arrival, align their timestamps, and ensure reliable fusion.

The following steps describe how the node processes these inputs, synchronizes them, and performs multi-sensor fusion.

Step 1: Matching and Creating a Collector

When a Msg3d or a set of RoIs arrives, its timestamp is checked, and an offset is subtracted to determine the reference timestamp. The node then searches for an existing collector with the same reference timestamp.

  • If a matching collector is found, the incoming data is added to it.
  • If no matching collector exists, a new collector is created and initialized with the reference timestamp.

Step 2: Triggering the Timer

Once a collector is created, a countdown timer is started. The timeout duration depends on which message type arrived first and is defined by either msg3d_timeout_sec for msg3d or rois_timeout_sec for RoIs.

The collector will attempt to fuse the collected 3D and 2D data either:

  • When both Msg3d and RoI data are available, or
  • When the timer expires.

If no Msg3d is received before the timer expires, the collector will discard the data without performing fusion.

Step 3: Fusion Process

The fusion process consists of three main stages:

  1. Preprocessing – Preparing the input data for fusion.
  2. Fusion – Aligning and merging RoIs with the 3D point cloud.
  3. Postprocessing – Refining the fused output based on the algorithm’s requirements.

The specific operations performed during these stages may vary depending on the type of fusion being applied.

Step 4: Publishing the Fused Result

After the fusion process is completed, the fused output is published. The collector is then reset to an idle state, ready to process the next incoming message.

The figure below shows how the input data is fused in different scenarios. roi_sync_image2

Parameters

All of the fusion nodes have the common parameters described in the following

{{ json_to_markdown(“perception/autoware_image_projection_based_fusion/schema/fusion_common.schema.json”) }}

Parameter Settings

Timeout

The order in which RoIs or the msg3d message arrives at the fusion node depends on your system and sensor configuration. Since the primary goal is to fuse 2D RoIs with msg3d data, msg3d is essential for processing.

If RoIs arrive earlier, they must wait until msg3d is received. You can adjust the waiting time using the rois_timeout_sec parameter.

If msg3d arrives first, the fusion process should proceed as quickly as possible, so the waiting time for msg3d (msg3d_timeout_sec) should be kept minimal.

RoIs Offsets

The offset between each camera and the LiDAR is determined by their shutter timing. To ensure accurate fusion, users must understand the timing offset between the RoIs and msg3d. Once this offset is known, it should be specified in the parameter rois_timestamp_offsets.

In the figure below, the LiDAR completes a full scan from the rear in 100 milliseconds. When the LiDAR scan reaches the area where the camera is facing, the camera is triggered, capturing an image with a corresponding timestamp. The rois_timestamp_offsets can then be calculated by subtracting the LiDAR header timestamp from the camera header timestamp. As a result, the rois_timestamp_offsets would be [0.059, 0.010, 0.026, 0.042, 0.076, 0.093].

lidar_camera_sync

To check the header timestamp of the msg3d and RoIs, user can easily run

ros2 echo [topic] --header field

Matching Strategies

We provide two matching strategies for different scenarios:

File truncated at 100 lines see the full file

CHANGELOG

Changelog for package autoware_image_projection_based_fusion

0.47.0 (2025-08-11)

  • chore(image_projection_based_fusion): add initializing status log (#11112)

    • chore(image_projection_based_fusion): add initializing status log

    * chore: change to warning ---------

  • style(pre-commit): update to clang-format-20 (#11088) Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>

  • fix(roi_cluster_fusion): fix bug in debug mode (#11054)

    • fix(roi_cluster_fusion): fix bug in debug mode
    • chore: refactor
    • chore: docs

    * fix debug iou ---------

  • fix(tier4_perception_launch): add one more camera fusion (#10973)

    • fix(tier4_perception_launch): add one more camera fusion
    • fix: missing launch
    • feat(detection.launch): add support for additional camera inputs (camera8)

    * fix: missing launch param ---------Co-authored-by: Taekjin LEE <<taekjin.lee@tier4.jp>>

  • fix(image_projection_based_fusion): loosen rois_number check (#10924)

  • feat(autoware_lidar_centerpoint): add class-wise confidence thresholds to CenterPoint (#10881)

    • Add PreprocessCuda to CenterPoint
    • style(pre-commit): autofix
    • style(pre-commit): autofix
    • Add intensity preprocessing
    • style(pre-commit): autofix
    • Fix config_.point_feature_size_ typo
    • style(pre-commit): autofix
    • Fix point typo
    • style(pre-commit): autofix
    • Change score_threshold to score_thresholds
    • Use <autoware/cuda_utils/cuda_utils.hpp> for clear_async
    • Rename pre_ptr_ to pre_proc_ptr_
    • Remove unused getCacheSize() and getIdx
    • Use template in generateVoxels_random_kernel instead
    • style(pre-commit): autofix
    • Remove references in generateVoxels_random_kernel
    • Remove references in generateVoxels_random_kernel
    • style(pre-commit): autofix
    • Remove generateIntensityFeatures_kernel and add the case of 11 to ENCODER_IN_FEATURE_SIZE for generateFeatures_kernel
    • style(pre-commit): autofix
    • Add class-wise confidence thresholds to CenterPoint
    • style(pre-commit): autofix
    • Remov empty line changes
    • Update score_threshold to score_thresholds in REAMME
    • style(pre-commit): autofix
    • Change score_thresholds from pass by value to pass by reference
    • style(pre-commit): autofix
    • Add information about class names in scehema
    • Change vector<double> to vector<float>
    • Remove thrust and add stream_ to PostProcessCUDA
    • style(pre-commit): autofix
    • Fix incorrect initialization of score_thresholds_ vector
    • Fix postprocess CudaMemCpy error
    • Fix postprocess score_thresholds_d_ptr_ typing error
    • Fix score_thresholds typing in node.cpp
    • Static casting params.score_thresholds vector
    • style(pre-commit): autofix
    • Update perception/autoware_lidar_centerpoint/src/node.cpp
    • Update perception/autoware_lidar_centerpoint/include/autoware/lidar_centerpoint/centerpoint_config.hpp
    • Update centerpoint_config.hpp
    • Update node.cpp
    • Update score_thresholds_ to double since ros2 supports only double instead of float
    • style(pre-commit): autofix
    • Fix cuda memory and revert double score_thresholds_ to float score_thresholds_

    * style(pre-commit): autofix ---------Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com> Co-authored-by: Taekjin LEE <<technolojin@gmail.com>>

  • Contributors: Kok Seang Tan, Mete Fatih Cırıt, badai nguyen

File truncated at 100 lines see the full file

Launch files

  • launch/pointpainting_fusion.launch.xml
      • input/rois_number [default: 6]
      • input/rois0 [default: rois0]
      • input/camera_info0 [default: /camera_info0]
      • input/rois1 [default: rois1]
      • input/camera_info1 [default: /camera_info1]
      • input/rois2 [default: rois2]
      • input/camera_info2 [default: /camera_info2]
      • input/rois3 [default: rois3]
      • input/camera_info3 [default: /camera_info3]
      • input/rois4 [default: rois4]
      • input/camera_info4 [default: /camera_info4]
      • input/rois5 [default: rois5]
      • input/camera_info5 [default: /camera_info5]
      • input/rois6 [default: rois6]
      • input/camera_info6 [default: /camera_info6]
      • input/rois7 [default: rois7]
      • input/camera_info7 [default: /camera_info7]
      • input/rois8 [default: rois8]
      • input/camera_info8 [default: /camera_info8]
      • input/pointcloud [default: /sensing/lidar/top/rectified/pointcloud]
      • output/objects [default: objects]
      • data_path [default: $(env HOME)/autoware_data]
      • model_name [default: pointpainting]
      • model_path [default: $(var data_path)/image_projection_based_fusion]
      • model_param_path [default: $(find-pkg-share autoware_image_projection_based_fusion)/config/pointpainting.param.yaml]
      • ml_package_param_path [default: $(var model_path)/$(var model_name)_ml_package.param.yaml]
      • class_remapper_param_path [default: $(var model_path)/detection_class_remapper.param.yaml]
      • sync_param_path [default: $(find-pkg-share autoware_image_projection_based_fusion)/config/fusion_common.param.yaml]
      • common_param_path [default: $(find-pkg-share autoware_image_projection_based_fusion)/config/pointpainting_common.param.yaml]
      • build_only [default: false]
      • use_pointcloud_container [default: false]
      • pointcloud_container_name [default: pointcloud_container]
      • input_rois_number [default: $(var input/rois_number)]
      • input/image0 [default: /image_raw0]
      • input/image1 [default: /image_raw1]
      • input/image2 [default: /image_raw2]
      • input/image3 [default: /image_raw3]
      • input/image4 [default: /image_raw4]
      • input/image5 [default: /image_raw5]
      • input/image6 [default: /image_raw6]
      • input/image7 [default: /image_raw7]
      • input/image8 [default: /image_raw8]
  • launch/roi_cluster_fusion.launch.xml
      • input/rois_number [default: 6]
      • input/rois0 [default: rois0]
      • input/camera_info0 [default: /camera_info0]
      • input/rois1 [default: rois1]
      • input/camera_info1 [default: /camera_info1]
      • input/rois2 [default: rois2]
      • input/camera_info2 [default: /camera_info2]
      • input/rois3 [default: rois3]
      • input/camera_info3 [default: /camera_info3]
      • input/rois4 [default: rois4]
      • input/camera_info4 [default: /camera_info4]
      • input/rois5 [default: rois5]
      • input/camera_info5 [default: /camera_info5]
      • input/rois6 [default: rois6]
      • input/camera_info6 [default: /camera_info6]
      • input/rois7 [default: rois7]
      • input/camera_info7 [default: /camera_info7]
      • input/rois8 [default: rois8]
      • input/camera_info8 [default: /camera_info8]
      • input/clusters [default: clusters]
      • output/clusters [default: labeled_clusters]
      • param_path [default: $(find-pkg-share autoware_image_projection_based_fusion)/config/roi_cluster_fusion.param.yaml]
      • sync_param_path [default: $(find-pkg-share autoware_image_projection_based_fusion)/config/fusion_common.param.yaml]
      • input/image0 [default: /image_raw0]
      • input/image1 [default: /image_raw1]
      • input/image2 [default: /image_raw2]
      • input/image3 [default: /image_raw3]
      • input/image4 [default: /image_raw4]
      • input/image5 [default: /image_raw5]
      • input/image6 [default: /image_raw6]
      • input/image7 [default: /image_raw7]
      • input/image8 [default: /image_raw8]
  • launch/roi_detected_object_fusion.launch.xml
      • input/rois_number [default: 6]
      • input/rois0 [default: rois0]
      • input/camera_info0 [default: /camera_info0]
      • input/rois1 [default: rois1]
      • input/camera_info1 [default: /camera_info1]
      • input/rois2 [default: rois2]
      • input/camera_info2 [default: /camera_info2]
      • input/rois3 [default: rois3]
      • input/camera_info3 [default: /camera_info3]
      • input/rois4 [default: rois4]
      • input/camera_info4 [default: /camera_info4]
      • input/rois5 [default: rois5]
      • input/camera_info5 [default: /camera_info5]
      • input/rois6 [default: rois6]
      • input/camera_info6 [default: /camera_info6]
      • input/rois7 [default: rois7]
      • input/camera_info7 [default: /camera_info7]
      • input/rois8 [default: rois8]
      • input/camera_info8 [default: /camera_info8]
      • input/objects [default: objects]
      • output/objects [default: fused_objects]
      • param_path [default: $(find-pkg-share autoware_image_projection_based_fusion)/config/roi_detected_object_fusion.param.yaml]
      • sync_param_path [default: $(find-pkg-share autoware_image_projection_based_fusion)/config/fusion_common.param.yaml]
      • input_rois_number [default: $(var input/rois_number)]
      • input/image0 [default: /image_raw0]
      • input/image1 [default: /image_raw1]
      • input/image2 [default: /image_raw2]
      • input/image3 [default: /image_raw3]
      • input/image4 [default: /image_raw4]
      • input/image5 [default: /image_raw5]
      • input/image6 [default: /image_raw6]
      • input/image7 [default: /image_raw7]
      • input/image8 [default: /image_raw8]
  • launch/roi_pointcloud_fusion.launch.xml
      • pointcloud_container_name [default: pointcloud_container]
      • input/rois_number [default: 6]
      • input/rois0 [default: rois0]
      • input/camera_info0 [default: /camera_info0]
      • input/rois1 [default: rois1]
      • input/camera_info1 [default: /camera_info1]
      • input/rois2 [default: rois2]
      • input/camera_info2 [default: /camera_info2]
      • input/rois3 [default: rois3]
      • input/camera_info3 [default: /camera_info3]
      • input/rois4 [default: rois4]
      • input/camera_info4 [default: /camera_info4]
      • input/rois5 [default: rois5]
      • input/camera_info5 [default: /camera_info5]
      • input/rois6 [default: rois6]
      • input/camera_info6 [default: /camera_info6]
      • input/rois7 [default: rois7]
      • input/camera_info7 [default: /camera_info7]
      • input/rois8 [default: rois8]
      • input/camera_info8 [default: /camera_info8]
      • input/pointcloud [default: /perception/object_recognition/detection/pointcloud_map_filtered/pointcloud]
      • output/clusters [default: output/clusters]
      • debug/clusters [default: roi_pointcloud_fusion/debug/clusters]
      • param_path [default: $(find-pkg-share autoware_image_projection_based_fusion)/config/roi_pointcloud_fusion.param.yaml]
      • sync_param_path [default: $(find-pkg-share autoware_image_projection_based_fusion)/config/fusion_common.param.yaml]
      • input_rois_number [default: $(var input/rois_number)]
      • input/image0 [default: /image_raw0]
      • input/image1 [default: /image_raw1]
      • input/image2 [default: /image_raw2]
      • input/image3 [default: /image_raw3]
      • input/image4 [default: /image_raw4]
      • input/image5 [default: /image_raw5]
      • input/image6 [default: /image_raw6]
      • input/image7 [default: /image_raw7]
      • input/image8 [default: /image_raw8]
  • launch/segmentation_pointcloud_fusion.launch.xml
      • input/camera_number [default: 1]
      • input/mask0 [default: /perception/object_recognition/detection/mask0]
      • input/camera_info0 [default: /sensing/camera/camera0/camera_info]
      • input/mask1 [default: /perception/object_recognition/detection/mask1]
      • input/camera_info1 [default: /sensing/camera/camera1/camera_info]
      • input/mask2 [default: /perception/object_recognition/detection/mask2]
      • input/camera_info2 [default: /sensing/camera/camera2/camera_info]
      • input/mask3 [default: /perception/object_recognition/detection/mask3]
      • input/camera_info3 [default: /sensing/camera/camera3/camera_info]
      • input/mask4 [default: /perception/object_recognition/detection/mask4]
      • input/camera_info4 [default: /sensing/camera/camera4/camera_info]
      • input/mask5 [default: /perception/object_recognition/detection/mask5]
      • input/camera_info5 [default: /sensing/camera/camera5/camera_info]
      • input/mask6 [default: /perception/object_recognition/detection/mask6]
      • input/camera_info6 [default: /sensing/camera/camera6/camera_info]
      • input/mask7 [default: /perception/object_recognition/detection/mask7]
      • input/camera_info7 [default: /sensing/camera/camera7/camera_info]
      • input/mask8 [default: /perception/object_recognition/detection/mask8]
      • input/camera_info8 [default: /sensing/camera/camera8/camera_info]
      • input/pointcloud [default: /sensing/lidar/top/outlier_filtered/pointcloud]
      • output/pointcloud [default: output/pointcloud]
      • sync_param_path [default: $(find-pkg-share autoware_image_projection_based_fusion)/config/fusion_common.param.yaml]
      • semantic_segmentation_based_filter_param_path [default: $(find-pkg-share autoware_image_projection_based_fusion)/config/segmentation_pointcloud_fusion.param.yaml]
      • input/image0 [default: /image_raw0]
      • input/image1 [default: /image_raw1]
      • input/image2 [default: /image_raw2]
      • input/image3 [default: /image_raw3]
      • input/image4 [default: /image_raw4]
      • input/image5 [default: /image_raw5]
      • input/image6 [default: /image_raw6]
      • input/image7 [default: /image_raw7]
      • input/image8 [default: /image_raw8]

Messages

No message files found.

Services

No service files found

Plugins

No plugins found.

Recent questions tagged autoware_image_projection_based_fusion at Robotics Stack Exchange

No version for distro iron showing github. Known supported distros are highlighted in the buttons above.
Package symbol

autoware_image_projection_based_fusion package from autoware_universe repo

autoware_agnocast_wrapper autoware_auto_common autoware_boundary_departure_checker autoware_component_interface_specs_universe autoware_component_interface_tools autoware_component_interface_utils autoware_cuda_dependency_meta autoware_fake_test_node autoware_glog_component autoware_goal_distance_calculator autoware_grid_map_utils autoware_path_distance_calculator autoware_polar_grid autoware_time_utils autoware_traffic_light_recognition_marker_publisher autoware_traffic_light_utils autoware_universe_utils tier4_api_utils autoware_autonomous_emergency_braking autoware_collision_detector autoware_control_command_gate autoware_control_performance_analysis autoware_control_validator autoware_external_cmd_selector autoware_joy_controller autoware_lane_departure_checker autoware_mpc_lateral_controller autoware_obstacle_collision_checker autoware_operation_mode_transition_manager autoware_pid_longitudinal_controller autoware_predicted_path_checker autoware_pure_pursuit autoware_shift_decider autoware_smart_mpc_trajectory_follower autoware_stop_mode_operator autoware_trajectory_follower_base autoware_trajectory_follower_node autoware_vehicle_cmd_gate autoware_control_evaluator autoware_kinematic_evaluator autoware_localization_evaluator autoware_perception_online_evaluator autoware_planning_evaluator autoware_scenario_simulator_v2_adapter autoware_diagnostic_graph_test_examples tier4_autoware_api_launch tier4_control_launch tier4_localization_launch tier4_map_launch tier4_perception_launch tier4_planning_launch tier4_sensing_launch tier4_simulator_launch tier4_system_launch tier4_vehicle_launch autoware_geo_pose_projector autoware_ar_tag_based_localizer autoware_landmark_manager autoware_lidar_marker_localizer autoware_localization_error_monitor autoware_pose2twist autoware_pose_covariance_modifier autoware_pose_estimator_arbiter autoware_pose_instability_detector yabloc_common yabloc_image_processing yabloc_monitor yabloc_particle_filter yabloc_pose_initializer autoware_map_tf_generator autoware_bevfusion autoware_bytetrack autoware_cluster_merger autoware_compare_map_segmentation autoware_crosswalk_traffic_light_estimator autoware_detected_object_feature_remover autoware_detected_object_validation autoware_detection_by_tracker autoware_elevation_map_loader autoware_euclidean_cluster autoware_ground_segmentation autoware_image_projection_based_fusion autoware_lidar_apollo_instance_segmentation autoware_lidar_centerpoint autoware_lidar_transfusion autoware_map_based_prediction autoware_multi_object_tracker autoware_object_merger autoware_object_range_splitter autoware_object_sorter autoware_object_velocity_splitter autoware_occupancy_grid_map_outlier_filter autoware_probabilistic_occupancy_grid_map autoware_radar_fusion_to_detected_object autoware_radar_object_tracker autoware_radar_tracks_msgs_converter autoware_raindrop_cluster_filter autoware_shape_estimation autoware_simpl_prediction autoware_simple_object_merger autoware_tensorrt_bevdet autoware_tensorrt_classifier autoware_tensorrt_common autoware_tensorrt_plugins autoware_tensorrt_yolox autoware_tracking_object_merger autoware_traffic_light_arbiter autoware_traffic_light_category_merger autoware_traffic_light_classifier autoware_traffic_light_fine_detector autoware_traffic_light_map_based_detector autoware_traffic_light_multi_camera_fusion autoware_traffic_light_occlusion_predictor autoware_traffic_light_selector autoware_traffic_light_visualization perception_utils autoware_costmap_generator autoware_diffusion_planner 
autoware_external_velocity_limit_selector autoware_freespace_planner autoware_freespace_planning_algorithms autoware_hazard_lights_selector autoware_mission_planner_universe autoware_path_optimizer autoware_path_smoother autoware_remaining_distance_time_calculator autoware_rtc_interface autoware_scenario_selector autoware_surround_obstacle_checker autoware_behavior_path_avoidance_by_lane_change_module autoware_behavior_path_bidirectional_traffic_module autoware_behavior_path_dynamic_obstacle_avoidance_module autoware_behavior_path_external_request_lane_change_module autoware_behavior_path_goal_planner_module autoware_behavior_path_lane_change_module autoware_behavior_path_planner autoware_behavior_path_planner_common autoware_behavior_path_sampling_planner_module autoware_behavior_path_side_shift_module autoware_behavior_path_start_planner_module autoware_behavior_path_static_obstacle_avoidance_module autoware_behavior_velocity_blind_spot_module autoware_behavior_velocity_crosswalk_module autoware_behavior_velocity_detection_area_module autoware_behavior_velocity_intersection_module autoware_behavior_velocity_no_drivable_lane_module autoware_behavior_velocity_no_stopping_area_module autoware_behavior_velocity_occlusion_spot_module autoware_behavior_velocity_rtc_interface autoware_behavior_velocity_run_out_module autoware_behavior_velocity_speed_bump_module autoware_behavior_velocity_template_module autoware_behavior_velocity_traffic_light_module autoware_behavior_velocity_virtual_traffic_light_module autoware_behavior_velocity_walkway_module autoware_motion_velocity_boundary_departure_prevention_module autoware_motion_velocity_dynamic_obstacle_stop_module autoware_motion_velocity_obstacle_cruise_module autoware_motion_velocity_obstacle_slow_down_module autoware_motion_velocity_obstacle_velocity_limiter_module autoware_motion_velocity_out_of_lane_module autoware_motion_velocity_road_user_stop_module autoware_motion_velocity_run_out_module autoware_planning_validator autoware_planning_validator_intersection_collision_checker autoware_planning_validator_latency_checker autoware_planning_validator_rear_collision_checker autoware_planning_validator_test_utils autoware_planning_validator_trajectory_checker autoware_bezier_sampler autoware_frenet_planner autoware_path_sampler autoware_sampler_common autoware_cuda_pointcloud_preprocessor autoware_cuda_utils autoware_image_diagnostics autoware_image_transport_decompressor autoware_imu_corrector autoware_pcl_extensions autoware_pointcloud_preprocessor autoware_radar_objects_adapter autoware_radar_scan_to_pointcloud2 autoware_radar_static_pointcloud_filter autoware_radar_threshold_filter autoware_radar_tracks_noise_filter autoware_livox_tag_filter autoware_carla_interface autoware_dummy_perception_publisher autoware_fault_injection autoware_learning_based_vehicle_model autoware_simple_planning_simulator autoware_vehicle_door_simulator tier4_dummy_object_rviz_plugin autoware_bluetooth_monitor autoware_command_mode_decider autoware_command_mode_decider_plugins autoware_command_mode_switcher autoware_command_mode_switcher_plugins autoware_command_mode_types autoware_component_monitor autoware_component_state_monitor autoware_adapi_visualizers autoware_automatic_pose_initializer autoware_default_adapi_universe autoware_diagnostic_graph_aggregator autoware_diagnostic_graph_utils autoware_dummy_diag_publisher autoware_dummy_infrastructure autoware_duplicated_node_checker autoware_hazard_status_converter autoware_mrm_comfortable_stop_operator 
autoware_mrm_emergency_stop_operator autoware_mrm_handler autoware_pipeline_latency_monitor autoware_processing_time_checker autoware_system_monitor autoware_topic_relay_controller autoware_topic_state_monitor autoware_velodyne_monitor reaction_analyzer autoware_accel_brake_map_calibrator autoware_external_cmd_converter autoware_raw_vehicle_cmd_converter autoware_steer_offset_estimator autoware_bag_time_manager_rviz_plugin autoware_traffic_light_rviz_plugin tier4_adapi_rviz_plugin tier4_camera_view_rviz_plugin tier4_control_mode_rviz_plugin tier4_datetime_rviz_plugin tier4_perception_rviz_plugin tier4_planning_factor_rviz_plugin tier4_state_rviz_plugin tier4_system_rviz_plugin tier4_traffic_light_rviz_plugin tier4_vehicle_rviz_plugin

Package Summary

Tags No category tags.
Version 0.47.0
License Apache License 2.0
Build type AMENT_CMAKE
Use RECOMMENDED

Repository Summary

Description
Checkout URI https://github.com/autowarefoundation/autoware_universe.git
VCS Type git
VCS Version main
Last Updated 2025-08-16
Dev Status UNKNOWN
Released UNRELEASED
Tags planner ros calibration self-driving-car autonomous-driving autonomous-vehicles ros2 3d-map autoware

Package Description

The autoware_image_projection_based_fusion package

Additional Links

No additional links.

Maintainers

  • Yukihiro Saito
  • Yoshi Ri
  • Dai Nguyen
  • Kotaro Uetake
  • Tao Zhong
  • Taekjin Lee

Authors

No additional authors.

autoware_image_projection_based_fusion

Purpose

The autoware_image_projection_based_fusion package enhances obstacle detection accuracy by integrating image-based and LiDAR-based perception. It fuses 2D image detections, such as bounding boxes or segmentation masks, with 3D obstacle representations such as point clouds, bounding boxes, or clusters. This fusion helps refine obstacle classification and detection in autonomous driving applications.

Fusion algorithms

The package provides multiple fusion algorithms, each designed for specific use cases. Below are the different fusion methods along with their descriptions and detailed documentation links:

Fusion Name Description Detail
roi_cluster_fusion Assigns classification labels to LiDAR-detected clusters by matching them with regions of interest (RoIs) from a 2D object detector. link
roi_detected_object_fusion Updates the classification labels of detected objects using RoI information from a 2D object detector. link
pointpainting_fusion Augments the point cloud by painting each point with additional information from the RoIs of a 2D object detector. The enriched point cloud is then processed by a 3D object detector for improved accuracy. link
roi_pointcloud_fusion Matches the point cloud with RoIs from a 2D object detector to detect unknown-labeled objects. link
segmentation_pointcloud_fusion Filters out point cloud points that belong to less relevant regions, as defined by 2D semantic or instance segmentation. link

Inner Workings / Algorithms

[Figure: fusion_algorithm]

The fusion process operates on two primary types of input data:

  • Msg3d: This includes 3D data such as point clouds, bounding boxes, or clusters from LiDAR.
  • RoIs (Regions of Interest): These are 2D detections or proposals from camera-based perception modules, such as object detection bounding boxes.

Both inputs come with timestamps, which are crucial for synchronization and fusion. Since sensors operate at different frequencies and may experience network delays, a systematic approach is needed to handle their arrival, align their timestamps, and ensure reliable fusion.

The following steps describe how the node processes these inputs, synchronizes them, and performs multi-sensor fusion.

Step 1: Matching and Creating a Collector

When a Msg3d or a set of RoIs arrives, its timestamp is checked, and an offset is subtracted to determine the reference timestamp. The node then searches for an existing collector with the same reference timestamp.

  • If a matching collector is found, the incoming data is added to it.
  • If no matching collector exists, a new collector is created and initialized with the reference timestamp.
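
As an illustration, the Step 1 matching logic can be sketched in C++ as follows. This is a minimal sketch with hypothetical names, not the package's actual API; it only assumes timestamps are compared as seconds within a small tolerance.

// Minimal sketch of reference-timestamp matching (hypothetical names).
#include <cmath>
#include <list>

struct Collector
{
  double reference_timestamp;  // arrival timestamp minus the known offset
  // ... collected msg3d / RoIs data would be stored here ...
};

// Return a collector whose reference timestamp matches within a tolerance,
// creating and initializing a new one when no match exists.
Collector & match_or_create(
  std::list<Collector> & collectors, double msg_stamp, double offset,
  double tolerance = 1e-3)
{
  const double reference = msg_stamp - offset;  // subtract the offset
  for (auto & collector : collectors) {
    if (std::abs(collector.reference_timestamp - reference) < tolerance) {
      return collector;  // matching collector found: add incoming data to it
    }
  }
  collectors.push_back(Collector{reference});  // no match: create a new one
  return collectors.back();
}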

Step 2: Triggering the Timer

Once a collector is created, a countdown timer is started. The timeout duration depends on which message type arrived first and is defined by either msg3d_timeout_sec for msg3d or rois_timeout_sec for RoIs.

The collector will attempt to fuse the collected 3D and 2D data either:

  • When both Msg3d and RoI data are available, or
  • When the timer expires.

If no Msg3d is received before the timer expires, the collector will discard the data without performing fusion.
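
A minimal sketch of this timeout selection, assuming the duration is picked by whichever message type created the collector (the names here are illustrative, not the node's actual code):

// Illustrative sketch: choose the countdown duration based on which
// message type arrived first (msg3d_timeout_sec vs. rois_timeout_sec).
enum class FirstArrival { kMsg3d, kRois };

double select_timeout_sec(
  FirstArrival first, double msg3d_timeout_sec, double rois_timeout_sec)
{
  // msg3d first: fusion can proceed quickly, so the wait is kept short.
  // RoIs first: wait up to rois_timeout_sec for the essential msg3d.
  return first == FirstArrival::kMsg3d ? msg3d_timeout_sec : rois_timeout_sec;
}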

Step 3: Fusion Process

The fusion process consists of three main stages:

  1. Preprocessing – Preparing the input data for fusion.
  2. Fusion – Aligning and merging RoIs with the 3D point cloud.
  3. Postprocessing – Refining the fused output based on the algorithm’s requirements.

The specific operations performed during these stages may vary depending on the type of fusion being applied.
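
Conceptually, every fusion variant plugs its own behavior into the same three-stage skeleton. A hypothetical C++ interface illustrating that structure (not the package's actual class layout):

// Hypothetical three-stage skeleton; the real node's classes may differ.
class FusionAlgorithm
{
public:
  virtual ~FusionAlgorithm() = default;
  virtual void preprocess() = 0;   // prepare the input data for fusion
  virtual void fuse() = 0;         // align and merge RoIs with the 3D data
  virtual void postprocess() = 0;  // refine output per algorithm needs
};

inline void run_fusion(FusionAlgorithm & algorithm)
{
  algorithm.preprocess();
  algorithm.fuse();
  algorithm.postprocess();
}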

Step 4: Publishing the Fused Result

After the fusion process is completed, the fused output is published. The collector is then reset to an idle state, ready to process the next incoming message.

The figure below shows how the input data is fused in different scenarios. [Figure: roi_sync_image2]

Parameters

All of the fusion nodes share the common parameters described in the following schema:

{{ json_to_markdown("perception/autoware_image_projection_based_fusion/schema/fusion_common.schema.json") }}

Parameter Settings

Timeout

The order in which RoIs or the msg3d message arrives at the fusion node depends on your system and sensor configuration. Since the primary goal is to fuse 2D RoIs with msg3d data, msg3d is essential for processing.

If RoIs arrive earlier, they must wait until msg3d is received. You can adjust the waiting time using the rois_timeout_sec parameter.

If msg3d arrives first, the fusion process should proceed as quickly as possible, so the waiting time for msg3d (msg3d_timeout_sec) should be kept minimal.
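
As an illustration, these two timeouts could be declared in a ROS 2 C++ node as below. The default values shown are placeholders for illustration, not the package's shipped defaults:

// Illustrative only: declaring and reading the two timeout parameters.
#include <rclcpp/rclcpp.hpp>

void declare_timeouts(rclcpp::Node & node)
{
  // RoIs that arrive early may need to wait comparatively long for msg3d.
  const double rois_timeout_sec =
    node.declare_parameter<double>("rois_timeout_sec", 0.5);
  // msg3d is essential, so once it arrives fusion should proceed quickly.
  const double msg3d_timeout_sec =
    node.declare_parameter<double>("msg3d_timeout_sec", 0.05);
  RCLCPP_INFO(
    node.get_logger(), "rois_timeout_sec=%.3f, msg3d_timeout_sec=%.3f",
    rois_timeout_sec, msg3d_timeout_sec);
}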

RoIs Offsets

The offset between each camera and the LiDAR is determined by their shutter timing. To ensure accurate fusion, users must understand the timing offset between the RoIs and msg3d. Once this offset is known, it should be specified in the parameter rois_timestamp_offsets.

In the figure below, the LiDAR completes a full scan from the rear in 100 milliseconds. When the LiDAR scan reaches the area where the camera is facing, the camera is triggered, capturing an image with a corresponding timestamp. The rois_timestamp_offsets can then be calculated by subtracting the LiDAR header timestamp from the camera header timestamp. As a result, the rois_timestamp_offsets would be [0.059, 0.010, 0.026, 0.042, 0.076, 0.093].

[Figure: lidar_camera_sync]
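
The computation itself is a simple per-camera subtraction. A self-contained sketch using the stamps implied by the figure (the absolute stamp values are illustrative; only the differences matter):

// Sketch: rois_timestamp_offsets = camera header stamp - LiDAR header stamp.
// The absolute stamp values below are illustrative.
#include <cstdio>
#include <vector>

int main()
{
  const double lidar_stamp = 100.000;  // LiDAR header timestamp [s]
  const std::vector<double> camera_stamps = {
    100.059, 100.010, 100.026, 100.042, 100.076, 100.093};

  std::vector<double> rois_timestamp_offsets;
  for (const double camera_stamp : camera_stamps) {
    rois_timestamp_offsets.push_back(camera_stamp - lidar_stamp);
  }
  for (const double offset : rois_timestamp_offsets) {
    std::printf("%.3f ", offset);  // 0.059 0.010 0.026 0.042 0.076 0.093
  }
  std::printf("\n");
  return 0;
}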

To check the header timestamps of the msg3d and RoIs topics, you can run:

ros2 topic echo [topic] --field header
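
For example, for the default pointcloud input topic of the pointpainting launch file:

ros2 topic echo /sensing/lidar/top/rectified/pointcloud --field header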

Matching Strategies

We provide two matching strategies for different scenarios:

File truncated at 100 lines; see the full file.

CHANGELOG

Changelog for package autoware_image_projection_based_fusion

0.47.0 (2025-08-11)

  • chore(image_projection_based_fusion): add initializing status log (#11112)

    • chore(image_projection_based_fusion): add initializing status log

    * chore: change to warning ---------

  • style(pre-commit): update to clang-format-20 (#11088) Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>

  • fix(roi_cluster_fusion): fix bug in debug mode (#11054)

    • fix(roi_cluster_fusion): fix bug in debug mode
    • chore: refactor
    • chore: docs

    * fix debug iou ---------

  • fix(tier4_perception_launch): add one more camera fusion (#10973)

    • fix(tier4_perception_launch): add one more camera fusion
    • fix: missing launch
    • feat(detection.launch): add support for additional camera inputs (camera8)

    * fix: missing launch param ---------Co-authored-by: Taekjin LEE <<taekjin.lee@tier4.jp>>

  • fix(image_projection_based_fusion): loosen rois_number check (#10924)

  • feat(autoware_lidar_centerpoint): add class-wise confidence thresholds to CenterPoint (#10881)

    • Add PreprocessCuda to CenterPoint
    • style(pre-commit): autofix
    • style(pre-commit): autofix
    • Add intensity preprocessing
    • style(pre-commit): autofix
    • Fix config_.point_feature_size_ typo
    • style(pre-commit): autofix
    • Fix point typo
    • style(pre-commit): autofix
    • Change score_threshold to score_thresholds
    • Use <autoware/cuda_utils/cuda_utils.hpp> for clear_async
    • Rename pre_ptr_ to pre_proc_ptr_
    • Remove unused getCacheSize() and getIdx
    • Use template in generateVoxels_random_kernel instead
    • style(pre-commit): autofix
    • Remove references in generateVoxels_random_kernel
    • Remove references in generateVoxels_random_kernel
    • style(pre-commit): autofix
    • Remove generateIntensityFeatures_kernel and add the case of 11 to ENCODER_IN_FEATURE_SIZE for generateFeatures_kernel
    • style(pre-commit): autofix
    • Add class-wise confidence thresholds to CenterPoint
    • style(pre-commit): autofix
    • Remov empty line changes
    • Update score_threshold to score_thresholds in REAMME
    • style(pre-commit): autofix
    • Change score_thresholds from pass by value to pass by reference
    • style(pre-commit): autofix
    • Add information about class names in scehema
    • Change vector<double> to vector<float>
    • Remove thrust and add stream_ to PostProcessCUDA
    • style(pre-commit): autofix
    • Fix incorrect initialization of score_thresholds_ vector
    • Fix postprocess CudaMemCpy error
    • Fix postprocess score_thresholds_d_ptr_ typing error
    • Fix score_thresholds typing in node.cpp
    • Static casting params.score_thresholds vector
    • style(pre-commit): autofix
    • Update perception/autoware_lidar_centerpoint/src/node.cpp
    • Update perception/autoware_lidar_centerpoint/include/autoware/lidar_centerpoint/centerpoint_config.hpp
    • Update centerpoint_config.hpp
    • Update node.cpp
    • Update score_thresholds_ to double since ros2 supports only double instead of float
    • style(pre-commit): autofix
    • Fix cuda memory and revert double score_thresholds_ to float score_thresholds_

    * style(pre-commit): autofix ---------Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com> Co-authored-by: Taekjin LEE <<technolojin@gmail.com>>

  • Contributors: Kok Seang Tan, Mete Fatih Cırıt, badai nguyen

File truncated at 100 lines; see the full file.

Launch files

  • launch/pointpainting_fusion.launch.xml
      • input/rois_number [default: 6]
      • input/rois0 [default: rois0]
      • input/camera_info0 [default: /camera_info0]
      • input/rois1 [default: rois1]
      • input/camera_info1 [default: /camera_info1]
      • input/rois2 [default: rois2]
      • input/camera_info2 [default: /camera_info2]
      • input/rois3 [default: rois3]
      • input/camera_info3 [default: /camera_info3]
      • input/rois4 [default: rois4]
      • input/camera_info4 [default: /camera_info4]
      • input/rois5 [default: rois5]
      • input/camera_info5 [default: /camera_info5]
      • input/rois6 [default: rois6]
      • input/camera_info6 [default: /camera_info6]
      • input/rois7 [default: rois7]
      • input/camera_info7 [default: /camera_info7]
      • input/rois8 [default: rois8]
      • input/camera_info8 [default: /camera_info8]
      • input/pointcloud [default: /sensing/lidar/top/rectified/pointcloud]
      • output/objects [default: objects]
      • data_path [default: $(env HOME)/autoware_data]
      • model_name [default: pointpainting]
      • model_path [default: $(var data_path)/image_projection_based_fusion]
      • model_param_path [default: $(find-pkg-share autoware_image_projection_based_fusion)/config/pointpainting.param.yaml]
      • ml_package_param_path [default: $(var model_path)/$(var model_name)_ml_package.param.yaml]
      • class_remapper_param_path [default: $(var model_path)/detection_class_remapper.param.yaml]
      • sync_param_path [default: $(find-pkg-share autoware_image_projection_based_fusion)/config/fusion_common.param.yaml]
      • common_param_path [default: $(find-pkg-share autoware_image_projection_based_fusion)/config/pointpainting_common.param.yaml]
      • build_only [default: false]
      • use_pointcloud_container [default: false]
      • pointcloud_container_name [default: pointcloud_container]
      • input_rois_number [default: $(var input/rois_number)]
      • input/image0 [default: /image_raw0]
      • input/image1 [default: /image_raw1]
      • input/image2 [default: /image_raw2]
      • input/image3 [default: /image_raw3]
      • input/image4 [default: /image_raw4]
      • input/image5 [default: /image_raw5]
      • input/image6 [default: /image_raw6]
      • input/image7 [default: /image_raw7]
      • input/image8 [default: /image_raw8]
  • launch/roi_cluster_fusion.launch.xml
      • input/rois_number [default: 6]
      • input/rois0 [default: rois0]
      • input/camera_info0 [default: /camera_info0]
      • input/rois1 [default: rois1]
      • input/camera_info1 [default: /camera_info1]
      • input/rois2 [default: rois2]
      • input/camera_info2 [default: /camera_info2]
      • input/rois3 [default: rois3]
      • input/camera_info3 [default: /camera_info3]
      • input/rois4 [default: rois4]
      • input/camera_info4 [default: /camera_info4]
      • input/rois5 [default: rois5]
      • input/camera_info5 [default: /camera_info5]
      • input/rois6 [default: rois6]
      • input/camera_info6 [default: /camera_info6]
      • input/rois7 [default: rois7]
      • input/camera_info7 [default: /camera_info7]
      • input/rois8 [default: rois8]
      • input/camera_info8 [default: /camera_info8]
      • input/clusters [default: clusters]
      • output/clusters [default: labeled_clusters]
      • param_path [default: $(find-pkg-share autoware_image_projection_based_fusion)/config/roi_cluster_fusion.param.yaml]
      • sync_param_path [default: $(find-pkg-share autoware_image_projection_based_fusion)/config/fusion_common.param.yaml]
      • input/image0 [default: /image_raw0]
      • input/image1 [default: /image_raw1]
      • input/image2 [default: /image_raw2]
      • input/image3 [default: /image_raw3]
      • input/image4 [default: /image_raw4]
      • input/image5 [default: /image_raw5]
      • input/image6 [default: /image_raw6]
      • input/image7 [default: /image_raw7]
      • input/image8 [default: /image_raw8]
  • launch/roi_detected_object_fusion.launch.xml
      • input/rois_number [default: 6]
      • input/rois0 [default: rois0]
      • input/camera_info0 [default: /camera_info0]
      • input/rois1 [default: rois1]
      • input/camera_info1 [default: /camera_info1]
      • input/rois2 [default: rois2]
      • input/camera_info2 [default: /camera_info2]
      • input/rois3 [default: rois3]
      • input/camera_info3 [default: /camera_info3]
      • input/rois4 [default: rois4]
      • input/camera_info4 [default: /camera_info4]
      • input/rois5 [default: rois5]
      • input/camera_info5 [default: /camera_info5]
      • input/rois6 [default: rois6]
      • input/camera_info6 [default: /camera_info6]
      • input/rois7 [default: rois7]
      • input/camera_info7 [default: /camera_info7]
      • input/rois8 [default: rois8]
      • input/camera_info8 [default: /camera_info8]
      • input/objects [default: objects]
      • output/objects [default: fused_objects]
      • param_path [default: $(find-pkg-share autoware_image_projection_based_fusion)/config/roi_detected_object_fusion.param.yaml]
      • sync_param_path [default: $(find-pkg-share autoware_image_projection_based_fusion)/config/fusion_common.param.yaml]
      • input_rois_number [default: $(var input/rois_number)]
      • input/image0 [default: /image_raw0]
      • input/image1 [default: /image_raw1]
      • input/image2 [default: /image_raw2]
      • input/image3 [default: /image_raw3]
      • input/image4 [default: /image_raw4]
      • input/image5 [default: /image_raw5]
      • input/image6 [default: /image_raw6]
      • input/image7 [default: /image_raw7]
      • input/image8 [default: /image_raw8]
  • launch/roi_pointcloud_fusion.launch.xml
      • pointcloud_container_name [default: pointcloud_container]
      • input/rois_number [default: 6]
      • input/rois0 [default: rois0]
      • input/camera_info0 [default: /camera_info0]
      • input/rois1 [default: rois1]
      • input/camera_info1 [default: /camera_info1]
      • input/rois2 [default: rois2]
      • input/camera_info2 [default: /camera_info2]
      • input/rois3 [default: rois3]
      • input/camera_info3 [default: /camera_info3]
      • input/rois4 [default: rois4]
      • input/camera_info4 [default: /camera_info4]
      • input/rois5 [default: rois5]
      • input/camera_info5 [default: /camera_info5]
      • input/rois6 [default: rois6]
      • input/camera_info6 [default: /camera_info6]
      • input/rois7 [default: rois7]
      • input/camera_info7 [default: /camera_info7]
      • input/rois8 [default: rois8]
      • input/camera_info8 [default: /camera_info8]
      • input/pointcloud [default: /perception/object_recognition/detection/pointcloud_map_filtered/pointcloud]
      • output/clusters [default: output/clusters]
      • debug/clusters [default: roi_pointcloud_fusion/debug/clusters]
      • param_path [default: $(find-pkg-share autoware_image_projection_based_fusion)/config/roi_pointcloud_fusion.param.yaml]
      • sync_param_path [default: $(find-pkg-share autoware_image_projection_based_fusion)/config/fusion_common.param.yaml]
      • input_rois_number [default: $(var input/rois_number)]
      • input/image0 [default: /image_raw0]
      • input/image1 [default: /image_raw1]
      • input/image2 [default: /image_raw2]
      • input/image3 [default: /image_raw3]
      • input/image4 [default: /image_raw4]
      • input/image5 [default: /image_raw5]
      • input/image6 [default: /image_raw6]
      • input/image7 [default: /image_raw7]
      • input/image8 [default: /image_raw8]
  • launch/segmentation_pointcloud_fusion.launch.xml
      • input/camera_number [default: 1]
      • input/mask0 [default: /perception/object_recognition/detection/mask0]
      • input/camera_info0 [default: /sensing/camera/camera0/camera_info]
      • input/mask1 [default: /perception/object_recognition/detection/mask1]
      • input/camera_info1 [default: /sensing/camera/camera1/camera_info]
      • input/mask2 [default: /perception/object_recognition/detection/mask2]
      • input/camera_info2 [default: /sensing/camera/camera2/camera_info]
      • input/mask3 [default: /perception/object_recognition/detection/mask3]
      • input/camera_info3 [default: /sensing/camera/camera3/camera_info]
      • input/mask4 [default: /perception/object_recognition/detection/mask4]
      • input/camera_info4 [default: /sensing/camera/camera4/camera_info]
      • input/mask5 [default: /perception/object_recognition/detection/mask5]
      • input/camera_info5 [default: /sensing/camera/camera5/camera_info]
      • input/mask6 [default: /perception/object_recognition/detection/mask6]
      • input/camera_info6 [default: /sensing/camera/camera6/camera_info]
      • input/mask7 [default: /perception/object_recognition/detection/mask7]
      • input/camera_info7 [default: /sensing/camera/camera7/camera_info]
      • input/mask8 [default: /perception/object_recognition/detection/mask8]
      • input/camera_info8 [default: /sensing/camera/camera8/camera_info]
      • input/pointcloud [default: /sensing/lidar/top/outlier_filtered/pointcloud]
      • output/pointcloud [default: output/pointcloud]
      • sync_param_path [default: $(find-pkg-share autoware_image_projection_based_fusion)/config/fusion_common.param.yaml]
      • semantic_segmentation_based_filter_param_path [default: $(find-pkg-share autoware_image_projection_based_fusion)/config/segmentation_pointcloud_fusion.param.yaml]
      • input/image0 [default: /image_raw0]
      • input/image1 [default: /image_raw1]
      • input/image2 [default: /image_raw2]
      • input/image3 [default: /image_raw3]
      • input/image4 [default: /image_raw4]
      • input/image5 [default: /image_raw5]
      • input/image6 [default: /image_raw6]
      • input/image7 [default: /image_raw7]
      • input/image8 [default: /image_raw8]

Messages

No message files found.

Services

No service files found.

Plugins

No plugins found.

Recent questions tagged autoware_image_projection_based_fusion at Robotics Stack Exchange

No version for distro melodic showing github. Known supported distros are highlighted in the buttons above.
Package symbol

autoware_image_projection_based_fusion package from autoware_universe repo

autoware_agnocast_wrapper autoware_auto_common autoware_boundary_departure_checker autoware_component_interface_specs_universe autoware_component_interface_tools autoware_component_interface_utils autoware_cuda_dependency_meta autoware_fake_test_node autoware_glog_component autoware_goal_distance_calculator autoware_grid_map_utils autoware_path_distance_calculator autoware_polar_grid autoware_time_utils autoware_traffic_light_recognition_marker_publisher autoware_traffic_light_utils autoware_universe_utils tier4_api_utils autoware_autonomous_emergency_braking autoware_collision_detector autoware_control_command_gate autoware_control_performance_analysis autoware_control_validator autoware_external_cmd_selector autoware_joy_controller autoware_lane_departure_checker autoware_mpc_lateral_controller autoware_obstacle_collision_checker autoware_operation_mode_transition_manager autoware_pid_longitudinal_controller autoware_predicted_path_checker autoware_pure_pursuit autoware_shift_decider autoware_smart_mpc_trajectory_follower autoware_stop_mode_operator autoware_trajectory_follower_base autoware_trajectory_follower_node autoware_vehicle_cmd_gate autoware_control_evaluator autoware_kinematic_evaluator autoware_localization_evaluator autoware_perception_online_evaluator autoware_planning_evaluator autoware_scenario_simulator_v2_adapter autoware_diagnostic_graph_test_examples tier4_autoware_api_launch tier4_control_launch tier4_localization_launch tier4_map_launch tier4_perception_launch tier4_planning_launch tier4_sensing_launch tier4_simulator_launch tier4_system_launch tier4_vehicle_launch autoware_geo_pose_projector autoware_ar_tag_based_localizer autoware_landmark_manager autoware_lidar_marker_localizer autoware_localization_error_monitor autoware_pose2twist autoware_pose_covariance_modifier autoware_pose_estimator_arbiter autoware_pose_instability_detector yabloc_common yabloc_image_processing yabloc_monitor yabloc_particle_filter yabloc_pose_initializer autoware_map_tf_generator autoware_bevfusion autoware_bytetrack autoware_cluster_merger autoware_compare_map_segmentation autoware_crosswalk_traffic_light_estimator autoware_detected_object_feature_remover autoware_detected_object_validation autoware_detection_by_tracker autoware_elevation_map_loader autoware_euclidean_cluster autoware_ground_segmentation autoware_image_projection_based_fusion autoware_lidar_apollo_instance_segmentation autoware_lidar_centerpoint autoware_lidar_transfusion autoware_map_based_prediction autoware_multi_object_tracker autoware_object_merger autoware_object_range_splitter autoware_object_sorter autoware_object_velocity_splitter autoware_occupancy_grid_map_outlier_filter autoware_probabilistic_occupancy_grid_map autoware_radar_fusion_to_detected_object autoware_radar_object_tracker autoware_radar_tracks_msgs_converter autoware_raindrop_cluster_filter autoware_shape_estimation autoware_simpl_prediction autoware_simple_object_merger autoware_tensorrt_bevdet autoware_tensorrt_classifier autoware_tensorrt_common autoware_tensorrt_plugins autoware_tensorrt_yolox autoware_tracking_object_merger autoware_traffic_light_arbiter autoware_traffic_light_category_merger autoware_traffic_light_classifier autoware_traffic_light_fine_detector autoware_traffic_light_map_based_detector autoware_traffic_light_multi_camera_fusion autoware_traffic_light_occlusion_predictor autoware_traffic_light_selector autoware_traffic_light_visualization perception_utils autoware_costmap_generator autoware_diffusion_planner 
autoware_external_velocity_limit_selector autoware_freespace_planner autoware_freespace_planning_algorithms autoware_hazard_lights_selector autoware_mission_planner_universe autoware_path_optimizer autoware_path_smoother autoware_remaining_distance_time_calculator autoware_rtc_interface autoware_scenario_selector autoware_surround_obstacle_checker autoware_behavior_path_avoidance_by_lane_change_module autoware_behavior_path_bidirectional_traffic_module autoware_behavior_path_dynamic_obstacle_avoidance_module autoware_behavior_path_external_request_lane_change_module autoware_behavior_path_goal_planner_module autoware_behavior_path_lane_change_module autoware_behavior_path_planner autoware_behavior_path_planner_common autoware_behavior_path_sampling_planner_module autoware_behavior_path_side_shift_module autoware_behavior_path_start_planner_module autoware_behavior_path_static_obstacle_avoidance_module autoware_behavior_velocity_blind_spot_module autoware_behavior_velocity_crosswalk_module autoware_behavior_velocity_detection_area_module autoware_behavior_velocity_intersection_module autoware_behavior_velocity_no_drivable_lane_module autoware_behavior_velocity_no_stopping_area_module autoware_behavior_velocity_occlusion_spot_module autoware_behavior_velocity_rtc_interface autoware_behavior_velocity_run_out_module autoware_behavior_velocity_speed_bump_module autoware_behavior_velocity_template_module autoware_behavior_velocity_traffic_light_module autoware_behavior_velocity_virtual_traffic_light_module autoware_behavior_velocity_walkway_module autoware_motion_velocity_boundary_departure_prevention_module autoware_motion_velocity_dynamic_obstacle_stop_module autoware_motion_velocity_obstacle_cruise_module autoware_motion_velocity_obstacle_slow_down_module autoware_motion_velocity_obstacle_velocity_limiter_module autoware_motion_velocity_out_of_lane_module autoware_motion_velocity_road_user_stop_module autoware_motion_velocity_run_out_module autoware_planning_validator autoware_planning_validator_intersection_collision_checker autoware_planning_validator_latency_checker autoware_planning_validator_rear_collision_checker autoware_planning_validator_test_utils autoware_planning_validator_trajectory_checker autoware_bezier_sampler autoware_frenet_planner autoware_path_sampler autoware_sampler_common autoware_cuda_pointcloud_preprocessor autoware_cuda_utils autoware_image_diagnostics autoware_image_transport_decompressor autoware_imu_corrector autoware_pcl_extensions autoware_pointcloud_preprocessor autoware_radar_objects_adapter autoware_radar_scan_to_pointcloud2 autoware_radar_static_pointcloud_filter autoware_radar_threshold_filter autoware_radar_tracks_noise_filter autoware_livox_tag_filter autoware_carla_interface autoware_dummy_perception_publisher autoware_fault_injection autoware_learning_based_vehicle_model autoware_simple_planning_simulator autoware_vehicle_door_simulator tier4_dummy_object_rviz_plugin autoware_bluetooth_monitor autoware_command_mode_decider autoware_command_mode_decider_plugins autoware_command_mode_switcher autoware_command_mode_switcher_plugins autoware_command_mode_types autoware_component_monitor autoware_component_state_monitor autoware_adapi_visualizers autoware_automatic_pose_initializer autoware_default_adapi_universe autoware_diagnostic_graph_aggregator autoware_diagnostic_graph_utils autoware_dummy_diag_publisher autoware_dummy_infrastructure autoware_duplicated_node_checker autoware_hazard_status_converter autoware_mrm_comfortable_stop_operator 
autoware_mrm_emergency_stop_operator autoware_mrm_handler autoware_pipeline_latency_monitor autoware_processing_time_checker autoware_system_monitor autoware_topic_relay_controller autoware_topic_state_monitor autoware_velodyne_monitor reaction_analyzer autoware_accel_brake_map_calibrator autoware_external_cmd_converter autoware_raw_vehicle_cmd_converter autoware_steer_offset_estimator autoware_bag_time_manager_rviz_plugin autoware_traffic_light_rviz_plugin tier4_adapi_rviz_plugin tier4_camera_view_rviz_plugin tier4_control_mode_rviz_plugin tier4_datetime_rviz_plugin tier4_perception_rviz_plugin tier4_planning_factor_rviz_plugin tier4_state_rviz_plugin tier4_system_rviz_plugin tier4_traffic_light_rviz_plugin tier4_vehicle_rviz_plugin

ROS Distro
github

Package Summary

Tags No category tags.
Version 0.47.0
License Apache License 2.0
Build type AMENT_CMAKE
Use RECOMMENDED

Repository Summary

Description
Checkout URI https://github.com/autowarefoundation/autoware_universe.git
VCS Type git
VCS Version main
Last Updated 2025-08-16
Dev Status UNKNOWN
Released UNRELEASED
Tags planner ros calibration self-driving-car autonomous-driving autonomous-vehicles ros2 3d-map autoware
Contributing Help Wanted (-)
Good First Issues (-)
Pull Requests to Review (-)

Package Description

The autoware_image_projection_based_fusion package

Additional Links

No additional links.

Maintainers

  • Yukihiro Saito
  • Yoshi Ri
  • Dai Nguyen
  • Kotaro Uetake
  • Tao Zhong
  • Taekjin Lee

Authors

No additional authors.

autoware_image_projection_based_fusion

Purpose

The autoware_image_projection_based_fusion package is designed to enhance obstacle detection accuracy by integrating information from both image-based and LiDAR-based perception. It fuses detected obstacles — such as bounding boxes or segmentation — from 2D images with 3D point clouds or other obstacle representations, including bounding boxes, clusters, or segmentation. This fusion helps to refine obstacle classification and detection in autonomous driving applications.

Fusion algorithms

The package provides multiple fusion algorithms, each designed for specific use cases. Below are the different fusion methods along with their descriptions and detailed documentation links:

Fusion Name Description Detail
roi_cluster_fusion Assigns classification labels to LiDAR-detected clusters by matching them with Regions of Interest (ROIs) from a 2D object detector. link
roi_detected_object_fusion Updates classification labels of detected objects using ROI information from a 2D object detector. link
pointpainting_fusion Augments the point cloud by painting each point with additional information from ROIs of a 2D object detector. The enriched point cloud is then processed by a 3D object detector for improved accuracy. link
roi_pointcloud_fusion Matching pointcloud with ROIs from a 2D object detector to detect unknown-labeled objects. link
segmentation_pointcloud_fusion Filtering pointcloud that are belong to less interesting region which is defined by semantic or instance segmentation by 2D image segmentation. link

Inner Workings / Algorithms

fusion_algorithm

The fusion process operates on two primary types of input data:

  • Msg3d: This includes 3D data such as point clouds, bounding boxes, or clusters from LiDAR.
  • RoIs (Regions of Interest): These are 2D detections or proposals from camera-based perception modules, such as object detection bounding boxes.

Both inputs come with timestamps, which are crucial for synchronization and fusion. Since sensors operate at different frequencies and may experience network delays, a systematic approach is needed to handle their arrival, align their timestamps, and ensure reliable fusion.

The following steps describe how the node processes these inputs, synchronizes them, and performs multi-sensor fusion.

Step 1: Matching and Creating a Collector

When a Msg3d or a set of RoIs arrives, its timestamp is checked, and an offset is subtracted to determine the reference timestamp. The node then searches for an existing collector with the same reference timestamp.

  • If a matching collector is found, the incoming data is added to it.
  • If no matching collector exists, a new collector is created and initialized with the reference timestamp.

Step 2: Triggering the Timer

Once a collector is created, a countdown timer is started. The timeout duration depends on which message type arrived first and is defined by either msg3d_timeout_sec for msg3d or rois_timeout_sec for RoIs.

The collector will attempt to fuse the collected 3D and 2D data either:

  • When both Msg3d and RoI data are available, or
  • When the timer expires.

If no Msg3d is received before the timer expires, the collector will discard the data without performing fusion.

Step 3: Fusion Process

The fusion process consists of three main stages:

  1. Preprocessing – Preparing the input data for fusion.
  2. Fusion – Aligning and merging RoIs with the 3D point cloud.
  3. Postprocessing – Refining the fused output based on the algorithm’s requirements.

The specific operations performed during these stages may vary depending on the type of fusion being applied.

Step 4: Publishing the Fused Result

After the fusion process is completed, the fused output is published. The collector is then reset to an idle state, ready to process the next incoming message.

The figure below shows how the input data is fused in different scenarios. roi_sync_image2

Parameters

All of the fusion nodes have the common parameters described in the following

{{ json_to_markdown(“perception/autoware_image_projection_based_fusion/schema/fusion_common.schema.json”) }}

Parameter Settings

Timeout

The order in which RoIs or the msg3d message arrives at the fusion node depends on your system and sensor configuration. Since the primary goal is to fuse 2D RoIs with msg3d data, msg3d is essential for processing.

If RoIs arrive earlier, they must wait until msg3d is received. You can adjust the waiting time using the rois_timeout_sec parameter.

If msg3d arrives first, the fusion process should proceed as quickly as possible, so the waiting time for msg3d (msg3d_timeout_sec) should be kept minimal.

RoIs Offsets

The offset between each camera and the LiDAR is determined by their shutter timing. To ensure accurate fusion, users must understand the timing offset between the RoIs and msg3d. Once this offset is known, it should be specified in the parameter rois_timestamp_offsets.

In the figure below, the LiDAR completes a full scan from the rear in 100 milliseconds. When the LiDAR scan reaches the area where the camera is facing, the camera is triggered, capturing an image with a corresponding timestamp. The rois_timestamp_offsets can then be calculated by subtracting the LiDAR header timestamp from the camera header timestamp. As a result, the rois_timestamp_offsets would be [0.059, 0.010, 0.026, 0.042, 0.076, 0.093].

lidar_camera_sync

To check the header timestamp of the msg3d and RoIs, user can easily run

ros2 echo [topic] --header field

Matching Strategies

We provide two matching strategies for different scenarios:

File truncated at 100 lines see the full file

CHANGELOG

Changelog for package autoware_image_projection_based_fusion

0.47.0 (2025-08-11)

  • chore(image_projection_based_fusion): add initializing status log (#11112)

    • chore(image_projection_based_fusion): add initializing status log

    * chore: change to warning ---------

  • style(pre-commit): update to clang-format-20 (#11088) Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>

  • fix(roi_cluster_fusion): fix bug in debug mode (#11054)

    • fix(roi_cluster_fusion): fix bug in debug mode
    • chore: refactor
    • chore: docs

    * fix debug iou ---------

  • fix(tier4_perception_launch): add one more camera fusion (#10973)

    • fix(tier4_perception_launch): add one more camera fusion
    • fix: missing launch
    • feat(detection.launch): add support for additional camera inputs (camera8)

    * fix: missing launch param ---------Co-authored-by: Taekjin LEE <<taekjin.lee@tier4.jp>>

  • fix(image_projection_based_fusion): loosen rois_number check (#10924)

  • feat(autoware_lidar_centerpoint): add class-wise confidence thresholds to CenterPoint (#10881)

    • Add PreprocessCuda to CenterPoint
    • style(pre-commit): autofix
    • style(pre-commit): autofix
    • Add intensity preprocessing
    • style(pre-commit): autofix
    • Fix config_.point_feature_size_ typo
    • style(pre-commit): autofix
    • Fix point typo
    • style(pre-commit): autofix
    • Change score_threshold to score_thresholds
    • Use <autoware/cuda_utils/cuda_utils.hpp> for clear_async
    • Rename pre_ptr_ to pre_proc_ptr_
    • Remove unused getCacheSize() and getIdx
    • Use template in generateVoxels_random_kernel instead
    • style(pre-commit): autofix
    • Remove references in generateVoxels_random_kernel
    • Remove references in generateVoxels_random_kernel
    • style(pre-commit): autofix
    • Remove generateIntensityFeatures_kernel and add the case of 11 to ENCODER_IN_FEATURE_SIZE for generateFeatures_kernel
    • style(pre-commit): autofix
    • Add class-wise confidence thresholds to CenterPoint
    • style(pre-commit): autofix
    • Remov empty line changes
    • Update score_threshold to score_thresholds in REAMME
    • style(pre-commit): autofix
    • Change score_thresholds from pass by value to pass by reference
    • style(pre-commit): autofix
    • Add information about class names in scehema
    • Change vector<double> to vector<float>
    • Remove thrust and add stream_ to PostProcessCUDA
    • style(pre-commit): autofix
    • Fix incorrect initialization of score_thresholds_ vector
    • Fix postprocess CudaMemCpy error
    • Fix postprocess score_thresholds_d_ptr_ typing error
    • Fix score_thresholds typing in node.cpp
    • Static casting params.score_thresholds vector
    • style(pre-commit): autofix
    • Update perception/autoware_lidar_centerpoint/src/node.cpp
    • Update perception/autoware_lidar_centerpoint/include/autoware/lidar_centerpoint/centerpoint_config.hpp
    • Update centerpoint_config.hpp
    • Update node.cpp
    • Update score_thresholds_ to double since ros2 supports only double instead of float
    • style(pre-commit): autofix
    • Fix cuda memory and revert double score_thresholds_ to float score_thresholds_

    * style(pre-commit): autofix ---------Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com> Co-authored-by: Taekjin LEE <<technolojin@gmail.com>>

  • Contributors: Kok Seang Tan, Mete Fatih Cırıt, badai nguyen

File truncated at 100 lines see the full file

Launch files

  • launch/pointpainting_fusion.launch.xml
      • input/rois_number [default: 6]
      • input/rois0 [default: rois0]
      • input/camera_info0 [default: /camera_info0]
      • input/rois1 [default: rois1]
      • input/camera_info1 [default: /camera_info1]
      • input/rois2 [default: rois2]
      • input/camera_info2 [default: /camera_info2]
      • input/rois3 [default: rois3]
      • input/camera_info3 [default: /camera_info3]
      • input/rois4 [default: rois4]
      • input/camera_info4 [default: /camera_info4]
      • input/rois5 [default: rois5]
      • input/camera_info5 [default: /camera_info5]
      • input/rois6 [default: rois6]
      • input/camera_info6 [default: /camera_info6]
      • input/rois7 [default: rois7]
      • input/camera_info7 [default: /camera_info7]
      • input/rois8 [default: rois8]
      • input/camera_info8 [default: /camera_info8]
      • input/pointcloud [default: /sensing/lidar/top/rectified/pointcloud]
      • output/objects [default: objects]
      • data_path [default: $(env HOME)/autoware_data]
      • model_name [default: pointpainting]
      • model_path [default: $(var data_path)/image_projection_based_fusion]
      • model_param_path [default: $(find-pkg-share autoware_image_projection_based_fusion)/config/pointpainting.param.yaml]
      • ml_package_param_path [default: $(var model_path)/$(var model_name)_ml_package.param.yaml]
      • class_remapper_param_path [default: $(var model_path)/detection_class_remapper.param.yaml]
      • sync_param_path [default: $(find-pkg-share autoware_image_projection_based_fusion)/config/fusion_common.param.yaml]
      • common_param_path [default: $(find-pkg-share autoware_image_projection_based_fusion)/config/pointpainting_common.param.yaml]
      • build_only [default: false]
      • use_pointcloud_container [default: false]
      • pointcloud_container_name [default: pointcloud_container]
      • input_rois_number [default: $(var input/rois_number)]
      • input/image0 [default: /image_raw0]
      • input/image1 [default: /image_raw1]
      • input/image2 [default: /image_raw2]
      • input/image3 [default: /image_raw3]
      • input/image4 [default: /image_raw4]
      • input/image5 [default: /image_raw5]
      • input/image6 [default: /image_raw6]
      • input/image7 [default: /image_raw7]
      • input/image8 [default: /image_raw8]
  • launch/roi_cluster_fusion.launch.xml
      • input/rois_number [default: 6]
      • input/rois0 [default: rois0]
      • input/camera_info0 [default: /camera_info0]
      • input/rois1 [default: rois1]
      • input/camera_info1 [default: /camera_info1]
      • input/rois2 [default: rois2]
      • input/camera_info2 [default: /camera_info2]
      • input/rois3 [default: rois3]
      • input/camera_info3 [default: /camera_info3]
      • input/rois4 [default: rois4]
      • input/camera_info4 [default: /camera_info4]
      • input/rois5 [default: rois5]
      • input/camera_info5 [default: /camera_info5]
      • input/rois6 [default: rois6]
      • input/camera_info6 [default: /camera_info6]
      • input/rois7 [default: rois7]
      • input/camera_info7 [default: /camera_info7]
      • input/rois8 [default: rois8]
      • input/camera_info8 [default: /camera_info8]
      • input/clusters [default: clusters]
      • output/clusters [default: labeled_clusters]
      • param_path [default: $(find-pkg-share autoware_image_projection_based_fusion)/config/roi_cluster_fusion.param.yaml]
      • sync_param_path [default: $(find-pkg-share autoware_image_projection_based_fusion)/config/fusion_common.param.yaml]
      • input/image0 [default: /image_raw0]
      • input/image1 [default: /image_raw1]
      • input/image2 [default: /image_raw2]
      • input/image3 [default: /image_raw3]
      • input/image4 [default: /image_raw4]
      • input/image5 [default: /image_raw5]
      • input/image6 [default: /image_raw6]
      • input/image7 [default: /image_raw7]
      • input/image8 [default: /image_raw8]
  • launch/roi_detected_object_fusion.launch.xml
      • input/rois_number [default: 6]
      • input/rois0 [default: rois0]
      • input/camera_info0 [default: /camera_info0]
      • input/rois1 [default: rois1]
      • input/camera_info1 [default: /camera_info1]
      • input/rois2 [default: rois2]
      • input/camera_info2 [default: /camera_info2]
      • input/rois3 [default: rois3]
      • input/camera_info3 [default: /camera_info3]
      • input/rois4 [default: rois4]
      • input/camera_info4 [default: /camera_info4]
      • input/rois5 [default: rois5]
      • input/camera_info5 [default: /camera_info5]
      • input/rois6 [default: rois6]
      • input/camera_info6 [default: /camera_info6]
      • input/rois7 [default: rois7]
      • input/camera_info7 [default: /camera_info7]
      • input/rois8 [default: rois8]
      • input/camera_info8 [default: /camera_info8]
      • input/objects [default: objects]
      • output/objects [default: fused_objects]
      • param_path [default: $(find-pkg-share autoware_image_projection_based_fusion)/config/roi_detected_object_fusion.param.yaml]
      • sync_param_path [default: $(find-pkg-share autoware_image_projection_based_fusion)/config/fusion_common.param.yaml]
      • input_rois_number [default: $(var input/rois_number)]
      • input/image0 [default: /image_raw0]
      • input/image1 [default: /image_raw1]
      • input/image2 [default: /image_raw2]
      • input/image3 [default: /image_raw3]
      • input/image4 [default: /image_raw4]
      • input/image5 [default: /image_raw5]
      • input/image6 [default: /image_raw6]
      • input/image7 [default: /image_raw7]
      • input/image8 [default: /image_raw8]
  • launch/roi_pointcloud_fusion.launch.xml
      • pointcloud_container_name [default: pointcloud_container]
      • input/rois_number [default: 6]
      • input/rois0 [default: rois0]
      • input/camera_info0 [default: /camera_info0]
      • input/rois1 [default: rois1]
      • input/camera_info1 [default: /camera_info1]
      • input/rois2 [default: rois2]
      • input/camera_info2 [default: /camera_info2]
      • input/rois3 [default: rois3]
      • input/camera_info3 [default: /camera_info3]
      • input/rois4 [default: rois4]
      • input/camera_info4 [default: /camera_info4]
      • input/rois5 [default: rois5]
      • input/camera_info5 [default: /camera_info5]
      • input/rois6 [default: rois6]
      • input/camera_info6 [default: /camera_info6]
      • input/rois7 [default: rois7]
      • input/camera_info7 [default: /camera_info7]
      • input/rois8 [default: rois8]
      • input/camera_info8 [default: /camera_info8]
      • input/pointcloud [default: /perception/object_recognition/detection/pointcloud_map_filtered/pointcloud]
      • output/clusters [default: output/clusters]
      • debug/clusters [default: roi_pointcloud_fusion/debug/clusters]
      • param_path [default: $(find-pkg-share autoware_image_projection_based_fusion)/config/roi_pointcloud_fusion.param.yaml]
      • sync_param_path [default: $(find-pkg-share autoware_image_projection_based_fusion)/config/fusion_common.param.yaml]
      • input_rois_number [default: $(var input/rois_number)]
      • input/image0 [default: /image_raw0]
      • input/image1 [default: /image_raw1]
      • input/image2 [default: /image_raw2]
      • input/image3 [default: /image_raw3]
      • input/image4 [default: /image_raw4]
      • input/image5 [default: /image_raw5]
      • input/image6 [default: /image_raw6]
      • input/image7 [default: /image_raw7]
      • input/image8 [default: /image_raw8]
  • launch/segmentation_pointcloud_fusion.launch.xml
      • input/camera_number [default: 1]
      • input/mask0 [default: /perception/object_recognition/detection/mask0]
      • input/camera_info0 [default: /sensing/camera/camera0/camera_info]
      • input/mask1 [default: /perception/object_recognition/detection/mask1]
      • input/camera_info1 [default: /sensing/camera/camera1/camera_info]
      • input/mask2 [default: /perception/object_recognition/detection/mask2]
      • input/camera_info2 [default: /sensing/camera/camera2/camera_info]
      • input/mask3 [default: /perception/object_recognition/detection/mask3]
      • input/camera_info3 [default: /sensing/camera/camera3/camera_info]
      • input/mask4 [default: /perception/object_recognition/detection/mask4]
      • input/camera_info4 [default: /sensing/camera/camera4/camera_info]
      • input/mask5 [default: /perception/object_recognition/detection/mask5]
      • input/camera_info5 [default: /sensing/camera/camera5/camera_info]
      • input/mask6 [default: /perception/object_recognition/detection/mask6]
      • input/camera_info6 [default: /sensing/camera/camera6/camera_info]
      • input/mask7 [default: /perception/object_recognition/detection/mask7]
      • input/camera_info7 [default: /sensing/camera/camera7/camera_info]
      • input/mask8 [default: /perception/object_recognition/detection/mask8]
      • input/camera_info8 [default: /sensing/camera/camera8/camera_info]
      • input/pointcloud [default: /sensing/lidar/top/outlier_filtered/pointcloud]
      • output/pointcloud [default: output/pointcloud]
      • sync_param_path [default: $(find-pkg-share autoware_image_projection_based_fusion)/config/fusion_common.param.yaml]
      • semantic_segmentation_based_filter_param_path [default: $(find-pkg-share autoware_image_projection_based_fusion)/config/segmentation_pointcloud_fusion.param.yaml]
      • input/image0 [default: /image_raw0]
      • input/image1 [default: /image_raw1]
      • input/image2 [default: /image_raw2]
      • input/image3 [default: /image_raw3]
      • input/image4 [default: /image_raw4]
      • input/image5 [default: /image_raw5]
      • input/image6 [default: /image_raw6]
      • input/image7 [default: /image_raw7]
      • input/image8 [default: /image_raw8]

Messages

No message files found.

Services

No service files found

Plugins

No plugins found.

Recent questions tagged autoware_image_projection_based_fusion at Robotics Stack Exchange

No version for distro noetic showing github. Known supported distros are highlighted in the buttons above.
Package symbol

autoware_image_projection_based_fusion package from autoware_universe repo

autoware_agnocast_wrapper autoware_auto_common autoware_boundary_departure_checker autoware_component_interface_specs_universe autoware_component_interface_tools autoware_component_interface_utils autoware_cuda_dependency_meta autoware_fake_test_node autoware_glog_component autoware_goal_distance_calculator autoware_grid_map_utils autoware_path_distance_calculator autoware_polar_grid autoware_time_utils autoware_traffic_light_recognition_marker_publisher autoware_traffic_light_utils autoware_universe_utils tier4_api_utils autoware_autonomous_emergency_braking autoware_collision_detector autoware_control_command_gate autoware_control_performance_analysis autoware_control_validator autoware_external_cmd_selector autoware_joy_controller autoware_lane_departure_checker autoware_mpc_lateral_controller autoware_obstacle_collision_checker autoware_operation_mode_transition_manager autoware_pid_longitudinal_controller autoware_predicted_path_checker autoware_pure_pursuit autoware_shift_decider autoware_smart_mpc_trajectory_follower autoware_stop_mode_operator autoware_trajectory_follower_base autoware_trajectory_follower_node autoware_vehicle_cmd_gate autoware_control_evaluator autoware_kinematic_evaluator autoware_localization_evaluator autoware_perception_online_evaluator autoware_planning_evaluator autoware_scenario_simulator_v2_adapter autoware_diagnostic_graph_test_examples tier4_autoware_api_launch tier4_control_launch tier4_localization_launch tier4_map_launch tier4_perception_launch tier4_planning_launch tier4_sensing_launch tier4_simulator_launch tier4_system_launch tier4_vehicle_launch autoware_geo_pose_projector autoware_ar_tag_based_localizer autoware_landmark_manager autoware_lidar_marker_localizer autoware_localization_error_monitor autoware_pose2twist autoware_pose_covariance_modifier autoware_pose_estimator_arbiter autoware_pose_instability_detector yabloc_common yabloc_image_processing yabloc_monitor yabloc_particle_filter yabloc_pose_initializer autoware_map_tf_generator autoware_bevfusion autoware_bytetrack autoware_cluster_merger autoware_compare_map_segmentation autoware_crosswalk_traffic_light_estimator autoware_detected_object_feature_remover autoware_detected_object_validation autoware_detection_by_tracker autoware_elevation_map_loader autoware_euclidean_cluster autoware_ground_segmentation autoware_image_projection_based_fusion autoware_lidar_apollo_instance_segmentation autoware_lidar_centerpoint autoware_lidar_transfusion autoware_map_based_prediction autoware_multi_object_tracker autoware_object_merger autoware_object_range_splitter autoware_object_sorter autoware_object_velocity_splitter autoware_occupancy_grid_map_outlier_filter autoware_probabilistic_occupancy_grid_map autoware_radar_fusion_to_detected_object autoware_radar_object_tracker autoware_radar_tracks_msgs_converter autoware_raindrop_cluster_filter autoware_shape_estimation autoware_simpl_prediction autoware_simple_object_merger autoware_tensorrt_bevdet autoware_tensorrt_classifier autoware_tensorrt_common autoware_tensorrt_plugins autoware_tensorrt_yolox autoware_tracking_object_merger autoware_traffic_light_arbiter autoware_traffic_light_category_merger autoware_traffic_light_classifier autoware_traffic_light_fine_detector autoware_traffic_light_map_based_detector autoware_traffic_light_multi_camera_fusion autoware_traffic_light_occlusion_predictor autoware_traffic_light_selector autoware_traffic_light_visualization perception_utils autoware_costmap_generator autoware_diffusion_planner 
autoware_external_velocity_limit_selector autoware_freespace_planner autoware_freespace_planning_algorithms autoware_hazard_lights_selector autoware_mission_planner_universe autoware_path_optimizer autoware_path_smoother autoware_remaining_distance_time_calculator autoware_rtc_interface autoware_scenario_selector autoware_surround_obstacle_checker autoware_behavior_path_avoidance_by_lane_change_module autoware_behavior_path_bidirectional_traffic_module autoware_behavior_path_dynamic_obstacle_avoidance_module autoware_behavior_path_external_request_lane_change_module autoware_behavior_path_goal_planner_module autoware_behavior_path_lane_change_module autoware_behavior_path_planner autoware_behavior_path_planner_common autoware_behavior_path_sampling_planner_module autoware_behavior_path_side_shift_module autoware_behavior_path_start_planner_module autoware_behavior_path_static_obstacle_avoidance_module autoware_behavior_velocity_blind_spot_module autoware_behavior_velocity_crosswalk_module autoware_behavior_velocity_detection_area_module autoware_behavior_velocity_intersection_module autoware_behavior_velocity_no_drivable_lane_module autoware_behavior_velocity_no_stopping_area_module autoware_behavior_velocity_occlusion_spot_module autoware_behavior_velocity_rtc_interface autoware_behavior_velocity_run_out_module autoware_behavior_velocity_speed_bump_module autoware_behavior_velocity_template_module autoware_behavior_velocity_traffic_light_module autoware_behavior_velocity_virtual_traffic_light_module autoware_behavior_velocity_walkway_module autoware_motion_velocity_boundary_departure_prevention_module autoware_motion_velocity_dynamic_obstacle_stop_module autoware_motion_velocity_obstacle_cruise_module autoware_motion_velocity_obstacle_slow_down_module autoware_motion_velocity_obstacle_velocity_limiter_module autoware_motion_velocity_out_of_lane_module autoware_motion_velocity_road_user_stop_module autoware_motion_velocity_run_out_module autoware_planning_validator autoware_planning_validator_intersection_collision_checker autoware_planning_validator_latency_checker autoware_planning_validator_rear_collision_checker autoware_planning_validator_test_utils autoware_planning_validator_trajectory_checker autoware_bezier_sampler autoware_frenet_planner autoware_path_sampler autoware_sampler_common autoware_cuda_pointcloud_preprocessor autoware_cuda_utils autoware_image_diagnostics autoware_image_transport_decompressor autoware_imu_corrector autoware_pcl_extensions autoware_pointcloud_preprocessor autoware_radar_objects_adapter autoware_radar_scan_to_pointcloud2 autoware_radar_static_pointcloud_filter autoware_radar_threshold_filter autoware_radar_tracks_noise_filter autoware_livox_tag_filter autoware_carla_interface autoware_dummy_perception_publisher autoware_fault_injection autoware_learning_based_vehicle_model autoware_simple_planning_simulator autoware_vehicle_door_simulator tier4_dummy_object_rviz_plugin autoware_bluetooth_monitor autoware_command_mode_decider autoware_command_mode_decider_plugins autoware_command_mode_switcher autoware_command_mode_switcher_plugins autoware_command_mode_types autoware_component_monitor autoware_component_state_monitor autoware_adapi_visualizers autoware_automatic_pose_initializer autoware_default_adapi_universe autoware_diagnostic_graph_aggregator autoware_diagnostic_graph_utils autoware_dummy_diag_publisher autoware_dummy_infrastructure autoware_duplicated_node_checker autoware_hazard_status_converter autoware_mrm_comfortable_stop_operator 
autoware_mrm_emergency_stop_operator autoware_mrm_handler autoware_pipeline_latency_monitor autoware_processing_time_checker autoware_system_monitor autoware_topic_relay_controller autoware_topic_state_monitor autoware_velodyne_monitor reaction_analyzer autoware_accel_brake_map_calibrator autoware_external_cmd_converter autoware_raw_vehicle_cmd_converter autoware_steer_offset_estimator autoware_bag_time_manager_rviz_plugin autoware_traffic_light_rviz_plugin tier4_adapi_rviz_plugin tier4_camera_view_rviz_plugin tier4_control_mode_rviz_plugin tier4_datetime_rviz_plugin tier4_perception_rviz_plugin tier4_planning_factor_rviz_plugin tier4_state_rviz_plugin tier4_system_rviz_plugin tier4_traffic_light_rviz_plugin tier4_vehicle_rviz_plugin

ROS Distro
github

Package Summary

Tags No category tags.
Version 0.47.0
License Apache License 2.0
Build type AMENT_CMAKE
Use RECOMMENDED

Repository Summary

Description
Checkout URI https://github.com/autowarefoundation/autoware_universe.git
VCS Type git
VCS Version main
Last Updated 2025-08-16
Dev Status UNKNOWN
Released UNRELEASED
Tags planner ros calibration self-driving-car autonomous-driving autonomous-vehicles ros2 3d-map autoware
Contributing Help Wanted (-)
Good First Issues (-)
Pull Requests to Review (-)

Package Description

The autoware_image_projection_based_fusion package

Additional Links

No additional links.

Maintainers

  • Yukihiro Saito
  • Yoshi Ri
  • Dai Nguyen
  • Kotaro Uetake
  • Tao Zhong
  • Taekjin Lee

Authors

No additional authors.

autoware_image_projection_based_fusion

Purpose

The autoware_image_projection_based_fusion package enhances obstacle detection accuracy by integrating image-based and LiDAR-based perception. It fuses 2D detections (bounding boxes or segmentation masks) from images with 3D representations such as point clouds, bounding boxes, or clusters, refining obstacle classification and detection in autonomous driving applications.

Fusion algorithms

The package provides multiple fusion algorithms, each designed for specific use cases. Below are the different fusion methods along with their descriptions and detailed documentation links:

| Fusion Name | Description | Detail |
| --- | --- | --- |
| roi_cluster_fusion | Assigns classification labels to LiDAR-detected clusters by matching them with Regions of Interest (ROIs) from a 2D object detector. | link |
| roi_detected_object_fusion | Updates classification labels of detected objects using ROI information from a 2D object detector. | link |
| pointpainting_fusion | Augments the point cloud by painting each point with additional information from ROIs of a 2D object detector. The enriched point cloud is then processed by a 3D object detector for improved accuracy. | link |
| roi_pointcloud_fusion | Matches point clouds with ROIs from a 2D object detector to detect objects with unknown labels. | link |
| segmentation_pointcloud_fusion | Filters out points belonging to regions of low interest, as defined by 2D semantic or instance segmentation. | link |

Inner Workings / Algorithms

(Figure: fusion algorithm overview)

The fusion process operates on two primary types of input data:

  • Msg3d: This includes 3D data such as point clouds, bounding boxes, or clusters from LiDAR.
  • RoIs (Regions of Interest): These are 2D detections or proposals from camera-based perception modules, such as object detection bounding boxes.

Both inputs come with timestamps, which are crucial for synchronization and fusion. Since sensors operate at different frequencies and may experience network delays, a systematic approach is needed to handle their arrival, align their timestamps, and ensure reliable fusion.

The following steps describe how the node processes these inputs, synchronizes them, and performs multi-sensor fusion.

Step 1: Matching and Creating a Collector

When a Msg3d or a set of RoIs arrives, its timestamp is checked, and an offset is subtracted to determine the reference timestamp. The node then searches for an existing collector with the same reference timestamp.

  • If a matching collector is found, the incoming data is added to it.
  • If no matching collector exists, a new collector is created and initialized with the reference timestamp.
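A minimal sketch of this matching step, assuming a hypothetical Collector type and match tolerance (neither is the package's actual API):

```python
# Sketch of Step 1; Collector and the tolerance value are illustrative,
# not the package's actual implementation.
class Collector:
    def __init__(self, ref_stamp):
        self.ref_stamp = ref_stamp  # reference timestamp shared by all inputs
        self.msg3d = None
        self.rois = {}

def find_or_create_collector(collectors, msg_stamp, offset, tolerance=0.005):
    ref_stamp = msg_stamp - offset  # subtract the known per-source offset
    for collector in collectors:
        if abs(collector.ref_stamp - ref_stamp) < tolerance:
            return collector        # matching collector found: add data to it
    collector = Collector(ref_stamp)  # no match: create a new collector
    collectors.append(collector)
    return collector
```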

Step 2: Triggering the Timer

Once a collector is created, a countdown timer is started. The timeout duration depends on which message type arrived first and is defined by either msg3d_timeout_sec for msg3d or rois_timeout_sec for RoIs.

The collector will attempt to fuse the collected 3D and 2D data either:

  • When both Msg3d and RoI data are available, or
  • When the timer expires.

If no Msg3d is received before the timer expires, the collector will discard the data without performing fusion.
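A sketch of this timer behavior, reusing the hypothetical Collector from Step 1; fuse() is a placeholder for the fusion stage described in Step 3:

```python
import threading

def start_countdown(collector, first_is_msg3d, msg3d_timeout_sec, rois_timeout_sec):
    # Choose the countdown based on which message type arrived first.
    timeout = msg3d_timeout_sec if first_is_msg3d else rois_timeout_sec
    timer = threading.Timer(timeout, on_timeout, args=(collector,))
    timer.start()
    return timer

def on_timeout(collector):
    if collector.msg3d is None:
        collector.rois.clear()  # no 3D data arrived in time: discard without fusing
    else:
        fuse(collector.msg3d, collector.rois)  # fuse with whatever RoIs arrived

def fuse(msg3d, rois):
    pass  # stands in for the algorithm-specific fusion of Step 3
```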

Step 3: Fusion Process

The fusion process consists of three main stages:

  1. Preprocessing – Preparing the input data for fusion.
  2. Fusion – Aligning and merging RoIs with the 3D point cloud.
  3. Postprocessing – Refining the fused output based on the algorithm’s requirements.

The specific operations performed during these stages may vary depending on the type of fusion being applied.
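Schematically, the three stages chain together as follows; all function names are illustrative stand-ins, and each fusion algorithm supplies its own implementations:

```python
def preprocess(msg3d, rois):
    return msg3d, rois  # e.g., bring RoIs and 3D data into a common frame

def merge(prepared):
    msg3d, rois = prepared
    return msg3d        # e.g., project RoIs onto the point cloud and merge

def postprocess(fused):
    return fused        # e.g., filter or relabel per the chosen algorithm

def run_fusion(msg3d, rois):
    # Stages 1-3 as described above; each varies by fusion algorithm.
    return postprocess(merge(preprocess(msg3d, rois)))
```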

Step 4: Publishing the Fused Result

After the fusion process is completed, the fused output is published. The collector is then reset to an idle state, ready to process the next incoming message.

The figure below shows how the input data is fused in different scenarios. (Figure: RoI synchronization scenarios)

Parameters

All fusion nodes share the common parameters described below.

{{ json_to_markdown("perception/autoware_image_projection_based_fusion/schema/fusion_common.schema.json") }}

Parameter Settings

Timeout

The order in which RoIs or the msg3d message arrives at the fusion node depends on your system and sensor configuration. Since the primary goal is to fuse 2D RoIs with msg3d data, msg3d is essential for processing.

If RoIs arrive earlier, they must wait until msg3d is received. You can adjust the waiting time using the rois_timeout_sec parameter.

If msg3d arrives first, the fusion process should proceed as quickly as possible, so the waiting time for msg3d (msg3d_timeout_sec) should be kept minimal.
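For illustration only, a hypothetical setting that follows this guidance (the actual defaults live in config/fusion_common.param.yaml):

```python
# Hypothetical values, not the package defaults.
fusion_timeouts = {
    "rois_timeout_sec": 0.5,    # RoIs may wait this long for msg3d to arrive
    "msg3d_timeout_sec": 0.05,  # keep small so msg3d proceeds quickly
}
```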

RoIs Offsets

The offset between each camera and the LiDAR is determined by their shutter timing. To ensure accurate fusion, users must understand the timing offset between the RoIs and msg3d. Once this offset is known, it should be specified in the parameter rois_timestamp_offsets.

In the figure below, the LiDAR completes a full scan from the rear in 100 milliseconds. When the LiDAR scan reaches the area where the camera is facing, the camera is triggered, capturing an image with a corresponding timestamp. The rois_timestamp_offsets can then be calculated by subtracting the LiDAR header timestamp from the camera header timestamp. As a result, the rois_timestamp_offsets would be [0.059, 0.010, 0.026, 0.042, 0.076, 0.093].

(Figure: LiDAR-camera shutter timing)
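A worked example of the subtraction described above, using illustrative header stamps chosen to reproduce those offsets:

```python
# Illustrative stamps (seconds); only the differences matter.
lidar_stamp = 100.000
camera_stamps = [100.059, 100.010, 100.026, 100.042, 100.076, 100.093]

# rois_timestamp_offsets = camera header stamp - LiDAR header stamp
rois_timestamp_offsets = [round(s - lidar_stamp, 3) for s in camera_stamps]
print(rois_timestamp_offsets)  # [0.059, 0.01, 0.026, 0.042, 0.076, 0.093]
```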

To check the header timestamps of the msg3d and RoIs topics, users can run:

ros2 topic echo [topic] --field header.stamp

Matching Strategies

We provide two matching strategies for different scenarios:

File truncated at 100 lines; see the full file.

CHANGELOG

Changelog for package autoware_image_projection_based_fusion

0.47.0 (2025-08-11)

  • chore(image_projection_based_fusion): add initializing status log (#11112)

    • chore(image_projection_based_fusion): add initializing status log

• chore: change to warning

  • style(pre-commit): update to clang-format-20 (#11088) Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>

  • fix(roi_cluster_fusion): fix bug in debug mode (#11054)

    • fix(roi_cluster_fusion): fix bug in debug mode
    • chore: refactor
    • chore: docs

• fix debug iou

  • fix(tier4_perception_launch): add one more camera fusion (#10973)

    • fix(tier4_perception_launch): add one more camera fusion
    • fix: missing launch
    • feat(detection.launch): add support for additional camera inputs (camera8)

• fix: missing launch param Co-authored-by: Taekjin LEE <taekjin.lee@tier4.jp>

  • fix(image_projection_based_fusion): loosen rois_number check (#10924)

  • feat(autoware_lidar_centerpoint): add class-wise confidence thresholds to CenterPoint (#10881)

    • Add PreprocessCuda to CenterPoint
    • style(pre-commit): autofix
    • style(pre-commit): autofix
    • Add intensity preprocessing
    • style(pre-commit): autofix
    • Fix config_.point_feature_size_ typo
    • style(pre-commit): autofix
    • Fix point typo
    • style(pre-commit): autofix
    • Change score_threshold to score_thresholds
    • Use <autoware/cuda_utils/cuda_utils.hpp> for clear_async
    • Rename pre_ptr_ to pre_proc_ptr_
    • Remove unused getCacheSize() and getIdx
    • Use template in generateVoxels_random_kernel instead
    • style(pre-commit): autofix
    • Remove references in generateVoxels_random_kernel
    • Remove references in generateVoxels_random_kernel
    • style(pre-commit): autofix
    • Remove generateIntensityFeatures_kernel and add the case of 11 to ENCODER_IN_FEATURE_SIZE for generateFeatures_kernel
    • style(pre-commit): autofix
    • Add class-wise confidence thresholds to CenterPoint
    • style(pre-commit): autofix
• Remove empty line changes
    • Update score_threshold to score_thresholds in README
    • style(pre-commit): autofix
    • Change score_thresholds from pass by value to pass by reference
    • style(pre-commit): autofix
• Add information about class names in schema
    • Change vector<double> to vector<float>
    • Remove thrust and add stream_ to PostProcessCUDA
    • style(pre-commit): autofix
    • Fix incorrect initialization of score_thresholds_ vector
    • Fix postprocess CudaMemCpy error
    • Fix postprocess score_thresholds_d_ptr_ typing error
    • Fix score_thresholds typing in node.cpp
    • Static casting params.score_thresholds vector
    • style(pre-commit): autofix
    • Update perception/autoware_lidar_centerpoint/src/node.cpp
    • Update perception/autoware_lidar_centerpoint/include/autoware/lidar_centerpoint/centerpoint_config.hpp
    • Update centerpoint_config.hpp
    • Update node.cpp
    • Update score_thresholds_ to double since ros2 supports only double instead of float
    • style(pre-commit): autofix
    • Fix cuda memory and revert double score_thresholds_ to float score_thresholds_

• style(pre-commit): autofix Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com> Co-authored-by: Taekjin LEE <technolojin@gmail.com>

  • Contributors: Kok Seang Tan, Mete Fatih Cırıt, badai nguyen

File truncated at 100 lines; see the full file.

Launch files

  • launch/pointpainting_fusion.launch.xml
      • input/rois_number [default: 6]
      • input/rois0 [default: rois0]
      • input/camera_info0 [default: /camera_info0]
      • input/rois1 [default: rois1]
      • input/camera_info1 [default: /camera_info1]
      • input/rois2 [default: rois2]
      • input/camera_info2 [default: /camera_info2]
      • input/rois3 [default: rois3]
      • input/camera_info3 [default: /camera_info3]
      • input/rois4 [default: rois4]
      • input/camera_info4 [default: /camera_info4]
      • input/rois5 [default: rois5]
      • input/camera_info5 [default: /camera_info5]
      • input/rois6 [default: rois6]
      • input/camera_info6 [default: /camera_info6]
      • input/rois7 [default: rois7]
      • input/camera_info7 [default: /camera_info7]
      • input/rois8 [default: rois8]
      • input/camera_info8 [default: /camera_info8]
      • input/pointcloud [default: /sensing/lidar/top/rectified/pointcloud]
      • output/objects [default: objects]
      • data_path [default: $(env HOME)/autoware_data]
      • model_name [default: pointpainting]
      • model_path [default: $(var data_path)/image_projection_based_fusion]
      • model_param_path [default: $(find-pkg-share autoware_image_projection_based_fusion)/config/pointpainting.param.yaml]
      • ml_package_param_path [default: $(var model_path)/$(var model_name)_ml_package.param.yaml]
      • class_remapper_param_path [default: $(var model_path)/detection_class_remapper.param.yaml]
      • sync_param_path [default: $(find-pkg-share autoware_image_projection_based_fusion)/config/fusion_common.param.yaml]
      • common_param_path [default: $(find-pkg-share autoware_image_projection_based_fusion)/config/pointpainting_common.param.yaml]
      • build_only [default: false]
      • use_pointcloud_container [default: false]
      • pointcloud_container_name [default: pointcloud_container]
      • input_rois_number [default: $(var input/rois_number)]
      • input/image0 [default: /image_raw0]
      • input/image1 [default: /image_raw1]
      • input/image2 [default: /image_raw2]
      • input/image3 [default: /image_raw3]
      • input/image4 [default: /image_raw4]
      • input/image5 [default: /image_raw5]
      • input/image6 [default: /image_raw6]
      • input/image7 [default: /image_raw7]
      • input/image8 [default: /image_raw8]
  • launch/roi_cluster_fusion.launch.xml
      • input/rois_number [default: 6]
      • input/rois0 [default: rois0]
      • input/camera_info0 [default: /camera_info0]
      • input/rois1 [default: rois1]
      • input/camera_info1 [default: /camera_info1]
      • input/rois2 [default: rois2]
      • input/camera_info2 [default: /camera_info2]
      • input/rois3 [default: rois3]
      • input/camera_info3 [default: /camera_info3]
      • input/rois4 [default: rois4]
      • input/camera_info4 [default: /camera_info4]
      • input/rois5 [default: rois5]
      • input/camera_info5 [default: /camera_info5]
      • input/rois6 [default: rois6]
      • input/camera_info6 [default: /camera_info6]
      • input/rois7 [default: rois7]
      • input/camera_info7 [default: /camera_info7]
      • input/rois8 [default: rois8]
      • input/camera_info8 [default: /camera_info8]
      • input/clusters [default: clusters]
      • output/clusters [default: labeled_clusters]
      • param_path [default: $(find-pkg-share autoware_image_projection_based_fusion)/config/roi_cluster_fusion.param.yaml]
      • sync_param_path [default: $(find-pkg-share autoware_image_projection_based_fusion)/config/fusion_common.param.yaml]
      • input/image0 [default: /image_raw0]
      • input/image1 [default: /image_raw1]
      • input/image2 [default: /image_raw2]
      • input/image3 [default: /image_raw3]
      • input/image4 [default: /image_raw4]
      • input/image5 [default: /image_raw5]
      • input/image6 [default: /image_raw6]
      • input/image7 [default: /image_raw7]
      • input/image8 [default: /image_raw8]
  • launch/roi_detected_object_fusion.launch.xml
      • input/rois_number [default: 6]
      • input/rois0 [default: rois0]
      • input/camera_info0 [default: /camera_info0]
      • input/rois1 [default: rois1]
      • input/camera_info1 [default: /camera_info1]
      • input/rois2 [default: rois2]
      • input/camera_info2 [default: /camera_info2]
      • input/rois3 [default: rois3]
      • input/camera_info3 [default: /camera_info3]
      • input/rois4 [default: rois4]
      • input/camera_info4 [default: /camera_info4]
      • input/rois5 [default: rois5]
      • input/camera_info5 [default: /camera_info5]
      • input/rois6 [default: rois6]
      • input/camera_info6 [default: /camera_info6]
      • input/rois7 [default: rois7]
      • input/camera_info7 [default: /camera_info7]
      • input/rois8 [default: rois8]
      • input/camera_info8 [default: /camera_info8]
      • input/objects [default: objects]
      • output/objects [default: fused_objects]
      • param_path [default: $(find-pkg-share autoware_image_projection_based_fusion)/config/roi_detected_object_fusion.param.yaml]
      • sync_param_path [default: $(find-pkg-share autoware_image_projection_based_fusion)/config/fusion_common.param.yaml]
      • input_rois_number [default: $(var input/rois_number)]
      • input/image0 [default: /image_raw0]
      • input/image1 [default: /image_raw1]
      • input/image2 [default: /image_raw2]
      • input/image3 [default: /image_raw3]
      • input/image4 [default: /image_raw4]
      • input/image5 [default: /image_raw5]
      • input/image6 [default: /image_raw6]
      • input/image7 [default: /image_raw7]
      • input/image8 [default: /image_raw8]
  • launch/roi_pointcloud_fusion.launch.xml
      • pointcloud_container_name [default: pointcloud_container]
      • input/rois_number [default: 6]
      • input/rois0 [default: rois0]
      • input/camera_info0 [default: /camera_info0]
      • input/rois1 [default: rois1]
      • input/camera_info1 [default: /camera_info1]
      • input/rois2 [default: rois2]
      • input/camera_info2 [default: /camera_info2]
      • input/rois3 [default: rois3]
      • input/camera_info3 [default: /camera_info3]
      • input/rois4 [default: rois4]
      • input/camera_info4 [default: /camera_info4]
      • input/rois5 [default: rois5]
      • input/camera_info5 [default: /camera_info5]
      • input/rois6 [default: rois6]
      • input/camera_info6 [default: /camera_info6]
      • input/rois7 [default: rois7]
      • input/camera_info7 [default: /camera_info7]
      • input/rois8 [default: rois8]
      • input/camera_info8 [default: /camera_info8]
      • input/pointcloud [default: /perception/object_recognition/detection/pointcloud_map_filtered/pointcloud]
      • output/clusters [default: output/clusters]
      • debug/clusters [default: roi_pointcloud_fusion/debug/clusters]
      • param_path [default: $(find-pkg-share autoware_image_projection_based_fusion)/config/roi_pointcloud_fusion.param.yaml]
      • sync_param_path [default: $(find-pkg-share autoware_image_projection_based_fusion)/config/fusion_common.param.yaml]
      • input_rois_number [default: $(var input/rois_number)]
      • input/image0 [default: /image_raw0]
      • input/image1 [default: /image_raw1]
      • input/image2 [default: /image_raw2]
      • input/image3 [default: /image_raw3]
      • input/image4 [default: /image_raw4]
      • input/image5 [default: /image_raw5]
      • input/image6 [default: /image_raw6]
      • input/image7 [default: /image_raw7]
      • input/image8 [default: /image_raw8]
  • launch/segmentation_pointcloud_fusion.launch.xml
      • input/camera_number [default: 1]
      • input/mask0 [default: /perception/object_recognition/detection/mask0]
      • input/camera_info0 [default: /sensing/camera/camera0/camera_info]
      • input/mask1 [default: /perception/object_recognition/detection/mask1]
      • input/camera_info1 [default: /sensing/camera/camera1/camera_info]
      • input/mask2 [default: /perception/object_recognition/detection/mask2]
      • input/camera_info2 [default: /sensing/camera/camera2/camera_info]
      • input/mask3 [default: /perception/object_recognition/detection/mask3]
      • input/camera_info3 [default: /sensing/camera/camera3/camera_info]
      • input/mask4 [default: /perception/object_recognition/detection/mask4]
      • input/camera_info4 [default: /sensing/camera/camera4/camera_info]
      • input/mask5 [default: /perception/object_recognition/detection/mask5]
      • input/camera_info5 [default: /sensing/camera/camera5/camera_info]
      • input/mask6 [default: /perception/object_recognition/detection/mask6]
      • input/camera_info6 [default: /sensing/camera/camera6/camera_info]
      • input/mask7 [default: /perception/object_recognition/detection/mask7]
      • input/camera_info7 [default: /sensing/camera/camera7/camera_info]
      • input/mask8 [default: /perception/object_recognition/detection/mask8]
      • input/camera_info8 [default: /sensing/camera/camera8/camera_info]
      • input/pointcloud [default: /sensing/lidar/top/outlier_filtered/pointcloud]
      • output/pointcloud [default: output/pointcloud]
      • sync_param_path [default: $(find-pkg-share autoware_image_projection_based_fusion)/config/fusion_common.param.yaml]
      • semantic_segmentation_based_filter_param_path [default: $(find-pkg-share autoware_image_projection_based_fusion)/config/segmentation_pointcloud_fusion.param.yaml]
      • input/image0 [default: /image_raw0]
      • input/image1 [default: /image_raw1]
      • input/image2 [default: /image_raw2]
      • input/image3 [default: /image_raw3]
      • input/image4 [default: /image_raw4]
      • input/image5 [default: /image_raw5]
      • input/image6 [default: /image_raw6]
      • input/image7 [default: /image_raw7]
      • input/image8 [default: /image_raw8]

Messages

No message files found.

Services

No service files found.

Plugins

No plugins found.
