
Package Summary

Version 0.50.0
License Apache License 2.0
Build type AMENT_CMAKE
Use RECOMMENDED

Repository Summary

Description
Checkout URI https://github.com/autowarefoundation/autoware_launch.git
VCS Type git
VCS Version main
Last Updated 2026-03-17
Dev Status UNKNOWN
Released UNRELEASED
Contributing Help Wanted (-)
Good First Issues (-)
Pull Requests to Review (-)

Package Description

The tier4_perception_launch package

Maintainers

  • Yukihiro Saito
  • Yoshi Ri
  • Taekjin Lee
  • Masato Saeki

Authors

No additional authors.

tier4_perception_launch

Structure

tier4_perception_launch

Package Dependencies

Please see <exec_depend> in package.xml.
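As a reminder of what to look for, runtime dependencies of a ROS 2 launch-only package are declared with `<exec_depend>` tags in `package.xml`. The sketch below shows the standard format 3 layout; the dependency names are placeholders, and the authoritative list is in this package's own `package.xml`.

```xml
<?xml version="1.0"?>
<!-- Illustrative excerpt only: the dependency names below are placeholders,
     not the real dependencies of tier4_perception_launch. -->
<package format="3">
  <name>tier4_perception_launch</name>
  <!-- A launch package has no build-time code, so its dependencies
       are typically runtime-only exec_depend entries: -->
  <exec_depend>some_detection_package</exec_depend>
  <exec_depend>some_tracking_package</exec_depend>
</package>
```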

Usage

To use perception.launch.xml, include it in your *.launch.xml as follows.

Note that you must provide each parameter file path as a PACKAGE_param_path argument. The full list of parameter paths you need to provide is documented at the top of perception.launch.xml.

  <include file="$(find-pkg-share tier4_perception_launch)/launch/perception.launch.xml">
    <!-- options for mode: camera_lidar_fusion, lidar, camera -->
    <arg name="mode" value="lidar" />

    <!-- Parameter files -->
    <arg name="FOO_param_path" value="..."/>
    <arg name="BAR_param_path" value="..."/>
    ...
  </include>
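A filled-in sketch of the include above, using one real argument name from this package's launch-file list (object_recognition_detection_euclidean_cluster_param_path); the YAML path shown is a placeholder, since the actual location depends on where your configuration package stores its parameter files.

```xml
<!-- Sketch only: the param file path below is a placeholder; consult the top of
     perception.launch.xml for the complete list of required *_param_path arguments. -->
<include file="$(find-pkg-share tier4_perception_launch)/launch/perception.launch.xml">
  <!-- mode selects the sensing pipeline: camera_lidar_fusion, lidar, or camera -->
  <arg name="mode" value="camera_lidar_fusion" />
  <!-- Each parameter file is passed as an absolute path, typically resolved
       from a configuration package via find-pkg-share: -->
  <arg name="object_recognition_detection_euclidean_cluster_param_path"
       value="/path/to/euclidean_cluster.param.yaml" />
</include>
```

Launch arguments not supplied explicitly fall back to the defaults listed for each launch file below, where a default exists.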

CHANGELOG

Changelog for package tier4_perception_launch

0.50.0 (2026-02-13)

  • Merge remote-tracking branch 'origin/main' into tmp/bot/bump_version_base
  • chore: import tier4 launchers from universe (#1740)
  • Contributors: Taeseung Sohn, github-actions

0.49.0 (2025-12-30)

  • Merge remote-tracking branch 'origin/main' into prepare-0.49.0-changelog

  • feat: add option for gpu-preprocessing in perception launch (#11728)

    • add option for GPU preprocessing

    * Rename CUDA pointclouds argument in perception launch
      Co-authored-by: Yoshi Ri <yoshiyoshidetteiu@gmail.com>

  • feat(camera_streampetr): add camera streampetr to tracker input (#11635)

  • Contributors: Ryohsuke Mitsudome, Yoshi Ri, Yuxuan Liu

0.48.0 (2025-11-18)

  • Merge remote-tracking branch 'origin/main' into humble

  • feat(image_object_locator): add near range camera VRU detector to perception pipeline (#11622)

  • feat(mult object tracker): publish merged object if it is multi-channel mode (#11386)

    • feat(multi_object_tracker): add support for merged object output and related parameters
    • feat(multi_object_tracker): add function to convert DynamicObject to DetectedObject and implement merged object publishing
    • fix(multi_object_tracker): prevent merged objects publisher from being in input channel topics
    • fix(multi_object_tracker): improve warning message for merged objects publisher in input channel
    • feat(multi_object_tracker): add is_simulation parameter to control merged object publishing
    • fix(multi_object_tracker): correct ego_frame_id variable usage and declaration
    • feat(multi_object_tracker): update getMergedObjects to accept transform and apply frame conversion
    • feat(multi_object_tracker): optimize getMergedObjects for efficient frame transformation
    • fix(multi_object_tracker): fix bug when merged_objects_pub_ is nullptr
    • feat(multi_object_tracker): refactor orientation availability conversion to improve code clarity
    • fix(multi_object_tracker): remove redundant comment in publish method for clarity
    • feat(multi_object_tracker): rename parameters for clarity and add publish_merged_objects option
    • fix(multi_object_tracker): rename pruning parameters for consistency in schema

    * Update perception/autoware_multi_object_tracker/src/processor/processor.cpp
      Co-authored-by: Yoshi Ri <yoshiyoshidetteiu@gmail.com>

    * feat(multi_object_tracker): replace 'is_simulation' with 'publish_merged_objects' in launch files and parameters
      Co-authored-by: Yoshi Ri <yoshiyoshidetteiu@gmail.com>

  • fix(camera_2d_detector): typo (#11380)

  • feat(launch): add args to select the 2d camera detection model (#11364)

    • add args
    • add color map path

    * give color_map_path to yolox.launch
      Co-authored-by: badai nguyen <94814556+badai-nguyen@users.noreply.github.com>

(Changelog truncated at 100 lines; see the repository for the full file.)

Package Dependencies

System Dependencies

No direct system dependencies.

Launch files

  • launch/object_recognition/detection/detection.launch.xml
      • mode
      • lidar_detection_model_type
      • lidar_detection_model_name
      • use_short_range_detection
      • lidar_short_range_detection_model_type
      • lidar_short_range_detection_model_name
      • use_object_filter
      • objects_filter_method
      • use_pointcloud_map
      • use_detection_by_tracker
      • use_validator
      • objects_validation_method
      • use_low_intensity_cluster_filter
      • use_image_segmentation_based_filter
      • use_multi_channel_tracker_merger
      • use_radar_tracking_fusion
      • use_irregular_object_detector
      • irregular_object_detector_fusion_camera_ids [default: [0]]
      • ml_camera_lidar_merger_priority_mode
      • camera_2d_detector/model_path
      • camera_2d_detector/label_path
      • camera_2d_detector/color_map_path
      • use_camera_vru_detector
      • camera_vru_detector_rois_ids [default: [0]]
      • number_of_cameras
      • node/pointcloud_container
      • input/pointcloud
      • input/obstacle_segmentation/pointcloud [default: /perception/obstacle_segmentation/pointcloud]
      • input/camera0/image
      • input/camera0/info
      • input/camera0/rois
      • input/camera1/image
      • input/camera1/info
      • input/camera1/rois
      • input/camera2/image
      • input/camera2/info
      • input/camera2/rois
      • input/camera3/image
      • input/camera3/info
      • input/camera3/rois
      • input/camera4/image
      • input/camera4/info
      • input/camera4/rois
      • input/camera5/image
      • input/camera5/info
      • input/camera5/rois
      • input/camera6/image
      • input/camera6/info
      • input/camera6/rois
      • input/camera7/image
      • input/camera7/info
      • input/camera7/rois
      • input/camera8/image
      • input/camera8/info
      • input/camera8/rois
      • input/concatenation_info
      • image_topic_name
      • segmentation_pointcloud_fusion_camera_ids
      • input/radar
      • input/tracked_objects [default: /perception/object_recognition/tracking/objects]
      • output/objects [default: objects]
      • sync_param_path
      • voxel_grid_based_euclidean_param_path
      • irregular_object_detector_param_path
      • object_recognition_detection_object_sorter_radar_param_path
  • launch/object_recognition/detection/detector/camera_2d_detector.launch.xml
      • image_raw0 [default: /sensing/camera/camera0/image_raw]
      • image_raw1 [default: /sensing/camera/camera1/image_raw]
      • image_raw2 [default: /sensing/camera/camera2/image_raw]
      • image_raw3 [default: /sensing/camera/camera3/image_raw]
      • image_raw4 [default: /sensing/camera/camera4/image_raw]
      • image_raw5 [default: /sensing/camera/camera5/image_raw]
      • image_raw6 [default: /sensing/camera/camera6/image_raw]
      • image_raw7 [default: /sensing/camera/camera7/image_raw]
      • image_raw8 [default: /sensing/camera/camera8/image_raw]
      • image_raw9 [default: /sensing/camera/camera9/image_raw]
      • image_number [default: 1]
      • camera_index [default: 0]
      • use_bytetrack [default: true]
      • enable_visualizer [default: false]
      • camera_2d_detector/model_path
      • camera_2d_detector/label_path
      • camera_2d_detector/color_map_path
      • tensorrt_yolox_ns [default: ]
  • launch/object_recognition/detection/detector/camera_bev_detector.launch.xml
      • input/camera0/image
      • input/camera0/info
      • input/camera1/image
      • input/camera1/info
      • input/camera2/image
      • input/camera2/info
      • input/camera3/image
      • input/camera3/info
      • input/camera4/image
      • input/camera4/info
      • input/camera5/image
      • input/camera5/info
      • input/camera6/image
      • input/camera6/info
      • input/camera7/image
      • input/camera7/info
      • output/objects
      • number_of_cameras
      • data_path [default: $(env HOME)/autoware_data]
      • bevdet_model_name [default: bevdet_one_lt_d]
      • bevdet_model_path [default: $(var data_path)/tensorrt_bevdet]
  • launch/object_recognition/detection/detector/camera_lidar_detector.launch.xml
      • ns
      • lidar_detection_model_type
      • lidar_detection_model_name
      • use_low_intensity_cluster_filter
      • use_image_segmentation_based_filter
      • number_of_cameras
      • input/camera0/image
      • input/camera0/info
      • input/camera0/rois
      • input/camera1/image
      • input/camera1/info
      • input/camera1/rois
      • input/camera2/image
      • input/camera2/info
      • input/camera2/rois
      • input/camera3/image
      • input/camera3/info
      • input/camera3/rois
      • input/camera4/image
      • input/camera4/info
      • input/camera4/rois
      • input/camera5/image
      • input/camera5/info
      • input/camera5/rois
      • input/camera6/image
      • input/camera6/info
      • input/camera6/rois
      • input/camera7/image
      • input/camera7/info
      • input/camera7/rois
      • input/camera8/image
      • input/camera8/info
      • input/camera8/rois
      • input/concatenation_info
      • segmentation_pointcloud_fusion_camera_ids
      • image_topic_name
      • sync_param_path
      • voxel_grid_based_euclidean_param_path
      • node/pointcloud_container
      • input/pointcloud
      • input/pointcloud_map/pointcloud
      • input/obstacle_segmentation/pointcloud
      • output/ml_detector/objects
      • output/rule_detector/objects
      • output/clustering/cluster_objects
      • camera_2d_detector/model_path
      • camera_2d_detector/label_path
      • camera_2d_detector/color_map_path
      • enable_2d_detection [default: false]
  • launch/object_recognition/detection/detector/camera_lidar_irregular_object_detector.launch.xml
      • ns
      • pipeline_ns
      • input/concatenation_info
      • input/pointcloud
      • fusion_camera_ids [default: [0]]
      • image_topic_name [default: image_raw]
      • irregular_object_detector_param_path
      • sync_param_path
  • launch/object_recognition/detection/detector/camera_vru_detector.launch.xml
      • ns
      • input/camera0/info [default: /sensing/camera/camera0/camera_info]
      • input/camera0/rois [default: /perception/object_recognition/detection/rois0]
      • input/camera1/info [default: /sensing/camera/camera1/camera_info]
      • input/camera1/rois [default: /perception/object_recognition/detection/rois1]
      • input/camera2/info [default: /sensing/camera/camera2/camera_info]
      • input/camera2/rois [default: /perception/object_recognition/detection/rois2]
      • input/camera3/info [default: /sensing/camera/camera3/camera_info]
      • input/camera3/rois [default: /perception/object_recognition/detection/rois3]
      • input/camera4/info [default: /sensing/camera/camera4/camera_info]
      • input/camera4/rois [default: /perception/object_recognition/detection/rois4]
      • input/camera5/info [default: /sensing/camera/camera5/camera_info]
      • input/camera5/rois [default: /perception/object_recognition/detection/rois5]
      • input/camera6/info [default: /sensing/camera/camera6/camera_info]
      • input/camera6/rois [default: /perception/object_recognition/detection/rois6]
      • input/camera7/info [default: /sensing/camera/camera7/camera_info]
      • input/camera7/rois [default: /perception/object_recognition/detection/rois7]
      • output/objects [default: /perception/object_recognition/detection/camera_vru/objects]
      • bbox_object_locator_param_path [default: $(find-pkg-share autoware_image_object_locator)/config/bbox_object_locator.param.yaml]
      • rois_ids [default: [0, 1]]
  • launch/object_recognition/detection/detector/lidar_dnn_detector.launch.xml
      • lidar_detection_model_type
      • lidar_detection_model_name
      • bevfusion_model_path [default: $(var data_path)/bevfusion]
      • centerpoint_model_path [default: $(var data_path)/lidar_centerpoint]
      • transfusion_model_path [default: $(var data_path)/lidar_transfusion]
      • use_short_range_detection [default: false]
      • lidar_short_range_detection_model_type
      • lidar_short_range_detection_model_name
      • short_range_centerpoint_model_path [default: $(var data_path)/lidar_short_range_centerpoint]
      • node/pointcloud_container
      • input/pointcloud
      • output/objects
      • output/short_range_objects
      • lidar_short_range_model_param_path [default: $(find-pkg-share autoware_lidar_centerpoint)/config]
      • lidar_model_param_path [default: $(find-pkg-share autoware_bevfusion)/config]
      • lidar_model_param_path [default: $(find-pkg-share autoware_lidar_transfusion)/config]
      • lidar_model_param_path [default: $(find-pkg-share autoware_lidar_centerpoint)/config]
  • launch/object_recognition/detection/detector/lidar_rule_detector.launch.xml
      • ns
      • node/pointcloud_container
      • input/pointcloud_map/pointcloud
      • input/obstacle_segmentation/pointcloud
      • output/cluster_objects
      • output/objects
      • voxel_grid_based_euclidean_param_path
  • launch/object_recognition/detection/detector/tracker_based_detector.launch.xml
      • input/clusters
      • input/tracked_objects
      • output/objects
  • launch/object_recognition/detection/filter/object_filter.launch.xml
      • objects_filter_method [default: lanelet_filter]
      • input/objects
      • output/objects
  • launch/object_recognition/detection/filter/object_validator.launch.xml
      • objects_validation_method
      • input/obstacle_pointcloud
      • input/objects
      • output/objects
  • launch/object_recognition/detection/filter/radar_filter.launch.xml
      • object_sorter_param_path [default: $(var object_recognition_detection_object_sorter_radar_param_path)]
      • radar_lanelet_filtering_range_param_path [default: $(find-pkg-share autoware_detected_object_validation)/config/object_lanelet_filter.param.yaml]
      • input/radar
      • output/objects
  • launch/object_recognition/detection/merger/camera_lidar_merger.launch.xml
      • object_recognition_detection_object_merger_data_association_matrix_param_path [default: $(find-pkg-share autoware_object_merger)/config/data_association_matrix.param.yaml]
      • object_recognition_detection_object_merger_distance_threshold_list_path [default: $(find-pkg-share autoware_object_merger)/config/overlapped_judge.param.yaml]
      • ml_camera_lidar_object_association_merger_param_path [default: $(find-pkg-share autoware_object_merger)/config/object_association_merger.param.yaml]
      • lidar_detection_model_type
      • use_detection_by_tracker
      • use_irregular_object_detector
      • use_object_filter
      • objects_filter_method
      • number_of_cameras
      • input/camera0/image
      • input/camera0/info
      • input/camera0/rois
      • input/camera1/image
      • input/camera1/info
      • input/camera1/rois
      • input/camera2/image
      • input/camera2/info
      • input/camera2/rois
      • input/camera3/image
      • input/camera3/info
      • input/camera3/rois
      • input/camera4/image
      • input/camera4/info
      • input/camera4/rois
      • input/camera5/image
      • input/camera5/info
      • input/camera5/rois
      • input/camera6/image
      • input/camera6/info
      • input/camera6/rois
      • input/camera7/image
      • input/camera7/info
      • input/camera7/rois
      • input/camera8/image
      • input/camera8/info
      • input/camera8/rois
      • input/lidar_ml/objects
      • input/lidar_rule/objects
      • input/detection_by_tracker/objects
      • output/objects [default: objects]
      • alpha_merger_priority_mode [default: 0]
  • launch/object_recognition/detection/merger/camera_lidar_radar_merger.launch.xml
      • object_recognition_detection_object_merger_data_association_matrix_param_path [default: $(find-pkg-share autoware_object_merger)/config/data_association_matrix.param.yaml]
      • object_recognition_detection_object_merger_distance_threshold_list_path [default: $(find-pkg-share autoware_object_merger)/config/overlapped_judge.param.yaml]
      • ml_camera_lidar_object_association_merger_param_path [default: $(find-pkg-share autoware_object_merger)/config/object_association_merger.param.yaml]
      • far_object_merger_sync_queue_size [default: 20]
      • lidar_detection_model_type
      • use_radar_tracking_fusion
      • use_detection_by_tracker
      • use_irregular_object_detector
      • use_object_filter
      • objects_filter_method
      • number_of_cameras
      • input/camera0/image
      • input/camera0/info
      • input/camera0/rois
      • input/camera1/image
      • input/camera1/info
      • input/camera1/rois
      • input/camera2/image
      • input/camera2/info
      • input/camera2/rois
      • input/camera3/image
      • input/camera3/info
      • input/camera3/rois
      • input/camera4/image
      • input/camera4/info
      • input/camera4/rois
      • input/camera5/image
      • input/camera5/info
      • input/camera5/rois
      • input/camera6/image
      • input/camera6/info
      • input/camera6/rois
      • input/camera7/image
      • input/camera7/info
      • input/camera7/rois
      • input/camera8/image
      • input/camera8/info
      • input/camera8/rois
      • input/lidar_ml/objects
      • input/lidar_rule/objects
      • input/radar/objects
      • input/radar_far/objects
      • input/detection_by_tracker/objects
      • output/objects [default: objects]
      • alpha_merger_priority_mode [default: 0]
  • launch/object_recognition/detection/merger/lidar_merger.launch.xml
      • object_recognition_detection_object_merger_data_association_matrix_param_path [default: $(find-pkg-share autoware_object_merger)/config/data_association_matrix.param.yaml]
      • object_recognition_detection_object_merger_distance_threshold_list_path [default: $(find-pkg-share autoware_object_merger)/config/overlapped_judge.param.yaml]
      • lidar_detection_model_type
      • use_detection_by_tracker
      • use_object_filter
      • objects_filter_method
      • input/lidar_ml/objects [default: $(var lidar_detection_model_type)/objects]
      • input/lidar_rule/objects [default: clustering/objects]
      • input/detection_by_tracker/objects [default: detection_by_tracker/objects]
      • output/objects
  • launch/object_recognition/prediction/prediction.launch.xml
      • use_vector_map [default: false]
      • prediction_model_type [default: map_based]
      • input/objects [default: /perception/object_recognition/tracking/objects]
  • launch/object_recognition/tracking/tracking.launch.xml
      • object_recognition_tracking_radar_tracked_object_sorter_param_path
      • object_recognition_tracking_radar_tracked_object_lanelet_filter_param_path
      • object_recognition_tracking_object_merger_data_association_matrix_param_path
      • object_recognition_tracking_object_merger_node_param_path
      • mode [default: lidar]
      • use_radar_tracking_fusion [default: false]
      • use_multi_channel_tracker_merger
      • use_validator
      • use_short_range_detection
      • use_camera_vru_detector
      • publish_merged_objects
      • lidar_detection_model_type [default: centerpoint]
      • input/merged_detection/channel [default: detected_objects]
      • input/merged_detection/objects [default: /perception/object_recognition/detection/objects]
      • input/lidar_dnn/channel [default: lidar_$(var lidar_detection_model_type)]
      • input/lidar_dnn/objects [default: /perception/object_recognition/detection/$(var lidar_detection_model_type)/objects]
      • input/lidar_dnn_validated/objects [default: /perception/object_recognition/detection/$(var lidar_detection_model_type)/validation/objects]
      • input/lidar_dnn_short_range/channel [default: lidar_$(var lidar_short_range_detection_model_type)]
      • input/lidar_dnn_short_range/objects [default: /perception/object_recognition/detection/$(var lidar_short_range_detection_model_type)/objects]
      • input/camera_lidar_rule_detector/channel [default: camera_lidar_fusion]
      • input/camera_lidar_rule_detector/objects [default: /perception/object_recognition/detection/clustering/camera_lidar_fusion/objects]
      • input/irregular_object_detector/channel [default: camera_lidar_fusion_irregular]
      • input/irregular_object_detector/objects [default: /perception/object_recognition/detection/irregular_object/objects]
      • input/tracker_based_detector/channel [default: detection_by_tracker]
      • input/tracker_based_detector/objects [default: /perception/object_recognition/detection/detection_by_tracker/objects]
      • input/radar/channel [default: radar]
      • input/radar/far_objects [default: /perception/object_recognition/detection/radar/far_objects]
      • input/radar/objects [default: /perception/object_recognition/detection/radar/objects]
      • input/radar/tracked_objects [default: /sensing/radar/tracked_objects]
      • input/camera_only/objects [default: /perception/object_recognition/detection/camera_only/objects]
      • input/camera_only/channel [default: camera_streampetr]
      • input/camera_vru/channel [default: camera_vru]
      • input/camera_vru/objects [default: /perception/object_recognition/detection/camera_vru/objects]
      • output/objects [default: $(var ns)/objects]
      • output/merged_objects [default: $(var ns)/merged_objects]
  • launch/occupancy_grid_map/probabilistic_occupancy_grid_map.launch.xml
      • input/obstacle_pointcloud [default: concatenated/pointcloud]
      • input/raw_pointcloud [default: no_ground/oneshot/pointcloud]
      • output [default: /perception/occupancy_grid_map/map]
      • use_intra_process [default: false]
      • use_multithread [default: false]
      • pointcloud_container_name [default: pointcloud_container]
      • occupancy_grid_map_method
      • occupancy_grid_map_param_path
      • occupancy_grid_map_updater
      • occupancy_grid_map_updater_param_path
      • input_obstacle_pointcloud [default: false]
      • input_obstacle_and_raw_pointcloud [default: true]
      • use_pointcloud_container [default: true]
  • launch/perception.launch.xml
      • object_recognition_detection_euclidean_cluster_param_path
      • object_recognition_detection_outlier_param_path
      • object_recognition_detection_object_lanelet_filter_param_path
      • object_recognition_detection_object_position_filter_param_path
      • object_recognition_detection_pointcloud_map_filter_param_path
      • object_recognition_prediction_map_based_prediction_param_path
      • object_recognition_detection_object_merger_data_association_matrix_param_path
      • ml_camera_lidar_object_association_merger_param_path
      • object_recognition_detection_object_merger_distance_threshold_list_path
      • object_recognition_detection_fusion_sync_param_path
      • object_recognition_detection_roi_cluster_fusion_param_path
      • object_recognition_detection_irregular_object_detector_param_path
      • object_recognition_detection_roi_detected_object_fusion_param_path
      • object_recognition_detection_near_range_camera_vru_param_path
      • object_recognition_detection_pointpainting_fusion_common_param_path
      • object_recognition_detection_lidar_model_param_path
      • object_recognition_detection_radar_lanelet_filtering_range_param_path
      • object_recognition_detection_object_sorter_radar_param_path
      • object_recognition_tracking_multi_object_tracker_data_association_matrix_param_path
      • object_recognition_tracking_multi_object_tracker_input_channels_param_path
      • object_recognition_tracking_multi_object_tracker_node_param_path
      • object_recognition_tracking_radar_tracked_object_sorter_param_path
      • object_recognition_tracking_radar_tracked_object_lanelet_filter_param_path
      • obstacle_segmentation_ground_segmentation_param_path
      • obstacle_segmentation_ground_segmentation_elevation_map_param_path
      • object_recognition_detection_obstacle_pointcloud_based_validator_param_path
      • object_recognition_detection_detection_by_tracker_param
      • occupancy_grid_map_method
      • occupancy_grid_map_param_path
      • occupancy_grid_map_updater
      • occupancy_grid_map_updater_param_path
      • lidar_detection_model
      • each_traffic_light_map_based_detector_param_path
      • traffic_light_fine_detector_param_path
      • yolox_traffic_light_detector_param_path
      • car_traffic_light_classifier_param_path
      • pedestrian_traffic_light_classifier_param_path
      • traffic_light_roi_visualizer_param_path
      • traffic_light_occlusion_predictor_param_path
      • traffic_light_multi_camera_fusion_param_path
      • traffic_light_arbiter_param_path
      • crosswalk_traffic_light_estimator_param_path
      • tracker_publish_merged_objects
      • use_short_range_detection [default: false]
      • lidar_short_range_detection_model_type [default: centerpoint_short_range]
      • lidar_short_range_detection_model_name [default: centerpoint_short_range]
      • bevfusion_model_path [default: $(var data_path)/bevfusion]
      • centerpoint_model_path [default: $(var data_path)/lidar_centerpoint]
      • transfusion_model_path [default: $(var data_path)/lidar_transfusion]
      • short_range_centerpoint_model_path [default: $(var data_path)/lidar_short_range_centerpoint]
      • pointpainting_model_path [default: $(var data_path)/image_projection_based_fusion]
      • camera_2d_detector/model_path
      • camera_2d_detector/label_path
      • camera_2d_detector/color_map_path
      • input/pointcloud [default: /sensing/lidar/concatenated/pointcloud]
      • mode [default: camera_lidar_fusion]
      • data_path [default: $(env HOME)/autoware_data]
      • image_raw0 [default: /sensing/camera/camera0/image_rect_color]
      • camera_info0 [default: /sensing/camera/camera0/camera_info]
      • detection_rois0 [default: /perception/object_recognition/detection/rois0]
      • image_raw1 [default: /sensing/camera/camera1/image_rect_color]
      • camera_info1 [default: /sensing/camera/camera1/camera_info]
      • detection_rois1 [default: /perception/object_recognition/detection/rois1]
      • image_raw2 [default: /sensing/camera/camera2/image_rect_color]
      • camera_info2 [default: /sensing/camera/camera2/camera_info]
      • detection_rois2 [default: /perception/object_recognition/detection/rois2]
      • image_raw3 [default: /sensing/camera/camera3/image_rect_color]
      • camera_info3 [default: /sensing/camera/camera3/camera_info]
      • detection_rois3 [default: /perception/object_recognition/detection/rois3]
      • image_raw4 [default: /sensing/camera/camera4/image_rect_color]
      • camera_info4 [default: /sensing/camera/camera4/camera_info]
      • detection_rois4 [default: /perception/object_recognition/detection/rois4]
      • image_raw5 [default: /sensing/camera/camera5/image_rect_color]
      • camera_info5 [default: /sensing/camera/camera5/camera_info]
      • detection_rois5 [default: /perception/object_recognition/detection/rois5]
      • image_raw6 [default: /sensing/camera/camera6/image_rect_color]
      • camera_info6 [default: /sensing/camera/camera6/camera_info]
      • detection_rois6 [default: /perception/object_recognition/detection/rois6]
      • image_raw7 [default: /sensing/camera/camera7/image_rect_color]
      • camera_info7 [default: /sensing/camera/camera7/camera_info]
      • detection_rois7 [default: /perception/object_recognition/detection/rois7]
      • image_raw8 [default: /sensing/camera/camera8/image_rect_color]
      • camera_info8 [default: /sensing/camera/camera8/camera_info]
      • detection_rois8 [default: /perception/object_recognition/detection/rois8]
      • image_number [default: 6]
      • image_topic_name [default: image_rect_color]
      • segmentation_pointcloud_fusion_camera_ids [default: [0,1,5]]
      • camera_vru_detector_rois_ids [default: [0]]
      • ml_camera_lidar_merger_priority_mode [default: 0]
      • pointcloud_container_name [default: pointcloud_container]
      • input/concatenation_info [default: /sensing/lidar/concatenated/pointcloud_info]
      • use_vector_map [default: true]
      • use_pointcloud_map [default: true]
      • use_low_height_cropbox [default: true]
      • use_object_filter [default: true]
      • objects_filter_method [default: lanelet_filter]
      • use_irregular_object_detector [default: true]
      • use_low_intensity_cluster_filter [default: true]
      • use_image_segmentation_based_filter [default: false]
      • use_empty_dynamic_object_publisher [default: false]
      • use_object_validator [default: true]
      • objects_validation_method [default: obstacle_pointcloud]
      • use_perception_online_evaluator [default: false]
      • use_perception_analytics_publisher [default: true]
      • use_obstacle_segmentation_single_frame_filter
      • use_obstacle_segmentation_time_series_filter
      • use_camera_vru_detector [default: false]
      • use_cuda_ground_segmentation [default: false]
      • use_traffic_light_recognition
      • traffic_light_recognition/fusion_only
      • traffic_light_recognition/camera_namespaces
      • traffic_light_recognition/use_high_accuracy_detection
      • traffic_light_recognition/high_accuracy_detection_type
      • input_pointcloud_for_traffic_light_occlusion_predictor
      • traffic_light_recognition/whole_image_detection/model_path
      • traffic_light_recognition/whole_image_detection/label_path
      • traffic_light_recognition/fine_detection/model_path
      • traffic_light_recognition/fine_detection/label_path
      • traffic_light_recognition/classification/car/model_path
      • traffic_light_recognition/classification/car/label_path
      • traffic_light_recognition/classification/pedestrian/model_path
      • traffic_light_recognition/classification/pedestrian/label_path
      • use_detection_by_tracker [default: true]
      • use_radar_tracking_fusion [default: true]
      • input/radar [default: /sensing/radar/detected_objects]
      • use_multi_channel_tracker_merger [default: false]
      • output/tracker_merged_objects [default: /perception/object_recognition/detection/objects]
      • downsample_perception_common_pointcloud [default: false]
      • cuda_pointcloud_preprocessing [default: false]
      • common_downsample_voxel_size_x [default: 0.05]
      • common_downsample_voxel_size_y [default: 0.05]
      • common_downsample_voxel_size_z [default: 0.05]
  • launch/traffic_light_recognition/traffic_light.launch.xml
      • enable_image_decompressor [default: true]
      • fusion_only
      • camera_namespaces
      • use_high_accuracy_detection
      • high_accuracy_detection_type
      • each_traffic_light_map_based_detector_param_path
      • traffic_light_fine_detector_param_path
      • yolox_traffic_light_detector_param_path
      • car_traffic_light_classifier_param_path
      • pedestrian_traffic_light_classifier_param_path
      • traffic_light_roi_visualizer_param_path
      • traffic_light_occlusion_predictor_param_path
      • traffic_light_multi_camera_fusion_param_path
      • traffic_light_arbiter_param_path
      • crosswalk_traffic_light_estimator_param_path
      • whole_image_detection/model_path
      • whole_image_detection/label_path
      • fine_detection/model_path
      • fine_detection/label_path
      • classification/car/model_path
      • classification/car/label_path
      • classification/pedestrian/model_path
      • classification/pedestrian/label_path
      • input/vector_map [default: /map/vector_map]
      • input/route [default: /planning/mission_planning/route]
      • input_pointcloud_for_traffic_light_occlusion_predictor [default: /sensing/lidar/top/pointcloud_raw_ex]
      • internal/traffic_signals [default: /perception/traffic_light_recognition/internal/traffic_signals]
      • external/traffic_signals [default: /perception/traffic_light_recognition/external/traffic_signals]
      • judged/traffic_signals [default: /perception/traffic_light_recognition/judged/traffic_signals]
      • output/traffic_signals [default: /perception/traffic_light_recognition/traffic_signals]
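The traffic light pipeline can also be included on its own, in the same way as perception.launch.xml. The following is an illustrative sketch only: the argument names come from the listing above, but all values shown are placeholders or the listed defaults, not a verified configuration.

  <include file="$(find-pkg-share tier4_perception_launch)/launch/traffic_light_recognition/traffic_light.launch.xml">
    <!-- camera namespaces and detection options (values are placeholders) -->
    <arg name="camera_namespaces" value="[camera6, camera7]" />
    <arg name="use_high_accuracy_detection" value="true" />
    <arg name="fusion_only" value="false" />

    <!-- Parameter files (paths are placeholders) -->
    <arg name="traffic_light_fine_detector_param_path" value="..." />
    <arg name="traffic_light_arbiter_param_path" value="..." />
    ...
  </include>

Arguments not set explicitly fall back to the defaults shown in the listing (for example, input/vector_map defaults to /map/vector_map).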

Messages

No message files found.

Services

No service files found.

Plugins

No plugins found.

CHANGELOG

Changelog for package tier4_perception_launch

0.50.0 (2026-02-13)

  • Merge remote-tracking branch 'origin/main' into tmp/bot/bump_version_base
  • chore: import tier4 launchers from universe (#1740)
  • Contributors: Taeseung Sohn, github-actions

0.49.0 (2025-12-30)

  • Merge remote-tracking branch 'origin/main' into prepare-0.49.0-changelog

  • feat: add option for gpu-preprocessing in perception launch (#11728)

    • add option for GPU preprocessing

• Rename CUDA pointclouds argument in perception launch (Co-authored-by: Yoshi Ri <yoshiyoshidetteiu@gmail.com>)

  • feat(camera_streampetr): add camera streampetr to tracker input (#11635)

  • Contributors: Ryohsuke Mitsudome, Yoshi Ri, Yuxuan Liu

0.48.0 (2025-11-18)

  • Merge remote-tracking branch 'origin/main' into humble

  • feat(image_object_locator): add near range camera VRU detector to perception pipeline (#11622)

  • feat(multi_object_tracker): publish merged objects when in multi-channel mode (#11386)

    • feat(multi_object_tracker): add support for merged object output and related parameters
    • feat(multi_object_tracker): add function to convert DynamicObject to DetectedObject and implement merged object publishing
    • fix(multi_object_tracker): prevent merged objects publisher from being in input channel topics
    • fix(multi_object_tracker): improve warning message for merged objects publisher in input channel
    • feat(multi_object_tracker): add is_simulation parameter to control merged object publishing
    • fix(multi_object_tracker): correct ego_frame_id variable usage and declaration
    • feat(multi_object_tracker): update getMergedObjects to accept transform and apply frame conversion
    • feat(multi_object_tracker): optimize getMergedObjects for efficient frame transformation
    • fix(multi_object_tracker): fix bug when merged_objects_pub_ is nullptr
    • feat(multi_object_tracker): refactor orientation availability conversion to improve code clarity
    • fix(multi_object_tracker): remove redundant comment in publish method for clarity
    • feat(multi_object_tracker): rename parameters for clarity and add publish_merged_objects option
    • fix(multi_object_tracker): rename pruning parameters for consistency in schema

    • Update perception/autoware_multi_object_tracker/src/processor/processor.cpp (Co-authored-by: Yoshi Ri <yoshiyoshidetteiu@gmail.com>)

    • feat(multi_object_tracker): replace 'is_simulation' with 'publish_merged_objects' in launch files and parameters (Co-authored-by: Yoshi Ri <yoshiyoshidetteiu@gmail.com>)

  • fix(camera_2d_detector): typo (#11380)

  • feat(launch): add args to select the 2d camera detection model (#11364)

    • add args
    • add color map path

    • give color_map_path to yolox.launch (Co-authored-by: badai nguyen <94814556+badai-nguyen@users.noreply.github.com>)

Changelog truncated at 100 lines; see the full file.
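The 0.48.0 entries above renamed the tracker's 'is_simulation' option to a publish-merged-objects flag, and 0.49.0 added a GPU pointcloud-preprocessing option. When including perception.launch.xml, both are passed like any other argument. A sketch under assumptions: the argument names (tracker_publish_merged_objects, cuda_pointcloud_preprocessing) are taken from the launch-file listings on this page, but the values shown are illustrative, not recommendations.

  <include file="$(find-pkg-share tier4_perception_launch)/launch/perception.launch.xml">
    <arg name="mode" value="camera_lidar_fusion" />
    <!-- renamed in 0.48.0: previously controlled via 'is_simulation' -->
    <arg name="tracker_publish_merged_objects" value="false" />
    <!-- added in 0.49.0; defaults to false per the listing below -->
    <arg name="cuda_pointcloud_preprocessing" value="false" />
  </include>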

      • judged/traffic_signals [default: /perception/traffic_light_recognition/judged/traffic_signals]
      • output/traffic_signals [default: /perception/traffic_light_recognition/traffic_signals]
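
The traffic light recognition pipeline listed above can also be included on its own by overriding these arguments. A minimal, illustrative sketch — the argument values below (camera namespace, detection type) are assumptions for a single-camera setup, not defaults from this package:

```xml
<!-- Illustrative only: includes traffic_light.launch.xml with a subset of its
     arguments. The camera namespace and detection-type values are assumptions. -->
<include file="$(find-pkg-share tier4_perception_launch)/launch/traffic_light_recognition/traffic_light.launch.xml">
  <arg name="fusion_only" value="false"/>
  <arg name="camera_namespaces" value="[camera6]"/>
  <arg name="use_high_accuracy_detection" value="true"/>
  <arg name="enable_image_decompressor" value="true"/>
  <!-- *_param_path and model/label path arguments omitted for brevity -->
</include>
```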

Messages

No message files found.

Services

No service files found.

Plugins

No plugins found.

CHANGELOG

Changelog for package tier4_perception_launch

0.50.0 (2026-02-13)

  • Merge remote-tracking branch 'origin/main' into tmp/bot/bump_version_base
  • chore: import tier4 launchers from universe (#1740)
  • Contributors: Taeseung Sohn, github-actions

0.49.0 (2025-12-30)

  • Merge remote-tracking branch 'origin/main' into prepare-0.49.0-changelog

  • feat: add option for gpu-preprocessing in perception launch (#11728)

    • add option for GPU preprocessing

    • Rename CUDA pointclouds argument in perception launch (Co-authored-by: Yoshi Ri <yoshiyoshidetteiu@gmail.com>)

  • feat(camera_streampetr): add camera streampetr to tracker input (#11635)

  • Contributors: Ryohsuke Mitsudome, Yoshi Ri, Yuxuan Liu

0.48.0 (2025-11-18)

  • Merge remote-tracking branch 'origin/main' into humble

  • feat(image_object_locator): add near range camera VRU detector to perception pipeline (#11622)

  • feat(multi_object_tracker): publish merged object if it is multi-channel mode (#11386)

    • feat(multi_object_tracker): add support for merged object output and related parameters
    • feat(multi_object_tracker): add function to convert DynamicObject to DetectedObject and implement merged object publishing
    • fix(multi_object_tracker): prevent merged objects publisher from being in input channel topics
    • fix(multi_object_tracker): improve warning message for merged objects publisher in input channel
    • feat(multi_object_tracker): add is_simulation parameter to control merged object publishing
    • fix(multi_object_tracker): correct ego_frame_id variable usage and declaration
    • feat(multi_object_tracker): update getMergedObjects to accept transform and apply frame conversion
    • feat(multi_object_tracker): optimize getMergedObjects for efficient frame transformation
    • fix(multi_object_tracker): fix bug when merged_objects_pub_ is nullptr
    • feat(multi_object_tracker): refactor orientation availability conversion to improve code clarity
    • fix(multi_object_tracker): remove redundant comment in publish method for clarity
    • feat(multi_object_tracker): rename parameters for clarity and add publish_merged_objects option
    • fix(multi_object_tracker): rename pruning parameters for consistency in schema

    • Update perception/autoware_multi_object_tracker/src/processor/processor.cpp (Co-authored-by: Yoshi Ri <yoshiyoshidetteiu@gmail.com>)

    • feat(multi_object_tracker): replace 'is_simulation' with 'publish_merged_objects' in launch files and parameters (Co-authored-by: Yoshi Ri <yoshiyoshidetteiu@gmail.com>)

  • fix(camera_2d_detector): typo (#11380)

  • feat(launch): add args to select the 2d camera detection model (#11364)

    • add args
    • add color map path

    • give color_map_path to yolox.launch (Co-authored-by: badai nguyen <94814556+badai-nguyen@users.noreply.github.com>)

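The options introduced in the changelog above (GPU preprocessing in 0.49.0, merged tracker output in 0.48.0) are exposed as plain launch arguments on perception.launch.xml. A hedged sketch of enabling them — argument names are taken from the perception.launch.xml argument listing, while the mode value and the omission of the required `*_param_path` arguments are simplifications for illustration:

```xml
<!-- Illustrative sketch: enable CUDA pointcloud preprocessing and merged
     tracker objects. Required *_param_path arguments are omitted here. -->
<include file="$(find-pkg-share tier4_perception_launch)/launch/perception.launch.xml">
  <arg name="mode" value="camera_lidar_fusion"/>
  <arg name="cuda_pointcloud_preprocessing" value="true"/>
  <arg name="tracker_publish_merged_objects" value="true"/>
  <!-- parameter files omitted; see the list at the top of perception.launch.xml -->
</include>
```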

Package Dependencies

System Dependencies

No direct system dependencies.

Launch files

  • launch/object_recognition/detection/detection.launch.xml
      • mode
      • lidar_detection_model_type
      • lidar_detection_model_name
      • use_short_range_detection
      • lidar_short_range_detection_model_type
      • lidar_short_range_detection_model_name
      • use_object_filter
      • objects_filter_method
      • use_pointcloud_map
      • use_detection_by_tracker
      • use_validator
      • objects_validation_method
      • use_low_intensity_cluster_filter
      • use_image_segmentation_based_filter
      • use_multi_channel_tracker_merger
      • use_radar_tracking_fusion
      • use_irregular_object_detector
      • irregular_object_detector_fusion_camera_ids [default: [0]]
      • ml_camera_lidar_merger_priority_mode
      • camera_2d_detector/model_path
      • camera_2d_detector/label_path
      • camera_2d_detector/color_map_path
      • use_camera_vru_detector
      • camera_vru_detector_rois_ids [default: [0]]
      • number_of_cameras
      • node/pointcloud_container
      • input/pointcloud
      • input/obstacle_segmentation/pointcloud [default: /perception/obstacle_segmentation/pointcloud]
      • input/camera0/image
      • input/camera0/info
      • input/camera0/rois
      • input/camera1/image
      • input/camera1/info
      • input/camera1/rois
      • input/camera2/image
      • input/camera2/info
      • input/camera2/rois
      • input/camera3/image
      • input/camera3/info
      • input/camera3/rois
      • input/camera4/image
      • input/camera4/info
      • input/camera4/rois
      • input/camera5/image
      • input/camera5/info
      • input/camera5/rois
      • input/camera6/image
      • input/camera6/info
      • input/camera6/rois
      • input/camera7/image
      • input/camera7/info
      • input/camera7/rois
      • input/camera8/image
      • input/camera8/info
      • input/camera8/rois
      • input/concatenation_info
      • image_topic_name
      • segmentation_pointcloud_fusion_camera_ids
      • input/radar
      • input/tracked_objects [default: /perception/object_recognition/tracking/objects]
      • output/objects [default: objects]
      • sync_param_path
      • voxel_grid_based_euclidean_param_path
      • irregular_object_detector_param_path
      • object_recognition_detection_object_sorter_radar_param_path
  • launch/object_recognition/detection/detector/camera_2d_detector.launch.xml
      • image_raw0 [default: /sensing/camera/camera0/image_raw]
      • image_raw1 [default: /sensing/camera/camera1/image_raw]
      • image_raw2 [default: /sensing/camera/camera2/image_raw]
      • image_raw3 [default: /sensing/camera/camera3/image_raw]
      • image_raw4 [default: /sensing/camera/camera4/image_raw]
      • image_raw5 [default: /sensing/camera/camera5/image_raw]
      • image_raw6 [default: /sensing/camera/camera6/image_raw]
      • image_raw7 [default: /sensing/camera/camera7/image_raw]
      • image_raw8 [default: /sensing/camera/camera8/image_raw]
      • image_raw9 [default: /sensing/camera/camera9/image_raw]
      • image_number [default: 1]
      • camera_index [default: 0]
      • use_bytetrack [default: true]
      • enable_visualizer [default: false]
      • camera_2d_detector/model_path
      • camera_2d_detector/label_path
      • camera_2d_detector/color_map_path
      • tensorrt_yolox_ns [default: ]
  • launch/object_recognition/detection/detector/camera_bev_detector.launch.xml
      • input/camera0/image
      • input/camera0/info
      • input/camera1/image
      • input/camera1/info
      • input/camera2/image
      • input/camera2/info
      • input/camera3/image
      • input/camera3/info
      • input/camera4/image
      • input/camera4/info
      • input/camera5/image
      • input/camera5/info
      • input/camera6/image
      • input/camera6/info
      • input/camera7/image
      • input/camera7/info
      • output/objects
      • number_of_cameras
      • data_path [default: $(env HOME)/autoware_data]
      • bevdet_model_name [default: bevdet_one_lt_d]
      • bevdet_model_path [default: $(var data_path)/tensorrt_bevdet]
  • launch/object_recognition/detection/detector/camera_lidar_detector.launch.xml
      • ns
      • lidar_detection_model_type
      • lidar_detection_model_name
      • use_low_intensity_cluster_filter
      • use_image_segmentation_based_filter
      • number_of_cameras
      • input/camera0/image
      • input/camera0/info
      • input/camera0/rois
      • input/camera1/image
      • input/camera1/info
      • input/camera1/rois
      • input/camera2/image
      • input/camera2/info
      • input/camera2/rois
      • input/camera3/image
      • input/camera3/info
      • input/camera3/rois
      • input/camera4/image
      • input/camera4/info
      • input/camera4/rois
      • input/camera5/image
      • input/camera5/info
      • input/camera5/rois
      • input/camera6/image
      • input/camera6/info
      • input/camera6/rois
      • input/camera7/image
      • input/camera7/info
      • input/camera7/rois
      • input/camera8/image
      • input/camera8/info
      • input/camera8/rois
      • input/concatenation_info
      • segmentation_pointcloud_fusion_camera_ids
      • image_topic_name
      • sync_param_path
      • voxel_grid_based_euclidean_param_path
      • node/pointcloud_container
      • input/pointcloud
      • input/pointcloud_map/pointcloud
      • input/obstacle_segmentation/pointcloud
      • output/ml_detector/objects
      • output/rule_detector/objects
      • output/clustering/cluster_objects
      • camera_2d_detector/model_path
      • camera_2d_detector/label_path
      • camera_2d_detector/color_map_path
      • enable_2d_detection [default: false]
  • launch/object_recognition/detection/detector/camera_lidar_irregular_object_detector.launch.xml
      • ns
      • pipeline_ns
      • input/concatenation_info
      • input/pointcloud
      • fusion_camera_ids [default: [0]]
      • image_topic_name [default: image_raw]
      • irregular_object_detector_param_path
      • sync_param_path
  • launch/object_recognition/detection/detector/camera_vru_detector.launch.xml
      • ns
      • input/camera0/info [default: /sensing/camera/camera0/camera_info]
      • input/camera0/rois [default: /perception/object_recognition/detection/rois0]
      • input/camera1/info [default: /sensing/camera/camera1/camera_info]
      • input/camera1/rois [default: /perception/object_recognition/detection/rois1]
      • input/camera2/info [default: /sensing/camera/camera2/camera_info]
      • input/camera2/rois [default: /perception/object_recognition/detection/rois2]
      • input/camera3/info [default: /sensing/camera/camera3/camera_info]
      • input/camera3/rois [default: /perception/object_recognition/detection/rois3]
      • input/camera4/info [default: /sensing/camera/camera4/camera_info]
      • input/camera4/rois [default: /perception/object_recognition/detection/rois4]
      • input/camera5/info [default: /sensing/camera/camera5/camera_info]
      • input/camera5/rois [default: /perception/object_recognition/detection/rois5]
      • input/camera6/info [default: /sensing/camera/camera6/camera_info]
      • input/camera6/rois [default: /perception/object_recognition/detection/rois6]
      • input/camera7/info [default: /sensing/camera/camera7/camera_info]
      • input/camera7/rois [default: /perception/object_recognition/detection/rois7]
      • output/objects [default: /perception/object_recognition/detection/camera_vru/objects]
      • bbox_object_locator_param_path [default: $(find-pkg-share autoware_image_object_locator)/config/bbox_object_locator.param.yaml]
      • rois_ids [default: [0, 1]]
  • launch/object_recognition/detection/detector/lidar_dnn_detector.launch.xml
      • lidar_detection_model_type
      • lidar_detection_model_name
      • bevfusion_model_path [default: $(var data_path)/bevfusion]
      • centerpoint_model_path [default: $(var data_path)/lidar_centerpoint]
      • transfusion_model_path [default: $(var data_path)/lidar_transfusion]
      • use_short_range_detection [default: false]
      • lidar_short_range_detection_model_type
      • lidar_short_range_detection_model_name
      • short_range_centerpoint_model_path [default: $(var data_path)/lidar_short_range_centerpoint]
      • node/pointcloud_container
      • input/pointcloud
      • output/objects
      • output/short_range_objects
      • lidar_short_range_model_param_path [default: $(find-pkg-share autoware_lidar_centerpoint)/config]
      • lidar_model_param_path [default: $(find-pkg-share autoware_bevfusion)/config]
      • lidar_model_param_path [default: $(find-pkg-share autoware_lidar_transfusion)/config]
      • lidar_model_param_path [default: $(find-pkg-share autoware_lidar_centerpoint)/config]
  • launch/object_recognition/detection/detector/lidar_rule_detector.launch.xml
      • ns
      • node/pointcloud_container
      • input/pointcloud_map/pointcloud
      • input/obstacle_segmentation/pointcloud
      • output/cluster_objects
      • output/objects
      • voxel_grid_based_euclidean_param_path
  • launch/object_recognition/detection/detector/tracker_based_detector.launch.xml
      • input/clusters
      • input/tracked_objects
      • output/objects
  • launch/object_recognition/detection/filter/object_filter.launch.xml
      • objects_filter_method [default: lanelet_filter]
      • input/objects
      • output/objects
  • launch/object_recognition/detection/filter/object_validator.launch.xml
      • objects_validation_method
      • input/obstacle_pointcloud
      • input/objects
      • output/objects
  • launch/object_recognition/detection/filter/radar_filter.launch.xml
      • object_sorter_param_path [default: $(var object_recognition_detection_object_sorter_radar_param_path)]
      • radar_lanelet_filtering_range_param_path [default: $(find-pkg-share autoware_detected_object_validation)/config/object_lanelet_filter.param.yaml]
      • input/radar
      • output/objects
  • launch/object_recognition/detection/merger/camera_lidar_merger.launch.xml
      • object_recognition_detection_object_merger_data_association_matrix_param_path [default: $(find-pkg-share autoware_object_merger)/config/data_association_matrix.param.yaml]
      • object_recognition_detection_object_merger_distance_threshold_list_path [default: $(find-pkg-share autoware_object_merger)/config/overlapped_judge.param.yaml]
      • ml_camera_lidar_object_association_merger_param_path [default: $(find-pkg-share autoware_object_merger)/config/object_association_merger.param.yaml]
      • lidar_detection_model_type
      • use_detection_by_tracker
      • use_irregular_object_detector
      • use_object_filter
      • objects_filter_method
      • number_of_cameras
      • input/camera0/image
      • input/camera0/info
      • input/camera0/rois
      • input/camera1/image
      • input/camera1/info
      • input/camera1/rois
      • input/camera2/image
      • input/camera2/info
      • input/camera2/rois
      • input/camera3/image
      • input/camera3/info
      • input/camera3/rois
      • input/camera4/image
      • input/camera4/info
      • input/camera4/rois
      • input/camera5/image
      • input/camera5/info
      • input/camera5/rois
      • input/camera6/image
      • input/camera6/info
      • input/camera6/rois
      • input/camera7/image
      • input/camera7/info
      • input/camera7/rois
      • input/camera8/image
      • input/camera8/info
      • input/camera8/rois
      • input/lidar_ml/objects
      • input/lidar_rule/objects
      • input/detection_by_tracker/objects
      • output/objects [default: objects]
      • alpha_merger_priority_mode [default: 0]
  • launch/object_recognition/detection/merger/camera_lidar_radar_merger.launch.xml
      • object_recognition_detection_object_merger_data_association_matrix_param_path [default: $(find-pkg-share autoware_object_merger)/config/data_association_matrix.param.yaml]
      • object_recognition_detection_object_merger_distance_threshold_list_path [default: $(find-pkg-share autoware_object_merger)/config/overlapped_judge.param.yaml]
      • ml_camera_lidar_object_association_merger_param_path [default: $(find-pkg-share autoware_object_merger)/config/object_association_merger.param.yaml]
      • far_object_merger_sync_queue_size [default: 20]
      • lidar_detection_model_type
      • use_radar_tracking_fusion
      • use_detection_by_tracker
      • use_irregular_object_detector
      • use_object_filter
      • objects_filter_method
      • number_of_cameras
      • input/camera0/image
      • input/camera0/info
      • input/camera0/rois
      • input/camera1/image
      • input/camera1/info
      • input/camera1/rois
      • input/camera2/image
      • input/camera2/info
      • input/camera2/rois
      • input/camera3/image
      • input/camera3/info
      • input/camera3/rois
      • input/camera4/image
      • input/camera4/info
      • input/camera4/rois
      • input/camera5/image
      • input/camera5/info
      • input/camera5/rois
      • input/camera6/image
      • input/camera6/info
      • input/camera6/rois
      • input/camera7/image
      • input/camera7/info
      • input/camera7/rois
      • input/camera8/image
      • input/camera8/info
      • input/camera8/rois
      • input/lidar_ml/objects
      • input/lidar_rule/objects
      • input/radar/objects
      • input/radar_far/objects
      • input/detection_by_tracker/objects
      • output/objects [default: objects]
      • alpha_merger_priority_mode [default: 0]
  • launch/object_recognition/detection/merger/lidar_merger.launch.xml
      • object_recognition_detection_object_merger_data_association_matrix_param_path [default: $(find-pkg-share autoware_object_merger)/config/data_association_matrix.param.yaml]
      • object_recognition_detection_object_merger_distance_threshold_list_path [default: $(find-pkg-share autoware_object_merger)/config/overlapped_judge.param.yaml]
      • lidar_detection_model_type
      • use_detection_by_tracker
      • use_object_filter
      • objects_filter_method
      • input/lidar_ml/objects [default: $(var lidar_detection_model_type)/objects]
      • input/lidar_rule/objects [default: clustering/objects]
      • input/detection_by_tracker/objects [default: detection_by_tracker/objects]
      • output/objects
  • launch/object_recognition/prediction/prediction.launch.xml
      • use_vector_map [default: false]
      • prediction_model_type [default: map_based]
      • input/objects [default: /perception/object_recognition/tracking/objects]
  • launch/object_recognition/tracking/tracking.launch.xml
      • object_recognition_tracking_radar_tracked_object_sorter_param_path
      • object_recognition_tracking_radar_tracked_object_lanelet_filter_param_path
      • object_recognition_tracking_object_merger_data_association_matrix_param_path
      • object_recognition_tracking_object_merger_node_param_path
      • mode [default: lidar]
      • use_radar_tracking_fusion [default: false]
      • use_multi_channel_tracker_merger
      • use_validator
      • use_short_range_detection
      • use_camera_vru_detector
      • publish_merged_objects
      • lidar_detection_model_type [default: centerpoint]
      • input/merged_detection/channel [default: detected_objects]
      • input/merged_detection/objects [default: /perception/object_recognition/detection/objects]
      • input/lidar_dnn/channel [default: lidar_$(var lidar_detection_model_type)]
      • input/lidar_dnn/objects [default: /perception/object_recognition/detection/$(var lidar_detection_model_type)/objects]
      • input/lidar_dnn_validated/objects [default: /perception/object_recognition/detection/$(var lidar_detection_model_type)/validation/objects]
      • input/lidar_dnn_short_range/channel [default: lidar_$(var lidar_short_range_detection_model_type)]
      • input/lidar_dnn_short_range/objects [default: /perception/object_recognition/detection/$(var lidar_short_range_detection_model_type)/objects]
      • input/camera_lidar_rule_detector/channel [default: camera_lidar_fusion]
      • input/camera_lidar_rule_detector/objects [default: /perception/object_recognition/detection/clustering/camera_lidar_fusion/objects]
      • input/irregular_object_detector/channel [default: camera_lidar_fusion_irregular]
      • input/irregular_object_detector/objects [default: /perception/object_recognition/detection/irregular_object/objects]
      • input/tracker_based_detector/channel [default: detection_by_tracker]
      • input/tracker_based_detector/objects [default: /perception/object_recognition/detection/detection_by_tracker/objects]
      • input/radar/channel [default: radar]
      • input/radar/far_objects [default: /perception/object_recognition/detection/radar/far_objects]
      • input/radar/objects [default: /perception/object_recognition/detection/radar/objects]
      • input/radar/tracked_objects [default: /sensing/radar/tracked_objects]
      • input/camera_only/objects [default: /perception/object_recognition/detection/camera_only/objects]
      • input/camera_only/channel [default: camera_streampetr]
      • input/camera_vru/channel [default: camera_vru]
      • input/camera_vru/objects [default: /perception/object_recognition/detection/camera_vru/objects]
      • output/objects [default: $(var ns)/objects]
      • output/merged_objects [default: $(var ns)/merged_objects]
  • launch/occupancy_grid_map/probabilistic_occupancy_grid_map.launch.xml
      • input/obstacle_pointcloud [default: concatenated/pointcloud]
      • input/raw_pointcloud [default: no_ground/oneshot/pointcloud]
      • output [default: /perception/occupancy_grid_map/map]
      • use_intra_process [default: false]
      • use_multithread [default: false]
      • pointcloud_container_name [default: pointcloud_container]
      • occupancy_grid_map_method
      • occupancy_grid_map_param_path
      • occupancy_grid_map_updater
      • occupancy_grid_map_updater_param_path
      • input_obstacle_pointcloud [default: false]
      • input_obstacle_and_raw_pointcloud [default: true]
      • use_pointcloud_container [default: true]
  • launch/perception.launch.xml
      • object_recognition_detection_euclidean_cluster_param_path
      • object_recognition_detection_outlier_param_path
      • object_recognition_detection_object_lanelet_filter_param_path
      • object_recognition_detection_object_position_filter_param_path
      • object_recognition_detection_pointcloud_map_filter_param_path
      • object_recognition_prediction_map_based_prediction_param_path
      • object_recognition_detection_object_merger_data_association_matrix_param_path
      • ml_camera_lidar_object_association_merger_param_path
      • object_recognition_detection_object_merger_distance_threshold_list_path
      • object_recognition_detection_fusion_sync_param_path
      • object_recognition_detection_roi_cluster_fusion_param_path
      • object_recognition_detection_irregular_object_detector_param_path
      • object_recognition_detection_roi_detected_object_fusion_param_path
      • object_recognition_detection_near_range_camera_vru_param_path
      • object_recognition_detection_pointpainting_fusion_common_param_path
      • object_recognition_detection_lidar_model_param_path
      • object_recognition_detection_radar_lanelet_filtering_range_param_path
      • object_recognition_detection_object_sorter_radar_param_path
      • object_recognition_tracking_multi_object_tracker_data_association_matrix_param_path
      • object_recognition_tracking_multi_object_tracker_input_channels_param_path
      • object_recognition_tracking_multi_object_tracker_node_param_path
      • object_recognition_tracking_radar_tracked_object_sorter_param_path
      • object_recognition_tracking_radar_tracked_object_lanelet_filter_param_path
      • obstacle_segmentation_ground_segmentation_param_path
      • obstacle_segmentation_ground_segmentation_elevation_map_param_path
      • object_recognition_detection_obstacle_pointcloud_based_validator_param_path
      • object_recognition_detection_detection_by_tracker_param
      • occupancy_grid_map_method
      • occupancy_grid_map_param_path
      • occupancy_grid_map_updater
      • occupancy_grid_map_updater_param_path
      • lidar_detection_model
      • each_traffic_light_map_based_detector_param_path
      • traffic_light_fine_detector_param_path
      • yolox_traffic_light_detector_param_path
      • car_traffic_light_classifier_param_path
      • pedestrian_traffic_light_classifier_param_path
      • traffic_light_roi_visualizer_param_path
      • traffic_light_occlusion_predictor_param_path
      • traffic_light_multi_camera_fusion_param_path
      • traffic_light_arbiter_param_path
      • crosswalk_traffic_light_estimator_param_path
      • tracker_publish_merged_objects
      • use_short_range_detection [default: false]
      • lidar_short_range_detection_model_type [default: centerpoint_short_range]
      • lidar_short_range_detection_model_name [default: centerpoint_short_range]
      • bevfusion_model_path [default: $(var data_path)/bevfusion]
      • centerpoint_model_path [default: $(var data_path)/lidar_centerpoint]
      • transfusion_model_path [default: $(var data_path)/lidar_transfusion]
      • short_range_centerpoint_model_path [default: $(var data_path)/lidar_short_range_centerpoint]
      • pointpainting_model_path [default: $(var data_path)/image_projection_based_fusion]
      • camera_2d_detector/model_path
      • camera_2d_detector/label_path
      • camera_2d_detector/color_map_path
      • input/pointcloud [default: /sensing/lidar/concatenated/pointcloud]
      • mode [default: camera_lidar_fusion]
      • data_path [default: $(env HOME)/autoware_data]
      • image_raw0 [default: /sensing/camera/camera0/image_rect_color]
      • camera_info0 [default: /sensing/camera/camera0/camera_info]
      • detection_rois0 [default: /perception/object_recognition/detection/rois0]
      • image_raw1 [default: /sensing/camera/camera1/image_rect_color]
      • camera_info1 [default: /sensing/camera/camera1/camera_info]
      • detection_rois1 [default: /perception/object_recognition/detection/rois1]
      • image_raw2 [default: /sensing/camera/camera2/image_rect_color]
      • camera_info2 [default: /sensing/camera/camera2/camera_info]
      • detection_rois2 [default: /perception/object_recognition/detection/rois2]
      • image_raw3 [default: /sensing/camera/camera3/image_rect_color]
      • camera_info3 [default: /sensing/camera/camera3/camera_info]
      • detection_rois3 [default: /perception/object_recognition/detection/rois3]
      • image_raw4 [default: /sensing/camera/camera4/image_rect_color]
      • camera_info4 [default: /sensing/camera/camera4/camera_info]
      • detection_rois4 [default: /perception/object_recognition/detection/rois4]
      • image_raw5 [default: /sensing/camera/camera5/image_rect_color]
      • camera_info5 [default: /sensing/camera/camera5/camera_info]
      • detection_rois5 [default: /perception/object_recognition/detection/rois5]
      • image_raw6 [default: /sensing/camera/camera6/image_rect_color]
      • camera_info6 [default: /sensing/camera/camera6/camera_info]
      • detection_rois6 [default: /perception/object_recognition/detection/rois6]
      • image_raw7 [default: /sensing/camera/camera7/image_rect_color]
      • camera_info7 [default: /sensing/camera/camera7/camera_info]
      • detection_rois7 [default: /perception/object_recognition/detection/rois7]
      • image_raw8 [default: /sensing/camera/camera8/image_rect_color]
      • camera_info8 [default: /sensing/camera/camera8/camera_info]
      • detection_rois8 [default: /perception/object_recognition/detection/rois8]
      • image_number [default: 6]
      • image_topic_name [default: image_rect_color]
      • segmentation_pointcloud_fusion_camera_ids [default: [0,1,5]]
      • camera_vru_detector_rois_ids [default: [0]]
      • ml_camera_lidar_merger_priority_mode [default: 0]
      • pointcloud_container_name [default: pointcloud_container]
      • input/concatenation_info [default: /sensing/lidar/concatenated/pointcloud_info]
      • use_vector_map [default: true]
      • use_pointcloud_map [default: true]
      • use_low_height_cropbox [default: true]
      • use_object_filter [default: true]
      • objects_filter_method [default: lanelet_filter]
      • use_irregular_object_detector [default: true]
      • use_low_intensity_cluster_filter [default: true]
      • use_image_segmentation_based_filter [default: false]
      • use_empty_dynamic_object_publisher [default: false]
      • use_object_validator [default: true]
      • objects_validation_method [default: obstacle_pointcloud]
      • use_perception_online_evaluator [default: false]
      • use_perception_analytics_publisher [default: true]
      • use_obstacle_segmentation_single_frame_filter
      • use_obstacle_segmentation_time_series_filter
      • use_camera_vru_detector [default: false]
      • use_cuda_ground_segmentation [default: false]
      • use_traffic_light_recognition
      • traffic_light_recognition/fusion_only
      • traffic_light_recognition/camera_namespaces
      • traffic_light_recognition/use_high_accuracy_detection
      • traffic_light_recognition/high_accuracy_detection_type
      • input_pointcloud_for_traffic_light_occlusion_predictor
      • traffic_light_recognition/whole_image_detection/model_path
      • traffic_light_recognition/whole_image_detection/label_path
      • traffic_light_recognition/fine_detection/model_path
      • traffic_light_recognition/fine_detection/label_path
      • traffic_light_recognition/classification/car/model_path
      • traffic_light_recognition/classification/car/label_path
      • traffic_light_recognition/classification/pedestrian/model_path
      • traffic_light_recognition/classification/pedestrian/label_path
      • use_detection_by_tracker [default: true]
      • use_radar_tracking_fusion [default: true]
      • input/radar [default: /sensing/radar/detected_objects]
      • use_multi_channel_tracker_merger [default: false]
      • output/tracker_merged_objects [default: /perception/object_recognition/detection/objects]
      • downsample_perception_common_pointcloud [default: false]
      • cuda_pointcloud_preprocessing [default: false]
      • common_downsample_voxel_size_x [default: 0.05]
      • common_downsample_voxel_size_y [default: 0.05]
      • common_downsample_voxel_size_z [default: 0.05]
  • launch/traffic_light_recognition/traffic_light.launch.xml
      • enable_image_decompressor [default: true]
      • fusion_only
      • camera_namespaces
      • use_high_accuracy_detection
      • high_accuracy_detection_type
      • each_traffic_light_map_based_detector_param_path
      • traffic_light_fine_detector_param_path
      • yolox_traffic_light_detector_param_path
      • car_traffic_light_classifier_param_path
      • pedestrian_traffic_light_classifier_param_path
      • traffic_light_roi_visualizer_param_path
      • traffic_light_occlusion_predictor_param_path
      • traffic_light_multi_camera_fusion_param_path
      • traffic_light_arbiter_param_path
      • crosswalk_traffic_light_estimator_param_path
      • whole_image_detection/model_path
      • whole_image_detection/label_path
      • fine_detection/model_path
      • fine_detection/label_path
      • classification/car/model_path
      • classification/car/label_path
      • classification/pedestrian/model_path
      • classification/pedestrian/label_path
      • input/vector_map [default: /map/vector_map]
      • input/route [default: /planning/mission_planning/route]
      • input_pointcloud_for_traffic_light_occlusion_predictor [default: /sensing/lidar/top/pointcloud_raw_ex]
      • internal/traffic_signals [default: /perception/traffic_light_recognition/internal/traffic_signals]
      • external/traffic_signals [default: /perception/traffic_light_recognition/external/traffic_signals]
      • judged/traffic_signals [default: /perception/traffic_light_recognition/judged/traffic_signals]
      • output/traffic_signals [default: /perception/traffic_light_recognition/traffic_signals]
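Most of the arguments listed above have usable defaults; in practice you only override a handful. The sketch below assumes the same include pattern shown in the Usage section and uses argument names and default values taken from the perception.launch.xml list above; the chosen override values are illustrative, and the required *_param_path arguments (elided here) still have to be supplied as described at the top of perception.launch.xml.

```xml
<include file="$(find-pkg-share tier4_perception_launch)/launch/perception.launch.xml">
  <!-- options for mode: camera_lidar_fusion, lidar, camera -->
  <arg name="mode" value="camera_lidar_fusion"/>

  <!-- detection-object filtering (these are the listed defaults, shown explicitly) -->
  <arg name="use_object_filter" value="true"/>
  <arg name="objects_filter_method" value="lanelet_filter"/>

  <!-- illustrative: enable GPU point cloud preprocessing (listed default: false) -->
  <arg name="cuda_pointcloud_preprocessing" value="true"/>

  <!-- illustrative: downsample the common point cloud with a 5 cm voxel grid -->
  <arg name="downsample_perception_common_pointcloud" value="true"/>
  <arg name="common_downsample_voxel_size_x" value="0.05"/>

  <!-- required *_param_path arguments omitted; see the top of perception.launch.xml -->
</include>
```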

Messages

No message files found.

Services

No service files found.

Plugins

No plugins found.

CHANGELOG

Changelog for package tier4_perception_launch

0.50.0 (2026-02-13)

  • Merge remote-tracking branch 'origin/main' into tmp/bot/bump_version_base
  • chore: import tier4 launchers from universe (#1740)
  • Contributors: Taeseung Sohn, github-actions

0.49.0 (2025-12-30)

  • Merge remote-tracking branch 'origin/main' into prepare-0.49.0-changelog

  • feat: add option for gpu-preprocessing in perception launch (#11728)

    • add option for GPU preprocessing
    • Rename CUDA pointclouds argument in perception launch (Co-authored-by: Yoshi Ri <yoshiyoshidetteiu@gmail.com>)

  • feat(camera_streampetr): add camera streampetr to tracker input (#11635)

  • Contributors: Ryohsuke Mitsudome, Yoshi Ri, Yuxuan Liu

0.48.0 (2025-11-18)

  • Merge remote-tracking branch 'origin/main' into humble

  • feat(image_object_locator): add near range camera VRU detector to perception pipeline (#11622)

  • feat(multi_object_tracker): publish merged object if it is in multi-channel mode (#11386)

    • feat(multi_object_tracker): add support for merged object output and related parameters
    • feat(multi_object_tracker): add function to convert DynamicObject to DetectedObject and implement merged object publishing
    • fix(multi_object_tracker): prevent merged objects publisher from being in input channel topics
    • fix(multi_object_tracker): improve warning message for merged objects publisher in input channel
    • feat(multi_object_tracker): add is_simulation parameter to control merged object publishing
    • fix(multi_object_tracker): correct ego_frame_id variable usage and declaration
    • feat(multi_object_tracker): update getMergedObjects to accept transform and apply frame conversion
    • feat(multi_object_tracker): optimize getMergedObjects for efficient frame transformation
    • fix(multi_object_tracker): fix bug when merged_objects_pub_ is nullptr
    • feat(multi_object_tracker): refactor orientation availability conversion to improve code clarity
    • fix(multi_object_tracker): remove redundant comment in publish method for clarity
    • feat(multi_object_tracker): rename parameters for clarity and add publish_merged_objects option
    • fix(multi_object_tracker): rename pruning parameters for consistency in schema

    • Update perception/autoware_multi_object_tracker/src/processor/processor.cpp (Co-authored-by: Yoshi Ri <yoshiyoshidetteiu@gmail.com>)

    • feat(multi_object_tracker): replace 'is_simulation' with 'publish_merged_objects' in launch files and parameters (Co-authored-by: Yoshi Ri <yoshiyoshidetteiu@gmail.com>)

  • fix(camera_2d_detector): typo (#11380)

  • feat(launch): add args to select the 2d camera detection model (#11364)

    • add args
    • add color map path
    • give color_map_path to yolox.launch (Co-authored-by: badai nguyen <94814556+badai-nguyen@users.noreply.github.com>)

File truncated at 100 lines; see the full file.

Package Dependencies

System Dependencies

No direct system dependencies.

Launch files

  • launch/object_recognition/detection/detection.launch.xml
      • mode
      • lidar_detection_model_type
      • lidar_detection_model_name
      • use_short_range_detection
      • lidar_short_range_detection_model_type
      • lidar_short_range_detection_model_name
      • use_object_filter
      • objects_filter_method
      • use_pointcloud_map
      • use_detection_by_tracker
      • use_validator
      • objects_validation_method
      • use_low_intensity_cluster_filter
      • use_image_segmentation_based_filter
      • use_multi_channel_tracker_merger
      • use_radar_tracking_fusion
      • use_irregular_object_detector
      • irregular_object_detector_fusion_camera_ids [default: [0]]
      • ml_camera_lidar_merger_priority_mode
      • camera_2d_detector/model_path
      • camera_2d_detector/label_path
      • camera_2d_detector/color_map_path
      • use_camera_vru_detector
      • camera_vru_detector_rois_ids [default: [0]]
      • number_of_cameras
      • node/pointcloud_container
      • input/pointcloud
      • input/obstacle_segmentation/pointcloud [default: /perception/obstacle_segmentation/pointcloud]
      • input/camera0/image
      • input/camera0/info
      • input/camera0/rois
      • input/camera1/image
      • input/camera1/info
      • input/camera1/rois
      • input/camera2/image
      • input/camera2/info
      • input/camera2/rois
      • input/camera3/image
      • input/camera3/info
      • input/camera3/rois
      • input/camera4/image
      • input/camera4/info
      • input/camera4/rois
      • input/camera5/image
      • input/camera5/info
      • input/camera5/rois
      • input/camera6/image
      • input/camera6/info
      • input/camera6/rois
      • input/camera7/image
      • input/camera7/info
      • input/camera7/rois
      • input/camera8/image
      • input/camera8/info
      • input/camera8/rois
      • input/concatenation_info
      • image_topic_name
      • segmentation_pointcloud_fusion_camera_ids
      • input/radar
      • input/tracked_objects [default: /perception/object_recognition/tracking/objects]
      • output/objects [default: objects]
      • sync_param_path
      • voxel_grid_based_euclidean_param_path
      • irregular_object_detector_param_path
      • object_recognition_detection_object_sorter_radar_param_path
  • launch/object_recognition/detection/detector/camera_2d_detector.launch.xml
      • image_raw0 [default: /sensing/camera/camera0/image_raw]
      • image_raw1 [default: /sensing/camera/camera1/image_raw]
      • image_raw2 [default: /sensing/camera/camera2/image_raw]
      • image_raw3 [default: /sensing/camera/camera3/image_raw]
      • image_raw4 [default: /sensing/camera/camera4/image_raw]
      • image_raw5 [default: /sensing/camera/camera5/image_raw]
      • image_raw6 [default: /sensing/camera/camera6/image_raw]
      • image_raw7 [default: /sensing/camera/camera7/image_raw]
      • image_raw8 [default: /sensing/camera/camera8/image_raw]
      • image_raw9 [default: /sensing/camera/camera9/image_raw]
      • image_number [default: 1]
      • camera_index [default: 0]
      • use_bytetrack [default: true]
      • enable_visualizer [default: false]
      • camera_2d_detector/model_path
      • camera_2d_detector/label_path
      • camera_2d_detector/color_map_path
      • tensorrt_yolox_ns [default: ]
  • launch/object_recognition/detection/detector/camera_bev_detector.launch.xml
      • input/camera0/image
      • input/camera0/info
      • input/camera1/image
      • input/camera1/info
      • input/camera2/image
      • input/camera2/info
      • input/camera3/image
      • input/camera3/info
      • input/camera4/image
      • input/camera4/info
      • input/camera5/image
      • input/camera5/info
      • input/camera6/image
      • input/camera6/info
      • input/camera7/image
      • input/camera7/info
      • output/objects
      • number_of_cameras
      • data_path [default: $(env HOME)/autoware_data]
      • bevdet_model_name [default: bevdet_one_lt_d]
      • bevdet_model_path [default: $(var data_path)/tensorrt_bevdet]
  • launch/object_recognition/detection/detector/camera_lidar_detector.launch.xml
      • ns
      • lidar_detection_model_type
      • lidar_detection_model_name
      • use_low_intensity_cluster_filter
      • use_image_segmentation_based_filter
      • number_of_cameras
      • input/camera0/image
      • input/camera0/info
      • input/camera0/rois
      • input/camera1/image
      • input/camera1/info
      • input/camera1/rois
      • input/camera2/image
      • input/camera2/info
      • input/camera2/rois
      • input/camera3/image
      • input/camera3/info
      • input/camera3/rois
      • input/camera4/image
      • input/camera4/info
      • input/camera4/rois
      • input/camera5/image
      • input/camera5/info
      • input/camera5/rois
      • input/camera6/image
      • input/camera6/info
      • input/camera6/rois
      • input/camera7/image
      • input/camera7/info
      • input/camera7/rois
      • input/camera8/image
      • input/camera8/info
      • input/camera8/rois
      • input/concatenation_info
      • segmentation_pointcloud_fusion_camera_ids
      • image_topic_name
      • sync_param_path
      • voxel_grid_based_euclidean_param_path
      • node/pointcloud_container
      • input/pointcloud
      • input/pointcloud_map/pointcloud
      • input/obstacle_segmentation/pointcloud
      • output/ml_detector/objects
      • output/rule_detector/objects
      • output/clustering/cluster_objects
      • camera_2d_detector/model_path
      • camera_2d_detector/label_path
      • camera_2d_detector/color_map_path
      • enable_2d_detection [default: false]
  • launch/object_recognition/detection/detector/camera_lidar_irregular_object_detector.launch.xml
      • ns
      • pipeline_ns
      • input/concatenation_info
      • input/pointcloud
      • fusion_camera_ids [default: [0]]
      • image_topic_name [default: image_raw]
      • irregular_object_detector_param_path
      • sync_param_path
  • launch/object_recognition/detection/detector/camera_vru_detector.launch.xml
      • ns
      • input/camera0/info [default: /sensing/camera/camera0/camera_info]
      • input/camera0/rois [default: /perception/object_recognition/detection/rois0]
      • input/camera1/info [default: /sensing/camera/camera1/camera_info]
      • input/camera1/rois [default: /perception/object_recognition/detection/rois1]
      • input/camera2/info [default: /sensing/camera/camera2/camera_info]
      • input/camera2/rois [default: /perception/object_recognition/detection/rois2]
      • input/camera3/info [default: /sensing/camera/camera3/camera_info]
      • input/camera3/rois [default: /perception/object_recognition/detection/rois3]
      • input/camera4/info [default: /sensing/camera/camera4/camera_info]
      • input/camera4/rois [default: /perception/object_recognition/detection/rois4]
      • input/camera5/info [default: /sensing/camera/camera5/camera_info]
      • input/camera5/rois [default: /perception/object_recognition/detection/rois5]
      • input/camera6/info [default: /sensing/camera/camera6/camera_info]
      • input/camera6/rois [default: /perception/object_recognition/detection/rois6]
      • input/camera7/info [default: /sensing/camera/camera7/camera_info]
      • input/camera7/rois [default: /perception/object_recognition/detection/rois7]
      • output/objects [default: /perception/object_recognition/detection/camera_vru/objects]
      • bbox_object_locator_param_path [default: $(find-pkg-share autoware_image_object_locator)/config/bbox_object_locator.param.yaml]
      • rois_ids [default: [0, 1]]
  • launch/object_recognition/detection/detector/lidar_dnn_detector.launch.xml
      • lidar_detection_model_type
      • lidar_detection_model_name
      • bevfusion_model_path [default: $(var data_path)/bevfusion]
      • centerpoint_model_path [default: $(var data_path)/lidar_centerpoint]
      • transfusion_model_path [default: $(var data_path)/lidar_transfusion]
      • use_short_range_detection [default: false]
      • lidar_short_range_detection_model_type
      • lidar_short_range_detection_model_name
      • short_range_centerpoint_model_path [default: $(var data_path)/lidar_short_range_centerpoint]
      • node/pointcloud_container
      • input/pointcloud
      • output/objects
      • output/short_range_objects
      • lidar_short_range_model_param_path [default: $(find-pkg-share autoware_lidar_centerpoint)/config]
      • lidar_model_param_path [default: $(find-pkg-share autoware_bevfusion)/config]
      • lidar_model_param_path [default: $(find-pkg-share autoware_lidar_transfusion)/config]
      • lidar_model_param_path [default: $(find-pkg-share autoware_lidar_centerpoint)/config]
  • launch/object_recognition/detection/detector/lidar_rule_detector.launch.xml
      • ns
      • node/pointcloud_container
      • input/pointcloud_map/pointcloud
      • input/obstacle_segmentation/pointcloud
      • output/cluster_objects
      • output/objects
      • voxel_grid_based_euclidean_param_path
  • launch/object_recognition/detection/detector/tracker_based_detector.launch.xml
      • input/clusters
      • input/tracked_objects
      • output/objects
  • launch/object_recognition/detection/filter/object_filter.launch.xml
      • objects_filter_method [default: lanelet_filter]
      • input/objects
      • output/objects
  • launch/object_recognition/detection/filter/object_validator.launch.xml
      • objects_validation_method
      • input/obstacle_pointcloud
      • input/objects
      • output/objects
  • launch/object_recognition/detection/filter/radar_filter.launch.xml
      • object_sorter_param_path [default: $(var object_recognition_detection_object_sorter_radar_param_path)]
      • radar_lanelet_filtering_range_param_path [default: $(find-pkg-share autoware_detected_object_validation)/config/object_lanelet_filter.param.yaml]
      • input/radar
      • output/objects
  • launch/object_recognition/detection/merger/camera_lidar_merger.launch.xml
      • object_recognition_detection_object_merger_data_association_matrix_param_path [default: $(find-pkg-share autoware_object_merger)/config/data_association_matrix.param.yaml]
      • object_recognition_detection_object_merger_distance_threshold_list_path [default: $(find-pkg-share autoware_object_merger)/config/overlapped_judge.param.yaml]
      • ml_camera_lidar_object_association_merger_param_path [default: $(find-pkg-share autoware_object_merger)/config/object_association_merger.param.yaml]
      • lidar_detection_model_type
      • use_detection_by_tracker
      • use_irregular_object_detector
      • use_object_filter
      • objects_filter_method
      • number_of_cameras
      • input/camera0/image
      • input/camera0/info
      • input/camera0/rois
      • input/camera1/image
      • input/camera1/info
      • input/camera1/rois
      • input/camera2/image
      • input/camera2/info
      • input/camera2/rois
      • input/camera3/image
      • input/camera3/info
      • input/camera3/rois
      • input/camera4/image
      • input/camera4/info
      • input/camera4/rois
      • input/camera5/image
      • input/camera5/info
      • input/camera5/rois
      • input/camera6/image
      • input/camera6/info
      • input/camera6/rois
      • input/camera7/image
      • input/camera7/info
      • input/camera7/rois
      • input/camera8/image
      • input/camera8/info
      • input/camera8/rois
      • input/lidar_ml/objects
      • input/lidar_rule/objects
      • input/detection_by_tracker/objects
      • output/objects [default: objects]
      • alpha_merger_priority_mode [default: 0]
  • launch/object_recognition/detection/merger/camera_lidar_radar_merger.launch.xml
      • object_recognition_detection_object_merger_data_association_matrix_param_path [default: $(find-pkg-share autoware_object_merger)/config/data_association_matrix.param.yaml]
      • object_recognition_detection_object_merger_distance_threshold_list_path [default: $(find-pkg-share autoware_object_merger)/config/overlapped_judge.param.yaml]
      • ml_camera_lidar_object_association_merger_param_path [default: $(find-pkg-share autoware_object_merger)/config/object_association_merger.param.yaml]
      • far_object_merger_sync_queue_size [default: 20]
      • lidar_detection_model_type
      • use_radar_tracking_fusion
      • use_detection_by_tracker
      • use_irregular_object_detector
      • use_object_filter
      • objects_filter_method
      • number_of_cameras
      • input/camera0/image
      • input/camera0/info
      • input/camera0/rois
      • input/camera1/image
      • input/camera1/info
      • input/camera1/rois
      • input/camera2/image
      • input/camera2/info
      • input/camera2/rois
      • input/camera3/image
      • input/camera3/info
      • input/camera3/rois
      • input/camera4/image
      • input/camera4/info
      • input/camera4/rois
      • input/camera5/image
      • input/camera5/info
      • input/camera5/rois
      • input/camera6/image
      • input/camera6/info
      • input/camera6/rois
      • input/camera7/image
      • input/camera7/info
      • input/camera7/rois
      • input/camera8/image
      • input/camera8/info
      • input/camera8/rois
      • input/lidar_ml/objects
      • input/lidar_rule/objects
      • input/radar/objects
      • input/radar_far/objects
      • input/detection_by_tracker/objects
      • output/objects [default: objects]
      • alpha_merger_priority_mode [default: 0]
  • launch/object_recognition/detection/merger/lidar_merger.launch.xml
      • object_recognition_detection_object_merger_data_association_matrix_param_path [default: $(find-pkg-share autoware_object_merger)/config/data_association_matrix.param.yaml]
      • object_recognition_detection_object_merger_distance_threshold_list_path [default: $(find-pkg-share autoware_object_merger)/config/overlapped_judge.param.yaml]
      • lidar_detection_model_type
      • use_detection_by_tracker
      • use_object_filter
      • objects_filter_method
      • input/lidar_ml/objects [default: $(var lidar_detection_model_type)/objects]
      • input/lidar_rule/objects [default: clustering/objects]
      • input/detection_by_tracker/objects [default: detection_by_tracker/objects]
      • output/objects
  • launch/object_recognition/prediction/prediction.launch.xml
      • use_vector_map [default: false]
      • prediction_model_type [default: map_based]
      • input/objects [default: /perception/object_recognition/tracking/objects]
  • launch/object_recognition/tracking/tracking.launch.xml
      • object_recognition_tracking_radar_tracked_object_sorter_param_path
      • object_recognition_tracking_radar_tracked_object_lanelet_filter_param_path
      • object_recognition_tracking_object_merger_data_association_matrix_param_path
      • object_recognition_tracking_object_merger_node_param_path
      • mode [default: lidar]
      • use_radar_tracking_fusion [default: false]
      • use_multi_channel_tracker_merger
      • use_validator
      • use_short_range_detection
      • use_camera_vru_detector
      • publish_merged_objects
      • lidar_detection_model_type [default: centerpoint]
      • input/merged_detection/channel [default: detected_objects]
      • input/merged_detection/objects [default: /perception/object_recognition/detection/objects]
      • input/lidar_dnn/channel [default: lidar_$(var lidar_detection_model_type)]
      • input/lidar_dnn/objects [default: /perception/object_recognition/detection/$(var lidar_detection_model_type)/objects]
      • input/lidar_dnn_validated/objects [default: /perception/object_recognition/detection/$(var lidar_detection_model_type)/validation/objects]
      • input/lidar_dnn_short_range/channel [default: lidar_$(var lidar_short_range_detection_model_type)]
      • input/lidar_dnn_short_range/objects [default: /perception/object_recognition/detection/$(var lidar_short_range_detection_model_type)/objects]
      • input/camera_lidar_rule_detector/channel [default: camera_lidar_fusion]
      • input/camera_lidar_rule_detector/objects [default: /perception/object_recognition/detection/clustering/camera_lidar_fusion/objects]
      • input/irregular_object_detector/channel [default: camera_lidar_fusion_irregular]
      • input/irregular_object_detector/objects [default: /perception/object_recognition/detection/irregular_object/objects]
      • input/tracker_based_detector/channel [default: detection_by_tracker]
      • input/tracker_based_detector/objects [default: /perception/object_recognition/detection/detection_by_tracker/objects]
      • input/radar/channel [default: radar]
      • input/radar/far_objects [default: /perception/object_recognition/detection/radar/far_objects]
      • input/radar/objects [default: /perception/object_recognition/detection/radar/objects]
      • input/radar/tracked_objects [default: /sensing/radar/tracked_objects]
      • input/camera_only/objects [default: /perception/object_recognition/detection/camera_only/objects]
      • input/camera_only/channel [default: camera_streampetr]
      • input/camera_vru/channel [default: camera_vru]
      • input/camera_vru/objects [default: /perception/object_recognition/detection/camera_vru/objects]
      • output/objects [default: $(var ns)/objects]
      • output/merged_objects [default: $(var ns)/merged_objects]
  • launch/occupancy_grid_map/probabilistic_occupancy_grid_map.launch.xml
      • input/obstacle_pointcloud [default: concatenated/pointcloud]
      • input/raw_pointcloud [default: no_ground/oneshot/pointcloud]
      • output [default: /perception/occupancy_grid_map/map]
      • use_intra_process [default: false]
      • use_multithread [default: false]
      • pointcloud_container_name [default: pointcloud_container]
      • occupancy_grid_map_method
      • occupancy_grid_map_param_path
      • occupancy_grid_map_updater
      • occupancy_grid_map_updater_param_path
      • input_obstacle_pointcloud [default: false]
      • input_obstacle_and_raw_pointcloud [default: true]
      • use_pointcloud_container [default: true]
  • launch/perception.launch.xml
      • object_recognition_detection_euclidean_cluster_param_path
      • object_recognition_detection_outlier_param_path
      • object_recognition_detection_object_lanelet_filter_param_path
      • object_recognition_detection_object_position_filter_param_path
      • object_recognition_detection_pointcloud_map_filter_param_path
      • object_recognition_prediction_map_based_prediction_param_path
      • object_recognition_detection_object_merger_data_association_matrix_param_path
      • ml_camera_lidar_object_association_merger_param_path
      • object_recognition_detection_object_merger_distance_threshold_list_path
      • object_recognition_detection_fusion_sync_param_path
      • object_recognition_detection_roi_cluster_fusion_param_path
      • object_recognition_detection_irregular_object_detector_param_path
      • object_recognition_detection_roi_detected_object_fusion_param_path
      • object_recognition_detection_near_range_camera_vru_param_path
      • object_recognition_detection_pointpainting_fusion_common_param_path
      • object_recognition_detection_lidar_model_param_path
      • object_recognition_detection_radar_lanelet_filtering_range_param_path
      • object_recognition_detection_object_sorter_radar_param_path
      • object_recognition_tracking_multi_object_tracker_data_association_matrix_param_path
      • object_recognition_tracking_multi_object_tracker_input_channels_param_path
      • object_recognition_tracking_multi_object_tracker_node_param_path
      • object_recognition_tracking_radar_tracked_object_sorter_param_path
      • object_recognition_tracking_radar_tracked_object_lanelet_filter_param_path
      • obstacle_segmentation_ground_segmentation_param_path
      • obstacle_segmentation_ground_segmentation_elevation_map_param_path
      • object_recognition_detection_obstacle_pointcloud_based_validator_param_path
      • object_recognition_detection_detection_by_tracker_param
      • occupancy_grid_map_method
      • occupancy_grid_map_param_path
      • occupancy_grid_map_updater
      • occupancy_grid_map_updater_param_path
      • lidar_detection_model
      • each_traffic_light_map_based_detector_param_path
      • traffic_light_fine_detector_param_path
      • yolox_traffic_light_detector_param_path
      • car_traffic_light_classifier_param_path
      • pedestrian_traffic_light_classifier_param_path
      • traffic_light_roi_visualizer_param_path
      • traffic_light_occlusion_predictor_param_path
      • traffic_light_multi_camera_fusion_param_path
      • traffic_light_arbiter_param_path
      • crosswalk_traffic_light_estimator_param_path
      • tracker_publish_merged_objects
      • use_short_range_detection [default: false]
      • lidar_short_range_detection_model_type [default: centerpoint_short_range]
      • lidar_short_range_detection_model_name [default: centerpoint_short_range]
      • bevfusion_model_path [default: $(var data_path)/bevfusion]
      • centerpoint_model_path [default: $(var data_path)/lidar_centerpoint]
      • transfusion_model_path [default: $(var data_path)/lidar_transfusion]
      • short_range_centerpoint_model_path [default: $(var data_path)/lidar_short_range_centerpoint]
      • pointpainting_model_path [default: $(var data_path)/image_projection_based_fusion]
      • camera_2d_detector/model_path
      • camera_2d_detector/label_path
      • camera_2d_detector/color_map_path
      • input/pointcloud [default: /sensing/lidar/concatenated/pointcloud]
      • mode [default: camera_lidar_fusion]
      • data_path [default: $(env HOME)/autoware_data]
      • image_raw0 [default: /sensing/camera/camera0/image_rect_color]
      • camera_info0 [default: /sensing/camera/camera0/camera_info]
      • detection_rois0 [default: /perception/object_recognition/detection/rois0]
      • image_raw1 [default: /sensing/camera/camera1/image_rect_color]
      • camera_info1 [default: /sensing/camera/camera1/camera_info]
      • detection_rois1 [default: /perception/object_recognition/detection/rois1]
      • image_raw2 [default: /sensing/camera/camera2/image_rect_color]
      • camera_info2 [default: /sensing/camera/camera2/camera_info]
      • detection_rois2 [default: /perception/object_recognition/detection/rois2]
      • image_raw3 [default: /sensing/camera/camera3/image_rect_color]
      • camera_info3 [default: /sensing/camera/camera3/camera_info]
      • detection_rois3 [default: /perception/object_recognition/detection/rois3]
      • image_raw4 [default: /sensing/camera/camera4/image_rect_color]
      • camera_info4 [default: /sensing/camera/camera4/camera_info]
      • detection_rois4 [default: /perception/object_recognition/detection/rois4]
      • image_raw5 [default: /sensing/camera/camera5/image_rect_color]
      • camera_info5 [default: /sensing/camera/camera5/camera_info]
      • detection_rois5 [default: /perception/object_recognition/detection/rois5]
      • image_raw6 [default: /sensing/camera/camera6/image_rect_color]
      • camera_info6 [default: /sensing/camera/camera6/camera_info]
      • detection_rois6 [default: /perception/object_recognition/detection/rois6]
      • image_raw7 [default: /sensing/camera/camera7/image_rect_color]
      • camera_info7 [default: /sensing/camera/camera7/camera_info]
      • detection_rois7 [default: /perception/object_recognition/detection/rois7]
      • image_raw8 [default: /sensing/camera/camera8/image_rect_color]
      • camera_info8 [default: /sensing/camera/camera8/camera_info]
      • detection_rois8 [default: /perception/object_recognition/detection/rois8]
      • image_number [default: 6]
      • image_topic_name [default: image_rect_color]
      • segmentation_pointcloud_fusion_camera_ids [default: [0,1,5]]
      • camera_vru_detector_rois_ids [default: [0]]
      • ml_camera_lidar_merger_priority_mode [default: 0]
      • pointcloud_container_name [default: pointcloud_container]
      • input/concatenation_info [default: /sensing/lidar/concatenated/pointcloud_info]
      • use_vector_map [default: true]
      • use_pointcloud_map [default: true]
      • use_low_height_cropbox [default: true]
      • use_object_filter [default: true]
      • objects_filter_method [default: lanelet_filter]
      • use_irregular_object_detector [default: true]
      • use_low_intensity_cluster_filter [default: true]
      • use_image_segmentation_based_filter [default: false]
      • use_empty_dynamic_object_publisher [default: false]
      • use_object_validator [default: true]
      • objects_validation_method [default: obstacle_pointcloud]
      • use_perception_online_evaluator [default: false]
      • use_perception_analytics_publisher [default: true]
      • use_obstacle_segmentation_single_frame_filter
      • use_obstacle_segmentation_time_series_filter
      • use_camera_vru_detector [default: false]
      • use_cuda_ground_segmentation [default: false]
      • use_traffic_light_recognition
      • traffic_light_recognition/fusion_only
      • traffic_light_recognition/camera_namespaces
      • traffic_light_recognition/use_high_accuracy_detection
      • traffic_light_recognition/high_accuracy_detection_type
      • input_pointcloud_for_traffic_light_occlusion_predictor
      • traffic_light_recognition/whole_image_detection/model_path
      • traffic_light_recognition/whole_image_detection/label_path
      • traffic_light_recognition/fine_detection/model_path
      • traffic_light_recognition/fine_detection/label_path
      • traffic_light_recognition/classification/car/model_path
      • traffic_light_recognition/classification/car/label_path
      • traffic_light_recognition/classification/pedestrian/model_path
      • traffic_light_recognition/classification/pedestrian/label_path
      • use_detection_by_tracker [default: true]
      • use_radar_tracking_fusion [default: true]
      • input/radar [default: /sensing/radar/detected_objects]
      • use_multi_channel_tracker_merger [default: false]
      • output/tracker_merged_objects [default: /perception/object_recognition/detection/objects]
      • downsample_perception_common_pointcloud [default: false]
      • cuda_pointcloud_preprocessing [default: false]
      • common_downsample_voxel_size_x [default: 0.05]
      • common_downsample_voxel_size_y [default: 0.05]
      • common_downsample_voxel_size_z [default: 0.05]
  • launch/traffic_light_recognition/traffic_light.launch.xml
      • enable_image_decompressor [default: true]
      • fusion_only
      • camera_namespaces
      • use_high_accuracy_detection
      • high_accuracy_detection_type
      • each_traffic_light_map_based_detector_param_path
      • traffic_light_fine_detector_param_path
      • yolox_traffic_light_detector_param_path
      • car_traffic_light_classifier_param_path
      • pedestrian_traffic_light_classifier_param_path
      • traffic_light_roi_visualizer_param_path
      • traffic_light_occlusion_predictor_param_path
      • traffic_light_multi_camera_fusion_param_path
      • traffic_light_arbiter_param_path
      • crosswalk_traffic_light_estimator_param_path
      • whole_image_detection/model_path
      • whole_image_detection/label_path
      • fine_detection/model_path
      • fine_detection/label_path
      • classification/car/model_path
      • classification/car/label_path
      • classification/pedestrian/model_path
      • classification/pedestrian/label_path
      • input/vector_map [default: /map/vector_map]
      • input/route [default: /planning/mission_planning/route]
      • input_pointcloud_for_traffic_light_occlusion_predictor [default: /sensing/lidar/top/pointcloud_raw_ex]
      • internal/traffic_signals [default: /perception/traffic_light_recognition/internal/traffic_signals]
      • external/traffic_signals [default: /perception/traffic_light_recognition/external/traffic_signals]
      • judged/traffic_signals [default: /perception/traffic_light_recognition/judged/traffic_signals]
      • output/traffic_signals [default: /perception/traffic_light_recognition/traffic_signals]
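The arguments listed above can be overridden when including the launch files, following the same pattern as the Usage example for perception.launch.xml. As a minimal sketch for traffic_light.launch.xml (the argument values below are placeholders for illustration, not recommended settings — pass the *_param_path arguments your configuration actually requires):

  <include file="$(find-pkg-share tier4_perception_launch)/launch/traffic_light_recognition/traffic_light.launch.xml">
    <!-- placeholder values; adjust to your sensor setup -->
    <arg name="fusion_only" value="false"/>
    <arg name="use_high_accuracy_detection" value="true"/>
    <arg name="high_accuracy_detection_type" value="fine_detection"/>

    <!-- parameter files (required; paths elided here) -->
    <arg name="traffic_light_fine_detector_param_path" value="..."/>
    <arg name="car_traffic_light_classifier_param_path" value="..."/>
    ...
  </include>

To enumerate every declared argument and its default yourself, you can run `ros2 launch tier4_perception_launch traffic_light.launch.xml -s` (the `--show-args` option of the ros2 launch CLI).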

Messages

No message files found.

Services

No service files found.

Plugins

No plugins found.


      • use_irregular_object_detector [default: true]
      • use_low_intensity_cluster_filter [default: true]
      • use_image_segmentation_based_filter [default: false]
      • use_empty_dynamic_object_publisher [default: false]
      • use_object_validator [default: true]
      • objects_validation_method [default: obstacle_pointcloud]
      • use_perception_online_evaluator [default: false]
      • use_perception_analytics_publisher [default: true]
      • use_obstacle_segmentation_single_frame_filter
      • use_obstacle_segmentation_time_series_filter
      • use_camera_vru_detector [default: false]
      • use_cuda_ground_segmentation [default: false]
      • use_traffic_light_recognition
      • traffic_light_recognition/fusion_only
      • traffic_light_recognition/camera_namespaces
      • traffic_light_recognition/use_high_accuracy_detection
      • traffic_light_recognition/high_accuracy_detection_type
      • input_pointcloud_for_traffic_light_occlusion_predictor
      • traffic_light_recognition/whole_image_detection/model_path
      • traffic_light_recognition/whole_image_detection/label_path
      • traffic_light_recognition/fine_detection/model_path
      • traffic_light_recognition/fine_detection/label_path
      • traffic_light_recognition/classification/car/model_path
      • traffic_light_recognition/classification/car/label_path
      • traffic_light_recognition/classification/pedestrian/model_path
      • traffic_light_recognition/classification/pedestrian/label_path
      • use_detection_by_tracker [default: true]
      • use_radar_tracking_fusion [default: true]
      • input/radar [default: /sensing/radar/detected_objects]
      • use_multi_channel_tracker_merger [default: false]
      • output/tracker_merged_objects [default: /perception/object_recognition/detection/objects]
      • downsample_perception_common_pointcloud [default: false]
      • cuda_pointcloud_preprocessing [default: false]
      • common_downsample_voxel_size_x [default: 0.05]
      • common_downsample_voxel_size_y [default: 0.05]
      • common_downsample_voxel_size_z [default: 0.05]
  • launch/traffic_light_recognition/traffic_light.launch.xml
      • enable_image_decompressor [default: true]
      • fusion_only
      • camera_namespaces
      • use_high_accuracy_detection
      • high_accuracy_detection_type
      • each_traffic_light_map_based_detector_param_path
      • traffic_light_fine_detector_param_path
      • yolox_traffic_light_detector_param_path
      • car_traffic_light_classifier_param_path
      • pedestrian_traffic_light_classifier_param_path
      • traffic_light_roi_visualizer_param_path
      • traffic_light_occlusion_predictor_param_path
      • traffic_light_multi_camera_fusion_param_path
      • traffic_light_arbiter_param_path
      • crosswalk_traffic_light_estimator_param_path
      • whole_image_detection/model_path
      • whole_image_detection/label_path
      • fine_detection/model_path
      • fine_detection/label_path
      • classification/car/model_path
      • classification/car/label_path
      • classification/pedestrian/model_path
      • classification/pedestrian/label_path
      • input/vector_map [default: /map/vector_map]
      • input/route [default: /planning/mission_planning/route]
      • input_pointcloud_for_traffic_light_occlusion_predictor [default: /sensing/lidar/top/pointcloud_raw_ex]
      • internal/traffic_signals [default: /perception/traffic_light_recognition/internal/traffic_signals]
      • external/traffic_signals [default: /perception/traffic_light_recognition/external/traffic_signals]
      • judged/traffic_signals [default: /perception/traffic_light_recognition/judged/traffic_signals]
      • output/traffic_signals [default: /perception/traffic_light_recognition/traffic_signals]
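Like perception.launch.xml, the traffic light pipeline can be included on its own. A minimal sketch, assuming the default topic remappings above; the camera namespace list, the detection-type value, and all file paths below are placeholders for illustration, not shipped defaults:

```xml
<include file="$(find-pkg-share tier4_perception_launch)/launch/traffic_light_recognition/traffic_light.launch.xml">
  <!-- Arguments without defaults must be provided explicitly -->
  <arg name="fusion_only" value="false"/>
  <arg name="camera_namespaces" value="[camera6, camera7]"/>  <!-- placeholder camera set -->
  <arg name="use_high_accuracy_detection" value="true"/>
  <arg name="high_accuracy_detection_type" value="fine_detection"/>  <!-- assumed value -->
  <!-- Parameter files (placeholder paths) -->
  <arg name="traffic_light_fine_detector_param_path" value="/path/to/traffic_light_fine_detector.param.yaml"/>
  <arg name="traffic_light_arbiter_param_path" value="/path/to/traffic_light_arbiter.param.yaml"/>
  <!-- ...remaining *_param_path and model/label path arguments... -->
</include>
```

The remaining arguments with defaults (input/vector_map, input/route, and the traffic-signal topics) can be left unset unless your topic layout differs.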

Messages

No message files found.

Services

No service files found.

Plugins

No plugins found.


  • Contributors: Ryohsuke Mitsudome, Yoshi Ri, Yuxuan Liu

0.48.0 (2025-11-18)

  • Merge remote-tracking branch 'origin/main' into humble

  • feat(image_object_locator): add near range camera VRU detector to perception pipeline (#11622) add near range camera VRU detector to perception pipeline

  • feat(multi object tracker): publish merged object if it is multi-channel mode (#11386)

    • feat(multi_object_tracker): add support for merged object output and related parameters
    • feat(multi_object_tracker): add function to convert DynamicObject to DetectedObject and implement merged object publishing
    • fix(multi_object_tracker): prevent merged objects publisher from being in input channel topics
    • fix(multi_object_tracker): improve warning message for merged objects publisher in input channel
    • feat(multi_object_tracker): add is_simulation parameter to control merged object publishing
    • fix(multi_object_tracker): correct ego_frame_id variable usage and declaration
    • feat(multi_object_tracker): update getMergedObjects to accept transform and apply frame conversion
    • feat(multi_object_tracker): optimize getMergedObjects for efficient frame transformation
    • fix(multi_object_tracker): fix bug when merged_objects_pub_ is nullptr
    • feat(multi_object_tracker): refactor orientation availability conversion to improve code clarity
    • fix(multi_object_tracker): remove redundant comment in publish method for clarity
    • feat(multi_object_tracker): rename parameters for clarity and add publish_merged_objects option
    • fix(multi_object_tracker): rename pruning parameters for consistency in schema

    • Update perception/autoware_multi_object_tracker/src/processor/processor.cpp (Co-authored-by: Yoshi Ri <yoshiyoshidetteiu@gmail.com>)
    • feat(multi_object_tracker): replace 'is_simulation' with 'publish_merged_objects' in launch files and parameters (Co-authored-by: Yoshi Ri <yoshiyoshidetteiu@gmail.com>)

  • fix(camera_2d_detector): typo (#11380)

  • feat(launch): add args to select the 2d camera detection model (#11364)

    • add args
    • add color map path

    • give color_map_path to yolox.launch (Co-authored-by: badai nguyen <94814556+badai-nguyen@users.noreply.github.com>)


Package Dependencies

System Dependencies

No direct system dependencies.

Launch files

  • launch/object_recognition/detection/detection.launch.xml
      • mode
      • lidar_detection_model_type
      • lidar_detection_model_name
      • use_short_range_detection
      • lidar_short_range_detection_model_type
      • lidar_short_range_detection_model_name
      • use_object_filter
      • objects_filter_method
      • use_pointcloud_map
      • use_detection_by_tracker
      • use_validator
      • objects_validation_method
      • use_low_intensity_cluster_filter
      • use_image_segmentation_based_filter
      • use_multi_channel_tracker_merger
      • use_radar_tracking_fusion
      • use_irregular_object_detector
      • irregular_object_detector_fusion_camera_ids [default: [0]]
      • ml_camera_lidar_merger_priority_mode
      • camera_2d_detector/model_path
      • camera_2d_detector/label_path
      • camera_2d_detector/color_map_path
      • use_camera_vru_detector
      • camera_vru_detector_rois_ids [default: [0]]
      • number_of_cameras
      • node/pointcloud_container
      • input/pointcloud
      • input/obstacle_segmentation/pointcloud [default: /perception/obstacle_segmentation/pointcloud]
      • input/camera0/image
      • input/camera0/info
      • input/camera0/rois
      • input/camera1/image
      • input/camera1/info
      • input/camera1/rois
      • input/camera2/image
      • input/camera2/info
      • input/camera2/rois
      • input/camera3/image
      • input/camera3/info
      • input/camera3/rois
      • input/camera4/image
      • input/camera4/info
      • input/camera4/rois
      • input/camera5/image
      • input/camera5/info
      • input/camera5/rois
      • input/camera6/image
      • input/camera6/info
      • input/camera6/rois
      • input/camera7/image
      • input/camera7/info
      • input/camera7/rois
      • input/camera8/image
      • input/camera8/info
      • input/camera8/rois
      • input/concatenation_info
      • image_topic_name
      • segmentation_pointcloud_fusion_camera_ids
      • input/radar
      • input/tracked_objects [default: /perception/object_recognition/tracking/objects]
      • output/objects [default: objects]
      • sync_param_path
      • voxel_grid_based_euclidean_param_path
      • irregular_object_detector_param_path
      • object_recognition_detection_object_sorter_radar_param_path
  • launch/object_recognition/detection/detector/camera_2d_detector.launch.xml
      • image_raw0 [default: /sensing/camera/camera0/image_raw]
      • image_raw1 [default: /sensing/camera/camera1/image_raw]
      • image_raw2 [default: /sensing/camera/camera2/image_raw]
      • image_raw3 [default: /sensing/camera/camera3/image_raw]
      • image_raw4 [default: /sensing/camera/camera4/image_raw]
      • image_raw5 [default: /sensing/camera/camera5/image_raw]
      • image_raw6 [default: /sensing/camera/camera6/image_raw]
      • image_raw7 [default: /sensing/camera/camera7/image_raw]
      • image_raw8 [default: /sensing/camera/camera8/image_raw]
      • image_raw9 [default: /sensing/camera/camera9/image_raw]
      • image_number [default: 1]
      • camera_index [default: 0]
      • use_bytetrack [default: true]
      • enable_visualizer [default: false]
      • camera_2d_detector/model_path
      • camera_2d_detector/label_path
      • camera_2d_detector/color_map_path
      • tensorrt_yolox_ns [default: ]
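This launcher can be included once per camera. A minimal sketch for camera 0, assuming the default data path; the model, label, and color-map file names are placeholders, not verified artifact names:

```xml
<include file="$(find-pkg-share tier4_perception_launch)/launch/object_recognition/detection/detector/camera_2d_detector.launch.xml">
  <arg name="image_number" value="1"/>
  <arg name="camera_index" value="0"/>
  <arg name="use_bytetrack" value="true"/>
  <arg name="enable_visualizer" value="false"/>
  <!-- Model artifacts (placeholder file names) -->
  <arg name="camera_2d_detector/model_path" value="$(env HOME)/autoware_data/tensorrt_yolox/model.onnx"/>
  <arg name="camera_2d_detector/label_path" value="$(env HOME)/autoware_data/tensorrt_yolox/label.txt"/>
  <arg name="camera_2d_detector/color_map_path" value="$(env HOME)/autoware_data/tensorrt_yolox/color_map.csv"/>
</include>
```

With image_number set to 1 and camera_index set to 0, only the image_raw0 topic (default /sensing/camera/camera0/image_raw) is consumed.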
  • launch/object_recognition/detection/detector/camera_bev_detector.launch.xml
      • input/camera0/image
      • input/camera0/info
      • input/camera1/image
      • input/camera1/info
      • input/camera2/image
      • input/camera2/info
      • input/camera3/image
      • input/camera3/info
      • input/camera4/image
      • input/camera4/info
      • input/camera5/image
      • input/camera5/info
      • input/camera6/image
      • input/camera6/info
      • input/camera7/image
      • input/camera7/info
      • output/objects
      • number_of_cameras
      • data_path [default: $(env HOME)/autoware_data]
      • bevdet_model_name [default: bevdet_one_lt_d]
      • bevdet_model_path [default: $(var data_path)/tensorrt_bevdet]
  • launch/object_recognition/detection/detector/camera_lidar_detector.launch.xml
      • ns
      • lidar_detection_model_type
      • lidar_detection_model_name
      • use_low_intensity_cluster_filter
      • use_image_segmentation_based_filter
      • number_of_cameras
      • input/camera0/image
      • input/camera0/info
      • input/camera0/rois
      • input/camera1/image
      • input/camera1/info
      • input/camera1/rois
      • input/camera2/image
      • input/camera2/info
      • input/camera2/rois
      • input/camera3/image
      • input/camera3/info
      • input/camera3/rois
      • input/camera4/image
      • input/camera4/info
      • input/camera4/rois
      • input/camera5/image
      • input/camera5/info
      • input/camera5/rois
      • input/camera6/image
      • input/camera6/info
      • input/camera6/rois
      • input/camera7/image
      • input/camera7/info
      • input/camera7/rois
      • input/camera8/image
      • input/camera8/info
      • input/camera8/rois
      • input/concatenation_info
      • segmentation_pointcloud_fusion_camera_ids
      • image_topic_name
      • sync_param_path
      • voxel_grid_based_euclidean_param_path
      • node/pointcloud_container
      • input/pointcloud
      • input/pointcloud_map/pointcloud
      • input/obstacle_segmentation/pointcloud
      • output/ml_detector/objects
      • output/rule_detector/objects
      • output/clustering/cluster_objects
      • camera_2d_detector/model_path
      • camera_2d_detector/label_path
      • camera_2d_detector/color_map_path
      • enable_2d_detection [default: false]
  • launch/object_recognition/detection/detector/camera_lidar_irregular_object_detector.launch.xml
      • ns
      • pipeline_ns
      • input/concatenation_info
      • input/pointcloud
      • fusion_camera_ids [default: [0]]
      • image_topic_name [default: image_raw]
      • irregular_object_detector_param_path
      • sync_param_path
  • launch/object_recognition/detection/detector/camera_vru_detector.launch.xml
      • ns
      • input/camera0/info [default: /sensing/camera/camera0/camera_info]
      • input/camera0/rois [default: /perception/object_recognition/detection/rois0]
      • input/camera1/info [default: /sensing/camera/camera1/camera_info]
      • input/camera1/rois [default: /perception/object_recognition/detection/rois1]
      • input/camera2/info [default: /sensing/camera/camera2/camera_info]
      • input/camera2/rois [default: /perception/object_recognition/detection/rois2]
      • input/camera3/info [default: /sensing/camera/camera3/camera_info]
      • input/camera3/rois [default: /perception/object_recognition/detection/rois3]
      • input/camera4/info [default: /sensing/camera/camera4/camera_info]
      • input/camera4/rois [default: /perception/object_recognition/detection/rois4]
      • input/camera5/info [default: /sensing/camera/camera5/camera_info]
      • input/camera5/rois [default: /perception/object_recognition/detection/rois5]
      • input/camera6/info [default: /sensing/camera/camera6/camera_info]
      • input/camera6/rois [default: /perception/object_recognition/detection/rois6]
      • input/camera7/info [default: /sensing/camera/camera7/camera_info]
      • input/camera7/rois [default: /perception/object_recognition/detection/rois7]
      • output/objects [default: /perception/object_recognition/detection/camera_vru/objects]
      • bbox_object_locator_param_path [default: $(find-pkg-share autoware_image_object_locator)/config/bbox_object_locator.param.yaml]
      • rois_ids [default: [0, 1]]
  • launch/object_recognition/detection/detector/lidar_dnn_detector.launch.xml
      • lidar_detection_model_type
      • lidar_detection_model_name
      • bevfusion_model_path [default: $(var data_path)/bevfusion]
      • centerpoint_model_path [default: $(var data_path)/lidar_centerpoint]
      • transfusion_model_path [default: $(var data_path)/lidar_transfusion]
      • use_short_range_detection [default: false]
      • lidar_short_range_detection_model_type
      • lidar_short_range_detection_model_name
      • short_range_centerpoint_model_path [default: $(var data_path)/lidar_short_range_centerpoint]
      • node/pointcloud_container
      • input/pointcloud
      • output/objects
      • output/short_range_objects
      • lidar_short_range_model_param_path [default: $(find-pkg-share autoware_lidar_centerpoint)/config]
      • lidar_model_param_path [default: $(find-pkg-share autoware_bevfusion)/config]
      • lidar_model_param_path [default: $(find-pkg-share autoware_lidar_transfusion)/config]
      • lidar_model_param_path [default: $(find-pkg-share autoware_lidar_centerpoint)/config]
  • launch/object_recognition/detection/detector/lidar_rule_detector.launch.xml
      • ns
      • node/pointcloud_container
      • input/pointcloud_map/pointcloud
      • input/obstacle_segmentation/pointcloud
      • output/cluster_objects
      • output/objects
      • voxel_grid_based_euclidean_param_path
  • launch/object_recognition/detection/detector/tracker_based_detector.launch.xml
      • input/clusters
      • input/tracked_objects
      • output/objects
  • launch/object_recognition/detection/filter/object_filter.launch.xml
      • objects_filter_method [default: lanelet_filter]
      • input/objects
      • output/objects
  • launch/object_recognition/detection/filter/object_validator.launch.xml
      • objects_validation_method
      • input/obstacle_pointcloud
      • input/objects
      • output/objects
  • launch/object_recognition/detection/filter/radar_filter.launch.xml
      • object_sorter_param_path [default: $(var object_recognition_detection_object_sorter_radar_param_path)]
      • radar_lanelet_filtering_range_param_path [default: $(find-pkg-share autoware_detected_object_validation)/config/object_lanelet_filter.param.yaml]
      • input/radar
      • output/objects
  • launch/object_recognition/detection/merger/camera_lidar_merger.launch.xml
      • object_recognition_detection_object_merger_data_association_matrix_param_path [default: $(find-pkg-share autoware_object_merger)/config/data_association_matrix.param.yaml]
      • object_recognition_detection_object_merger_distance_threshold_list_path [default: $(find-pkg-share autoware_object_merger)/config/overlapped_judge.param.yaml]
      • ml_camera_lidar_object_association_merger_param_path [default: $(find-pkg-share autoware_object_merger)/config/object_association_merger.param.yaml]
      • lidar_detection_model_type
      • use_detection_by_tracker
      • use_irregular_object_detector
      • use_object_filter
      • objects_filter_method
      • number_of_cameras
      • input/camera0/image
      • input/camera0/info
      • input/camera0/rois
      • input/camera1/image
      • input/camera1/info
      • input/camera1/rois
      • input/camera2/image
      • input/camera2/info
      • input/camera2/rois
      • input/camera3/image
      • input/camera3/info
      • input/camera3/rois
      • input/camera4/image
      • input/camera4/info
      • input/camera4/rois
      • input/camera5/image
      • input/camera5/info
      • input/camera5/rois
      • input/camera6/image
      • input/camera6/info
      • input/camera6/rois
      • input/camera7/image
      • input/camera7/info
      • input/camera7/rois
      • input/camera8/image
      • input/camera8/info
      • input/camera8/rois
      • input/lidar_ml/objects
      • input/lidar_rule/objects
      • input/detection_by_tracker/objects
      • output/objects [default: objects]
      • alpha_merger_priority_mode [default: 0]
  • launch/object_recognition/detection/merger/camera_lidar_radar_merger.launch.xml
      • object_recognition_detection_object_merger_data_association_matrix_param_path [default: $(find-pkg-share autoware_object_merger)/config/data_association_matrix.param.yaml]
      • object_recognition_detection_object_merger_distance_threshold_list_path [default: $(find-pkg-share autoware_object_merger)/config/overlapped_judge.param.yaml]
      • ml_camera_lidar_object_association_merger_param_path [default: $(find-pkg-share autoware_object_merger)/config/object_association_merger.param.yaml]
      • far_object_merger_sync_queue_size [default: 20]
      • lidar_detection_model_type
      • use_radar_tracking_fusion
      • use_detection_by_tracker
      • use_irregular_object_detector
      • use_object_filter
      • objects_filter_method
      • number_of_cameras
      • input/camera0/image
      • input/camera0/info
      • input/camera0/rois
      • input/camera1/image
      • input/camera1/info
      • input/camera1/rois
      • input/camera2/image
      • input/camera2/info
      • input/camera2/rois
      • input/camera3/image
      • input/camera3/info
      • input/camera3/rois
      • input/camera4/image
      • input/camera4/info
      • input/camera4/rois
      • input/camera5/image
      • input/camera5/info
      • input/camera5/rois
      • input/camera6/image
      • input/camera6/info
      • input/camera6/rois
      • input/camera7/image
      • input/camera7/info
      • input/camera7/rois
      • input/camera8/image
      • input/camera8/info
      • input/camera8/rois
      • input/lidar_ml/objects
      • input/lidar_rule/objects
      • input/radar/objects
      • input/radar_far/objects
      • input/detection_by_tracker/objects
      • output/objects [default: objects]
      • alpha_merger_priority_mode [default: 0]
  • launch/object_recognition/detection/merger/lidar_merger.launch.xml
      • object_recognition_detection_object_merger_data_association_matrix_param_path [default: $(find-pkg-share autoware_object_merger)/config/data_association_matrix.param.yaml]
      • object_recognition_detection_object_merger_distance_threshold_list_path [default: $(find-pkg-share autoware_object_merger)/config/overlapped_judge.param.yaml]
      • lidar_detection_model_type
      • use_detection_by_tracker
      • use_object_filter
      • objects_filter_method
      • input/lidar_ml/objects [default: $(var lidar_detection_model_type)/objects]
      • input/lidar_rule/objects [default: clustering/objects]
      • input/detection_by_tracker/objects [default: detection_by_tracker/objects]
      • output/objects
  • launch/object_recognition/prediction/prediction.launch.xml
      • use_vector_map [default: false]
      • prediction_model_type [default: map_based]
      • input/objects [default: /perception/object_recognition/tracking/objects]
  • launch/object_recognition/tracking/tracking.launch.xml
      • object_recognition_tracking_radar_tracked_object_sorter_param_path
      • object_recognition_tracking_radar_tracked_object_lanelet_filter_param_path
      • object_recognition_tracking_object_merger_data_association_matrix_param_path
      • object_recognition_tracking_object_merger_node_param_path
      • mode [default: lidar]
      • use_radar_tracking_fusion [default: false]
      • use_multi_channel_tracker_merger
      • use_validator
      • use_short_range_detection
      • use_camera_vru_detector
      • publish_merged_objects
      • lidar_detection_model_type [default: centerpoint]
      • input/merged_detection/channel [default: detected_objects]
      • input/merged_detection/objects [default: /perception/object_recognition/detection/objects]
      • input/lidar_dnn/channel [default: lidar_$(var lidar_detection_model_type)]
      • input/lidar_dnn/objects [default: /perception/object_recognition/detection/$(var lidar_detection_model_type)/objects]
      • input/lidar_dnn_validated/objects [default: /perception/object_recognition/detection/$(var lidar_detection_model_type)/validation/objects]
      • input/lidar_dnn_short_range/channel [default: lidar_$(var lidar_short_range_detection_model_type)]
      • input/lidar_dnn_short_range/objects [default: /perception/object_recognition/detection/$(var lidar_short_range_detection_model_type)/objects]
      • input/camera_lidar_rule_detector/channel [default: camera_lidar_fusion]
      • input/camera_lidar_rule_detector/objects [default: /perception/object_recognition/detection/clustering/camera_lidar_fusion/objects]
      • input/irregular_object_detector/channel [default: camera_lidar_fusion_irregular]
      • input/irregular_object_detector/objects [default: /perception/object_recognition/detection/irregular_object/objects]
      • input/tracker_based_detector/channel [default: detection_by_tracker]
      • input/tracker_based_detector/objects [default: /perception/object_recognition/detection/detection_by_tracker/objects]
      • input/radar/channel [default: radar]
      • input/radar/far_objects [default: /perception/object_recognition/detection/radar/far_objects]
      • input/radar/objects [default: /perception/object_recognition/detection/radar/objects]
      • input/radar/tracked_objects [default: /sensing/radar/tracked_objects]
      • input/camera_only/objects [default: /perception/object_recognition/detection/camera_only/objects]
      • input/camera_only/channel [default: camera_streampetr]
      • input/camera_vru/channel [default: camera_vru]
      • input/camera_vru/objects [default: /perception/object_recognition/detection/camera_vru/objects]
      • output/objects [default: $(var ns)/objects]
      • output/merged_objects [default: $(var ns)/merged_objects]
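A minimal sketch of including the tracking launcher with the multi-channel tracker merger enabled; the four *_param_path values are placeholders, and the input topics fall back to the defaults listed above:

```xml
<include file="$(find-pkg-share tier4_perception_launch)/launch/object_recognition/tracking/tracking.launch.xml">
  <arg name="mode" value="camera_lidar_fusion"/>
  <arg name="use_multi_channel_tracker_merger" value="true"/>
  <arg name="use_validator" value="true"/>
  <arg name="use_short_range_detection" value="false"/>
  <arg name="use_camera_vru_detector" value="false"/>
  <arg name="publish_merged_objects" value="true"/>
  <!-- Parameter files (placeholder paths) -->
  <arg name="object_recognition_tracking_radar_tracked_object_sorter_param_path" value="/path/to/sorter.param.yaml"/>
  <arg name="object_recognition_tracking_radar_tracked_object_lanelet_filter_param_path" value="/path/to/lanelet_filter.param.yaml"/>
  <arg name="object_recognition_tracking_object_merger_data_association_matrix_param_path" value="/path/to/data_association_matrix.param.yaml"/>
  <arg name="object_recognition_tracking_object_merger_node_param_path" value="/path/to/merger_node.param.yaml"/>
</include>
```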
  • launch/occupancy_grid_map/probabilistic_occupancy_grid_map.launch.xml
      • input/obstacle_pointcloud [default: concatenated/pointcloud]
      • input/raw_pointcloud [default: no_ground/oneshot/pointcloud]
      • output [default: /perception/occupancy_grid_map/map]
      • use_intra_process [default: false]
      • use_multithread [default: false]
      • pointcloud_container_name [default: pointcloud_container]
      • occupancy_grid_map_method
      • occupancy_grid_map_param_path
      • occupancy_grid_map_updater
      • occupancy_grid_map_updater_param_path
      • input_obstacle_pointcloud [default: false]
      • input_obstacle_and_raw_pointcloud [default: true]
      • use_pointcloud_container [default: true]
  • launch/perception.launch.xml
      • object_recognition_detection_euclidean_cluster_param_path
      • object_recognition_detection_outlier_param_path
      • object_recognition_detection_object_lanelet_filter_param_path
      • object_recognition_detection_object_position_filter_param_path
      • object_recognition_detection_pointcloud_map_filter_param_path
      • object_recognition_prediction_map_based_prediction_param_path
      • object_recognition_detection_object_merger_data_association_matrix_param_path
      • ml_camera_lidar_object_association_merger_param_path
      • object_recognition_detection_object_merger_distance_threshold_list_path
      • object_recognition_detection_fusion_sync_param_path
      • object_recognition_detection_roi_cluster_fusion_param_path
      • object_recognition_detection_irregular_object_detector_param_path
      • object_recognition_detection_roi_detected_object_fusion_param_path
      • object_recognition_detection_near_range_camera_vru_param_path
      • object_recognition_detection_pointpainting_fusion_common_param_path
      • object_recognition_detection_lidar_model_param_path
      • object_recognition_detection_radar_lanelet_filtering_range_param_path
      • object_recognition_detection_object_sorter_radar_param_path
      • object_recognition_tracking_multi_object_tracker_data_association_matrix_param_path
      • object_recognition_tracking_multi_object_tracker_input_channels_param_path
      • object_recognition_tracking_multi_object_tracker_node_param_path
      • object_recognition_tracking_radar_tracked_object_sorter_param_path
      • object_recognition_tracking_radar_tracked_object_lanelet_filter_param_path
      • obstacle_segmentation_ground_segmentation_param_path
      • obstacle_segmentation_ground_segmentation_elevation_map_param_path
      • object_recognition_detection_obstacle_pointcloud_based_validator_param_path
      • object_recognition_detection_detection_by_tracker_param
      • occupancy_grid_map_method
      • occupancy_grid_map_param_path
      • occupancy_grid_map_updater
      • occupancy_grid_map_updater_param_path
      • lidar_detection_model
      • each_traffic_light_map_based_detector_param_path
      • traffic_light_fine_detector_param_path
      • yolox_traffic_light_detector_param_path
      • car_traffic_light_classifier_param_path
      • pedestrian_traffic_light_classifier_param_path
      • traffic_light_roi_visualizer_param_path
      • traffic_light_occlusion_predictor_param_path
      • traffic_light_multi_camera_fusion_param_path
      • traffic_light_arbiter_param_path
      • crosswalk_traffic_light_estimator_param_path
      • tracker_publish_merged_objects
      • use_short_range_detection [default: false]
      • lidar_short_range_detection_model_type [default: centerpoint_short_range]
      • lidar_short_range_detection_model_name [default: centerpoint_short_range]
      • bevfusion_model_path [default: $(var data_path)/bevfusion]
      • centerpoint_model_path [default: $(var data_path)/lidar_centerpoint]
      • transfusion_model_path [default: $(var data_path)/lidar_transfusion]
      • short_range_centerpoint_model_path [default: $(var data_path)/lidar_short_range_centerpoint]
      • pointpainting_model_path [default: $(var data_path)/image_projection_based_fusion]
      • camera_2d_detector/model_path
      • camera_2d_detector/label_path
      • camera_2d_detector/color_map_path
      • input/pointcloud [default: /sensing/lidar/concatenated/pointcloud]
      • mode [default: camera_lidar_fusion]
      • data_path [default: $(env HOME)/autoware_data]
      • image_raw0 [default: /sensing/camera/camera0/image_rect_color]
      • camera_info0 [default: /sensing/camera/camera0/camera_info]
      • detection_rois0 [default: /perception/object_recognition/detection/rois0]
      • image_raw1 [default: /sensing/camera/camera1/image_rect_color]
      • camera_info1 [default: /sensing/camera/camera1/camera_info]
      • detection_rois1 [default: /perception/object_recognition/detection/rois1]
      • image_raw2 [default: /sensing/camera/camera2/image_rect_color]
      • camera_info2 [default: /sensing/camera/camera2/camera_info]
      • detection_rois2 [default: /perception/object_recognition/detection/rois2]
      • image_raw3 [default: /sensing/camera/camera3/image_rect_color]
      • camera_info3 [default: /sensing/camera/camera3/camera_info]
      • detection_rois3 [default: /perception/object_recognition/detection/rois3]
      • image_raw4 [default: /sensing/camera/camera4/image_rect_color]
      • camera_info4 [default: /sensing/camera/camera4/camera_info]
      • detection_rois4 [default: /perception/object_recognition/detection/rois4]
      • image_raw5 [default: /sensing/camera/camera5/image_rect_color]
      • camera_info5 [default: /sensing/camera/camera5/camera_info]
      • detection_rois5 [default: /perception/object_recognition/detection/rois5]
      • image_raw6 [default: /sensing/camera/camera6/image_rect_color]
      • camera_info6 [default: /sensing/camera/camera6/camera_info]
      • detection_rois6 [default: /perception/object_recognition/detection/rois6]
      • image_raw7 [default: /sensing/camera/camera7/image_rect_color]
      • camera_info7 [default: /sensing/camera/camera7/camera_info]
      • detection_rois7 [default: /perception/object_recognition/detection/rois7]
      • image_raw8 [default: /sensing/camera/camera8/image_rect_color]
      • camera_info8 [default: /sensing/camera/camera8/camera_info]
      • detection_rois8 [default: /perception/object_recognition/detection/rois8]
      • image_number [default: 6]
      • image_topic_name [default: image_rect_color]
      • segmentation_pointcloud_fusion_camera_ids [default: [0,1,5]]
      • camera_vru_detector_rois_ids [default: [0]]
      • ml_camera_lidar_merger_priority_mode [default: 0]
      • pointcloud_container_name [default: pointcloud_container]
      • input/concatenation_info [default: /sensing/lidar/concatenated/pointcloud_info]
      • use_vector_map [default: true]
      • use_pointcloud_map [default: true]
      • use_low_height_cropbox [default: true]
      • use_object_filter [default: true]
      • objects_filter_method [default: lanelet_filter]
      • use_irregular_object_detector [default: true]
      • use_low_intensity_cluster_filter [default: true]
      • use_image_segmentation_based_filter [default: false]
      • use_empty_dynamic_object_publisher [default: false]
      • use_object_validator [default: true]
      • objects_validation_method [default: obstacle_pointcloud]
      • use_perception_online_evaluator [default: false]
      • use_perception_analytics_publisher [default: true]
      • use_obstacle_segmentation_single_frame_filter
      • use_obstacle_segmentation_time_series_filter
      • use_camera_vru_detector [default: false]
      • use_cuda_ground_segmentation [default: false]
      • use_traffic_light_recognition
      • traffic_light_recognition/fusion_only
      • traffic_light_recognition/camera_namespaces
      • traffic_light_recognition/use_high_accuracy_detection
      • traffic_light_recognition/high_accuracy_detection_type
      • input_pointcloud_for_traffic_light_occlusion_predictor
      • traffic_light_recognition/whole_image_detection/model_path
      • traffic_light_recognition/whole_image_detection/label_path
      • traffic_light_recognition/fine_detection/model_path
      • traffic_light_recognition/fine_detection/label_path
      • traffic_light_recognition/classification/car/model_path
      • traffic_light_recognition/classification/car/label_path
      • traffic_light_recognition/classification/pedestrian/model_path
      • traffic_light_recognition/classification/pedestrian/label_path
      • use_detection_by_tracker [default: true]
      • use_radar_tracking_fusion [default: true]
      • input/radar [default: /sensing/radar/detected_objects]
      • use_multi_channel_tracker_merger [default: false]
      • output/tracker_merged_objects [default: /perception/object_recognition/detection/objects]
      • downsample_perception_common_pointcloud [default: false]
      • cuda_pointcloud_preprocessing [default: false]
      • common_downsample_voxel_size_x [default: 0.05]
      • common_downsample_voxel_size_y [default: 0.05]
      • common_downsample_voxel_size_z [default: 0.05]
  • launch/traffic_light_recognition/traffic_light.launch.xml
      • enable_image_decompressor [default: true]
      • fusion_only
      • camera_namespaces
      • use_high_accuracy_detection
      • high_accuracy_detection_type
      • each_traffic_light_map_based_detector_param_path
      • traffic_light_fine_detector_param_path
      • yolox_traffic_light_detector_param_path
      • car_traffic_light_classifier_param_path
      • pedestrian_traffic_light_classifier_param_path
      • traffic_light_roi_visualizer_param_path
      • traffic_light_occlusion_predictor_param_path
      • traffic_light_multi_camera_fusion_param_path
      • traffic_light_arbiter_param_path
      • crosswalk_traffic_light_estimator_param_path
      • whole_image_detection/model_path
      • whole_image_detection/label_path
      • fine_detection/model_path
      • fine_detection/label_path
      • classification/car/model_path
      • classification/car/label_path
      • classification/pedestrian/model_path
      • classification/pedestrian/label_path
      • input/vector_map [default: /map/vector_map]
      • input/route [default: /planning/mission_planning/route]
      • input_pointcloud_for_traffic_light_occlusion_predictor [default: /sensing/lidar/top/pointcloud_raw_ex]
      • internal/traffic_signals [default: /perception/traffic_light_recognition/internal/traffic_signals]
      • external/traffic_signals [default: /perception/traffic_light_recognition/external/traffic_signals]
      • judged/traffic_signals [default: /perception/traffic_light_recognition/judged/traffic_signals]
      • output/traffic_signals [default: /perception/traffic_light_recognition/traffic_signals]
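As a concrete sketch of how these arguments are wired together, the traffic light launcher can also be included on its own. The argument names below come from the list above; the camera namespaces and parameter-path values are illustrative placeholders, not defaults:

```xml
<!-- Illustrative include of the traffic light recognition pipeline.
     Argument names are taken from the list above; values are placeholders. -->
<include file="$(find-pkg-share tier4_perception_launch)/launch/traffic_light_recognition/traffic_light.launch.xml">
  <arg name="fusion_only" value="false"/>
  <arg name="camera_namespaces" value="[camera6, camera7]"/>
  <arg name="use_high_accuracy_detection" value="true"/>
  <arg name="high_accuracy_detection_type" value="fine_detection"/>
  <!-- parameter files must be provided by the caller -->
  <arg name="traffic_light_fine_detector_param_path" value="..."/>
  <arg name="traffic_light_arbiter_param_path" value="..."/>
</include>
```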

Messages

No message files found.

Services

No service files found.

Plugins

No plugins found.
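Since the package exposes no interfaces of its own, its surface is the launch arguments above. For example, the numbered camera arguments of perception.launch.xml (image_rawN, camera_infoN, image_number) let you wire each physical camera into the fusion pipeline when including it. A minimal two-camera sketch; the topic names here are illustrative, not defaults:

```xml
<include file="$(find-pkg-share tier4_perception_launch)/launch/perception.launch.xml">
  <arg name="mode" value="camera_lidar_fusion"/>
  <!-- number of cameras actually wired up -->
  <arg name="image_number" value="2"/>
  <!-- camera 0: illustrative topic names -->
  <arg name="image_raw0" value="/sensing/camera/front/image_rect_color"/>
  <arg name="camera_info0" value="/sensing/camera/front/camera_info"/>
  <!-- camera 1 -->
  <arg name="image_raw1" value="/sensing/camera/rear/image_rect_color"/>
  <arg name="camera_info1" value="/sensing/camera/rear/camera_info"/>
</include>
```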


CHANGELOG

Changelog for package tier4_perception_launch

0.50.0 (2026-02-13)

  • Merge remote-tracking branch 'origin/main' into tmp/bot/bump_version_base
  • chore: import tier4 launchers from universe (#1740)
  • Contributors: Taeseung Sohn, github-actions

0.49.0 (2025-12-30)

  • Merge remote-tracking branch 'origin/main' into prepare-0.49.0-changelog

  • feat: add option for gpu-preprocessing in perception launch (#11728)

    • add option for GPU preprocessing

    • Rename CUDA pointclouds argument in perception launch (Co-authored-by: Yoshi Ri <yoshiyoshidetteiu@gmail.com>)

  • feat(camera_streampetr): add camera streampetr to tracker input (#11635)

  • Contributors: Ryohsuke Mitsudome, Yoshi Ri, Yuxuan Liu

0.48.0 (2025-11-18)

  • Merge remote-tracking branch 'origin/main' into humble

  • feat(image_object_locator): add near range camera VRU detector to perception pipeline (#11622)

  • feat(multi_object_tracker): publish merged objects in multi-channel mode (#11386)

    • feat(multi_object_tracker): add support for merged object output and related parameters
    • feat(multi_object_tracker): add function to convert DynamicObject to DetectedObject and implement merged object publishing
    • fix(multi_object_tracker): prevent merged objects publisher from being in input channel topics
    • fix(multi_object_tracker): improve warning message for merged objects publisher in input channel
    • feat(multi_object_tracker): add is_simulation parameter to control merged object publishing
    • fix(multi_object_tracker): correct ego_frame_id variable usage and declaration
    • feat(multi_object_tracker): update getMergedObjects to accept transform and apply frame conversion
    • feat(multi_object_tracker): optimize getMergedObjects for efficient frame transformation
    • fix(multi_object_tracker): fix bug when merged_objects_pub_ is nullptr
    • feat(multi_object_tracker): refactor orientation availability conversion to improve code clarity
    • fix(multi_object_tracker): remove redundant comment in publish method for clarity
    • feat(multi_object_tracker): rename parameters for clarity and add publish_merged_objects option
    • fix(multi_object_tracker): rename pruning parameters for consistency in schema

    • Update perception/autoware_multi_object_tracker/src/processor/processor.cpp (Co-authored-by: Yoshi Ri <yoshiyoshidetteiu@gmail.com>)

    • feat(multi_object_tracker): replace 'is_simulation' with 'publish_merged_objects' in launch files and parameters (Co-authored-by: Yoshi Ri <yoshiyoshidetteiu@gmail.com>)

  • fix(camera_2d_detector): typo (#11380)

  • feat(launch): add args to select the 2d camera detection model (#11364)

    • add args
    • add color map path

    • give color_map_path to yolox.launch (Co-authored-by: badai nguyen <94814556+badai-nguyen@users.noreply.github.com>)

File truncated at 100 lines; see the full file for the complete changelog.

Package Dependencies

System Dependencies

No direct system dependencies.

Launch files

  • launch/object_recognition/detection/detection.launch.xml
      • mode
      • lidar_detection_model_type
      • lidar_detection_model_name
      • use_short_range_detection
      • lidar_short_range_detection_model_type
      • lidar_short_range_detection_model_name
      • use_object_filter
      • objects_filter_method
      • use_pointcloud_map
      • use_detection_by_tracker
      • use_validator
      • objects_validation_method
      • use_low_intensity_cluster_filter
      • use_image_segmentation_based_filter
      • use_multi_channel_tracker_merger
      • use_radar_tracking_fusion
      • use_irregular_object_detector
      • irregular_object_detector_fusion_camera_ids [default: [0]]
      • ml_camera_lidar_merger_priority_mode
      • camera_2d_detector/model_path
      • camera_2d_detector/label_path
      • camera_2d_detector/color_map_path
      • use_camera_vru_detector
      • camera_vru_detector_rois_ids [default: [0]]
      • number_of_cameras
      • node/pointcloud_container
      • input/pointcloud
      • input/obstacle_segmentation/pointcloud [default: /perception/obstacle_segmentation/pointcloud]
      • input/camera0/image
      • input/camera0/info
      • input/camera0/rois
      • input/camera1/image
      • input/camera1/info
      • input/camera1/rois
      • input/camera2/image
      • input/camera2/info
      • input/camera2/rois
      • input/camera3/image
      • input/camera3/info
      • input/camera3/rois
      • input/camera4/image
      • input/camera4/info
      • input/camera4/rois
      • input/camera5/image
      • input/camera5/info
      • input/camera5/rois
      • input/camera6/image
      • input/camera6/info
      • input/camera6/rois
      • input/camera7/image
      • input/camera7/info
      • input/camera7/rois
      • input/camera8/image
      • input/camera8/info
      • input/camera8/rois
      • input/concatenation_info
      • image_topic_name
      • segmentation_pointcloud_fusion_camera_ids
      • input/radar
      • input/tracked_objects [default: /perception/object_recognition/tracking/objects]
      • output/objects [default: objects]
      • sync_param_path
      • voxel_grid_based_euclidean_param_path
      • irregular_object_detector_param_path
      • object_recognition_detection_object_sorter_radar_param_path
  • launch/object_recognition/detection/detector/camera_2d_detector.launch.xml
      • image_raw0 [default: /sensing/camera/camera0/image_raw]
      • image_raw1 [default: /sensing/camera/camera1/image_raw]
      • image_raw2 [default: /sensing/camera/camera2/image_raw]
      • image_raw3 [default: /sensing/camera/camera3/image_raw]
      • image_raw4 [default: /sensing/camera/camera4/image_raw]
      • image_raw5 [default: /sensing/camera/camera5/image_raw]
      • image_raw6 [default: /sensing/camera/camera6/image_raw]
      • image_raw7 [default: /sensing/camera/camera7/image_raw]
      • image_raw8 [default: /sensing/camera/camera8/image_raw]
      • image_raw9 [default: /sensing/camera/camera9/image_raw]
      • image_number [default: 1]
      • camera_index [default: 0]
      • use_bytetrack [default: true]
      • enable_visualizer [default: false]
      • camera_2d_detector/model_path
      • camera_2d_detector/label_path
      • camera_2d_detector/color_map_path
      • tensorrt_yolox_ns [default: ]
  • launch/object_recognition/detection/detector/camera_bev_detector.launch.xml
      • input/camera0/image
      • input/camera0/info
      • input/camera1/image
      • input/camera1/info
      • input/camera2/image
      • input/camera2/info
      • input/camera3/image
      • input/camera3/info
      • input/camera4/image
      • input/camera4/info
      • input/camera5/image
      • input/camera5/info
      • input/camera6/image
      • input/camera6/info
      • input/camera7/image
      • input/camera7/info
      • output/objects
      • number_of_cameras
      • data_path [default: $(env HOME)/autoware_data]
      • bevdet_model_name [default: bevdet_one_lt_d]
      • bevdet_model_path [default: $(var data_path)/tensorrt_bevdet]
  • launch/object_recognition/detection/detector/camera_lidar_detector.launch.xml
      • ns
      • lidar_detection_model_type
      • lidar_detection_model_name
      • use_low_intensity_cluster_filter
      • use_image_segmentation_based_filter
      • number_of_cameras
      • input/camera0/image
      • input/camera0/info
      • input/camera0/rois
      • input/camera1/image
      • input/camera1/info
      • input/camera1/rois
      • input/camera2/image
      • input/camera2/info
      • input/camera2/rois
      • input/camera3/image
      • input/camera3/info
      • input/camera3/rois
      • input/camera4/image
      • input/camera4/info
      • input/camera4/rois
      • input/camera5/image
      • input/camera5/info
      • input/camera5/rois
      • input/camera6/image
      • input/camera6/info
      • input/camera6/rois
      • input/camera7/image
      • input/camera7/info
      • input/camera7/rois
      • input/camera8/image
      • input/camera8/info
      • input/camera8/rois
      • input/concatenation_info
      • segmentation_pointcloud_fusion_camera_ids
      • image_topic_name
      • sync_param_path
      • voxel_grid_based_euclidean_param_path
      • node/pointcloud_container
      • input/pointcloud
      • input/pointcloud_map/pointcloud
      • input/obstacle_segmentation/pointcloud
      • output/ml_detector/objects
      • output/rule_detector/objects
      • output/clustering/cluster_objects
      • camera_2d_detector/model_path
      • camera_2d_detector/label_path
      • camera_2d_detector/color_map_path
      • enable_2d_detection [default: false]
  • launch/object_recognition/detection/detector/camera_lidar_irregular_object_detector.launch.xml
      • ns
      • pipeline_ns
      • input/concatenation_info
      • input/pointcloud
      • fusion_camera_ids [default: [0]]
      • image_topic_name [default: image_raw]
      • irregular_object_detector_param_path
      • sync_param_path
  • launch/object_recognition/detection/detector/camera_vru_detector.launch.xml
      • ns
      • input/camera0/info [default: /sensing/camera/camera0/camera_info]
      • input/camera0/rois [default: /perception/object_recognition/detection/rois0]
      • input/camera1/info [default: /sensing/camera/camera1/camera_info]
      • input/camera1/rois [default: /perception/object_recognition/detection/rois1]
      • input/camera2/info [default: /sensing/camera/camera2/camera_info]
      • input/camera2/rois [default: /perception/object_recognition/detection/rois2]
      • input/camera3/info [default: /sensing/camera/camera3/camera_info]
      • input/camera3/rois [default: /perception/object_recognition/detection/rois3]
      • input/camera4/info [default: /sensing/camera/camera4/camera_info]
      • input/camera4/rois [default: /perception/object_recognition/detection/rois4]
      • input/camera5/info [default: /sensing/camera/camera5/camera_info]
      • input/camera5/rois [default: /perception/object_recognition/detection/rois5]
      • input/camera6/info [default: /sensing/camera/camera6/camera_info]
      • input/camera6/rois [default: /perception/object_recognition/detection/rois6]
      • input/camera7/info [default: /sensing/camera/camera7/camera_info]
      • input/camera7/rois [default: /perception/object_recognition/detection/rois7]
      • output/objects [default: /perception/object_recognition/detection/camera_vru/objects]
      • bbox_object_locator_param_path [default: $(find-pkg-share autoware_image_object_locator)/config/bbox_object_locator.param.yaml]
      • rois_ids [default: [0, 1]]
  • launch/object_recognition/detection/detector/lidar_dnn_detector.launch.xml
      • lidar_detection_model_type
      • lidar_detection_model_name
      • bevfusion_model_path [default: $(var data_path)/bevfusion]
      • centerpoint_model_path [default: $(var data_path)/lidar_centerpoint]
      • transfusion_model_path [default: $(var data_path)/lidar_transfusion]
      • use_short_range_detection [default: false]
      • lidar_short_range_detection_model_type
      • lidar_short_range_detection_model_name
      • short_range_centerpoint_model_path [default: $(var data_path)/lidar_short_range_centerpoint]
      • node/pointcloud_container
      • input/pointcloud
      • output/objects
      • output/short_range_objects
      • lidar_short_range_model_param_path [default: $(find-pkg-share autoware_lidar_centerpoint)/config]
      • lidar_model_param_path [default: $(find-pkg-share autoware_bevfusion)/config]
      • lidar_model_param_path [default: $(find-pkg-share autoware_lidar_transfusion)/config]
      • lidar_model_param_path [default: $(find-pkg-share autoware_lidar_centerpoint)/config]
  • launch/object_recognition/detection/detector/lidar_rule_detector.launch.xml
      • ns
      • node/pointcloud_container
      • input/pointcloud_map/pointcloud
      • input/obstacle_segmentation/pointcloud
      • output/cluster_objects
      • output/objects
      • voxel_grid_based_euclidean_param_path
  • launch/object_recognition/detection/detector/tracker_based_detector.launch.xml
      • input/clusters
      • input/tracked_objects
      • output/objects
  • launch/object_recognition/detection/filter/object_filter.launch.xml
      • objects_filter_method [default: lanelet_filter]
      • input/objects
      • output/objects
  • launch/object_recognition/detection/filter/object_validator.launch.xml
      • objects_validation_method
      • input/obstacle_pointcloud
      • input/objects
      • output/objects
  • launch/object_recognition/detection/filter/radar_filter.launch.xml
      • object_sorter_param_path [default: $(var object_recognition_detection_object_sorter_radar_param_path)]
      • radar_lanelet_filtering_range_param_path [default: $(find-pkg-share autoware_detected_object_validation)/config/object_lanelet_filter.param.yaml]
      • input/radar
      • output/objects
  • launch/object_recognition/detection/merger/camera_lidar_merger.launch.xml
      • object_recognition_detection_object_merger_data_association_matrix_param_path [default: $(find-pkg-share autoware_object_merger)/config/data_association_matrix.param.yaml]
      • object_recognition_detection_object_merger_distance_threshold_list_path [default: $(find-pkg-share autoware_object_merger)/config/overlapped_judge.param.yaml]
      • ml_camera_lidar_object_association_merger_param_path [default: $(find-pkg-share autoware_object_merger)/config/object_association_merger.param.yaml]
      • lidar_detection_model_type
      • use_detection_by_tracker
      • use_irregular_object_detector
      • use_object_filter
      • objects_filter_method
      • number_of_cameras
      • input/camera0/image
      • input/camera0/info
      • input/camera0/rois
      • input/camera1/image
      • input/camera1/info
      • input/camera1/rois
      • input/camera2/image
      • input/camera2/info
      • input/camera2/rois
      • input/camera3/image
      • input/camera3/info
      • input/camera3/rois
      • input/camera4/image
      • input/camera4/info
      • input/camera4/rois
      • input/camera5/image
      • input/camera5/info
      • input/camera5/rois
      • input/camera6/image
      • input/camera6/info
      • input/camera6/rois
      • input/camera7/image
      • input/camera7/info
      • input/camera7/rois
      • input/camera8/image
      • input/camera8/info
      • input/camera8/rois
      • input/lidar_ml/objects
      • input/lidar_rule/objects
      • input/detection_by_tracker/objects
      • output/objects [default: objects]
      • alpha_merger_priority_mode [default: 0]
  • launch/object_recognition/detection/merger/camera_lidar_radar_merger.launch.xml
      • object_recognition_detection_object_merger_data_association_matrix_param_path [default: $(find-pkg-share autoware_object_merger)/config/data_association_matrix.param.yaml]
      • object_recognition_detection_object_merger_distance_threshold_list_path [default: $(find-pkg-share autoware_object_merger)/config/overlapped_judge.param.yaml]
      • ml_camera_lidar_object_association_merger_param_path [default: $(find-pkg-share autoware_object_merger)/config/object_association_merger.param.yaml]
      • far_object_merger_sync_queue_size [default: 20]
      • lidar_detection_model_type
      • use_radar_tracking_fusion
      • use_detection_by_tracker
      • use_irregular_object_detector
      • use_object_filter
      • objects_filter_method
      • number_of_cameras
      • input/camera0/image
      • input/camera0/info
      • input/camera0/rois
      • input/camera1/image
      • input/camera1/info
      • input/camera1/rois
      • input/camera2/image
      • input/camera2/info
      • input/camera2/rois
      • input/camera3/image
      • input/camera3/info
      • input/camera3/rois
      • input/camera4/image
      • input/camera4/info
      • input/camera4/rois
      • input/camera5/image
      • input/camera5/info
      • input/camera5/rois
      • input/camera6/image
      • input/camera6/info
      • input/camera6/rois
      • input/camera7/image
      • input/camera7/info
      • input/camera7/rois
      • input/camera8/image
      • input/camera8/info
      • input/camera8/rois
      • input/lidar_ml/objects
      • input/lidar_rule/objects
      • input/radar/objects
      • input/radar_far/objects
      • input/detection_by_tracker/objects
      • output/objects [default: objects]
      • alpha_merger_priority_mode [default: 0]
  • launch/object_recognition/detection/merger/lidar_merger.launch.xml
      • object_recognition_detection_object_merger_data_association_matrix_param_path [default: $(find-pkg-share autoware_object_merger)/config/data_association_matrix.param.yaml]
      • object_recognition_detection_object_merger_distance_threshold_list_path [default: $(find-pkg-share autoware_object_merger)/config/overlapped_judge.param.yaml]
      • lidar_detection_model_type
      • use_detection_by_tracker
      • use_object_filter
      • objects_filter_method
      • input/lidar_ml/objects [default: $(var lidar_detection_model_type)/objects]
      • input/lidar_rule/objects [default: clustering/objects]
      • input/detection_by_tracker/objects [default: detection_by_tracker/objects]
      • output/objects
  • launch/object_recognition/prediction/prediction.launch.xml
      • use_vector_map [default: false]
      • prediction_model_type [default: map_based]
      • input/objects [default: /perception/object_recognition/tracking/objects]
  • launch/object_recognition/tracking/tracking.launch.xml
      • object_recognition_tracking_radar_tracked_object_sorter_param_path
      • object_recognition_tracking_radar_tracked_object_lanelet_filter_param_path
      • object_recognition_tracking_object_merger_data_association_matrix_param_path
      • object_recognition_tracking_object_merger_node_param_path
      • mode [default: lidar]
      • use_radar_tracking_fusion [default: false]
      • use_multi_channel_tracker_merger
      • use_validator
      • use_short_range_detection
      • use_camera_vru_detector
      • publish_merged_objects
      • lidar_detection_model_type [default: centerpoint]
      • input/merged_detection/channel [default: detected_objects]
      • input/merged_detection/objects [default: /perception/object_recognition/detection/objects]
      • input/lidar_dnn/channel [default: lidar_$(var lidar_detection_model_type)]
      • input/lidar_dnn/objects [default: /perception/object_recognition/detection/$(var lidar_detection_model_type)/objects]
      • input/lidar_dnn_validated/objects [default: /perception/object_recognition/detection/$(var lidar_detection_model_type)/validation/objects]
      • input/lidar_dnn_short_range/channel [default: lidar_$(var lidar_short_range_detection_model_type)]
      • input/lidar_dnn_short_range/objects [default: /perception/object_recognition/detection/$(var lidar_short_range_detection_model_type)/objects]
      • input/camera_lidar_rule_detector/channel [default: camera_lidar_fusion]
      • input/camera_lidar_rule_detector/objects [default: /perception/object_recognition/detection/clustering/camera_lidar_fusion/objects]
      • input/irregular_object_detector/channel [default: camera_lidar_fusion_irregular]
      • input/irregular_object_detector/objects [default: /perception/object_recognition/detection/irregular_object/objects]
      • input/tracker_based_detector/channel [default: detection_by_tracker]
      • input/tracker_based_detector/objects [default: /perception/object_recognition/detection/detection_by_tracker/objects]
      • input/radar/channel [default: radar]
      • input/radar/far_objects [default: /perception/object_recognition/detection/radar/far_objects]
      • input/radar/objects [default: /perception/object_recognition/detection/radar/objects]
      • input/radar/tracked_objects [default: /sensing/radar/tracked_objects]
      • input/camera_only/objects [default: /perception/object_recognition/detection/camera_only/objects]
      • input/camera_only/channel [default: camera_streampetr]
      • input/camera_vru/channel [default: camera_vru]
      • input/camera_vru/objects [default: /perception/object_recognition/detection/camera_vru/objects]
      • output/objects [default: $(var ns)/objects]
      • output/merged_objects [default: $(var ns)/merged_objects]
  • launch/occupancy_grid_map/probabilistic_occupancy_grid_map.launch.xml
      • input/obstacle_pointcloud [default: concatenated/pointcloud]
      • input/raw_pointcloud [default: no_ground/oneshot/pointcloud]
      • output [default: /perception/occupancy_grid_map/map]
      • use_intra_process [default: false]
      • use_multithread [default: false]
      • pointcloud_container_name [default: pointcloud_container]
      • occupancy_grid_map_method
      • occupancy_grid_map_param_path
      • occupancy_grid_map_updater
      • occupancy_grid_map_updater_param_path
      • input_obstacle_pointcloud [default: false]
      • input_obstacle_and_raw_pointcloud [default: true]
      • use_pointcloud_container [default: true]
  • launch/perception.launch.xml
      • object_recognition_detection_euclidean_cluster_param_path
      • object_recognition_detection_outlier_param_path
      • object_recognition_detection_object_lanelet_filter_param_path
      • object_recognition_detection_object_position_filter_param_path
      • object_recognition_detection_pointcloud_map_filter_param_path
      • object_recognition_prediction_map_based_prediction_param_path
      • object_recognition_detection_object_merger_data_association_matrix_param_path
      • ml_camera_lidar_object_association_merger_param_path
      • object_recognition_detection_object_merger_distance_threshold_list_path
      • object_recognition_detection_fusion_sync_param_path
      • object_recognition_detection_roi_cluster_fusion_param_path
      • object_recognition_detection_irregular_object_detector_param_path
      • object_recognition_detection_roi_detected_object_fusion_param_path
      • object_recognition_detection_near_range_camera_vru_param_path
      • object_recognition_detection_pointpainting_fusion_common_param_path
      • object_recognition_detection_lidar_model_param_path
      • object_recognition_detection_radar_lanelet_filtering_range_param_path
      • object_recognition_detection_object_sorter_radar_param_path
      • object_recognition_tracking_multi_object_tracker_data_association_matrix_param_path
      • object_recognition_tracking_multi_object_tracker_input_channels_param_path
      • object_recognition_tracking_multi_object_tracker_node_param_path
      • object_recognition_tracking_radar_tracked_object_sorter_param_path
      • object_recognition_tracking_radar_tracked_object_lanelet_filter_param_path
      • obstacle_segmentation_ground_segmentation_param_path
      • obstacle_segmentation_ground_segmentation_elevation_map_param_path
      • object_recognition_detection_obstacle_pointcloud_based_validator_param_path
      • object_recognition_detection_detection_by_tracker_param
      • occupancy_grid_map_method
      • occupancy_grid_map_param_path
      • occupancy_grid_map_updater
      • occupancy_grid_map_updater_param_path
      • lidar_detection_model
      • each_traffic_light_map_based_detector_param_path
      • traffic_light_fine_detector_param_path
      • yolox_traffic_light_detector_param_path
      • car_traffic_light_classifier_param_path
      • pedestrian_traffic_light_classifier_param_path
      • traffic_light_roi_visualizer_param_path
      • traffic_light_occlusion_predictor_param_path
      • traffic_light_multi_camera_fusion_param_path
      • traffic_light_arbiter_param_path
      • crosswalk_traffic_light_estimator_param_path
      • tracker_publish_merged_objects
      • use_short_range_detection [default: false]
      • lidar_short_range_detection_model_type [default: centerpoint_short_range]
      • lidar_short_range_detection_model_name [default: centerpoint_short_range]
      • bevfusion_model_path [default: $(var data_path)/bevfusion]
      • centerpoint_model_path [default: $(var data_path)/lidar_centerpoint]
      • transfusion_model_path [default: $(var data_path)/lidar_transfusion]
      • short_range_centerpoint_model_path [default: $(var data_path)/lidar_short_range_centerpoint]
      • pointpainting_model_path [default: $(var data_path)/image_projection_based_fusion]
      • camera_2d_detector/model_path
      • camera_2d_detector/label_path
      • camera_2d_detector/color_map_path
      • input/pointcloud [default: /sensing/lidar/concatenated/pointcloud]
      • mode [default: camera_lidar_fusion]
      • data_path [default: $(env HOME)/autoware_data]
      • image_raw0 [default: /sensing/camera/camera0/image_rect_color]
      • camera_info0 [default: /sensing/camera/camera0/camera_info]
      • detection_rois0 [default: /perception/object_recognition/detection/rois0]
      • image_raw1 [default: /sensing/camera/camera1/image_rect_color]
      • camera_info1 [default: /sensing/camera/camera1/camera_info]
      • detection_rois1 [default: /perception/object_recognition/detection/rois1]
      • image_raw2 [default: /sensing/camera/camera2/image_rect_color]
      • camera_info2 [default: /sensing/camera/camera2/camera_info]
      • detection_rois2 [default: /perception/object_recognition/detection/rois2]
      • image_raw3 [default: /sensing/camera/camera3/image_rect_color]
      • camera_info3 [default: /sensing/camera/camera3/camera_info]
      • detection_rois3 [default: /perception/object_recognition/detection/rois3]
      • image_raw4 [default: /sensing/camera/camera4/image_rect_color]
      • camera_info4 [default: /sensing/camera/camera4/camera_info]
      • detection_rois4 [default: /perception/object_recognition/detection/rois4]
      • image_raw5 [default: /sensing/camera/camera5/image_rect_color]
      • camera_info5 [default: /sensing/camera/camera5/camera_info]
      • detection_rois5 [default: /perception/object_recognition/detection/rois5]
      • image_raw6 [default: /sensing/camera/camera6/image_rect_color]
      • camera_info6 [default: /sensing/camera/camera6/camera_info]
      • detection_rois6 [default: /perception/object_recognition/detection/rois6]
      • image_raw7 [default: /sensing/camera/camera7/image_rect_color]
      • camera_info7 [default: /sensing/camera/camera7/camera_info]
      • detection_rois7 [default: /perception/object_recognition/detection/rois7]
      • image_raw8 [default: /sensing/camera/camera8/image_rect_color]
      • camera_info8 [default: /sensing/camera/camera8/camera_info]
      • detection_rois8 [default: /perception/object_recognition/detection/rois8]
      • image_number [default: 6]
      • image_topic_name [default: image_rect_color]
      • segmentation_pointcloud_fusion_camera_ids [default: [0,1,5]]
      • camera_vru_detector_rois_ids [default: [0]]
      • ml_camera_lidar_merger_priority_mode [default: 0]
      • pointcloud_container_name [default: pointcloud_container]
      • input/concatenation_info [default: /sensing/lidar/concatenated/pointcloud_info]
      • use_vector_map [default: true]
      • use_pointcloud_map [default: true]
      • use_low_height_cropbox [default: true]
      • use_object_filter [default: true]
      • objects_filter_method [default: lanelet_filter]
      • use_irregular_object_detector [default: true]
      • use_low_intensity_cluster_filter [default: true]
      • use_image_segmentation_based_filter [default: false]
      • use_empty_dynamic_object_publisher [default: false]
      • use_object_validator [default: true]
      • objects_validation_method [default: obstacle_pointcloud]
      • use_perception_online_evaluator [default: false]
      • use_perception_analytics_publisher [default: true]
      • use_obstacle_segmentation_single_frame_filter
      • use_obstacle_segmentation_time_series_filter
      • use_camera_vru_detector [default: false]
      • use_cuda_ground_segmentation [default: false]
      • use_traffic_light_recognition
      • traffic_light_recognition/fusion_only
      • traffic_light_recognition/camera_namespaces
      • traffic_light_recognition/use_high_accuracy_detection
      • traffic_light_recognition/high_accuracy_detection_type
      • input_pointcloud_for_traffic_light_occlusion_predictor
      • traffic_light_recognition/whole_image_detection/model_path
      • traffic_light_recognition/whole_image_detection/label_path
      • traffic_light_recognition/fine_detection/model_path
      • traffic_light_recognition/fine_detection/label_path
      • traffic_light_recognition/classification/car/model_path
      • traffic_light_recognition/classification/car/label_path
      • traffic_light_recognition/classification/pedestrian/model_path
      • traffic_light_recognition/classification/pedestrian/label_path
      • use_detection_by_tracker [default: true]
      • use_radar_tracking_fusion [default: true]
      • input/radar [default: /sensing/radar/detected_objects]
      • use_multi_channel_tracker_merger [default: false]
      • output/tracker_merged_objects [default: /perception/object_recognition/detection/objects]
      • downsample_perception_common_pointcloud [default: false]
      • cuda_pointcloud_preprocessing [default: false]
      • common_downsample_voxel_size_x [default: 0.05]
      • common_downsample_voxel_size_y [default: 0.05]
      • common_downsample_voxel_size_z [default: 0.05]
  • launch/traffic_light_recognition/traffic_light.launch.xml
      • enable_image_decompressor [default: true]
      • fusion_only
      • camera_namespaces
      • use_high_accuracy_detection
      • high_accuracy_detection_type
      • each_traffic_light_map_based_detector_param_path
      • traffic_light_fine_detector_param_path
      • yolox_traffic_light_detector_param_path
      • car_traffic_light_classifier_param_path
      • pedestrian_traffic_light_classifier_param_path
      • traffic_light_roi_visualizer_param_path
      • traffic_light_occlusion_predictor_param_path
      • traffic_light_multi_camera_fusion_param_path
      • traffic_light_arbiter_param_path
      • crosswalk_traffic_light_estimator_param_path
      • whole_image_detection/model_path
      • whole_image_detection/label_path
      • fine_detection/model_path
      • fine_detection/label_path
      • classification/car/model_path
      • classification/car/label_path
      • classification/pedestrian/model_path
      • classification/pedestrian/label_path
      • input/vector_map [default: /map/vector_map]
      • input/route [default: /planning/mission_planning/route]
      • input_pointcloud_for_traffic_light_occlusion_predictor [default: /sensing/lidar/top/pointcloud_raw_ex]
      • internal/traffic_signals [default: /perception/traffic_light_recognition/internal/traffic_signals]
      • external/traffic_signals [default: /perception/traffic_light_recognition/external/traffic_signals]
      • judged/traffic_signals [default: /perception/traffic_light_recognition/judged/traffic_signals]
      • output/traffic_signals [default: /perception/traffic_light_recognition/traffic_signals]
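
The traffic light recognition launcher listed above can also be included on its own. A minimal sketch, assuming a two-camera setup; the argument values below are illustrative placeholders (not verified defaults), and the required `*_param_path` arguments are elided with `...`:

```xml
<include file="$(find-pkg-share tier4_perception_launch)/launch/traffic_light_recognition/traffic_light.launch.xml">
  <!-- illustrative values; adjust to your camera configuration -->
  <arg name="fusion_only" value="false"/>
  <arg name="camera_namespaces" value="[camera6, camera7]"/>
  <arg name="use_high_accuracy_detection" value="true"/>
  <arg name="high_accuracy_detection_type" value="fine_detection"/>

  <!-- required parameter files (paths are placeholders) -->
  <arg name="traffic_light_fine_detector_param_path" value="..."/>
  <arg name="traffic_light_arbiter_param_path" value="..."/>
  ...
</include>
```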

Messages

No message files found.

Services

No service files found.

Plugins

No plugins found.


  • Contributors: Ryohsuke Mitsudome, Yoshi Ri, Yuxuan Liu

0.48.0 (2025-11-18)

  • Merge remote-tracking branch 'origin/main' into humble

  • feat(image_object_locator): add near range camera VRU detector to perception pipeline (#11622)

  • feat(multi object tracker): publish merged objects when in multi-channel mode (#11386)

    • feat(multi_object_tracker): add support for merged object output and related parameters
    • feat(multi_object_tracker): add function to convert DynamicObject to DetectedObject and implement merged object publishing
    • fix(multi_object_tracker): prevent merged objects publisher from being in input channel topics
    • fix(multi_object_tracker): improve warning message for merged objects publisher in input channel
    • feat(multi_object_tracker): add is_simulation parameter to control merged object publishing
    • fix(multi_object_tracker): correct ego_frame_id variable usage and declaration
    • feat(multi_object_tracker): update getMergedObjects to accept transform and apply frame conversion
    • feat(multi_object_tracker): optimize getMergedObjects for efficient frame transformation
    • fix(multi_object_tracker): fix bug when merged_objects_pub_ is nullptr
    • feat(multi_object_tracker): refactor orientation availability conversion to improve code clarity
    • fix(multi_object_tracker): remove redundant comment in publish method for clarity
    • feat(multi_object_tracker): rename parameters for clarity and add publish_merged_objects option
    • fix(multi_object_tracker): rename pruning parameters for consistency in schema

    • Update perception/autoware_multi_object_tracker/src/processor/processor.cpp (Co-authored-by: Yoshi Ri <yoshiyoshidetteiu@gmail.com>)

    • feat(multi_object_tracker): replace 'is_simulation' with 'publish_merged_objects' in launch files and parameters (Co-authored-by: Yoshi Ri <yoshiyoshidetteiu@gmail.com>)

  • fix(camera_2d_detector): typo (#11380)

  • feat(launch): add args to select the 2d camera detection model (#11364)

    • add args
    • add color map path

    • give color_map_path to yolox.launch (Co-authored-by: badai nguyen <94814556+badai-nguyen@users.noreply.github.com>)

File truncated at 100 lines; see the full file.

Package Dependencies

System Dependencies

No direct system dependencies.

Launch files

  • launch/object_recognition/detection/detection.launch.xml
      • mode
      • lidar_detection_model_type
      • lidar_detection_model_name
      • use_short_range_detection
      • lidar_short_range_detection_model_type
      • lidar_short_range_detection_model_name
      • use_object_filter
      • objects_filter_method
      • use_pointcloud_map
      • use_detection_by_tracker
      • use_validator
      • objects_validation_method
      • use_low_intensity_cluster_filter
      • use_image_segmentation_based_filter
      • use_multi_channel_tracker_merger
      • use_radar_tracking_fusion
      • use_irregular_object_detector
      • irregular_object_detector_fusion_camera_ids [default: [0]]
      • ml_camera_lidar_merger_priority_mode
      • camera_2d_detector/model_path
      • camera_2d_detector/label_path
      • camera_2d_detector/color_map_path
      • use_camera_vru_detector
      • camera_vru_detector_rois_ids [default: [0]]
      • number_of_cameras
      • node/pointcloud_container
      • input/pointcloud
      • input/obstacle_segmentation/pointcloud [default: /perception/obstacle_segmentation/pointcloud]
      • input/camera0/image
      • input/camera0/info
      • input/camera0/rois
      • input/camera1/image
      • input/camera1/info
      • input/camera1/rois
      • input/camera2/image
      • input/camera2/info
      • input/camera2/rois
      • input/camera3/image
      • input/camera3/info
      • input/camera3/rois
      • input/camera4/image
      • input/camera4/info
      • input/camera4/rois
      • input/camera5/image
      • input/camera5/info
      • input/camera5/rois
      • input/camera6/image
      • input/camera6/info
      • input/camera6/rois
      • input/camera7/image
      • input/camera7/info
      • input/camera7/rois
      • input/camera8/image
      • input/camera8/info
      • input/camera8/rois
      • input/concatenation_info
      • image_topic_name
      • segmentation_pointcloud_fusion_camera_ids
      • input/radar
      • input/tracked_objects [default: /perception/object_recognition/tracking/objects]
      • output/objects [default: objects]
      • sync_param_path
      • voxel_grid_based_euclidean_param_path
      • irregular_object_detector_param_path
      • object_recognition_detection_object_sorter_radar_param_path
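
    As a hedged sketch of how the `detection.launch.xml` arguments above fit together (values are illustrative, not verified defaults; remaining required arguments are elided with `...`):

    ```xml
    <include file="$(find-pkg-share tier4_perception_launch)/launch/object_recognition/detection/detection.launch.xml">
      <arg name="mode" value="camera_lidar_fusion"/>  <!-- option listed in perception.launch.xml -->
      <arg name="lidar_detection_model_type" value="centerpoint"/>
      <arg name="lidar_detection_model_name" value="centerpoint"/>
      <arg name="use_object_filter" value="true"/>
      <arg name="objects_filter_method" value="lanelet_filter"/>
      <arg name="input/pointcloud" value="/sensing/lidar/concatenated/pointcloud"/>
      ...
    </include>
    ```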
  • launch/object_recognition/detection/detector/camera_2d_detector.launch.xml
      • image_raw0 [default: /sensing/camera/camera0/image_raw]
      • image_raw1 [default: /sensing/camera/camera1/image_raw]
      • image_raw2 [default: /sensing/camera/camera2/image_raw]
      • image_raw3 [default: /sensing/camera/camera3/image_raw]
      • image_raw4 [default: /sensing/camera/camera4/image_raw]
      • image_raw5 [default: /sensing/camera/camera5/image_raw]
      • image_raw6 [default: /sensing/camera/camera6/image_raw]
      • image_raw7 [default: /sensing/camera/camera7/image_raw]
      • image_raw8 [default: /sensing/camera/camera8/image_raw]
      • image_raw9 [default: /sensing/camera/camera9/image_raw]
      • image_number [default: 1]
      • camera_index [default: 0]
      • use_bytetrack [default: true]
      • enable_visualizer [default: false]
      • camera_2d_detector/model_path
      • camera_2d_detector/label_path
      • camera_2d_detector/color_map_path
      • tensorrt_yolox_ns [default: ]
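
    A minimal sketch of including the single-camera 2D detector above; the model/label/color-map paths are placeholders you must supply, and the other values simply restate the listed defaults:

    ```xml
    <include file="$(find-pkg-share tier4_perception_launch)/launch/object_recognition/detection/detector/camera_2d_detector.launch.xml">
      <arg name="image_number" value="1"/>
      <arg name="camera_index" value="0"/>
      <arg name="use_bytetrack" value="true"/>
      <arg name="enable_visualizer" value="false"/>
      <arg name="camera_2d_detector/model_path" value="..."/>
      <arg name="camera_2d_detector/label_path" value="..."/>
      <arg name="camera_2d_detector/color_map_path" value="..."/>
    </include>
    ```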
  • launch/object_recognition/detection/detector/camera_bev_detector.launch.xml
      • input/camera0/image
      • input/camera0/info
      • input/camera1/image
      • input/camera1/info
      • input/camera2/image
      • input/camera2/info
      • input/camera3/image
      • input/camera3/info
      • input/camera4/image
      • input/camera4/info
      • input/camera5/image
      • input/camera5/info
      • input/camera6/image
      • input/camera6/info
      • input/camera7/image
      • input/camera7/info
      • output/objects
      • number_of_cameras
      • data_path [default: $(env HOME)/autoware_data]
      • bevdet_model_name [default: bevdet_one_lt_d]
      • bevdet_model_path [default: $(var data_path)/tensorrt_bevdet]
  • launch/object_recognition/detection/detector/camera_lidar_detector.launch.xml
      • ns
      • lidar_detection_model_type
      • lidar_detection_model_name
      • use_low_intensity_cluster_filter
      • use_image_segmentation_based_filter
      • number_of_cameras
      • input/camera0/image
      • input/camera0/info
      • input/camera0/rois
      • input/camera1/image
      • input/camera1/info
      • input/camera1/rois
      • input/camera2/image
      • input/camera2/info
      • input/camera2/rois
      • input/camera3/image
      • input/camera3/info
      • input/camera3/rois
      • input/camera4/image
      • input/camera4/info
      • input/camera4/rois
      • input/camera5/image
      • input/camera5/info
      • input/camera5/rois
      • input/camera6/image
      • input/camera6/info
      • input/camera6/rois
      • input/camera7/image
      • input/camera7/info
      • input/camera7/rois
      • input/camera8/image
      • input/camera8/info
      • input/camera8/rois
      • input/concatenation_info
      • segmentation_pointcloud_fusion_camera_ids
      • image_topic_name
      • sync_param_path
      • voxel_grid_based_euclidean_param_path
      • node/pointcloud_container
      • input/pointcloud
      • input/pointcloud_map/pointcloud
      • input/obstacle_segmentation/pointcloud
      • output/ml_detector/objects
      • output/rule_detector/objects
      • output/clustering/cluster_objects
      • camera_2d_detector/model_path
      • camera_2d_detector/label_path
      • camera_2d_detector/color_map_path
      • enable_2d_detection [default: false]
  • launch/object_recognition/detection/detector/camera_lidar_irregular_object_detector.launch.xml
      • ns
      • pipeline_ns
      • input/concatenation_info
      • input/pointcloud
      • fusion_camera_ids [default: [0]]
      • image_topic_name [default: image_raw]
      • irregular_object_detector_param_path
      • sync_param_path
  • launch/object_recognition/detection/detector/camera_vru_detector.launch.xml
      • ns
      • input/camera0/info [default: /sensing/camera/camera0/camera_info]
      • input/camera0/rois [default: /perception/object_recognition/detection/rois0]
      • input/camera1/info [default: /sensing/camera/camera1/camera_info]
      • input/camera1/rois [default: /perception/object_recognition/detection/rois1]
      • input/camera2/info [default: /sensing/camera/camera2/camera_info]
      • input/camera2/rois [default: /perception/object_recognition/detection/rois2]
      • input/camera3/info [default: /sensing/camera/camera3/camera_info]
      • input/camera3/rois [default: /perception/object_recognition/detection/rois3]
      • input/camera4/info [default: /sensing/camera/camera4/camera_info]
      • input/camera4/rois [default: /perception/object_recognition/detection/rois4]
      • input/camera5/info [default: /sensing/camera/camera5/camera_info]
      • input/camera5/rois [default: /perception/object_recognition/detection/rois5]
      • input/camera6/info [default: /sensing/camera/camera6/camera_info]
      • input/camera6/rois [default: /perception/object_recognition/detection/rois6]
      • input/camera7/info [default: /sensing/camera/camera7/camera_info]
      • input/camera7/rois [default: /perception/object_recognition/detection/rois7]
      • output/objects [default: /perception/object_recognition/detection/camera_vru/objects]
      • bbox_object_locator_param_path [default: $(find-pkg-share autoware_image_object_locator)/config/bbox_object_locator.param.yaml]
      • rois_ids [default: [0, 1]]
  • launch/object_recognition/detection/detector/lidar_dnn_detector.launch.xml
      • lidar_detection_model_type
      • lidar_detection_model_name
      • bevfusion_model_path [default: $(var data_path)/bevfusion]
      • centerpoint_model_path [default: $(var data_path)/lidar_centerpoint]
      • transfusion_model_path [default: $(var data_path)/lidar_transfusion]
      • use_short_range_detection [default: false]
      • lidar_short_range_detection_model_type
      • lidar_short_range_detection_model_name
      • short_range_centerpoint_model_path [default: $(var data_path)/lidar_short_range_centerpoint]
      • node/pointcloud_container
      • input/pointcloud
      • output/objects
      • output/short_range_objects
      • lidar_short_range_model_param_path [default: $(find-pkg-share autoware_lidar_centerpoint)/config]
      • lidar_model_param_path [default: $(find-pkg-share autoware_bevfusion)/config]
      • lidar_model_param_path [default: $(find-pkg-share autoware_lidar_transfusion)/config]
      • lidar_model_param_path [default: $(find-pkg-share autoware_lidar_centerpoint)/config]
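
    Note that `lidar_model_param_path` above has a different default per detector package, so selecting a model type effectively selects its config directory. A hedged sketch of launching this file with one model (topic values are illustrative):

    ```xml
    <include file="$(find-pkg-share tier4_perception_launch)/launch/object_recognition/detection/detector/lidar_dnn_detector.launch.xml">
      <arg name="lidar_detection_model_type" value="transfusion"/>
      <arg name="lidar_detection_model_name" value="transfusion"/>
      <arg name="node/pointcloud_container" value="pointcloud_container"/>
      <arg name="input/pointcloud" value="/sensing/lidar/concatenated/pointcloud"/>
      <arg name="output/objects" value="objects"/>
    </include>
    ```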
  • launch/object_recognition/detection/detector/lidar_rule_detector.launch.xml
      • ns
      • node/pointcloud_container
      • input/pointcloud_map/pointcloud
      • input/obstacle_segmentation/pointcloud
      • output/cluster_objects
      • output/objects
      • voxel_grid_based_euclidean_param_path
  • launch/object_recognition/detection/detector/tracker_based_detector.launch.xml
      • input/clusters
      • input/tracked_objects
      • output/objects
  • launch/object_recognition/detection/filter/object_filter.launch.xml
      • objects_filter_method [default: lanelet_filter]
      • input/objects
      • output/objects
  • launch/object_recognition/detection/filter/object_validator.launch.xml
      • objects_validation_method
      • input/obstacle_pointcloud
      • input/objects
      • output/objects
  • launch/object_recognition/detection/filter/radar_filter.launch.xml
      • object_sorter_param_path [default: $(var object_recognition_detection_object_sorter_radar_param_path)]
      • radar_lanelet_filtering_range_param_path [default: $(find-pkg-share autoware_detected_object_validation)/config/object_lanelet_filter.param.yaml]
      • input/radar
      • output/objects
  • launch/object_recognition/detection/merger/camera_lidar_merger.launch.xml
      • object_recognition_detection_object_merger_data_association_matrix_param_path [default: $(find-pkg-share autoware_object_merger)/config/data_association_matrix.param.yaml]
      • object_recognition_detection_object_merger_distance_threshold_list_path [default: $(find-pkg-share autoware_object_merger)/config/overlapped_judge.param.yaml]
      • ml_camera_lidar_object_association_merger_param_path [default: $(find-pkg-share autoware_object_merger)/config/object_association_merger.param.yaml]
      • lidar_detection_model_type
      • use_detection_by_tracker
      • use_irregular_object_detector
      • use_object_filter
      • objects_filter_method
      • number_of_cameras
      • input/camera0/image
      • input/camera0/info
      • input/camera0/rois
      • input/camera1/image
      • input/camera1/info
      • input/camera1/rois
      • input/camera2/image
      • input/camera2/info
      • input/camera2/rois
      • input/camera3/image
      • input/camera3/info
      • input/camera3/rois
      • input/camera4/image
      • input/camera4/info
      • input/camera4/rois
      • input/camera5/image
      • input/camera5/info
      • input/camera5/rois
      • input/camera6/image
      • input/camera6/info
      • input/camera6/rois
      • input/camera7/image
      • input/camera7/info
      • input/camera7/rois
      • input/camera8/image
      • input/camera8/info
      • input/camera8/rois
      • input/lidar_ml/objects
      • input/lidar_rule/objects
      • input/detection_by_tracker/objects
      • output/objects [default: objects]
      • alpha_merger_priority_mode [default: 0]
  • launch/object_recognition/detection/merger/camera_lidar_radar_merger.launch.xml
      • object_recognition_detection_object_merger_data_association_matrix_param_path [default: $(find-pkg-share autoware_object_merger)/config/data_association_matrix.param.yaml]
      • object_recognition_detection_object_merger_distance_threshold_list_path [default: $(find-pkg-share autoware_object_merger)/config/overlapped_judge.param.yaml]
      • ml_camera_lidar_object_association_merger_param_path [default: $(find-pkg-share autoware_object_merger)/config/object_association_merger.param.yaml]
      • far_object_merger_sync_queue_size [default: 20]
      • lidar_detection_model_type
      • use_radar_tracking_fusion
      • use_detection_by_tracker
      • use_irregular_object_detector
      • use_object_filter
      • objects_filter_method
      • number_of_cameras
      • input/camera0/image
      • input/camera0/info
      • input/camera0/rois
      • input/camera1/image
      • input/camera1/info
      • input/camera1/rois
      • input/camera2/image
      • input/camera2/info
      • input/camera2/rois
      • input/camera3/image
      • input/camera3/info
      • input/camera3/rois
      • input/camera4/image
      • input/camera4/info
      • input/camera4/rois
      • input/camera5/image
      • input/camera5/info
      • input/camera5/rois
      • input/camera6/image
      • input/camera6/info
      • input/camera6/rois
      • input/camera7/image
      • input/camera7/info
      • input/camera7/rois
      • input/camera8/image
      • input/camera8/info
      • input/camera8/rois
      • input/lidar_ml/objects
      • input/lidar_rule/objects
      • input/radar/objects
      • input/radar_far/objects
      • input/detection_by_tracker/objects
      • output/objects [default: objects]
      • alpha_merger_priority_mode [default: 0]
  • launch/object_recognition/detection/merger/lidar_merger.launch.xml
      • object_recognition_detection_object_merger_data_association_matrix_param_path [default: $(find-pkg-share autoware_object_merger)/config/data_association_matrix.param.yaml]
      • object_recognition_detection_object_merger_distance_threshold_list_path [default: $(find-pkg-share autoware_object_merger)/config/overlapped_judge.param.yaml]
      • lidar_detection_model_type
      • use_detection_by_tracker
      • use_object_filter
      • objects_filter_method
      • input/lidar_ml/objects [default: $(var lidar_detection_model_type)/objects]
      • input/lidar_rule/objects [default: clustering/objects]
      • input/detection_by_tracker/objects [default: detection_by_tracker/objects]
      • output/objects
  • launch/object_recognition/prediction/prediction.launch.xml
      • use_vector_map [default: false]
      • prediction_model_type [default: map_based]
      • input/objects [default: /perception/object_recognition/tracking/objects]
  • launch/object_recognition/tracking/tracking.launch.xml
      • object_recognition_tracking_radar_tracked_object_sorter_param_path
      • object_recognition_tracking_radar_tracked_object_lanelet_filter_param_path
      • object_recognition_tracking_object_merger_data_association_matrix_param_path
      • object_recognition_tracking_object_merger_node_param_path
      • mode [default: lidar]
      • use_radar_tracking_fusion [default: false]
      • use_multi_channel_tracker_merger
      • use_validator
      • use_short_range_detection
      • use_camera_vru_detector
      • publish_merged_objects
      • lidar_detection_model_type [default: centerpoint]
      • input/merged_detection/channel [default: detected_objects]
      • input/merged_detection/objects [default: /perception/object_recognition/detection/objects]
      • input/lidar_dnn/channel [default: lidar_$(var lidar_detection_model_type)]
      • input/lidar_dnn/objects [default: /perception/object_recognition/detection/$(var lidar_detection_model_type)/objects]
      • input/lidar_dnn_validated/objects [default: /perception/object_recognition/detection/$(var lidar_detection_model_type)/validation/objects]
      • input/lidar_dnn_short_range/channel [default: lidar_$(var lidar_short_range_detection_model_type)]
      • input/lidar_dnn_short_range/objects [default: /perception/object_recognition/detection/$(var lidar_short_range_detection_model_type)/objects]
      • input/camera_lidar_rule_detector/channel [default: camera_lidar_fusion]
      • input/camera_lidar_rule_detector/objects [default: /perception/object_recognition/detection/clustering/camera_lidar_fusion/objects]
      • input/irregular_object_detector/channel [default: camera_lidar_fusion_irregular]
      • input/irregular_object_detector/objects [default: /perception/object_recognition/detection/irregular_object/objects]
      • input/tracker_based_detector/channel [default: detection_by_tracker]
      • input/tracker_based_detector/objects [default: /perception/object_recognition/detection/detection_by_tracker/objects]
      • input/radar/channel [default: radar]
      • input/radar/far_objects [default: /perception/object_recognition/detection/radar/far_objects]
      • input/radar/objects [default: /perception/object_recognition/detection/radar/objects]
      • input/radar/tracked_objects [default: /sensing/radar/tracked_objects]
      • input/camera_only/objects [default: /perception/object_recognition/detection/camera_only/objects]
      • input/camera_only/channel [default: camera_streampetr]
      • input/camera_vru/channel [default: camera_vru]
      • input/camera_vru/objects [default: /perception/object_recognition/detection/camera_vru/objects]
      • output/objects [default: $(var ns)/objects]
      • output/merged_objects [default: $(var ns)/merged_objects]
  • launch/occupancy_grid_map/probabilistic_occupancy_grid_map.launch.xml
      • input/obstacle_pointcloud [default: concatenated/pointcloud]
      • input/raw_pointcloud [default: no_ground/oneshot/pointcloud]
      • output [default: /perception/occupancy_grid_map/map]
      • use_intra_process [default: false]
      • use_multithread [default: false]
      • pointcloud_container_name [default: pointcloud_container]
      • occupancy_grid_map_method
      • occupancy_grid_map_param_path
      • occupancy_grid_map_updater
      • occupancy_grid_map_updater_param_path
      • input_obstacle_pointcloud [default: false]
      • input_obstacle_and_raw_pointcloud [default: true]
      • use_pointcloud_container [default: true]
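
    A hedged sketch of including the occupancy grid map launcher above; the `occupancy_grid_map_method` and `occupancy_grid_map_updater` values are assumed names for illustration (check the installed config files), and parameter paths are placeholders:

    ```xml
    <include file="$(find-pkg-share tier4_perception_launch)/launch/occupancy_grid_map/probabilistic_occupancy_grid_map.launch.xml">
      <arg name="occupancy_grid_map_method" value="pointcloud_based_occupancy_grid_map"/>  <!-- assumed method name -->
      <arg name="occupancy_grid_map_param_path" value="..."/>
      <arg name="occupancy_grid_map_updater" value="binary_bayes_filter"/>                 <!-- assumed updater name -->
      <arg name="occupancy_grid_map_updater_param_path" value="..."/>
      <arg name="use_pointcloud_container" value="true"/>
    </include>
    ```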
  • launch/perception.launch.xml
      • object_recognition_detection_euclidean_cluster_param_path
      • object_recognition_detection_outlier_param_path
      • object_recognition_detection_object_lanelet_filter_param_path
      • object_recognition_detection_object_position_filter_param_path
      • object_recognition_detection_pointcloud_map_filter_param_path
      • object_recognition_prediction_map_based_prediction_param_path
      • object_recognition_detection_object_merger_data_association_matrix_param_path
      • ml_camera_lidar_object_association_merger_param_path
      • object_recognition_detection_object_merger_distance_threshold_list_path
      • object_recognition_detection_fusion_sync_param_path
      • object_recognition_detection_roi_cluster_fusion_param_path
      • object_recognition_detection_irregular_object_detector_param_path
      • object_recognition_detection_roi_detected_object_fusion_param_path
      • object_recognition_detection_near_range_camera_vru_param_path
      • object_recognition_detection_pointpainting_fusion_common_param_path
      • object_recognition_detection_lidar_model_param_path
      • object_recognition_detection_radar_lanelet_filtering_range_param_path
      • object_recognition_detection_object_sorter_radar_param_path
      • object_recognition_tracking_multi_object_tracker_data_association_matrix_param_path
      • object_recognition_tracking_multi_object_tracker_input_channels_param_path
      • object_recognition_tracking_multi_object_tracker_node_param_path
      • object_recognition_tracking_radar_tracked_object_sorter_param_path
      • object_recognition_tracking_radar_tracked_object_lanelet_filter_param_path
      • obstacle_segmentation_ground_segmentation_param_path
      • obstacle_segmentation_ground_segmentation_elevation_map_param_path
      • object_recognition_detection_obstacle_pointcloud_based_validator_param_path
      • object_recognition_detection_detection_by_tracker_param
      • occupancy_grid_map_method
      • occupancy_grid_map_param_path
      • occupancy_grid_map_updater
      • occupancy_grid_map_updater_param_path
      • lidar_detection_model
      • each_traffic_light_map_based_detector_param_path
      • traffic_light_fine_detector_param_path
      • yolox_traffic_light_detector_param_path
      • car_traffic_light_classifier_param_path
      • pedestrian_traffic_light_classifier_param_path
      • traffic_light_roi_visualizer_param_path
      • traffic_light_occlusion_predictor_param_path
      • traffic_light_multi_camera_fusion_param_path
      • traffic_light_arbiter_param_path
      • crosswalk_traffic_light_estimator_param_path
      • tracker_publish_merged_objects
      • use_short_range_detection [default: false]
      • lidar_short_range_detection_model_type [default: centerpoint_short_range]
      • lidar_short_range_detection_model_name [default: centerpoint_short_range]
      • bevfusion_model_path [default: $(var data_path)/bevfusion]
      • centerpoint_model_path [default: $(var data_path)/lidar_centerpoint]
      • transfusion_model_path [default: $(var data_path)/lidar_transfusion]
      • short_range_centerpoint_model_path [default: $(var data_path)/lidar_short_range_centerpoint]
      • pointpainting_model_path [default: $(var data_path)/image_projection_based_fusion]
      • camera_2d_detector/model_path
      • camera_2d_detector/label_path
      • camera_2d_detector/color_map_path
      • input/pointcloud [default: /sensing/lidar/concatenated/pointcloud]
      • mode [default: camera_lidar_fusion]
      • data_path [default: $(env HOME)/autoware_data]
      • image_raw0 [default: /sensing/camera/camera0/image_rect_color]
      • camera_info0 [default: /sensing/camera/camera0/camera_info]
      • detection_rois0 [default: /perception/object_recognition/detection/rois0]
      • image_raw1 [default: /sensing/camera/camera1/image_rect_color]
      • camera_info1 [default: /sensing/camera/camera1/camera_info]
      • detection_rois1 [default: /perception/object_recognition/detection/rois1]
      • image_raw2 [default: /sensing/camera/camera2/image_rect_color]
      • camera_info2 [default: /sensing/camera/camera2/camera_info]
      • detection_rois2 [default: /perception/object_recognition/detection/rois2]
      • image_raw3 [default: /sensing/camera/camera3/image_rect_color]
      • camera_info3 [default: /sensing/camera/camera3/camera_info]
      • detection_rois3 [default: /perception/object_recognition/detection/rois3]
      • image_raw4 [default: /sensing/camera/camera4/image_rect_color]
      • camera_info4 [default: /sensing/camera/camera4/camera_info]
      • detection_rois4 [default: /perception/object_recognition/detection/rois4]
      • image_raw5 [default: /sensing/camera/camera5/image_rect_color]
      • camera_info5 [default: /sensing/camera/camera5/camera_info]
      • detection_rois5 [default: /perception/object_recognition/detection/rois5]
      • image_raw6 [default: /sensing/camera/camera6/image_rect_color]
      • camera_info6 [default: /sensing/camera/camera6/camera_info]
      • detection_rois6 [default: /perception/object_recognition/detection/rois6]
      • image_raw7 [default: /sensing/camera/camera7/image_rect_color]
      • camera_info7 [default: /sensing/camera/camera7/camera_info]
      • detection_rois7 [default: /perception/object_recognition/detection/rois7]
      • image_raw8 [default: /sensing/camera/camera8/image_rect_color]
      • camera_info8 [default: /sensing/camera/camera8/camera_info]
      • detection_rois8 [default: /perception/object_recognition/detection/rois8]
      • image_number [default: 6]
      • image_topic_name [default: image_rect_color]
      • segmentation_pointcloud_fusion_camera_ids [default: [0,1,5]]
      • camera_vru_detector_rois_ids [default: [0]]
      • ml_camera_lidar_merger_priority_mode [default: 0]
      • pointcloud_container_name [default: pointcloud_container]
      • input/concatenation_info [default: /sensing/lidar/concatenated/pointcloud_info]
      • use_vector_map [default: true]
      • use_pointcloud_map [default: true]
      • use_low_height_cropbox [default: true]
      • use_object_filter [default: true]
      • objects_filter_method [default: lanelet_filter]
      • use_irregular_object_detector [default: true]
      • use_low_intensity_cluster_filter [default: true]
      • use_image_segmentation_based_filter [default: false]
      • use_empty_dynamic_object_publisher [default: false]
      • use_object_validator [default: true]
      • objects_validation_method [default: obstacle_pointcloud]
      • use_perception_online_evaluator [default: false]
      • use_perception_analytics_publisher [default: true]
      • use_obstacle_segmentation_single_frame_filter
      • use_obstacle_segmentation_time_series_filter
      • use_camera_vru_detector [default: false]
      • use_cuda_ground_segmentation [default: false]
      • use_traffic_light_recognition
      • traffic_light_recognition/fusion_only
      • traffic_light_recognition/camera_namespaces
      • traffic_light_recognition/use_high_accuracy_detection
      • traffic_light_recognition/high_accuracy_detection_type
      • input_pointcloud_for_traffic_light_occlusion_predictor
      • traffic_light_recognition/whole_image_detection/model_path
      • traffic_light_recognition/whole_image_detection/label_path
      • traffic_light_recognition/fine_detection/model_path
      • traffic_light_recognition/fine_detection/label_path
      • traffic_light_recognition/classification/car/model_path
      • traffic_light_recognition/classification/car/label_path
      • traffic_light_recognition/classification/pedestrian/model_path
      • traffic_light_recognition/classification/pedestrian/label_path
      • use_detection_by_tracker [default: true]
      • use_radar_tracking_fusion [default: true]
      • input/radar [default: /sensing/radar/detected_objects]
      • use_multi_channel_tracker_merger [default: false]
      • output/tracker_merged_objects [default: /perception/object_recognition/detection/objects]
      • downsample_perception_common_pointcloud [default: false]
      • cuda_pointcloud_preprocessing [default: false]
      • common_downsample_voxel_size_x [default: 0.05]
      • common_downsample_voxel_size_y [default: 0.05]
      • common_downsample_voxel_size_z [default: 0.05]
  • launch/traffic_light_recognition/traffic_light.launch.xml
      • enable_image_decompressor [default: true]
      • fusion_only
      • camera_namespaces
      • use_high_accuracy_detection
      • high_accuracy_detection_type
      • each_traffic_light_map_based_detector_param_path
      • traffic_light_fine_detector_param_path
      • yolox_traffic_light_detector_param_path
      • car_traffic_light_classifier_param_path
      • pedestrian_traffic_light_classifier_param_path
      • traffic_light_roi_visualizer_param_path
      • traffic_light_occlusion_predictor_param_path
      • traffic_light_multi_camera_fusion_param_path
      • traffic_light_arbiter_param_path
      • crosswalk_traffic_light_estimator_param_path
      • whole_image_detection/model_path
      • whole_image_detection/label_path
      • fine_detection/model_path
      • fine_detection/label_path
      • classification/car/model_path
      • classification/car/label_path
      • classification/pedestrian/model_path
      • classification/pedestrian/label_path
      • input/vector_map [default: /map/vector_map]
      • input/route [default: /planning/mission_planning/route]
      • input_pointcloud_for_traffic_light_occlusion_predictor [default: /sensing/lidar/top/pointcloud_raw_ex]
      • internal/traffic_signals [default: /perception/traffic_light_recognition/internal/traffic_signals]
      • external/traffic_signals [default: /perception/traffic_light_recognition/external/traffic_signals]
      • judged/traffic_signals [default: /perception/traffic_light_recognition/judged/traffic_signals]
      • output/traffic_signals [default: /perception/traffic_light_recognition/traffic_signals]
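
A concrete way to use the argument listing above is to override a subset of the arguments when including perception.launch.xml. The sketch below is illustrative only: the argument names come from the listing, but all values (mode, model name, topics, and file paths) are placeholders to be replaced with your deployment's settings, and every required *_param_path argument listed at the top of perception.launch.xml must be supplied.

```xml
<!-- Illustrative sketch: argument names are taken from the listing above;
     all values (model name, topics, file paths) are placeholders. -->
<include file="$(find-pkg-share tier4_perception_launch)/launch/perception.launch.xml">
  <!-- options for mode: camera_lidar_fusion, lidar, camera -->
  <arg name="mode" value="lidar"/>
  <arg name="lidar_detection_model" value="centerpoint"/>
  <arg name="use_short_range_detection" value="true"/>
  <arg name="input/pointcloud" value="/sensing/lidar/concatenated/pointcloud"/>
  <arg name="data_path" value="$(env HOME)/autoware_data"/>

  <!-- required parameter files: two shown as examples, the rest omitted -->
  <arg name="object_recognition_detection_euclidean_cluster_param_path"
       value="/path/to/euclidean_cluster.param.yaml"/>
  <arg name="object_recognition_prediction_map_based_prediction_param_path"
       value="/path/to/map_based_prediction.param.yaml"/>
  <!-- ... remaining *_param_path arguments omitted ... -->
</include>
```

Arguments whose names contain slashes (e.g. input/pointcloud) are passed exactly as written in the listing.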

Messages

No message files found.

Services

No service files found.

Plugins

No plugins found.

CHANGELOG

Changelog for package tier4_perception_launch


0.49.0 (2025-12-30)

  • Merge remote-tracking branch 'origin/main' into prepare-0.49.0-changelog

  • feat: add option for gpu-preprocessing in perception launch (#11728)

    • add option for GPU preprocessing

    • Rename CUDA pointclouds argument in perception launch (Co-authored-by: Yoshi Ri <yoshiyoshidetteiu@gmail.com>)

  • feat(camera_streampetr): add camera streampetr to tracker input (#11635)

  • Contributors: Ryohsuke Mitsudome, Yoshi Ri, Yuxuan Liu

0.48.0 (2025-11-18)

  • Merge remote-tracking branch 'origin/main' into humble

  • feat(image_object_locator): add near range camera VRU detector to perception pipeline (#11622)

  • feat(mult object tracker): publish merged object if it is multi-channel mode (#11386)

    • feat(multi_object_tracker): add support for merged object output and related parameters
    • feat(multi_object_tracker): add function to convert DynamicObject to DetectedObject and implement merged object publishing
    • fix(multi_object_tracker): prevent merged objects publisher from being in input channel topics
    • fix(multi_object_tracker): improve warning message for merged objects publisher in input channel
    • feat(multi_object_tracker): add is_simulation parameter to control merged object publishing
    • fix(multi_object_tracker): correct ego_frame_id variable usage and declaration
    • feat(multi_object_tracker): update getMergedObjects to accept transform and apply frame conversion
    • feat(multi_object_tracker): optimize getMergedObjects for efficient frame transformation
    • fix(multi_object_tracker): fix bug when merged_objects_pub_ is nullptr
    • feat(multi_object_tracker): refactor orientation availability conversion to improve code clarity
    • fix(multi_object_tracker): remove redundant comment in publish method for clarity
    • feat(multi_object_tracker): rename parameters for clarity and add publish_merged_objects option
    • fix(multi_object_tracker): rename pruning parameters for consistency in schema

    • Update perception/autoware_multi_object_tracker/src/processor/processor.cpp (Co-authored-by: Yoshi Ri <yoshiyoshidetteiu@gmail.com>)

    • feat(multi_object_tracker): replace 'is_simulation' with 'publish_merged_objects' in launch files and parameters (Co-authored-by: Yoshi Ri <yoshiyoshidetteiu@gmail.com>)

  • fix(camera_2d_detector): typo (#11380)

  • feat(launch): add args to select the 2d camera detection model (#11364)

    • add args
    • add color map path

    • give color_map_path to yolox.launch (Co-authored-by: badai nguyen <94814556+badai-nguyen@users.noreply.github.com>)

Package Dependencies

System Dependencies

No direct system dependencies.

Launch files

  • launch/object_recognition/detection/detection.launch.xml
      • mode
      • lidar_detection_model_type
      • lidar_detection_model_name
      • use_short_range_detection
      • lidar_short_range_detection_model_type
      • lidar_short_range_detection_model_name
      • use_object_filter
      • objects_filter_method
      • use_pointcloud_map
      • use_detection_by_tracker
      • use_validator
      • objects_validation_method
      • use_low_intensity_cluster_filter
      • use_image_segmentation_based_filter
      • use_multi_channel_tracker_merger
      • use_radar_tracking_fusion
      • use_irregular_object_detector
      • irregular_object_detector_fusion_camera_ids [default: [0]]
      • ml_camera_lidar_merger_priority_mode
      • camera_2d_detector/model_path
      • camera_2d_detector/label_path
      • camera_2d_detector/color_map_path
      • use_camera_vru_detector
      • camera_vru_detector_rois_ids [default: [0]]
      • number_of_cameras
      • node/pointcloud_container
      • input/pointcloud
      • input/obstacle_segmentation/pointcloud [default: /perception/obstacle_segmentation/pointcloud]
      • input/camera0/image
      • input/camera0/info
      • input/camera0/rois
      • input/camera1/image
      • input/camera1/info
      • input/camera1/rois
      • input/camera2/image
      • input/camera2/info
      • input/camera2/rois
      • input/camera3/image
      • input/camera3/info
      • input/camera3/rois
      • input/camera4/image
      • input/camera4/info
      • input/camera4/rois
      • input/camera5/image
      • input/camera5/info
      • input/camera5/rois
      • input/camera6/image
      • input/camera6/info
      • input/camera6/rois
      • input/camera7/image
      • input/camera7/info
      • input/camera7/rois
      • input/camera8/image
      • input/camera8/info
      • input/camera8/rois
      • input/concatenation_info
      • image_topic_name
      • segmentation_pointcloud_fusion_camera_ids
      • input/radar
      • input/tracked_objects [default: /perception/object_recognition/tracking/objects]
      • output/objects [default: objects]
      • sync_param_path
      • voxel_grid_based_euclidean_param_path
      • irregular_object_detector_param_path
      • object_recognition_detection_object_sorter_radar_param_path
  • launch/object_recognition/detection/detector/camera_2d_detector.launch.xml
      • image_raw0 [default: /sensing/camera/camera0/image_raw]
      • image_raw1 [default: /sensing/camera/camera1/image_raw]
      • image_raw2 [default: /sensing/camera/camera2/image_raw]
      • image_raw3 [default: /sensing/camera/camera3/image_raw]
      • image_raw4 [default: /sensing/camera/camera4/image_raw]
      • image_raw5 [default: /sensing/camera/camera5/image_raw]
      • image_raw6 [default: /sensing/camera/camera6/image_raw]
      • image_raw7 [default: /sensing/camera/camera7/image_raw]
      • image_raw8 [default: /sensing/camera/camera8/image_raw]
      • image_raw9 [default: /sensing/camera/camera9/image_raw]
      • image_number [default: 1]
      • camera_index [default: 0]
      • use_bytetrack [default: true]
      • enable_visualizer [default: false]
      • camera_2d_detector/model_path
      • camera_2d_detector/label_path
      • camera_2d_detector/color_map_path
      • tensorrt_yolox_ns [default: ]
  • launch/object_recognition/detection/detector/camera_bev_detector.launch.xml
      • input/camera0/image
      • input/camera0/info
      • input/camera1/image
      • input/camera1/info
      • input/camera2/image
      • input/camera2/info
      • input/camera3/image
      • input/camera3/info
      • input/camera4/image
      • input/camera4/info
      • input/camera5/image
      • input/camera5/info
      • input/camera6/image
      • input/camera6/info
      • input/camera7/image
      • input/camera7/info
      • output/objects
      • number_of_cameras
      • data_path [default: $(env HOME)/autoware_data]
      • bevdet_model_name [default: bevdet_one_lt_d]
      • bevdet_model_path [default: $(var data_path)/tensorrt_bevdet]
  • launch/object_recognition/detection/detector/camera_lidar_detector.launch.xml
      • ns
      • lidar_detection_model_type
      • lidar_detection_model_name
      • use_low_intensity_cluster_filter
      • use_image_segmentation_based_filter
      • number_of_cameras
      • input/camera0/image
      • input/camera0/info
      • input/camera0/rois
      • input/camera1/image
      • input/camera1/info
      • input/camera1/rois
      • input/camera2/image
      • input/camera2/info
      • input/camera2/rois
      • input/camera3/image
      • input/camera3/info
      • input/camera3/rois
      • input/camera4/image
      • input/camera4/info
      • input/camera4/rois
      • input/camera5/image
      • input/camera5/info
      • input/camera5/rois
      • input/camera6/image
      • input/camera6/info
      • input/camera6/rois
      • input/camera7/image
      • input/camera7/info
      • input/camera7/rois
      • input/camera8/image
      • input/camera8/info
      • input/camera8/rois
      • input/concatenation_info
      • segmentation_pointcloud_fusion_camera_ids
      • image_topic_name
      • sync_param_path
      • voxel_grid_based_euclidean_param_path
      • node/pointcloud_container
      • input/pointcloud
      • input/pointcloud_map/pointcloud
      • input/obstacle_segmentation/pointcloud
      • output/ml_detector/objects
      • output/rule_detector/objects
      • output/clustering/cluster_objects
      • camera_2d_detector/model_path
      • camera_2d_detector/label_path
      • camera_2d_detector/color_map_path
      • enable_2d_detection [default: false]
  • launch/object_recognition/detection/detector/camera_lidar_irregular_object_detector.launch.xml
      • ns
      • pipeline_ns
      • input/concatenation_info
      • input/pointcloud
      • fusion_camera_ids [default: [0]]
      • image_topic_name [default: image_raw]
      • irregular_object_detector_param_path
      • sync_param_path
  • launch/object_recognition/detection/detector/camera_vru_detector.launch.xml
      • ns
      • input/camera0/info [default: /sensing/camera/camera0/camera_info]
      • input/camera0/rois [default: /perception/object_recognition/detection/rois0]
      • input/camera1/info [default: /sensing/camera/camera1/camera_info]
      • input/camera1/rois [default: /perception/object_recognition/detection/rois1]
      • input/camera2/info [default: /sensing/camera/camera2/camera_info]
      • input/camera2/rois [default: /perception/object_recognition/detection/rois2]
      • input/camera3/info [default: /sensing/camera/camera3/camera_info]
      • input/camera3/rois [default: /perception/object_recognition/detection/rois3]
      • input/camera4/info [default: /sensing/camera/camera4/camera_info]
      • input/camera4/rois [default: /perception/object_recognition/detection/rois4]
      • input/camera5/info [default: /sensing/camera/camera5/camera_info]
      • input/camera5/rois [default: /perception/object_recognition/detection/rois5]
      • input/camera6/info [default: /sensing/camera/camera6/camera_info]
      • input/camera6/rois [default: /perception/object_recognition/detection/rois6]
      • input/camera7/info [default: /sensing/camera/camera7/camera_info]
      • input/camera7/rois [default: /perception/object_recognition/detection/rois7]
      • output/objects [default: /perception/object_recognition/detection/camera_vru/objects]
      • bbox_object_locator_param_path [default: $(find-pkg-share autoware_image_object_locator)/config/bbox_object_locator.param.yaml]
      • rois_ids [default: [0, 1]]
  • launch/object_recognition/detection/detector/lidar_dnn_detector.launch.xml
      • lidar_detection_model_type
      • lidar_detection_model_name
      • bevfusion_model_path [default: $(var data_path)/bevfusion]
      • centerpoint_model_path [default: $(var data_path)/lidar_centerpoint]
      • transfusion_model_path [default: $(var data_path)/lidar_transfusion]
      • use_short_range_detection [default: false]
      • lidar_short_range_detection_model_type
      • lidar_short_range_detection_model_name
      • short_range_centerpoint_model_path [default: $(var data_path)/lidar_short_range_centerpoint]
      • node/pointcloud_container
      • input/pointcloud
      • output/objects
      • output/short_range_objects
      • lidar_short_range_model_param_path [default: $(find-pkg-share autoware_lidar_centerpoint)/config]
      • lidar_model_param_path [default: $(find-pkg-share autoware_bevfusion)/config]
      • lidar_model_param_path [default: $(find-pkg-share autoware_lidar_transfusion)/config]
      • lidar_model_param_path [default: $(find-pkg-share autoware_lidar_centerpoint)/config]
  • launch/object_recognition/detection/detector/lidar_rule_detector.launch.xml
      • ns
      • node/pointcloud_container
      • input/pointcloud_map/pointcloud
      • input/obstacle_segmentation/pointcloud
      • output/cluster_objects
      • output/objects
      • voxel_grid_based_euclidean_param_path
  • launch/object_recognition/detection/detector/tracker_based_detector.launch.xml
      • input/clusters
      • input/tracked_objects
      • output/objects
  • launch/object_recognition/detection/filter/object_filter.launch.xml
      • objects_filter_method [default: lanelet_filter]
      • input/objects
      • output/objects
  • launch/object_recognition/detection/filter/object_validator.launch.xml
      • objects_validation_method
      • input/obstacle_pointcloud
      • input/objects
      • output/objects
  • launch/object_recognition/detection/filter/radar_filter.launch.xml
      • object_sorter_param_path [default: $(var object_recognition_detection_object_sorter_radar_param_path)]
      • radar_lanelet_filtering_range_param_path [default: $(find-pkg-share autoware_detected_object_validation)/config/object_lanelet_filter.param.yaml]
      • input/radar
      • output/objects
  • launch/object_recognition/detection/merger/camera_lidar_merger.launch.xml
      • object_recognition_detection_object_merger_data_association_matrix_param_path [default: $(find-pkg-share autoware_object_merger)/config/data_association_matrix.param.yaml]
      • object_recognition_detection_object_merger_distance_threshold_list_path [default: $(find-pkg-share autoware_object_merger)/config/overlapped_judge.param.yaml]
      • ml_camera_lidar_object_association_merger_param_path [default: $(find-pkg-share autoware_object_merger)/config/object_association_merger.param.yaml]
      • lidar_detection_model_type
      • use_detection_by_tracker
      • use_irregular_object_detector
      • use_object_filter
      • objects_filter_method
      • number_of_cameras
      • input/camera0/image
      • input/camera0/info
      • input/camera0/rois
      • input/camera1/image
      • input/camera1/info
      • input/camera1/rois
      • input/camera2/image
      • input/camera2/info
      • input/camera2/rois
      • input/camera3/image
      • input/camera3/info
      • input/camera3/rois
      • input/camera4/image
      • input/camera4/info
      • input/camera4/rois
      • input/camera5/image
      • input/camera5/info
      • input/camera5/rois
      • input/camera6/image
      • input/camera6/info
      • input/camera6/rois
      • input/camera7/image
      • input/camera7/info
      • input/camera7/rois
      • input/camera8/image
      • input/camera8/info
      • input/camera8/rois
      • input/lidar_ml/objects
      • input/lidar_rule/objects
      • input/detection_by_tracker/objects
      • output/objects [default: objects]
      • alpha_merger_priority_mode [default: 0]
  • launch/object_recognition/detection/merger/camera_lidar_radar_merger.launch.xml
      • object_recognition_detection_object_merger_data_association_matrix_param_path [default: $(find-pkg-share autoware_object_merger)/config/data_association_matrix.param.yaml]
      • object_recognition_detection_object_merger_distance_threshold_list_path [default: $(find-pkg-share autoware_object_merger)/config/overlapped_judge.param.yaml]
      • ml_camera_lidar_object_association_merger_param_path [default: $(find-pkg-share autoware_object_merger)/config/object_association_merger.param.yaml]
      • far_object_merger_sync_queue_size [default: 20]
      • lidar_detection_model_type
      • use_radar_tracking_fusion
      • use_detection_by_tracker
      • use_irregular_object_detector
      • use_object_filter
      • objects_filter_method
      • number_of_cameras
      • input/camera0/image
      • input/camera0/info
      • input/camera0/rois
      • input/camera1/image
      • input/camera1/info
      • input/camera1/rois
      • input/camera2/image
      • input/camera2/info
      • input/camera2/rois
      • input/camera3/image
      • input/camera3/info
      • input/camera3/rois
      • input/camera4/image
      • input/camera4/info
      • input/camera4/rois
      • input/camera5/image
      • input/camera5/info
      • input/camera5/rois
      • input/camera6/image
      • input/camera6/info
      • input/camera6/rois
      • input/camera7/image
      • input/camera7/info
      • input/camera7/rois
      • input/camera8/image
      • input/camera8/info
      • input/camera8/rois
      • input/lidar_ml/objects
      • input/lidar_rule/objects
      • input/radar/objects
      • input/radar_far/objects
      • input/detection_by_tracker/objects
      • output/objects [default: objects]
      • alpha_merger_priority_mode [default: 0]
  • launch/object_recognition/detection/merger/lidar_merger.launch.xml
      • object_recognition_detection_object_merger_data_association_matrix_param_path [default: $(find-pkg-share autoware_object_merger)/config/data_association_matrix.param.yaml]
      • object_recognition_detection_object_merger_distance_threshold_list_path [default: $(find-pkg-share autoware_object_merger)/config/overlapped_judge.param.yaml]
      • lidar_detection_model_type
      • use_detection_by_tracker
      • use_object_filter
      • objects_filter_method
      • input/lidar_ml/objects [default: $(var lidar_detection_model_type)/objects]
      • input/lidar_rule/objects [default: clustering/objects]
      • input/detection_by_tracker/objects [default: detection_by_tracker/objects]
      • output/objects
  • launch/object_recognition/prediction/prediction.launch.xml
      • use_vector_map [default: false]
      • prediction_model_type [default: map_based]
      • input/objects [default: /perception/object_recognition/tracking/objects]
  • launch/object_recognition/tracking/tracking.launch.xml
      • object_recognition_tracking_radar_tracked_object_sorter_param_path
      • object_recognition_tracking_radar_tracked_object_lanelet_filter_param_path
      • object_recognition_tracking_object_merger_data_association_matrix_param_path
      • object_recognition_tracking_object_merger_node_param_path
      • mode [default: lidar]
      • use_radar_tracking_fusion [default: false]
      • use_multi_channel_tracker_merger
      • use_validator
      • use_short_range_detection
      • use_camera_vru_detector
      • publish_merged_objects
      • lidar_detection_model_type [default: centerpoint]
      • input/merged_detection/channel [default: detected_objects]
      • input/merged_detection/objects [default: /perception/object_recognition/detection/objects]
      • input/lidar_dnn/channel [default: lidar_$(var lidar_detection_model_type)]
      • input/lidar_dnn/objects [default: /perception/object_recognition/detection/$(var lidar_detection_model_type)/objects]
      • input/lidar_dnn_validated/objects [default: /perception/object_recognition/detection/$(var lidar_detection_model_type)/validation/objects]
      • input/lidar_dnn_short_range/channel [default: lidar_$(var lidar_short_range_detection_model_type)]
      • input/lidar_dnn_short_range/objects [default: /perception/object_recognition/detection/$(var lidar_short_range_detection_model_type)/objects]
      • input/camera_lidar_rule_detector/channel [default: camera_lidar_fusion]
      • input/camera_lidar_rule_detector/objects [default: /perception/object_recognition/detection/clustering/camera_lidar_fusion/objects]
      • input/irregular_object_detector/channel [default: camera_lidar_fusion_irregular]
      • input/irregular_object_detector/objects [default: /perception/object_recognition/detection/irregular_object/objects]
      • input/tracker_based_detector/channel [default: detection_by_tracker]
      • input/tracker_based_detector/objects [default: /perception/object_recognition/detection/detection_by_tracker/objects]
      • input/radar/channel [default: radar]
      • input/radar/far_objects [default: /perception/object_recognition/detection/radar/far_objects]
      • input/radar/objects [default: /perception/object_recognition/detection/radar/objects]
      • input/radar/tracked_objects [default: /sensing/radar/tracked_objects]
      • input/camera_only/objects [default: /perception/object_recognition/detection/camera_only/objects]
      • input/camera_only/channel [default: camera_streampetr]
      • input/camera_vru/channel [default: camera_vru]
      • input/camera_vru/objects [default: /perception/object_recognition/detection/camera_vru/objects]
      • output/objects [default: $(var ns)/objects]
      • output/merged_objects [default: $(var ns)/merged_objects]
  • launch/occupancy_grid_map/probabilistic_occupancy_grid_map.launch.xml
      • input/obstacle_pointcloud [default: concatenated/pointcloud]
      • input/raw_pointcloud [default: no_ground/oneshot/pointcloud]
      • output [default: /perception/occupancy_grid_map/map]
      • use_intra_process [default: false]
      • use_multithread [default: false]
      • pointcloud_container_name [default: pointcloud_container]
      • occupancy_grid_map_method
      • occupancy_grid_map_param_path
      • occupancy_grid_map_updater
      • occupancy_grid_map_updater_param_path
      • input_obstacle_pointcloud [default: false]
      • input_obstacle_and_raw_pointcloud [default: true]
      • use_pointcloud_container [default: true]
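The occupancy grid map launch file above can also be included directly. A minimal sketch, following the include convention from the Usage section; the `occupancy_grid_map_method` and `occupancy_grid_map_updater` values shown here are illustrative assumptions, and the `..._param_path` values are placeholders you must supply:

```xml
<include file="$(find-pkg-share tier4_perception_launch)/launch/occupancy_grid_map/probabilistic_occupancy_grid_map.launch.xml">
  <!-- Input topics (defaults shown in the argument list above) -->
  <arg name="input/obstacle_pointcloud" value="concatenated/pointcloud"/>
  <arg name="input/raw_pointcloud" value="no_ground/oneshot/pointcloud"/>

  <!-- Method/updater selection and their parameter files (values illustrative) -->
  <arg name="occupancy_grid_map_method" value="..."/>
  <arg name="occupancy_grid_map_param_path" value="..."/>
  <arg name="occupancy_grid_map_updater" value="..."/>
  <arg name="occupancy_grid_map_updater_param_path" value="..."/>
</include>
```

Arguments with defaults (e.g. `pointcloud_container_name`, `use_multithread`) can be omitted unless you need to override them.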
  • launch/perception.launch.xml
      • object_recognition_detection_euclidean_cluster_param_path
      • object_recognition_detection_outlier_param_path
      • object_recognition_detection_object_lanelet_filter_param_path
      • object_recognition_detection_object_position_filter_param_path
      • object_recognition_detection_pointcloud_map_filter_param_path
      • object_recognition_prediction_map_based_prediction_param_path
      • object_recognition_detection_object_merger_data_association_matrix_param_path
      • ml_camera_lidar_object_association_merger_param_path
      • object_recognition_detection_object_merger_distance_threshold_list_path
      • object_recognition_detection_fusion_sync_param_path
      • object_recognition_detection_roi_cluster_fusion_param_path
      • object_recognition_detection_irregular_object_detector_param_path
      • object_recognition_detection_roi_detected_object_fusion_param_path
      • object_recognition_detection_near_range_camera_vru_param_path
      • object_recognition_detection_pointpainting_fusion_common_param_path
      • object_recognition_detection_lidar_model_param_path
      • object_recognition_detection_radar_lanelet_filtering_range_param_path
      • object_recognition_detection_object_sorter_radar_param_path
      • object_recognition_tracking_multi_object_tracker_data_association_matrix_param_path
      • object_recognition_tracking_multi_object_tracker_input_channels_param_path
      • object_recognition_tracking_multi_object_tracker_node_param_path
      • object_recognition_tracking_radar_tracked_object_sorter_param_path
      • object_recognition_tracking_radar_tracked_object_lanelet_filter_param_path
      • obstacle_segmentation_ground_segmentation_param_path
      • obstacle_segmentation_ground_segmentation_elevation_map_param_path
      • object_recognition_detection_obstacle_pointcloud_based_validator_param_path
      • object_recognition_detection_detection_by_tracker_param
      • occupancy_grid_map_method
      • occupancy_grid_map_param_path
      • occupancy_grid_map_updater
      • occupancy_grid_map_updater_param_path
      • lidar_detection_model
      • each_traffic_light_map_based_detector_param_path
      • traffic_light_fine_detector_param_path
      • yolox_traffic_light_detector_param_path
      • car_traffic_light_classifier_param_path
      • pedestrian_traffic_light_classifier_param_path
      • traffic_light_roi_visualizer_param_path
      • traffic_light_occlusion_predictor_param_path
      • traffic_light_multi_camera_fusion_param_path
      • traffic_light_arbiter_param_path
      • crosswalk_traffic_light_estimator_param_path
      • tracker_publish_merged_objects
      • use_short_range_detection [default: false]
      • lidar_short_range_detection_model_type [default: centerpoint_short_range]
      • lidar_short_range_detection_model_name [default: centerpoint_short_range]
      • bevfusion_model_path [default: $(var data_path)/bevfusion]
      • centerpoint_model_path [default: $(var data_path)/lidar_centerpoint]
      • transfusion_model_path [default: $(var data_path)/lidar_transfusion]
      • short_range_centerpoint_model_path [default: $(var data_path)/lidar_short_range_centerpoint]
      • pointpainting_model_path [default: $(var data_path)/image_projection_based_fusion]
      • camera_2d_detector/model_path
      • camera_2d_detector/label_path
      • camera_2d_detector/color_map_path
      • input/pointcloud [default: /sensing/lidar/concatenated/pointcloud]
      • mode [default: camera_lidar_fusion]
      • data_path [default: $(env HOME)/autoware_data]
      • image_raw0 [default: /sensing/camera/camera0/image_rect_color]
      • camera_info0 [default: /sensing/camera/camera0/camera_info]
      • detection_rois0 [default: /perception/object_recognition/detection/rois0]
      • image_raw1 [default: /sensing/camera/camera1/image_rect_color]
      • camera_info1 [default: /sensing/camera/camera1/camera_info]
      • detection_rois1 [default: /perception/object_recognition/detection/rois1]
      • image_raw2 [default: /sensing/camera/camera2/image_rect_color]
      • camera_info2 [default: /sensing/camera/camera2/camera_info]
      • detection_rois2 [default: /perception/object_recognition/detection/rois2]
      • image_raw3 [default: /sensing/camera/camera3/image_rect_color]
      • camera_info3 [default: /sensing/camera/camera3/camera_info]
      • detection_rois3 [default: /perception/object_recognition/detection/rois3]
      • image_raw4 [default: /sensing/camera/camera4/image_rect_color]
      • camera_info4 [default: /sensing/camera/camera4/camera_info]
      • detection_rois4 [default: /perception/object_recognition/detection/rois4]
      • image_raw5 [default: /sensing/camera/camera5/image_rect_color]
      • camera_info5 [default: /sensing/camera/camera5/camera_info]
      • detection_rois5 [default: /perception/object_recognition/detection/rois5]
      • image_raw6 [default: /sensing/camera/camera6/image_rect_color]
      • camera_info6 [default: /sensing/camera/camera6/camera_info]
      • detection_rois6 [default: /perception/object_recognition/detection/rois6]
      • image_raw7 [default: /sensing/camera/camera7/image_rect_color]
      • camera_info7 [default: /sensing/camera/camera7/camera_info]
      • detection_rois7 [default: /perception/object_recognition/detection/rois7]
      • image_raw8 [default: /sensing/camera/camera8/image_rect_color]
      • camera_info8 [default: /sensing/camera/camera8/camera_info]
      • detection_rois8 [default: /perception/object_recognition/detection/rois8]
      • image_number [default: 6]
      • image_topic_name [default: image_rect_color]
      • segmentation_pointcloud_fusion_camera_ids [default: [0,1,5]]
      • camera_vru_detector_rois_ids [default: [0]]
      • ml_camera_lidar_merger_priority_mode [default: 0]
      • pointcloud_container_name [default: pointcloud_container]
      • input/concatenation_info [default: /sensing/lidar/concatenated/pointcloud_info]
      • use_vector_map [default: true]
      • use_pointcloud_map [default: true]
      • use_low_height_cropbox [default: true]
      • use_object_filter [default: true]
      • objects_filter_method [default: lanelet_filter]
      • use_irregular_object_detector [default: true]
      • use_low_intensity_cluster_filter [default: true]
      • use_image_segmentation_based_filter [default: false]
      • use_empty_dynamic_object_publisher [default: false]
      • use_object_validator [default: true]
      • objects_validation_method [default: obstacle_pointcloud]
      • use_perception_online_evaluator [default: false]
      • use_perception_analytics_publisher [default: true]
      • use_obstacle_segmentation_single_frame_filter
      • use_obstacle_segmentation_time_series_filter
      • use_camera_vru_detector [default: false]
      • use_cuda_ground_segmentation [default: false]
      • use_traffic_light_recognition
      • traffic_light_recognition/fusion_only
      • traffic_light_recognition/camera_namespaces
      • traffic_light_recognition/use_high_accuracy_detection
      • traffic_light_recognition/high_accuracy_detection_type
      • input_pointcloud_for_traffic_light_occlusion_predictor
      • traffic_light_recognition/whole_image_detection/model_path
      • traffic_light_recognition/whole_image_detection/label_path
      • traffic_light_recognition/fine_detection/model_path
      • traffic_light_recognition/fine_detection/label_path
      • traffic_light_recognition/classification/car/model_path
      • traffic_light_recognition/classification/car/label_path
      • traffic_light_recognition/classification/pedestrian/model_path
      • traffic_light_recognition/classification/pedestrian/label_path
      • use_detection_by_tracker [default: true]
      • use_radar_tracking_fusion [default: true]
      • input/radar [default: /sensing/radar/detected_objects]
      • use_multi_channel_tracker_merger [default: false]
      • output/tracker_merged_objects [default: /perception/object_recognition/detection/objects]
      • downsample_perception_common_pointcloud [default: false]
      • cuda_pointcloud_preprocessing [default: false]
      • common_downsample_voxel_size_x [default: 0.05]
      • common_downsample_voxel_size_y [default: 0.05]
      • common_downsample_voxel_size_z [default: 0.05]
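Most of the arguments above have defaults, so a typical include of `perception.launch.xml` only sets the mode, any topic overrides, and the required parameter paths. A minimal sketch, mirroring the Usage section; the parameter-path names shown are examples from the list above and the `...` values are placeholders you must supply (see the top of `perception.launch.xml` for the full required list):

```xml
<include file="$(find-pkg-share tier4_perception_launch)/launch/perception.launch.xml">
  <!-- options for mode: camera_lidar_fusion, lidar, camera -->
  <arg name="mode" value="camera_lidar_fusion"/>

  <!-- Topic/pipeline overrides (defaults shown in the argument list above) -->
  <arg name="input/pointcloud" value="/sensing/lidar/concatenated/pointcloud"/>
  <arg name="image_number" value="6"/>
  <arg name="use_traffic_light_recognition" value="false"/>

  <!-- Required parameter files (examples; provide every path listed in perception.launch.xml) -->
  <arg name="object_recognition_detection_euclidean_cluster_param_path" value="..."/>
  <arg name="object_recognition_prediction_map_based_prediction_param_path" value="..."/>
</include>
```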
  • launch/traffic_light_recognition/traffic_light.launch.xml
      • enable_image_decompressor [default: true]
      • fusion_only
      • camera_namespaces
      • use_high_accuracy_detection
      • high_accuracy_detection_type
      • each_traffic_light_map_based_detector_param_path
      • traffic_light_fine_detector_param_path
      • yolox_traffic_light_detector_param_path
      • car_traffic_light_classifier_param_path
      • pedestrian_traffic_light_classifier_param_path
      • traffic_light_roi_visualizer_param_path
      • traffic_light_occlusion_predictor_param_path
      • traffic_light_multi_camera_fusion_param_path
      • traffic_light_arbiter_param_path
      • crosswalk_traffic_light_estimator_param_path
      • whole_image_detection/model_path
      • whole_image_detection/label_path
      • fine_detection/model_path
      • fine_detection/label_path
      • classification/car/model_path
      • classification/car/label_path
      • classification/pedestrian/model_path
      • classification/pedestrian/label_path
      • input/vector_map [default: /map/vector_map]
      • input/route [default: /planning/mission_planning/route]
      • input_pointcloud_for_traffic_light_occlusion_predictor [default: /sensing/lidar/top/pointcloud_raw_ex]
      • internal/traffic_signals [default: /perception/traffic_light_recognition/internal/traffic_signals]
      • external/traffic_signals [default: /perception/traffic_light_recognition/external/traffic_signals]
      • judged/traffic_signals [default: /perception/traffic_light_recognition/judged/traffic_signals]
      • output/traffic_signals [default: /perception/traffic_light_recognition/traffic_signals]
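The traffic light recognition launch file can likewise be included on its own. A minimal sketch under the same convention; the argument values shown are illustrative assumptions and the `...` parameter paths are placeholders you must supply:

```xml
<include file="$(find-pkg-share tier4_perception_launch)/launch/traffic_light_recognition/traffic_light.launch.xml">
  <!-- Camera selection and detection pipeline (values illustrative) -->
  <arg name="camera_namespaces" value="..."/>
  <arg name="fusion_only" value="false"/>
  <arg name="use_high_accuracy_detection" value="true"/>
  <arg name="high_accuracy_detection_type" value="..."/>

  <!-- Required parameter and model files (examples from the list above) -->
  <arg name="traffic_light_arbiter_param_path" value="..."/>
  <arg name="car_traffic_light_classifier_param_path" value="..."/>
  <arg name="classification/car/model_path" value="..."/>
</include>
```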

Messages

No message files found.

Services

No service files found.

Plugins

No plugins found.
