Package Summary
Tags | No category tags.
Version | 0.47.0
License | Apache License 2.0
Build type | AMENT_CMAKE
Use | RECOMMENDED
Repository Summary
Checkout URI | https://github.com/autowarefoundation/autoware_universe.git |
VCS Type | git |
VCS Version | main |
Last Updated | 2025-08-16 |
Dev Status | UNKNOWN |
Released | UNRELEASED |
Tags | planner ros calibration self-driving-car autonomous-driving autonomous-vehicles ros2 3d-map autoware |
Maintainers
- Yukihiro Saito
- Yoshi Ri
- Taekjin Lee
- Masato Saeki
tier4_perception_launch
Structure
Package Dependencies
Please see <exec_depend> in package.xml.
Usage
You can include perception.launch.xml in your *.launch.xml as follows. Note that you must provide the parameter file paths as PACKAGE_param_path arguments; the list of parameter paths you need to provide is written at the top of perception.launch.xml.
<include file="$(find-pkg-share tier4_perception_launch)/launch/perception.launch.xml">
<!-- options for mode: camera_lidar_fusion, lidar, camera -->
<arg name="mode" value="lidar" />
<!-- Parameter files -->
<arg name="FOO_param_path" value="..."/>
<arg name="BAR_param_path" value="..."/>
...
</include>
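For example, a minimal sketch with a few of the actual parameter-path arguments filled in (the argument names below are taken from the perception.launch.xml argument list later on this page; the YAML paths are placeholders to adapt to your setup):
<include file="$(find-pkg-share tier4_perception_launch)/launch/perception.launch.xml">
  <arg name="mode" value="camera_lidar_fusion" />
  <!-- three of the many required *_param_path arguments; see perception.launch.xml for the full list -->
  <arg name="object_recognition_detection_euclidean_cluster_param_path" value="/path/to/euclidean_cluster.param.yaml" />
  <arg name="object_recognition_detection_outlier_param_path" value="/path/to/outlier.param.yaml" />
  <arg name="object_recognition_prediction_map_based_prediction_param_path" value="/path/to/map_based_prediction.param.yaml" />
</include>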
Changelog for package tier4_perception_launch
0.47.0 (2025-08-11)
- feat(perception_online_evaluator): add functionality to publish perception analytics info (#11089)
  - feat: add functionality to calculate perception metrics for MOB in autoware_perception_online_evaluator; chore: configure settings for MOB metrics calculation
  - feat: change the implementation from one topic per metric to publishing all metrics together for better management by the metric agent; refactor: rename a FrameMetrics member to clarify its meaning; refactor: use array/vector instead of unordered_map for FrameMetrics for better performance; chore: remap the published topic name to match msg conventions
  - fix: unit test error
  - style(pre-commit): autofix
  - refactor: replace the MOB keyword with the generalized term "perception analytics"
  - chore: improve comments
  - refactor: add a new autoware_perception_analytics_publisher_node to publish perception analytics info instead of the previous autoware_perception_online_evaluator_node; chore: modify the default launch settings to match the refactoring
  - style(pre-commit): autofix
  - fix: add initialization for latencies_; fix: use the tf at the objects' timestamp instead of the latest; feat: use ConstSharedPtr to avoid repeatedly copying a large message in PerceptionAnalyticsCalculator::setPredictedObjects
  - Co-authored-by: Jian Kang <jian.kang@tier4.jp> and pre-commit-ci[bot]
- fix(multi_object_tracker): add irregular objects topic (#11102)
  - fix(multi_object_tracker): add irregular objects topic
  - fix: change channel order
  - apply review suggestions to launch/tier4_perception_launch/launch/object_recognition/tracking/tracking.launch.xml and perception/autoware_multi_object_tracker/config/input_channels.param.yaml
  - fix: unused channels
  - fix: schema
  - docs: update readme
  - style(pre-commit): autofix
  - fix: short name
  - feat: add lidar_centerpoint_short_range input channel with default flags
  - Co-authored-by: Taekjin LEE <technolojin@gmail.com>, Taekjin LEE <taekjin.lee@tier4.jp>, and pre-commit-ci[bot]
- chore: sync files (#11091) Co-authored-by: github-actions, M. Fatih Cırıt <mfc@autoware.org>, and pre-commit-ci[bot]
- fix(autoware_object_merger): add merger priority_mode (#11042)
  - fix: add merger priority_mode; add priority mode into the launch files; add a class-based priority matrix; adjust the priority matrix
  - fix: add Confidence mode support
  - docs: schema update
  - fix: launch
  - fix: schema json
- feat(tier4_perception_launch): add missing remappings to launch file (#11037)
- feat(autoware_bevdet): implementation of bevdet using tensorrt (#10441)
- feat(tracking): add short range detection support and update related
(Changelog truncated at 100 lines; see the full file for the complete history.)
Launch files
- launch/object_recognition/detection/detection.launch.xml
- mode
- lidar_detection_model_type
- lidar_detection_model_name
- use_short_range_detection
- lidar_short_range_detection_model_type
- lidar_short_range_detection_model_name
- use_object_filter
- objects_filter_method
- use_pointcloud_map
- use_detection_by_tracker
- use_validator
- objects_validation_method
- use_low_intensity_cluster_filter
- use_image_segmentation_based_filter
- use_multi_channel_tracker_merger
- use_radar_tracking_fusion
- use_irregular_object_detector
- irregular_object_detector_fusion_camera_ids [default: [0]]
- ml_camera_lidar_merger_priority_mode
- number_of_cameras
- node/pointcloud_container
- input/pointcloud
- input/obstacle_segmentation/pointcloud [default: /perception/obstacle_segmentation/pointcloud]
- input/camera0/image
- input/camera0/info
- input/camera0/rois
- input/camera1/image
- input/camera1/info
- input/camera1/rois
- input/camera2/image
- input/camera2/info
- input/camera2/rois
- input/camera3/image
- input/camera3/info
- input/camera3/rois
- input/camera4/image
- input/camera4/info
- input/camera4/rois
- input/camera5/image
- input/camera5/info
- input/camera5/rois
- input/camera6/image
- input/camera6/info
- input/camera6/rois
- input/camera7/image
- input/camera7/info
- input/camera7/rois
- input/camera8/image
- input/camera8/info
- input/camera8/rois
- image_topic_name
- segmentation_pointcloud_fusion_camera_ids
- input/radar
- input/tracked_objects [default: /perception/object_recognition/tracking/objects]
- output/objects [default: objects]
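The camera inputs above are wired per index (camera0 through camera8), and number_of_cameras tells the pipeline how many of those slots are in use. A minimal two-camera sketch, assuming the default Autoware sensing topic layout and omitting the other required arguments for brevity:
<include file="$(find-pkg-share tier4_perception_launch)/launch/object_recognition/detection/detection.launch.xml">
  <arg name="mode" value="camera_lidar_fusion" />
  <arg name="number_of_cameras" value="2" />
  <arg name="input/camera0/image" value="/sensing/camera/camera0/image_rect_color" />
  <arg name="input/camera0/info" value="/sensing/camera/camera0/camera_info" />
  <arg name="input/camera0/rois" value="/perception/object_recognition/detection/rois0" />
  <arg name="input/camera1/image" value="/sensing/camera/camera1/image_rect_color" />
  <arg name="input/camera1/info" value="/sensing/camera/camera1/camera_info" />
  <arg name="input/camera1/rois" value="/perception/object_recognition/detection/rois1" />
</include>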
- launch/object_recognition/detection/detector/camera_bev_detector.launch.xml
- input/camera0/image
- input/camera0/info
- input/camera1/image
- input/camera1/info
- input/camera2/image
- input/camera2/info
- input/camera3/image
- input/camera3/info
- input/camera4/image
- input/camera4/info
- input/camera5/image
- input/camera5/info
- input/camera6/image
- input/camera6/info
- input/camera7/image
- input/camera7/info
- output/objects
- number_of_cameras
- data_path [default: $(env HOME)/autoware_data]
- bevdet_model_name [default: bevdet_one_lt_d]
- bevdet_model_path [default: $(var data_path)/tensorrt_bevdet]
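Note how the BEVDet model location is resolved: data_path defaults to $(env HOME)/autoware_data and bevdet_model_path to $(var data_path)/tensorrt_bevdet, so overriding data_path alone relocates the whole model directory. A sketch under that assumption (camera topics omitted):
<include file="$(find-pkg-share tier4_perception_launch)/launch/object_recognition/detection/detector/camera_bev_detector.launch.xml">
  <arg name="number_of_cameras" value="6" />
  <!-- bevdet_model_path then resolves to /opt/autoware_data/tensorrt_bevdet -->
  <arg name="data_path" value="/opt/autoware_data" />
  <arg name="bevdet_model_name" value="bevdet_one_lt_d" />
  <arg name="output/objects" value="objects" />
</include>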
- launch/object_recognition/detection/detector/camera_lidar_detector.launch.xml
- ns
- lidar_detection_model_type
- lidar_detection_model_name
- use_low_intensity_cluster_filter
- use_image_segmentation_based_filter
- number_of_cameras
- input/camera0/image
- input/camera0/info
- input/camera0/rois
- input/camera1/image
- input/camera1/info
- input/camera1/rois
- input/camera2/image
- input/camera2/info
- input/camera2/rois
- input/camera3/image
- input/camera3/info
- input/camera3/rois
- input/camera4/image
- input/camera4/info
- input/camera4/rois
- input/camera5/image
- input/camera5/info
- input/camera5/rois
- input/camera6/image
- input/camera6/info
- input/camera6/rois
- input/camera7/image
- input/camera7/info
- input/camera7/rois
- input/camera8/image
- input/camera8/info
- input/camera8/rois
- segmentation_pointcloud_fusion_camera_ids
- image_topic_name
- node/pointcloud_container
- input/pointcloud
- input/pointcloud_map/pointcloud
- input/obstacle_segmentation/pointcloud
- output/ml_detector/objects
- output/rule_detector/objects
- output/clustering/cluster_objects
- launch/object_recognition/detection/detector/camera_lidar_irregular_object_detector.launch.xml
- ns
- pipeline_ns
- input/pointcloud
- fusion_camera_ids [default: [0]]
- image_topic_name [default: image_raw]
- irregular_object_detector_param_path
- launch/object_recognition/detection/detector/lidar_dnn_detector.launch.xml
- lidar_detection_model_type
- lidar_detection_model_name
- bevfusion_model_path [default: $(var data_path)/bevfusion]
- centerpoint_model_path [default: $(var data_path)/lidar_centerpoint]
- transfusion_model_path [default: $(var data_path)/lidar_transfusion]
- use_short_range_detection [default: false]
- lidar_short_range_detection_model_type
- lidar_short_range_detection_model_name
- short_range_centerpoint_model_path [default: $(var data_path)/lidar_short_range_centerpoint]
- node/pointcloud_container
- input/pointcloud
- output/objects
- output/short_range_objects
- lidar_short_range_model_param_path [default: $(find-pkg-share autoware_lidar_centerpoint)/config]
- lidar_model_param_path [default: $(find-pkg-share autoware_bevfusion)/config]
- lidar_model_param_path [default: $(find-pkg-share autoware_lidar_transfusion)/config]
- lidar_model_param_path [default: $(find-pkg-share autoware_lidar_centerpoint)/config]
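The three lidar_model_param_path defaults above are alternatives: the launch file picks the config directory of autoware_bevfusion, autoware_lidar_transfusion, or autoware_lidar_centerpoint to match lidar_detection_model_type. A sketch selecting TransFusion (the model name value is a placeholder; use whichever model you have deployed):
<include file="$(find-pkg-share tier4_perception_launch)/launch/object_recognition/detection/detector/lidar_dnn_detector.launch.xml">
  <arg name="lidar_detection_model_type" value="transfusion" />
  <!-- placeholder model name -->
  <arg name="lidar_detection_model_name" value="my_transfusion_model" />
  <arg name="input/pointcloud" value="/sensing/lidar/concatenated/pointcloud" />
  <arg name="output/objects" value="objects" />
</include>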
- launch/object_recognition/detection/detector/lidar_rule_detector.launch.xml
- ns
- node/pointcloud_container
- input/pointcloud_map/pointcloud
- input/obstacle_segmentation/pointcloud
- output/cluster_objects
- output/objects
- launch/object_recognition/detection/detector/tracker_based_detector.launch.xml
- input/clusters
- input/tracked_objects
- output/objects
- launch/object_recognition/detection/filter/object_filter.launch.xml
- objects_filter_method [default: lanelet_filter]
- input/objects
- output/objects
- launch/object_recognition/detection/filter/object_validator.launch.xml
- objects_validation_method
- input/obstacle_pointcloud
- input/objects
- output/objects
- launch/object_recognition/detection/filter/radar_filter.launch.xml
- object_velocity_splitter_param_path [default: $(var object_recognition_detection_object_velocity_splitter_radar_param_path)]
- object_range_splitter_param_path [default: $(var object_recognition_detection_object_range_splitter_radar_param_path)]
- radar_lanelet_filtering_range_param_path [default: $(find-pkg-share autoware_detected_object_validation)/config/object_lanelet_filter.param.yaml]
- input/radar
- output/objects
- launch/object_recognition/detection/merger/camera_lidar_merger.launch.xml
- object_recognition_detection_object_merger_data_association_matrix_param_path [default: $(find-pkg-share autoware_object_merger)/config/data_association_matrix.param.yaml]
- object_recognition_detection_object_merger_distance_threshold_list_path [default: $(find-pkg-share autoware_object_merger)/config/overlapped_judge.param.yaml]
- ml_camera_lidar_object_association_merger_param_path [default: $(find-pkg-share autoware_object_merger)/config/object_association_merger.param.yaml]
- lidar_detection_model_type
- use_detection_by_tracker
- use_irregular_object_detector
- use_object_filter
- objects_filter_method
- number_of_cameras
- input/camera0/image
- input/camera0/info
- input/camera0/rois
- input/camera1/image
- input/camera1/info
- input/camera1/rois
- input/camera2/image
- input/camera2/info
- input/camera2/rois
- input/camera3/image
- input/camera3/info
- input/camera3/rois
- input/camera4/image
- input/camera4/info
- input/camera4/rois
- input/camera5/image
- input/camera5/info
- input/camera5/rois
- input/camera6/image
- input/camera6/info
- input/camera6/rois
- input/camera7/image
- input/camera7/info
- input/camera7/rois
- input/camera8/image
- input/camera8/info
- input/camera8/rois
- input/lidar_ml/objects
- input/lidar_rule/objects
- input/detection_by_tracker/objects
- output/objects [default: objects]
- alpha_merger_priority_mode [default: 0]
- launch/object_recognition/detection/merger/camera_lidar_radar_merger.launch.xml
- object_recognition_detection_object_merger_data_association_matrix_param_path [default: $(find-pkg-share autoware_object_merger)/config/data_association_matrix.param.yaml]
- object_recognition_detection_object_merger_distance_threshold_list_path [default: $(find-pkg-share autoware_object_merger)/config/overlapped_judge.param.yaml]
- ml_camera_lidar_object_association_merger_param_path [default: $(find-pkg-share autoware_object_merger)/config/object_association_merger.param.yaml]
- far_object_merger_sync_queue_size [default: 20]
- lidar_detection_model_type
- use_radar_tracking_fusion
- use_detection_by_tracker
- use_irregular_object_detector
- use_object_filter
- objects_filter_method
- number_of_cameras
- input/camera0/image
- input/camera0/info
- input/camera0/rois
- input/camera1/image
- input/camera1/info
- input/camera1/rois
- input/camera2/image
- input/camera2/info
- input/camera2/rois
- input/camera3/image
- input/camera3/info
- input/camera3/rois
- input/camera4/image
- input/camera4/info
- input/camera4/rois
- input/camera5/image
- input/camera5/info
- input/camera5/rois
- input/camera6/image
- input/camera6/info
- input/camera6/rois
- input/camera7/image
- input/camera7/info
- input/camera7/rois
- input/camera8/image
- input/camera8/info
- input/camera8/rois
- input/lidar_ml/objects
- input/lidar_rule/objects
- input/radar/objects
- input/radar_far/objects
- input/detection_by_tracker/objects
- output/objects [default: objects]
- alpha_merger_priority_mode [default: 0]
- launch/object_recognition/detection/merger/lidar_merger.launch.xml
- object_recognition_detection_object_merger_data_association_matrix_param_path [default: $(find-pkg-share autoware_object_merger)/config/data_association_matrix.param.yaml]
- object_recognition_detection_object_merger_distance_threshold_list_path [default: $(find-pkg-share autoware_object_merger)/config/overlapped_judge.param.yaml]
- lidar_detection_model_type
- use_detection_by_tracker
- use_object_filter
- objects_filter_method
- input/lidar_ml/objects [default: $(var lidar_detection_model_type)/objects]
- input/lidar_rule/objects [default: clustering/objects]
- input/detection_by_tracker/objects [default: detection_by_tracker/objects]
- output/objects
- launch/object_recognition/prediction/prediction.launch.xml
- use_vector_map [default: false]
- input/objects [default: /perception/object_recognition/tracking/objects]
- launch/object_recognition/tracking/tracking.launch.xml
- object_recognition_tracking_radar_tracked_object_sorter_param_path
- object_recognition_tracking_radar_tracked_object_lanelet_filter_param_path
- object_recognition_tracking_object_merger_data_association_matrix_param_path
- object_recognition_tracking_object_merger_node_param_path
- mode [default: lidar]
- use_radar_tracking_fusion [default: false]
- use_multi_channel_tracker_merger
- use_validator
- use_short_range_detection
- lidar_detection_model_type [default: centerpoint]
- input/merged_detection/channel [default: detected_objects]
- input/merged_detection/objects [default: /perception/object_recognition/detection/objects]
- input/lidar_dnn/channel [default: lidar_$(var lidar_detection_model_type)]
- input/lidar_dnn/objects [default: /perception/object_recognition/detection/$(var lidar_detection_model_type)/objects]
- input/lidar_dnn_validated/objects [default: /perception/object_recognition/detection/$(var lidar_detection_model_type)/validation/objects]
- input/lidar_dnn_short_range/channel [default: lidar_$(var lidar_short_range_detection_model_type)]
- input/lidar_dnn_short_range/objects [default: /perception/object_recognition/detection/$(var lidar_short_range_detection_model_type)/objects]
- input/camera_lidar_rule_detector/channel [default: camera_lidar_fusion]
- input/camera_lidar_rule_detector/objects [default: /perception/object_recognition/detection/clustering/camera_lidar_fusion/objects]
- input/irregular_object_detector/channel [default: camera_lidar_fusion_irregular]
- input/irregular_object_detector/objects [default: /perception/object_recognition/detection/irregular_object/objects]
- input/tracker_based_detector/channel [default: detection_by_tracker]
- input/tracker_based_detector/objects [default: /perception/object_recognition/detection/detection_by_tracker/objects]
- input/radar/channel [default: radar]
- input/radar/far_objects [default: /perception/object_recognition/detection/radar/far_objects]
- input/radar/objects [default: /perception/object_recognition/detection/radar/objects]
- input/radar/tracked_objects [default: /sensing/radar/tracked_objects]
- output/objects [default: $(var ns)/objects]
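Each tracker input above is a pair of a channel name and an objects topic, and several defaults are derived from lidar_detection_model_type. A sketch that switches the lidar DNN input to TransFusion while leaving everything else at its defaults (the four required *_param_path arguments at the top of the list are omitted for brevity):
<include file="$(find-pkg-share tier4_perception_launch)/launch/object_recognition/tracking/tracking.launch.xml">
  <arg name="mode" value="lidar" />
  <!-- input/lidar_dnn/channel then defaults to lidar_transfusion, and input/lidar_dnn/objects
       to /perception/object_recognition/detection/transfusion/objects -->
  <arg name="lidar_detection_model_type" value="transfusion" />
</include>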
- launch/occupancy_grid_map/probabilistic_occupancy_grid_map.launch.xml
- input/obstacle_pointcloud [default: concatenated/pointcloud]
- input/raw_pointcloud [default: no_ground/oneshot/pointcloud]
- output [default: /perception/occupancy_grid_map/map]
- use_intra_process [default: false]
- use_multithread [default: false]
- pointcloud_container_name [default: pointcloud_container]
- occupancy_grid_map_method
- occupancy_grid_map_param_path
- occupancy_grid_map_updater
- occupancy_grid_map_updater_param_path
- input_obstacle_pointcloud [default: false]
- input_obstacle_and_raw_pointcloud [default: true]
- use_pointcloud_container [default: true]
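A sketch for the occupancy grid map launch; the method and updater names here are assumptions (they are commonly used values in autoware_universe, but this page does not list the valid options), and the YAML paths are placeholders:
<include file="$(find-pkg-share tier4_perception_launch)/launch/occupancy_grid_map/probabilistic_occupancy_grid_map.launch.xml">
  <!-- assumed method name -->
  <arg name="occupancy_grid_map_method" value="pointcloud_based_occupancy_grid_map" />
  <arg name="occupancy_grid_map_param_path" value="/path/to/occupancy_grid_map.param.yaml" />
  <!-- assumed updater name -->
  <arg name="occupancy_grid_map_updater" value="binary_bayes_filter" />
  <arg name="occupancy_grid_map_updater_param_path" value="/path/to/updater.param.yaml" />
</include>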
- launch/perception.launch.xml
- object_recognition_detection_euclidean_cluster_param_path
- object_recognition_detection_outlier_param_path
- object_recognition_detection_object_lanelet_filter_param_path
- object_recognition_detection_object_position_filter_param_path
- object_recognition_detection_pointcloud_map_filter_param_path
- object_recognition_prediction_map_based_prediction_param_path
- object_recognition_detection_object_merger_data_association_matrix_param_path
- ml_camera_lidar_object_association_merger_param_path
- object_recognition_detection_object_merger_distance_threshold_list_path
- object_recognition_detection_fusion_sync_param_path
- object_recognition_detection_roi_cluster_fusion_param_path
- object_recognition_detection_irregular_object_detector_param_path
- object_recognition_detection_roi_detected_object_fusion_param_path
- object_recognition_detection_pointpainting_fusion_common_param_path
- object_recognition_detection_lidar_model_param_path
- object_recognition_detection_radar_lanelet_filtering_range_param_path
- object_recognition_detection_object_velocity_splitter_radar_param_path
- object_recognition_detection_object_velocity_splitter_radar_fusion_param_path
- object_recognition_detection_object_range_splitter_radar_param_path
- object_recognition_detection_object_range_splitter_radar_fusion_param_path
- object_recognition_tracking_multi_object_tracker_data_association_matrix_param_path
- object_recognition_tracking_multi_object_tracker_input_channels_param_path
- object_recognition_tracking_multi_object_tracker_node_param_path
- object_recognition_tracking_radar_tracked_object_sorter_param_path
- object_recognition_tracking_radar_tracked_object_lanelet_filter_param_path
- obstacle_segmentation_ground_segmentation_param_path
- obstacle_segmentation_ground_segmentation_elevation_map_param_path
- object_recognition_detection_obstacle_pointcloud_based_validator_param_path
- object_recognition_detection_detection_by_tracker_param
- occupancy_grid_map_method
- occupancy_grid_map_param_path
- occupancy_grid_map_updater
- occupancy_grid_map_updater_param_path
- lidar_detection_model
- each_traffic_light_map_based_detector_param_path
- traffic_light_fine_detector_param_path
- yolox_traffic_light_detector_param_path
- car_traffic_light_classifier_param_path
- pedestrian_traffic_light_classifier_param_path
- traffic_light_roi_visualizer_param_path
- traffic_light_occlusion_predictor_param_path
- traffic_light_multi_camera_fusion_param_path
- traffic_light_arbiter_param_path
- crosswalk_traffic_light_estimator_param_path
- lidar_detection_model_type [default: $(eval "'$(var lidar_detection_model)'.split('/')[0]")]
- lidar_detection_model_name [default: $(eval "'$(var lidar_detection_model)'.split('/')[1] if '/' in '$(var lidar_detection_model)' else ''")]
- use_short_range_detection [default: false]
- lidar_short_range_detection_model_type [default: centerpoint_short_range]
- lidar_short_range_detection_model_name [default: centerpoint_short_range]
- bevfusion_model_path [default: $(var data_path)/bevfusion]
- centerpoint_model_path [default: $(var data_path)/lidar_centerpoint]
- transfusion_model_path [default: $(var data_path)/lidar_transfusion]
- short_range_centerpoint_model_path [default: $(var data_path)/lidar_short_range_centerpoint]
- pointpainting_model_path [default: $(var data_path)/image_projection_based_fusion]
- input/pointcloud [default: /sensing/lidar/concatenated/pointcloud]
- mode [default: camera_lidar_fusion]
- data_path [default: $(env HOME)/autoware_data]
- lidar_detection_model_type [default: $(var lidar_detection_model_type)]
- lidar_detection_model_name [default: $(var lidar_detection_model_name)]
- image_raw0 [default: /sensing/camera/camera0/image_rect_color]
- camera_info0 [default: /sensing/camera/camera0/camera_info]
- detection_rois0 [default: /perception/object_recognition/detection/rois0]
- image_raw1 [default: /sensing/camera/camera1/image_rect_color]
- camera_info1 [default: /sensing/camera/camera1/camera_info]
- detection_rois1 [default: /perception/object_recognition/detection/rois1]
- image_raw2 [default: /sensing/camera/camera2/image_rect_color]
- camera_info2 [default: /sensing/camera/camera2/camera_info]
- detection_rois2 [default: /perception/object_recognition/detection/rois2]
- image_raw3 [default: /sensing/camera/camera3/image_rect_color]
- camera_info3 [default: /sensing/camera/camera3/camera_info]
- detection_rois3 [default: /perception/object_recognition/detection/rois3]
- image_raw4 [default: /sensing/camera/camera4/image_rect_color]
- camera_info4 [default: /sensing/camera/camera4/camera_info]
- detection_rois4 [default: /perception/object_recognition/detection/rois4]
- image_raw5 [default: /sensing/camera/camera5/image_rect_color]
- camera_info5 [default: /sensing/camera/camera5/camera_info]
- detection_rois5 [default: /perception/object_recognition/detection/rois5]
- image_raw6 [default: /sensing/camera/camera6/image_rect_color]
- camera_info6 [default: /sensing/camera/camera6/camera_info]
- detection_rois6 [default: /perception/object_recognition/detection/rois6]
- image_raw7 [default: /sensing/camera/camera7/image_rect_color]
- camera_info7 [default: /sensing/camera/camera7/camera_info]
- detection_rois7 [default: /perception/object_recognition/detection/rois7]
- image_raw8 [default: /sensing/camera/camera8/image_rect_color]
- camera_info8 [default: /sensing/camera/camera8/camera_info]
- detection_rois8 [default: /perception/object_recognition/detection/rois8]
- image_number [default: 6]
- image_topic_name [default: image_rect_color]
- segmentation_pointcloud_fusion_camera_ids [default: [0,1,5]]
- ml_camera_lidar_merger_priority_mode [default: 0]
- pointcloud_container_name [default: pointcloud_container]
- use_vector_map [default: true]
- use_pointcloud_map [default: true]
- use_low_height_cropbox [default: true]
- use_object_filter [default: true]
- objects_filter_method [default: lanelet_filter]
- use_irregular_object_detector [default: true]
- use_low_intensity_cluster_filter [default: true]
- use_image_segmentation_based_filter [default: false]
- use_empty_dynamic_object_publisher [default: false]
- use_object_validator [default: true]
- objects_validation_method [default: obstacle_pointcloud]
- use_perception_online_evaluator [default: false]
- use_perception_analytics_publisher [default: true]
- use_obstacle_segmentation_single_frame_filter
- use_obstacle_segmentation_time_series_filter
- use_traffic_light_recognition
- traffic_light_recognition/fusion_only
- traffic_light_recognition/camera_namespaces
- traffic_light_recognition/use_high_accuracy_detection
- traffic_light_recognition/high_accuracy_detection_type
- traffic_light_recognition/whole_image_detection/model_path
- traffic_light_recognition/whole_image_detection/label_path
- traffic_light_recognition/fine_detection/model_path
- traffic_light_recognition/fine_detection/label_path
- traffic_light_recognition/classification/car/model_path
- traffic_light_recognition/classification/car/label_path
- traffic_light_recognition/classification/pedestrian/model_path
- traffic_light_recognition/classification/pedestrian/label_path
- use_detection_by_tracker [default: true]
- use_radar_tracking_fusion [default: true]
- input/radar [default: /sensing/radar/detected_objects]
- use_multi_channel_tracker_merger [default: false]
- downsample_perception_common_pointcloud [default: false]
- common_downsample_voxel_size_x [default: 0.05]
- common_downsample_voxel_size_y [default: 0.05]
- common_downsample_voxel_size_z [default: 0.05]
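The two $(eval ...) defaults above split lidar_detection_model on "/" into a model type and an optional model name, so a value such as centerpoint/centerpoint_tiny (centerpoint_tiny is a hypothetical name) yields type centerpoint and name centerpoint_tiny, while a plain centerpoint leaves the name empty. A sketch relying on that behavior (the required *_param_path arguments are omitted; see the Usage section above):
<include file="$(find-pkg-share tier4_perception_launch)/launch/perception.launch.xml">
  <arg name="mode" value="lidar" />
  <!-- splits into lidar_detection_model_type=centerpoint and lidar_detection_model_name=centerpoint_tiny -->
  <arg name="lidar_detection_model" value="centerpoint/centerpoint_tiny" />
  <arg name="use_traffic_light_recognition" value="false" />
</include>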
- launch/traffic_light_recognition/traffic_light.launch.xml
- enable_image_decompressor [default: true]
- fusion_only
- camera_namespaces
- use_high_accuracy_detection
- high_accuracy_detection_type
- each_traffic_light_map_based_detector_param_path
- traffic_light_fine_detector_param_path
- yolox_traffic_light_detector_param_path
- car_traffic_light_classifier_param_path
- pedestrian_traffic_light_classifier_param_path
- traffic_light_roi_visualizer_param_path
- traffic_light_occlusion_predictor_param_path
- traffic_light_multi_camera_fusion_param_path
- traffic_light_arbiter_param_path
- crosswalk_traffic_light_estimator_param_path
- whole_image_detection/model_path
- whole_image_detection/label_path
- fine_detection/model_path
- fine_detection/label_path
- classification/car/model_path
- classification/car/label_path
- classification/pedestrian/model_path
- classification/pedestrian/label_path
- input/vector_map [default: /map/vector_map]
- input/route [default: /planning/mission_planning/route]
- input/cloud [default: /sensing/lidar/top/pointcloud_raw_ex]
- internal/traffic_signals [default: /perception/traffic_light_recognition/internal/traffic_signals]
- external/traffic_signals [default: /perception/traffic_light_recognition/external/traffic_signals]
- judged/traffic_signals [default: /perception/traffic_light_recognition/judged/traffic_signals]
- output/traffic_signals [default: /perception/traffic_light_recognition/traffic_signals]
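A sketch for the traffic light recognition launch; the value formats for camera_namespaces and high_accuracy_detection_type are assumptions (this page lists the argument names but not their accepted values), and the model/label paths are placeholders:
<include file="$(find-pkg-share tier4_perception_launch)/launch/traffic_light_recognition/traffic_light.launch.xml">
  <arg name="fusion_only" value="false" />
  <!-- assumed list format -->
  <arg name="camera_namespaces" value="[camera6, camera7]" />
  <arg name="use_high_accuracy_detection" value="true" />
  <!-- assumed type name -->
  <arg name="high_accuracy_detection_type" value="fine_detection" />
  <arg name="fine_detection/model_path" value="/path/to/tlr_fine_detector.onnx" />
  <arg name="fine_detection/label_path" value="/path/to/tlr_fine_detector.labels" />
</include>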
Package Summary
Tags | No category tags. |
Version | 0.47.0 |
License | Apache License 2.0 |
Build type | AMENT_CMAKE |
Use | RECOMMENDED |
Repository Summary
Description | |
Checkout URI | https://github.com/autowarefoundation/autoware_universe.git |
VCS Type | git |
VCS Version | main |
Last Updated | 2025-08-16 |
Dev Status | UNKNOWN |
Released | UNRELEASED |
Tags | planner ros calibration self-driving-car autonomous-driving autonomous-vehicles ros2 3d-map autoware |
Contributing |
Help Wanted (-)
Good First Issues (-) Pull Requests to Review (-) |
Package Description
Additional Links
Maintainers
- Yukihiro Saito
- Yoshi Ri
- Taekjin Lee
- Masato Saeki
Authors
tier4_perception_launch
Structure
Package Dependencies
Please see <exec_depend>
in package.xml
.
Usage
You can include as follows in *.launch.xml
to use perception.launch.xml
.
Note that you should provide parameter paths as PACKAGE_param_path
. The list of parameter paths you should provide is written at the top of perception.launch.xml
.
<include file="$(find-pkg-share tier4_perception_launch)/launch/perception.launch.xml">
<!-- options for mode: camera_lidar_fusion, lidar, camera -->
<arg name="mode" value="lidar" />
<!-- Parameter files -->
<arg name="FOO_param_path" value="..."/>
<arg name="BAR_param_path" value="..."/>
...
</include>
Changelog for package tier4_perception_launch
0.47.0 (2025-08-11)
-
feat(perception_online_evaluator): add functionality to publish perception analytics info (#11089)
* feat: add functionality to calculate perception metrics for MOB in autoware_perception_online_evaluator chore: configure settings for mob metrics calculation
* feat: change implementation from one topic per metric to all metrics published in one metric for better management by metric agent refactor: rename FrameMetrics member to clarify variable meaning refactor: use array/vector instead of unorder_map for FrameMetrics for better performance chore: remap published topic name to match msg conventions
- fix: unittest error
- style(pre-commit): autofix
- refactor: replace MOB keyword with generalized expression of perception analytics
- chore: improve comment
* refactor: add a new autoware_perception_analytics_publisher_node to publish perception analytics info instead of using previous autoware_perception_online_evaluator_node chore: modify default launch setting to match the refactoring
- style(pre-commit): autofix
* fix: add initialization for [latencies_]{.title-ref} fix: use tf of objects timestamp instead of latest feat: use ConstSharedPtr to avoid repeated copy of large message in [PerceptionAnalyticsCalculator::setPredictedObjects]{.title-ref} ---------Co-authored-by: Jian Kang <<jian.kang@tier4.jp>> Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
-
fix(multi_object_tracker): add irregular objects topic (#11102)
- fix(multi_object_tracker): add irregular objects topic
- fix: change channel order
* Update launch/tier4_perception_launch/launch/object_recognition/tracking/tracking.launch.xml Co-authored-by: Taekjin LEE <<technolojin@gmail.com>>
* Update launch/tier4_perception_launch/launch/object_recognition/tracking/tracking.launch.xml Co-authored-by: Taekjin LEE <<technolojin@gmail.com>>
* Update perception/autoware_multi_object_tracker/config/input_channels.param.yaml Co-authored-by: Taekjin LEE <<technolojin@gmail.com>>
* Update launch/tier4_perception_launch/launch/object_recognition/tracking/tracking.launch.xml Co-authored-by: Taekjin LEE <<technolojin@gmail.com>>
- fix: unused channels
- fix: schema
- docs: update readme
- style(pre-commit): autofix
- fix: short name
* feat: add lidar_centerpoint_short_range input channel with default flags ---------Co-authored-by: Taekjin LEE <<technolojin@gmail.com>> Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com> Co-authored-by: Taekjin LEE <<taekjin.lee@tier4.jp>>
-
chore: sync files (#11091) Co-authored-by: github-actions <<github-actions@github.com>> Co-authored-by: M. Fatih Cırıt <<mfc@autoware.org>> Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
-
fix(autoware_object_merger): add merger priority_mode (#11042)
* fix: add merger priority_mode fix: add priority mode into launch fix: add class based priority matrix fix: adjust priority matrix
- fix: add Confidence mode support
- docs: schema update
- fix: launch
* fix: schema json ---------
-
feat(tier4_perception_launch): add missing remappings to launch file (#11037)
-
feat(autoware_bevdet): implementation of bevdet using tensorrt (#10441)
-
feat(tracking): add short range detection support and update related
File truncated at 100 lines see the full file
Package Dependencies
System Dependencies
Dependant Packages
Launch files
- launch/object_recognition/detection/detection.launch.xml
-
- mode
- lidar_detection_model_type
- lidar_detection_model_name
- use_short_range_detection
- lidar_short_range_detection_model_type
- lidar_short_range_detection_model_name
- use_object_filter
- objects_filter_method
- use_pointcloud_map
- use_detection_by_tracker
- use_validator
- objects_validation_method
- use_low_intensity_cluster_filter
- use_image_segmentation_based_filter
- use_multi_channel_tracker_merger
- use_radar_tracking_fusion
- use_irregular_object_detector
- irregular_object_detector_fusion_camera_ids [default: [0]]
- ml_camera_lidar_merger_priority_mode
- number_of_cameras
- node/pointcloud_container
- input/pointcloud
- input/obstacle_segmentation/pointcloud [default: /perception/obstacle_segmentation/pointcloud]
- input/camera0/image
- input/camera0/info
- input/camera0/rois
- input/camera1/image
- input/camera1/info
- input/camera1/rois
- input/camera2/image
- input/camera2/info
- input/camera2/rois
- input/camera3/image
- input/camera3/info
- input/camera3/rois
- input/camera4/image
- input/camera4/info
- input/camera4/rois
- input/camera5/image
- input/camera5/info
- input/camera5/rois
- input/camera6/image
- input/camera6/info
- input/camera6/rois
- input/camera7/image
- input/camera7/info
- input/camera7/rois
- input/camera8/image
- input/camera8/info
- input/camera8/rois
- image_topic_name
- segmentation_pointcloud_fusion_camera_ids
- input/radar
- input/tracked_objects [default: /perception/object_recognition/tracking/objects]
- output/objects [default: objects]
- launch/object_recognition/detection/detector/camera_bev_detector.launch.xml
-
- input/camera0/image
- input/camera0/info
- input/camera1/image
- input/camera1/info
- input/camera2/image
- input/camera2/info
- input/camera3/image
- input/camera3/info
- input/camera4/image
- input/camera4/info
- input/camera5/image
- input/camera5/info
- input/camera6/image
- input/camera6/info
- input/camera7/image
- input/camera7/info
- output/objects
- number_of_cameras
- data_path [default: $(env HOME)/autoware_data]
- bevdet_model_name [default: bevdet_one_lt_d]
- bevdet_model_path [default: $(var data_path)/tensorrt_bevdet]
- launch/object_recognition/detection/detector/camera_lidar_detector.launch.xml
-
- ns
- lidar_detection_model_type
- lidar_detection_model_name
- use_low_intensity_cluster_filter
- use_image_segmentation_based_filter
- number_of_cameras
- input/camera0/image
- input/camera0/info
- input/camera0/rois
- input/camera1/image
- input/camera1/info
- input/camera1/rois
- input/camera2/image
- input/camera2/info
- input/camera2/rois
- input/camera3/image
- input/camera3/info
- input/camera3/rois
- input/camera4/image
- input/camera4/info
- input/camera4/rois
- input/camera5/image
- input/camera5/info
- input/camera5/rois
- input/camera6/image
- input/camera6/info
- input/camera6/rois
- input/camera7/image
- input/camera7/info
- input/camera7/rois
- input/camera8/image
- input/camera8/info
- input/camera8/rois
- segmentation_pointcloud_fusion_camera_ids
- image_topic_name
- node/pointcloud_container
- input/pointcloud
- input/pointcloud_map/pointcloud
- input/obstacle_segmentation/pointcloud
- output/ml_detector/objects
- output/rule_detector/objects
- output/clustering/cluster_objects
- launch/object_recognition/detection/detector/camera_lidar_irregular_object_detector.launch.xml
-
- ns
- pipeline_ns
- input/pointcloud
- fusion_camera_ids [default: [0]]
- image_topic_name [default: image_raw]
- irregular_object_detector_param_path
- launch/object_recognition/detection/detector/lidar_dnn_detector.launch.xml
-
- lidar_detection_model_type
- lidar_detection_model_name
- bevfusion_model_path [default: $(var data_path)/bevfusion]
- centerpoint_model_path [default: $(var data_path)/lidar_centerpoint]
- transfusion_model_path [default: $(var data_path)/lidar_transfusion]
- use_short_range_detection [default: false]
- lidar_short_range_detection_model_type
- lidar_short_range_detection_model_name
- short_range_centerpoint_model_path [default: $(var data_path)/lidar_short_range_centerpoint]
- node/pointcloud_container
- input/pointcloud
- output/objects
- output/short_range_objects
- lidar_short_range_model_param_path [default: $(find-pkg-share autoware_lidar_centerpoint)/config]
- lidar_model_param_path [default: $(find-pkg-share autoware_bevfusion)/config]
- lidar_model_param_path [default: $(find-pkg-share autoware_lidar_transfusion)/config]
- lidar_model_param_path [default: $(find-pkg-share autoware_lidar_centerpoint)/config]
- launch/object_recognition/detection/detector/lidar_rule_detector.launch.xml
-
- ns
- node/pointcloud_container
- input/pointcloud_map/pointcloud
- input/obstacle_segmentation/pointcloud
- output/cluster_objects
- output/objects
- launch/object_recognition/detection/detector/tracker_based_detector.launch.xml
-
- input/clusters
- input/tracked_objects
- output/objects
- launch/object_recognition/detection/filter/object_filter.launch.xml
-
- objects_filter_method [default: lanelet_filter]
- input/objects
- output/objects
- launch/object_recognition/detection/filter/object_validator.launch.xml
-
- objects_validation_method
- input/obstacle_pointcloud
- input/objects
- output/objects
- launch/object_recognition/detection/filter/radar_filter.launch.xml
-
- object_velocity_splitter_param_path [default: $(var object_recognition_detection_object_velocity_splitter_radar_param_path)]
- object_range_splitter_param_path [default: $(var object_recognition_detection_object_range_splitter_radar_param_path)]
- radar_lanelet_filtering_range_param_path [default: $(find-pkg-share autoware_detected_object_validation)/config/object_lanelet_filter.param.yaml]
- input/radar
- output/objects
- launch/object_recognition/detection/merger/camera_lidar_merger.launch.xml
-
- object_recognition_detection_object_merger_data_association_matrix_param_path [default: $(find-pkg-share autoware_object_merger)/config/data_association_matrix.param.yaml]
- object_recognition_detection_object_merger_distance_threshold_list_path [default: $(find-pkg-share autoware_object_merger)/config/overlapped_judge.param.yaml]
- ml_camera_lidar_object_association_merger_param_path [default: $(find-pkg-share autoware_object_merger)/config/object_association_merger.param.yaml]
- lidar_detection_model_type
- use_detection_by_tracker
- use_irregular_object_detector
- use_object_filter
- objects_filter_method
- number_of_cameras
- input/camera0/image
- input/camera0/info
- input/camera0/rois
- input/camera1/image
- input/camera1/info
- input/camera1/rois
- input/camera2/image
- input/camera2/info
- input/camera2/rois
- input/camera3/image
- input/camera3/info
- input/camera3/rois
- input/camera4/image
- input/camera4/info
- input/camera4/rois
- input/camera5/image
- input/camera5/info
- input/camera5/rois
- input/camera6/image
- input/camera6/info
- input/camera6/rois
- input/camera7/image
- input/camera7/info
- input/camera7/rois
- input/camera8/image
- input/camera8/info
- input/camera8/rois
- input/lidar_ml/objects
- input/lidar_rule/objects
- input/detection_by_tracker/objects
- output/objects [default: objects]
- alpha_merger_priority_mode [default: 0]
- launch/object_recognition/detection/merger/camera_lidar_radar_merger.launch.xml
-
- object_recognition_detection_object_merger_data_association_matrix_param_path [default: $(find-pkg-share autoware_object_merger)/config/data_association_matrix.param.yaml]
- object_recognition_detection_object_merger_distance_threshold_list_path [default: $(find-pkg-share autoware_object_merger)/config/overlapped_judge.param.yaml]
- ml_camera_lidar_object_association_merger_param_path [default: $(find-pkg-share autoware_object_merger)/config/object_association_merger.param.yaml]
- far_object_merger_sync_queue_size [default: 20]
- lidar_detection_model_type
- use_radar_tracking_fusion
- use_detection_by_tracker
- use_irregular_object_detector
- use_object_filter
- objects_filter_method
- number_of_cameras
- input/camera0/image
- input/camera0/info
- input/camera0/rois
- input/camera1/image
- input/camera1/info
- input/camera1/rois
- input/camera2/image
- input/camera2/info
- input/camera2/rois
- input/camera3/image
- input/camera3/info
- input/camera3/rois
- input/camera4/image
- input/camera4/info
- input/camera4/rois
- input/camera5/image
- input/camera5/info
- input/camera5/rois
- input/camera6/image
- input/camera6/info
- input/camera6/rois
- input/camera7/image
- input/camera7/info
- input/camera7/rois
- input/camera8/image
- input/camera8/info
- input/camera8/rois
- input/lidar_ml/objects
- input/lidar_rule/objects
- input/radar/objects
- input/radar_far/objects
- input/detection_by_tracker/objects
- output/objects [default: objects]
- alpha_merger_priority_mode [default: 0]
- launch/object_recognition/detection/merger/lidar_merger.launch.xml
-
- object_recognition_detection_object_merger_data_association_matrix_param_path [default: $(find-pkg-share autoware_object_merger)/config/data_association_matrix.param.yaml]
- object_recognition_detection_object_merger_distance_threshold_list_path [default: $(find-pkg-share autoware_object_merger)/config/overlapped_judge.param.yaml]
- lidar_detection_model_type
- use_detection_by_tracker
- use_object_filter
- objects_filter_method
- input/lidar_ml/objects [default: $(var lidar_detection_model_type)/objects]
- input/lidar_rule/objects [default: clustering/objects]
- input/detection_by_tracker/objects [default: detection_by_tracker/objects]
- output/objects
- launch/object_recognition/prediction/prediction.launch.xml
-
- use_vector_map [default: false]
- input/objects [default: /perception/object_recognition/tracking/objects]
- launch/object_recognition/tracking/tracking.launch.xml
-
- object_recognition_tracking_radar_tracked_object_sorter_param_path
- object_recognition_tracking_radar_tracked_object_lanelet_filter_param_path
- object_recognition_tracking_object_merger_data_association_matrix_param_path
- object_recognition_tracking_object_merger_node_param_path
- mode [default: lidar]
- use_radar_tracking_fusion [default: false]
- use_multi_channel_tracker_merger
- use_validator
- use_short_range_detection
- lidar_detection_model_type [default: centerpoint]
- input/merged_detection/channel [default: detected_objects]
- input/merged_detection/objects [default: /perception/object_recognition/detection/objects]
- input/lidar_dnn/channel [default: lidar_$(var lidar_detection_model_type)]
- input/lidar_dnn/objects [default: /perception/object_recognition/detection/$(var lidar_detection_model_type)/objects]
- input/lidar_dnn_validated/objects [default: /perception/object_recognition/detection/$(var lidar_detection_model_type)/validation/objects]
- input/lidar_dnn_short_range/channel [default: lidar_$(var lidar_short_range_detection_model_type)]
- input/lidar_dnn_short_range/objects [default: /perception/object_recognition/detection/$(var lidar_short_range_detection_model_type)/objects]
- input/camera_lidar_rule_detector/channel [default: camera_lidar_fusion]
- input/camera_lidar_rule_detector/objects [default: /perception/object_recognition/detection/clustering/camera_lidar_fusion/objects]
- input/irregular_object_detector/channel [default: camera_lidar_fusion_irregular]
- input/irregular_object_detector/objects [default: /perception/object_recognition/detection/irregular_object/objects]
- input/tracker_based_detector/channel [default: detection_by_tracker]
- input/tracker_based_detector/objects [default: /perception/object_recognition/detection/detection_by_tracker/objects]
- input/radar/channel [default: radar]
- input/radar/far_objects [default: /perception/object_recognition/detection/radar/far_objects]
- input/radar/objects [default: /perception/object_recognition/detection/radar/objects]
- input/radar/tracked_objects [default: /sensing/radar/tracked_objects]
- output/objects [default: $(var ns)/objects]
- launch/occupancy_grid_map/probabilistic_occupancy_grid_map.launch.xml
-
- input/obstacle_pointcloud [default: concatenated/pointcloud]
- input/raw_pointcloud [default: no_ground/oneshot/pointcloud]
- output [default: /perception/occupancy_grid_map/map]
- use_intra_process [default: false]
- use_multithread [default: false]
- pointcloud_container_name [default: pointcloud_container]
- occupancy_grid_map_method
- occupancy_grid_map_param_path
- occupancy_grid_map_updater
- occupancy_grid_map_updater_param_path
- input_obstacle_pointcloud [default: false]
- input_obstacle_and_raw_pointcloud [default: true]
- use_pointcloud_container [default: true]
- launch/perception.launch.xml
-
- object_recognition_detection_euclidean_cluster_param_path
- object_recognition_detection_outlier_param_path
- object_recognition_detection_object_lanelet_filter_param_path
- object_recognition_detection_object_position_filter_param_path
- object_recognition_detection_pointcloud_map_filter_param_path
- object_recognition_prediction_map_based_prediction_param_path
- object_recognition_detection_object_merger_data_association_matrix_param_path
- ml_camera_lidar_object_association_merger_param_path
- object_recognition_detection_object_merger_distance_threshold_list_path
- object_recognition_detection_fusion_sync_param_path
- object_recognition_detection_roi_cluster_fusion_param_path
- object_recognition_detection_irregular_object_detector_param_path
- object_recognition_detection_roi_detected_object_fusion_param_path
- object_recognition_detection_pointpainting_fusion_common_param_path
- object_recognition_detection_lidar_model_param_path
- object_recognition_detection_radar_lanelet_filtering_range_param_path
- object_recognition_detection_object_velocity_splitter_radar_param_path
- object_recognition_detection_object_velocity_splitter_radar_fusion_param_path
- object_recognition_detection_object_range_splitter_radar_param_path
- object_recognition_detection_object_range_splitter_radar_fusion_param_path
- object_recognition_tracking_multi_object_tracker_data_association_matrix_param_path
- object_recognition_tracking_multi_object_tracker_input_channels_param_path
- object_recognition_tracking_multi_object_tracker_node_param_path
- object_recognition_tracking_radar_tracked_object_sorter_param_path
- object_recognition_tracking_radar_tracked_object_lanelet_filter_param_path
- obstacle_segmentation_ground_segmentation_param_path
- obstacle_segmentation_ground_segmentation_elevation_map_param_path
- object_recognition_detection_obstacle_pointcloud_based_validator_param_path
- object_recognition_detection_detection_by_tracker_param
- occupancy_grid_map_method
- occupancy_grid_map_param_path
- occupancy_grid_map_updater
- occupancy_grid_map_updater_param_path
- lidar_detection_model
- each_traffic_light_map_based_detector_param_path
- traffic_light_fine_detector_param_path
- yolox_traffic_light_detector_param_path
- car_traffic_light_classifier_param_path
- pedestrian_traffic_light_classifier_param_path
- traffic_light_roi_visualizer_param_path
- traffic_light_occlusion_predictor_param_path
- traffic_light_multi_camera_fusion_param_path
- traffic_light_arbiter_param_path
- crosswalk_traffic_light_estimator_param_path
- lidar_detection_model_type [default: $(eval "'$(var lidar_detection_model)'.split('/')[0]")]
- lidar_detection_model_name [default: $(eval "'$(var lidar_detection_model)'.split('/')[1] if '/' in '$(var lidar_detection_model)' else ''")]
- use_short_range_detection [default: false]
- lidar_short_range_detection_model_type [default: centerpoint_short_range]
- lidar_short_range_detection_model_name [default: centerpoint_short_range]
- bevfusion_model_path [default: $(var data_path)/bevfusion]
- centerpoint_model_path [default: $(var data_path)/lidar_centerpoint]
- transfusion_model_path [default: $(var data_path)/lidar_transfusion]
- short_range_centerpoint_model_path [default: $(var data_path)/lidar_short_range_centerpoint]
- pointpainting_model_path [default: $(var data_path)/image_projection_based_fusion]
- input/pointcloud [default: /sensing/lidar/concatenated/pointcloud]
- mode [default: camera_lidar_fusion]
- data_path [default: $(env HOME)/autoware_data]
- lidar_detection_model_type [default: $(var lidar_detection_model_type)]
- lidar_detection_model_name [default: $(var lidar_detection_model_name)]
- image_raw0 [default: /sensing/camera/camera0/image_rect_color]
- camera_info0 [default: /sensing/camera/camera0/camera_info]
- detection_rois0 [default: /perception/object_recognition/detection/rois0]
- image_raw1 [default: /sensing/camera/camera1/image_rect_color]
- camera_info1 [default: /sensing/camera/camera1/camera_info]
- detection_rois1 [default: /perception/object_recognition/detection/rois1]
- image_raw2 [default: /sensing/camera/camera2/image_rect_color]
- camera_info2 [default: /sensing/camera/camera2/camera_info]
- detection_rois2 [default: /perception/object_recognition/detection/rois2]
- image_raw3 [default: /sensing/camera/camera3/image_rect_color]
- camera_info3 [default: /sensing/camera/camera3/camera_info]
- detection_rois3 [default: /perception/object_recognition/detection/rois3]
- image_raw4 [default: /sensing/camera/camera4/image_rect_color]
- camera_info4 [default: /sensing/camera/camera4/camera_info]
- detection_rois4 [default: /perception/object_recognition/detection/rois4]
- image_raw5 [default: /sensing/camera/camera5/image_rect_color]
- camera_info5 [default: /sensing/camera/camera5/camera_info]
- detection_rois5 [default: /perception/object_recognition/detection/rois5]
- image_raw6 [default: /sensing/camera/camera6/image_rect_color]
- camera_info6 [default: /sensing/camera/camera6/camera_info]
- detection_rois6 [default: /perception/object_recognition/detection/rois6]
- image_raw7 [default: /sensing/camera/camera7/image_rect_color]
- camera_info7 [default: /sensing/camera/camera7/camera_info]
- detection_rois7 [default: /perception/object_recognition/detection/rois7]
- image_raw8 [default: /sensing/camera/camera8/image_rect_color]
- camera_info8 [default: /sensing/camera/camera8/camera_info]
- detection_rois8 [default: /perception/object_recognition/detection/rois8]
- image_number [default: 6]
- image_topic_name [default: image_rect_color]
- segmentation_pointcloud_fusion_camera_ids [default: [0,1,5]]
- ml_camera_lidar_merger_priority_mode [default: 0]
- pointcloud_container_name [default: pointcloud_container]
- use_vector_map [default: true]
- use_pointcloud_map [default: true]
- use_low_height_cropbox [default: true]
- use_object_filter [default: true]
- objects_filter_method [default: lanelet_filter]
- use_irregular_object_detector [default: true]
- use_low_intensity_cluster_filter [default: true]
- use_image_segmentation_based_filter [default: false]
- use_empty_dynamic_object_publisher [default: false]
- use_object_validator [default: true]
- objects_validation_method [default: obstacle_pointcloud]
- use_perception_online_evaluator [default: false]
- use_perception_analytics_publisher [default: true]
- use_obstacle_segmentation_single_frame_filter
- use_obstacle_segmentation_time_series_filter
- use_traffic_light_recognition
- traffic_light_recognition/fusion_only
- traffic_light_recognition/camera_namespaces
- traffic_light_recognition/use_high_accuracy_detection
- traffic_light_recognition/high_accuracy_detection_type
- traffic_light_recognition/whole_image_detection/model_path
- traffic_light_recognition/whole_image_detection/label_path
- traffic_light_recognition/fine_detection/model_path
- traffic_light_recognition/fine_detection/label_path
- traffic_light_recognition/classification/car/model_path
- traffic_light_recognition/classification/car/label_path
- traffic_light_recognition/classification/pedestrian/model_path
- traffic_light_recognition/classification/pedestrian/label_path
- use_detection_by_tracker [default: true]
- use_radar_tracking_fusion [default: true]
- input/radar [default: /sensing/radar/detected_objects]
- use_multi_channel_tracker_merger [default: false]
- downsample_perception_common_pointcloud [default: false]
- common_downsample_voxel_size_x [default: 0.05]
- common_downsample_voxel_size_y [default: 0.05]
- common_downsample_voxel_size_z [default: 0.05]
- launch/traffic_light_recognition/traffic_light.launch.xml
-
- enable_image_decompressor [default: true]
- fusion_only
- camera_namespaces
- use_high_accuracy_detection
- high_accuracy_detection_type
- each_traffic_light_map_based_detector_param_path
- traffic_light_fine_detector_param_path
- yolox_traffic_light_detector_param_path
- car_traffic_light_classifier_param_path
- pedestrian_traffic_light_classifier_param_path
- traffic_light_roi_visualizer_param_path
- traffic_light_occlusion_predictor_param_path
- traffic_light_multi_camera_fusion_param_path
- traffic_light_arbiter_param_path
- crosswalk_traffic_light_estimator_param_path
- whole_image_detection/model_path
- whole_image_detection/label_path
- fine_detection/model_path
- fine_detection/label_path
- classification/car/model_path
- classification/car/label_path
- classification/pedestrian/model_path
- classification/pedestrian/label_path
- input/vector_map [default: /map/vector_map]
- input/route [default: /planning/mission_planning/route]
- input/cloud [default: /sensing/lidar/top/pointcloud_raw_ex]
- internal/traffic_signals [default: /perception/traffic_light_recognition/internal/traffic_signals]
- external/traffic_signals [default: /perception/traffic_light_recognition/external/traffic_signals]
- judged/traffic_signals [default: /perception/traffic_light_recognition/judged/traffic_signals]
- output/traffic_signals [default: /perception/traffic_light_recognition/traffic_signals]
Messages
Services
Plugins
Recent questions tagged tier4_perception_launch at Robotics Stack Exchange
Package Summary
Tags | No category tags. |
Version | 0.47.0 |
License | Apache License 2.0 |
Build type | AMENT_CMAKE |
Use | RECOMMENDED |
Repository Summary
Description | |
Checkout URI | https://github.com/autowarefoundation/autoware_universe.git |
VCS Type | git |
VCS Version | main |
Last Updated | 2025-08-16 |
Dev Status | UNKNOWN |
Released | UNRELEASED |
Tags | planner ros calibration self-driving-car autonomous-driving autonomous-vehicles ros2 3d-map autoware |
Contributing |
Help Wanted (-)
Good First Issues (-) Pull Requests to Review (-) |
Package Description
Additional Links
Maintainers
- Yukihiro Saito
- Yoshi Ri
- Taekjin Lee
- Masato Saeki
Authors
tier4_perception_launch
Structure
Package Dependencies
Please see <exec_depend>
in package.xml
.
Usage
You can include as follows in *.launch.xml
to use perception.launch.xml
.
Note that you should provide parameter paths as PACKAGE_param_path
. The list of parameter paths you should provide is written at the top of perception.launch.xml
.
<include file="$(find-pkg-share tier4_perception_launch)/launch/perception.launch.xml">
<!-- options for mode: camera_lidar_fusion, lidar, camera -->
<arg name="mode" value="lidar" />
<!-- Parameter files -->
<arg name="FOO_param_path" value="..."/>
<arg name="BAR_param_path" value="..."/>
...
</include>
Changelog for package tier4_perception_launch
0.47.0 (2025-08-11)
-
feat(perception_online_evaluator): add functionality to publish perception analytics info (#11089)
* feat: add functionality to calculate perception metrics for MOB in autoware_perception_online_evaluator chore: configure settings for mob metrics calculation
* feat: change implementation from one topic per metric to all metrics published in one metric for better management by metric agent refactor: rename FrameMetrics member to clarify variable meaning refactor: use array/vector instead of unorder_map for FrameMetrics for better performance chore: remap published topic name to match msg conventions
- fix: unittest error
- style(pre-commit): autofix
- refactor: replace MOB keyword with generalized expression of perception analytics
- chore: improve comment
* refactor: add a new autoware_perception_analytics_publisher_node to publish perception analytics info instead of using previous autoware_perception_online_evaluator_node chore: modify default launch setting to match the refactoring
- style(pre-commit): autofix
* fix: add initialization for [latencies_]{.title-ref} fix: use tf of objects timestamp instead of latest feat: use ConstSharedPtr to avoid repeated copy of large message in [PerceptionAnalyticsCalculator::setPredictedObjects]{.title-ref} ---------Co-authored-by: Jian Kang <<jian.kang@tier4.jp>> Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
-
fix(multi_object_tracker): add irregular objects topic (#11102)
- fix(multi_object_tracker): add irregular objects topic
- fix: change channel order
* Update launch/tier4_perception_launch/launch/object_recognition/tracking/tracking.launch.xml Co-authored-by: Taekjin LEE <<technolojin@gmail.com>>
* Update launch/tier4_perception_launch/launch/object_recognition/tracking/tracking.launch.xml Co-authored-by: Taekjin LEE <<technolojin@gmail.com>>
* Update perception/autoware_multi_object_tracker/config/input_channels.param.yaml Co-authored-by: Taekjin LEE <<technolojin@gmail.com>>
* Update launch/tier4_perception_launch/launch/object_recognition/tracking/tracking.launch.xml Co-authored-by: Taekjin LEE <<technolojin@gmail.com>>
- fix: unused channels
- fix: schema
- docs: update readme
- style(pre-commit): autofix
- fix: short name
* feat: add lidar_centerpoint_short_range input channel with default flags ---------Co-authored-by: Taekjin LEE <<technolojin@gmail.com>> Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com> Co-authored-by: Taekjin LEE <<taekjin.lee@tier4.jp>>
- chore: sync files (#11091) (Co-authored-by: github-actions <github-actions@github.com>, M. Fatih Cırıt <mfc@autoware.org>, pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>)
- fix(autoware_object_merger): add merger priority_mode (#11042)
  - fix: add merger priority_mode
  - fix: add priority mode into launch
  - fix: add class-based priority matrix
  - fix: adjust priority matrix
  - fix: add Confidence mode support
  - docs: schema update
  - fix: launch
  - fix: schema json
- feat(tier4_perception_launch): add missing remappings to launch file (#11037)
- feat(autoware_bevdet): implementation of bevdet using tensorrt (#10441)
- feat(tracking): add short range detection support and update related
File truncated at 100 lines; see the full file.
Launch files
- launch/object_recognition/detection/detection.launch.xml (usage sketch after the argument list below)
- mode
- lidar_detection_model_type
- lidar_detection_model_name
- use_short_range_detection
- lidar_short_range_detection_model_type
- lidar_short_range_detection_model_name
- use_object_filter
- objects_filter_method
- use_pointcloud_map
- use_detection_by_tracker
- use_validator
- objects_validation_method
- use_low_intensity_cluster_filter
- use_image_segmentation_based_filter
- use_multi_channel_tracker_merger
- use_radar_tracking_fusion
- use_irregular_object_detector
- irregular_object_detector_fusion_camera_ids [default: [0]]
- ml_camera_lidar_merger_priority_mode
- number_of_cameras
- node/pointcloud_container
- input/pointcloud
- input/obstacle_segmentation/pointcloud [default: /perception/obstacle_segmentation/pointcloud]
- input/camera0/image
- input/camera0/info
- input/camera0/rois
- input/camera1/image
- input/camera1/info
- input/camera1/rois
- input/camera2/image
- input/camera2/info
- input/camera2/rois
- input/camera3/image
- input/camera3/info
- input/camera3/rois
- input/camera4/image
- input/camera4/info
- input/camera4/rois
- input/camera5/image
- input/camera5/info
- input/camera5/rois
- input/camera6/image
- input/camera6/info
- input/camera6/rois
- input/camera7/image
- input/camera7/info
- input/camera7/rois
- input/camera8/image
- input/camera8/info
- input/camera8/rois
- image_topic_name
- segmentation_pointcloud_fusion_camera_ids
- input/radar
- input/tracked_objects [default: /perception/object_recognition/tracking/objects]
- output/objects [default: objects]
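As a minimal sketch (argument names are taken from the list above, values are illustrative, and the camera inputs and remaining flags are omitted), the detection pipeline could be included standalone like this:

<include file="$(find-pkg-share tier4_perception_launch)/launch/object_recognition/detection/detection.launch.xml">
  <arg name="mode" value="lidar" />
  <arg name="lidar_detection_model_type" value="centerpoint" />
  <arg name="lidar_detection_model_name" value="centerpoint" />
  <arg name="use_detection_by_tracker" value="true" />
  <arg name="input/pointcloud" value="/sensing/lidar/concatenated/pointcloud" />
</include>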
- launch/object_recognition/detection/detector/camera_bev_detector.launch.xml
- input/camera0/image
- input/camera0/info
- input/camera1/image
- input/camera1/info
- input/camera2/image
- input/camera2/info
- input/camera3/image
- input/camera3/info
- input/camera4/image
- input/camera4/info
- input/camera5/image
- input/camera5/info
- input/camera6/image
- input/camera6/info
- input/camera7/image
- input/camera7/info
- output/objects
- number_of_cameras
- data_path [default: $(env HOME)/autoware_data]
- bevdet_model_name [default: bevdet_one_lt_d]
- bevdet_model_path [default: $(var data_path)/tensorrt_bevdet]
- launch/object_recognition/detection/detector/camera_lidar_detector.launch.xml
- ns
- lidar_detection_model_type
- lidar_detection_model_name
- use_low_intensity_cluster_filter
- use_image_segmentation_based_filter
- number_of_cameras
- input/camera0/image
- input/camera0/info
- input/camera0/rois
- input/camera1/image
- input/camera1/info
- input/camera1/rois
- input/camera2/image
- input/camera2/info
- input/camera2/rois
- input/camera3/image
- input/camera3/info
- input/camera3/rois
- input/camera4/image
- input/camera4/info
- input/camera4/rois
- input/camera5/image
- input/camera5/info
- input/camera5/rois
- input/camera6/image
- input/camera6/info
- input/camera6/rois
- input/camera7/image
- input/camera7/info
- input/camera7/rois
- input/camera8/image
- input/camera8/info
- input/camera8/rois
- segmentation_pointcloud_fusion_camera_ids
- image_topic_name
- node/pointcloud_container
- input/pointcloud
- input/pointcloud_map/pointcloud
- input/obstacle_segmentation/pointcloud
- output/ml_detector/objects
- output/rule_detector/objects
- output/clustering/cluster_objects
- launch/object_recognition/detection/detector/camera_lidar_irregular_object_detector.launch.xml
- ns
- pipeline_ns
- input/pointcloud
- fusion_camera_ids [default: [0]]
- image_topic_name [default: image_raw]
- irregular_object_detector_param_path
- launch/object_recognition/detection/detector/lidar_dnn_detector.launch.xml (usage sketch after the argument list below)
- lidar_detection_model_type
- lidar_detection_model_name
- bevfusion_model_path [default: $(var data_path)/bevfusion]
- centerpoint_model_path [default: $(var data_path)/lidar_centerpoint]
- transfusion_model_path [default: $(var data_path)/lidar_transfusion]
- use_short_range_detection [default: false]
- lidar_short_range_detection_model_type
- lidar_short_range_detection_model_name
- short_range_centerpoint_model_path [default: $(var data_path)/lidar_short_range_centerpoint]
- node/pointcloud_container
- input/pointcloud
- output/objects
- output/short_range_objects
- lidar_short_range_model_param_path [default: $(find-pkg-share autoware_lidar_centerpoint)/config]
- lidar_model_param_path [default: $(find-pkg-share autoware_bevfusion)/config]
- lidar_model_param_path [default: $(find-pkg-share autoware_lidar_transfusion)/config]
- lidar_model_param_path [default: $(find-pkg-share autoware_lidar_centerpoint)/config]
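The three lidar_model_param_path defaults above appear to be conditional alternatives keyed on lidar_detection_model_type. A minimal sketch, assuming the transfusion branch (values illustrative):

<include file="$(find-pkg-share tier4_perception_launch)/launch/object_recognition/detection/detector/lidar_dnn_detector.launch.xml">
  <arg name="lidar_detection_model_type" value="transfusion" />
  <arg name="lidar_detection_model_name" value="transfusion" />
  <!-- lidar_model_param_path would then default to $(find-pkg-share autoware_lidar_transfusion)/config -->
  <arg name="input/pointcloud" value="/sensing/lidar/concatenated/pointcloud" />
  <arg name="output/objects" value="objects" />
</include>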
- launch/object_recognition/detection/detector/lidar_rule_detector.launch.xml
- ns
- node/pointcloud_container
- input/pointcloud_map/pointcloud
- input/obstacle_segmentation/pointcloud
- output/cluster_objects
- output/objects
- launch/object_recognition/detection/detector/tracker_based_detector.launch.xml
- input/clusters
- input/tracked_objects
- output/objects
- launch/object_recognition/detection/filter/object_filter.launch.xml
- objects_filter_method [default: lanelet_filter]
- input/objects
- output/objects
- launch/object_recognition/detection/filter/object_validator.launch.xml
- objects_validation_method
- input/obstacle_pointcloud
- input/objects
- output/objects
- launch/object_recognition/detection/filter/radar_filter.launch.xml
- object_velocity_splitter_param_path [default: $(var object_recognition_detection_object_velocity_splitter_radar_param_path)]
- object_range_splitter_param_path [default: $(var object_recognition_detection_object_range_splitter_radar_param_path)]
- radar_lanelet_filtering_range_param_path [default: $(find-pkg-share autoware_detected_object_validation)/config/object_lanelet_filter.param.yaml]
- input/radar
- output/objects
- launch/object_recognition/detection/merger/camera_lidar_merger.launch.xml
- object_recognition_detection_object_merger_data_association_matrix_param_path [default: $(find-pkg-share autoware_object_merger)/config/data_association_matrix.param.yaml]
- object_recognition_detection_object_merger_distance_threshold_list_path [default: $(find-pkg-share autoware_object_merger)/config/overlapped_judge.param.yaml]
- ml_camera_lidar_object_association_merger_param_path [default: $(find-pkg-share autoware_object_merger)/config/object_association_merger.param.yaml]
- lidar_detection_model_type
- use_detection_by_tracker
- use_irregular_object_detector
- use_object_filter
- objects_filter_method
- number_of_cameras
- input/camera0/image
- input/camera0/info
- input/camera0/rois
- input/camera1/image
- input/camera1/info
- input/camera1/rois
- input/camera2/image
- input/camera2/info
- input/camera2/rois
- input/camera3/image
- input/camera3/info
- input/camera3/rois
- input/camera4/image
- input/camera4/info
- input/camera4/rois
- input/camera5/image
- input/camera5/info
- input/camera5/rois
- input/camera6/image
- input/camera6/info
- input/camera6/rois
- input/camera7/image
- input/camera7/info
- input/camera7/rois
- input/camera8/image
- input/camera8/info
- input/camera8/rois
- input/lidar_ml/objects
- input/lidar_rule/objects
- input/detection_by_tracker/objects
- output/objects [default: objects]
- alpha_merger_priority_mode [default: 0]
- launch/object_recognition/detection/merger/camera_lidar_radar_merger.launch.xml
- object_recognition_detection_object_merger_data_association_matrix_param_path [default: $(find-pkg-share autoware_object_merger)/config/data_association_matrix.param.yaml]
- object_recognition_detection_object_merger_distance_threshold_list_path [default: $(find-pkg-share autoware_object_merger)/config/overlapped_judge.param.yaml]
- ml_camera_lidar_object_association_merger_param_path [default: $(find-pkg-share autoware_object_merger)/config/object_association_merger.param.yaml]
- far_object_merger_sync_queue_size [default: 20]
- lidar_detection_model_type
- use_radar_tracking_fusion
- use_detection_by_tracker
- use_irregular_object_detector
- use_object_filter
- objects_filter_method
- number_of_cameras
- input/camera0/image
- input/camera0/info
- input/camera0/rois
- input/camera1/image
- input/camera1/info
- input/camera1/rois
- input/camera2/image
- input/camera2/info
- input/camera2/rois
- input/camera3/image
- input/camera3/info
- input/camera3/rois
- input/camera4/image
- input/camera4/info
- input/camera4/rois
- input/camera5/image
- input/camera5/info
- input/camera5/rois
- input/camera6/image
- input/camera6/info
- input/camera6/rois
- input/camera7/image
- input/camera7/info
- input/camera7/rois
- input/camera8/image
- input/camera8/info
- input/camera8/rois
- input/lidar_ml/objects
- input/lidar_rule/objects
- input/radar/objects
- input/radar_far/objects
- input/detection_by_tracker/objects
- output/objects [default: objects]
- alpha_merger_priority_mode [default: 0]
- launch/object_recognition/detection/merger/lidar_merger.launch.xml
- object_recognition_detection_object_merger_data_association_matrix_param_path [default: $(find-pkg-share autoware_object_merger)/config/data_association_matrix.param.yaml]
- object_recognition_detection_object_merger_distance_threshold_list_path [default: $(find-pkg-share autoware_object_merger)/config/overlapped_judge.param.yaml]
- lidar_detection_model_type
- use_detection_by_tracker
- use_object_filter
- objects_filter_method
- input/lidar_ml/objects [default: $(var lidar_detection_model_type)/objects]
- input/lidar_rule/objects [default: clustering/objects]
- input/detection_by_tracker/objects [default: detection_by_tracker/objects]
- output/objects
- launch/object_recognition/prediction/prediction.launch.xml
- use_vector_map [default: false]
- input/objects [default: /perception/object_recognition/tracking/objects]
- launch/object_recognition/tracking/tracking.launch.xml (usage sketch after the argument list below)
- object_recognition_tracking_radar_tracked_object_sorter_param_path
- object_recognition_tracking_radar_tracked_object_lanelet_filter_param_path
- object_recognition_tracking_object_merger_data_association_matrix_param_path
- object_recognition_tracking_object_merger_node_param_path
- mode [default: lidar]
- use_radar_tracking_fusion [default: false]
- use_multi_channel_tracker_merger
- use_validator
- use_short_range_detection
- lidar_detection_model_type [default: centerpoint]
- input/merged_detection/channel [default: detected_objects]
- input/merged_detection/objects [default: /perception/object_recognition/detection/objects]
- input/lidar_dnn/channel [default: lidar_$(var lidar_detection_model_type)]
- input/lidar_dnn/objects [default: /perception/object_recognition/detection/$(var lidar_detection_model_type)/objects]
- input/lidar_dnn_validated/objects [default: /perception/object_recognition/detection/$(var lidar_detection_model_type)/validation/objects]
- input/lidar_dnn_short_range/channel [default: lidar_$(var lidar_short_range_detection_model_type)]
- input/lidar_dnn_short_range/objects [default: /perception/object_recognition/detection/$(var lidar_short_range_detection_model_type)/objects]
- input/camera_lidar_rule_detector/channel [default: camera_lidar_fusion]
- input/camera_lidar_rule_detector/objects [default: /perception/object_recognition/detection/clustering/camera_lidar_fusion/objects]
- input/irregular_object_detector/channel [default: camera_lidar_fusion_irregular]
- input/irregular_object_detector/objects [default: /perception/object_recognition/detection/irregular_object/objects]
- input/tracker_based_detector/channel [default: detection_by_tracker]
- input/tracker_based_detector/objects [default: /perception/object_recognition/detection/detection_by_tracker/objects]
- input/radar/channel [default: radar]
- input/radar/far_objects [default: /perception/object_recognition/detection/radar/far_objects]
- input/radar/objects [default: /perception/object_recognition/detection/radar/objects]
- input/radar/tracked_objects [default: /sensing/radar/tracked_objects]
- output/objects [default: $(var ns)/objects]
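Each detector feeding the tracker is declared as a channel/topic pair whose defaults derive from the model-type variables. A minimal sketch that overrides one input (the topic simply mirrors what the default pattern above resolves to for centerpoint; the required *_param_path arguments are omitted):

<include file="$(find-pkg-share tier4_perception_launch)/launch/object_recognition/tracking/tracking.launch.xml">
  <arg name="mode" value="lidar" />
  <arg name="lidar_detection_model_type" value="centerpoint" />
  <arg name="input/lidar_dnn/objects" value="/perception/object_recognition/detection/centerpoint/objects" />
</include>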
- launch/occupancy_grid_map/probabilistic_occupancy_grid_map.launch.xml (usage sketch after the argument list below)
- input/obstacle_pointcloud [default: concatenated/pointcloud]
- input/raw_pointcloud [default: no_ground/oneshot/pointcloud]
- output [default: /perception/occupancy_grid_map/map]
- use_intra_process [default: false]
- use_multithread [default: false]
- pointcloud_container_name [default: pointcloud_container]
- occupancy_grid_map_method
- occupancy_grid_map_param_path
- occupancy_grid_map_updater
- occupancy_grid_map_updater_param_path
- input_obstacle_pointcloud [default: false]
- input_obstacle_and_raw_pointcloud [default: true]
- use_pointcloud_container [default: true]
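A minimal sketch that only toggles the listed flags (the method, updater, and their parameter paths have no defaults and must also be supplied by the caller):

<include file="$(find-pkg-share tier4_perception_launch)/launch/occupancy_grid_map/probabilistic_occupancy_grid_map.launch.xml">
  <arg name="use_multithread" value="true" />
  <arg name="output" value="/perception/occupancy_grid_map/map" />
  <!-- occupancy_grid_map_method, occupancy_grid_map_updater and their *_param_path args go here -->
</include>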
- launch/perception.launch.xml (usage sketch after the argument list below)
- object_recognition_detection_euclidean_cluster_param_path
- object_recognition_detection_outlier_param_path
- object_recognition_detection_object_lanelet_filter_param_path
- object_recognition_detection_object_position_filter_param_path
- object_recognition_detection_pointcloud_map_filter_param_path
- object_recognition_prediction_map_based_prediction_param_path
- object_recognition_detection_object_merger_data_association_matrix_param_path
- ml_camera_lidar_object_association_merger_param_path
- object_recognition_detection_object_merger_distance_threshold_list_path
- object_recognition_detection_fusion_sync_param_path
- object_recognition_detection_roi_cluster_fusion_param_path
- object_recognition_detection_irregular_object_detector_param_path
- object_recognition_detection_roi_detected_object_fusion_param_path
- object_recognition_detection_pointpainting_fusion_common_param_path
- object_recognition_detection_lidar_model_param_path
- object_recognition_detection_radar_lanelet_filtering_range_param_path
- object_recognition_detection_object_velocity_splitter_radar_param_path
- object_recognition_detection_object_velocity_splitter_radar_fusion_param_path
- object_recognition_detection_object_range_splitter_radar_param_path
- object_recognition_detection_object_range_splitter_radar_fusion_param_path
- object_recognition_tracking_multi_object_tracker_data_association_matrix_param_path
- object_recognition_tracking_multi_object_tracker_input_channels_param_path
- object_recognition_tracking_multi_object_tracker_node_param_path
- object_recognition_tracking_radar_tracked_object_sorter_param_path
- object_recognition_tracking_radar_tracked_object_lanelet_filter_param_path
- obstacle_segmentation_ground_segmentation_param_path
- obstacle_segmentation_ground_segmentation_elevation_map_param_path
- object_recognition_detection_obstacle_pointcloud_based_validator_param_path
- object_recognition_detection_detection_by_tracker_param
- occupancy_grid_map_method
- occupancy_grid_map_param_path
- occupancy_grid_map_updater
- occupancy_grid_map_updater_param_path
- lidar_detection_model
- each_traffic_light_map_based_detector_param_path
- traffic_light_fine_detector_param_path
- yolox_traffic_light_detector_param_path
- car_traffic_light_classifier_param_path
- pedestrian_traffic_light_classifier_param_path
- traffic_light_roi_visualizer_param_path
- traffic_light_occlusion_predictor_param_path
- traffic_light_multi_camera_fusion_param_path
- traffic_light_arbiter_param_path
- crosswalk_traffic_light_estimator_param_path
- lidar_detection_model_type [default: $(eval "'$(var lidar_detection_model)'.split('/')[0]")]
- lidar_detection_model_name [default: $(eval "'$(var lidar_detection_model)'.split('/')[1] if '/' in '$(var lidar_detection_model)' else ''")]
- use_short_range_detection [default: false]
- lidar_short_range_detection_model_type [default: centerpoint_short_range]
- lidar_short_range_detection_model_name [default: centerpoint_short_range]
- bevfusion_model_path [default: $(var data_path)/bevfusion]
- centerpoint_model_path [default: $(var data_path)/lidar_centerpoint]
- transfusion_model_path [default: $(var data_path)/lidar_transfusion]
- short_range_centerpoint_model_path [default: $(var data_path)/lidar_short_range_centerpoint]
- pointpainting_model_path [default: $(var data_path)/image_projection_based_fusion]
- input/pointcloud [default: /sensing/lidar/concatenated/pointcloud]
- mode [default: camera_lidar_fusion]
- data_path [default: $(env HOME)/autoware_data]
- lidar_detection_model_type [default: $(var lidar_detection_model_type)]
- lidar_detection_model_name [default: $(var lidar_detection_model_name)]
- image_raw0 [default: /sensing/camera/camera0/image_rect_color]
- camera_info0 [default: /sensing/camera/camera0/camera_info]
- detection_rois0 [default: /perception/object_recognition/detection/rois0]
- image_raw1 [default: /sensing/camera/camera1/image_rect_color]
- camera_info1 [default: /sensing/camera/camera1/camera_info]
- detection_rois1 [default: /perception/object_recognition/detection/rois1]
- image_raw2 [default: /sensing/camera/camera2/image_rect_color]
- camera_info2 [default: /sensing/camera/camera2/camera_info]
- detection_rois2 [default: /perception/object_recognition/detection/rois2]
- image_raw3 [default: /sensing/camera/camera3/image_rect_color]
- camera_info3 [default: /sensing/camera/camera3/camera_info]
- detection_rois3 [default: /perception/object_recognition/detection/rois3]
- image_raw4 [default: /sensing/camera/camera4/image_rect_color]
- camera_info4 [default: /sensing/camera/camera4/camera_info]
- detection_rois4 [default: /perception/object_recognition/detection/rois4]
- image_raw5 [default: /sensing/camera/camera5/image_rect_color]
- camera_info5 [default: /sensing/camera/camera5/camera_info]
- detection_rois5 [default: /perception/object_recognition/detection/rois5]
- image_raw6 [default: /sensing/camera/camera6/image_rect_color]
- camera_info6 [default: /sensing/camera/camera6/camera_info]
- detection_rois6 [default: /perception/object_recognition/detection/rois6]
- image_raw7 [default: /sensing/camera/camera7/image_rect_color]
- camera_info7 [default: /sensing/camera/camera7/camera_info]
- detection_rois7 [default: /perception/object_recognition/detection/rois7]
- image_raw8 [default: /sensing/camera/camera8/image_rect_color]
- camera_info8 [default: /sensing/camera/camera8/camera_info]
- detection_rois8 [default: /perception/object_recognition/detection/rois8]
- image_number [default: 6]
- image_topic_name [default: image_rect_color]
- segmentation_pointcloud_fusion_camera_ids [default: [0,1,5]]
- ml_camera_lidar_merger_priority_mode [default: 0]
- pointcloud_container_name [default: pointcloud_container]
- use_vector_map [default: true]
- use_pointcloud_map [default: true]
- use_low_height_cropbox [default: true]
- use_object_filter [default: true]
- objects_filter_method [default: lanelet_filter]
- use_irregular_object_detector [default: true]
- use_low_intensity_cluster_filter [default: true]
- use_image_segmentation_based_filter [default: false]
- use_empty_dynamic_object_publisher [default: false]
- use_object_validator [default: true]
- objects_validation_method [default: obstacle_pointcloud]
- use_perception_online_evaluator [default: false]
- use_perception_analytics_publisher [default: true]
- use_obstacle_segmentation_single_frame_filter
- use_obstacle_segmentation_time_series_filter
- use_traffic_light_recognition
- traffic_light_recognition/fusion_only
- traffic_light_recognition/camera_namespaces
- traffic_light_recognition/use_high_accuracy_detection
- traffic_light_recognition/high_accuracy_detection_type
- traffic_light_recognition/whole_image_detection/model_path
- traffic_light_recognition/whole_image_detection/label_path
- traffic_light_recognition/fine_detection/model_path
- traffic_light_recognition/fine_detection/label_path
- traffic_light_recognition/classification/car/model_path
- traffic_light_recognition/classification/car/label_path
- traffic_light_recognition/classification/pedestrian/model_path
- traffic_light_recognition/classification/pedestrian/label_path
- use_detection_by_tracker [default: true]
- use_radar_tracking_fusion [default: true]
- input/radar [default: /sensing/radar/detected_objects]
- use_multi_channel_tracker_merger [default: false]
- downsample_perception_common_pointcloud [default: false]
- common_downsample_voxel_size_x [default: 0.05]
- common_downsample_voxel_size_y [default: 0.05]
- common_downsample_voxel_size_z [default: 0.05]
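Note how the $(eval ...) defaults above split lidar_detection_model on "/": the part before the slash becomes lidar_detection_model_type and the part after it, if any, becomes lidar_detection_model_name. A sketch (the model name is illustrative):

<include file="$(find-pkg-share tier4_perception_launch)/launch/perception.launch.xml">
  <!-- resolves to lidar_detection_model_type=centerpoint, lidar_detection_model_name=centerpoint_tiny -->
  <arg name="lidar_detection_model" value="centerpoint/centerpoint_tiny" />
  <arg name="mode" value="camera_lidar_fusion" />
  <!-- the required PACKAGE_param_path arguments are omitted -->
</include>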
- launch/traffic_light_recognition/traffic_light.launch.xml (usage sketch after the argument list below)
- enable_image_decompressor [default: true]
- fusion_only
- camera_namespaces
- use_high_accuracy_detection
- high_accuracy_detection_type
- each_traffic_light_map_based_detector_param_path
- traffic_light_fine_detector_param_path
- yolox_traffic_light_detector_param_path
- car_traffic_light_classifier_param_path
- pedestrian_traffic_light_classifier_param_path
- traffic_light_roi_visualizer_param_path
- traffic_light_occlusion_predictor_param_path
- traffic_light_multi_camera_fusion_param_path
- traffic_light_arbiter_param_path
- crosswalk_traffic_light_estimator_param_path
- whole_image_detection/model_path
- whole_image_detection/label_path
- fine_detection/model_path
- fine_detection/label_path
- classification/car/model_path
- classification/car/label_path
- classification/pedestrian/model_path
- classification/pedestrian/label_path
- input/vector_map [default: /map/vector_map]
- input/route [default: /planning/mission_planning/route]
- input/cloud [default: /sensing/lidar/top/pointcloud_raw_ex]
- internal/traffic_signals [default: /perception/traffic_light_recognition/internal/traffic_signals]
- external/traffic_signals [default: /perception/traffic_light_recognition/external/traffic_signals]
- judged/traffic_signals [default: /perception/traffic_light_recognition/judged/traffic_signals]
- output/traffic_signals [default: /perception/traffic_light_recognition/traffic_signals]
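A minimal sketch running recognition on two camera namespaces (the namespace list format and values are illustrative, and the detector/classifier parameter paths must also be supplied):

<include file="$(find-pkg-share tier4_perception_launch)/launch/traffic_light_recognition/traffic_light.launch.xml">
  <arg name="fusion_only" value="false" />
  <arg name="camera_namespaces" value="[camera6, camera7]" />
  <arg name="use_high_accuracy_detection" value="true" />
  <arg name="high_accuracy_detection_type" value="fine_detection" />
</include>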
Package Summary
Tags | No category tags. |
Version | 0.47.0 |
License | Apache License 2.0 |
Build type | AMENT_CMAKE |
Use | RECOMMENDED |
Repository Summary
Description | |
Checkout URI | https://github.com/autowarefoundation/autoware_universe.git |
VCS Type | git |
VCS Version | main |
Last Updated | 2025-08-16 |
Dev Status | UNKNOWN |
Released | UNRELEASED |
Tags | planner ros calibration self-driving-car autonomous-driving autonomous-vehicles ros2 3d-map autoware |
Contributing |
Help Wanted (-)
Good First Issues (-) Pull Requests to Review (-) |
Package Description
Additional Links
Maintainers
- Yukihiro Saito
- Yoshi Ri
- Taekjin Lee
- Masato Saeki
Authors
tier4_perception_launch
Structure
Package Dependencies
Please see <exec_depend>
in package.xml
.
Usage
You can include as follows in *.launch.xml
to use perception.launch.xml
.
Note that you should provide parameter paths as PACKAGE_param_path
. The list of parameter paths you should provide is written at the top of perception.launch.xml
.
<include file="$(find-pkg-share tier4_perception_launch)/launch/perception.launch.xml">
<!-- options for mode: camera_lidar_fusion, lidar, camera -->
<arg name="mode" value="lidar" />
<!-- Parameter files -->
<arg name="FOO_param_path" value="..."/>
<arg name="BAR_param_path" value="..."/>
...
</include>
Changelog for package tier4_perception_launch
0.47.0 (2025-08-11)
-
feat(perception_online_evaluator): add functionality to publish perception analytics info (#11089)
* feat: add functionality to calculate perception metrics for MOB in autoware_perception_online_evaluator chore: configure settings for mob metrics calculation
* feat: change implementation from one topic per metric to all metrics published in one metric for better management by metric agent refactor: rename FrameMetrics member to clarify variable meaning refactor: use array/vector instead of unorder_map for FrameMetrics for better performance chore: remap published topic name to match msg conventions
- fix: unittest error
- style(pre-commit): autofix
- refactor: replace MOB keyword with generalized expression of perception analytics
- chore: improve comment
* refactor: add a new autoware_perception_analytics_publisher_node to publish perception analytics info instead of using previous autoware_perception_online_evaluator_node chore: modify default launch setting to match the refactoring
- style(pre-commit): autofix
* fix: add initialization for [latencies_]{.title-ref} fix: use tf of objects timestamp instead of latest feat: use ConstSharedPtr to avoid repeated copy of large message in [PerceptionAnalyticsCalculator::setPredictedObjects]{.title-ref} ---------Co-authored-by: Jian Kang <<jian.kang@tier4.jp>> Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
-
fix(multi_object_tracker): add irregular objects topic (#11102)
- fix(multi_object_tracker): add irregular objects topic
- fix: change channel order
* Update launch/tier4_perception_launch/launch/object_recognition/tracking/tracking.launch.xml Co-authored-by: Taekjin LEE <<technolojin@gmail.com>>
* Update launch/tier4_perception_launch/launch/object_recognition/tracking/tracking.launch.xml Co-authored-by: Taekjin LEE <<technolojin@gmail.com>>
* Update perception/autoware_multi_object_tracker/config/input_channels.param.yaml Co-authored-by: Taekjin LEE <<technolojin@gmail.com>>
* Update launch/tier4_perception_launch/launch/object_recognition/tracking/tracking.launch.xml Co-authored-by: Taekjin LEE <<technolojin@gmail.com>>
- fix: unused channels
- fix: schema
- docs: update readme
- style(pre-commit): autofix
- fix: short name
* feat: add lidar_centerpoint_short_range input channel with default flags ---------Co-authored-by: Taekjin LEE <<technolojin@gmail.com>> Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com> Co-authored-by: Taekjin LEE <<taekjin.lee@tier4.jp>>
-
chore: sync files (#11091) Co-authored-by: github-actions <<github-actions@github.com>> Co-authored-by: M. Fatih Cırıt <<mfc@autoware.org>> Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
-
fix(autoware_object_merger): add merger priority_mode (#11042)
* fix: add merger priority_mode fix: add priority mode into launch fix: add class based priority matrix fix: adjust priority matrix
- fix: add Confidence mode support
- docs: schema update
- fix: launch
* fix: schema json ---------
-
feat(tier4_perception_launch): add missing remappings to launch file (#11037)
-
feat(autoware_bevdet): implementation of bevdet using tensorrt (#10441)
-
feat(tracking): add short range detection support and update related
File truncated at 100 lines see the full file
Package Dependencies
System Dependencies
Dependant Packages
Launch files
- launch/object_recognition/detection/detection.launch.xml
-
- mode
- lidar_detection_model_type
- lidar_detection_model_name
- use_short_range_detection
- lidar_short_range_detection_model_type
- lidar_short_range_detection_model_name
- use_object_filter
- objects_filter_method
- use_pointcloud_map
- use_detection_by_tracker
- use_validator
- objects_validation_method
- use_low_intensity_cluster_filter
- use_image_segmentation_based_filter
- use_multi_channel_tracker_merger
- use_radar_tracking_fusion
- use_irregular_object_detector
- irregular_object_detector_fusion_camera_ids [default: [0]]
- ml_camera_lidar_merger_priority_mode
- number_of_cameras
- node/pointcloud_container
- input/pointcloud
- input/obstacle_segmentation/pointcloud [default: /perception/obstacle_segmentation/pointcloud]
- input/camera0/image
- input/camera0/info
- input/camera0/rois
- input/camera1/image
- input/camera1/info
- input/camera1/rois
- input/camera2/image
- input/camera2/info
- input/camera2/rois
- input/camera3/image
- input/camera3/info
- input/camera3/rois
- input/camera4/image
- input/camera4/info
- input/camera4/rois
- input/camera5/image
- input/camera5/info
- input/camera5/rois
- input/camera6/image
- input/camera6/info
- input/camera6/rois
- input/camera7/image
- input/camera7/info
- input/camera7/rois
- input/camera8/image
- input/camera8/info
- input/camera8/rois
- image_topic_name
- segmentation_pointcloud_fusion_camera_ids
- input/radar
- input/tracked_objects [default: /perception/object_recognition/tracking/objects]
- output/objects [default: objects]
- launch/object_recognition/detection/detector/camera_bev_detector.launch.xml
-
- input/camera0/image
- input/camera0/info
- input/camera1/image
- input/camera1/info
- input/camera2/image
- input/camera2/info
- input/camera3/image
- input/camera3/info
- input/camera4/image
- input/camera4/info
- input/camera5/image
- input/camera5/info
- input/camera6/image
- input/camera6/info
- input/camera7/image
- input/camera7/info
- output/objects
- number_of_cameras
- data_path [default: $(env HOME)/autoware_data]
- bevdet_model_name [default: bevdet_one_lt_d]
- bevdet_model_path [default: $(var data_path)/tensorrt_bevdet]
- launch/object_recognition/detection/detector/camera_lidar_detector.launch.xml
-
- ns
- lidar_detection_model_type
- lidar_detection_model_name
- use_low_intensity_cluster_filter
- use_image_segmentation_based_filter
- number_of_cameras
- input/camera0/image
- input/camera0/info
- input/camera0/rois
- input/camera1/image
- input/camera1/info
- input/camera1/rois
- input/camera2/image
- input/camera2/info
- input/camera2/rois
- input/camera3/image
- input/camera3/info
- input/camera3/rois
- input/camera4/image
- input/camera4/info
- input/camera4/rois
- input/camera5/image
- input/camera5/info
- input/camera5/rois
- input/camera6/image
- input/camera6/info
- input/camera6/rois
- input/camera7/image
- input/camera7/info
- input/camera7/rois
- input/camera8/image
- input/camera8/info
- input/camera8/rois
- segmentation_pointcloud_fusion_camera_ids
- image_topic_name
- node/pointcloud_container
- input/pointcloud
- input/pointcloud_map/pointcloud
- input/obstacle_segmentation/pointcloud
- output/ml_detector/objects
- output/rule_detector/objects
- output/clustering/cluster_objects
- launch/object_recognition/detection/detector/camera_lidar_irregular_object_detector.launch.xml
-
- ns
- pipeline_ns
- input/pointcloud
- fusion_camera_ids [default: [0]]
- image_topic_name [default: image_raw]
- irregular_object_detector_param_path
- launch/object_recognition/detection/detector/lidar_dnn_detector.launch.xml
-
- lidar_detection_model_type
- lidar_detection_model_name
- bevfusion_model_path [default: $(var data_path)/bevfusion]
- centerpoint_model_path [default: $(var data_path)/lidar_centerpoint]
- transfusion_model_path [default: $(var data_path)/lidar_transfusion]
- use_short_range_detection [default: false]
- lidar_short_range_detection_model_type
- lidar_short_range_detection_model_name
- short_range_centerpoint_model_path [default: $(var data_path)/lidar_short_range_centerpoint]
- node/pointcloud_container
- input/pointcloud
- output/objects
- output/short_range_objects
- lidar_short_range_model_param_path [default: $(find-pkg-share autoware_lidar_centerpoint)/config]
- lidar_model_param_path [default: $(find-pkg-share autoware_bevfusion)/config]
- lidar_model_param_path [default: $(find-pkg-share autoware_lidar_transfusion)/config]
- lidar_model_param_path [default: $(find-pkg-share autoware_lidar_centerpoint)/config]
- launch/object_recognition/detection/detector/lidar_rule_detector.launch.xml
-
- ns
- node/pointcloud_container
- input/pointcloud_map/pointcloud
- input/obstacle_segmentation/pointcloud
- output/cluster_objects
- output/objects
- launch/object_recognition/detection/detector/tracker_based_detector.launch.xml
-
- input/clusters
- input/tracked_objects
- output/objects
- launch/object_recognition/detection/filter/object_filter.launch.xml
-
- objects_filter_method [default: lanelet_filter]
- input/objects
- output/objects
- launch/object_recognition/detection/filter/object_validator.launch.xml
-
- objects_validation_method
- input/obstacle_pointcloud
- input/objects
- output/objects
- launch/object_recognition/detection/filter/radar_filter.launch.xml
-
- object_velocity_splitter_param_path [default: $(var object_recognition_detection_object_velocity_splitter_radar_param_path)]
- object_range_splitter_param_path [default: $(var object_recognition_detection_object_range_splitter_radar_param_path)]
- radar_lanelet_filtering_range_param_path [default: $(find-pkg-share autoware_detected_object_validation)/config/object_lanelet_filter.param.yaml]
- input/radar
- output/objects
- launch/object_recognition/detection/merger/camera_lidar_merger.launch.xml
-
- object_recognition_detection_object_merger_data_association_matrix_param_path [default: $(find-pkg-share autoware_object_merger)/config/data_association_matrix.param.yaml]
- object_recognition_detection_object_merger_distance_threshold_list_path [default: $(find-pkg-share autoware_object_merger)/config/overlapped_judge.param.yaml]
- ml_camera_lidar_object_association_merger_param_path [default: $(find-pkg-share autoware_object_merger)/config/object_association_merger.param.yaml]
- lidar_detection_model_type
- use_detection_by_tracker
- use_irregular_object_detector
- use_object_filter
- objects_filter_method
- number_of_cameras
- input/camera0/image
- input/camera0/info
- input/camera0/rois
- input/camera1/image
- input/camera1/info
- input/camera1/rois
- input/camera2/image
- input/camera2/info
- input/camera2/rois
- input/camera3/image
- input/camera3/info
- input/camera3/rois
- input/camera4/image
- input/camera4/info
- input/camera4/rois
- input/camera5/image
- input/camera5/info
- input/camera5/rois
- input/camera6/image
- input/camera6/info
- input/camera6/rois
- input/camera7/image
- input/camera7/info
- input/camera7/rois
- input/camera8/image
- input/camera8/info
- input/camera8/rois
- input/lidar_ml/objects
- input/lidar_rule/objects
- input/detection_by_tracker/objects
- output/objects [default: objects]
- alpha_merger_priority_mode [default: 0]
- launch/object_recognition/detection/merger/camera_lidar_radar_merger.launch.xml
-
- object_recognition_detection_object_merger_data_association_matrix_param_path [default: $(find-pkg-share autoware_object_merger)/config/data_association_matrix.param.yaml]
- object_recognition_detection_object_merger_distance_threshold_list_path [default: $(find-pkg-share autoware_object_merger)/config/overlapped_judge.param.yaml]
- ml_camera_lidar_object_association_merger_param_path [default: $(find-pkg-share autoware_object_merger)/config/object_association_merger.param.yaml]
- far_object_merger_sync_queue_size [default: 20]
- lidar_detection_model_type
- use_radar_tracking_fusion
- use_detection_by_tracker
- use_irregular_object_detector
- use_object_filter
- objects_filter_method
- number_of_cameras
- input/camera0/image
- input/camera0/info
- input/camera0/rois
- input/camera1/image
- input/camera1/info
- input/camera1/rois
- input/camera2/image
- input/camera2/info
- input/camera2/rois
- input/camera3/image
- input/camera3/info
- input/camera3/rois
- input/camera4/image
- input/camera4/info
- input/camera4/rois
- input/camera5/image
- input/camera5/info
- input/camera5/rois
- input/camera6/image
- input/camera6/info
- input/camera6/rois
- input/camera7/image
- input/camera7/info
- input/camera7/rois
- input/camera8/image
- input/camera8/info
- input/camera8/rois
- input/lidar_ml/objects
- input/lidar_rule/objects
- input/radar/objects
- input/radar_far/objects
- input/detection_by_tracker/objects
- output/objects [default: objects]
- alpha_merger_priority_mode [default: 0]
- launch/object_recognition/detection/merger/lidar_merger.launch.xml
-
- object_recognition_detection_object_merger_data_association_matrix_param_path [default: $(find-pkg-share autoware_object_merger)/config/data_association_matrix.param.yaml]
- object_recognition_detection_object_merger_distance_threshold_list_path [default: $(find-pkg-share autoware_object_merger)/config/overlapped_judge.param.yaml]
- lidar_detection_model_type
- use_detection_by_tracker
- use_object_filter
- objects_filter_method
- input/lidar_ml/objects [default: $(var lidar_detection_model_type)/objects]
- input/lidar_rule/objects [default: clustering/objects]
- input/detection_by_tracker/objects [default: detection_by_tracker/objects]
- output/objects
- launch/object_recognition/prediction/prediction.launch.xml
-
- use_vector_map [default: false]
- input/objects [default: /perception/object_recognition/tracking/objects]
- launch/object_recognition/tracking/tracking.launch.xml
-
- object_recognition_tracking_radar_tracked_object_sorter_param_path
- object_recognition_tracking_radar_tracked_object_lanelet_filter_param_path
- object_recognition_tracking_object_merger_data_association_matrix_param_path
- object_recognition_tracking_object_merger_node_param_path
- mode [default: lidar]
- use_radar_tracking_fusion [default: false]
- use_multi_channel_tracker_merger
- use_validator
- use_short_range_detection
- lidar_detection_model_type [default: centerpoint]
- input/merged_detection/channel [default: detected_objects]
- input/merged_detection/objects [default: /perception/object_recognition/detection/objects]
- input/lidar_dnn/channel [default: lidar_$(var lidar_detection_model_type)]
- input/lidar_dnn/objects [default: /perception/object_recognition/detection/$(var lidar_detection_model_type)/objects]
- input/lidar_dnn_validated/objects [default: /perception/object_recognition/detection/$(var lidar_detection_model_type)/validation/objects]
- input/lidar_dnn_short_range/channel [default: lidar_$(var lidar_short_range_detection_model_type)]
- input/lidar_dnn_short_range/objects [default: /perception/object_recognition/detection/$(var lidar_short_range_detection_model_type)/objects]
- input/camera_lidar_rule_detector/channel [default: camera_lidar_fusion]
- input/camera_lidar_rule_detector/objects [default: /perception/object_recognition/detection/clustering/camera_lidar_fusion/objects]
- input/irregular_object_detector/channel [default: camera_lidar_fusion_irregular]
- input/irregular_object_detector/objects [default: /perception/object_recognition/detection/irregular_object/objects]
- input/tracker_based_detector/channel [default: detection_by_tracker]
- input/tracker_based_detector/objects [default: /perception/object_recognition/detection/detection_by_tracker/objects]
- input/radar/channel [default: radar]
- input/radar/far_objects [default: /perception/object_recognition/detection/radar/far_objects]
- input/radar/objects [default: /perception/object_recognition/detection/radar/objects]
- input/radar/tracked_objects [default: /sensing/radar/tracked_objects]
- output/objects [default: $(var ns)/objects]
- launch/occupancy_grid_map/probabilistic_occupancy_grid_map.launch.xml
-
- input/obstacle_pointcloud [default: concatenated/pointcloud]
- input/raw_pointcloud [default: no_ground/oneshot/pointcloud]
- output [default: /perception/occupancy_grid_map/map]
- use_intra_process [default: false]
- use_multithread [default: false]
- pointcloud_container_name [default: pointcloud_container]
- occupancy_grid_map_method
- occupancy_grid_map_param_path
- occupancy_grid_map_updater
- occupancy_grid_map_updater_param_path
- input_obstacle_pointcloud [default: false]
- input_obstacle_and_raw_pointcloud [default: true]
- use_pointcloud_container [default: true]
- launch/perception.launch.xml
-
- object_recognition_detection_euclidean_cluster_param_path
- object_recognition_detection_outlier_param_path
- object_recognition_detection_object_lanelet_filter_param_path
- object_recognition_detection_object_position_filter_param_path
- object_recognition_detection_pointcloud_map_filter_param_path
- object_recognition_prediction_map_based_prediction_param_path
- object_recognition_detection_object_merger_data_association_matrix_param_path
- ml_camera_lidar_object_association_merger_param_path
- object_recognition_detection_object_merger_distance_threshold_list_path
- object_recognition_detection_fusion_sync_param_path
- object_recognition_detection_roi_cluster_fusion_param_path
- object_recognition_detection_irregular_object_detector_param_path
- object_recognition_detection_roi_detected_object_fusion_param_path
- object_recognition_detection_pointpainting_fusion_common_param_path
- object_recognition_detection_lidar_model_param_path
- object_recognition_detection_radar_lanelet_filtering_range_param_path
- object_recognition_detection_object_velocity_splitter_radar_param_path
- object_recognition_detection_object_velocity_splitter_radar_fusion_param_path
- object_recognition_detection_object_range_splitter_radar_param_path
- object_recognition_detection_object_range_splitter_radar_fusion_param_path
- object_recognition_tracking_multi_object_tracker_data_association_matrix_param_path
- object_recognition_tracking_multi_object_tracker_input_channels_param_path
- object_recognition_tracking_multi_object_tracker_node_param_path
- object_recognition_tracking_radar_tracked_object_sorter_param_path
- object_recognition_tracking_radar_tracked_object_lanelet_filter_param_path
- obstacle_segmentation_ground_segmentation_param_path
- obstacle_segmentation_ground_segmentation_elevation_map_param_path
- object_recognition_detection_obstacle_pointcloud_based_validator_param_path
- object_recognition_detection_detection_by_tracker_param
- occupancy_grid_map_method
- occupancy_grid_map_param_path
- occupancy_grid_map_updater
- occupancy_grid_map_updater_param_path
- lidar_detection_model
- each_traffic_light_map_based_detector_param_path
- traffic_light_fine_detector_param_path
- yolox_traffic_light_detector_param_path
- car_traffic_light_classifier_param_path
- pedestrian_traffic_light_classifier_param_path
- traffic_light_roi_visualizer_param_path
- traffic_light_occlusion_predictor_param_path
- traffic_light_multi_camera_fusion_param_path
- traffic_light_arbiter_param_path
- crosswalk_traffic_light_estimator_param_path
- lidar_detection_model_type [default: $(eval "'$(var lidar_detection_model)'.split('/')[0]")]
- lidar_detection_model_name [default: $(eval "'$(var lidar_detection_model)'.split('/')[1] if '/' in '$(var lidar_detection_model)' else ''")]
- use_short_range_detection [default: false]
- lidar_short_range_detection_model_type [default: centerpoint_short_range]
- lidar_short_range_detection_model_name [default: centerpoint_short_range]
- bevfusion_model_path [default: $(var data_path)/bevfusion]
- centerpoint_model_path [default: $(var data_path)/lidar_centerpoint]
- transfusion_model_path [default: $(var data_path)/lidar_transfusion]
- short_range_centerpoint_model_path [default: $(var data_path)/lidar_short_range_centerpoint]
- pointpainting_model_path [default: $(var data_path)/image_projection_based_fusion]
- input/pointcloud [default: /sensing/lidar/concatenated/pointcloud]
- mode [default: camera_lidar_fusion]
- data_path [default: $(env HOME)/autoware_data]
- lidar_detection_model_type [default: $(var lidar_detection_model_type)]
- lidar_detection_model_name [default: $(var lidar_detection_model_name)]
- image_raw0 [default: /sensing/camera/camera0/image_rect_color]
- camera_info0 [default: /sensing/camera/camera0/camera_info]
- detection_rois0 [default: /perception/object_recognition/detection/rois0]
- image_raw1 [default: /sensing/camera/camera1/image_rect_color]
- camera_info1 [default: /sensing/camera/camera1/camera_info]
- detection_rois1 [default: /perception/object_recognition/detection/rois1]
- image_raw2 [default: /sensing/camera/camera2/image_rect_color]
- camera_info2 [default: /sensing/camera/camera2/camera_info]
- detection_rois2 [default: /perception/object_recognition/detection/rois2]
- image_raw3 [default: /sensing/camera/camera3/image_rect_color]
- camera_info3 [default: /sensing/camera/camera3/camera_info]
- detection_rois3 [default: /perception/object_recognition/detection/rois3]
- image_raw4 [default: /sensing/camera/camera4/image_rect_color]
- camera_info4 [default: /sensing/camera/camera4/camera_info]
- detection_rois4 [default: /perception/object_recognition/detection/rois4]
- image_raw5 [default: /sensing/camera/camera5/image_rect_color]
- camera_info5 [default: /sensing/camera/camera5/camera_info]
- detection_rois5 [default: /perception/object_recognition/detection/rois5]
- image_raw6 [default: /sensing/camera/camera6/image_rect_color]
- camera_info6 [default: /sensing/camera/camera6/camera_info]
- detection_rois6 [default: /perception/object_recognition/detection/rois6]
- image_raw7 [default: /sensing/camera/camera7/image_rect_color]
- camera_info7 [default: /sensing/camera/camera7/camera_info]
- detection_rois7 [default: /perception/object_recognition/detection/rois7]
- image_raw8 [default: /sensing/camera/camera8/image_rect_color]
- camera_info8 [default: /sensing/camera/camera8/camera_info]
- detection_rois8 [default: /perception/object_recognition/detection/rois8]
- image_number [default: 6]
- image_topic_name [default: image_rect_color]
- segmentation_pointcloud_fusion_camera_ids [default: [0,1,5]]
- ml_camera_lidar_merger_priority_mode [default: 0]
- pointcloud_container_name [default: pointcloud_container]
- use_vector_map [default: true]
- use_pointcloud_map [default: true]
- use_low_height_cropbox [default: true]
- use_object_filter [default: true]
- objects_filter_method [default: lanelet_filter]
- use_irregular_object_detector [default: true]
- use_low_intensity_cluster_filter [default: true]
- use_image_segmentation_based_filter [default: false]
- use_empty_dynamic_object_publisher [default: false]
- use_object_validator [default: true]
- objects_validation_method [default: obstacle_pointcloud]
- use_perception_online_evaluator [default: false]
- use_perception_analytics_publisher [default: true]
- use_obstacle_segmentation_single_frame_filter
- use_obstacle_segmentation_time_series_filter
- use_traffic_light_recognition
- traffic_light_recognition/fusion_only
- traffic_light_recognition/camera_namespaces
- traffic_light_recognition/use_high_accuracy_detection
- traffic_light_recognition/high_accuracy_detection_type
- traffic_light_recognition/whole_image_detection/model_path
- traffic_light_recognition/whole_image_detection/label_path
- traffic_light_recognition/fine_detection/model_path
- traffic_light_recognition/fine_detection/label_path
- traffic_light_recognition/classification/car/model_path
- traffic_light_recognition/classification/car/label_path
- traffic_light_recognition/classification/pedestrian/model_path
- traffic_light_recognition/classification/pedestrian/label_path
- use_detection_by_tracker [default: true]
- use_radar_tracking_fusion [default: true]
- input/radar [default: /sensing/radar/detected_objects]
- use_multi_channel_tracker_merger [default: false]
- downsample_perception_common_pointcloud [default: false]
- common_downsample_voxel_size_x [default: 0.05]
- common_downsample_voxel_size_y [default: 0.05]
- common_downsample_voxel_size_z [default: 0.05]
- launch/traffic_light_recognition/traffic_light.launch.xml
-
- enable_image_decompressor [default: true]
- fusion_only
- camera_namespaces
- use_high_accuracy_detection
- high_accuracy_detection_type
- each_traffic_light_map_based_detector_param_path
- traffic_light_fine_detector_param_path
- yolox_traffic_light_detector_param_path
- car_traffic_light_classifier_param_path
- pedestrian_traffic_light_classifier_param_path
- traffic_light_roi_visualizer_param_path
- traffic_light_occlusion_predictor_param_path
- traffic_light_multi_camera_fusion_param_path
- traffic_light_arbiter_param_path
- crosswalk_traffic_light_estimator_param_path
- whole_image_detection/model_path
- whole_image_detection/label_path
- fine_detection/model_path
- fine_detection/label_path
- classification/car/model_path
- classification/car/label_path
- classification/pedestrian/model_path
- classification/pedestrian/label_path
- input/vector_map [default: /map/vector_map]
- input/route [default: /planning/mission_planning/route]
- input/cloud [default: /sensing/lidar/top/pointcloud_raw_ex]
- internal/traffic_signals [default: /perception/traffic_light_recognition/internal/traffic_signals]
- external/traffic_signals [default: /perception/traffic_light_recognition/external/traffic_signals]
- judged/traffic_signals [default: /perception/traffic_light_recognition/judged/traffic_signals]
- output/traffic_signals [default: /perception/traffic_light_recognition/traffic_signals]
Messages
Services
Plugins
Recent questions tagged tier4_perception_launch at Robotics Stack Exchange
Package Summary
Tags | No category tags. |
Version | 0.47.0 |
License | Apache License 2.0 |
Build type | AMENT_CMAKE |
Use | RECOMMENDED |
Repository Summary
Description | |
Checkout URI | https://github.com/autowarefoundation/autoware_universe.git |
VCS Type | git |
VCS Version | main |
Last Updated | 2025-08-16 |
Dev Status | UNKNOWN |
Released | UNRELEASED |
Tags | planner ros calibration self-driving-car autonomous-driving autonomous-vehicles ros2 3d-map autoware |
Contributing |
Help Wanted (-)
Good First Issues (-) Pull Requests to Review (-) |
Package Description
Additional Links
Maintainers
- Yukihiro Saito
- Yoshi Ri
- Taekjin Lee
- Masato Saeki
Authors
tier4_perception_launch
Structure
Package Dependencies
Please see <exec_depend>
in package.xml
.
Usage
You can include as follows in *.launch.xml
to use perception.launch.xml
.
Note that you should provide parameter paths as PACKAGE_param_path
. The list of parameter paths you should provide is written at the top of perception.launch.xml
.
<include file="$(find-pkg-share tier4_perception_launch)/launch/perception.launch.xml">
<!-- options for mode: camera_lidar_fusion, lidar, camera -->
<arg name="mode" value="lidar" />
<!-- Parameter files -->
<arg name="FOO_param_path" value="..."/>
<arg name="BAR_param_path" value="..."/>
...
</include>
Changelog for package tier4_perception_launch
0.47.0 (2025-08-11)
-
feat(perception_online_evaluator): add functionality to publish perception analytics info (#11089)
* feat: add functionality to calculate perception metrics for MOB in autoware_perception_online_evaluator chore: configure settings for mob metrics calculation
* feat: change implementation from one topic per metric to all metrics published in one metric for better management by metric agent refactor: rename FrameMetrics member to clarify variable meaning refactor: use array/vector instead of unorder_map for FrameMetrics for better performance chore: remap published topic name to match msg conventions
- fix: unittest error
- style(pre-commit): autofix
- refactor: replace MOB keyword with generalized expression of perception analytics
- chore: improve comment
* refactor: add a new autoware_perception_analytics_publisher_node to publish perception analytics info instead of using previous autoware_perception_online_evaluator_node chore: modify default launch setting to match the refactoring
- style(pre-commit): autofix
* fix: add initialization for [latencies_]{.title-ref} fix: use tf of objects timestamp instead of latest feat: use ConstSharedPtr to avoid repeated copy of large message in [PerceptionAnalyticsCalculator::setPredictedObjects]{.title-ref} ---------Co-authored-by: Jian Kang <<jian.kang@tier4.jp>> Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
-
fix(multi_object_tracker): add irregular objects topic (#11102)
- fix(multi_object_tracker): add irregular objects topic
- fix: change channel order
* Update launch/tier4_perception_launch/launch/object_recognition/tracking/tracking.launch.xml Co-authored-by: Taekjin LEE <<technolojin@gmail.com>>
* Update launch/tier4_perception_launch/launch/object_recognition/tracking/tracking.launch.xml Co-authored-by: Taekjin LEE <<technolojin@gmail.com>>
* Update perception/autoware_multi_object_tracker/config/input_channels.param.yaml Co-authored-by: Taekjin LEE <<technolojin@gmail.com>>
* Update launch/tier4_perception_launch/launch/object_recognition/tracking/tracking.launch.xml Co-authored-by: Taekjin LEE <<technolojin@gmail.com>>
- fix: unused channels
- fix: schema
- docs: update readme
- style(pre-commit): autofix
- fix: short name
* feat: add lidar_centerpoint_short_range input channel with default flags ---------Co-authored-by: Taekjin LEE <<technolojin@gmail.com>> Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com> Co-authored-by: Taekjin LEE <<taekjin.lee@tier4.jp>>
-
chore: sync files (#11091) Co-authored-by: github-actions <<github-actions@github.com>> Co-authored-by: M. Fatih Cırıt <<mfc@autoware.org>> Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
-
fix(autoware_object_merger): add merger priority_mode (#11042)
* fix: add merger priority_mode fix: add priority mode into launch fix: add class based priority matrix fix: adjust priority matrix
- fix: add Confidence mode support
- docs: schema update
- fix: launch
* fix: schema json ---------
-
feat(tier4_perception_launch): add missing remappings to launch file (#11037)
-
feat(autoware_bevdet): implementation of bevdet using tensorrt (#10441)
-
feat(tracking): add short range detection support and update related
File truncated at 100 lines see the full file
Package Dependencies
System Dependencies
Dependant Packages
Launch files
- launch/object_recognition/detection/detection.launch.xml
-
- mode
- lidar_detection_model_type
- lidar_detection_model_name
- use_short_range_detection
- lidar_short_range_detection_model_type
- lidar_short_range_detection_model_name
- use_object_filter
- objects_filter_method
- use_pointcloud_map
- use_detection_by_tracker
- use_validator
- objects_validation_method
- use_low_intensity_cluster_filter
- use_image_segmentation_based_filter
- use_multi_channel_tracker_merger
- use_radar_tracking_fusion
- use_irregular_object_detector
- irregular_object_detector_fusion_camera_ids [default: [0]]
- ml_camera_lidar_merger_priority_mode
- number_of_cameras
- node/pointcloud_container
- input/pointcloud
- input/obstacle_segmentation/pointcloud [default: /perception/obstacle_segmentation/pointcloud]
- input/camera0/image
- input/camera0/info
- input/camera0/rois
- input/camera1/image
- input/camera1/info
- input/camera1/rois
- input/camera2/image
- input/camera2/info
- input/camera2/rois
- input/camera3/image
- input/camera3/info
- input/camera3/rois
- input/camera4/image
- input/camera4/info
- input/camera4/rois
- input/camera5/image
- input/camera5/info
- input/camera5/rois
- input/camera6/image
- input/camera6/info
- input/camera6/rois
- input/camera7/image
- input/camera7/info
- input/camera7/rois
- input/camera8/image
- input/camera8/info
- input/camera8/rois
- image_topic_name
- segmentation_pointcloud_fusion_camera_ids
- input/radar
- input/tracked_objects [default: /perception/object_recognition/tracking/objects]
- output/objects [default: objects]
- launch/object_recognition/detection/detector/camera_bev_detector.launch.xml
-
- input/camera0/image
- input/camera0/info
- input/camera1/image
- input/camera1/info
- input/camera2/image
- input/camera2/info
- input/camera3/image
- input/camera3/info
- input/camera4/image
- input/camera4/info
- input/camera5/image
- input/camera5/info
- input/camera6/image
- input/camera6/info
- input/camera7/image
- input/camera7/info
- output/objects
- number_of_cameras
- data_path [default: $(env HOME)/autoware_data]
- bevdet_model_name [default: bevdet_one_lt_d]
- bevdet_model_path [default: $(var data_path)/tensorrt_bevdet]
- launch/object_recognition/detection/detector/camera_lidar_detector.launch.xml
-
- ns
- lidar_detection_model_type
- lidar_detection_model_name
- use_low_intensity_cluster_filter
- use_image_segmentation_based_filter
- number_of_cameras
- input/camera0/image
- input/camera0/info
- input/camera0/rois
- input/camera1/image
- input/camera1/info
- input/camera1/rois
- input/camera2/image
- input/camera2/info
- input/camera2/rois
- input/camera3/image
- input/camera3/info
- input/camera3/rois
- input/camera4/image
- input/camera4/info
- input/camera4/rois
- input/camera5/image
- input/camera5/info
- input/camera5/rois
- input/camera6/image
- input/camera6/info
- input/camera6/rois
- input/camera7/image
- input/camera7/info
- input/camera7/rois
- input/camera8/image
- input/camera8/info
- input/camera8/rois
- segmentation_pointcloud_fusion_camera_ids
- image_topic_name
- node/pointcloud_container
- input/pointcloud
- input/pointcloud_map/pointcloud
- input/obstacle_segmentation/pointcloud
- output/ml_detector/objects
- output/rule_detector/objects
- output/clustering/cluster_objects
- launch/object_recognition/detection/detector/camera_lidar_irregular_object_detector.launch.xml
- ns
- pipeline_ns
- input/pointcloud
- fusion_camera_ids [default: [0]]
- image_topic_name [default: image_raw]
- irregular_object_detector_param_path
- launch/object_recognition/detection/detector/lidar_dnn_detector.launch.xml
- lidar_detection_model_type
- lidar_detection_model_name
- bevfusion_model_path [default: $(var data_path)/bevfusion]
- centerpoint_model_path [default: $(var data_path)/lidar_centerpoint]
- transfusion_model_path [default: $(var data_path)/lidar_transfusion]
- use_short_range_detection [default: false]
- lidar_short_range_detection_model_type
- lidar_short_range_detection_model_name
- short_range_centerpoint_model_path [default: $(var data_path)/lidar_short_range_centerpoint]
- node/pointcloud_container
- input/pointcloud
- output/objects
- output/short_range_objects
- lidar_short_range_model_param_path [default: $(find-pkg-share autoware_lidar_centerpoint)/config]
- lidar_model_param_path [default: $(find-pkg-share autoware_bevfusion)/config]
- lidar_model_param_path [default: $(find-pkg-share autoware_lidar_transfusion)/config]
- lidar_model_param_path [default: $(find-pkg-share autoware_lidar_centerpoint)/config]
- launch/object_recognition/detection/detector/lidar_rule_detector.launch.xml
- ns
- node/pointcloud_container
- input/pointcloud_map/pointcloud
- input/obstacle_segmentation/pointcloud
- output/cluster_objects
- output/objects
- launch/object_recognition/detection/detector/tracker_based_detector.launch.xml
- input/clusters
- input/tracked_objects
- output/objects
- launch/object_recognition/detection/filter/object_filter.launch.xml
- objects_filter_method [default: lanelet_filter]
- input/objects
- output/objects
- launch/object_recognition/detection/filter/object_validator.launch.xml
- objects_validation_method
- input/obstacle_pointcloud
- input/objects
- output/objects
- launch/object_recognition/detection/filter/radar_filter.launch.xml
- object_velocity_splitter_param_path [default: $(var object_recognition_detection_object_velocity_splitter_radar_param_path)]
- object_range_splitter_param_path [default: $(var object_recognition_detection_object_range_splitter_radar_param_path)]
- radar_lanelet_filtering_range_param_path [default: $(find-pkg-share autoware_detected_object_validation)/config/object_lanelet_filter.param.yaml]
- input/radar
- output/objects
- launch/object_recognition/detection/merger/camera_lidar_merger.launch.xml
- object_recognition_detection_object_merger_data_association_matrix_param_path [default: $(find-pkg-share autoware_object_merger)/config/data_association_matrix.param.yaml]
- object_recognition_detection_object_merger_distance_threshold_list_path [default: $(find-pkg-share autoware_object_merger)/config/overlapped_judge.param.yaml]
- ml_camera_lidar_object_association_merger_param_path [default: $(find-pkg-share autoware_object_merger)/config/object_association_merger.param.yaml]
- lidar_detection_model_type
- use_detection_by_tracker
- use_irregular_object_detector
- use_object_filter
- objects_filter_method
- number_of_cameras
- input/camera0/image
- input/camera0/info
- input/camera0/rois
- input/camera1/image
- input/camera1/info
- input/camera1/rois
- input/camera2/image
- input/camera2/info
- input/camera2/rois
- input/camera3/image
- input/camera3/info
- input/camera3/rois
- input/camera4/image
- input/camera4/info
- input/camera4/rois
- input/camera5/image
- input/camera5/info
- input/camera5/rois
- input/camera6/image
- input/camera6/info
- input/camera6/rois
- input/camera7/image
- input/camera7/info
- input/camera7/rois
- input/camera8/image
- input/camera8/info
- input/camera8/rois
- input/lidar_ml/objects
- input/lidar_rule/objects
- input/detection_by_tracker/objects
- output/objects [default: objects]
- alpha_merger_priority_mode [default: 0]
- launch/object_recognition/detection/merger/camera_lidar_radar_merger.launch.xml
- object_recognition_detection_object_merger_data_association_matrix_param_path [default: $(find-pkg-share autoware_object_merger)/config/data_association_matrix.param.yaml]
- object_recognition_detection_object_merger_distance_threshold_list_path [default: $(find-pkg-share autoware_object_merger)/config/overlapped_judge.param.yaml]
- ml_camera_lidar_object_association_merger_param_path [default: $(find-pkg-share autoware_object_merger)/config/object_association_merger.param.yaml]
- far_object_merger_sync_queue_size [default: 20]
- lidar_detection_model_type
- use_radar_tracking_fusion
- use_detection_by_tracker
- use_irregular_object_detector
- use_object_filter
- objects_filter_method
- number_of_cameras
- input/camera0/image
- input/camera0/info
- input/camera0/rois
- input/camera1/image
- input/camera1/info
- input/camera1/rois
- input/camera2/image
- input/camera2/info
- input/camera2/rois
- input/camera3/image
- input/camera3/info
- input/camera3/rois
- input/camera4/image
- input/camera4/info
- input/camera4/rois
- input/camera5/image
- input/camera5/info
- input/camera5/rois
- input/camera6/image
- input/camera6/info
- input/camera6/rois
- input/camera7/image
- input/camera7/info
- input/camera7/rois
- input/camera8/image
- input/camera8/info
- input/camera8/rois
- input/lidar_ml/objects
- input/lidar_rule/objects
- input/radar/objects
- input/radar_far/objects
- input/detection_by_tracker/objects
- output/objects [default: objects]
- alpha_merger_priority_mode [default: 0]
- launch/object_recognition/detection/merger/lidar_merger.launch.xml
- object_recognition_detection_object_merger_data_association_matrix_param_path [default: $(find-pkg-share autoware_object_merger)/config/data_association_matrix.param.yaml]
- object_recognition_detection_object_merger_distance_threshold_list_path [default: $(find-pkg-share autoware_object_merger)/config/overlapped_judge.param.yaml]
- lidar_detection_model_type
- use_detection_by_tracker
- use_object_filter
- objects_filter_method
- input/lidar_ml/objects [default: $(var lidar_detection_model_type)/objects]
- input/lidar_rule/objects [default: clustering/objects]
- input/detection_by_tracker/objects [default: detection_by_tracker/objects]
- output/objects
- launch/object_recognition/prediction/prediction.launch.xml
- use_vector_map [default: false]
- input/objects [default: /perception/object_recognition/tracking/objects]
- launch/object_recognition/tracking/tracking.launch.xml
- object_recognition_tracking_radar_tracked_object_sorter_param_path
- object_recognition_tracking_radar_tracked_object_lanelet_filter_param_path
- object_recognition_tracking_object_merger_data_association_matrix_param_path
- object_recognition_tracking_object_merger_node_param_path
- mode [default: lidar]
- use_radar_tracking_fusion [default: false]
- use_multi_channel_tracker_merger
- use_validator
- use_short_range_detection
- lidar_detection_model_type [default: centerpoint]
- input/merged_detection/channel [default: detected_objects]
- input/merged_detection/objects [default: /perception/object_recognition/detection/objects]
- input/lidar_dnn/channel [default: lidar_$(var lidar_detection_model_type)]
- input/lidar_dnn/objects [default: /perception/object_recognition/detection/$(var lidar_detection_model_type)/objects]
- input/lidar_dnn_validated/objects [default: /perception/object_recognition/detection/$(var lidar_detection_model_type)/validation/objects]
- input/lidar_dnn_short_range/channel [default: lidar_$(var lidar_short_range_detection_model_type)]
- input/lidar_dnn_short_range/objects [default: /perception/object_recognition/detection/$(var lidar_short_range_detection_model_type)/objects]
- input/camera_lidar_rule_detector/channel [default: camera_lidar_fusion]
- input/camera_lidar_rule_detector/objects [default: /perception/object_recognition/detection/clustering/camera_lidar_fusion/objects]
- input/irregular_object_detector/channel [default: camera_lidar_fusion_irregular]
- input/irregular_object_detector/objects [default: /perception/object_recognition/detection/irregular_object/objects]
- input/tracker_based_detector/channel [default: detection_by_tracker]
- input/tracker_based_detector/objects [default: /perception/object_recognition/detection/detection_by_tracker/objects]
- input/radar/channel [default: radar]
- input/radar/far_objects [default: /perception/object_recognition/detection/radar/far_objects]
- input/radar/objects [default: /perception/object_recognition/detection/radar/objects]
- input/radar/tracked_objects [default: /sensing/radar/tracked_objects]
- output/objects [default: $(var ns)/objects]
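Each tracking input above is a channel-name/objects-topic pair, and the lidar defaults resolve against lidar_detection_model_type (for centerpoint, input/lidar_dnn/channel becomes lidar_centerpoint). A hedged sketch of including the tracking layer; the four *_param_path arguments at the top of the list have no defaults and must also be supplied, and the values shown are illustrative assumptions:

<include file="$(find-pkg-share tier4_perception_launch)/launch/object_recognition/tracking/tracking.launch.xml">
  <arg name="mode" value="lidar" />
  <arg name="lidar_detection_model_type" value="centerpoint" />
  <!-- input/lidar_dnn/objects then resolves to
       /perception/object_recognition/detection/centerpoint/objects -->
  <arg name="use_multi_channel_tracker_merger" value="true" />
  <arg name="object_recognition_tracking_object_merger_node_param_path" value="..." />
</include>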
- launch/occupancy_grid_map/probabilistic_occupancy_grid_map.launch.xml
- input/obstacle_pointcloud [default: concatenated/pointcloud]
- input/raw_pointcloud [default: no_ground/oneshot/pointcloud]
- output [default: /perception/occupancy_grid_map/map]
- use_intra_process [default: false]
- use_multithread [default: false]
- pointcloud_container_name [default: pointcloud_container]
- occupancy_grid_map_method
- occupancy_grid_map_param_path
- occupancy_grid_map_updater
- occupancy_grid_map_updater_param_path
- input_obstacle_pointcloud [default: false]
- input_obstacle_and_raw_pointcloud [default: true]
- use_pointcloud_container [default: true]
- launch/perception.launch.xml
- object_recognition_detection_euclidean_cluster_param_path
- object_recognition_detection_outlier_param_path
- object_recognition_detection_object_lanelet_filter_param_path
- object_recognition_detection_object_position_filter_param_path
- object_recognition_detection_pointcloud_map_filter_param_path
- object_recognition_prediction_map_based_prediction_param_path
- object_recognition_detection_object_merger_data_association_matrix_param_path
- ml_camera_lidar_object_association_merger_param_path
- object_recognition_detection_object_merger_distance_threshold_list_path
- object_recognition_detection_fusion_sync_param_path
- object_recognition_detection_roi_cluster_fusion_param_path
- object_recognition_detection_irregular_object_detector_param_path
- object_recognition_detection_roi_detected_object_fusion_param_path
- object_recognition_detection_pointpainting_fusion_common_param_path
- object_recognition_detection_lidar_model_param_path
- object_recognition_detection_radar_lanelet_filtering_range_param_path
- object_recognition_detection_object_velocity_splitter_radar_param_path
- object_recognition_detection_object_velocity_splitter_radar_fusion_param_path
- object_recognition_detection_object_range_splitter_radar_param_path
- object_recognition_detection_object_range_splitter_radar_fusion_param_path
- object_recognition_tracking_multi_object_tracker_data_association_matrix_param_path
- object_recognition_tracking_multi_object_tracker_input_channels_param_path
- object_recognition_tracking_multi_object_tracker_node_param_path
- object_recognition_tracking_radar_tracked_object_sorter_param_path
- object_recognition_tracking_radar_tracked_object_lanelet_filter_param_path
- obstacle_segmentation_ground_segmentation_param_path
- obstacle_segmentation_ground_segmentation_elevation_map_param_path
- object_recognition_detection_obstacle_pointcloud_based_validator_param_path
- object_recognition_detection_detection_by_tracker_param
- occupancy_grid_map_method
- occupancy_grid_map_param_path
- occupancy_grid_map_updater
- occupancy_grid_map_updater_param_path
- lidar_detection_model
- each_traffic_light_map_based_detector_param_path
- traffic_light_fine_detector_param_path
- yolox_traffic_light_detector_param_path
- car_traffic_light_classifier_param_path
- pedestrian_traffic_light_classifier_param_path
- traffic_light_roi_visualizer_param_path
- traffic_light_occlusion_predictor_param_path
- traffic_light_multi_camera_fusion_param_path
- traffic_light_arbiter_param_path
- crosswalk_traffic_light_estimator_param_path
- lidar_detection_model_type [default: $(eval "'$(var lidar_detection_model)'.split('/')[0]")]
- lidar_detection_model_name [default: $(eval "'$(var lidar_detection_model)'.split('/')[1] if '/' in '$(var lidar_detection_model)' else ''")]
- use_short_range_detection [default: false]
- lidar_short_range_detection_model_type [default: centerpoint_short_range]
- lidar_short_range_detection_model_name [default: centerpoint_short_range]
- bevfusion_model_path [default: $(var data_path)/bevfusion]
- centerpoint_model_path [default: $(var data_path)/lidar_centerpoint]
- transfusion_model_path [default: $(var data_path)/lidar_transfusion]
- short_range_centerpoint_model_path [default: $(var data_path)/lidar_short_range_centerpoint]
- pointpainting_model_path [default: $(var data_path)/image_projection_based_fusion]
- input/pointcloud [default: /sensing/lidar/concatenated/pointcloud]
- mode [default: camera_lidar_fusion]
- data_path [default: $(env HOME)/autoware_data]
- lidar_detection_model_type [default: $(var lidar_detection_model_type)]
- lidar_detection_model_name [default: $(var lidar_detection_model_name)]
- image_raw0 [default: /sensing/camera/camera0/image_rect_color]
- camera_info0 [default: /sensing/camera/camera0/camera_info]
- detection_rois0 [default: /perception/object_recognition/detection/rois0]
- image_raw1 [default: /sensing/camera/camera1/image_rect_color]
- camera_info1 [default: /sensing/camera/camera1/camera_info]
- detection_rois1 [default: /perception/object_recognition/detection/rois1]
- image_raw2 [default: /sensing/camera/camera2/image_rect_color]
- camera_info2 [default: /sensing/camera/camera2/camera_info]
- detection_rois2 [default: /perception/object_recognition/detection/rois2]
- image_raw3 [default: /sensing/camera/camera3/image_rect_color]
- camera_info3 [default: /sensing/camera/camera3/camera_info]
- detection_rois3 [default: /perception/object_recognition/detection/rois3]
- image_raw4 [default: /sensing/camera/camera4/image_rect_color]
- camera_info4 [default: /sensing/camera/camera4/camera_info]
- detection_rois4 [default: /perception/object_recognition/detection/rois4]
- image_raw5 [default: /sensing/camera/camera5/image_rect_color]
- camera_info5 [default: /sensing/camera/camera5/camera_info]
- detection_rois5 [default: /perception/object_recognition/detection/rois5]
- image_raw6 [default: /sensing/camera/camera6/image_rect_color]
- camera_info6 [default: /sensing/camera/camera6/camera_info]
- detection_rois6 [default: /perception/object_recognition/detection/rois6]
- image_raw7 [default: /sensing/camera/camera7/image_rect_color]
- camera_info7 [default: /sensing/camera/camera7/camera_info]
- detection_rois7 [default: /perception/object_recognition/detection/rois7]
- image_raw8 [default: /sensing/camera/camera8/image_rect_color]
- camera_info8 [default: /sensing/camera/camera8/camera_info]
- detection_rois8 [default: /perception/object_recognition/detection/rois8]
- image_number [default: 6]
- image_topic_name [default: image_rect_color]
- segmentation_pointcloud_fusion_camera_ids [default: [0,1,5]]
- ml_camera_lidar_merger_priority_mode [default: 0]
- pointcloud_container_name [default: pointcloud_container]
- use_vector_map [default: true]
- use_pointcloud_map [default: true]
- use_low_height_cropbox [default: true]
- use_object_filter [default: true]
- objects_filter_method [default: lanelet_filter]
- use_irregular_object_detector [default: true]
- use_low_intensity_cluster_filter [default: true]
- use_image_segmentation_based_filter [default: false]
- use_empty_dynamic_object_publisher [default: false]
- use_object_validator [default: true]
- objects_validation_method [default: obstacle_pointcloud]
- use_perception_online_evaluator [default: false]
- use_perception_analytics_publisher [default: true]
- use_obstacle_segmentation_single_frame_filter
- use_obstacle_segmentation_time_series_filter
- use_traffic_light_recognition
- traffic_light_recognition/fusion_only
- traffic_light_recognition/camera_namespaces
- traffic_light_recognition/use_high_accuracy_detection
- traffic_light_recognition/high_accuracy_detection_type
- traffic_light_recognition/whole_image_detection/model_path
- traffic_light_recognition/whole_image_detection/label_path
- traffic_light_recognition/fine_detection/model_path
- traffic_light_recognition/fine_detection/label_path
- traffic_light_recognition/classification/car/model_path
- traffic_light_recognition/classification/car/label_path
- traffic_light_recognition/classification/pedestrian/model_path
- traffic_light_recognition/classification/pedestrian/label_path
- use_detection_by_tracker [default: true]
- use_radar_tracking_fusion [default: true]
- input/radar [default: /sensing/radar/detected_objects]
- use_multi_channel_tracker_merger [default: false]
- downsample_perception_common_pointcloud [default: false]
- common_downsample_voxel_size_x [default: 0.05]
- common_downsample_voxel_size_y [default: 0.05]
- common_downsample_voxel_size_z [default: 0.05]
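Note the $(eval ...) defaults above: perception.launch.xml splits the single lidar_detection_model argument on "/" to derive lidar_detection_model_type and lidar_detection_model_name. A hedged sketch (the model identifiers are illustrative assumptions):

<include file="$(find-pkg-share tier4_perception_launch)/launch/perception.launch.xml">
  <!-- "type/name" form: type resolves to transfusion, name to transfusion_ml (illustrative) -->
  <arg name="lidar_detection_model" value="transfusion/transfusion_ml" />
  <!-- a bare "type" (e.g. value="centerpoint") leaves the name empty per the eval default -->
</include>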
- launch/traffic_light_recognition/traffic_light.launch.xml
- enable_image_decompressor [default: true]
- fusion_only
- camera_namespaces
- use_high_accuracy_detection
- high_accuracy_detection_type
- each_traffic_light_map_based_detector_param_path
- traffic_light_fine_detector_param_path
- yolox_traffic_light_detector_param_path
- car_traffic_light_classifier_param_path
- pedestrian_traffic_light_classifier_param_path
- traffic_light_roi_visualizer_param_path
- traffic_light_occlusion_predictor_param_path
- traffic_light_multi_camera_fusion_param_path
- traffic_light_arbiter_param_path
- crosswalk_traffic_light_estimator_param_path
- whole_image_detection/model_path
- whole_image_detection/label_path
- fine_detection/model_path
- fine_detection/label_path
- classification/car/model_path
- classification/car/label_path
- classification/pedestrian/model_path
- classification/pedestrian/label_path
- input/vector_map [default: /map/vector_map]
- input/route [default: /planning/mission_planning/route]
- input/cloud [default: /sensing/lidar/top/pointcloud_raw_ex]
- internal/traffic_signals [default: /perception/traffic_light_recognition/internal/traffic_signals]
- external/traffic_signals [default: /perception/traffic_light_recognition/external/traffic_signals]
- judged/traffic_signals [default: /perception/traffic_light_recognition/judged/traffic_signals]
- output/traffic_signals [default: /perception/traffic_light_recognition/traffic_signals]
Messages
Services
Plugins
Package Summary
Tags | No category tags. |
Version | 0.47.0 |
License | Apache License 2.0 |
Build type | AMENT_CMAKE |
Use | RECOMMENDED |
Repository Summary
Description | |
Checkout URI | https://github.com/autowarefoundation/autoware_universe.git |
VCS Type | git |
VCS Version | main |
Last Updated | 2025-08-16 |
Dev Status | UNKNOWN |
Released | UNRELEASED |
Tags | planner ros calibration self-driving-car autonomous-driving autonomous-vehicles ros2 3d-map autoware |
Contributing |
Help Wanted (-)
Good First Issues (-) Pull Requests to Review (-) |
Package Description
Additional Links
Maintainers
- Yukihiro Saito
- Yoshi Ri
- Taekjin Lee
- Masato Saeki
Authors
tier4_perception_launch
Structure
Package Dependencies
Please see <exec_depend>
in package.xml
.
Usage
You can include as follows in *.launch.xml
to use perception.launch.xml
.
Note that you should provide parameter paths as PACKAGE_param_path
. The list of parameter paths you should provide is written at the top of perception.launch.xml
.
<include file="$(find-pkg-share tier4_perception_launch)/launch/perception.launch.xml">
<!-- options for mode: camera_lidar_fusion, lidar, camera -->
<arg name="mode" value="lidar" />
<!-- Parameter files -->
<arg name="FOO_param_path" value="..."/>
<arg name="BAR_param_path" value="..."/>
...
</include>
Changelog for package tier4_perception_launch
0.47.0 (2025-08-11)
-
feat(perception_online_evaluator): add functionality to publish perception analytics info (#11089)
* feat: add functionality to calculate perception metrics for MOB in autoware_perception_online_evaluator chore: configure settings for mob metrics calculation
* feat: change implementation from one topic per metric to all metrics published in one metric for better management by metric agent refactor: rename FrameMetrics member to clarify variable meaning refactor: use array/vector instead of unorder_map for FrameMetrics for better performance chore: remap published topic name to match msg conventions
- fix: unittest error
- style(pre-commit): autofix
- refactor: replace MOB keyword with generalized expression of perception analytics
- chore: improve comment
* refactor: add a new autoware_perception_analytics_publisher_node to publish perception analytics info instead of using previous autoware_perception_online_evaluator_node chore: modify default launch setting to match the refactoring
- style(pre-commit): autofix
* fix: add initialization for [latencies_]{.title-ref} fix: use tf of objects timestamp instead of latest feat: use ConstSharedPtr to avoid repeated copy of large message in [PerceptionAnalyticsCalculator::setPredictedObjects]{.title-ref} ---------Co-authored-by: Jian Kang <<jian.kang@tier4.jp>> Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
-
fix(multi_object_tracker): add irregular objects topic (#11102)
- fix(multi_object_tracker): add irregular objects topic
- fix: change channel order
* Update launch/tier4_perception_launch/launch/object_recognition/tracking/tracking.launch.xml Co-authored-by: Taekjin LEE <<technolojin@gmail.com>>
* Update launch/tier4_perception_launch/launch/object_recognition/tracking/tracking.launch.xml Co-authored-by: Taekjin LEE <<technolojin@gmail.com>>
* Update perception/autoware_multi_object_tracker/config/input_channels.param.yaml Co-authored-by: Taekjin LEE <<technolojin@gmail.com>>
* Update launch/tier4_perception_launch/launch/object_recognition/tracking/tracking.launch.xml Co-authored-by: Taekjin LEE <<technolojin@gmail.com>>
- fix: unused channels
- fix: schema
- docs: update readme
- style(pre-commit): autofix
- fix: short name
* feat: add lidar_centerpoint_short_range input channel with default flags ---------Co-authored-by: Taekjin LEE <<technolojin@gmail.com>> Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com> Co-authored-by: Taekjin LEE <<taekjin.lee@tier4.jp>>
-
chore: sync files (#11091) Co-authored-by: github-actions <<github-actions@github.com>> Co-authored-by: M. Fatih Cırıt <<mfc@autoware.org>> Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
-
fix(autoware_object_merger): add merger priority_mode (#11042)
* fix: add merger priority_mode fix: add priority mode into launch fix: add class based priority matrix fix: adjust priority matrix
- fix: add Confidence mode support
- docs: schema update
- fix: launch
* fix: schema json ---------
-
feat(tier4_perception_launch): add missing remappings to launch file (#11037)
-
feat(autoware_bevdet): implementation of bevdet using tensorrt (#10441)
-
feat(tracking): add short range detection support and update related
File truncated at 100 lines see the full file
Package Dependencies
System Dependencies
Dependant Packages
Launch files
- launch/object_recognition/detection/detection.launch.xml
-
- mode
- lidar_detection_model_type
- lidar_detection_model_name
- use_short_range_detection
- lidar_short_range_detection_model_type
- lidar_short_range_detection_model_name
- use_object_filter
- objects_filter_method
- use_pointcloud_map
- use_detection_by_tracker
- use_validator
- objects_validation_method
- use_low_intensity_cluster_filter
- use_image_segmentation_based_filter
- use_multi_channel_tracker_merger
- use_radar_tracking_fusion
- use_irregular_object_detector
- irregular_object_detector_fusion_camera_ids [default: [0]]
- ml_camera_lidar_merger_priority_mode
- number_of_cameras
- node/pointcloud_container
- input/pointcloud
- input/obstacle_segmentation/pointcloud [default: /perception/obstacle_segmentation/pointcloud]
- input/camera0/image
- input/camera0/info
- input/camera0/rois
- input/camera1/image
- input/camera1/info
- input/camera1/rois
- input/camera2/image
- input/camera2/info
- input/camera2/rois
- input/camera3/image
- input/camera3/info
- input/camera3/rois
- input/camera4/image
- input/camera4/info
- input/camera4/rois
- input/camera5/image
- input/camera5/info
- input/camera5/rois
- input/camera6/image
- input/camera6/info
- input/camera6/rois
- input/camera7/image
- input/camera7/info
- input/camera7/rois
- input/camera8/image
- input/camera8/info
- input/camera8/rois
- image_topic_name
- segmentation_pointcloud_fusion_camera_ids
- input/radar
- input/tracked_objects [default: /perception/object_recognition/tracking/objects]
- output/objects [default: objects]
- launch/object_recognition/detection/detector/camera_bev_detector.launch.xml
-
- input/camera0/image
- input/camera0/info
- input/camera1/image
- input/camera1/info
- input/camera2/image
- input/camera2/info
- input/camera3/image
- input/camera3/info
- input/camera4/image
- input/camera4/info
- input/camera5/image
- input/camera5/info
- input/camera6/image
- input/camera6/info
- input/camera7/image
- input/camera7/info
- output/objects
- number_of_cameras
- data_path [default: $(env HOME)/autoware_data]
- bevdet_model_name [default: bevdet_one_lt_d]
- bevdet_model_path [default: $(var data_path)/tensorrt_bevdet]
- launch/object_recognition/detection/detector/camera_lidar_detector.launch.xml
-
- ns
- lidar_detection_model_type
- lidar_detection_model_name
- use_low_intensity_cluster_filter
- use_image_segmentation_based_filter
- number_of_cameras
- input/camera0/image
- input/camera0/info
- input/camera0/rois
- input/camera1/image
- input/camera1/info
- input/camera1/rois
- input/camera2/image
- input/camera2/info
- input/camera2/rois
- input/camera3/image
- input/camera3/info
- input/camera3/rois
- input/camera4/image
- input/camera4/info
- input/camera4/rois
- input/camera5/image
- input/camera5/info
- input/camera5/rois
- input/camera6/image
- input/camera6/info
- input/camera6/rois
- input/camera7/image
- input/camera7/info
- input/camera7/rois
- input/camera8/image
- input/camera8/info
- input/camera8/rois
- segmentation_pointcloud_fusion_camera_ids
- image_topic_name
- node/pointcloud_container
- input/pointcloud
- input/pointcloud_map/pointcloud
- input/obstacle_segmentation/pointcloud
- output/ml_detector/objects
- output/rule_detector/objects
- output/clustering/cluster_objects
- launch/object_recognition/detection/detector/camera_lidar_irregular_object_detector.launch.xml
-
- ns
- pipeline_ns
- input/pointcloud
- fusion_camera_ids [default: [0]]
- image_topic_name [default: image_raw]
- irregular_object_detector_param_path
- launch/object_recognition/detection/detector/lidar_dnn_detector.launch.xml
-
- lidar_detection_model_type
- lidar_detection_model_name
- bevfusion_model_path [default: $(var data_path)/bevfusion]
- centerpoint_model_path [default: $(var data_path)/lidar_centerpoint]
- transfusion_model_path [default: $(var data_path)/lidar_transfusion]
- use_short_range_detection [default: false]
- lidar_short_range_detection_model_type
- lidar_short_range_detection_model_name
- short_range_centerpoint_model_path [default: $(var data_path)/lidar_short_range_centerpoint]
- node/pointcloud_container
- input/pointcloud
- output/objects
- output/short_range_objects
- lidar_short_range_model_param_path [default: $(find-pkg-share autoware_lidar_centerpoint)/config]
- lidar_model_param_path [default: $(find-pkg-share autoware_bevfusion)/config]
- lidar_model_param_path [default: $(find-pkg-share autoware_lidar_transfusion)/config]
- lidar_model_param_path [default: $(find-pkg-share autoware_lidar_centerpoint)/config]
- launch/object_recognition/detection/detector/lidar_rule_detector.launch.xml
-
- ns
- node/pointcloud_container
- input/pointcloud_map/pointcloud
- input/obstacle_segmentation/pointcloud
- output/cluster_objects
- output/objects
- launch/object_recognition/detection/detector/tracker_based_detector.launch.xml
-
- input/clusters
- input/tracked_objects
- output/objects
- launch/object_recognition/detection/filter/object_filter.launch.xml
-
- objects_filter_method [default: lanelet_filter]
- input/objects
- output/objects
- launch/object_recognition/detection/filter/object_validator.launch.xml
-
- objects_validation_method
- input/obstacle_pointcloud
- input/objects
- output/objects
- launch/object_recognition/detection/filter/radar_filter.launch.xml
-
- object_velocity_splitter_param_path [default: $(var object_recognition_detection_object_velocity_splitter_radar_param_path)]
- object_range_splitter_param_path [default: $(var object_recognition_detection_object_range_splitter_radar_param_path)]
- radar_lanelet_filtering_range_param_path [default: $(find-pkg-share autoware_detected_object_validation)/config/object_lanelet_filter.param.yaml]
- input/radar
- output/objects
- launch/object_recognition/detection/merger/camera_lidar_merger.launch.xml
-
- object_recognition_detection_object_merger_data_association_matrix_param_path [default: $(find-pkg-share autoware_object_merger)/config/data_association_matrix.param.yaml]
- object_recognition_detection_object_merger_distance_threshold_list_path [default: $(find-pkg-share autoware_object_merger)/config/overlapped_judge.param.yaml]
- ml_camera_lidar_object_association_merger_param_path [default: $(find-pkg-share autoware_object_merger)/config/object_association_merger.param.yaml]
- lidar_detection_model_type
- use_detection_by_tracker
- use_irregular_object_detector
- use_object_filter
- objects_filter_method
- number_of_cameras
- input/camera0/image
- input/camera0/info
- input/camera0/rois
- input/camera1/image
- input/camera1/info
- input/camera1/rois
- input/camera2/image
- input/camera2/info
- input/camera2/rois
- input/camera3/image
- input/camera3/info
- input/camera3/rois
- input/camera4/image
- input/camera4/info
- input/camera4/rois
- input/camera5/image
- input/camera5/info
- input/camera5/rois
- input/camera6/image
- input/camera6/info
- input/camera6/rois
- input/camera7/image
- input/camera7/info
- input/camera7/rois
- input/camera8/image
- input/camera8/info
- input/camera8/rois
- input/lidar_ml/objects
- input/lidar_rule/objects
- input/detection_by_tracker/objects
- output/objects [default: objects]
- alpha_merger_priority_mode [default: 0]
- launch/object_recognition/detection/merger/camera_lidar_radar_merger.launch.xml
-
- object_recognition_detection_object_merger_data_association_matrix_param_path [default: $(find-pkg-share autoware_object_merger)/config/data_association_matrix.param.yaml]
- object_recognition_detection_object_merger_distance_threshold_list_path [default: $(find-pkg-share autoware_object_merger)/config/overlapped_judge.param.yaml]
- ml_camera_lidar_object_association_merger_param_path [default: $(find-pkg-share autoware_object_merger)/config/object_association_merger.param.yaml]
- far_object_merger_sync_queue_size [default: 20]
- lidar_detection_model_type
- use_radar_tracking_fusion
- use_detection_by_tracker
- use_irregular_object_detector
- use_object_filter
- objects_filter_method
- number_of_cameras
- input/camera0/image
- input/camera0/info
- input/camera0/rois
- input/camera1/image
- input/camera1/info
- input/camera1/rois
- input/camera2/image
- input/camera2/info
- input/camera2/rois
- input/camera3/image
- input/camera3/info
- input/camera3/rois
- input/camera4/image
- input/camera4/info
- input/camera4/rois
- input/camera5/image
- input/camera5/info
- input/camera5/rois
- input/camera6/image
- input/camera6/info
- input/camera6/rois
- input/camera7/image
- input/camera7/info
- input/camera7/rois
- input/camera8/image
- input/camera8/info
- input/camera8/rois
- input/lidar_ml/objects
- input/lidar_rule/objects
- input/radar/objects
- input/radar_far/objects
- input/detection_by_tracker/objects
- output/objects [default: objects]
- alpha_merger_priority_mode [default: 0]
- launch/object_recognition/detection/merger/lidar_merger.launch.xml
-
- object_recognition_detection_object_merger_data_association_matrix_param_path [default: $(find-pkg-share autoware_object_merger)/config/data_association_matrix.param.yaml]
- object_recognition_detection_object_merger_distance_threshold_list_path [default: $(find-pkg-share autoware_object_merger)/config/overlapped_judge.param.yaml]
- lidar_detection_model_type
- use_detection_by_tracker
- use_object_filter
- objects_filter_method
- input/lidar_ml/objects [default: $(var lidar_detection_model_type)/objects]
- input/lidar_rule/objects [default: clustering/objects]
- input/detection_by_tracker/objects [default: detection_by_tracker/objects]
- output/objects
- launch/object_recognition/prediction/prediction.launch.xml
-
- use_vector_map [default: false]
- input/objects [default: /perception/object_recognition/tracking/objects]
- launch/object_recognition/tracking/tracking.launch.xml
-
- object_recognition_tracking_radar_tracked_object_sorter_param_path
- object_recognition_tracking_radar_tracked_object_lanelet_filter_param_path
- object_recognition_tracking_object_merger_data_association_matrix_param_path
- object_recognition_tracking_object_merger_node_param_path
- mode [default: lidar]
- use_radar_tracking_fusion [default: false]
- use_multi_channel_tracker_merger
- use_validator
- use_short_range_detection
- lidar_detection_model_type [default: centerpoint]
- input/merged_detection/channel [default: detected_objects]
- input/merged_detection/objects [default: /perception/object_recognition/detection/objects]
- input/lidar_dnn/channel [default: lidar_$(var lidar_detection_model_type)]
- input/lidar_dnn/objects [default: /perception/object_recognition/detection/$(var lidar_detection_model_type)/objects]
- input/lidar_dnn_validated/objects [default: /perception/object_recognition/detection/$(var lidar_detection_model_type)/validation/objects]
- input/lidar_dnn_short_range/channel [default: lidar_$(var lidar_short_range_detection_model_type)]
- input/lidar_dnn_short_range/objects [default: /perception/object_recognition/detection/$(var lidar_short_range_detection_model_type)/objects]
- input/camera_lidar_rule_detector/channel [default: camera_lidar_fusion]
- input/camera_lidar_rule_detector/objects [default: /perception/object_recognition/detection/clustering/camera_lidar_fusion/objects]
- input/irregular_object_detector/channel [default: camera_lidar_fusion_irregular]
- input/irregular_object_detector/objects [default: /perception/object_recognition/detection/irregular_object/objects]
- input/tracker_based_detector/channel [default: detection_by_tracker]
- input/tracker_based_detector/objects [default: /perception/object_recognition/detection/detection_by_tracker/objects]
- input/radar/channel [default: radar]
- input/radar/far_objects [default: /perception/object_recognition/detection/radar/far_objects]
- input/radar/objects [default: /perception/object_recognition/detection/radar/objects]
- input/radar/tracked_objects [default: /sensing/radar/tracked_objects]
- output/objects [default: $(var ns)/objects]
- launch/occupancy_grid_map/probabilistic_occupancy_grid_map.launch.xml
-
- input/obstacle_pointcloud [default: concatenated/pointcloud]
- input/raw_pointcloud [default: no_ground/oneshot/pointcloud]
- output [default: /perception/occupancy_grid_map/map]
- use_intra_process [default: false]
- use_multithread [default: false]
- pointcloud_container_name [default: pointcloud_container]
- occupancy_grid_map_method
- occupancy_grid_map_param_path
- occupancy_grid_map_updater
- occupancy_grid_map_updater_param_path
- input_obstacle_pointcloud [default: false]
- input_obstacle_and_raw_pointcloud [default: true]
- use_pointcloud_container [default: true]
- launch/perception.launch.xml
-
- object_recognition_detection_euclidean_cluster_param_path
- object_recognition_detection_outlier_param_path
- object_recognition_detection_object_lanelet_filter_param_path
- object_recognition_detection_object_position_filter_param_path
- object_recognition_detection_pointcloud_map_filter_param_path
- object_recognition_prediction_map_based_prediction_param_path
- object_recognition_detection_object_merger_data_association_matrix_param_path
- ml_camera_lidar_object_association_merger_param_path
- object_recognition_detection_object_merger_distance_threshold_list_path
- object_recognition_detection_fusion_sync_param_path
- object_recognition_detection_roi_cluster_fusion_param_path
- object_recognition_detection_irregular_object_detector_param_path
- object_recognition_detection_roi_detected_object_fusion_param_path
- object_recognition_detection_pointpainting_fusion_common_param_path
- object_recognition_detection_lidar_model_param_path
- object_recognition_detection_radar_lanelet_filtering_range_param_path
- object_recognition_detection_object_velocity_splitter_radar_param_path
- object_recognition_detection_object_velocity_splitter_radar_fusion_param_path
- object_recognition_detection_object_range_splitter_radar_param_path
- object_recognition_detection_object_range_splitter_radar_fusion_param_path
- object_recognition_tracking_multi_object_tracker_data_association_matrix_param_path
- object_recognition_tracking_multi_object_tracker_input_channels_param_path
- object_recognition_tracking_multi_object_tracker_node_param_path
- object_recognition_tracking_radar_tracked_object_sorter_param_path
- object_recognition_tracking_radar_tracked_object_lanelet_filter_param_path
- obstacle_segmentation_ground_segmentation_param_path
- obstacle_segmentation_ground_segmentation_elevation_map_param_path
- object_recognition_detection_obstacle_pointcloud_based_validator_param_path
- object_recognition_detection_detection_by_tracker_param
- occupancy_grid_map_method
- occupancy_grid_map_param_path
- occupancy_grid_map_updater
- occupancy_grid_map_updater_param_path
- lidar_detection_model
- each_traffic_light_map_based_detector_param_path
- traffic_light_fine_detector_param_path
- yolox_traffic_light_detector_param_path
- car_traffic_light_classifier_param_path
- pedestrian_traffic_light_classifier_param_path
- traffic_light_roi_visualizer_param_path
- traffic_light_occlusion_predictor_param_path
- traffic_light_multi_camera_fusion_param_path
- traffic_light_arbiter_param_path
- crosswalk_traffic_light_estimator_param_path
- lidar_detection_model_type [default: $(eval "'$(var lidar_detection_model)'.split('/')[0]")]
- lidar_detection_model_name [default: $(eval "'$(var lidar_detection_model)'.split('/')[1] if '/' in '$(var lidar_detection_model)' else ''")]
- use_short_range_detection [default: false]
- lidar_short_range_detection_model_type [default: centerpoint_short_range]
- lidar_short_range_detection_model_name [default: centerpoint_short_range]
- bevfusion_model_path [default: $(var data_path)/bevfusion]
- centerpoint_model_path [default: $(var data_path)/lidar_centerpoint]
- transfusion_model_path [default: $(var data_path)/lidar_transfusion]
- short_range_centerpoint_model_path [default: $(var data_path)/lidar_short_range_centerpoint]
- pointpainting_model_path [default: $(var data_path)/image_projection_based_fusion]
- input/pointcloud [default: /sensing/lidar/concatenated/pointcloud]
- mode [default: camera_lidar_fusion]
- data_path [default: $(env HOME)/autoware_data]
- lidar_detection_model_type [default: $(var lidar_detection_model_type)]
- lidar_detection_model_name [default: $(var lidar_detection_model_name)]
- image_raw0 [default: /sensing/camera/camera0/image_rect_color]
- camera_info0 [default: /sensing/camera/camera0/camera_info]
- detection_rois0 [default: /perception/object_recognition/detection/rois0]
- image_raw1 [default: /sensing/camera/camera1/image_rect_color]
- camera_info1 [default: /sensing/camera/camera1/camera_info]
- detection_rois1 [default: /perception/object_recognition/detection/rois1]
- image_raw2 [default: /sensing/camera/camera2/image_rect_color]
- camera_info2 [default: /sensing/camera/camera2/camera_info]
- detection_rois2 [default: /perception/object_recognition/detection/rois2]
- image_raw3 [default: /sensing/camera/camera3/image_rect_color]
- camera_info3 [default: /sensing/camera/camera3/camera_info]
- detection_rois3 [default: /perception/object_recognition/detection/rois3]
- image_raw4 [default: /sensing/camera/camera4/image_rect_color]
- camera_info4 [default: /sensing/camera/camera4/camera_info]
- detection_rois4 [default: /perception/object_recognition/detection/rois4]
- image_raw5 [default: /sensing/camera/camera5/image_rect_color]
- camera_info5 [default: /sensing/camera/camera5/camera_info]
- detection_rois5 [default: /perception/object_recognition/detection/rois5]
- image_raw6 [default: /sensing/camera/camera6/image_rect_color]
- camera_info6 [default: /sensing/camera/camera6/camera_info]
- detection_rois6 [default: /perception/object_recognition/detection/rois6]
- image_raw7 [default: /sensing/camera/camera7/image_rect_color]
- camera_info7 [default: /sensing/camera/camera7/camera_info]
- detection_rois7 [default: /perception/object_recognition/detection/rois7]
- image_raw8 [default: /sensing/camera/camera8/image_rect_color]
- camera_info8 [default: /sensing/camera/camera8/camera_info]
- detection_rois8 [default: /perception/object_recognition/detection/rois8]
- image_number [default: 6]
- image_topic_name [default: image_rect_color]
- segmentation_pointcloud_fusion_camera_ids [default: [0,1,5]]
- ml_camera_lidar_merger_priority_mode [default: 0]
- pointcloud_container_name [default: pointcloud_container]
- use_vector_map [default: true]
- use_pointcloud_map [default: true]
- use_low_height_cropbox [default: true]
- use_object_filter [default: true]
- objects_filter_method [default: lanelet_filter]
- use_irregular_object_detector [default: true]
- use_low_intensity_cluster_filter [default: true]
- use_image_segmentation_based_filter [default: false]
- use_empty_dynamic_object_publisher [default: false]
- use_object_validator [default: true]
- objects_validation_method [default: obstacle_pointcloud]
- use_perception_online_evaluator [default: false]
- use_perception_analytics_publisher [default: true]
- use_obstacle_segmentation_single_frame_filter
- use_obstacle_segmentation_time_series_filter
- use_traffic_light_recognition
- traffic_light_recognition/fusion_only
- traffic_light_recognition/camera_namespaces
- traffic_light_recognition/use_high_accuracy_detection
- traffic_light_recognition/high_accuracy_detection_type
- traffic_light_recognition/whole_image_detection/model_path
- traffic_light_recognition/whole_image_detection/label_path
- traffic_light_recognition/fine_detection/model_path
- traffic_light_recognition/fine_detection/label_path
- traffic_light_recognition/classification/car/model_path
- traffic_light_recognition/classification/car/label_path
- traffic_light_recognition/classification/pedestrian/model_path
- traffic_light_recognition/classification/pedestrian/label_path
- use_detection_by_tracker [default: true]
- use_radar_tracking_fusion [default: true]
- input/radar [default: /sensing/radar/detected_objects]
- use_multi_channel_tracker_merger [default: false]
- downsample_perception_common_pointcloud [default: false]
- common_downsample_voxel_size_x [default: 0.05]
- common_downsample_voxel_size_y [default: 0.05]
- common_downsample_voxel_size_z [default: 0.05]
- launch/traffic_light_recognition/traffic_light.launch.xml
-
- enable_image_decompressor [default: true]
- fusion_only
- camera_namespaces
- use_high_accuracy_detection
- high_accuracy_detection_type
- each_traffic_light_map_based_detector_param_path
- traffic_light_fine_detector_param_path
- yolox_traffic_light_detector_param_path
- car_traffic_light_classifier_param_path
- pedestrian_traffic_light_classifier_param_path
- traffic_light_roi_visualizer_param_path
- traffic_light_occlusion_predictor_param_path
- traffic_light_multi_camera_fusion_param_path
- traffic_light_arbiter_param_path
- crosswalk_traffic_light_estimator_param_path
- whole_image_detection/model_path
- whole_image_detection/label_path
- fine_detection/model_path
- fine_detection/label_path
- classification/car/model_path
- classification/car/label_path
- classification/pedestrian/model_path
- classification/pedestrian/label_path
- input/vector_map [default: /map/vector_map]
- input/route [default: /planning/mission_planning/route]
- input/cloud [default: /sensing/lidar/top/pointcloud_raw_ex]
- internal/traffic_signals [default: /perception/traffic_light_recognition/internal/traffic_signals]
- external/traffic_signals [default: /perception/traffic_light_recognition/external/traffic_signals]
- judged/traffic_signals [default: /perception/traffic_light_recognition/judged/traffic_signals]
- output/traffic_signals [default: /perception/traffic_light_recognition/traffic_signals]
Messages
Services
Plugins
Recent questions tagged tier4_perception_launch at Robotics Stack Exchange
Package Summary
Tags | No category tags. |
Version | 0.47.0 |
License | Apache License 2.0 |
Build type | AMENT_CMAKE |
Use | RECOMMENDED |
Repository Summary
Description | |
Checkout URI | https://github.com/autowarefoundation/autoware_universe.git |
VCS Type | git |
VCS Version | main |
Last Updated | 2025-08-16 |
Dev Status | UNKNOWN |
Released | UNRELEASED |
Tags | planner ros calibration self-driving-car autonomous-driving autonomous-vehicles ros2 3d-map autoware |
Contributing |
Help Wanted (-)
Good First Issues (-) Pull Requests to Review (-) |
Package Description
Additional Links
Maintainers
- Yukihiro Saito
- Yoshi Ri
- Taekjin Lee
- Masato Saeki
Authors
tier4_perception_launch
Structure
Package Dependencies
Please see <exec_depend>
in package.xml
.
Usage
You can include as follows in *.launch.xml
to use perception.launch.xml
.
Note that you should provide parameter paths as PACKAGE_param_path
. The list of parameter paths you should provide is written at the top of perception.launch.xml
.
<include file="$(find-pkg-share tier4_perception_launch)/launch/perception.launch.xml">
<!-- options for mode: camera_lidar_fusion, lidar, camera -->
<arg name="mode" value="lidar" />
<!-- Parameter files -->
<arg name="FOO_param_path" value="..."/>
<arg name="BAR_param_path" value="..."/>
...
</include>
Changelog for package tier4_perception_launch
0.47.0 (2025-08-11)
-
feat(perception_online_evaluator): add functionality to publish perception analytics info (#11089)
* feat: add functionality to calculate perception metrics for MOB in autoware_perception_online_evaluator chore: configure settings for mob metrics calculation
* feat: change implementation from one topic per metric to all metrics published in one metric for better management by metric agent refactor: rename FrameMetrics member to clarify variable meaning refactor: use array/vector instead of unorder_map for FrameMetrics for better performance chore: remap published topic name to match msg conventions
- fix: unittest error
- style(pre-commit): autofix
- refactor: replace MOB keyword with generalized expression of perception analytics
- chore: improve comment
* refactor: add a new autoware_perception_analytics_publisher_node to publish perception analytics info instead of using previous autoware_perception_online_evaluator_node chore: modify default launch setting to match the refactoring
- style(pre-commit): autofix
* fix: add initialization for [latencies_]{.title-ref} fix: use tf of objects timestamp instead of latest feat: use ConstSharedPtr to avoid repeated copy of large message in [PerceptionAnalyticsCalculator::setPredictedObjects]{.title-ref} ---------Co-authored-by: Jian Kang <<jian.kang@tier4.jp>> Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
-
fix(multi_object_tracker): add irregular objects topic (#11102)
- fix(multi_object_tracker): add irregular objects topic
- fix: change channel order
* Update launch/tier4_perception_launch/launch/object_recognition/tracking/tracking.launch.xml Co-authored-by: Taekjin LEE <<technolojin@gmail.com>>
* Update launch/tier4_perception_launch/launch/object_recognition/tracking/tracking.launch.xml Co-authored-by: Taekjin LEE <<technolojin@gmail.com>>
* Update perception/autoware_multi_object_tracker/config/input_channels.param.yaml Co-authored-by: Taekjin LEE <<technolojin@gmail.com>>
* Update launch/tier4_perception_launch/launch/object_recognition/tracking/tracking.launch.xml Co-authored-by: Taekjin LEE <<technolojin@gmail.com>>
- fix: unused channels
- fix: schema
- docs: update readme
- style(pre-commit): autofix
- fix: short name
* feat: add lidar_centerpoint_short_range input channel with default flags ---------Co-authored-by: Taekjin LEE <<technolojin@gmail.com>> Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com> Co-authored-by: Taekjin LEE <<taekjin.lee@tier4.jp>>
-
chore: sync files (#11091) Co-authored-by: github-actions <<github-actions@github.com>> Co-authored-by: M. Fatih Cırıt <<mfc@autoware.org>> Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
-
fix(autoware_object_merger): add merger priority_mode (#11042)
* fix: add merger priority_mode fix: add priority mode into launch fix: add class based priority matrix fix: adjust priority matrix
- fix: add Confidence mode support
- docs: schema update
- fix: launch
* fix: schema json ---------
-
feat(tier4_perception_launch): add missing remappings to launch file (#11037)
-
feat(autoware_bevdet): implementation of bevdet using tensorrt (#10441)
-
feat(tracking): add short range detection support and update related
File truncated at 100 lines see the full file
Package Dependencies
System Dependencies
Dependant Packages
Launch files
- launch/object_recognition/detection/detection.launch.xml
-
- mode
- lidar_detection_model_type
- lidar_detection_model_name
- use_short_range_detection
- lidar_short_range_detection_model_type
- lidar_short_range_detection_model_name
- use_object_filter
- objects_filter_method
- use_pointcloud_map
- use_detection_by_tracker
- use_validator
- objects_validation_method
- use_low_intensity_cluster_filter
- use_image_segmentation_based_filter
- use_multi_channel_tracker_merger
- use_radar_tracking_fusion
- use_irregular_object_detector
- irregular_object_detector_fusion_camera_ids [default: [0]]
- ml_camera_lidar_merger_priority_mode
- number_of_cameras
- node/pointcloud_container
- input/pointcloud
- input/obstacle_segmentation/pointcloud [default: /perception/obstacle_segmentation/pointcloud]
- input/camera0/image
- input/camera0/info
- input/camera0/rois
- input/camera1/image
- input/camera1/info
- input/camera1/rois
- input/camera2/image
- input/camera2/info
- input/camera2/rois
- input/camera3/image
- input/camera3/info
- input/camera3/rois
- input/camera4/image
- input/camera4/info
- input/camera4/rois
- input/camera5/image
- input/camera5/info
- input/camera5/rois
- input/camera6/image
- input/camera6/info
- input/camera6/rois
- input/camera7/image
- input/camera7/info
- input/camera7/rois
- input/camera8/image
- input/camera8/info
- input/camera8/rois
- image_topic_name
- segmentation_pointcloud_fusion_camera_ids
- input/radar
- input/tracked_objects [default: /perception/object_recognition/tracking/objects]
- output/objects [default: objects]
- launch/object_recognition/detection/detector/camera_bev_detector.launch.xml
-
- input/camera0/image
- input/camera0/info
- input/camera1/image
- input/camera1/info
- input/camera2/image
- input/camera2/info
- input/camera3/image
- input/camera3/info
- input/camera4/image
- input/camera4/info
- input/camera5/image
- input/camera5/info
- input/camera6/image
- input/camera6/info
- input/camera7/image
- input/camera7/info
- output/objects
- number_of_cameras
- data_path [default: $(env HOME)/autoware_data]
- bevdet_model_name [default: bevdet_one_lt_d]
- bevdet_model_path [default: $(var data_path)/tensorrt_bevdet]
- launch/object_recognition/detection/detector/camera_lidar_detector.launch.xml
-
- ns
- lidar_detection_model_type
- lidar_detection_model_name
- use_low_intensity_cluster_filter
- use_image_segmentation_based_filter
- number_of_cameras
- input/camera0/image
- input/camera0/info
- input/camera0/rois
- input/camera1/image
- input/camera1/info
- input/camera1/rois
- input/camera2/image
- input/camera2/info
- input/camera2/rois
- input/camera3/image
- input/camera3/info
- input/camera3/rois
- input/camera4/image
- input/camera4/info
- input/camera4/rois
- input/camera5/image
- input/camera5/info
- input/camera5/rois
- input/camera6/image
- input/camera6/info
- input/camera6/rois
- input/camera7/image
- input/camera7/info
- input/camera7/rois
- input/camera8/image
- input/camera8/info
- input/camera8/rois
- segmentation_pointcloud_fusion_camera_ids
- image_topic_name
- node/pointcloud_container
- input/pointcloud
- input/pointcloud_map/pointcloud
- input/obstacle_segmentation/pointcloud
- output/ml_detector/objects
- output/rule_detector/objects
- output/clustering/cluster_objects
- launch/object_recognition/detection/detector/camera_lidar_irregular_object_detector.launch.xml
-
- ns
- pipeline_ns
- input/pointcloud
- fusion_camera_ids [default: [0]]
- image_topic_name [default: image_raw]
- irregular_object_detector_param_path
- launch/object_recognition/detection/detector/lidar_dnn_detector.launch.xml
-
- lidar_detection_model_type
- lidar_detection_model_name
- bevfusion_model_path [default: $(var data_path)/bevfusion]
- centerpoint_model_path [default: $(var data_path)/lidar_centerpoint]
- transfusion_model_path [default: $(var data_path)/lidar_transfusion]
- use_short_range_detection [default: false]
- lidar_short_range_detection_model_type
- lidar_short_range_detection_model_name
- short_range_centerpoint_model_path [default: $(var data_path)/lidar_short_range_centerpoint]
- node/pointcloud_container
- input/pointcloud
- output/objects
- output/short_range_objects
- lidar_short_range_model_param_path [default: $(find-pkg-share autoware_lidar_centerpoint)/config]
- lidar_model_param_path [default: $(find-pkg-share autoware_bevfusion)/config]
- lidar_model_param_path [default: $(find-pkg-share autoware_lidar_transfusion)/config]
- lidar_model_param_path [default: $(find-pkg-share autoware_lidar_centerpoint)/config]
- launch/object_recognition/detection/detector/lidar_rule_detector.launch.xml
-
- ns
- node/pointcloud_container
- input/pointcloud_map/pointcloud
- input/obstacle_segmentation/pointcloud
- output/cluster_objects
- output/objects
- launch/object_recognition/detection/detector/tracker_based_detector.launch.xml
-
- input/clusters
- input/tracked_objects
- output/objects
- launch/object_recognition/detection/filter/object_filter.launch.xml
-
- objects_filter_method [default: lanelet_filter]
- input/objects
- output/objects
- launch/object_recognition/detection/filter/object_validator.launch.xml
-
- objects_validation_method
- input/obstacle_pointcloud
- input/objects
- output/objects
- launch/object_recognition/detection/filter/radar_filter.launch.xml
-
- object_velocity_splitter_param_path [default: $(var object_recognition_detection_object_velocity_splitter_radar_param_path)]
- object_range_splitter_param_path [default: $(var object_recognition_detection_object_range_splitter_radar_param_path)]
- radar_lanelet_filtering_range_param_path [default: $(find-pkg-share autoware_detected_object_validation)/config/object_lanelet_filter.param.yaml]
- input/radar
- output/objects
- launch/object_recognition/detection/merger/camera_lidar_merger.launch.xml
-
- object_recognition_detection_object_merger_data_association_matrix_param_path [default: $(find-pkg-share autoware_object_merger)/config/data_association_matrix.param.yaml]
- object_recognition_detection_object_merger_distance_threshold_list_path [default: $(find-pkg-share autoware_object_merger)/config/overlapped_judge.param.yaml]
- ml_camera_lidar_object_association_merger_param_path [default: $(find-pkg-share autoware_object_merger)/config/object_association_merger.param.yaml]
- lidar_detection_model_type
- use_detection_by_tracker
- use_irregular_object_detector
- use_object_filter
- objects_filter_method
- number_of_cameras
- input/camera0/image
- input/camera0/info
- input/camera0/rois
- input/camera1/image
- input/camera1/info
- input/camera1/rois
- input/camera2/image
- input/camera2/info
- input/camera2/rois
- input/camera3/image
- input/camera3/info
- input/camera3/rois
- input/camera4/image
- input/camera4/info
- input/camera4/rois
- input/camera5/image
- input/camera5/info
- input/camera5/rois
- input/camera6/image
- input/camera6/info
- input/camera6/rois
- input/camera7/image
- input/camera7/info
- input/camera7/rois
- input/camera8/image
- input/camera8/info
- input/camera8/rois
- input/lidar_ml/objects
- input/lidar_rule/objects
- input/detection_by_tracker/objects
- output/objects [default: objects]
- alpha_merger_priority_mode [default: 0]
- launch/object_recognition/detection/merger/camera_lidar_radar_merger.launch.xml
-
- object_recognition_detection_object_merger_data_association_matrix_param_path [default: $(find-pkg-share autoware_object_merger)/config/data_association_matrix.param.yaml]
- object_recognition_detection_object_merger_distance_threshold_list_path [default: $(find-pkg-share autoware_object_merger)/config/overlapped_judge.param.yaml]
- ml_camera_lidar_object_association_merger_param_path [default: $(find-pkg-share autoware_object_merger)/config/object_association_merger.param.yaml]
- far_object_merger_sync_queue_size [default: 20]
- lidar_detection_model_type
- use_radar_tracking_fusion
- use_detection_by_tracker
- use_irregular_object_detector
- use_object_filter
- objects_filter_method
- number_of_cameras
- input/camera0/image
- input/camera0/info
- input/camera0/rois
- input/camera1/image
- input/camera1/info
- input/camera1/rois
- input/camera2/image
- input/camera2/info
- input/camera2/rois
- input/camera3/image
- input/camera3/info
- input/camera3/rois
- input/camera4/image
- input/camera4/info
- input/camera4/rois
- input/camera5/image
- input/camera5/info
- input/camera5/rois
- input/camera6/image
- input/camera6/info
- input/camera6/rois
- input/camera7/image
- input/camera7/info
- input/camera7/rois
- input/camera8/image
- input/camera8/info
- input/camera8/rois
- input/lidar_ml/objects
- input/lidar_rule/objects
- input/radar/objects
- input/radar_far/objects
- input/detection_by_tracker/objects
- output/objects [default: objects]
- alpha_merger_priority_mode [default: 0]
- launch/object_recognition/detection/merger/lidar_merger.launch.xml
-
- object_recognition_detection_object_merger_data_association_matrix_param_path [default: $(find-pkg-share autoware_object_merger)/config/data_association_matrix.param.yaml]
- object_recognition_detection_object_merger_distance_threshold_list_path [default: $(find-pkg-share autoware_object_merger)/config/overlapped_judge.param.yaml]
- lidar_detection_model_type
- use_detection_by_tracker
- use_object_filter
- objects_filter_method
- input/lidar_ml/objects [default: $(var lidar_detection_model_type)/objects]
- input/lidar_rule/objects [default: clustering/objects]
- input/detection_by_tracker/objects [default: detection_by_tracker/objects]
- output/objects
- launch/object_recognition/prediction/prediction.launch.xml
-
- use_vector_map [default: false]
- input/objects [default: /perception/object_recognition/tracking/objects]
- launch/object_recognition/tracking/tracking.launch.xml (see the example include after this list)
-
- object_recognition_tracking_radar_tracked_object_sorter_param_path
- object_recognition_tracking_radar_tracked_object_lanelet_filter_param_path
- object_recognition_tracking_object_merger_data_association_matrix_param_path
- object_recognition_tracking_object_merger_node_param_path
- mode [default: lidar]
- use_radar_tracking_fusion [default: false]
- use_multi_channel_tracker_merger
- use_validator
- use_short_range_detection
- lidar_detection_model_type [default: centerpoint]
- input/merged_detection/channel [default: detected_objects]
- input/merged_detection/objects [default: /perception/object_recognition/detection/objects]
- input/lidar_dnn/channel [default: lidar_$(var lidar_detection_model_type)]
- input/lidar_dnn/objects [default: /perception/object_recognition/detection/$(var lidar_detection_model_type)/objects]
- input/lidar_dnn_validated/objects [default: /perception/object_recognition/detection/$(var lidar_detection_model_type)/validation/objects]
- input/lidar_dnn_short_range/channel [default: lidar_$(var lidar_short_range_detection_model_type)]
- input/lidar_dnn_short_range/objects [default: /perception/object_recognition/detection/$(var lidar_short_range_detection_model_type)/objects]
- input/camera_lidar_rule_detector/channel [default: camera_lidar_fusion]
- input/camera_lidar_rule_detector/objects [default: /perception/object_recognition/detection/clustering/camera_lidar_fusion/objects]
- input/irregular_object_detector/channel [default: camera_lidar_fusion_irregular]
- input/irregular_object_detector/objects [default: /perception/object_recognition/detection/irregular_object/objects]
- input/tracker_based_detector/channel [default: detection_by_tracker]
- input/tracker_based_detector/objects [default: /perception/object_recognition/detection/detection_by_tracker/objects]
- input/radar/channel [default: radar]
- input/radar/far_objects [default: /perception/object_recognition/detection/radar/far_objects]
- input/radar/objects [default: /perception/object_recognition/detection/radar/objects]
- input/radar/tracked_objects [default: /sensing/radar/tracked_objects]
- output/objects [default: $(var ns)/objects]
- launch/occupancy_grid_map/probabilistic_occupancy_grid_map.launch.xml
-
- input/obstacle_pointcloud [default: concatenated/pointcloud]
- input/raw_pointcloud [default: no_ground/oneshot/pointcloud]
- output [default: /perception/occupancy_grid_map/map]
- use_intra_process [default: false]
- use_multithread [default: false]
- pointcloud_container_name [default: pointcloud_container]
- occupancy_grid_map_method
- occupancy_grid_map_param_path
- occupancy_grid_map_updater
- occupancy_grid_map_updater_param_path
- input_obstacle_pointcloud [default: false]
- input_obstacle_and_raw_pointcloud [default: true]
- use_pointcloud_container [default: true]
- launch/perception.launch.xml
-
- object_recognition_detection_euclidean_cluster_param_path
- object_recognition_detection_outlier_param_path
- object_recognition_detection_object_lanelet_filter_param_path
- object_recognition_detection_object_position_filter_param_path
- object_recognition_detection_pointcloud_map_filter_param_path
- object_recognition_prediction_map_based_prediction_param_path
- object_recognition_detection_object_merger_data_association_matrix_param_path
- ml_camera_lidar_object_association_merger_param_path
- object_recognition_detection_object_merger_distance_threshold_list_path
- object_recognition_detection_fusion_sync_param_path
- object_recognition_detection_roi_cluster_fusion_param_path
- object_recognition_detection_irregular_object_detector_param_path
- object_recognition_detection_roi_detected_object_fusion_param_path
- object_recognition_detection_pointpainting_fusion_common_param_path
- object_recognition_detection_lidar_model_param_path
- object_recognition_detection_radar_lanelet_filtering_range_param_path
- object_recognition_detection_object_velocity_splitter_radar_param_path
- object_recognition_detection_object_velocity_splitter_radar_fusion_param_path
- object_recognition_detection_object_range_splitter_radar_param_path
- object_recognition_detection_object_range_splitter_radar_fusion_param_path
- object_recognition_tracking_multi_object_tracker_data_association_matrix_param_path
- object_recognition_tracking_multi_object_tracker_input_channels_param_path
- object_recognition_tracking_multi_object_tracker_node_param_path
- object_recognition_tracking_radar_tracked_object_sorter_param_path
- object_recognition_tracking_radar_tracked_object_lanelet_filter_param_path
- obstacle_segmentation_ground_segmentation_param_path
- obstacle_segmentation_ground_segmentation_elevation_map_param_path
- object_recognition_detection_obstacle_pointcloud_based_validator_param_path
- object_recognition_detection_detection_by_tracker_param
- occupancy_grid_map_method
- occupancy_grid_map_param_path
- occupancy_grid_map_updater
- occupancy_grid_map_updater_param_path
- lidar_detection_model
- each_traffic_light_map_based_detector_param_path
- traffic_light_fine_detector_param_path
- yolox_traffic_light_detector_param_path
- car_traffic_light_classifier_param_path
- pedestrian_traffic_light_classifier_param_path
- traffic_light_roi_visualizer_param_path
- traffic_light_occlusion_predictor_param_path
- traffic_light_multi_camera_fusion_param_path
- traffic_light_arbiter_param_path
- crosswalk_traffic_light_estimator_param_path
- lidar_detection_model_type [default: $(eval "'$(var lidar_detection_model)'.split('/')[0]")] (see the note on model selection after this list)
- lidar_detection_model_name [default: $(eval "'$(var lidar_detection_model)'.split('/')[1] if '/' in '$(var lidar_detection_model)' else ''")]
- use_short_range_detection [default: false]
- lidar_short_range_detection_model_type [default: centerpoint_short_range]
- lidar_short_range_detection_model_name [default: centerpoint_short_range]
- bevfusion_model_path [default: $(var data_path)/bevfusion]
- centerpoint_model_path [default: $(var data_path)/lidar_centerpoint]
- transfusion_model_path [default: $(var data_path)/lidar_transfusion]
- short_range_centerpoint_model_path [default: $(var data_path)/lidar_short_range_centerpoint]
- pointpainting_model_path [default: $(var data_path)/image_projection_based_fusion]
- input/pointcloud [default: /sensing/lidar/concatenated/pointcloud]
- mode [default: camera_lidar_fusion]
- data_path [default: $(env HOME)/autoware_data]
- lidar_detection_model_type [default: $(var lidar_detection_model_type)]
- lidar_detection_model_name [default: $(var lidar_detection_model_name)]
- image_raw0 [default: /sensing/camera/camera0/image_rect_color]
- camera_info0 [default: /sensing/camera/camera0/camera_info]
- detection_rois0 [default: /perception/object_recognition/detection/rois0]
- image_raw1 [default: /sensing/camera/camera1/image_rect_color]
- camera_info1 [default: /sensing/camera/camera1/camera_info]
- detection_rois1 [default: /perception/object_recognition/detection/rois1]
- image_raw2 [default: /sensing/camera/camera2/image_rect_color]
- camera_info2 [default: /sensing/camera/camera2/camera_info]
- detection_rois2 [default: /perception/object_recognition/detection/rois2]
- image_raw3 [default: /sensing/camera/camera3/image_rect_color]
- camera_info3 [default: /sensing/camera/camera3/camera_info]
- detection_rois3 [default: /perception/object_recognition/detection/rois3]
- image_raw4 [default: /sensing/camera/camera4/image_rect_color]
- camera_info4 [default: /sensing/camera/camera4/camera_info]
- detection_rois4 [default: /perception/object_recognition/detection/rois4]
- image_raw5 [default: /sensing/camera/camera5/image_rect_color]
- camera_info5 [default: /sensing/camera/camera5/camera_info]
- detection_rois5 [default: /perception/object_recognition/detection/rois5]
- image_raw6 [default: /sensing/camera/camera6/image_rect_color]
- camera_info6 [default: /sensing/camera/camera6/camera_info]
- detection_rois6 [default: /perception/object_recognition/detection/rois6]
- image_raw7 [default: /sensing/camera/camera7/image_rect_color]
- camera_info7 [default: /sensing/camera/camera7/camera_info]
- detection_rois7 [default: /perception/object_recognition/detection/rois7]
- image_raw8 [default: /sensing/camera/camera8/image_rect_color]
- camera_info8 [default: /sensing/camera/camera8/camera_info]
- detection_rois8 [default: /perception/object_recognition/detection/rois8]
- image_number [default: 6]
- image_topic_name [default: image_rect_color]
- segmentation_pointcloud_fusion_camera_ids [default: [0,1,5]]
- ml_camera_lidar_merger_priority_mode [default: 0]
- pointcloud_container_name [default: pointcloud_container]
- use_vector_map [default: true]
- use_pointcloud_map [default: true]
- use_low_height_cropbox [default: true]
- use_object_filter [default: true]
- objects_filter_method [default: lanelet_filter]
- use_irregular_object_detector [default: true]
- use_low_intensity_cluster_filter [default: true]
- use_image_segmentation_based_filter [default: false]
- use_empty_dynamic_object_publisher [default: false]
- use_object_validator [default: true]
- objects_validation_method [default: obstacle_pointcloud]
- use_perception_online_evaluator [default: false]
- use_perception_analytics_publisher [default: true]
- use_obstacle_segmentation_single_frame_filter
- use_obstacle_segmentation_time_series_filter
- use_traffic_light_recognition
- traffic_light_recognition/fusion_only
- traffic_light_recognition/camera_namespaces
- traffic_light_recognition/use_high_accuracy_detection
- traffic_light_recognition/high_accuracy_detection_type
- traffic_light_recognition/whole_image_detection/model_path
- traffic_light_recognition/whole_image_detection/label_path
- traffic_light_recognition/fine_detection/model_path
- traffic_light_recognition/fine_detection/label_path
- traffic_light_recognition/classification/car/model_path
- traffic_light_recognition/classification/car/label_path
- traffic_light_recognition/classification/pedestrian/model_path
- traffic_light_recognition/classification/pedestrian/label_path
- use_detection_by_tracker [default: true]
- use_radar_tracking_fusion [default: true]
- input/radar [default: /sensing/radar/detected_objects]
- use_multi_channel_tracker_merger [default: false]
- downsample_perception_common_pointcloud [default: false]
- common_downsample_voxel_size_x [default: 0.05]
- common_downsample_voxel_size_y [default: 0.05]
- common_downsample_voxel_size_z [default: 0.05]
- launch/traffic_light_recognition/traffic_light.launch.xml
-
- enable_image_decompressor [default: true]
- fusion_only
- camera_namespaces
- use_high_accuracy_detection
- high_accuracy_detection_type
- each_traffic_light_map_based_detector_param_path
- traffic_light_fine_detector_param_path
- yolox_traffic_light_detector_param_path
- car_traffic_light_classifier_param_path
- pedestrian_traffic_light_classifier_param_path
- traffic_light_roi_visualizer_param_path
- traffic_light_occlusion_predictor_param_path
- traffic_light_multi_camera_fusion_param_path
- traffic_light_arbiter_param_path
- crosswalk_traffic_light_estimator_param_path
- whole_image_detection/model_path
- whole_image_detection/label_path
- fine_detection/model_path
- fine_detection/label_path
- classification/car/model_path
- classification/car/label_path
- classification/pedestrian/model_path
- classification/pedestrian/label_path
- input/vector_map [default: /map/vector_map]
- input/route [default: /planning/mission_planning/route]
- input/cloud [default: /sensing/lidar/top/pointcloud_raw_ex]
- internal/traffic_signals [default: /perception/traffic_light_recognition/internal/traffic_signals]
- external/traffic_signals [default: /perception/traffic_light_recognition/external/traffic_signals]
- judged/traffic_signals [default: /perception/traffic_light_recognition/judged/traffic_signals]
- output/traffic_signals [default: /perception/traffic_light_recognition/traffic_signals]
Package Summary
Tags | No category tags. |
Version | 0.47.0 |
License | Apache License 2.0 |
Build type | AMENT_CMAKE |
Use | RECOMMENDED |
Repository Summary
Description | |
Checkout URI | https://github.com/autowarefoundation/autoware_universe.git |
VCS Type | git |
VCS Version | main |
Last Updated | 2025-08-16 |
Dev Status | UNKNOWN |
Released | UNRELEASED |
Tags | planner ros calibration self-driving-car autonomous-driving autonomous-vehicles ros2 3d-map autoware |
Contributing |
Help Wanted (-)
Good First Issues (-) Pull Requests to Review (-) |
Package Description
Additional Links
Maintainers
- Yukihiro Saito
- Yoshi Ri
- Taekjin Lee
- Masato Saeki
Authors
tier4_perception_launch
Structure
Package Dependencies
Please see <exec_depend>
in package.xml
.
Usage
You can include as follows in *.launch.xml
to use perception.launch.xml
.
Note that you should provide parameter paths as PACKAGE_param_path
. The list of parameter paths you should provide is written at the top of perception.launch.xml
.
<include file="$(find-pkg-share tier4_perception_launch)/launch/perception.launch.xml">
<!-- options for mode: camera_lidar_fusion, lidar, camera -->
<arg name="mode" value="lidar" />
<!-- Parameter files -->
<arg name="FOO_param_path" value="..."/>
<arg name="BAR_param_path" value="..."/>
...
</include>
Changelog for package tier4_perception_launch
0.47.0 (2025-08-11)
-
feat(perception_online_evaluator): add functionality to publish perception analytics info (#11089)
* feat: add functionality to calculate perception metrics for MOB in autoware_perception_online_evaluator chore: configure settings for mob metrics calculation
* feat: change implementation from one topic per metric to all metrics published in one metric for better management by metric agent refactor: rename FrameMetrics member to clarify variable meaning refactor: use array/vector instead of unorder_map for FrameMetrics for better performance chore: remap published topic name to match msg conventions
- fix: unittest error
- style(pre-commit): autofix
- refactor: replace MOB keyword with generalized expression of perception analytics
- chore: improve comment
* refactor: add a new autoware_perception_analytics_publisher_node to publish perception analytics info instead of using previous autoware_perception_online_evaluator_node chore: modify default launch setting to match the refactoring
- style(pre-commit): autofix
* fix: add initialization for [latencies_]{.title-ref} fix: use tf of objects timestamp instead of latest feat: use ConstSharedPtr to avoid repeated copy of large message in [PerceptionAnalyticsCalculator::setPredictedObjects]{.title-ref} ---------Co-authored-by: Jian Kang <<jian.kang@tier4.jp>> Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
-
fix(multi_object_tracker): add irregular objects topic (#11102)
- fix(multi_object_tracker): add irregular objects topic
- fix: change channel order
* Update launch/tier4_perception_launch/launch/object_recognition/tracking/tracking.launch.xml Co-authored-by: Taekjin LEE <<technolojin@gmail.com>>
* Update launch/tier4_perception_launch/launch/object_recognition/tracking/tracking.launch.xml Co-authored-by: Taekjin LEE <<technolojin@gmail.com>>
* Update perception/autoware_multi_object_tracker/config/input_channels.param.yaml Co-authored-by: Taekjin LEE <<technolojin@gmail.com>>
* Update launch/tier4_perception_launch/launch/object_recognition/tracking/tracking.launch.xml Co-authored-by: Taekjin LEE <<technolojin@gmail.com>>
- fix: unused channels
- fix: schema
- docs: update readme
- style(pre-commit): autofix
- fix: short name
* feat: add lidar_centerpoint_short_range input channel with default flags ---------Co-authored-by: Taekjin LEE <<technolojin@gmail.com>> Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com> Co-authored-by: Taekjin LEE <<taekjin.lee@tier4.jp>>
-
chore: sync files (#11091) Co-authored-by: github-actions <<github-actions@github.com>> Co-authored-by: M. Fatih Cırıt <<mfc@autoware.org>> Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
-
fix(autoware_object_merger): add merger priority_mode (#11042)
* fix: add merger priority_mode fix: add priority mode into launch fix: add class based priority matrix fix: adjust priority matrix
- fix: add Confidence mode support
- docs: schema update
- fix: launch
* fix: schema json ---------
-
feat(tier4_perception_launch): add missing remappings to launch file (#11037)
-
feat(autoware_bevdet): implementation of bevdet using tensorrt (#10441)
-
feat(tracking): add short range detection support and update related
File truncated at 100 lines see the full file
Package Dependencies
System Dependencies
Dependant Packages
Launch files
- launch/object_recognition/detection/detection.launch.xml
-
- mode
- lidar_detection_model_type
- lidar_detection_model_name
- use_short_range_detection
- lidar_short_range_detection_model_type
- lidar_short_range_detection_model_name
- use_object_filter
- objects_filter_method
- use_pointcloud_map
- use_detection_by_tracker
- use_validator
- objects_validation_method
- use_low_intensity_cluster_filter
- use_image_segmentation_based_filter
- use_multi_channel_tracker_merger
- use_radar_tracking_fusion
- use_irregular_object_detector
- irregular_object_detector_fusion_camera_ids [default: [0]]
- ml_camera_lidar_merger_priority_mode
- number_of_cameras
- node/pointcloud_container
- input/pointcloud
- input/obstacle_segmentation/pointcloud [default: /perception/obstacle_segmentation/pointcloud]
- input/camera0/image
- input/camera0/info
- input/camera0/rois
- input/camera1/image
- input/camera1/info
- input/camera1/rois
- input/camera2/image
- input/camera2/info
- input/camera2/rois
- input/camera3/image
- input/camera3/info
- input/camera3/rois
- input/camera4/image
- input/camera4/info
- input/camera4/rois
- input/camera5/image
- input/camera5/info
- input/camera5/rois
- input/camera6/image
- input/camera6/info
- input/camera6/rois
- input/camera7/image
- input/camera7/info
- input/camera7/rois
- input/camera8/image
- input/camera8/info
- input/camera8/rois
- image_topic_name
- segmentation_pointcloud_fusion_camera_ids
- input/radar
- input/tracked_objects [default: /perception/object_recognition/tracking/objects]
- output/objects [default: objects]
- launch/object_recognition/detection/detector/camera_bev_detector.launch.xml
-
- input/camera0/image
- input/camera0/info
- input/camera1/image
- input/camera1/info
- input/camera2/image
- input/camera2/info
- input/camera3/image
- input/camera3/info
- input/camera4/image
- input/camera4/info
- input/camera5/image
- input/camera5/info
- input/camera6/image
- input/camera6/info
- input/camera7/image
- input/camera7/info
- output/objects
- number_of_cameras
- data_path [default: $(env HOME)/autoware_data]
- bevdet_model_name [default: bevdet_one_lt_d]
- bevdet_model_path [default: $(var data_path)/tensorrt_bevdet]
- launch/object_recognition/detection/detector/camera_lidar_detector.launch.xml
-
- ns
- lidar_detection_model_type
- lidar_detection_model_name
- use_low_intensity_cluster_filter
- use_image_segmentation_based_filter
- number_of_cameras
- input/camera0/image
- input/camera0/info
- input/camera0/rois
- input/camera1/image
- input/camera1/info
- input/camera1/rois
- input/camera2/image
- input/camera2/info
- input/camera2/rois
- input/camera3/image
- input/camera3/info
- input/camera3/rois
- input/camera4/image
- input/camera4/info
- input/camera4/rois
- input/camera5/image
- input/camera5/info
- input/camera5/rois
- input/camera6/image
- input/camera6/info
- input/camera6/rois
- input/camera7/image
- input/camera7/info
- input/camera7/rois
- input/camera8/image
- input/camera8/info
- input/camera8/rois
- segmentation_pointcloud_fusion_camera_ids
- image_topic_name
- node/pointcloud_container
- input/pointcloud
- input/pointcloud_map/pointcloud
- input/obstacle_segmentation/pointcloud
- output/ml_detector/objects
- output/rule_detector/objects
- output/clustering/cluster_objects
- launch/object_recognition/detection/detector/camera_lidar_irregular_object_detector.launch.xml
-
- ns
- pipeline_ns
- input/pointcloud
- fusion_camera_ids [default: [0]]
- image_topic_name [default: image_raw]
- irregular_object_detector_param_path
- launch/object_recognition/detection/detector/lidar_dnn_detector.launch.xml
-
- lidar_detection_model_type
- lidar_detection_model_name
- bevfusion_model_path [default: $(var data_path)/bevfusion]
- centerpoint_model_path [default: $(var data_path)/lidar_centerpoint]
- transfusion_model_path [default: $(var data_path)/lidar_transfusion]
- use_short_range_detection [default: false]
- lidar_short_range_detection_model_type
- lidar_short_range_detection_model_name
- short_range_centerpoint_model_path [default: $(var data_path)/lidar_short_range_centerpoint]
- node/pointcloud_container
- input/pointcloud
- output/objects
- output/short_range_objects
- lidar_short_range_model_param_path [default: $(find-pkg-share autoware_lidar_centerpoint)/config]
- lidar_model_param_path [default: $(find-pkg-share autoware_bevfusion)/config]
- lidar_model_param_path [default: $(find-pkg-share autoware_lidar_transfusion)/config]
- lidar_model_param_path [default: $(find-pkg-share autoware_lidar_centerpoint)/config]
- launch/object_recognition/detection/detector/lidar_rule_detector.launch.xml
-
- ns
- node/pointcloud_container
- input/pointcloud_map/pointcloud
- input/obstacle_segmentation/pointcloud
- output/cluster_objects
- output/objects
- launch/object_recognition/detection/detector/tracker_based_detector.launch.xml
-
- input/clusters
- input/tracked_objects
- output/objects
- launch/object_recognition/detection/filter/object_filter.launch.xml
-
- objects_filter_method [default: lanelet_filter]
- input/objects
- output/objects
- launch/object_recognition/detection/filter/object_validator.launch.xml
-
- objects_validation_method
- input/obstacle_pointcloud
- input/objects
- output/objects
- launch/object_recognition/detection/filter/radar_filter.launch.xml
-
- object_velocity_splitter_param_path [default: $(var object_recognition_detection_object_velocity_splitter_radar_param_path)]
- object_range_splitter_param_path [default: $(var object_recognition_detection_object_range_splitter_radar_param_path)]
- radar_lanelet_filtering_range_param_path [default: $(find-pkg-share autoware_detected_object_validation)/config/object_lanelet_filter.param.yaml]
- input/radar
- output/objects
- launch/object_recognition/detection/merger/camera_lidar_merger.launch.xml
-
- object_recognition_detection_object_merger_data_association_matrix_param_path [default: $(find-pkg-share autoware_object_merger)/config/data_association_matrix.param.yaml]
- object_recognition_detection_object_merger_distance_threshold_list_path [default: $(find-pkg-share autoware_object_merger)/config/overlapped_judge.param.yaml]
- ml_camera_lidar_object_association_merger_param_path [default: $(find-pkg-share autoware_object_merger)/config/object_association_merger.param.yaml]
- lidar_detection_model_type
- use_detection_by_tracker
- use_irregular_object_detector
- use_object_filter
- objects_filter_method
- number_of_cameras
- input/camera0/image
- input/camera0/info
- input/camera0/rois
- input/camera1/image
- input/camera1/info
- input/camera1/rois
- input/camera2/image
- input/camera2/info
- input/camera2/rois
- input/camera3/image
- input/camera3/info
- input/camera3/rois
- input/camera4/image
- input/camera4/info
- input/camera4/rois
- input/camera5/image
- input/camera5/info
- input/camera5/rois
- input/camera6/image
- input/camera6/info
- input/camera6/rois
- input/camera7/image
- input/camera7/info
- input/camera7/rois
- input/camera8/image
- input/camera8/info
- input/camera8/rois
- input/lidar_ml/objects
- input/lidar_rule/objects
- input/detection_by_tracker/objects
- output/objects [default: objects]
- alpha_merger_priority_mode [default: 0]
- launch/object_recognition/detection/merger/camera_lidar_radar_merger.launch.xml
-
- object_recognition_detection_object_merger_data_association_matrix_param_path [default: $(find-pkg-share autoware_object_merger)/config/data_association_matrix.param.yaml]
- object_recognition_detection_object_merger_distance_threshold_list_path [default: $(find-pkg-share autoware_object_merger)/config/overlapped_judge.param.yaml]
- ml_camera_lidar_object_association_merger_param_path [default: $(find-pkg-share autoware_object_merger)/config/object_association_merger.param.yaml]
- far_object_merger_sync_queue_size [default: 20]
- lidar_detection_model_type
- use_radar_tracking_fusion
- use_detection_by_tracker
- use_irregular_object_detector
- use_object_filter
- objects_filter_method
- number_of_cameras
- input/camera0/image
- input/camera0/info
- input/camera0/rois
- input/camera1/image
- input/camera1/info
- input/camera1/rois
- input/camera2/image
- input/camera2/info
- input/camera2/rois
- input/camera3/image
- input/camera3/info
- input/camera3/rois
- input/camera4/image
- input/camera4/info
- input/camera4/rois
- input/camera5/image
- input/camera5/info
- input/camera5/rois
- input/camera6/image
- input/camera6/info
- input/camera6/rois
- input/camera7/image
- input/camera7/info
- input/camera7/rois
- input/camera8/image
- input/camera8/info
- input/camera8/rois
- input/lidar_ml/objects
- input/lidar_rule/objects
- input/radar/objects
- input/radar_far/objects
- input/detection_by_tracker/objects
- output/objects [default: objects]
- alpha_merger_priority_mode [default: 0]
- launch/object_recognition/detection/merger/lidar_merger.launch.xml
-
- object_recognition_detection_object_merger_data_association_matrix_param_path [default: $(find-pkg-share autoware_object_merger)/config/data_association_matrix.param.yaml]
- object_recognition_detection_object_merger_distance_threshold_list_path [default: $(find-pkg-share autoware_object_merger)/config/overlapped_judge.param.yaml]
- lidar_detection_model_type
- use_detection_by_tracker
- use_object_filter
- objects_filter_method
- input/lidar_ml/objects [default: $(var lidar_detection_model_type)/objects]
- input/lidar_rule/objects [default: clustering/objects]
- input/detection_by_tracker/objects [default: detection_by_tracker/objects]
- output/objects
- launch/object_recognition/prediction/prediction.launch.xml
-
- use_vector_map [default: false]
- input/objects [default: /perception/object_recognition/tracking/objects]
- launch/object_recognition/tracking/tracking.launch.xml
-
- object_recognition_tracking_radar_tracked_object_sorter_param_path
- object_recognition_tracking_radar_tracked_object_lanelet_filter_param_path
- object_recognition_tracking_object_merger_data_association_matrix_param_path
- object_recognition_tracking_object_merger_node_param_path
- mode [default: lidar]
- use_radar_tracking_fusion [default: false]
- use_multi_channel_tracker_merger
- use_validator
- use_short_range_detection
- lidar_detection_model_type [default: centerpoint]
- input/merged_detection/channel [default: detected_objects]
- input/merged_detection/objects [default: /perception/object_recognition/detection/objects]
- input/lidar_dnn/channel [default: lidar_$(var lidar_detection_model_type)]
- input/lidar_dnn/objects [default: /perception/object_recognition/detection/$(var lidar_detection_model_type)/objects]
- input/lidar_dnn_validated/objects [default: /perception/object_recognition/detection/$(var lidar_detection_model_type)/validation/objects]
- input/lidar_dnn_short_range/channel [default: lidar_$(var lidar_short_range_detection_model_type)]
- input/lidar_dnn_short_range/objects [default: /perception/object_recognition/detection/$(var lidar_short_range_detection_model_type)/objects]
- input/camera_lidar_rule_detector/channel [default: camera_lidar_fusion]
- input/camera_lidar_rule_detector/objects [default: /perception/object_recognition/detection/clustering/camera_lidar_fusion/objects]
- input/irregular_object_detector/channel [default: camera_lidar_fusion_irregular]
- input/irregular_object_detector/objects [default: /perception/object_recognition/detection/irregular_object/objects]
- input/tracker_based_detector/channel [default: detection_by_tracker]
- input/tracker_based_detector/objects [default: /perception/object_recognition/detection/detection_by_tracker/objects]
- input/radar/channel [default: radar]
- input/radar/far_objects [default: /perception/object_recognition/detection/radar/far_objects]
- input/radar/objects [default: /perception/object_recognition/detection/radar/objects]
- input/radar/tracked_objects [default: /sensing/radar/tracked_objects]
- output/objects [default: $(var ns)/objects]
- launch/occupancy_grid_map/probabilistic_occupancy_grid_map.launch.xml
-
- input/obstacle_pointcloud [default: concatenated/pointcloud]
- input/raw_pointcloud [default: no_ground/oneshot/pointcloud]
- output [default: /perception/occupancy_grid_map/map]
- use_intra_process [default: false]
- use_multithread [default: false]
- pointcloud_container_name [default: pointcloud_container]
- occupancy_grid_map_method
- occupancy_grid_map_param_path
- occupancy_grid_map_updater
- occupancy_grid_map_updater_param_path
- input_obstacle_pointcloud [default: false]
- input_obstacle_and_raw_pointcloud [default: true]
- use_pointcloud_container [default: true]
- launch/perception.launch.xml
-
- object_recognition_detection_euclidean_cluster_param_path
- object_recognition_detection_outlier_param_path
- object_recognition_detection_object_lanelet_filter_param_path
- object_recognition_detection_object_position_filter_param_path
- object_recognition_detection_pointcloud_map_filter_param_path
- object_recognition_prediction_map_based_prediction_param_path
- object_recognition_detection_object_merger_data_association_matrix_param_path
- ml_camera_lidar_object_association_merger_param_path
- object_recognition_detection_object_merger_distance_threshold_list_path
- object_recognition_detection_fusion_sync_param_path
- object_recognition_detection_roi_cluster_fusion_param_path
- object_recognition_detection_irregular_object_detector_param_path
- object_recognition_detection_roi_detected_object_fusion_param_path
- object_recognition_detection_pointpainting_fusion_common_param_path
- object_recognition_detection_lidar_model_param_path
- object_recognition_detection_radar_lanelet_filtering_range_param_path
- object_recognition_detection_object_velocity_splitter_radar_param_path
- object_recognition_detection_object_velocity_splitter_radar_fusion_param_path
- object_recognition_detection_object_range_splitter_radar_param_path
- object_recognition_detection_object_range_splitter_radar_fusion_param_path
- object_recognition_tracking_multi_object_tracker_data_association_matrix_param_path
- object_recognition_tracking_multi_object_tracker_input_channels_param_path
- object_recognition_tracking_multi_object_tracker_node_param_path
- object_recognition_tracking_radar_tracked_object_sorter_param_path
- object_recognition_tracking_radar_tracked_object_lanelet_filter_param_path
- obstacle_segmentation_ground_segmentation_param_path
- obstacle_segmentation_ground_segmentation_elevation_map_param_path
- object_recognition_detection_obstacle_pointcloud_based_validator_param_path
- object_recognition_detection_detection_by_tracker_param
- occupancy_grid_map_method
- occupancy_grid_map_param_path
- occupancy_grid_map_updater
- occupancy_grid_map_updater_param_path
- lidar_detection_model
- each_traffic_light_map_based_detector_param_path
- traffic_light_fine_detector_param_path
- yolox_traffic_light_detector_param_path
- car_traffic_light_classifier_param_path
- pedestrian_traffic_light_classifier_param_path
- traffic_light_roi_visualizer_param_path
- traffic_light_occlusion_predictor_param_path
- traffic_light_multi_camera_fusion_param_path
- traffic_light_arbiter_param_path
- crosswalk_traffic_light_estimator_param_path
- lidar_detection_model_type [default: $(eval "'$(var lidar_detection_model)'.split('/')[0]")]
- lidar_detection_model_name [default: $(eval "'$(var lidar_detection_model)'.split('/')[1] if '/' in '$(var lidar_detection_model)' else ''")]
- use_short_range_detection [default: false]
- lidar_short_range_detection_model_type [default: centerpoint_short_range]
- lidar_short_range_detection_model_name [default: centerpoint_short_range]
- bevfusion_model_path [default: $(var data_path)/bevfusion]
- centerpoint_model_path [default: $(var data_path)/lidar_centerpoint]
- transfusion_model_path [default: $(var data_path)/lidar_transfusion]
- short_range_centerpoint_model_path [default: $(var data_path)/lidar_short_range_centerpoint]
- pointpainting_model_path [default: $(var data_path)/image_projection_based_fusion]
- input/pointcloud [default: /sensing/lidar/concatenated/pointcloud]
- mode [default: camera_lidar_fusion]
- data_path [default: $(env HOME)/autoware_data]
- lidar_detection_model_type [default: $(var lidar_detection_model_type)]
- lidar_detection_model_name [default: $(var lidar_detection_model_name)]
- image_raw0 [default: /sensing/camera/camera0/image_rect_color]
- camera_info0 [default: /sensing/camera/camera0/camera_info]
- detection_rois0 [default: /perception/object_recognition/detection/rois0]
- image_raw1 [default: /sensing/camera/camera1/image_rect_color]
- camera_info1 [default: /sensing/camera/camera1/camera_info]
- detection_rois1 [default: /perception/object_recognition/detection/rois1]
- image_raw2 [default: /sensing/camera/camera2/image_rect_color]
- camera_info2 [default: /sensing/camera/camera2/camera_info]
- detection_rois2 [default: /perception/object_recognition/detection/rois2]
- image_raw3 [default: /sensing/camera/camera3/image_rect_color]
- camera_info3 [default: /sensing/camera/camera3/camera_info]
- detection_rois3 [default: /perception/object_recognition/detection/rois3]
- image_raw4 [default: /sensing/camera/camera4/image_rect_color]
- camera_info4 [default: /sensing/camera/camera4/camera_info]
- detection_rois4 [default: /perception/object_recognition/detection/rois4]
- image_raw5 [default: /sensing/camera/camera5/image_rect_color]
- camera_info5 [default: /sensing/camera/camera5/camera_info]
- detection_rois5 [default: /perception/object_recognition/detection/rois5]
- image_raw6 [default: /sensing/camera/camera6/image_rect_color]
- camera_info6 [default: /sensing/camera/camera6/camera_info]
- detection_rois6 [default: /perception/object_recognition/detection/rois6]
- image_raw7 [default: /sensing/camera/camera7/image_rect_color]
- camera_info7 [default: /sensing/camera/camera7/camera_info]
- detection_rois7 [default: /perception/object_recognition/detection/rois7]
- image_raw8 [default: /sensing/camera/camera8/image_rect_color]
- camera_info8 [default: /sensing/camera/camera8/camera_info]
- detection_rois8 [default: /perception/object_recognition/detection/rois8]
- image_number [default: 6]
- image_topic_name [default: image_rect_color]
- segmentation_pointcloud_fusion_camera_ids [default: [0,1,5]]
- ml_camera_lidar_merger_priority_mode [default: 0]
- pointcloud_container_name [default: pointcloud_container]
- use_vector_map [default: true]
- use_pointcloud_map [default: true]
- use_low_height_cropbox [default: true]
- use_object_filter [default: true]
- objects_filter_method [default: lanelet_filter]
- use_irregular_object_detector [default: true]
- use_low_intensity_cluster_filter [default: true]
- use_image_segmentation_based_filter [default: false]
- use_empty_dynamic_object_publisher [default: false]
- use_object_validator [default: true]
- objects_validation_method [default: obstacle_pointcloud]
- use_perception_online_evaluator [default: false]
- use_perception_analytics_publisher [default: true]
- use_obstacle_segmentation_single_frame_filter
- use_obstacle_segmentation_time_series_filter
- use_traffic_light_recognition
- traffic_light_recognition/fusion_only
- traffic_light_recognition/camera_namespaces
- traffic_light_recognition/use_high_accuracy_detection
- traffic_light_recognition/high_accuracy_detection_type
- traffic_light_recognition/whole_image_detection/model_path
- traffic_light_recognition/whole_image_detection/label_path
- traffic_light_recognition/fine_detection/model_path
- traffic_light_recognition/fine_detection/label_path
- traffic_light_recognition/classification/car/model_path
- traffic_light_recognition/classification/car/label_path
- traffic_light_recognition/classification/pedestrian/model_path
- traffic_light_recognition/classification/pedestrian/label_path
- use_detection_by_tracker [default: true]
- use_radar_tracking_fusion [default: true]
- input/radar [default: /sensing/radar/detected_objects]
- use_multi_channel_tracker_merger [default: false]
- downsample_perception_common_pointcloud [default: false]
- common_downsample_voxel_size_x [default: 0.05]
- common_downsample_voxel_size_y [default: 0.05]
- common_downsample_voxel_size_z [default: 0.05]
- launch/traffic_light_recognition/traffic_light.launch.xml
-
- enable_image_decompressor [default: true]
- fusion_only
- camera_namespaces
- use_high_accuracy_detection
- high_accuracy_detection_type
- each_traffic_light_map_based_detector_param_path
- traffic_light_fine_detector_param_path
- yolox_traffic_light_detector_param_path
- car_traffic_light_classifier_param_path
- pedestrian_traffic_light_classifier_param_path
- traffic_light_roi_visualizer_param_path
- traffic_light_occlusion_predictor_param_path
- traffic_light_multi_camera_fusion_param_path
- traffic_light_arbiter_param_path
- crosswalk_traffic_light_estimator_param_path
- whole_image_detection/model_path
- whole_image_detection/label_path
- fine_detection/model_path
- fine_detection/label_path
- classification/car/model_path
- classification/car/label_path
- classification/pedestrian/model_path
- classification/pedestrian/label_path
- input/vector_map [default: /map/vector_map]
- input/route [default: /planning/mission_planning/route]
- input/cloud [default: /sensing/lidar/top/pointcloud_raw_ex]
- internal/traffic_signals [default: /perception/traffic_light_recognition/internal/traffic_signals]
- external/traffic_signals [default: /perception/traffic_light_recognition/external/traffic_signals]
- judged/traffic_signals [default: /perception/traffic_light_recognition/judged/traffic_signals]
- output/traffic_signals [default: /perception/traffic_light_recognition/traffic_signals]
Messages
Services
Plugins
Recent questions tagged tier4_perception_launch at Robotics Stack Exchange
Package Summary
Tags | No category tags. |
Version | 0.47.0 |
License | Apache License 2.0 |
Build type | AMENT_CMAKE |
Use | RECOMMENDED |
Repository Summary
Description | |
Checkout URI | https://github.com/autowarefoundation/autoware_universe.git |
VCS Type | git |
VCS Version | main |
Last Updated | 2025-08-16 |
Dev Status | UNKNOWN |
Released | UNRELEASED |
Tags | planner ros calibration self-driving-car autonomous-driving autonomous-vehicles ros2 3d-map autoware |
Contributing |
Help Wanted (-)
Good First Issues (-) Pull Requests to Review (-) |
Package Description
Additional Links
Maintainers
- Yukihiro Saito
- Yoshi Ri
- Taekjin Lee
- Masato Saeki
Authors
tier4_perception_launch
Structure
Package Dependencies
Please see <exec_depend>
in package.xml
.
Usage
You can include as follows in *.launch.xml
to use perception.launch.xml
.
Note that you should provide parameter paths as PACKAGE_param_path
. The list of parameter paths you should provide is written at the top of perception.launch.xml
.
<include file="$(find-pkg-share tier4_perception_launch)/launch/perception.launch.xml">
<!-- options for mode: camera_lidar_fusion, lidar, camera -->
<arg name="mode" value="lidar" />
<!-- Parameter files -->
<arg name="FOO_param_path" value="..."/>
<arg name="BAR_param_path" value="..."/>
...
</include>
Changelog for package tier4_perception_launch
0.47.0 (2025-08-11)
-
feat(perception_online_evaluator): add functionality to publish perception analytics info (#11089)
* feat: add functionality to calculate perception metrics for MOB in autoware_perception_online_evaluator chore: configure settings for mob metrics calculation
* feat: change implementation from one topic per metric to all metrics published in one metric for better management by metric agent refactor: rename FrameMetrics member to clarify variable meaning refactor: use array/vector instead of unorder_map for FrameMetrics for better performance chore: remap published topic name to match msg conventions
- fix: unittest error
- style(pre-commit): autofix
- refactor: replace MOB keyword with generalized expression of perception analytics
- chore: improve comment
* refactor: add a new autoware_perception_analytics_publisher_node to publish perception analytics info instead of using previous autoware_perception_online_evaluator_node chore: modify default launch setting to match the refactoring
- style(pre-commit): autofix
* fix: add initialization for [latencies_]{.title-ref} fix: use tf of objects timestamp instead of latest feat: use ConstSharedPtr to avoid repeated copy of large message in [PerceptionAnalyticsCalculator::setPredictedObjects]{.title-ref} ---------Co-authored-by: Jian Kang <<jian.kang@tier4.jp>> Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
-
fix(multi_object_tracker): add irregular objects topic (#11102)
- fix(multi_object_tracker): add irregular objects topic
- fix: change channel order
* Update launch/tier4_perception_launch/launch/object_recognition/tracking/tracking.launch.xml Co-authored-by: Taekjin LEE <<technolojin@gmail.com>>
* Update launch/tier4_perception_launch/launch/object_recognition/tracking/tracking.launch.xml Co-authored-by: Taekjin LEE <<technolojin@gmail.com>>
* Update perception/autoware_multi_object_tracker/config/input_channels.param.yaml Co-authored-by: Taekjin LEE <<technolojin@gmail.com>>
* Update launch/tier4_perception_launch/launch/object_recognition/tracking/tracking.launch.xml Co-authored-by: Taekjin LEE <<technolojin@gmail.com>>
- fix: unused channels
- fix: schema
- docs: update readme
- style(pre-commit): autofix
- fix: short name
* feat: add lidar_centerpoint_short_range input channel with default flags ---------Co-authored-by: Taekjin LEE <<technolojin@gmail.com>> Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com> Co-authored-by: Taekjin LEE <<taekjin.lee@tier4.jp>>
-
chore: sync files (#11091) Co-authored-by: github-actions <<github-actions@github.com>> Co-authored-by: M. Fatih Cırıt <<mfc@autoware.org>> Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
-
fix(autoware_object_merger): add merger priority_mode (#11042)
* fix: add merger priority_mode fix: add priority mode into launch fix: add class based priority matrix fix: adjust priority matrix
- fix: add Confidence mode support
- docs: schema update
- fix: launch
* fix: schema json ---------
-
feat(tier4_perception_launch): add missing remappings to launch file (#11037)
-
feat(autoware_bevdet): implementation of bevdet using tensorrt (#10441)
-
feat(tracking): add short range detection support and update related
File truncated at 100 lines see the full file
Package Dependencies
System Dependencies
Dependant Packages
Launch files
- launch/object_recognition/detection/detection.launch.xml
-
- mode
- lidar_detection_model_type
- lidar_detection_model_name
- use_short_range_detection
- lidar_short_range_detection_model_type
- lidar_short_range_detection_model_name
- use_object_filter
- objects_filter_method
- use_pointcloud_map
- use_detection_by_tracker
- use_validator
- objects_validation_method
- use_low_intensity_cluster_filter
- use_image_segmentation_based_filter
- use_multi_channel_tracker_merger
- use_radar_tracking_fusion
- use_irregular_object_detector
- irregular_object_detector_fusion_camera_ids [default: [0]]
- ml_camera_lidar_merger_priority_mode
- number_of_cameras
- node/pointcloud_container
- input/pointcloud
- input/obstacle_segmentation/pointcloud [default: /perception/obstacle_segmentation/pointcloud]
- input/camera0/image
- input/camera0/info
- input/camera0/rois
- input/camera1/image
- input/camera1/info
- input/camera1/rois
- input/camera2/image
- input/camera2/info
- input/camera2/rois
- input/camera3/image
- input/camera3/info
- input/camera3/rois
- input/camera4/image
- input/camera4/info
- input/camera4/rois
- input/camera5/image
- input/camera5/info
- input/camera5/rois
- input/camera6/image
- input/camera6/info
- input/camera6/rois
- input/camera7/image
- input/camera7/info
- input/camera7/rois
- input/camera8/image
- input/camera8/info
- input/camera8/rois
- image_topic_name
- segmentation_pointcloud_fusion_camera_ids
- input/radar
- input/tracked_objects [default: /perception/object_recognition/tracking/objects]
- output/objects [default: objects]
- launch/object_recognition/detection/detector/camera_bev_detector.launch.xml
-
- input/camera0/image
- input/camera0/info
- input/camera1/image
- input/camera1/info
- input/camera2/image
- input/camera2/info
- input/camera3/image
- input/camera3/info
- input/camera4/image
- input/camera4/info
- input/camera5/image
- input/camera5/info
- input/camera6/image
- input/camera6/info
- input/camera7/image
- input/camera7/info
- output/objects
- number_of_cameras
- data_path [default: $(env HOME)/autoware_data]
- bevdet_model_name [default: bevdet_one_lt_d]
- bevdet_model_path [default: $(var data_path)/tensorrt_bevdet]
- launch/object_recognition/detection/detector/camera_lidar_detector.launch.xml
-
- ns
- lidar_detection_model_type
- lidar_detection_model_name
- use_low_intensity_cluster_filter
- use_image_segmentation_based_filter
- number_of_cameras
- input/camera0/image
- input/camera0/info
- input/camera0/rois
- input/camera1/image
- input/camera1/info
- input/camera1/rois
- input/camera2/image
- input/camera2/info
- input/camera2/rois
- input/camera3/image
- input/camera3/info
- input/camera3/rois
- input/camera4/image
- input/camera4/info
- input/camera4/rois
- input/camera5/image
- input/camera5/info
- input/camera5/rois
- input/camera6/image
- input/camera6/info
- input/camera6/rois
- input/camera7/image
- input/camera7/info
- input/camera7/rois
- input/camera8/image
- input/camera8/info
- input/camera8/rois
- segmentation_pointcloud_fusion_camera_ids
- image_topic_name
- node/pointcloud_container
- input/pointcloud
- input/pointcloud_map/pointcloud
- input/obstacle_segmentation/pointcloud
- output/ml_detector/objects
- output/rule_detector/objects
- output/clustering/cluster_objects
- launch/object_recognition/detection/detector/camera_lidar_irregular_object_detector.launch.xml
-
- ns
- pipeline_ns
- input/pointcloud
- fusion_camera_ids [default: [0]]
- image_topic_name [default: image_raw]
- irregular_object_detector_param_path
- launch/object_recognition/detection/detector/lidar_dnn_detector.launch.xml
-
- lidar_detection_model_type
- lidar_detection_model_name
- bevfusion_model_path [default: $(var data_path)/bevfusion]
- centerpoint_model_path [default: $(var data_path)/lidar_centerpoint]
- transfusion_model_path [default: $(var data_path)/lidar_transfusion]
- use_short_range_detection [default: false]
- lidar_short_range_detection_model_type
- lidar_short_range_detection_model_name
- short_range_centerpoint_model_path [default: $(var data_path)/lidar_short_range_centerpoint]
- node/pointcloud_container
- input/pointcloud
- output/objects
- output/short_range_objects
- lidar_short_range_model_param_path [default: $(find-pkg-share autoware_lidar_centerpoint)/config]
- lidar_model_param_path [default: $(find-pkg-share autoware_bevfusion)/config]
- lidar_model_param_path [default: $(find-pkg-share autoware_lidar_transfusion)/config]
- lidar_model_param_path [default: $(find-pkg-share autoware_lidar_centerpoint)/config]
- launch/object_recognition/detection/detector/lidar_rule_detector.launch.xml
-
- ns
- node/pointcloud_container
- input/pointcloud_map/pointcloud
- input/obstacle_segmentation/pointcloud
- output/cluster_objects
- output/objects
- launch/object_recognition/detection/detector/tracker_based_detector.launch.xml
-
- input/clusters
- input/tracked_objects
- output/objects
- launch/object_recognition/detection/filter/object_filter.launch.xml
-
- objects_filter_method [default: lanelet_filter]
- input/objects
- output/objects
- launch/object_recognition/detection/filter/object_validator.launch.xml
-
- objects_validation_method
- input/obstacle_pointcloud
- input/objects
- output/objects
- launch/object_recognition/detection/filter/radar_filter.launch.xml
-
- object_velocity_splitter_param_path [default: $(var object_recognition_detection_object_velocity_splitter_radar_param_path)]
- object_range_splitter_param_path [default: $(var object_recognition_detection_object_range_splitter_radar_param_path)]
- radar_lanelet_filtering_range_param_path [default: $(find-pkg-share autoware_detected_object_validation)/config/object_lanelet_filter.param.yaml]
- input/radar
- output/objects
- launch/object_recognition/detection/merger/camera_lidar_merger.launch.xml
-
- object_recognition_detection_object_merger_data_association_matrix_param_path [default: $(find-pkg-share autoware_object_merger)/config/data_association_matrix.param.yaml]
- object_recognition_detection_object_merger_distance_threshold_list_path [default: $(find-pkg-share autoware_object_merger)/config/overlapped_judge.param.yaml]
- ml_camera_lidar_object_association_merger_param_path [default: $(find-pkg-share autoware_object_merger)/config/object_association_merger.param.yaml]
- lidar_detection_model_type
- use_detection_by_tracker
- use_irregular_object_detector
- use_object_filter
- objects_filter_method
- number_of_cameras
- input/camera0/image
- input/camera0/info
- input/camera0/rois
- input/camera1/image
- input/camera1/info
- input/camera1/rois
- input/camera2/image
- input/camera2/info
- input/camera2/rois
- input/camera3/image
- input/camera3/info
- input/camera3/rois
- input/camera4/image
- input/camera4/info
- input/camera4/rois
- input/camera5/image
- input/camera5/info
- input/camera5/rois
- input/camera6/image
- input/camera6/info
- input/camera6/rois
- input/camera7/image
- input/camera7/info
- input/camera7/rois
- input/camera8/image
- input/camera8/info
- input/camera8/rois
- input/lidar_ml/objects
- input/lidar_rule/objects
- input/detection_by_tracker/objects
- output/objects [default: objects]
- alpha_merger_priority_mode [default: 0]
- launch/object_recognition/detection/merger/camera_lidar_radar_merger.launch.xml
-
- object_recognition_detection_object_merger_data_association_matrix_param_path [default: $(find-pkg-share autoware_object_merger)/config/data_association_matrix.param.yaml]
- object_recognition_detection_object_merger_distance_threshold_list_path [default: $(find-pkg-share autoware_object_merger)/config/overlapped_judge.param.yaml]
- ml_camera_lidar_object_association_merger_param_path [default: $(find-pkg-share autoware_object_merger)/config/object_association_merger.param.yaml]
- far_object_merger_sync_queue_size [default: 20]
- lidar_detection_model_type
- use_radar_tracking_fusion
- use_detection_by_tracker
- use_irregular_object_detector
- use_object_filter
- objects_filter_method
- number_of_cameras
- input/camera0/image
- input/camera0/info
- input/camera0/rois
- input/camera1/image
- input/camera1/info
- input/camera1/rois
- input/camera2/image
- input/camera2/info
- input/camera2/rois
- input/camera3/image
- input/camera3/info
- input/camera3/rois
- input/camera4/image
- input/camera4/info
- input/camera4/rois
- input/camera5/image
- input/camera5/info
- input/camera5/rois
- input/camera6/image
- input/camera6/info
- input/camera6/rois
- input/camera7/image
- input/camera7/info
- input/camera7/rois
- input/camera8/image
- input/camera8/info
- input/camera8/rois
- input/lidar_ml/objects
- input/lidar_rule/objects
- input/radar/objects
- input/radar_far/objects
- input/detection_by_tracker/objects
- output/objects [default: objects]
- alpha_merger_priority_mode [default: 0]
- launch/object_recognition/detection/merger/lidar_merger.launch.xml
-
- object_recognition_detection_object_merger_data_association_matrix_param_path [default: $(find-pkg-share autoware_object_merger)/config/data_association_matrix.param.yaml]
- object_recognition_detection_object_merger_distance_threshold_list_path [default: $(find-pkg-share autoware_object_merger)/config/overlapped_judge.param.yaml]
- lidar_detection_model_type
- use_detection_by_tracker
- use_object_filter
- objects_filter_method
- input/lidar_ml/objects [default: $(var lidar_detection_model_type)/objects]
- input/lidar_rule/objects [default: clustering/objects]
- input/detection_by_tracker/objects [default: detection_by_tracker/objects]
- output/objects
- launch/object_recognition/prediction/prediction.launch.xml
-
- use_vector_map [default: false]
- input/objects [default: /perception/object_recognition/tracking/objects]
- launch/object_recognition/tracking/tracking.launch.xml
-
- object_recognition_tracking_radar_tracked_object_sorter_param_path
- object_recognition_tracking_radar_tracked_object_lanelet_filter_param_path
- object_recognition_tracking_object_merger_data_association_matrix_param_path
- object_recognition_tracking_object_merger_node_param_path
- mode [default: lidar]
- use_radar_tracking_fusion [default: false]
- use_multi_channel_tracker_merger
- use_validator
- use_short_range_detection
- lidar_detection_model_type [default: centerpoint]
- input/merged_detection/channel [default: detected_objects]
- input/merged_detection/objects [default: /perception/object_recognition/detection/objects]
- input/lidar_dnn/channel [default: lidar_$(var lidar_detection_model_type)]
- input/lidar_dnn/objects [default: /perception/object_recognition/detection/$(var lidar_detection_model_type)/objects]
- input/lidar_dnn_validated/objects [default: /perception/object_recognition/detection/$(var lidar_detection_model_type)/validation/objects]
- input/lidar_dnn_short_range/channel [default: lidar_$(var lidar_short_range_detection_model_type)]
- input/lidar_dnn_short_range/objects [default: /perception/object_recognition/detection/$(var lidar_short_range_detection_model_type)/objects]
- input/camera_lidar_rule_detector/channel [default: camera_lidar_fusion]
- input/camera_lidar_rule_detector/objects [default: /perception/object_recognition/detection/clustering/camera_lidar_fusion/objects]
- input/irregular_object_detector/channel [default: camera_lidar_fusion_irregular]
- input/irregular_object_detector/objects [default: /perception/object_recognition/detection/irregular_object/objects]
- input/tracker_based_detector/channel [default: detection_by_tracker]
- input/tracker_based_detector/objects [default: /perception/object_recognition/detection/detection_by_tracker/objects]
- input/radar/channel [default: radar]
- input/radar/far_objects [default: /perception/object_recognition/detection/radar/far_objects]
- input/radar/objects [default: /perception/object_recognition/detection/radar/objects]
- input/radar/tracked_objects [default: /sensing/radar/tracked_objects]
- output/objects [default: $(var ns)/objects]
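For the tracking stage, the four parameter paths and the three feature switches have no defaults. A minimal sketch, where the `...` placeholders stand for your parameter files and the switch values are illustrative:

```xml
<include file="$(find-pkg-share tier4_perception_launch)/launch/object_recognition/tracking/tracking.launch.xml">
  <!-- parameter files (no defaults) -->
  <arg name="object_recognition_tracking_radar_tracked_object_sorter_param_path" value="..." />
  <arg name="object_recognition_tracking_radar_tracked_object_lanelet_filter_param_path" value="..." />
  <arg name="object_recognition_tracking_object_merger_data_association_matrix_param_path" value="..." />
  <arg name="object_recognition_tracking_object_merger_node_param_path" value="..." />
  <!-- feature switches (no defaults) -->
  <arg name="use_multi_channel_tracker_merger" value="false" />
  <arg name="use_validator" value="true" />
  <arg name="use_short_range_detection" value="false" />
</include>
```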
- launch/occupancy_grid_map/probabilistic_occupancy_grid_map.launch.xml
- input/obstacle_pointcloud [default: concatenated/pointcloud]
- input/raw_pointcloud [default: no_ground/oneshot/pointcloud]
- output [default: /perception/occupancy_grid_map/map]
- use_intra_process [default: false]
- use_multithread [default: false]
- pointcloud_container_name [default: pointcloud_container]
- occupancy_grid_map_method
- occupancy_grid_map_param_path
- occupancy_grid_map_updater
- occupancy_grid_map_updater_param_path
- input_obstacle_pointcloud [default: false]
- input_obstacle_and_raw_pointcloud [default: true]
- use_pointcloud_container [default: true]
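The occupancy grid map launch likewise leaves the method/updater selection and their parameter files to the caller. A sketch follows; the method and updater identifiers below are assumptions based on the components shipped in autoware_universe, not values stated in this listing:

```xml
<include file="$(find-pkg-share tier4_perception_launch)/launch/occupancy_grid_map/probabilistic_occupancy_grid_map.launch.xml">
  <!-- assumed identifiers; check the occupancy grid map package for the supported set -->
  <arg name="occupancy_grid_map_method" value="pointcloud_based_occupancy_grid_map" />
  <arg name="occupancy_grid_map_param_path" value="..." />
  <arg name="occupancy_grid_map_updater" value="binary_bayes_filter" />
  <arg name="occupancy_grid_map_updater_param_path" value="..." />
</include>
```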
- launch/perception.launch.xml
- object_recognition_detection_euclidean_cluster_param_path
- object_recognition_detection_outlier_param_path
- object_recognition_detection_object_lanelet_filter_param_path
- object_recognition_detection_object_position_filter_param_path
- object_recognition_detection_pointcloud_map_filter_param_path
- object_recognition_prediction_map_based_prediction_param_path
- object_recognition_detection_object_merger_data_association_matrix_param_path
- ml_camera_lidar_object_association_merger_param_path
- object_recognition_detection_object_merger_distance_threshold_list_path
- object_recognition_detection_fusion_sync_param_path
- object_recognition_detection_roi_cluster_fusion_param_path
- object_recognition_detection_irregular_object_detector_param_path
- object_recognition_detection_roi_detected_object_fusion_param_path
- object_recognition_detection_pointpainting_fusion_common_param_path
- object_recognition_detection_lidar_model_param_path
- object_recognition_detection_radar_lanelet_filtering_range_param_path
- object_recognition_detection_object_velocity_splitter_radar_param_path
- object_recognition_detection_object_velocity_splitter_radar_fusion_param_path
- object_recognition_detection_object_range_splitter_radar_param_path
- object_recognition_detection_object_range_splitter_radar_fusion_param_path
- object_recognition_tracking_multi_object_tracker_data_association_matrix_param_path
- object_recognition_tracking_multi_object_tracker_input_channels_param_path
- object_recognition_tracking_multi_object_tracker_node_param_path
- object_recognition_tracking_radar_tracked_object_sorter_param_path
- object_recognition_tracking_radar_tracked_object_lanelet_filter_param_path
- obstacle_segmentation_ground_segmentation_param_path
- obstacle_segmentation_ground_segmentation_elevation_map_param_path
- object_recognition_detection_obstacle_pointcloud_based_validator_param_path
- object_recognition_detection_detection_by_tracker_param
- occupancy_grid_map_method
- occupancy_grid_map_param_path
- occupancy_grid_map_updater
- occupancy_grid_map_updater_param_path
- lidar_detection_model
- each_traffic_light_map_based_detector_param_path
- traffic_light_fine_detector_param_path
- yolox_traffic_light_detector_param_path
- car_traffic_light_classifier_param_path
- pedestrian_traffic_light_classifier_param_path
- traffic_light_roi_visualizer_param_path
- traffic_light_occlusion_predictor_param_path
- traffic_light_multi_camera_fusion_param_path
- traffic_light_arbiter_param_path
- crosswalk_traffic_light_estimator_param_path
- lidar_detection_model_type [default: $(eval "'$(var lidar_detection_model)'.split('/')[0]")]
- lidar_detection_model_name [default: $(eval "'$(var lidar_detection_model)'.split('/')[1] if '/' in '$(var lidar_detection_model)' else ''")]
- use_short_range_detection [default: false]
- lidar_short_range_detection_model_type [default: centerpoint_short_range]
- lidar_short_range_detection_model_name [default: centerpoint_short_range]
- bevfusion_model_path [default: $(var data_path)/bevfusion]
- centerpoint_model_path [default: $(var data_path)/lidar_centerpoint]
- transfusion_model_path [default: $(var data_path)/lidar_transfusion]
- short_range_centerpoint_model_path [default: $(var data_path)/lidar_short_range_centerpoint]
- pointpainting_model_path [default: $(var data_path)/image_projection_based_fusion]
- input/pointcloud [default: /sensing/lidar/concatenated/pointcloud]
- mode [default: camera_lidar_fusion]
- data_path [default: $(env HOME)/autoware_data]
- lidar_detection_model_type [default: $(var lidar_detection_model_type)]
- lidar_detection_model_name [default: $(var lidar_detection_model_name)]
- image_raw0 [default: /sensing/camera/camera0/image_rect_color]
- camera_info0 [default: /sensing/camera/camera0/camera_info]
- detection_rois0 [default: /perception/object_recognition/detection/rois0]
- image_raw1 [default: /sensing/camera/camera1/image_rect_color]
- camera_info1 [default: /sensing/camera/camera1/camera_info]
- detection_rois1 [default: /perception/object_recognition/detection/rois1]
- image_raw2 [default: /sensing/camera/camera2/image_rect_color]
- camera_info2 [default: /sensing/camera/camera2/camera_info]
- detection_rois2 [default: /perception/object_recognition/detection/rois2]
- image_raw3 [default: /sensing/camera/camera3/image_rect_color]
- camera_info3 [default: /sensing/camera/camera3/camera_info]
- detection_rois3 [default: /perception/object_recognition/detection/rois3]
- image_raw4 [default: /sensing/camera/camera4/image_rect_color]
- camera_info4 [default: /sensing/camera/camera4/camera_info]
- detection_rois4 [default: /perception/object_recognition/detection/rois4]
- image_raw5 [default: /sensing/camera/camera5/image_rect_color]
- camera_info5 [default: /sensing/camera/camera5/camera_info]
- detection_rois5 [default: /perception/object_recognition/detection/rois5]
- image_raw6 [default: /sensing/camera/camera6/image_rect_color]
- camera_info6 [default: /sensing/camera/camera6/camera_info]
- detection_rois6 [default: /perception/object_recognition/detection/rois6]
- image_raw7 [default: /sensing/camera/camera7/image_rect_color]
- camera_info7 [default: /sensing/camera/camera7/camera_info]
- detection_rois7 [default: /perception/object_recognition/detection/rois7]
- image_raw8 [default: /sensing/camera/camera8/image_rect_color]
- camera_info8 [default: /sensing/camera/camera8/camera_info]
- detection_rois8 [default: /perception/object_recognition/detection/rois8]
- image_number [default: 6]
- image_topic_name [default: image_rect_color]
- segmentation_pointcloud_fusion_camera_ids [default: [0,1,5]]
- ml_camera_lidar_merger_priority_mode [default: 0]
- pointcloud_container_name [default: pointcloud_container]
- use_vector_map [default: true]
- use_pointcloud_map [default: true]
- use_low_height_cropbox [default: true]
- use_object_filter [default: true]
- objects_filter_method [default: lanelet_filter]
- use_irregular_object_detector [default: true]
- use_low_intensity_cluster_filter [default: true]
- use_image_segmentation_based_filter [default: false]
- use_empty_dynamic_object_publisher [default: false]
- use_object_validator [default: true]
- objects_validation_method [default: obstacle_pointcloud]
- use_perception_online_evaluator [default: false]
- use_perception_analytics_publisher [default: true]
- use_obstacle_segmentation_single_frame_filter
- use_obstacle_segmentation_time_series_filter
- use_traffic_light_recognition
- traffic_light_recognition/fusion_only
- traffic_light_recognition/camera_namespaces
- traffic_light_recognition/use_high_accuracy_detection
- traffic_light_recognition/high_accuracy_detection_type
- traffic_light_recognition/whole_image_detection/model_path
- traffic_light_recognition/whole_image_detection/label_path
- traffic_light_recognition/fine_detection/model_path
- traffic_light_recognition/fine_detection/label_path
- traffic_light_recognition/classification/car/model_path
- traffic_light_recognition/classification/car/label_path
- traffic_light_recognition/classification/pedestrian/model_path
- traffic_light_recognition/classification/pedestrian/label_path
- use_detection_by_tracker [default: true]
- use_radar_tracking_fusion [default: true]
- input/radar [default: /sensing/radar/detected_objects]
- use_multi_channel_tracker_merger [default: false]
- downsample_perception_common_pointcloud [default: false]
- common_downsample_voxel_size_x [default: 0.05]
- common_downsample_voxel_size_y [default: 0.05]
- common_downsample_voxel_size_z [default: 0.05]
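Note how `lidar_detection_model` encodes both a model type and an optional model name as `type/name`: the eval defaults above split on `/`, taking the first segment as `lidar_detection_model_type` and the second, if present, as `lidar_detection_model_name`. A sketch of the two forms (the variant name is illustrative):

```xml
<!-- "type/name" form: type=centerpoint, name=centerpoint_tiny (variant name is an assumption) -->
<arg name="lidar_detection_model" value="centerpoint/centerpoint_tiny" />
<!-- "type" form: lidar_detection_model_name falls back to an empty string -->
<arg name="lidar_detection_model" value="transfusion" />
```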
- launch/traffic_light_recognition/traffic_light.launch.xml
- enable_image_decompressor [default: true]
- fusion_only
- camera_namespaces
- use_high_accuracy_detection
- high_accuracy_detection_type
- each_traffic_light_map_based_detector_param_path
- traffic_light_fine_detector_param_path
- yolox_traffic_light_detector_param_path
- car_traffic_light_classifier_param_path
- pedestrian_traffic_light_classifier_param_path
- traffic_light_roi_visualizer_param_path
- traffic_light_occlusion_predictor_param_path
- traffic_light_multi_camera_fusion_param_path
- traffic_light_arbiter_param_path
- crosswalk_traffic_light_estimator_param_path
- whole_image_detection/model_path
- whole_image_detection/label_path
- fine_detection/model_path
- fine_detection/label_path
- classification/car/model_path
- classification/car/label_path
- classification/pedestrian/model_path
- classification/pedestrian/label_path
- input/vector_map [default: /map/vector_map]
- input/route [default: /planning/mission_planning/route]
- input/cloud [default: /sensing/lidar/top/pointcloud_raw_ex]
- internal/traffic_signals [default: /perception/traffic_light_recognition/internal/traffic_signals]
- external/traffic_signals [default: /perception/traffic_light_recognition/external/traffic_signals]
- judged/traffic_signals [default: /perception/traffic_light_recognition/judged/traffic_signals]
- output/traffic_signals [default: /perception/traffic_light_recognition/traffic_signals]
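Traffic light recognition follows the same pattern: `fusion_only`, `camera_namespaces`, the high-accuracy-detection switches, and the parameter and model paths have no defaults. A sketch, where the camera list and the detection type identifier are illustrative assumptions:

```xml
<include file="$(find-pkg-share tier4_perception_launch)/launch/traffic_light_recognition/traffic_light.launch.xml">
  <arg name="fusion_only" value="false" />
  <arg name="camera_namespaces" value="[camera6, camera7]" />  <!-- illustrative camera list -->
  <arg name="use_high_accuracy_detection" value="true" />
  <arg name="high_accuracy_detection_type" value="fine_detection" />  <!-- assumed type identifier -->
  <!-- parameter files and model/label paths (no defaults) -->
  <arg name="traffic_light_fine_detector_param_path" value="..." />
  <arg name="fine_detection/model_path" value="..." />
  <arg name="fine_detection/label_path" value="..." />
  <!-- the remaining *_param_path and model/label arguments follow the same pattern -->
</include>
```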