
tier4_perception_launch package from autoware_universe repo



Package Summary

Tags: No category tags.
Version: 0.47.0
License: Apache License 2.0
Build type: AMENT_CMAKE

Repository Summary

Checkout URI: https://github.com/autowarefoundation/autoware_universe.git
VCS Type: git
VCS Version: main
Last Updated: 2025-08-16
Dev Status: UNKNOWN
Released: UNRELEASED
Tags: planner ros calibration self-driving-car autonomous-driving autonomous-vehicles ros2 3d-map autoware

Package Description

The tier4_perception_launch package

Additional Links

No additional links.

Maintainers

  • Yukihiro Saito
  • Yoshi Ri
  • Taekjin Lee
  • Masato Saeki

Authors

No additional authors.

tier4_perception_launch

Structure

[Diagram: launch structure of tier4_perception_launch]

Package Dependencies

Please see <exec_depend> in package.xml.
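
For reference, runtime dependencies are declared in package.xml with <exec_depend> tags. The entries below are illustrative only (the package names exist in this repository, but the authoritative list is the package.xml itself):

  <!-- package.xml: one exec_depend tag per runtime dependency -->
  <exec_depend>autoware_euclidean_cluster</exec_depend>
  <exec_depend>autoware_pointcloud_preprocessor</exec_depend>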

Usage

To use perception.launch.xml, include it in your *.launch.xml file as follows.

Note that you must provide each parameter path as a PACKAGE_param_path argument. The full list of required parameter paths is written at the top of perception.launch.xml.

  <include file="$(find-pkg-share tier4_perception_launch)/launch/perception.launch.xml">
    <!-- options for mode: camera_lidar_fusion, lidar, camera -->
    <arg name="mode" value="lidar" />

    <!-- Parameter files -->
    <arg name="FOO_param_path" value="..."/>
    <arg name="BAR_param_path" value="..."/>
    ...
  </include>
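
As a more concrete sketch, a camera-lidar fusion configuration might look like the following. The argument names come from the launch file reference documented below; the config file location under autoware_launch is an assumption and should be adapted to wherever your parameter files live:

  <include file="$(find-pkg-share tier4_perception_launch)/launch/perception.launch.xml">
    <arg name="mode" value="camera_lidar_fusion" />
    <arg name="input/pointcloud" value="/sensing/lidar/concatenated/pointcloud" />

    <!-- hypothetical location; point this at your own config package -->
    <arg name="object_recognition_detection_object_lanelet_filter_param_path"
         value="$(find-pkg-share autoware_launch)/config/perception/object_lanelet_filter.param.yaml" />
    ...
  </include>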

CHANGELOG

Changelog for package tier4_perception_launch

0.47.0 (2025-08-11)

  • feat(perception_online_evaluator): add functionality to publish perception analytics info (#11089)

    * feat: add functionality to calculate perception metrics for MOB in autoware_perception_online_evaluator; chore: configure settings for MOB metrics calculation

    * feat: change implementation from one topic per metric to all metrics published in one topic for better management by the metric agent; refactor: rename FrameMetrics member to clarify variable meaning; refactor: use array/vector instead of unordered_map for FrameMetrics for better performance; chore: remap published topic name to match msg conventions

    • fix: unittest error
    • style(pre-commit): autofix
    • refactor: replace MOB keyword with generalized expression of perception analytics
    • chore: improve comment

    * refactor: add a new autoware_perception_analytics_publisher_node to publish perception analytics info instead of using the previous autoware_perception_online_evaluator_node; chore: modify default launch setting to match the refactoring

    • style(pre-commit): autofix

    * fix: add initialization for latencies_; fix: use tf of objects timestamp instead of latest; feat: use ConstSharedPtr to avoid repeated copy of large message in PerceptionAnalyticsCalculator::setPredictedObjects --------- Co-authored-by: Jian Kang <jian.kang@tier4.jp>, Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>

  • fix(multi_object_tracker): add irregular objects topic (#11102)

    • fix(multi_object_tracker): add irregular objects topic
    • fix: change channel order

    * Update launch/tier4_perception_launch/launch/object_recognition/tracking/tracking.launch.xml Co-authored-by: Taekjin LEE <technolojin@gmail.com>

    * Update launch/tier4_perception_launch/launch/object_recognition/tracking/tracking.launch.xml Co-authored-by: Taekjin LEE <technolojin@gmail.com>

    * Update perception/autoware_multi_object_tracker/config/input_channels.param.yaml Co-authored-by: Taekjin LEE <technolojin@gmail.com>

    * Update launch/tier4_perception_launch/launch/object_recognition/tracking/tracking.launch.xml Co-authored-by: Taekjin LEE <technolojin@gmail.com>

    • fix: unused channels
    • fix: schema
    • docs: update readme
    • style(pre-commit): autofix
    • fix: short name

    * feat: add lidar_centerpoint_short_range input channel with default flags --------- Co-authored-by: Taekjin LEE <technolojin@gmail.com>, Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>, Co-authored-by: Taekjin LEE <taekjin.lee@tier4.jp>

  • chore: sync files (#11091) Co-authored-by: github-actions <github-actions@github.com>, Co-authored-by: M. Fatih Cırıt <mfc@autoware.org>, Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>

  • fix(autoware_object_merger): add merger priority_mode (#11042)

    * fix: add merger priority_mode; fix: add priority mode into launch; fix: add class-based priority matrix; fix: adjust priority matrix

    • fix: add Confidence mode support
    • docs: schema update
    • fix: launch

    * fix: schema json

  • feat(tier4_perception_launch): add missing remappings to launch file (#11037)

  • feat(autoware_bevdet): implementation of bevdet using tensorrt (#10441)

  • feat(tracking): add short range detection support and update related

(Changelog truncated at 100 lines; see the repository for the full file.)

System Dependencies

No direct system dependencies.

Launch files

  • launch/object_recognition/detection/detection.launch.xml
      • mode
      • lidar_detection_model_type
      • lidar_detection_model_name
      • use_short_range_detection
      • lidar_short_range_detection_model_type
      • lidar_short_range_detection_model_name
      • use_object_filter
      • objects_filter_method
      • use_pointcloud_map
      • use_detection_by_tracker
      • use_validator
      • objects_validation_method
      • use_low_intensity_cluster_filter
      • use_image_segmentation_based_filter
      • use_multi_channel_tracker_merger
      • use_radar_tracking_fusion
      • use_irregular_object_detector
      • irregular_object_detector_fusion_camera_ids [default: [0]]
      • ml_camera_lidar_merger_priority_mode
      • number_of_cameras
      • node/pointcloud_container
      • input/pointcloud
      • input/obstacle_segmentation/pointcloud [default: /perception/obstacle_segmentation/pointcloud]
      • input/camera0/image
      • input/camera0/info
      • input/camera0/rois
      • input/camera1/image
      • input/camera1/info
      • input/camera1/rois
      • input/camera2/image
      • input/camera2/info
      • input/camera2/rois
      • input/camera3/image
      • input/camera3/info
      • input/camera3/rois
      • input/camera4/image
      • input/camera4/info
      • input/camera4/rois
      • input/camera5/image
      • input/camera5/info
      • input/camera5/rois
      • input/camera6/image
      • input/camera6/info
      • input/camera6/rois
      • input/camera7/image
      • input/camera7/info
      • input/camera7/rois
      • input/camera8/image
      • input/camera8/info
      • input/camera8/rois
      • image_topic_name
      • segmentation_pointcloud_fusion_camera_ids
      • input/radar
      • input/tracked_objects [default: /perception/object_recognition/tracking/objects]
      • output/objects [default: objects]
  • launch/object_recognition/detection/detector/camera_bev_detector.launch.xml
      • input/camera0/image
      • input/camera0/info
      • input/camera1/image
      • input/camera1/info
      • input/camera2/image
      • input/camera2/info
      • input/camera3/image
      • input/camera3/info
      • input/camera4/image
      • input/camera4/info
      • input/camera5/image
      • input/camera5/info
      • input/camera6/image
      • input/camera6/info
      • input/camera7/image
      • input/camera7/info
      • output/objects
      • number_of_cameras
      • data_path [default: $(env HOME)/autoware_data]
      • bevdet_model_name [default: bevdet_one_lt_d]
      • bevdet_model_path [default: $(var data_path)/tensorrt_bevdet]
  • launch/object_recognition/detection/detector/camera_lidar_detector.launch.xml
      • ns
      • lidar_detection_model_type
      • lidar_detection_model_name
      • use_low_intensity_cluster_filter
      • use_image_segmentation_based_filter
      • number_of_cameras
      • input/camera0/image
      • input/camera0/info
      • input/camera0/rois
      • input/camera1/image
      • input/camera1/info
      • input/camera1/rois
      • input/camera2/image
      • input/camera2/info
      • input/camera2/rois
      • input/camera3/image
      • input/camera3/info
      • input/camera3/rois
      • input/camera4/image
      • input/camera4/info
      • input/camera4/rois
      • input/camera5/image
      • input/camera5/info
      • input/camera5/rois
      • input/camera6/image
      • input/camera6/info
      • input/camera6/rois
      • input/camera7/image
      • input/camera7/info
      • input/camera7/rois
      • input/camera8/image
      • input/camera8/info
      • input/camera8/rois
      • segmentation_pointcloud_fusion_camera_ids
      • image_topic_name
      • node/pointcloud_container
      • input/pointcloud
      • input/pointcloud_map/pointcloud
      • input/obstacle_segmentation/pointcloud
      • output/ml_detector/objects
      • output/rule_detector/objects
      • output/clustering/cluster_objects
  • launch/object_recognition/detection/detector/camera_lidar_irregular_object_detector.launch.xml
      • ns
      • pipeline_ns
      • input/pointcloud
      • fusion_camera_ids [default: [0]]
      • image_topic_name [default: image_raw]
      • irregular_object_detector_param_path
  • launch/object_recognition/detection/detector/lidar_dnn_detector.launch.xml
      • lidar_detection_model_type
      • lidar_detection_model_name
      • bevfusion_model_path [default: $(var data_path)/bevfusion]
      • centerpoint_model_path [default: $(var data_path)/lidar_centerpoint]
      • transfusion_model_path [default: $(var data_path)/lidar_transfusion]
      • use_short_range_detection [default: false]
      • lidar_short_range_detection_model_type
      • lidar_short_range_detection_model_name
      • short_range_centerpoint_model_path [default: $(var data_path)/lidar_short_range_centerpoint]
      • node/pointcloud_container
      • input/pointcloud
      • output/objects
      • output/short_range_objects
      • lidar_short_range_model_param_path [default: $(find-pkg-share autoware_lidar_centerpoint)/config]
      • lidar_model_param_path [default: $(find-pkg-share autoware_bevfusion)/config, $(find-pkg-share autoware_lidar_transfusion)/config, or $(find-pkg-share autoware_lidar_centerpoint)/config, matching lidar_detection_model_type]
  • launch/object_recognition/detection/detector/lidar_rule_detector.launch.xml
      • ns
      • node/pointcloud_container
      • input/pointcloud_map/pointcloud
      • input/obstacle_segmentation/pointcloud
      • output/cluster_objects
      • output/objects
  • launch/object_recognition/detection/detector/tracker_based_detector.launch.xml
      • input/clusters
      • input/tracked_objects
      • output/objects
  • launch/object_recognition/detection/filter/object_filter.launch.xml
      • objects_filter_method [default: lanelet_filter]
      • input/objects
      • output/objects
  • launch/object_recognition/detection/filter/object_validator.launch.xml
      • objects_validation_method
      • input/obstacle_pointcloud
      • input/objects
      • output/objects
  • launch/object_recognition/detection/filter/radar_filter.launch.xml
      • object_velocity_splitter_param_path [default: $(var object_recognition_detection_object_velocity_splitter_radar_param_path)]
      • object_range_splitter_param_path [default: $(var object_recognition_detection_object_range_splitter_radar_param_path)]
      • radar_lanelet_filtering_range_param_path [default: $(find-pkg-share autoware_detected_object_validation)/config/object_lanelet_filter.param.yaml]
      • input/radar
      • output/objects
  • launch/object_recognition/detection/merger/camera_lidar_merger.launch.xml
      • object_recognition_detection_object_merger_data_association_matrix_param_path [default: $(find-pkg-share autoware_object_merger)/config/data_association_matrix.param.yaml]
      • object_recognition_detection_object_merger_distance_threshold_list_path [default: $(find-pkg-share autoware_object_merger)/config/overlapped_judge.param.yaml]
      • ml_camera_lidar_object_association_merger_param_path [default: $(find-pkg-share autoware_object_merger)/config/object_association_merger.param.yaml]
      • lidar_detection_model_type
      • use_detection_by_tracker
      • use_irregular_object_detector
      • use_object_filter
      • objects_filter_method
      • number_of_cameras
      • input/camera0/image
      • input/camera0/info
      • input/camera0/rois
      • input/camera1/image
      • input/camera1/info
      • input/camera1/rois
      • input/camera2/image
      • input/camera2/info
      • input/camera2/rois
      • input/camera3/image
      • input/camera3/info
      • input/camera3/rois
      • input/camera4/image
      • input/camera4/info
      • input/camera4/rois
      • input/camera5/image
      • input/camera5/info
      • input/camera5/rois
      • input/camera6/image
      • input/camera6/info
      • input/camera6/rois
      • input/camera7/image
      • input/camera7/info
      • input/camera7/rois
      • input/camera8/image
      • input/camera8/info
      • input/camera8/rois
      • input/lidar_ml/objects
      • input/lidar_rule/objects
      • input/detection_by_tracker/objects
      • output/objects [default: objects]
      • alpha_merger_priority_mode [default: 0]
  • launch/object_recognition/detection/merger/camera_lidar_radar_merger.launch.xml
      • object_recognition_detection_object_merger_data_association_matrix_param_path [default: $(find-pkg-share autoware_object_merger)/config/data_association_matrix.param.yaml]
      • object_recognition_detection_object_merger_distance_threshold_list_path [default: $(find-pkg-share autoware_object_merger)/config/overlapped_judge.param.yaml]
      • ml_camera_lidar_object_association_merger_param_path [default: $(find-pkg-share autoware_object_merger)/config/object_association_merger.param.yaml]
      • far_object_merger_sync_queue_size [default: 20]
      • lidar_detection_model_type
      • use_radar_tracking_fusion
      • use_detection_by_tracker
      • use_irregular_object_detector
      • use_object_filter
      • objects_filter_method
      • number_of_cameras
      • input/camera0/image
      • input/camera0/info
      • input/camera0/rois
      • input/camera1/image
      • input/camera1/info
      • input/camera1/rois
      • input/camera2/image
      • input/camera2/info
      • input/camera2/rois
      • input/camera3/image
      • input/camera3/info
      • input/camera3/rois
      • input/camera4/image
      • input/camera4/info
      • input/camera4/rois
      • input/camera5/image
      • input/camera5/info
      • input/camera5/rois
      • input/camera6/image
      • input/camera6/info
      • input/camera6/rois
      • input/camera7/image
      • input/camera7/info
      • input/camera7/rois
      • input/camera8/image
      • input/camera8/info
      • input/camera8/rois
      • input/lidar_ml/objects
      • input/lidar_rule/objects
      • input/radar/objects
      • input/radar_far/objects
      • input/detection_by_tracker/objects
      • output/objects [default: objects]
      • alpha_merger_priority_mode [default: 0]
  • launch/object_recognition/detection/merger/lidar_merger.launch.xml
      • object_recognition_detection_object_merger_data_association_matrix_param_path [default: $(find-pkg-share autoware_object_merger)/config/data_association_matrix.param.yaml]
      • object_recognition_detection_object_merger_distance_threshold_list_path [default: $(find-pkg-share autoware_object_merger)/config/overlapped_judge.param.yaml]
      • lidar_detection_model_type
      • use_detection_by_tracker
      • use_object_filter
      • objects_filter_method
      • input/lidar_ml/objects [default: $(var lidar_detection_model_type)/objects]
      • input/lidar_rule/objects [default: clustering/objects]
      • input/detection_by_tracker/objects [default: detection_by_tracker/objects]
      • output/objects
  • launch/object_recognition/prediction/prediction.launch.xml
      • use_vector_map [default: false]
      • input/objects [default: /perception/object_recognition/tracking/objects]
  • launch/object_recognition/tracking/tracking.launch.xml
      • object_recognition_tracking_radar_tracked_object_sorter_param_path
      • object_recognition_tracking_radar_tracked_object_lanelet_filter_param_path
      • object_recognition_tracking_object_merger_data_association_matrix_param_path
      • object_recognition_tracking_object_merger_node_param_path
      • mode [default: lidar]
      • use_radar_tracking_fusion [default: false]
      • use_multi_channel_tracker_merger
      • use_validator
      • use_short_range_detection
      • lidar_detection_model_type [default: centerpoint]
      • input/merged_detection/channel [default: detected_objects]
      • input/merged_detection/objects [default: /perception/object_recognition/detection/objects]
      • input/lidar_dnn/channel [default: lidar_$(var lidar_detection_model_type)]
      • input/lidar_dnn/objects [default: /perception/object_recognition/detection/$(var lidar_detection_model_type)/objects]
      • input/lidar_dnn_validated/objects [default: /perception/object_recognition/detection/$(var lidar_detection_model_type)/validation/objects]
      • input/lidar_dnn_short_range/channel [default: lidar_$(var lidar_short_range_detection_model_type)]
      • input/lidar_dnn_short_range/objects [default: /perception/object_recognition/detection/$(var lidar_short_range_detection_model_type)/objects]
      • input/camera_lidar_rule_detector/channel [default: camera_lidar_fusion]
      • input/camera_lidar_rule_detector/objects [default: /perception/object_recognition/detection/clustering/camera_lidar_fusion/objects]
      • input/irregular_object_detector/channel [default: camera_lidar_fusion_irregular]
      • input/irregular_object_detector/objects [default: /perception/object_recognition/detection/irregular_object/objects]
      • input/tracker_based_detector/channel [default: detection_by_tracker]
      • input/tracker_based_detector/objects [default: /perception/object_recognition/detection/detection_by_tracker/objects]
      • input/radar/channel [default: radar]
      • input/radar/far_objects [default: /perception/object_recognition/detection/radar/far_objects]
      • input/radar/objects [default: /perception/object_recognition/detection/radar/objects]
      • input/radar/tracked_objects [default: /sensing/radar/tracked_objects]
      • output/objects [default: $(var ns)/objects]
  • launch/occupancy_grid_map/probabilistic_occupancy_grid_map.launch.xml
      • input/obstacle_pointcloud [default: concatenated/pointcloud]
      • input/raw_pointcloud [default: no_ground/oneshot/pointcloud]
      • output [default: /perception/occupancy_grid_map/map]
      • use_intra_process [default: false]
      • use_multithread [default: false]
      • pointcloud_container_name [default: pointcloud_container]
      • occupancy_grid_map_method
      • occupancy_grid_map_param_path
      • occupancy_grid_map_updater
      • occupancy_grid_map_updater_param_path
      • input_obstacle_pointcloud [default: false]
      • input_obstacle_and_raw_pointcloud [default: true]
      • use_pointcloud_container [default: true]
  • launch/perception.launch.xml
      • object_recognition_detection_euclidean_cluster_param_path
      • object_recognition_detection_outlier_param_path
      • object_recognition_detection_object_lanelet_filter_param_path
      • object_recognition_detection_object_position_filter_param_path
      • object_recognition_detection_pointcloud_map_filter_param_path
      • object_recognition_prediction_map_based_prediction_param_path
      • object_recognition_detection_object_merger_data_association_matrix_param_path
      • ml_camera_lidar_object_association_merger_param_path
      • object_recognition_detection_object_merger_distance_threshold_list_path
      • object_recognition_detection_fusion_sync_param_path
      • object_recognition_detection_roi_cluster_fusion_param_path
      • object_recognition_detection_irregular_object_detector_param_path
      • object_recognition_detection_roi_detected_object_fusion_param_path
      • object_recognition_detection_pointpainting_fusion_common_param_path
      • object_recognition_detection_lidar_model_param_path
      • object_recognition_detection_radar_lanelet_filtering_range_param_path
      • object_recognition_detection_object_velocity_splitter_radar_param_path
      • object_recognition_detection_object_velocity_splitter_radar_fusion_param_path
      • object_recognition_detection_object_range_splitter_radar_param_path
      • object_recognition_detection_object_range_splitter_radar_fusion_param_path
      • object_recognition_tracking_multi_object_tracker_data_association_matrix_param_path
      • object_recognition_tracking_multi_object_tracker_input_channels_param_path
      • object_recognition_tracking_multi_object_tracker_node_param_path
      • object_recognition_tracking_radar_tracked_object_sorter_param_path
      • object_recognition_tracking_radar_tracked_object_lanelet_filter_param_path
      • obstacle_segmentation_ground_segmentation_param_path
      • obstacle_segmentation_ground_segmentation_elevation_map_param_path
      • object_recognition_detection_obstacle_pointcloud_based_validator_param_path
      • object_recognition_detection_detection_by_tracker_param
      • occupancy_grid_map_method
      • occupancy_grid_map_param_path
      • occupancy_grid_map_updater
      • occupancy_grid_map_updater_param_path
      • lidar_detection_model
      • each_traffic_light_map_based_detector_param_path
      • traffic_light_fine_detector_param_path
      • yolox_traffic_light_detector_param_path
      • car_traffic_light_classifier_param_path
      • pedestrian_traffic_light_classifier_param_path
      • traffic_light_roi_visualizer_param_path
      • traffic_light_occlusion_predictor_param_path
      • traffic_light_multi_camera_fusion_param_path
      • traffic_light_arbiter_param_path
      • crosswalk_traffic_light_estimator_param_path
      • lidar_detection_model_type [default: $(eval "'$(var lidar_detection_model)'.split('/')[0]")]
      • lidar_detection_model_name [default: $(eval "'$(var lidar_detection_model)'.split('/')[1] if '/' in '$(var lidar_detection_model)' else ''")]
      • use_short_range_detection [default: false]
      • lidar_short_range_detection_model_type [default: centerpoint_short_range]
      • lidar_short_range_detection_model_name [default: centerpoint_short_range]
      • bevfusion_model_path [default: $(var data_path)/bevfusion]
      • centerpoint_model_path [default: $(var data_path)/lidar_centerpoint]
      • transfusion_model_path [default: $(var data_path)/lidar_transfusion]
      • short_range_centerpoint_model_path [default: $(var data_path)/lidar_short_range_centerpoint]
      • pointpainting_model_path [default: $(var data_path)/image_projection_based_fusion]
      • input/pointcloud [default: /sensing/lidar/concatenated/pointcloud]
      • mode [default: camera_lidar_fusion]
      • data_path [default: $(env HOME)/autoware_data]
      • lidar_detection_model_type [default: $(var lidar_detection_model_type)]
      • lidar_detection_model_name [default: $(var lidar_detection_model_name)]
      • image_raw0 [default: /sensing/camera/camera0/image_rect_color]
      • camera_info0 [default: /sensing/camera/camera0/camera_info]
      • detection_rois0 [default: /perception/object_recognition/detection/rois0]
      • image_raw1 [default: /sensing/camera/camera1/image_rect_color]
      • camera_info1 [default: /sensing/camera/camera1/camera_info]
      • detection_rois1 [default: /perception/object_recognition/detection/rois1]
      • image_raw2 [default: /sensing/camera/camera2/image_rect_color]
      • camera_info2 [default: /sensing/camera/camera2/camera_info]
      • detection_rois2 [default: /perception/object_recognition/detection/rois2]
      • image_raw3 [default: /sensing/camera/camera3/image_rect_color]
      • camera_info3 [default: /sensing/camera/camera3/camera_info]
      • detection_rois3 [default: /perception/object_recognition/detection/rois3]
      • image_raw4 [default: /sensing/camera/camera4/image_rect_color]
      • camera_info4 [default: /sensing/camera/camera4/camera_info]
      • detection_rois4 [default: /perception/object_recognition/detection/rois4]
      • image_raw5 [default: /sensing/camera/camera5/image_rect_color]
      • camera_info5 [default: /sensing/camera/camera5/camera_info]
      • detection_rois5 [default: /perception/object_recognition/detection/rois5]
      • image_raw6 [default: /sensing/camera/camera6/image_rect_color]
      • camera_info6 [default: /sensing/camera/camera6/camera_info]
      • detection_rois6 [default: /perception/object_recognition/detection/rois6]
      • image_raw7 [default: /sensing/camera/camera7/image_rect_color]
      • camera_info7 [default: /sensing/camera/camera7/camera_info]
      • detection_rois7 [default: /perception/object_recognition/detection/rois7]
      • image_raw8 [default: /sensing/camera/camera8/image_rect_color]
      • camera_info8 [default: /sensing/camera/camera8/camera_info]
      • detection_rois8 [default: /perception/object_recognition/detection/rois8]
      • image_number [default: 6]
      • image_topic_name [default: image_rect_color]
      • segmentation_pointcloud_fusion_camera_ids [default: [0,1,5]]
      • ml_camera_lidar_merger_priority_mode [default: 0]
      • pointcloud_container_name [default: pointcloud_container]
      • use_vector_map [default: true]
      • use_pointcloud_map [default: true]
      • use_low_height_cropbox [default: true]
      • use_object_filter [default: true]
      • objects_filter_method [default: lanelet_filter]
      • use_irregular_object_detector [default: true]
      • use_low_intensity_cluster_filter [default: true]
      • use_image_segmentation_based_filter [default: false]
      • use_empty_dynamic_object_publisher [default: false]
      • use_object_validator [default: true]
      • objects_validation_method [default: obstacle_pointcloud]
      • use_perception_online_evaluator [default: false]
      • use_perception_analytics_publisher [default: true]
      • use_obstacle_segmentation_single_frame_filter
      • use_obstacle_segmentation_time_series_filter
      • use_traffic_light_recognition
      • traffic_light_recognition/fusion_only
      • traffic_light_recognition/camera_namespaces
      • traffic_light_recognition/use_high_accuracy_detection
      • traffic_light_recognition/high_accuracy_detection_type
      • traffic_light_recognition/whole_image_detection/model_path
      • traffic_light_recognition/whole_image_detection/label_path
      • traffic_light_recognition/fine_detection/model_path
      • traffic_light_recognition/fine_detection/label_path
      • traffic_light_recognition/classification/car/model_path
      • traffic_light_recognition/classification/car/label_path
      • traffic_light_recognition/classification/pedestrian/model_path
      • traffic_light_recognition/classification/pedestrian/label_path
      • use_detection_by_tracker [default: true]
      • use_radar_tracking_fusion [default: true]
      • input/radar [default: /sensing/radar/detected_objects]
      • use_multi_channel_tracker_merger [default: false]
      • downsample_perception_common_pointcloud [default: false]
      • common_downsample_voxel_size_x [default: 0.05]
      • common_downsample_voxel_size_y [default: 0.05]
      • common_downsample_voxel_size_z [default: 0.05]
  • launch/traffic_light_recognition/traffic_light.launch.xml
      • enable_image_decompressor [default: true]
      • fusion_only
      • camera_namespaces
      • use_high_accuracy_detection
      • high_accuracy_detection_type
      • each_traffic_light_map_based_detector_param_path
      • traffic_light_fine_detector_param_path
      • yolox_traffic_light_detector_param_path
      • car_traffic_light_classifier_param_path
      • pedestrian_traffic_light_classifier_param_path
      • traffic_light_roi_visualizer_param_path
      • traffic_light_occlusion_predictor_param_path
      • traffic_light_multi_camera_fusion_param_path
      • traffic_light_arbiter_param_path
      • crosswalk_traffic_light_estimator_param_path
      • whole_image_detection/model_path
      • whole_image_detection/label_path
      • fine_detection/model_path
      • fine_detection/label_path
      • classification/car/model_path
      • classification/car/label_path
      • classification/pedestrian/model_path
      • classification/pedestrian/label_path
      • input/vector_map [default: /map/vector_map]
      • input/route [default: /planning/mission_planning/route]
      • input/cloud [default: /sensing/lidar/top/pointcloud_raw_ex]
      • internal/traffic_signals [default: /perception/traffic_light_recognition/internal/traffic_signals]
      • external/traffic_signals [default: /perception/traffic_light_recognition/external/traffic_signals]
      • judged/traffic_signals [default: /perception/traffic_light_recognition/judged/traffic_signals]
      • output/traffic_signals [default: /perception/traffic_light_recognition/traffic_signals]
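
The launch files above can also be included individually. As a minimal sketch, the occupancy grid map pipeline could be brought up on its own as follows; the method and updater values are assumptions based on the autoware_probabilistic_occupancy_grid_map package, and the parameter paths are placeholders to fill in from your configuration:

  <include file="$(find-pkg-share tier4_perception_launch)/launch/occupancy_grid_map/probabilistic_occupancy_grid_map.launch.xml">
    <!-- arguments listed without defaults must always be provided -->
    <arg name="occupancy_grid_map_method" value="pointcloud_based_occupancy_grid_map" />
    <arg name="occupancy_grid_map_param_path" value="..." />
    <arg name="occupancy_grid_map_updater" value="binary_bayes_filter" />
    <arg name="occupancy_grid_map_updater_param_path" value="..." />

    <!-- arguments with defaults can be overridden as needed -->
    <arg name="use_intra_process" value="true" />
  </include>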

Messages

No message files found.

Services

No service files found.

Plugins

No plugins found.

      • input/camera5/info
      • input/camera5/rois
      • input/camera6/image
      • input/camera6/info
      • input/camera6/rois
      • input/camera7/image
      • input/camera7/info
      • input/camera7/rois
      • input/camera8/image
      • input/camera8/info
      • input/camera8/rois
      • input/lidar_ml/objects
      • input/lidar_rule/objects
      • input/radar/objects
      • input/radar_far/objects
      • input/detection_by_tracker/objects
      • output/objects [default: objects]
      • alpha_merger_priority_mode [default: 0]
  • launch/object_recognition/detection/merger/lidar_merger.launch.xml
      • object_recognition_detection_object_merger_data_association_matrix_param_path [default: $(find-pkg-share autoware_object_merger)/config/data_association_matrix.param.yaml]
      • object_recognition_detection_object_merger_distance_threshold_list_path [default: $(find-pkg-share autoware_object_merger)/config/overlapped_judge.param.yaml]
      • lidar_detection_model_type
      • use_detection_by_tracker
      • use_object_filter
      • objects_filter_method
      • input/lidar_ml/objects [default: $(var lidar_detection_model_type)/objects]
      • input/lidar_rule/objects [default: clustering/objects]
      • input/detection_by_tracker/objects [default: detection_by_tracker/objects]
      • output/objects
  • launch/object_recognition/prediction/prediction.launch.xml
      • use_vector_map [default: false]
      • input/objects [default: /perception/object_recognition/tracking/objects]
  • launch/object_recognition/tracking/tracking.launch.xml
      • object_recognition_tracking_radar_tracked_object_sorter_param_path
      • object_recognition_tracking_radar_tracked_object_lanelet_filter_param_path
      • object_recognition_tracking_object_merger_data_association_matrix_param_path
      • object_recognition_tracking_object_merger_node_param_path
      • mode [default: lidar]
      • use_radar_tracking_fusion [default: false]
      • use_multi_channel_tracker_merger
      • use_validator
      • use_short_range_detection
      • lidar_detection_model_type [default: centerpoint]
      • input/merged_detection/channel [default: detected_objects]
      • input/merged_detection/objects [default: /perception/object_recognition/detection/objects]
      • input/lidar_dnn/channel [default: lidar_$(var lidar_detection_model_type)]
      • input/lidar_dnn/objects [default: /perception/object_recognition/detection/$(var lidar_detection_model_type)/objects]
      • input/lidar_dnn_validated/objects [default: /perception/object_recognition/detection/$(var lidar_detection_model_type)/validation/objects]
      • input/lidar_dnn_short_range/channel [default: lidar_$(var lidar_short_range_detection_model_type)]
      • input/lidar_dnn_short_range/objects [default: /perception/object_recognition/detection/$(var lidar_short_range_detection_model_type)/objects]
      • input/camera_lidar_rule_detector/channel [default: camera_lidar_fusion]
      • input/camera_lidar_rule_detector/objects [default: /perception/object_recognition/detection/clustering/camera_lidar_fusion/objects]
      • input/irregular_object_detector/channel [default: camera_lidar_fusion_irregular]
      • input/irregular_object_detector/objects [default: /perception/object_recognition/detection/irregular_object/objects]
      • input/tracker_based_detector/channel [default: detection_by_tracker]
      • input/tracker_based_detector/objects [default: /perception/object_recognition/detection/detection_by_tracker/objects]
      • input/radar/channel [default: radar]
      • input/radar/far_objects [default: /perception/object_recognition/detection/radar/far_objects]
      • input/radar/objects [default: /perception/object_recognition/detection/radar/objects]
      • input/radar/tracked_objects [default: /sensing/radar/tracked_objects]
      • output/objects [default: $(var ns)/objects]
  • launch/occupancy_grid_map/probabilistic_occupancy_grid_map.launch.xml
      • input/obstacle_pointcloud [default: concatenated/pointcloud]
      • input/raw_pointcloud [default: no_ground/oneshot/pointcloud]
      • output [default: /perception/occupancy_grid_map/map]
      • use_intra_process [default: false]
      • use_multithread [default: false]
      • pointcloud_container_name [default: pointcloud_container]
      • occupancy_grid_map_method
      • occupancy_grid_map_param_path
      • occupancy_grid_map_updater
      • occupancy_grid_map_updater_param_path
      • input_obstacle_pointcloud [default: false]
      • input_obstacle_and_raw_pointcloud [default: true]
      • use_pointcloud_container [default: true]
  • launch/perception.launch.xml
      • object_recognition_detection_euclidean_cluster_param_path
      • object_recognition_detection_outlier_param_path
      • object_recognition_detection_object_lanelet_filter_param_path
      • object_recognition_detection_object_position_filter_param_path
      • object_recognition_detection_pointcloud_map_filter_param_path
      • object_recognition_prediction_map_based_prediction_param_path
      • object_recognition_detection_object_merger_data_association_matrix_param_path
      • ml_camera_lidar_object_association_merger_param_path
      • object_recognition_detection_object_merger_distance_threshold_list_path
      • object_recognition_detection_fusion_sync_param_path
      • object_recognition_detection_roi_cluster_fusion_param_path
      • object_recognition_detection_irregular_object_detector_param_path
      • object_recognition_detection_roi_detected_object_fusion_param_path
      • object_recognition_detection_pointpainting_fusion_common_param_path
      • object_recognition_detection_lidar_model_param_path
      • object_recognition_detection_radar_lanelet_filtering_range_param_path
      • object_recognition_detection_object_velocity_splitter_radar_param_path
      • object_recognition_detection_object_velocity_splitter_radar_fusion_param_path
      • object_recognition_detection_object_range_splitter_radar_param_path
      • object_recognition_detection_object_range_splitter_radar_fusion_param_path
      • object_recognition_tracking_multi_object_tracker_data_association_matrix_param_path
      • object_recognition_tracking_multi_object_tracker_input_channels_param_path
      • object_recognition_tracking_multi_object_tracker_node_param_path
      • object_recognition_tracking_radar_tracked_object_sorter_param_path
      • object_recognition_tracking_radar_tracked_object_lanelet_filter_param_path
      • obstacle_segmentation_ground_segmentation_param_path
      • obstacle_segmentation_ground_segmentation_elevation_map_param_path
      • object_recognition_detection_obstacle_pointcloud_based_validator_param_path
      • object_recognition_detection_detection_by_tracker_param
      • occupancy_grid_map_method
      • occupancy_grid_map_param_path
      • occupancy_grid_map_updater
      • occupancy_grid_map_updater_param_path
      • lidar_detection_model
      • each_traffic_light_map_based_detector_param_path
      • traffic_light_fine_detector_param_path
      • yolox_traffic_light_detector_param_path
      • car_traffic_light_classifier_param_path
      • pedestrian_traffic_light_classifier_param_path
      • traffic_light_roi_visualizer_param_path
      • traffic_light_occlusion_predictor_param_path
      • traffic_light_multi_camera_fusion_param_path
      • traffic_light_arbiter_param_path
      • crosswalk_traffic_light_estimator_param_path
      • lidar_detection_model_type [default: $(eval "'$(var lidar_detection_model)'.split('/')[0]")]
      • lidar_detection_model_name [default: $(eval "'$(var lidar_detection_model)'.split('/')[1] if '/' in '$(var lidar_detection_model)' else ''")]
      • use_short_range_detection [default: false]
      • lidar_short_range_detection_model_type [default: centerpoint_short_range]
      • lidar_short_range_detection_model_name [default: centerpoint_short_range]
      • bevfusion_model_path [default: $(var data_path)/bevfusion]
      • centerpoint_model_path [default: $(var data_path)/lidar_centerpoint]
      • transfusion_model_path [default: $(var data_path)/lidar_transfusion]
      • short_range_centerpoint_model_path [default: $(var data_path)/lidar_short_range_centerpoint]
      • pointpainting_model_path [default: $(var data_path)/image_projection_based_fusion]
      • input/pointcloud [default: /sensing/lidar/concatenated/pointcloud]
      • mode [default: camera_lidar_fusion]
      • data_path [default: $(env HOME)/autoware_data]
      • lidar_detection_model_type [default: $(var lidar_detection_model_type)]
      • lidar_detection_model_name [default: $(var lidar_detection_model_name)]
      • image_raw0 [default: /sensing/camera/camera0/image_rect_color]
      • camera_info0 [default: /sensing/camera/camera0/camera_info]
      • detection_rois0 [default: /perception/object_recognition/detection/rois0]
      • image_raw1 [default: /sensing/camera/camera1/image_rect_color]
      • camera_info1 [default: /sensing/camera/camera1/camera_info]
      • detection_rois1 [default: /perception/object_recognition/detection/rois1]
      • image_raw2 [default: /sensing/camera/camera2/image_rect_color]
      • camera_info2 [default: /sensing/camera/camera2/camera_info]
      • detection_rois2 [default: /perception/object_recognition/detection/rois2]
      • image_raw3 [default: /sensing/camera/camera3/image_rect_color]
      • camera_info3 [default: /sensing/camera/camera3/camera_info]
      • detection_rois3 [default: /perception/object_recognition/detection/rois3]
      • image_raw4 [default: /sensing/camera/camera4/image_rect_color]
      • camera_info4 [default: /sensing/camera/camera4/camera_info]
      • detection_rois4 [default: /perception/object_recognition/detection/rois4]
      • image_raw5 [default: /sensing/camera/camera5/image_rect_color]
      • camera_info5 [default: /sensing/camera/camera5/camera_info]
      • detection_rois5 [default: /perception/object_recognition/detection/rois5]
      • image_raw6 [default: /sensing/camera/camera6/image_rect_color]
      • camera_info6 [default: /sensing/camera/camera6/camera_info]
      • detection_rois6 [default: /perception/object_recognition/detection/rois6]
      • image_raw7 [default: /sensing/camera/camera7/image_rect_color]
      • camera_info7 [default: /sensing/camera/camera7/camera_info]
      • detection_rois7 [default: /perception/object_recognition/detection/rois7]
      • image_raw8 [default: /sensing/camera/camera8/image_rect_color]
      • camera_info8 [default: /sensing/camera/camera8/camera_info]
      • detection_rois8 [default: /perception/object_recognition/detection/rois8]
      • image_number [default: 6]
      • image_topic_name [default: image_rect_color]
      • segmentation_pointcloud_fusion_camera_ids [default: [0,1,5]]
      • ml_camera_lidar_merger_priority_mode [default: 0]
      • pointcloud_container_name [default: pointcloud_container]
      • use_vector_map [default: true]
      • use_pointcloud_map [default: true]
      • use_low_height_cropbox [default: true]
      • use_object_filter [default: true]
      • objects_filter_method [default: lanelet_filter]
      • use_irregular_object_detector [default: true]
      • use_low_intensity_cluster_filter [default: true]
      • use_image_segmentation_based_filter [default: false]
      • use_empty_dynamic_object_publisher [default: false]
      • use_object_validator [default: true]
      • objects_validation_method [default: obstacle_pointcloud]
      • use_perception_online_evaluator [default: false]
      • use_perception_analytics_publisher [default: true]
      • use_obstacle_segmentation_single_frame_filter
      • use_obstacle_segmentation_time_series_filter
      • use_traffic_light_recognition
      • traffic_light_recognition/fusion_only
      • traffic_light_recognition/camera_namespaces
      • traffic_light_recognition/use_high_accuracy_detection
      • traffic_light_recognition/high_accuracy_detection_type
      • traffic_light_recognition/whole_image_detection/model_path
      • traffic_light_recognition/whole_image_detection/label_path
      • traffic_light_recognition/fine_detection/model_path
      • traffic_light_recognition/fine_detection/label_path
      • traffic_light_recognition/classification/car/model_path
      • traffic_light_recognition/classification/car/label_path
      • traffic_light_recognition/classification/pedestrian/model_path
      • traffic_light_recognition/classification/pedestrian/label_path
      • use_detection_by_tracker [default: true]
      • use_radar_tracking_fusion [default: true]
      • input/radar [default: /sensing/radar/detected_objects]
      • use_multi_channel_tracker_merger [default: false]
      • downsample_perception_common_pointcloud [default: false]
      • common_downsample_voxel_size_x [default: 0.05]
      • common_downsample_voxel_size_y [default: 0.05]
      • common_downsample_voxel_size_z [default: 0.05]
  • launch/traffic_light_recognition/traffic_light.launch.xml
      • enable_image_decompressor [default: true]
      • fusion_only
      • camera_namespaces
      • use_high_accuracy_detection
      • high_accuracy_detection_type
      • each_traffic_light_map_based_detector_param_path
      • traffic_light_fine_detector_param_path
      • yolox_traffic_light_detector_param_path
      • car_traffic_light_classifier_param_path
      • pedestrian_traffic_light_classifier_param_path
      • traffic_light_roi_visualizer_param_path
      • traffic_light_occlusion_predictor_param_path
      • traffic_light_multi_camera_fusion_param_path
      • traffic_light_arbiter_param_path
      • crosswalk_traffic_light_estimator_param_path
      • whole_image_detection/model_path
      • whole_image_detection/label_path
      • fine_detection/model_path
      • fine_detection/label_path
      • classification/car/model_path
      • classification/car/label_path
      • classification/pedestrian/model_path
      • classification/pedestrian/label_path
      • input/vector_map [default: /map/vector_map]
      • input/route [default: /planning/mission_planning/route]
      • input/cloud [default: /sensing/lidar/top/pointcloud_raw_ex]
      • internal/traffic_signals [default: /perception/traffic_light_recognition/internal/traffic_signals]
      • external/traffic_signals [default: /perception/traffic_light_recognition/external/traffic_signals]
      • judged/traffic_signals [default: /perception/traffic_light_recognition/judged/traffic_signals]
      • output/traffic_signals [default: /perception/traffic_light_recognition/traffic_signals]
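
The arguments above that are listed without a [default: ...] value must be supplied by the caller. As a minimal usage sketch, following the same include pattern as the Usage section, this is how the top-level perception.launch.xml might be included with a few of the listed arguments overridden. The mode options come from the Usage example; the model and topic values here are illustrative, and the required *_param_path arguments are elided with "..." rather than guessed:

  <include file="$(find-pkg-share tier4_perception_launch)/launch/perception.launch.xml">
    <!-- options for mode: camera_lidar_fusion, lidar, camera -->
    <arg name="mode" value="camera_lidar_fusion"/>
    <!-- illustrative value; split into type/name by the $(eval ...) defaults listed above -->
    <arg name="lidar_detection_model" value="centerpoint/centerpoint"/>
    <arg name="input/pointcloud" value="/sensing/lidar/concatenated/pointcloud"/>
    <arg name="image_number" value="2"/>
    <arg name="image_raw0" value="/sensing/camera/camera0/image_rect_color"/>
    <arg name="camera_info0" value="/sensing/camera/camera0/camera_info"/>
    <!-- every PACKAGE_param_path argument without a default must also be provided -->
    <arg name="object_recognition_detection_euclidean_cluster_param_path" value="..."/>
    ...
  </include>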
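The sub-launch files can also be included individually. A sketch for traffic_light.launch.xml under the same assumptions; camera_namespaces takes a list, the values shown are illustrative, and the parameter paths are again elided:

  <include file="$(find-pkg-share tier4_perception_launch)/launch/traffic_light_recognition/traffic_light.launch.xml">
    <arg name="fusion_only" value="false"/>
    <!-- camera namespaces used for traffic light recognition (illustrative) -->
    <arg name="camera_namespaces" value="[camera6, camera7]"/>
    <arg name="use_high_accuracy_detection" value="true"/>
    <arg name="high_accuracy_detection_type" value="..."/>
    <arg name="each_traffic_light_map_based_detector_param_path" value="..."/>
    ...
  </include>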

Messages

No message files found.

Services

No service files found.

Plugins

No plugins found.


ROS Distro
github

Package Summary

Tags: No category tags.
Version: 0.47.0
License: Apache License 2.0
Build type: AMENT_CMAKE
Use: RECOMMENDED

Repository Summary

Checkout URI: https://github.com/autowarefoundation/autoware_universe.git
VCS Type: git
VCS Version: main
Last Updated: 2025-08-16
Dev Status: UNKNOWN
Released: UNRELEASED
Tags: planner ros calibration self-driving-car autonomous-driving autonomous-vehicles ros2 3d-map autoware

Package Description

The tier4_perception_launch package

Additional Links

No additional links.

Maintainers

  • Yukihiro Saito
  • Yoshi Ri
  • Taekjin Lee
  • Masato Saeki

Authors

No additional authors.

tier4_perception_launch

Structure

(Structure diagram of the tier4_perception_launch launch tree not reproduced in this extract.)

Package Dependencies

Please see <exec_depend> in package.xml.
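
For orientation, a minimal sketch of what such declarations look like in package.xml; the dependency names below are illustrative picks from this repository, not a quote of the actual manifest:

  <?xml version="1.0"?>
  <package format="3">
    <name>tier4_perception_launch</name>
    <!-- run-time (launch-time) dependencies are declared with exec_depend -->
    <exec_depend>autoware_euclidean_cluster</exec_depend>
    <exec_depend>autoware_multi_object_tracker</exec_depend>
    <exec_depend>autoware_probabilistic_occupancy_grid_map</exec_depend>
  </package>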

Usage

To use perception.launch.xml, include it from your *.launch.xml file as follows.

Note that you must pass each parameter file path as a PACKAGE_param_path argument; the full list of required parameter paths is given at the top of perception.launch.xml.

  <include file="$(find-pkg-share tier4_perception_launch)/launch/perception.launch.xml">
    <!-- options for mode: camera_lidar_fusion, lidar, camera -->
    <arg name="mode" value="lidar" />

    <!-- Parameter files -->
    <arg name="FOO_param_path" value="..."/>
    <arg name="BAR_param_path" value="..."/>
    ...
  </include>
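
As a slightly more concrete sketch, the FOO/BAR placeholders stand for the parameter-path arguments listed under launch/perception.launch.xml below. For example (the YAML location shown is an assumption for illustration, not taken from the package):

  <include file="$(find-pkg-share tier4_perception_launch)/launch/perception.launch.xml">
    <arg name="mode" value="camera_lidar_fusion" />
    <arg name="object_recognition_detection_euclidean_cluster_param_path"
         value="$(find-pkg-share autoware_euclidean_cluster)/config/euclidean_cluster.param.yaml" />
    <!-- ...and likewise for every other *_param_path argument required at the top of perception.launch.xml -->
  </include>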

CHANGELOG

Changelog for package tier4_perception_launch

0.47.0 (2025-08-11)

  • feat(perception_online_evaluator): add functionality to publish perception analytics info (#11089)

    * feat: add functionality to calculate perception metrics for MOB in autoware_perception_online_evaluator; chore: configure settings for MOB metrics calculation

    * feat: change implementation from one topic per metric to all metrics published in one topic for better management by the metric agent; refactor: rename FrameMetrics members to clarify variable meaning; refactor: use array/vector instead of unordered_map for FrameMetrics for better performance; chore: remap published topic name to match msg conventions

    • fix: unittest error
    • style(pre-commit): autofix
    • refactor: replace MOB keyword with generalized expression of perception analytics
    • chore: improve comment

    * refactor: add a new autoware_perception_analytics_publisher_node to publish perception analytics info instead of the previous autoware_perception_online_evaluator_node; chore: modify default launch setting to match the refactoring

    • style(pre-commit): autofix

    * fix: add initialization for latencies_; fix: use tf at the objects' timestamp instead of the latest; feat: use ConstSharedPtr to avoid repeated copies of large messages in PerceptionAnalyticsCalculator::setPredictedObjects. Co-authored-by: Jian Kang <jian.kang@tier4.jp>, pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>

  • fix(multi_object_tracker): add irregular objects topic (#11102)

    • fix(multi_object_tracker): add irregular objects topic
    • fix: change channel order

    * Update launch/tier4_perception_launch/launch/object_recognition/tracking/tracking.launch.xml Co-authored-by: Taekjin LEE <technolojin@gmail.com>

    * Update launch/tier4_perception_launch/launch/object_recognition/tracking/tracking.launch.xml Co-authored-by: Taekjin LEE <technolojin@gmail.com>

    * Update perception/autoware_multi_object_tracker/config/input_channels.param.yaml Co-authored-by: Taekjin LEE <technolojin@gmail.com>

    * Update launch/tier4_perception_launch/launch/object_recognition/tracking/tracking.launch.xml Co-authored-by: Taekjin LEE <technolojin@gmail.com>

    • fix: unused channels
    • fix: schema
    • docs: update readme
    • style(pre-commit): autofix
    • fix: short name

    * feat: add lidar_centerpoint_short_range input channel with default flags. Co-authored-by: Taekjin LEE <technolojin@gmail.com>, pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>, Taekjin LEE <taekjin.lee@tier4.jp>

  • chore: sync files (#11091) Co-authored-by: github-actions <github-actions@github.com>, M. Fatih Cırıt <mfc@autoware.org>, pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>

  • fix(autoware_object_merger): add merger priority_mode (#11042)

    * fix: add merger priority_mode; fix: add priority mode into launch; fix: add class-based priority matrix; fix: adjust priority matrix

    • fix: add Confidence mode support
    • docs: schema update
    • fix: launch

    * fix: schema json

  • feat(tier4_perception_launch): add missing remappings to launch file (#11037)

  • feat(autoware_bevdet): implementation of bevdet using tensorrt (#10441)

  • feat(tracking): add short range detection support and update related

File truncated at 100 lines; see the full file.

System Dependencies

No direct system dependencies.

Launch files

  • launch/object_recognition/detection/detection.launch.xml
      • mode
      • lidar_detection_model_type
      • lidar_detection_model_name
      • use_short_range_detection
      • lidar_short_range_detection_model_type
      • lidar_short_range_detection_model_name
      • use_object_filter
      • objects_filter_method
      • use_pointcloud_map
      • use_detection_by_tracker
      • use_validator
      • objects_validation_method
      • use_low_intensity_cluster_filter
      • use_image_segmentation_based_filter
      • use_multi_channel_tracker_merger
      • use_radar_tracking_fusion
      • use_irregular_object_detector
      • irregular_object_detector_fusion_camera_ids [default: [0]]
      • ml_camera_lidar_merger_priority_mode
      • number_of_cameras
      • node/pointcloud_container
      • input/pointcloud
      • input/obstacle_segmentation/pointcloud [default: /perception/obstacle_segmentation/pointcloud]
      • input/camera0/image
      • input/camera0/info
      • input/camera0/rois
      • input/camera1/image
      • input/camera1/info
      • input/camera1/rois
      • input/camera2/image
      • input/camera2/info
      • input/camera2/rois
      • input/camera3/image
      • input/camera3/info
      • input/camera3/rois
      • input/camera4/image
      • input/camera4/info
      • input/camera4/rois
      • input/camera5/image
      • input/camera5/info
      • input/camera5/rois
      • input/camera6/image
      • input/camera6/info
      • input/camera6/rois
      • input/camera7/image
      • input/camera7/info
      • input/camera7/rois
      • input/camera8/image
      • input/camera8/info
      • input/camera8/rois
      • image_topic_name
      • segmentation_pointcloud_fusion_camera_ids
      • input/radar
      • input/tracked_objects [default: /perception/object_recognition/tracking/objects]
      • output/objects [default: objects]
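
A hedged example of including detection.launch.xml directly, using only argument names from the list above; the values shown mirror documented options and defaults and may need adjusting for a real setup:

  <include file="$(find-pkg-share tier4_perception_launch)/launch/object_recognition/detection/detection.launch.xml">
    <arg name="mode" value="lidar" />
    <arg name="lidar_detection_model_type" value="centerpoint" />
    <arg name="use_detection_by_tracker" value="true" />
    <arg name="input/pointcloud" value="/sensing/lidar/concatenated/pointcloud" />
    <arg name="node/pointcloud_container" value="pointcloud_container" />
  </include>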
  • launch/object_recognition/detection/detector/camera_bev_detector.launch.xml
      • input/camera0/image
      • input/camera0/info
      • input/camera1/image
      • input/camera1/info
      • input/camera2/image
      • input/camera2/info
      • input/camera3/image
      • input/camera3/info
      • input/camera4/image
      • input/camera4/info
      • input/camera5/image
      • input/camera5/info
      • input/camera6/image
      • input/camera6/info
      • input/camera7/image
      • input/camera7/info
      • output/objects
      • number_of_cameras
      • data_path [default: $(env HOME)/autoware_data]
      • bevdet_model_name [default: bevdet_one_lt_d]
      • bevdet_model_path [default: $(var data_path)/tensorrt_bevdet]
  • launch/object_recognition/detection/detector/camera_lidar_detector.launch.xml
      • ns
      • lidar_detection_model_type
      • lidar_detection_model_name
      • use_low_intensity_cluster_filter
      • use_image_segmentation_based_filter
      • number_of_cameras
      • input/camera0/image
      • input/camera0/info
      • input/camera0/rois
      • input/camera1/image
      • input/camera1/info
      • input/camera1/rois
      • input/camera2/image
      • input/camera2/info
      • input/camera2/rois
      • input/camera3/image
      • input/camera3/info
      • input/camera3/rois
      • input/camera4/image
      • input/camera4/info
      • input/camera4/rois
      • input/camera5/image
      • input/camera5/info
      • input/camera5/rois
      • input/camera6/image
      • input/camera6/info
      • input/camera6/rois
      • input/camera7/image
      • input/camera7/info
      • input/camera7/rois
      • input/camera8/image
      • input/camera8/info
      • input/camera8/rois
      • segmentation_pointcloud_fusion_camera_ids
      • image_topic_name
      • node/pointcloud_container
      • input/pointcloud
      • input/pointcloud_map/pointcloud
      • input/obstacle_segmentation/pointcloud
      • output/ml_detector/objects
      • output/rule_detector/objects
      • output/clustering/cluster_objects
  • launch/object_recognition/detection/detector/camera_lidar_irregular_object_detector.launch.xml
      • ns
      • pipeline_ns
      • input/pointcloud
      • fusion_camera_ids [default: [0]]
      • image_topic_name [default: image_raw]
      • irregular_object_detector_param_path
  • launch/object_recognition/detection/detector/lidar_dnn_detector.launch.xml
      • lidar_detection_model_type
      • lidar_detection_model_name
      • bevfusion_model_path [default: $(var data_path)/bevfusion]
      • centerpoint_model_path [default: $(var data_path)/lidar_centerpoint]
      • transfusion_model_path [default: $(var data_path)/lidar_transfusion]
      • use_short_range_detection [default: false]
      • lidar_short_range_detection_model_type
      • lidar_short_range_detection_model_name
      • short_range_centerpoint_model_path [default: $(var data_path)/lidar_short_range_centerpoint]
      • node/pointcloud_container
      • input/pointcloud
      • output/objects
      • output/short_range_objects
      • lidar_short_range_model_param_path [default: $(find-pkg-share autoware_lidar_centerpoint)/config]
      • lidar_model_param_path [default: $(find-pkg-share autoware_bevfusion)/config]
      • lidar_model_param_path [default: $(find-pkg-share autoware_lidar_transfusion)/config]
      • lidar_model_param_path [default: $(find-pkg-share autoware_lidar_centerpoint)/config]
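
Note that lidar_model_param_path is listed three times with different defaults; presumably the launch file declares it conditionally on lidar_detection_model_type, so only the matching default applies. A minimal sketch of selecting the TransFusion variant (the param path mirrors the default above; the model name is a placeholder):

  <include file="$(find-pkg-share tier4_perception_launch)/launch/object_recognition/detection/detector/lidar_dnn_detector.launch.xml">
    <arg name="lidar_detection_model_type" value="transfusion" />
    <arg name="lidar_detection_model_name" value="..." />
    <arg name="lidar_model_param_path" value="$(find-pkg-share autoware_lidar_transfusion)/config" />
    <arg name="input/pointcloud" value="/sensing/lidar/concatenated/pointcloud" />
    <arg name="output/objects" value="objects" />
  </include>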
  • launch/object_recognition/detection/detector/lidar_rule_detector.launch.xml
      • ns
      • node/pointcloud_container
      • input/pointcloud_map/pointcloud
      • input/obstacle_segmentation/pointcloud
      • output/cluster_objects
      • output/objects
  • launch/object_recognition/detection/detector/tracker_based_detector.launch.xml
      • input/clusters
      • input/tracked_objects
      • output/objects
  • launch/object_recognition/detection/filter/object_filter.launch.xml
      • objects_filter_method [default: lanelet_filter]
      • input/objects
      • output/objects
  • launch/object_recognition/detection/filter/object_validator.launch.xml
      • objects_validation_method
      • input/obstacle_pointcloud
      • input/objects
      • output/objects
  • launch/object_recognition/detection/filter/radar_filter.launch.xml
      • object_velocity_splitter_param_path [default: $(var object_recognition_detection_object_velocity_splitter_radar_param_path)]
      • object_range_splitter_param_path [default: $(var object_recognition_detection_object_range_splitter_radar_param_path)]
      • radar_lanelet_filtering_range_param_path [default: $(find-pkg-share autoware_detected_object_validation)/config/object_lanelet_filter.param.yaml]
      • input/radar
      • output/objects
  • launch/object_recognition/detection/merger/camera_lidar_merger.launch.xml
      • object_recognition_detection_object_merger_data_association_matrix_param_path [default: $(find-pkg-share autoware_object_merger)/config/data_association_matrix.param.yaml]
      • object_recognition_detection_object_merger_distance_threshold_list_path [default: $(find-pkg-share autoware_object_merger)/config/overlapped_judge.param.yaml]
      • ml_camera_lidar_object_association_merger_param_path [default: $(find-pkg-share autoware_object_merger)/config/object_association_merger.param.yaml]
      • lidar_detection_model_type
      • use_detection_by_tracker
      • use_irregular_object_detector
      • use_object_filter
      • objects_filter_method
      • number_of_cameras
      • input/camera0/image
      • input/camera0/info
      • input/camera0/rois
      • input/camera1/image
      • input/camera1/info
      • input/camera1/rois
      • input/camera2/image
      • input/camera2/info
      • input/camera2/rois
      • input/camera3/image
      • input/camera3/info
      • input/camera3/rois
      • input/camera4/image
      • input/camera4/info
      • input/camera4/rois
      • input/camera5/image
      • input/camera5/info
      • input/camera5/rois
      • input/camera6/image
      • input/camera6/info
      • input/camera6/rois
      • input/camera7/image
      • input/camera7/info
      • input/camera7/rois
      • input/camera8/image
      • input/camera8/info
      • input/camera8/rois
      • input/lidar_ml/objects
      • input/lidar_rule/objects
      • input/detection_by_tracker/objects
      • output/objects [default: objects]
      • alpha_merger_priority_mode [default: 0]
  • launch/object_recognition/detection/merger/camera_lidar_radar_merger.launch.xml
      • object_recognition_detection_object_merger_data_association_matrix_param_path [default: $(find-pkg-share autoware_object_merger)/config/data_association_matrix.param.yaml]
      • object_recognition_detection_object_merger_distance_threshold_list_path [default: $(find-pkg-share autoware_object_merger)/config/overlapped_judge.param.yaml]
      • ml_camera_lidar_object_association_merger_param_path [default: $(find-pkg-share autoware_object_merger)/config/object_association_merger.param.yaml]
      • far_object_merger_sync_queue_size [default: 20]
      • lidar_detection_model_type
      • use_radar_tracking_fusion
      • use_detection_by_tracker
      • use_irregular_object_detector
      • use_object_filter
      • objects_filter_method
      • number_of_cameras
      • input/camera0/image
      • input/camera0/info
      • input/camera0/rois
      • input/camera1/image
      • input/camera1/info
      • input/camera1/rois
      • input/camera2/image
      • input/camera2/info
      • input/camera2/rois
      • input/camera3/image
      • input/camera3/info
      • input/camera3/rois
      • input/camera4/image
      • input/camera4/info
      • input/camera4/rois
      • input/camera5/image
      • input/camera5/info
      • input/camera5/rois
      • input/camera6/image
      • input/camera6/info
      • input/camera6/rois
      • input/camera7/image
      • input/camera7/info
      • input/camera7/rois
      • input/camera8/image
      • input/camera8/info
      • input/camera8/rois
      • input/lidar_ml/objects
      • input/lidar_rule/objects
      • input/radar/objects
      • input/radar_far/objects
      • input/detection_by_tracker/objects
      • output/objects [default: objects]
      • alpha_merger_priority_mode [default: 0]
  • launch/object_recognition/detection/merger/lidar_merger.launch.xml
      • object_recognition_detection_object_merger_data_association_matrix_param_path [default: $(find-pkg-share autoware_object_merger)/config/data_association_matrix.param.yaml]
      • object_recognition_detection_object_merger_distance_threshold_list_path [default: $(find-pkg-share autoware_object_merger)/config/overlapped_judge.param.yaml]
      • lidar_detection_model_type
      • use_detection_by_tracker
      • use_object_filter
      • objects_filter_method
      • input/lidar_ml/objects [default: $(var lidar_detection_model_type)/objects]
      • input/lidar_rule/objects [default: clustering/objects]
      • input/detection_by_tracker/objects [default: detection_by_tracker/objects]
      • output/objects
  • launch/object_recognition/prediction/prediction.launch.xml
      • use_vector_map [default: false]
      • input/objects [default: /perception/object_recognition/tracking/objects]
  • launch/object_recognition/tracking/tracking.launch.xml
      • object_recognition_tracking_radar_tracked_object_sorter_param_path
      • object_recognition_tracking_radar_tracked_object_lanelet_filter_param_path
      • object_recognition_tracking_object_merger_data_association_matrix_param_path
      • object_recognition_tracking_object_merger_node_param_path
      • mode [default: lidar]
      • use_radar_tracking_fusion [default: false]
      • use_multi_channel_tracker_merger
      • use_validator
      • use_short_range_detection
      • lidar_detection_model_type [default: centerpoint]
      • input/merged_detection/channel [default: detected_objects]
      • input/merged_detection/objects [default: /perception/object_recognition/detection/objects]
      • input/lidar_dnn/channel [default: lidar_$(var lidar_detection_model_type)]
      • input/lidar_dnn/objects [default: /perception/object_recognition/detection/$(var lidar_detection_model_type)/objects]
      • input/lidar_dnn_validated/objects [default: /perception/object_recognition/detection/$(var lidar_detection_model_type)/validation/objects]
      • input/lidar_dnn_short_range/channel [default: lidar_$(var lidar_short_range_detection_model_type)]
      • input/lidar_dnn_short_range/objects [default: /perception/object_recognition/detection/$(var lidar_short_range_detection_model_type)/objects]
      • input/camera_lidar_rule_detector/channel [default: camera_lidar_fusion]
      • input/camera_lidar_rule_detector/objects [default: /perception/object_recognition/detection/clustering/camera_lidar_fusion/objects]
      • input/irregular_object_detector/channel [default: camera_lidar_fusion_irregular]
      • input/irregular_object_detector/objects [default: /perception/object_recognition/detection/irregular_object/objects]
      • input/tracker_based_detector/channel [default: detection_by_tracker]
      • input/tracker_based_detector/objects [default: /perception/object_recognition/detection/detection_by_tracker/objects]
      • input/radar/channel [default: radar]
      • input/radar/far_objects [default: /perception/object_recognition/detection/radar/far_objects]
      • input/radar/objects [default: /perception/object_recognition/detection/radar/objects]
      • input/radar/tracked_objects [default: /sensing/radar/tracked_objects]
      • output/objects [default: $(var ns)/objects]
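
As with the other launch files, tracking.launch.xml can be included on its own; a hedged sketch using the argument names and defaults above (the *_param_path values are placeholders in the style of the Usage example):

  <include file="$(find-pkg-share tier4_perception_launch)/launch/object_recognition/tracking/tracking.launch.xml">
    <arg name="mode" value="lidar" />
    <arg name="lidar_detection_model_type" value="centerpoint" />
    <arg name="use_multi_channel_tracker_merger" value="true" />
    <arg name="object_recognition_tracking_radar_tracked_object_sorter_param_path" value="..." />
    <arg name="object_recognition_tracking_radar_tracked_object_lanelet_filter_param_path" value="..." />
    <arg name="object_recognition_tracking_object_merger_data_association_matrix_param_path" value="..." />
    <arg name="object_recognition_tracking_object_merger_node_param_path" value="..." />
  </include>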
  • launch/occupancy_grid_map/probabilistic_occupancy_grid_map.launch.xml
      • input/obstacle_pointcloud [default: concatenated/pointcloud]
      • input/raw_pointcloud [default: no_ground/oneshot/pointcloud]
      • output [default: /perception/occupancy_grid_map/map]
      • use_intra_process [default: false]
      • use_multithread [default: false]
      • pointcloud_container_name [default: pointcloud_container]
      • occupancy_grid_map_method
      • occupancy_grid_map_param_path
      • occupancy_grid_map_updater
      • occupancy_grid_map_updater_param_path
      • input_obstacle_pointcloud [default: false]
      • input_obstacle_and_raw_pointcloud [default: true]
      • use_pointcloud_container [default: true]
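
The occupancy grid map pipeline is configured entirely through these arguments; occupancy_grid_map_method and occupancy_grid_map_updater carry no defaults here, so both (plus their parameter files) apparently must be supplied. A minimal sketch with placeholder values:

  <include file="$(find-pkg-share tier4_perception_launch)/launch/occupancy_grid_map/probabilistic_occupancy_grid_map.launch.xml">
    <arg name="occupancy_grid_map_method" value="..." />
    <arg name="occupancy_grid_map_param_path" value="..." />
    <arg name="occupancy_grid_map_updater" value="..." />
    <arg name="occupancy_grid_map_updater_param_path" value="..." />
    <arg name="output" value="/perception/occupancy_grid_map/map" />
  </include>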
  • launch/perception.launch.xml
      • object_recognition_detection_euclidean_cluster_param_path
      • object_recognition_detection_outlier_param_path
      • object_recognition_detection_object_lanelet_filter_param_path
      • object_recognition_detection_object_position_filter_param_path
      • object_recognition_detection_pointcloud_map_filter_param_path
      • object_recognition_prediction_map_based_prediction_param_path
      • object_recognition_detection_object_merger_data_association_matrix_param_path
      • ml_camera_lidar_object_association_merger_param_path
      • object_recognition_detection_object_merger_distance_threshold_list_path
      • object_recognition_detection_fusion_sync_param_path
      • object_recognition_detection_roi_cluster_fusion_param_path
      • object_recognition_detection_irregular_object_detector_param_path
      • object_recognition_detection_roi_detected_object_fusion_param_path
      • object_recognition_detection_pointpainting_fusion_common_param_path
      • object_recognition_detection_lidar_model_param_path
      • object_recognition_detection_radar_lanelet_filtering_range_param_path
      • object_recognition_detection_object_velocity_splitter_radar_param_path
      • object_recognition_detection_object_velocity_splitter_radar_fusion_param_path
      • object_recognition_detection_object_range_splitter_radar_param_path
      • object_recognition_detection_object_range_splitter_radar_fusion_param_path
      • object_recognition_tracking_multi_object_tracker_data_association_matrix_param_path
      • object_recognition_tracking_multi_object_tracker_input_channels_param_path
      • object_recognition_tracking_multi_object_tracker_node_param_path
      • object_recognition_tracking_radar_tracked_object_sorter_param_path
      • object_recognition_tracking_radar_tracked_object_lanelet_filter_param_path
      • obstacle_segmentation_ground_segmentation_param_path
      • obstacle_segmentation_ground_segmentation_elevation_map_param_path
      • object_recognition_detection_obstacle_pointcloud_based_validator_param_path
      • object_recognition_detection_detection_by_tracker_param
      • occupancy_grid_map_method
      • occupancy_grid_map_param_path
      • occupancy_grid_map_updater
      • occupancy_grid_map_updater_param_path
      • lidar_detection_model
      • each_traffic_light_map_based_detector_param_path
      • traffic_light_fine_detector_param_path
      • yolox_traffic_light_detector_param_path
      • car_traffic_light_classifier_param_path
      • pedestrian_traffic_light_classifier_param_path
      • traffic_light_roi_visualizer_param_path
      • traffic_light_occlusion_predictor_param_path
      • traffic_light_multi_camera_fusion_param_path
      • traffic_light_arbiter_param_path
      • crosswalk_traffic_light_estimator_param_path
      • lidar_detection_model_type [default: $(eval "'$(var lidar_detection_model)'.split('/')[0]")]
      • lidar_detection_model_name [default: $(eval "'$(var lidar_detection_model)'.split('/')[1] if '/' in '$(var lidar_detection_model)' else ''")]
      • use_short_range_detection [default: false]
      • lidar_short_range_detection_model_type [default: centerpoint_short_range]
      • lidar_short_range_detection_model_name [default: centerpoint_short_range]
      • bevfusion_model_path [default: $(var data_path)/bevfusion]
      • centerpoint_model_path [default: $(var data_path)/lidar_centerpoint]
      • transfusion_model_path [default: $(var data_path)/lidar_transfusion]
      • short_range_centerpoint_model_path [default: $(var data_path)/lidar_short_range_centerpoint]
      • pointpainting_model_path [default: $(var data_path)/image_projection_based_fusion]
      • input/pointcloud [default: /sensing/lidar/concatenated/pointcloud]
      • mode [default: camera_lidar_fusion]
      • data_path [default: $(env HOME)/autoware_data]
      • lidar_detection_model_type [default: $(var lidar_detection_model_type)]
      • lidar_detection_model_name [default: $(var lidar_detection_model_name)]
      • image_raw0 [default: /sensing/camera/camera0/image_rect_color]
      • camera_info0 [default: /sensing/camera/camera0/camera_info]
      • detection_rois0 [default: /perception/object_recognition/detection/rois0]
      • image_raw1 [default: /sensing/camera/camera1/image_rect_color]
      • camera_info1 [default: /sensing/camera/camera1/camera_info]
      • detection_rois1 [default: /perception/object_recognition/detection/rois1]
      • image_raw2 [default: /sensing/camera/camera2/image_rect_color]
      • camera_info2 [default: /sensing/camera/camera2/camera_info]
      • detection_rois2 [default: /perception/object_recognition/detection/rois2]
      • image_raw3 [default: /sensing/camera/camera3/image_rect_color]
      • camera_info3 [default: /sensing/camera/camera3/camera_info]
      • detection_rois3 [default: /perception/object_recognition/detection/rois3]
      • image_raw4 [default: /sensing/camera/camera4/image_rect_color]
      • camera_info4 [default: /sensing/camera/camera4/camera_info]
      • detection_rois4 [default: /perception/object_recognition/detection/rois4]
      • image_raw5 [default: /sensing/camera/camera5/image_rect_color]
      • camera_info5 [default: /sensing/camera/camera5/camera_info]
      • detection_rois5 [default: /perception/object_recognition/detection/rois5]
      • image_raw6 [default: /sensing/camera/camera6/image_rect_color]
      • camera_info6 [default: /sensing/camera/camera6/camera_info]
      • detection_rois6 [default: /perception/object_recognition/detection/rois6]
      • image_raw7 [default: /sensing/camera/camera7/image_rect_color]
      • camera_info7 [default: /sensing/camera/camera7/camera_info]
      • detection_rois7 [default: /perception/object_recognition/detection/rois7]
      • image_raw8 [default: /sensing/camera/camera8/image_rect_color]
      • camera_info8 [default: /sensing/camera/camera8/camera_info]
      • detection_rois8 [default: /perception/object_recognition/detection/rois8]
      • image_number [default: 6]
      • image_topic_name [default: image_rect_color]
      • segmentation_pointcloud_fusion_camera_ids [default: [0,1,5]]
      • ml_camera_lidar_merger_priority_mode [default: 0]
      • pointcloud_container_name [default: pointcloud_container]
      • use_vector_map [default: true]
      • use_pointcloud_map [default: true]
      • use_low_height_cropbox [default: true]
      • use_object_filter [default: true]
      • objects_filter_method [default: lanelet_filter]
      • use_irregular_object_detector [default: true]
      • use_low_intensity_cluster_filter [default: true]
      • use_image_segmentation_based_filter [default: false]
      • use_empty_dynamic_object_publisher [default: false]
      • use_object_validator [default: true]
      • objects_validation_method [default: obstacle_pointcloud]
      • use_perception_online_evaluator [default: false]
      • use_perception_analytics_publisher [default: true]
      • use_obstacle_segmentation_single_frame_filter
      • use_obstacle_segmentation_time_series_filter
      • use_traffic_light_recognition
      • traffic_light_recognition/fusion_only
      • traffic_light_recognition/camera_namespaces
      • traffic_light_recognition/use_high_accuracy_detection
      • traffic_light_recognition/high_accuracy_detection_type
      • traffic_light_recognition/whole_image_detection/model_path
      • traffic_light_recognition/whole_image_detection/label_path
      • traffic_light_recognition/fine_detection/model_path
      • traffic_light_recognition/fine_detection/label_path
      • traffic_light_recognition/classification/car/model_path
      • traffic_light_recognition/classification/car/label_path
      • traffic_light_recognition/classification/pedestrian/model_path
      • traffic_light_recognition/classification/pedestrian/label_path
      • use_detection_by_tracker [default: true]
      • use_radar_tracking_fusion [default: true]
      • input/radar [default: /sensing/radar/detected_objects]
      • use_multi_channel_tracker_merger [default: false]
      • downsample_perception_common_pointcloud [default: false]
      • common_downsample_voxel_size_x [default: 0.05]
      • common_downsample_voxel_size_y [default: 0.05]
      • common_downsample_voxel_size_z [default: 0.05]
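
Tying this list back to the Usage section: beyond the required *_param_path arguments, most behavior is toggled through the use_* flags and per-camera topic remaps. A hedged example that runs camera-only perception on two cameras with traffic light recognition disabled (topic values are the documented defaults):

  <include file="$(find-pkg-share tier4_perception_launch)/launch/perception.launch.xml">
    <arg name="mode" value="camera" />
    <arg name="image_number" value="2" />
    <arg name="image_raw0" value="/sensing/camera/camera0/image_rect_color" />
    <arg name="camera_info0" value="/sensing/camera/camera0/camera_info" />
    <arg name="image_raw1" value="/sensing/camera/camera1/image_rect_color" />
    <arg name="camera_info1" value="/sensing/camera/camera1/camera_info" />
    <arg name="use_traffic_light_recognition" value="false" />
    <!-- required *_param_path arguments omitted; see the list above -->
  </include>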
  • launch/traffic_light_recognition/traffic_light.launch.xml
      • enable_image_decompressor [default: true]
      • fusion_only
      • camera_namespaces
      • use_high_accuracy_detection
      • high_accuracy_detection_type
      • each_traffic_light_map_based_detector_param_path
      • traffic_light_fine_detector_param_path
      • yolox_traffic_light_detector_param_path
      • car_traffic_light_classifier_param_path
      • pedestrian_traffic_light_classifier_param_path
      • traffic_light_roi_visualizer_param_path
      • traffic_light_occlusion_predictor_param_path
      • traffic_light_multi_camera_fusion_param_path
      • traffic_light_arbiter_param_path
      • crosswalk_traffic_light_estimator_param_path
      • whole_image_detection/model_path
      • whole_image_detection/label_path
      • fine_detection/model_path
      • fine_detection/label_path
      • classification/car/model_path
      • classification/car/label_path
      • classification/pedestrian/model_path
      • classification/pedestrian/label_path
      • input/vector_map [default: /map/vector_map]
      • input/route [default: /planning/mission_planning/route]
      • input/cloud [default: /sensing/lidar/top/pointcloud_raw_ex]
      • internal/traffic_signals [default: /perception/traffic_light_recognition/internal/traffic_signals]
      • external/traffic_signals [default: /perception/traffic_light_recognition/external/traffic_signals]
      • judged/traffic_signals [default: /perception/traffic_light_recognition/judged/traffic_signals]
      • output/traffic_signals [default: /perception/traffic_light_recognition/traffic_signals]
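
Finally, a hedged sketch for traffic_light.launch.xml; which arguments are actually required likely depends on the chosen detection type, so the values below are placeholders except where the list above gives a default:

  <include file="$(find-pkg-share tier4_perception_launch)/launch/traffic_light_recognition/traffic_light.launch.xml">
    <arg name="fusion_only" value="false" />
    <arg name="camera_namespaces" value="..." />
    <arg name="use_high_accuracy_detection" value="true" />
    <arg name="high_accuracy_detection_type" value="..." />
    <arg name="input/vector_map" value="/map/vector_map" />
    <!-- detector/classifier *_param_path and model/label path arguments omitted; see the list above -->
  </include>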

Messages

No message files found.

Services

No service files found.

Plugins

No plugins found.

Recent questions tagged tier4_perception_launch at Robotics Stack Exchange

Package symbol

tier4_perception_launch package from autoware_universe repo

autoware_agnocast_wrapper autoware_auto_common autoware_boundary_departure_checker autoware_component_interface_specs_universe autoware_component_interface_tools autoware_component_interface_utils autoware_cuda_dependency_meta autoware_fake_test_node autoware_glog_component autoware_goal_distance_calculator autoware_grid_map_utils autoware_path_distance_calculator autoware_polar_grid autoware_time_utils autoware_traffic_light_recognition_marker_publisher autoware_traffic_light_utils autoware_universe_utils tier4_api_utils autoware_autonomous_emergency_braking autoware_collision_detector autoware_control_command_gate autoware_control_performance_analysis autoware_control_validator autoware_external_cmd_selector autoware_joy_controller autoware_lane_departure_checker autoware_mpc_lateral_controller autoware_obstacle_collision_checker autoware_operation_mode_transition_manager autoware_pid_longitudinal_controller autoware_predicted_path_checker autoware_pure_pursuit autoware_shift_decider autoware_smart_mpc_trajectory_follower autoware_stop_mode_operator autoware_trajectory_follower_base autoware_trajectory_follower_node autoware_vehicle_cmd_gate autoware_control_evaluator autoware_kinematic_evaluator autoware_localization_evaluator autoware_perception_online_evaluator autoware_planning_evaluator autoware_scenario_simulator_v2_adapter autoware_diagnostic_graph_test_examples tier4_autoware_api_launch tier4_control_launch tier4_localization_launch tier4_map_launch tier4_perception_launch tier4_planning_launch tier4_sensing_launch tier4_simulator_launch tier4_system_launch tier4_vehicle_launch autoware_geo_pose_projector autoware_ar_tag_based_localizer autoware_landmark_manager autoware_lidar_marker_localizer autoware_localization_error_monitor autoware_pose2twist autoware_pose_covariance_modifier autoware_pose_estimator_arbiter autoware_pose_instability_detector yabloc_common yabloc_image_processing yabloc_monitor yabloc_particle_filter yabloc_pose_initializer autoware_map_tf_generator autoware_bevfusion autoware_bytetrack autoware_cluster_merger autoware_compare_map_segmentation autoware_crosswalk_traffic_light_estimator autoware_detected_object_feature_remover autoware_detected_object_validation autoware_detection_by_tracker autoware_elevation_map_loader autoware_euclidean_cluster autoware_ground_segmentation autoware_image_projection_based_fusion autoware_lidar_apollo_instance_segmentation autoware_lidar_centerpoint autoware_lidar_transfusion autoware_map_based_prediction autoware_multi_object_tracker autoware_object_merger autoware_object_range_splitter autoware_object_sorter autoware_object_velocity_splitter autoware_occupancy_grid_map_outlier_filter autoware_probabilistic_occupancy_grid_map autoware_radar_fusion_to_detected_object autoware_radar_object_tracker autoware_radar_tracks_msgs_converter autoware_raindrop_cluster_filter autoware_shape_estimation autoware_simpl_prediction autoware_simple_object_merger autoware_tensorrt_bevdet autoware_tensorrt_classifier autoware_tensorrt_common autoware_tensorrt_plugins autoware_tensorrt_yolox autoware_tracking_object_merger autoware_traffic_light_arbiter autoware_traffic_light_category_merger autoware_traffic_light_classifier autoware_traffic_light_fine_detector autoware_traffic_light_map_based_detector autoware_traffic_light_multi_camera_fusion autoware_traffic_light_occlusion_predictor autoware_traffic_light_selector autoware_traffic_light_visualization perception_utils autoware_costmap_generator autoware_diffusion_planner 
autoware_external_velocity_limit_selector autoware_freespace_planner autoware_freespace_planning_algorithms autoware_hazard_lights_selector autoware_mission_planner_universe autoware_path_optimizer autoware_path_smoother autoware_remaining_distance_time_calculator autoware_rtc_interface autoware_scenario_selector autoware_surround_obstacle_checker autoware_behavior_path_avoidance_by_lane_change_module autoware_behavior_path_bidirectional_traffic_module autoware_behavior_path_dynamic_obstacle_avoidance_module autoware_behavior_path_external_request_lane_change_module autoware_behavior_path_goal_planner_module autoware_behavior_path_lane_change_module autoware_behavior_path_planner autoware_behavior_path_planner_common autoware_behavior_path_sampling_planner_module autoware_behavior_path_side_shift_module autoware_behavior_path_start_planner_module autoware_behavior_path_static_obstacle_avoidance_module autoware_behavior_velocity_blind_spot_module autoware_behavior_velocity_crosswalk_module autoware_behavior_velocity_detection_area_module autoware_behavior_velocity_intersection_module autoware_behavior_velocity_no_drivable_lane_module autoware_behavior_velocity_no_stopping_area_module autoware_behavior_velocity_occlusion_spot_module autoware_behavior_velocity_rtc_interface autoware_behavior_velocity_run_out_module autoware_behavior_velocity_speed_bump_module autoware_behavior_velocity_template_module autoware_behavior_velocity_traffic_light_module autoware_behavior_velocity_virtual_traffic_light_module autoware_behavior_velocity_walkway_module autoware_motion_velocity_boundary_departure_prevention_module autoware_motion_velocity_dynamic_obstacle_stop_module autoware_motion_velocity_obstacle_cruise_module autoware_motion_velocity_obstacle_slow_down_module autoware_motion_velocity_obstacle_velocity_limiter_module autoware_motion_velocity_out_of_lane_module autoware_motion_velocity_road_user_stop_module autoware_motion_velocity_run_out_module autoware_planning_validator autoware_planning_validator_intersection_collision_checker autoware_planning_validator_latency_checker autoware_planning_validator_rear_collision_checker autoware_planning_validator_test_utils autoware_planning_validator_trajectory_checker autoware_bezier_sampler autoware_frenet_planner autoware_path_sampler autoware_sampler_common autoware_cuda_pointcloud_preprocessor autoware_cuda_utils autoware_image_diagnostics autoware_image_transport_decompressor autoware_imu_corrector autoware_pcl_extensions autoware_pointcloud_preprocessor autoware_radar_objects_adapter autoware_radar_scan_to_pointcloud2 autoware_radar_static_pointcloud_filter autoware_radar_threshold_filter autoware_radar_tracks_noise_filter autoware_livox_tag_filter autoware_carla_interface autoware_dummy_perception_publisher autoware_fault_injection autoware_learning_based_vehicle_model autoware_simple_planning_simulator autoware_vehicle_door_simulator tier4_dummy_object_rviz_plugin autoware_bluetooth_monitor autoware_command_mode_decider autoware_command_mode_decider_plugins autoware_command_mode_switcher autoware_command_mode_switcher_plugins autoware_command_mode_types autoware_component_monitor autoware_component_state_monitor autoware_adapi_visualizers autoware_automatic_pose_initializer autoware_default_adapi_universe autoware_diagnostic_graph_aggregator autoware_diagnostic_graph_utils autoware_dummy_diag_publisher autoware_dummy_infrastructure autoware_duplicated_node_checker autoware_hazard_status_converter autoware_mrm_comfortable_stop_operator 
autoware_mrm_emergency_stop_operator autoware_mrm_handler autoware_pipeline_latency_monitor autoware_processing_time_checker autoware_system_monitor autoware_topic_relay_controller autoware_topic_state_monitor autoware_velodyne_monitor reaction_analyzer autoware_accel_brake_map_calibrator autoware_external_cmd_converter autoware_raw_vehicle_cmd_converter autoware_steer_offset_estimator autoware_bag_time_manager_rviz_plugin autoware_traffic_light_rviz_plugin tier4_adapi_rviz_plugin tier4_camera_view_rviz_plugin tier4_control_mode_rviz_plugin tier4_datetime_rviz_plugin tier4_perception_rviz_plugin tier4_planning_factor_rviz_plugin tier4_state_rviz_plugin tier4_system_rviz_plugin tier4_traffic_light_rviz_plugin tier4_vehicle_rviz_plugin

ROS Distro
github

Package Summary

Tags No category tags.
Version 0.47.0
License Apache License 2.0
Build type AMENT_CMAKE
Use RECOMMENDED

Repository Summary

Description
Checkout URI https://github.com/autowarefoundation/autoware_universe.git
VCS Type git
VCS Version main
Last Updated 2025-08-16
Dev Status UNKNOWN
Released UNRELEASED
Tags planner ros calibration self-driving-car autonomous-driving autonomous-vehicles ros2 3d-map autoware
Contributing Help Wanted (-)
Good First Issues (-)
Pull Requests to Review (-)

Package Description

The tier4_perception_launch package

Additional Links

No additional links.

Maintainers

  • Yukihiro Saito
  • Yoshi Ri
  • Taekjin Lee
  • Masato Saeki

Authors

No additional authors.

tier4_perception_launch

Structure

tier4_perception_launch

Package Dependencies

Please see <exec_depend> in package.xml.

Usage

You can include as follows in *.launch.xml to use perception.launch.xml.

Note that you should provide parameter paths as PACKAGE_param_path. The list of parameter paths you should provide is written at the top of perception.launch.xml.

  <include file="$(find-pkg-share tier4_perception_launch)/launch/perception.launch.xml">
    <!-- options for mode: camera_lidar_fusion, lidar, camera -->
    <arg name="mode" value="lidar" />

    <!-- Parameter files -->
    <arg name="FOO_param_path" value="..."/>
    <arg name="BAR_param_path" value="..."/>
    ...
  </include>

CHANGELOG

Changelog for package tier4_perception_launch

0.47.0 (2025-08-11)

  • feat(perception_online_evaluator): add functionality to publish perception analytics info (#11089)

    * feat: add functionality to calculate perception metrics for MOB in autoware_perception_online_evaluator chore: configure settings for mob metrics calculation

    * feat: change implementation from one topic per metric to all metrics published in one metric for better management by metric agent refactor: rename FrameMetrics member to clarify variable meaning refactor: use array/vector instead of unorder_map for FrameMetrics for better performance chore: remap published topic name to match msg conventions

    • fix: unittest error
    • style(pre-commit): autofix
    • refactor: replace MOB keyword with generalized expression of perception analytics
    • chore: improve comment

    * refactor: add a new autoware_perception_analytics_publisher_node to publish perception analytics info instead of using previous autoware_perception_online_evaluator_node chore: modify default launch setting to match the refactoring

    • style(pre-commit): autofix

    * fix: add initialization for [latencies_]{.title-ref} fix: use tf of objects timestamp instead of latest feat: use ConstSharedPtr to avoid repeated copy of large message in [PerceptionAnalyticsCalculator::setPredictedObjects]{.title-ref} ---------Co-authored-by: Jian Kang <<jian.kang@tier4.jp>> Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>

  • fix(multi_object_tracker): add irregular objects topic (#11102)

    • fix(multi_object_tracker): add irregular objects topic
    • fix: change channel order

    * Update launch/tier4_perception_launch/launch/object_recognition/tracking/tracking.launch.xml Co-authored-by: Taekjin LEE <<technolojin@gmail.com>>

    * Update launch/tier4_perception_launch/launch/object_recognition/tracking/tracking.launch.xml Co-authored-by: Taekjin LEE <<technolojin@gmail.com>>

    * Update perception/autoware_multi_object_tracker/config/input_channels.param.yaml Co-authored-by: Taekjin LEE <<technolojin@gmail.com>>

    * Update launch/tier4_perception_launch/launch/object_recognition/tracking/tracking.launch.xml Co-authored-by: Taekjin LEE <<technolojin@gmail.com>>

    • fix: unused channels
    • fix: schema
    • docs: update readme
    • style(pre-commit): autofix
    • fix: short name

    * feat: add lidar_centerpoint_short_range input channel with default flags ---------Co-authored-by: Taekjin LEE <<technolojin@gmail.com>> Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com> Co-authored-by: Taekjin LEE <<taekjin.lee@tier4.jp>>

  • chore: sync files (#11091) Co-authored-by: github-actions <<github-actions@github.com>> Co-authored-by: M. Fatih Cırıt <<mfc@autoware.org>> Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>

  • fix(autoware_object_merger): add merger priority_mode (#11042)

    * fix: add merger priority_mode fix: add priority mode into launch fix: add class based priority matrix fix: adjust priority matrix

    • fix: add Confidence mode support
    • docs: schema update
    • fix: launch

    * fix: schema json ---------

  • feat(tier4_perception_launch): add missing remappings to launch file (#11037)

  • feat(autoware_bevdet): implementation of bevdet using tensorrt (#10441)

  • feat(tracking): add short range detection support and update related

File truncated at 100 lines see the full file

Package Dependencies

System Dependencies

No direct system dependencies.

Launch files

  • launch/object_recognition/detection/detection.launch.xml
      • mode
      • lidar_detection_model_type
      • lidar_detection_model_name
      • use_short_range_detection
      • lidar_short_range_detection_model_type
      • lidar_short_range_detection_model_name
      • use_object_filter
      • objects_filter_method
      • use_pointcloud_map
      • use_detection_by_tracker
      • use_validator
      • objects_validation_method
      • use_low_intensity_cluster_filter
      • use_image_segmentation_based_filter
      • use_multi_channel_tracker_merger
      • use_radar_tracking_fusion
      • use_irregular_object_detector
      • irregular_object_detector_fusion_camera_ids [default: [0]]
      • ml_camera_lidar_merger_priority_mode
      • number_of_cameras
      • node/pointcloud_container
      • input/pointcloud
      • input/obstacle_segmentation/pointcloud [default: /perception/obstacle_segmentation/pointcloud]
      • input/camera0/image
      • input/camera0/info
      • input/camera0/rois
      • input/camera1/image
      • input/camera1/info
      • input/camera1/rois
      • input/camera2/image
      • input/camera2/info
      • input/camera2/rois
      • input/camera3/image
      • input/camera3/info
      • input/camera3/rois
      • input/camera4/image
      • input/camera4/info
      • input/camera4/rois
      • input/camera5/image
      • input/camera5/info
      • input/camera5/rois
      • input/camera6/image
      • input/camera6/info
      • input/camera6/rois
      • input/camera7/image
      • input/camera7/info
      • input/camera7/rois
      • input/camera8/image
      • input/camera8/info
      • input/camera8/rois
      • image_topic_name
      • segmentation_pointcloud_fusion_camera_ids
      • input/radar
      • input/tracked_objects [default: /perception/object_recognition/tracking/objects]
      • output/objects [default: objects]
  • launch/object_recognition/detection/detector/camera_bev_detector.launch.xml
      • input/camera0/image
      • input/camera0/info
      • input/camera1/image
      • input/camera1/info
      • input/camera2/image
      • input/camera2/info
      • input/camera3/image
      • input/camera3/info
      • input/camera4/image
      • input/camera4/info
      • input/camera5/image
      • input/camera5/info
      • input/camera6/image
      • input/camera6/info
      • input/camera7/image
      • input/camera7/info
      • output/objects
      • number_of_cameras
      • data_path [default: $(env HOME)/autoware_data]
      • bevdet_model_name [default: bevdet_one_lt_d]
      • bevdet_model_path [default: $(var data_path)/tensorrt_bevdet]
  • launch/object_recognition/detection/detector/camera_lidar_detector.launch.xml
      • ns
      • lidar_detection_model_type
      • lidar_detection_model_name
      • use_low_intensity_cluster_filter
      • use_image_segmentation_based_filter
      • number_of_cameras
      • input/camera0/image
      • input/camera0/info
      • input/camera0/rois
      • input/camera1/image
      • input/camera1/info
      • input/camera1/rois
      • input/camera2/image
      • input/camera2/info
      • input/camera2/rois
      • input/camera3/image
      • input/camera3/info
      • input/camera3/rois
      • input/camera4/image
      • input/camera4/info
      • input/camera4/rois
      • input/camera5/image
      • input/camera5/info
      • input/camera5/rois
      • input/camera6/image
      • input/camera6/info
      • input/camera6/rois
      • input/camera7/image
      • input/camera7/info
      • input/camera7/rois
      • input/camera8/image
      • input/camera8/info
      • input/camera8/rois
      • segmentation_pointcloud_fusion_camera_ids
      • image_topic_name
      • node/pointcloud_container
      • input/pointcloud
      • input/pointcloud_map/pointcloud
      • input/obstacle_segmentation/pointcloud
      • output/ml_detector/objects
      • output/rule_detector/objects
      • output/clustering/cluster_objects
  • launch/object_recognition/detection/detector/camera_lidar_irregular_object_detector.launch.xml
      • ns
      • pipeline_ns
      • input/pointcloud
      • fusion_camera_ids [default: [0]]
      • image_topic_name [default: image_raw]
      • irregular_object_detector_param_path
  • launch/object_recognition/detection/detector/lidar_dnn_detector.launch.xml
      • lidar_detection_model_type
      • lidar_detection_model_name
      • bevfusion_model_path [default: $(var data_path)/bevfusion]
      • centerpoint_model_path [default: $(var data_path)/lidar_centerpoint]
      • transfusion_model_path [default: $(var data_path)/lidar_transfusion]
      • use_short_range_detection [default: false]
      • lidar_short_range_detection_model_type
      • lidar_short_range_detection_model_name
      • short_range_centerpoint_model_path [default: $(var data_path)/lidar_short_range_centerpoint]
      • node/pointcloud_container
      • input/pointcloud
      • output/objects
      • output/short_range_objects
      • lidar_short_range_model_param_path [default: $(find-pkg-share autoware_lidar_centerpoint)/config]
      • lidar_model_param_path [default: $(find-pkg-share autoware_bevfusion)/config]
      • lidar_model_param_path [default: $(find-pkg-share autoware_lidar_transfusion)/config]
      • lidar_model_param_path [default: $(find-pkg-share autoware_lidar_centerpoint)/config]
  • launch/object_recognition/detection/detector/lidar_rule_detector.launch.xml
      • ns
      • node/pointcloud_container
      • input/pointcloud_map/pointcloud
      • input/obstacle_segmentation/pointcloud
      • output/cluster_objects
      • output/objects
  • launch/object_recognition/detection/detector/tracker_based_detector.launch.xml
      • input/clusters
      • input/tracked_objects
      • output/objects
  • launch/object_recognition/detection/filter/object_filter.launch.xml
      • objects_filter_method [default: lanelet_filter]
      • input/objects
      • output/objects
  • launch/object_recognition/detection/filter/object_validator.launch.xml
      • objects_validation_method
      • input/obstacle_pointcloud
      • input/objects
      • output/objects
  • launch/object_recognition/detection/filter/radar_filter.launch.xml
      • object_velocity_splitter_param_path [default: $(var object_recognition_detection_object_velocity_splitter_radar_param_path)]
      • object_range_splitter_param_path [default: $(var object_recognition_detection_object_range_splitter_radar_param_path)]
      • radar_lanelet_filtering_range_param_path [default: $(find-pkg-share autoware_detected_object_validation)/config/object_lanelet_filter.param.yaml]
      • input/radar
      • output/objects
  • launch/object_recognition/detection/merger/camera_lidar_merger.launch.xml
      • object_recognition_detection_object_merger_data_association_matrix_param_path [default: $(find-pkg-share autoware_object_merger)/config/data_association_matrix.param.yaml]
      • object_recognition_detection_object_merger_distance_threshold_list_path [default: $(find-pkg-share autoware_object_merger)/config/overlapped_judge.param.yaml]
      • ml_camera_lidar_object_association_merger_param_path [default: $(find-pkg-share autoware_object_merger)/config/object_association_merger.param.yaml]
      • lidar_detection_model_type
      • use_detection_by_tracker
      • use_irregular_object_detector
      • use_object_filter
      • objects_filter_method
      • number_of_cameras
      • input/camera0/image
      • input/camera0/info
      • input/camera0/rois
      • input/camera1/image
      • input/camera1/info
      • input/camera1/rois
      • input/camera2/image
      • input/camera2/info
      • input/camera2/rois
      • input/camera3/image
      • input/camera3/info
      • input/camera3/rois
      • input/camera4/image
      • input/camera4/info
      • input/camera4/rois
      • input/camera5/image
      • input/camera5/info
      • input/camera5/rois
      • input/camera6/image
      • input/camera6/info
      • input/camera6/rois
      • input/camera7/image
      • input/camera7/info
      • input/camera7/rois
      • input/camera8/image
      • input/camera8/info
      • input/camera8/rois
      • input/lidar_ml/objects
      • input/lidar_rule/objects
      • input/detection_by_tracker/objects
      • output/objects [default: objects]
      • alpha_merger_priority_mode [default: 0]
  • launch/object_recognition/detection/merger/camera_lidar_radar_merger.launch.xml
      • object_recognition_detection_object_merger_data_association_matrix_param_path [default: $(find-pkg-share autoware_object_merger)/config/data_association_matrix.param.yaml]
      • object_recognition_detection_object_merger_distance_threshold_list_path [default: $(find-pkg-share autoware_object_merger)/config/overlapped_judge.param.yaml]
      • ml_camera_lidar_object_association_merger_param_path [default: $(find-pkg-share autoware_object_merger)/config/object_association_merger.param.yaml]
      • far_object_merger_sync_queue_size [default: 20]
      • lidar_detection_model_type
      • use_radar_tracking_fusion
      • use_detection_by_tracker
      • use_irregular_object_detector
      • use_object_filter
      • objects_filter_method
      • number_of_cameras
      • input/camera0/image
      • input/camera0/info
      • input/camera0/rois
      • input/camera1/image
      • input/camera1/info
      • input/camera1/rois
      • input/camera2/image
      • input/camera2/info
      • input/camera2/rois
      • input/camera3/image
      • input/camera3/info
      • input/camera3/rois
      • input/camera4/image
      • input/camera4/info
      • input/camera4/rois
      • input/camera5/image
      • input/camera5/info
      • input/camera5/rois
      • input/camera6/image
      • input/camera6/info
      • input/camera6/rois
      • input/camera7/image
      • input/camera7/info
      • input/camera7/rois
      • input/camera8/image
      • input/camera8/info
      • input/camera8/rois
      • input/lidar_ml/objects
      • input/lidar_rule/objects
      • input/radar/objects
      • input/radar_far/objects
      • input/detection_by_tracker/objects
      • output/objects [default: objects]
      • alpha_merger_priority_mode [default: 0]
  • launch/object_recognition/detection/merger/lidar_merger.launch.xml (see the include example after this list)
      • object_recognition_detection_object_merger_data_association_matrix_param_path [default: $(find-pkg-share autoware_object_merger)/config/data_association_matrix.param.yaml]
      • object_recognition_detection_object_merger_distance_threshold_list_path [default: $(find-pkg-share autoware_object_merger)/config/overlapped_judge.param.yaml]
      • lidar_detection_model_type
      • use_detection_by_tracker
      • use_object_filter
      • objects_filter_method
      • input/lidar_ml/objects [default: $(var lidar_detection_model_type)/objects]
      • input/lidar_rule/objects [default: clustering/objects]
      • input/detection_by_tracker/objects [default: detection_by_tracker/objects]
      • output/objects
  • launch/object_recognition/prediction/prediction.launch.xml
      • use_vector_map [default: false]
      • input/objects [default: /perception/object_recognition/tracking/objects]
  • launch/object_recognition/tracking/tracking.launch.xml
      • object_recognition_tracking_radar_tracked_object_sorter_param_path
      • object_recognition_tracking_radar_tracked_object_lanelet_filter_param_path
      • object_recognition_tracking_object_merger_data_association_matrix_param_path
      • object_recognition_tracking_object_merger_node_param_path
      • mode [default: lidar]
      • use_radar_tracking_fusion [default: false]
      • use_multi_channel_tracker_merger
      • use_validator
      • use_short_range_detection
      • lidar_detection_model_type [default: centerpoint]
      • input/merged_detection/channel [default: detected_objects]
      • input/merged_detection/objects [default: /perception/object_recognition/detection/objects]
      • input/lidar_dnn/channel [default: lidar_$(var lidar_detection_model_type)]
      • input/lidar_dnn/objects [default: /perception/object_recognition/detection/$(var lidar_detection_model_type)/objects]
      • input/lidar_dnn_validated/objects [default: /perception/object_recognition/detection/$(var lidar_detection_model_type)/validation/objects]
      • input/lidar_dnn_short_range/channel [default: lidar_$(var lidar_short_range_detection_model_type)]
      • input/lidar_dnn_short_range/objects [default: /perception/object_recognition/detection/$(var lidar_short_range_detection_model_type)/objects]
      • input/camera_lidar_rule_detector/channel [default: camera_lidar_fusion]
      • input/camera_lidar_rule_detector/objects [default: /perception/object_recognition/detection/clustering/camera_lidar_fusion/objects]
      • input/irregular_object_detector/channel [default: camera_lidar_fusion_irregular]
      • input/irregular_object_detector/objects [default: /perception/object_recognition/detection/irregular_object/objects]
      • input/tracker_based_detector/channel [default: detection_by_tracker]
      • input/tracker_based_detector/objects [default: /perception/object_recognition/detection/detection_by_tracker/objects]
      • input/radar/channel [default: radar]
      • input/radar/far_objects [default: /perception/object_recognition/detection/radar/far_objects]
      • input/radar/objects [default: /perception/object_recognition/detection/radar/objects]
      • input/radar/tracked_objects [default: /sensing/radar/tracked_objects]
      • output/objects [default: $(var ns)/objects]
  • launch/occupancy_grid_map/probabilistic_occupancy_grid_map.launch.xml (see the include example after this list)
      • input/obstacle_pointcloud [default: concatenated/pointcloud]
      • input/raw_pointcloud [default: no_ground/oneshot/pointcloud]
      • output [default: /perception/occupancy_grid_map/map]
      • use_intra_process [default: false]
      • use_multithread [default: false]
      • pointcloud_container_name [default: pointcloud_container]
      • occupancy_grid_map_method
      • occupancy_grid_map_param_path
      • occupancy_grid_map_updater
      • occupancy_grid_map_updater_param_path
      • input_obstacle_pointcloud [default: false]
      • input_obstacle_and_raw_pointcloud [default: true]
      • use_pointcloud_container [default: true]
  • launch/perception.launch.xml (see the model-splitting example after this list)
      • object_recognition_detection_euclidean_cluster_param_path
      • object_recognition_detection_outlier_param_path
      • object_recognition_detection_object_lanelet_filter_param_path
      • object_recognition_detection_object_position_filter_param_path
      • object_recognition_detection_pointcloud_map_filter_param_path
      • object_recognition_prediction_map_based_prediction_param_path
      • object_recognition_detection_object_merger_data_association_matrix_param_path
      • ml_camera_lidar_object_association_merger_param_path
      • object_recognition_detection_object_merger_distance_threshold_list_path
      • object_recognition_detection_fusion_sync_param_path
      • object_recognition_detection_roi_cluster_fusion_param_path
      • object_recognition_detection_irregular_object_detector_param_path
      • object_recognition_detection_roi_detected_object_fusion_param_path
      • object_recognition_detection_pointpainting_fusion_common_param_path
      • object_recognition_detection_lidar_model_param_path
      • object_recognition_detection_radar_lanelet_filtering_range_param_path
      • object_recognition_detection_object_velocity_splitter_radar_param_path
      • object_recognition_detection_object_velocity_splitter_radar_fusion_param_path
      • object_recognition_detection_object_range_splitter_radar_param_path
      • object_recognition_detection_object_range_splitter_radar_fusion_param_path
      • object_recognition_tracking_multi_object_tracker_data_association_matrix_param_path
      • object_recognition_tracking_multi_object_tracker_input_channels_param_path
      • object_recognition_tracking_multi_object_tracker_node_param_path
      • object_recognition_tracking_radar_tracked_object_sorter_param_path
      • object_recognition_tracking_radar_tracked_object_lanelet_filter_param_path
      • obstacle_segmentation_ground_segmentation_param_path
      • obstacle_segmentation_ground_segmentation_elevation_map_param_path
      • object_recognition_detection_obstacle_pointcloud_based_validator_param_path
      • object_recognition_detection_detection_by_tracker_param
      • occupancy_grid_map_method
      • occupancy_grid_map_param_path
      • occupancy_grid_map_updater
      • occupancy_grid_map_updater_param_path
      • lidar_detection_model
      • each_traffic_light_map_based_detector_param_path
      • traffic_light_fine_detector_param_path
      • yolox_traffic_light_detector_param_path
      • car_traffic_light_classifier_param_path
      • pedestrian_traffic_light_classifier_param_path
      • traffic_light_roi_visualizer_param_path
      • traffic_light_occlusion_predictor_param_path
      • traffic_light_multi_camera_fusion_param_path
      • traffic_light_arbiter_param_path
      • crosswalk_traffic_light_estimator_param_path
      • lidar_detection_model_type [default: $(eval "'$(var lidar_detection_model)'.split('/')[0]")]
      • lidar_detection_model_name [default: $(eval "'$(var lidar_detection_model)'.split('/')[1] if '/' in '$(var lidar_detection_model)' else ''")]
      • use_short_range_detection [default: false]
      • lidar_short_range_detection_model_type [default: centerpoint_short_range]
      • lidar_short_range_detection_model_name [default: centerpoint_short_range]
      • bevfusion_model_path [default: $(var data_path)/bevfusion]
      • centerpoint_model_path [default: $(var data_path)/lidar_centerpoint]
      • transfusion_model_path [default: $(var data_path)/lidar_transfusion]
      • short_range_centerpoint_model_path [default: $(var data_path)/lidar_short_range_centerpoint]
      • pointpainting_model_path [default: $(var data_path)/image_projection_based_fusion]
      • input/pointcloud [default: /sensing/lidar/concatenated/pointcloud]
      • mode [default: camera_lidar_fusion]
      • data_path [default: $(env HOME)/autoware_data]
      • lidar_detection_model_type [default: $(var lidar_detection_model_type)]
      • lidar_detection_model_name [default: $(var lidar_detection_model_name)]
      • image_raw0 [default: /sensing/camera/camera0/image_rect_color]
      • camera_info0 [default: /sensing/camera/camera0/camera_info]
      • detection_rois0 [default: /perception/object_recognition/detection/rois0]
      • image_raw1 [default: /sensing/camera/camera1/image_rect_color]
      • camera_info1 [default: /sensing/camera/camera1/camera_info]
      • detection_rois1 [default: /perception/object_recognition/detection/rois1]
      • image_raw2 [default: /sensing/camera/camera2/image_rect_color]
      • camera_info2 [default: /sensing/camera/camera2/camera_info]
      • detection_rois2 [default: /perception/object_recognition/detection/rois2]
      • image_raw3 [default: /sensing/camera/camera3/image_rect_color]
      • camera_info3 [default: /sensing/camera/camera3/camera_info]
      • detection_rois3 [default: /perception/object_recognition/detection/rois3]
      • image_raw4 [default: /sensing/camera/camera4/image_rect_color]
      • camera_info4 [default: /sensing/camera/camera4/camera_info]
      • detection_rois4 [default: /perception/object_recognition/detection/rois4]
      • image_raw5 [default: /sensing/camera/camera5/image_rect_color]
      • camera_info5 [default: /sensing/camera/camera5/camera_info]
      • detection_rois5 [default: /perception/object_recognition/detection/rois5]
      • image_raw6 [default: /sensing/camera/camera6/image_rect_color]
      • camera_info6 [default: /sensing/camera/camera6/camera_info]
      • detection_rois6 [default: /perception/object_recognition/detection/rois6]
      • image_raw7 [default: /sensing/camera/camera7/image_rect_color]
      • camera_info7 [default: /sensing/camera/camera7/camera_info]
      • detection_rois7 [default: /perception/object_recognition/detection/rois7]
      • image_raw8 [default: /sensing/camera/camera8/image_rect_color]
      • camera_info8 [default: /sensing/camera/camera8/camera_info]
      • detection_rois8 [default: /perception/object_recognition/detection/rois8]
      • image_number [default: 6]
      • image_topic_name [default: image_rect_color]
      • segmentation_pointcloud_fusion_camera_ids [default: [0,1,5]]
      • ml_camera_lidar_merger_priority_mode [default: 0]
      • pointcloud_container_name [default: pointcloud_container]
      • use_vector_map [default: true]
      • use_pointcloud_map [default: true]
      • use_low_height_cropbox [default: true]
      • use_object_filter [default: true]
      • objects_filter_method [default: lanelet_filter]
      • use_irregular_object_detector [default: true]
      • use_low_intensity_cluster_filter [default: true]
      • use_image_segmentation_based_filter [default: false]
      • use_empty_dynamic_object_publisher [default: false]
      • use_object_validator [default: true]
      • objects_validation_method [default: obstacle_pointcloud]
      • use_perception_online_evaluator [default: false]
      • use_perception_analytics_publisher [default: true]
      • use_obstacle_segmentation_single_frame_filter
      • use_obstacle_segmentation_time_series_filter
      • use_traffic_light_recognition
      • traffic_light_recognition/fusion_only
      • traffic_light_recognition/camera_namespaces
      • traffic_light_recognition/use_high_accuracy_detection
      • traffic_light_recognition/high_accuracy_detection_type
      • traffic_light_recognition/whole_image_detection/model_path
      • traffic_light_recognition/whole_image_detection/label_path
      • traffic_light_recognition/fine_detection/model_path
      • traffic_light_recognition/fine_detection/label_path
      • traffic_light_recognition/classification/car/model_path
      • traffic_light_recognition/classification/car/label_path
      • traffic_light_recognition/classification/pedestrian/model_path
      • traffic_light_recognition/classification/pedestrian/label_path
      • use_detection_by_tracker [default: true]
      • use_radar_tracking_fusion [default: true]
      • input/radar [default: /sensing/radar/detected_objects]
      • use_multi_channel_tracker_merger [default: false]
      • downsample_perception_common_pointcloud [default: false]
      • common_downsample_voxel_size_x [default: 0.05]
      • common_downsample_voxel_size_y [default: 0.05]
      • common_downsample_voxel_size_z [default: 0.05]
  • launch/traffic_light_recognition/traffic_light.launch.xml
      • enable_image_decompressor [default: true]
      • fusion_only
      • camera_namespaces
      • use_high_accuracy_detection
      • high_accuracy_detection_type
      • each_traffic_light_map_based_detector_param_path
      • traffic_light_fine_detector_param_path
      • yolox_traffic_light_detector_param_path
      • car_traffic_light_classifier_param_path
      • pedestrian_traffic_light_classifier_param_path
      • traffic_light_roi_visualizer_param_path
      • traffic_light_occlusion_predictor_param_path
      • traffic_light_multi_camera_fusion_param_path
      • traffic_light_arbiter_param_path
      • crosswalk_traffic_light_estimator_param_path
      • whole_image_detection/model_path
      • whole_image_detection/label_path
      • fine_detection/model_path
      • fine_detection/label_path
      • classification/car/model_path
      • classification/car/label_path
      • classification/pedestrian/model_path
      • classification/pedestrian/label_path
      • input/vector_map [default: /map/vector_map]
      • input/route [default: /planning/mission_planning/route]
      • input/cloud [default: /sensing/lidar/top/pointcloud_raw_ex]
      • internal/traffic_signals [default: /perception/traffic_light_recognition/internal/traffic_signals]
      • external/traffic_signals [default: /perception/traffic_light_recognition/external/traffic_signals]
      • judged/traffic_signals [default: /perception/traffic_light_recognition/judged/traffic_signals]
      • output/traffic_signals [default: /perception/traffic_light_recognition/traffic_signals]
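
Each entry above follows the same pattern: arguments shown with [default: ...] may be omitted, while arguments without a default must be supplied by the caller. As a minimal sketch (values are illustrative; in a standard setup perception.launch.xml supplies these arguments for you), lidar_merger.launch.xml can be included on its own as follows:

  <include file="$(find-pkg-share tier4_perception_launch)/launch/object_recognition/detection/merger/lidar_merger.launch.xml">
    <!-- arguments without defaults in the listing above -->
    <arg name="lidar_detection_model_type" value="centerpoint"/>
    <arg name="use_detection_by_tracker" value="true"/>
    <arg name="use_object_filter" value="true"/>
    <arg name="objects_filter_method" value="lanelet_filter"/>
    <arg name="output/objects" value="objects"/>
    <!-- the param-path arguments and the input/* topics keep their listed defaults -->
  </include>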
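
Likewise, probabilistic_occupancy_grid_map.launch.xml takes a method/updater pair plus their parameter files. A sketch, assuming the pointcloud-based method and the binary Bayes filter updater from autoware_probabilistic_occupancy_grid_map (the file paths are placeholders):

  <include file="$(find-pkg-share tier4_perception_launch)/launch/occupancy_grid_map/probabilistic_occupancy_grid_map.launch.xml">
    <arg name="occupancy_grid_map_method" value="pointcloud_based_occupancy_grid_map"/>
    <arg name="occupancy_grid_map_param_path" value="/path/to/pointcloud_based_occupancy_grid_map.param.yaml"/>
    <arg name="occupancy_grid_map_updater" value="binary_bayes_filter"/>
    <arg name="occupancy_grid_map_updater_param_path" value="/path/to/binary_bayes_filter_updater.param.yaml"/>
    <!-- input/output topics keep their listed defaults -->
  </include>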
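
Finally, note how perception.launch.xml derives lidar_detection_model_type and lidar_detection_model_name from the single lidar_detection_model argument: the $(eval ...) defaults shown above split the value on "/". With an illustrative value of transfusion/transfusion_ml, the type resolves to transfusion and the name to transfusion_ml; a bare value such as centerpoint leaves the name empty:

  <include file="$(find-pkg-share tier4_perception_launch)/launch/perception.launch.xml">
    <arg name="mode" value="lidar"/>
    <!-- split into lidar_detection_model_type=transfusion, lidar_detection_model_name=transfusion_ml -->
    <arg name="lidar_detection_model" value="transfusion/transfusion_ml"/>
    <!-- required PACKAGE_param_path arguments omitted; see the listing above -->
    ...
  </include>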

Messages

No message files found.

Services

No service files found.

Plugins

No plugins found.

Recent questions tagged tier4_perception_launch at Robotics Stack Exchange

No version for distro galactic showing github. Known supported distros are highlighted in the buttons above.
Package symbol

tier4_perception_launch package from autoware_universe repo

autoware_agnocast_wrapper autoware_auto_common autoware_boundary_departure_checker autoware_component_interface_specs_universe autoware_component_interface_tools autoware_component_interface_utils autoware_cuda_dependency_meta autoware_fake_test_node autoware_glog_component autoware_goal_distance_calculator autoware_grid_map_utils autoware_path_distance_calculator autoware_polar_grid autoware_time_utils autoware_traffic_light_recognition_marker_publisher autoware_traffic_light_utils autoware_universe_utils tier4_api_utils autoware_autonomous_emergency_braking autoware_collision_detector autoware_control_command_gate autoware_control_performance_analysis autoware_control_validator autoware_external_cmd_selector autoware_joy_controller autoware_lane_departure_checker autoware_mpc_lateral_controller autoware_obstacle_collision_checker autoware_operation_mode_transition_manager autoware_pid_longitudinal_controller autoware_predicted_path_checker autoware_pure_pursuit autoware_shift_decider autoware_smart_mpc_trajectory_follower autoware_stop_mode_operator autoware_trajectory_follower_base autoware_trajectory_follower_node autoware_vehicle_cmd_gate autoware_control_evaluator autoware_kinematic_evaluator autoware_localization_evaluator autoware_perception_online_evaluator autoware_planning_evaluator autoware_scenario_simulator_v2_adapter autoware_diagnostic_graph_test_examples tier4_autoware_api_launch tier4_control_launch tier4_localization_launch tier4_map_launch tier4_perception_launch tier4_planning_launch tier4_sensing_launch tier4_simulator_launch tier4_system_launch tier4_vehicle_launch autoware_geo_pose_projector autoware_ar_tag_based_localizer autoware_landmark_manager autoware_lidar_marker_localizer autoware_localization_error_monitor autoware_pose2twist autoware_pose_covariance_modifier autoware_pose_estimator_arbiter autoware_pose_instability_detector yabloc_common yabloc_image_processing yabloc_monitor yabloc_particle_filter yabloc_pose_initializer autoware_map_tf_generator autoware_bevfusion autoware_bytetrack autoware_cluster_merger autoware_compare_map_segmentation autoware_crosswalk_traffic_light_estimator autoware_detected_object_feature_remover autoware_detected_object_validation autoware_detection_by_tracker autoware_elevation_map_loader autoware_euclidean_cluster autoware_ground_segmentation autoware_image_projection_based_fusion autoware_lidar_apollo_instance_segmentation autoware_lidar_centerpoint autoware_lidar_transfusion autoware_map_based_prediction autoware_multi_object_tracker autoware_object_merger autoware_object_range_splitter autoware_object_sorter autoware_object_velocity_splitter autoware_occupancy_grid_map_outlier_filter autoware_probabilistic_occupancy_grid_map autoware_radar_fusion_to_detected_object autoware_radar_object_tracker autoware_radar_tracks_msgs_converter autoware_raindrop_cluster_filter autoware_shape_estimation autoware_simpl_prediction autoware_simple_object_merger autoware_tensorrt_bevdet autoware_tensorrt_classifier autoware_tensorrt_common autoware_tensorrt_plugins autoware_tensorrt_yolox autoware_tracking_object_merger autoware_traffic_light_arbiter autoware_traffic_light_category_merger autoware_traffic_light_classifier autoware_traffic_light_fine_detector autoware_traffic_light_map_based_detector autoware_traffic_light_multi_camera_fusion autoware_traffic_light_occlusion_predictor autoware_traffic_light_selector autoware_traffic_light_visualization perception_utils autoware_costmap_generator autoware_diffusion_planner 
autoware_external_velocity_limit_selector autoware_freespace_planner autoware_freespace_planning_algorithms autoware_hazard_lights_selector autoware_mission_planner_universe autoware_path_optimizer autoware_path_smoother autoware_remaining_distance_time_calculator autoware_rtc_interface autoware_scenario_selector autoware_surround_obstacle_checker autoware_behavior_path_avoidance_by_lane_change_module autoware_behavior_path_bidirectional_traffic_module autoware_behavior_path_dynamic_obstacle_avoidance_module autoware_behavior_path_external_request_lane_change_module autoware_behavior_path_goal_planner_module autoware_behavior_path_lane_change_module autoware_behavior_path_planner autoware_behavior_path_planner_common autoware_behavior_path_sampling_planner_module autoware_behavior_path_side_shift_module autoware_behavior_path_start_planner_module autoware_behavior_path_static_obstacle_avoidance_module autoware_behavior_velocity_blind_spot_module autoware_behavior_velocity_crosswalk_module autoware_behavior_velocity_detection_area_module autoware_behavior_velocity_intersection_module autoware_behavior_velocity_no_drivable_lane_module autoware_behavior_velocity_no_stopping_area_module autoware_behavior_velocity_occlusion_spot_module autoware_behavior_velocity_rtc_interface autoware_behavior_velocity_run_out_module autoware_behavior_velocity_speed_bump_module autoware_behavior_velocity_template_module autoware_behavior_velocity_traffic_light_module autoware_behavior_velocity_virtual_traffic_light_module autoware_behavior_velocity_walkway_module autoware_motion_velocity_boundary_departure_prevention_module autoware_motion_velocity_dynamic_obstacle_stop_module autoware_motion_velocity_obstacle_cruise_module autoware_motion_velocity_obstacle_slow_down_module autoware_motion_velocity_obstacle_velocity_limiter_module autoware_motion_velocity_out_of_lane_module autoware_motion_velocity_road_user_stop_module autoware_motion_velocity_run_out_module autoware_planning_validator autoware_planning_validator_intersection_collision_checker autoware_planning_validator_latency_checker autoware_planning_validator_rear_collision_checker autoware_planning_validator_test_utils autoware_planning_validator_trajectory_checker autoware_bezier_sampler autoware_frenet_planner autoware_path_sampler autoware_sampler_common autoware_cuda_pointcloud_preprocessor autoware_cuda_utils autoware_image_diagnostics autoware_image_transport_decompressor autoware_imu_corrector autoware_pcl_extensions autoware_pointcloud_preprocessor autoware_radar_objects_adapter autoware_radar_scan_to_pointcloud2 autoware_radar_static_pointcloud_filter autoware_radar_threshold_filter autoware_radar_tracks_noise_filter autoware_livox_tag_filter autoware_carla_interface autoware_dummy_perception_publisher autoware_fault_injection autoware_learning_based_vehicle_model autoware_simple_planning_simulator autoware_vehicle_door_simulator tier4_dummy_object_rviz_plugin autoware_bluetooth_monitor autoware_command_mode_decider autoware_command_mode_decider_plugins autoware_command_mode_switcher autoware_command_mode_switcher_plugins autoware_command_mode_types autoware_component_monitor autoware_component_state_monitor autoware_adapi_visualizers autoware_automatic_pose_initializer autoware_default_adapi_universe autoware_diagnostic_graph_aggregator autoware_diagnostic_graph_utils autoware_dummy_diag_publisher autoware_dummy_infrastructure autoware_duplicated_node_checker autoware_hazard_status_converter autoware_mrm_comfortable_stop_operator 
autoware_mrm_emergency_stop_operator autoware_mrm_handler autoware_pipeline_latency_monitor autoware_processing_time_checker autoware_system_monitor autoware_topic_relay_controller autoware_topic_state_monitor autoware_velodyne_monitor reaction_analyzer autoware_accel_brake_map_calibrator autoware_external_cmd_converter autoware_raw_vehicle_cmd_converter autoware_steer_offset_estimator autoware_bag_time_manager_rviz_plugin autoware_traffic_light_rviz_plugin tier4_adapi_rviz_plugin tier4_camera_view_rviz_plugin tier4_control_mode_rviz_plugin tier4_datetime_rviz_plugin tier4_perception_rviz_plugin tier4_planning_factor_rviz_plugin tier4_state_rviz_plugin tier4_system_rviz_plugin tier4_traffic_light_rviz_plugin tier4_vehicle_rviz_plugin

ROS Distro
github

Package Summary

Tags No category tags.
Version 0.47.0
License Apache License 2.0
Build type AMENT_CMAKE
Use RECOMMENDED

Repository Summary

Description
Checkout URI https://github.com/autowarefoundation/autoware_universe.git
VCS Type git
VCS Version main
Last Updated 2025-08-16
Dev Status UNKNOWN
Released UNRELEASED
Tags planner ros calibration self-driving-car autonomous-driving autonomous-vehicles ros2 3d-map autoware
Contributing Help Wanted (-)
Good First Issues (-)
Pull Requests to Review (-)

Package Description

The tier4_perception_launch package

Additional Links

No additional links.

Maintainers

  • Yukihiro Saito
  • Yoshi Ri
  • Taekjin Lee
  • Masato Saeki

Authors

No additional authors.

tier4_perception_launch

Structure

tier4_perception_launch

Package Dependencies

Please see <exec_depend> in package.xml.

Usage

You can include as follows in *.launch.xml to use perception.launch.xml.

Note that you should provide parameter paths as PACKAGE_param_path. The list of parameter paths you should provide is written at the top of perception.launch.xml.

  <include file="$(find-pkg-share tier4_perception_launch)/launch/perception.launch.xml">
    <!-- options for mode: camera_lidar_fusion, lidar, camera -->
    <arg name="mode" value="lidar" />

    <!-- Parameter files -->
    <arg name="FOO_param_path" value="..."/>
    <arg name="BAR_param_path" value="..."/>
    ...
  </include>

CHANGELOG

Changelog for package tier4_perception_launch

0.47.0 (2025-08-11)

  • feat(perception_online_evaluator): add functionality to publish perception analytics info (#11089)

    * feat: add functionality to calculate perception metrics for MOB in autoware_perception_online_evaluator chore: configure settings for mob metrics calculation

    * feat: change implementation from one topic per metric to all metrics published in one metric for better management by metric agent refactor: rename FrameMetrics member to clarify variable meaning refactor: use array/vector instead of unorder_map for FrameMetrics for better performance chore: remap published topic name to match msg conventions

    • fix: unittest error
    • style(pre-commit): autofix
    • refactor: replace MOB keyword with generalized expression of perception analytics
    • chore: improve comment

    * refactor: add a new autoware_perception_analytics_publisher_node to publish perception analytics info instead of using previous autoware_perception_online_evaluator_node chore: modify default launch setting to match the refactoring

    • style(pre-commit): autofix

    * fix: add initialization for [latencies_]{.title-ref} fix: use tf of objects timestamp instead of latest feat: use ConstSharedPtr to avoid repeated copy of large message in [PerceptionAnalyticsCalculator::setPredictedObjects]{.title-ref} ---------Co-authored-by: Jian Kang <<jian.kang@tier4.jp>> Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>

  • fix(multi_object_tracker): add irregular objects topic (#11102)

    • fix(multi_object_tracker): add irregular objects topic
    • fix: change channel order

    * Update launch/tier4_perception_launch/launch/object_recognition/tracking/tracking.launch.xml Co-authored-by: Taekjin LEE <<technolojin@gmail.com>>

    * Update launch/tier4_perception_launch/launch/object_recognition/tracking/tracking.launch.xml Co-authored-by: Taekjin LEE <<technolojin@gmail.com>>

    * Update perception/autoware_multi_object_tracker/config/input_channels.param.yaml Co-authored-by: Taekjin LEE <<technolojin@gmail.com>>

    * Update launch/tier4_perception_launch/launch/object_recognition/tracking/tracking.launch.xml Co-authored-by: Taekjin LEE <<technolojin@gmail.com>>

    • fix: unused channels
    • fix: schema
    • docs: update readme
    • style(pre-commit): autofix
    • fix: short name

    * feat: add lidar_centerpoint_short_range input channel with default flags ---------Co-authored-by: Taekjin LEE <<technolojin@gmail.com>> Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com> Co-authored-by: Taekjin LEE <<taekjin.lee@tier4.jp>>

  • chore: sync files (#11091) Co-authored-by: github-actions <<github-actions@github.com>> Co-authored-by: M. Fatih Cırıt <<mfc@autoware.org>> Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>

  • fix(autoware_object_merger): add merger priority_mode (#11042)

    * fix: add merger priority_mode fix: add priority mode into launch fix: add class based priority matrix fix: adjust priority matrix

    • fix: add Confidence mode support
    • docs: schema update
    • fix: launch

    * fix: schema json ---------

  • feat(tier4_perception_launch): add missing remappings to launch file (#11037)

  • feat(autoware_bevdet): implementation of bevdet using tensorrt (#10441)

  • feat(tracking): add short range detection support and update related

File truncated at 100 lines see the full file

Package Dependencies

System Dependencies

No direct system dependencies.

Launch files

  • launch/object_recognition/detection/detection.launch.xml
      • mode
      • lidar_detection_model_type
      • lidar_detection_model_name
      • use_short_range_detection
      • lidar_short_range_detection_model_type
      • lidar_short_range_detection_model_name
      • use_object_filter
      • objects_filter_method
      • use_pointcloud_map
      • use_detection_by_tracker
      • use_validator
      • objects_validation_method
      • use_low_intensity_cluster_filter
      • use_image_segmentation_based_filter
      • use_multi_channel_tracker_merger
      • use_radar_tracking_fusion
      • use_irregular_object_detector
      • irregular_object_detector_fusion_camera_ids [default: [0]]
      • ml_camera_lidar_merger_priority_mode
      • number_of_cameras
      • node/pointcloud_container
      • input/pointcloud
      • input/obstacle_segmentation/pointcloud [default: /perception/obstacle_segmentation/pointcloud]
      • input/camera0/image
      • input/camera0/info
      • input/camera0/rois
      • input/camera1/image
      • input/camera1/info
      • input/camera1/rois
      • input/camera2/image
      • input/camera2/info
      • input/camera2/rois
      • input/camera3/image
      • input/camera3/info
      • input/camera3/rois
      • input/camera4/image
      • input/camera4/info
      • input/camera4/rois
      • input/camera5/image
      • input/camera5/info
      • input/camera5/rois
      • input/camera6/image
      • input/camera6/info
      • input/camera6/rois
      • input/camera7/image
      • input/camera7/info
      • input/camera7/rois
      • input/camera8/image
      • input/camera8/info
      • input/camera8/rois
      • image_topic_name
      • segmentation_pointcloud_fusion_camera_ids
      • input/radar
      • input/tracked_objects [default: /perception/object_recognition/tracking/objects]
      • output/objects [default: objects]
  • launch/object_recognition/detection/detector/camera_bev_detector.launch.xml
      • input/camera0/image
      • input/camera0/info
      • input/camera1/image
      • input/camera1/info
      • input/camera2/image
      • input/camera2/info
      • input/camera3/image
      • input/camera3/info
      • input/camera4/image
      • input/camera4/info
      • input/camera5/image
      • input/camera5/info
      • input/camera6/image
      • input/camera6/info
      • input/camera7/image
      • input/camera7/info
      • output/objects
      • number_of_cameras
      • data_path [default: $(env HOME)/autoware_data]
      • bevdet_model_name [default: bevdet_one_lt_d]
      • bevdet_model_path [default: $(var data_path)/tensorrt_bevdet]
  • launch/object_recognition/detection/detector/camera_lidar_detector.launch.xml
      • ns
      • lidar_detection_model_type
      • lidar_detection_model_name
      • use_low_intensity_cluster_filter
      • use_image_segmentation_based_filter
      • number_of_cameras
      • input/camera0/image
      • input/camera0/info
      • input/camera0/rois
      • input/camera1/image
      • input/camera1/info
      • input/camera1/rois
      • input/camera2/image
      • input/camera2/info
      • input/camera2/rois
      • input/camera3/image
      • input/camera3/info
      • input/camera3/rois
      • input/camera4/image
      • input/camera4/info
      • input/camera4/rois
      • input/camera5/image
      • input/camera5/info
      • input/camera5/rois
      • input/camera6/image
      • input/camera6/info
      • input/camera6/rois
      • input/camera7/image
      • input/camera7/info
      • input/camera7/rois
      • input/camera8/image
      • input/camera8/info
      • input/camera8/rois
      • segmentation_pointcloud_fusion_camera_ids
      • image_topic_name
      • node/pointcloud_container
      • input/pointcloud
      • input/pointcloud_map/pointcloud
      • input/obstacle_segmentation/pointcloud
      • output/ml_detector/objects
      • output/rule_detector/objects
      • output/clustering/cluster_objects
  • launch/object_recognition/detection/detector/camera_lidar_irregular_object_detector.launch.xml
      • ns
      • pipeline_ns
      • input/pointcloud
      • fusion_camera_ids [default: [0]]
      • image_topic_name [default: image_raw]
      • irregular_object_detector_param_path
  • launch/object_recognition/detection/detector/lidar_dnn_detector.launch.xml
      • lidar_detection_model_type
      • lidar_detection_model_name
      • bevfusion_model_path [default: $(var data_path)/bevfusion]
      • centerpoint_model_path [default: $(var data_path)/lidar_centerpoint]
      • transfusion_model_path [default: $(var data_path)/lidar_transfusion]
      • use_short_range_detection [default: false]
      • lidar_short_range_detection_model_type
      • lidar_short_range_detection_model_name
      • short_range_centerpoint_model_path [default: $(var data_path)/lidar_short_range_centerpoint]
      • node/pointcloud_container
      • input/pointcloud
      • output/objects
      • output/short_range_objects
      • lidar_short_range_model_param_path [default: $(find-pkg-share autoware_lidar_centerpoint)/config]
      • lidar_model_param_path [default: $(find-pkg-share autoware_bevfusion)/config]
      • lidar_model_param_path [default: $(find-pkg-share autoware_lidar_transfusion)/config]
      • lidar_model_param_path [default: $(find-pkg-share autoware_lidar_centerpoint)/config]
  • launch/object_recognition/detection/detector/lidar_rule_detector.launch.xml
      • ns
      • node/pointcloud_container
      • input/pointcloud_map/pointcloud
      • input/obstacle_segmentation/pointcloud
      • output/cluster_objects
      • output/objects
  • launch/object_recognition/detection/detector/tracker_based_detector.launch.xml
      • input/clusters
      • input/tracked_objects
      • output/objects
  • launch/object_recognition/detection/filter/object_filter.launch.xml
      • objects_filter_method [default: lanelet_filter]
      • input/objects
      • output/objects
  • launch/object_recognition/detection/filter/object_validator.launch.xml
      • objects_validation_method
      • input/obstacle_pointcloud
      • input/objects
      • output/objects
  • launch/object_recognition/detection/filter/radar_filter.launch.xml
      • object_velocity_splitter_param_path [default: $(var object_recognition_detection_object_velocity_splitter_radar_param_path)]
      • object_range_splitter_param_path [default: $(var object_recognition_detection_object_range_splitter_radar_param_path)]
      • radar_lanelet_filtering_range_param_path [default: $(find-pkg-share autoware_detected_object_validation)/config/object_lanelet_filter.param.yaml]
      • input/radar
      • output/objects
  • launch/object_recognition/detection/merger/camera_lidar_merger.launch.xml
      • object_recognition_detection_object_merger_data_association_matrix_param_path [default: $(find-pkg-share autoware_object_merger)/config/data_association_matrix.param.yaml]
      • object_recognition_detection_object_merger_distance_threshold_list_path [default: $(find-pkg-share autoware_object_merger)/config/overlapped_judge.param.yaml]
      • ml_camera_lidar_object_association_merger_param_path [default: $(find-pkg-share autoware_object_merger)/config/object_association_merger.param.yaml]
      • lidar_detection_model_type
      • use_detection_by_tracker
      • use_irregular_object_detector
      • use_object_filter
      • objects_filter_method
      • number_of_cameras
      • input/camera0/image
      • input/camera0/info
      • input/camera0/rois
      • input/camera1/image
      • input/camera1/info
      • input/camera1/rois
      • input/camera2/image
      • input/camera2/info
      • input/camera2/rois
      • input/camera3/image
      • input/camera3/info
      • input/camera3/rois
      • input/camera4/image
      • input/camera4/info
      • input/camera4/rois
      • input/camera5/image
      • input/camera5/info
      • input/camera5/rois
      • input/camera6/image
      • input/camera6/info
      • input/camera6/rois
      • input/camera7/image
      • input/camera7/info
      • input/camera7/rois
      • input/camera8/image
      • input/camera8/info
      • input/camera8/rois
      • input/lidar_ml/objects
      • input/lidar_rule/objects
      • input/detection_by_tracker/objects
      • output/objects [default: objects]
      • alpha_merger_priority_mode [default: 0]
  • launch/object_recognition/detection/merger/camera_lidar_radar_merger.launch.xml
      • object_recognition_detection_object_merger_data_association_matrix_param_path [default: $(find-pkg-share autoware_object_merger)/config/data_association_matrix.param.yaml]
      • object_recognition_detection_object_merger_distance_threshold_list_path [default: $(find-pkg-share autoware_object_merger)/config/overlapped_judge.param.yaml]
      • ml_camera_lidar_object_association_merger_param_path [default: $(find-pkg-share autoware_object_merger)/config/object_association_merger.param.yaml]
      • far_object_merger_sync_queue_size [default: 20]
      • lidar_detection_model_type
      • use_radar_tracking_fusion
      • use_detection_by_tracker
      • use_irregular_object_detector
      • use_object_filter
      • objects_filter_method
      • number_of_cameras
      • input/camera0/image
      • input/camera0/info
      • input/camera0/rois
      • input/camera1/image
      • input/camera1/info
      • input/camera1/rois
      • input/camera2/image
      • input/camera2/info
      • input/camera2/rois
      • input/camera3/image
      • input/camera3/info
      • input/camera3/rois
      • input/camera4/image
      • input/camera4/info
      • input/camera4/rois
      • input/camera5/image
      • input/camera5/info
      • input/camera5/rois
      • input/camera6/image
      • input/camera6/info
      • input/camera6/rois
      • input/camera7/image
      • input/camera7/info
      • input/camera7/rois
      • input/camera8/image
      • input/camera8/info
      • input/camera8/rois
      • input/lidar_ml/objects
      • input/lidar_rule/objects
      • input/radar/objects
      • input/radar_far/objects
      • input/detection_by_tracker/objects
      • output/objects [default: objects]
      • alpha_merger_priority_mode [default: 0]
  • launch/object_recognition/detection/merger/lidar_merger.launch.xml
      • object_recognition_detection_object_merger_data_association_matrix_param_path [default: $(find-pkg-share autoware_object_merger)/config/data_association_matrix.param.yaml]
      • object_recognition_detection_object_merger_distance_threshold_list_path [default: $(find-pkg-share autoware_object_merger)/config/overlapped_judge.param.yaml]
      • lidar_detection_model_type
      • use_detection_by_tracker
      • use_object_filter
      • objects_filter_method
      • input/lidar_ml/objects [default: $(var lidar_detection_model_type)/objects]
      • input/lidar_rule/objects [default: clustering/objects]
      • input/detection_by_tracker/objects [default: detection_by_tracker/objects]
      • output/objects
  • launch/object_recognition/prediction/prediction.launch.xml
      • use_vector_map [default: false]
      • input/objects [default: /perception/object_recognition/tracking/objects]
  • launch/object_recognition/tracking/tracking.launch.xml
      • object_recognition_tracking_radar_tracked_object_sorter_param_path
      • object_recognition_tracking_radar_tracked_object_lanelet_filter_param_path
      • object_recognition_tracking_object_merger_data_association_matrix_param_path
      • object_recognition_tracking_object_merger_node_param_path
      • mode [default: lidar]
      • use_radar_tracking_fusion [default: false]
      • use_multi_channel_tracker_merger
      • use_validator
      • use_short_range_detection
      • lidar_detection_model_type [default: centerpoint]
      • input/merged_detection/channel [default: detected_objects]
      • input/merged_detection/objects [default: /perception/object_recognition/detection/objects]
      • input/lidar_dnn/channel [default: lidar_$(var lidar_detection_model_type)]
      • input/lidar_dnn/objects [default: /perception/object_recognition/detection/$(var lidar_detection_model_type)/objects]
      • input/lidar_dnn_validated/objects [default: /perception/object_recognition/detection/$(var lidar_detection_model_type)/validation/objects]
      • input/lidar_dnn_short_range/channel [default: lidar_$(var lidar_short_range_detection_model_type)]
      • input/lidar_dnn_short_range/objects [default: /perception/object_recognition/detection/$(var lidar_short_range_detection_model_type)/objects]
      • input/camera_lidar_rule_detector/channel [default: camera_lidar_fusion]
      • input/camera_lidar_rule_detector/objects [default: /perception/object_recognition/detection/clustering/camera_lidar_fusion/objects]
      • input/irregular_object_detector/channel [default: camera_lidar_fusion_irregular]
      • input/irregular_object_detector/objects [default: /perception/object_recognition/detection/irregular_object/objects]
      • input/tracker_based_detector/channel [default: detection_by_tracker]
      • input/tracker_based_detector/objects [default: /perception/object_recognition/detection/detection_by_tracker/objects]
      • input/radar/channel [default: radar]
      • input/radar/far_objects [default: /perception/object_recognition/detection/radar/far_objects]
      • input/radar/objects [default: /perception/object_recognition/detection/radar/objects]
      • input/radar/tracked_objects [default: /sensing/radar/tracked_objects]
      • output/objects [default: $(var ns)/objects]
  • launch/occupancy_grid_map/probabilistic_occupancy_grid_map.launch.xml
      • input/obstacle_pointcloud [default: concatenated/pointcloud]
      • input/raw_pointcloud [default: no_ground/oneshot/pointcloud]
      • output [default: /perception/occupancy_grid_map/map]
      • use_intra_process [default: false]
      • use_multithread [default: false]
      • pointcloud_container_name [default: pointcloud_container]
      • occupancy_grid_map_method
      • occupancy_grid_map_param_path
      • occupancy_grid_map_updater
      • occupancy_grid_map_updater_param_path
      • input_obstacle_pointcloud [default: false]
      • input_obstacle_and_raw_pointcloud [default: true]
      • use_pointcloud_container [default: true]
  • launch/perception.launch.xml
      • object_recognition_detection_euclidean_cluster_param_path
      • object_recognition_detection_outlier_param_path
      • object_recognition_detection_object_lanelet_filter_param_path
      • object_recognition_detection_object_position_filter_param_path
      • object_recognition_detection_pointcloud_map_filter_param_path
      • object_recognition_prediction_map_based_prediction_param_path
      • object_recognition_detection_object_merger_data_association_matrix_param_path
      • ml_camera_lidar_object_association_merger_param_path
      • object_recognition_detection_object_merger_distance_threshold_list_path
      • object_recognition_detection_fusion_sync_param_path
      • object_recognition_detection_roi_cluster_fusion_param_path
      • object_recognition_detection_irregular_object_detector_param_path
      • object_recognition_detection_roi_detected_object_fusion_param_path
      • object_recognition_detection_pointpainting_fusion_common_param_path
      • object_recognition_detection_lidar_model_param_path
      • object_recognition_detection_radar_lanelet_filtering_range_param_path
      • object_recognition_detection_object_velocity_splitter_radar_param_path
      • object_recognition_detection_object_velocity_splitter_radar_fusion_param_path
      • object_recognition_detection_object_range_splitter_radar_param_path
      • object_recognition_detection_object_range_splitter_radar_fusion_param_path
      • object_recognition_tracking_multi_object_tracker_data_association_matrix_param_path
      • object_recognition_tracking_multi_object_tracker_input_channels_param_path
      • object_recognition_tracking_multi_object_tracker_node_param_path
      • object_recognition_tracking_radar_tracked_object_sorter_param_path
      • object_recognition_tracking_radar_tracked_object_lanelet_filter_param_path
      • obstacle_segmentation_ground_segmentation_param_path
      • obstacle_segmentation_ground_segmentation_elevation_map_param_path
      • object_recognition_detection_obstacle_pointcloud_based_validator_param_path
      • object_recognition_detection_detection_by_tracker_param
      • occupancy_grid_map_method
      • occupancy_grid_map_param_path
      • occupancy_grid_map_updater
      • occupancy_grid_map_updater_param_path
      • lidar_detection_model
      • each_traffic_light_map_based_detector_param_path
      • traffic_light_fine_detector_param_path
      • yolox_traffic_light_detector_param_path
      • car_traffic_light_classifier_param_path
      • pedestrian_traffic_light_classifier_param_path
      • traffic_light_roi_visualizer_param_path
      • traffic_light_occlusion_predictor_param_path
      • traffic_light_multi_camera_fusion_param_path
      • traffic_light_arbiter_param_path
      • crosswalk_traffic_light_estimator_param_path
      • lidar_detection_model_type [default: $(eval "'$(var lidar_detection_model)'.split('/')[0]")]
      • lidar_detection_model_name [default: $(eval "'$(var lidar_detection_model)'.split('/')[1] if '/' in '$(var lidar_detection_model)' else ''")]
      • use_short_range_detection [default: false]
      • lidar_short_range_detection_model_type [default: centerpoint_short_range]
      • lidar_short_range_detection_model_name [default: centerpoint_short_range]
      • bevfusion_model_path [default: $(var data_path)/bevfusion]
      • centerpoint_model_path [default: $(var data_path)/lidar_centerpoint]
      • transfusion_model_path [default: $(var data_path)/lidar_transfusion]
      • short_range_centerpoint_model_path [default: $(var data_path)/lidar_short_range_centerpoint]
      • pointpainting_model_path [default: $(var data_path)/image_projection_based_fusion]
      • input/pointcloud [default: /sensing/lidar/concatenated/pointcloud]
      • mode [default: camera_lidar_fusion]
      • data_path [default: $(env HOME)/autoware_data]
      • lidar_detection_model_type [default: $(var lidar_detection_model_type)]
      • lidar_detection_model_name [default: $(var lidar_detection_model_name)]
      • image_raw0 [default: /sensing/camera/camera0/image_rect_color]
      • camera_info0 [default: /sensing/camera/camera0/camera_info]
      • detection_rois0 [default: /perception/object_recognition/detection/rois0]
      • image_raw1 [default: /sensing/camera/camera1/image_rect_color]
      • camera_info1 [default: /sensing/camera/camera1/camera_info]
      • detection_rois1 [default: /perception/object_recognition/detection/rois1]
      • image_raw2 [default: /sensing/camera/camera2/image_rect_color]
      • camera_info2 [default: /sensing/camera/camera2/camera_info]
      • detection_rois2 [default: /perception/object_recognition/detection/rois2]
      • image_raw3 [default: /sensing/camera/camera3/image_rect_color]
      • camera_info3 [default: /sensing/camera/camera3/camera_info]
      • detection_rois3 [default: /perception/object_recognition/detection/rois3]
      • image_raw4 [default: /sensing/camera/camera4/image_rect_color]
      • camera_info4 [default: /sensing/camera/camera4/camera_info]
      • detection_rois4 [default: /perception/object_recognition/detection/rois4]
      • image_raw5 [default: /sensing/camera/camera5/image_rect_color]
      • camera_info5 [default: /sensing/camera/camera5/camera_info]
      • detection_rois5 [default: /perception/object_recognition/detection/rois5]
      • image_raw6 [default: /sensing/camera/camera6/image_rect_color]
      • camera_info6 [default: /sensing/camera/camera6/camera_info]
      • detection_rois6 [default: /perception/object_recognition/detection/rois6]
      • image_raw7 [default: /sensing/camera/camera7/image_rect_color]
      • camera_info7 [default: /sensing/camera/camera7/camera_info]
      • detection_rois7 [default: /perception/object_recognition/detection/rois7]
      • image_raw8 [default: /sensing/camera/camera8/image_rect_color]
      • camera_info8 [default: /sensing/camera/camera8/camera_info]
      • detection_rois8 [default: /perception/object_recognition/detection/rois8]
      • image_number [default: 6]
      • image_topic_name [default: image_rect_color]
      • segmentation_pointcloud_fusion_camera_ids [default: [0,1,5]]
      • ml_camera_lidar_merger_priority_mode [default: 0]
      • pointcloud_container_name [default: pointcloud_container]
      • use_vector_map [default: true]
      • use_pointcloud_map [default: true]
      • use_low_height_cropbox [default: true]
      • use_object_filter [default: true]
      • objects_filter_method [default: lanelet_filter]
      • use_irregular_object_detector [default: true]
      • use_low_intensity_cluster_filter [default: true]
      • use_image_segmentation_based_filter [default: false]
      • use_empty_dynamic_object_publisher [default: false]
      • use_object_validator [default: true]
      • objects_validation_method [default: obstacle_pointcloud]
      • use_perception_online_evaluator [default: false]
      • use_perception_analytics_publisher [default: true]
      • use_obstacle_segmentation_single_frame_filter
      • use_obstacle_segmentation_time_series_filter
      • use_traffic_light_recognition
      • traffic_light_recognition/fusion_only
      • traffic_light_recognition/camera_namespaces
      • traffic_light_recognition/use_high_accuracy_detection
      • traffic_light_recognition/high_accuracy_detection_type
      • traffic_light_recognition/whole_image_detection/model_path
      • traffic_light_recognition/whole_image_detection/label_path
      • traffic_light_recognition/fine_detection/model_path
      • traffic_light_recognition/fine_detection/label_path
      • traffic_light_recognition/classification/car/model_path
      • traffic_light_recognition/classification/car/label_path
      • traffic_light_recognition/classification/pedestrian/model_path
      • traffic_light_recognition/classification/pedestrian/label_path
      • use_detection_by_tracker [default: true]
      • use_radar_tracking_fusion [default: true]
      • input/radar [default: /sensing/radar/detected_objects]
      • use_multi_channel_tracker_merger [default: false]
      • downsample_perception_common_pointcloud [default: false]
      • common_downsample_voxel_size_x [default: 0.05]
      • common_downsample_voxel_size_y [default: 0.05]
      • common_downsample_voxel_size_z [default: 0.05]
  • launch/traffic_light_recognition/traffic_light.launch.xml
      • enable_image_decompressor [default: true]
      • fusion_only
      • camera_namespaces
      • use_high_accuracy_detection
      • high_accuracy_detection_type
      • each_traffic_light_map_based_detector_param_path
      • traffic_light_fine_detector_param_path
      • yolox_traffic_light_detector_param_path
      • car_traffic_light_classifier_param_path
      • pedestrian_traffic_light_classifier_param_path
      • traffic_light_roi_visualizer_param_path
      • traffic_light_occlusion_predictor_param_path
      • traffic_light_multi_camera_fusion_param_path
      • traffic_light_arbiter_param_path
      • crosswalk_traffic_light_estimator_param_path
      • whole_image_detection/model_path
      • whole_image_detection/label_path
      • fine_detection/model_path
      • fine_detection/label_path
      • classification/car/model_path
      • classification/car/label_path
      • classification/pedestrian/model_path
      • classification/pedestrian/label_path
      • input/vector_map [default: /map/vector_map]
      • input/route [default: /planning/mission_planning/route]
      • input/cloud [default: /sensing/lidar/top/pointcloud_raw_ex]
      • internal/traffic_signals [default: /perception/traffic_light_recognition/internal/traffic_signals]
      • external/traffic_signals [default: /perception/traffic_light_recognition/external/traffic_signals]
      • judged/traffic_signals [default: /perception/traffic_light_recognition/judged/traffic_signals]
      • output/traffic_signals [default: /perception/traffic_light_recognition/traffic_signals]

Messages

No message files found.

Services

No service files found

Plugins

No plugins found.

Recent questions tagged tier4_perception_launch at Robotics Stack Exchange

No version for distro iron showing github. Known supported distros are highlighted in the buttons above.
Package symbol

tier4_perception_launch package from autoware_universe repo


Package Summary

Tags No category tags.
Version 0.47.0
License Apache License 2.0
Build type AMENT_CMAKE
Use RECOMMENDED

Repository Summary

Description
Checkout URI https://github.com/autowarefoundation/autoware_universe.git
VCS Type git
VCS Version main
Last Updated 2025-08-16
Dev Status UNKNOWN
Released UNRELEASED
Tags planner ros calibration self-driving-car autonomous-driving autonomous-vehicles ros2 3d-map autoware
Contributing Help Wanted (-)
Good First Issues (-)
Pull Requests to Review (-)

Package Description

The tier4_perception_launch package

Additional Links

No additional links.

Maintainers

  • Yukihiro Saito
  • Yoshi Ri
  • Taekjin Lee
  • Masato Saeki

Authors

No additional authors.

tier4_perception_launch

Structure

[Structure diagram of tier4_perception_launch]

Package Dependencies

Please see <exec_depend> in package.xml.
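
For orientation, a minimal sketch of what such <exec_depend> entries look like in package.xml; the two package names below are illustrative picks, not the authoritative list:

  <package format="3">
    <name>tier4_perception_launch</name>
    <!-- runtime dependencies of the launch package; see the real package.xml for the full list -->
    <exec_depend>autoware_euclidean_cluster</exec_depend>
    <exec_depend>autoware_multi_object_tracker</exec_depend>
  </package>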

Usage

To use perception.launch.xml, include it in your *.launch.xml as follows.

Note that you must provide each parameter path as a PACKAGE_param_path argument; the list of required parameter paths is given at the top of perception.launch.xml.

  <include file="$(find-pkg-share tier4_perception_launch)/launch/perception.launch.xml">
    <!-- options for mode: camera_lidar_fusion, lidar, camera -->
    <arg name="mode" value="lidar" />

    <!-- Parameter files -->
    <arg name="FOO_param_path" value="..."/>
    <arg name="BAR_param_path" value="..."/>
    ...
  </include>
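
As a more concrete (but still illustrative) sketch, two of the parameter-path arguments listed later under perception.launch.xml could be wired as follows; the config file locations are assumptions, so point them at your actual parameter files:

  <include file="$(find-pkg-share tier4_perception_launch)/launch/perception.launch.xml">
    <arg name="mode" value="lidar" />
    <!-- hypothetical parameter file locations -->
    <arg name="object_recognition_detection_euclidean_cluster_param_path"
         value="$(find-pkg-share autoware_euclidean_cluster)/config/euclidean_cluster.param.yaml" />
    <arg name="object_recognition_prediction_map_based_prediction_param_path"
         value="$(find-pkg-share autoware_map_based_prediction)/config/map_based_prediction.param.yaml" />
  </include>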

CHANGELOG

Changelog for package tier4_perception_launch

0.47.0 (2025-08-11)

  • feat(perception_online_evaluator): add functionality to publish perception analytics info (#11089)

    • feat: add functionality to calculate perception metrics for MOB in autoware_perception_online_evaluator
    • chore: configure settings for MOB metrics calculation
    • feat: change implementation from one topic per metric to all metrics published in one topic, for better management by the metric agent
    • refactor: rename FrameMetrics member to clarify variable meaning
    • refactor: use array/vector instead of unordered_map for FrameMetrics for better performance
    • chore: remap published topic name to match msg conventions
    • fix: unittest error
    • style(pre-commit): autofix
    • refactor: replace MOB keyword with generalized expression of perception analytics
    • chore: improve comment
    • refactor: add a new autoware_perception_analytics_publisher_node to publish perception analytics info instead of using the previous autoware_perception_online_evaluator_node
    • chore: modify default launch setting to match the refactoring
    • style(pre-commit): autofix
    • fix: add initialization for latencies_
    • fix: use tf of objects timestamp instead of latest
    • feat: use ConstSharedPtr to avoid repeated copy of large message in PerceptionAnalyticsCalculator::setPredictedObjects

    Co-authored-by: Jian Kang <jian.kang@tier4.jp>
    Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>

  • fix(multi_object_tracker): add irregular objects topic (#11102)

    • fix(multi_object_tracker): add irregular objects topic
    • fix: change channel order
    • Update launch/tier4_perception_launch/launch/object_recognition/tracking/tracking.launch.xml
    • Update launch/tier4_perception_launch/launch/object_recognition/tracking/tracking.launch.xml
    • Update perception/autoware_multi_object_tracker/config/input_channels.param.yaml
    • Update launch/tier4_perception_launch/launch/object_recognition/tracking/tracking.launch.xml
    • fix: unused channels
    • fix: schema
    • docs: update readme
    • style(pre-commit): autofix
    • fix: short name
    • feat: add lidar_centerpoint_short_range input channel with default flags

    Co-authored-by: Taekjin LEE <technolojin@gmail.com>
    Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
    Co-authored-by: Taekjin LEE <taekjin.lee@tier4.jp>

  • chore: sync files (#11091) Co-authored-by: github-actions <github-actions@github.com> Co-authored-by: M. Fatih Cırıt <mfc@autoware.org> Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>

  • fix(autoware_object_merger): add merger priority_mode (#11042)

    • fix: add merger priority_mode
    • fix: add priority mode into launch
    • fix: add class based priority matrix
    • fix: adjust priority matrix
    • fix: add Confidence mode support
    • docs: schema update
    • fix: launch
    • fix: schema json

  • feat(tier4_perception_launch): add missing remappings to launch file (#11037)

  • feat(autoware_bevdet): implementation of bevdet using tensorrt (#10441)

  • feat(tracking): add short range detection support and update related

File truncated at 100 lines; see the full file.

System Dependencies

No direct system dependencies.

Launch files

  • launch/object_recognition/detection/detection.launch.xml
      • mode
      • lidar_detection_model_type
      • lidar_detection_model_name
      • use_short_range_detection
      • lidar_short_range_detection_model_type
      • lidar_short_range_detection_model_name
      • use_object_filter
      • objects_filter_method
      • use_pointcloud_map
      • use_detection_by_tracker
      • use_validator
      • objects_validation_method
      • use_low_intensity_cluster_filter
      • use_image_segmentation_based_filter
      • use_multi_channel_tracker_merger
      • use_radar_tracking_fusion
      • use_irregular_object_detector
      • irregular_object_detector_fusion_camera_ids [default: [0]]
      • ml_camera_lidar_merger_priority_mode
      • number_of_cameras
      • node/pointcloud_container
      • input/pointcloud
      • input/obstacle_segmentation/pointcloud [default: /perception/obstacle_segmentation/pointcloud]
      • input/camera0/image
      • input/camera0/info
      • input/camera0/rois
      • input/camera1/image
      • input/camera1/info
      • input/camera1/rois
      • input/camera2/image
      • input/camera2/info
      • input/camera2/rois
      • input/camera3/image
      • input/camera3/info
      • input/camera3/rois
      • input/camera4/image
      • input/camera4/info
      • input/camera4/rois
      • input/camera5/image
      • input/camera5/info
      • input/camera5/rois
      • input/camera6/image
      • input/camera6/info
      • input/camera6/rois
      • input/camera7/image
      • input/camera7/info
      • input/camera7/rois
      • input/camera8/image
      • input/camera8/info
      • input/camera8/rois
      • image_topic_name
      • segmentation_pointcloud_fusion_camera_ids
      • input/radar
      • input/tracked_objects [default: /perception/object_recognition/tracking/objects]
      • output/objects [default: objects]
  • launch/object_recognition/detection/detector/camera_bev_detector.launch.xml
      • input/camera0/image
      • input/camera0/info
      • input/camera1/image
      • input/camera1/info
      • input/camera2/image
      • input/camera2/info
      • input/camera3/image
      • input/camera3/info
      • input/camera4/image
      • input/camera4/info
      • input/camera5/image
      • input/camera5/info
      • input/camera6/image
      • input/camera6/info
      • input/camera7/image
      • input/camera7/info
      • output/objects
      • number_of_cameras
      • data_path [default: $(env HOME)/autoware_data]
      • bevdet_model_name [default: bevdet_one_lt_d]
      • bevdet_model_path [default: $(var data_path)/tensorrt_bevdet]
  • launch/object_recognition/detection/detector/camera_lidar_detector.launch.xml
      • ns
      • lidar_detection_model_type
      • lidar_detection_model_name
      • use_low_intensity_cluster_filter
      • use_image_segmentation_based_filter
      • number_of_cameras
      • input/camera0/image
      • input/camera0/info
      • input/camera0/rois
      • input/camera1/image
      • input/camera1/info
      • input/camera1/rois
      • input/camera2/image
      • input/camera2/info
      • input/camera2/rois
      • input/camera3/image
      • input/camera3/info
      • input/camera3/rois
      • input/camera4/image
      • input/camera4/info
      • input/camera4/rois
      • input/camera5/image
      • input/camera5/info
      • input/camera5/rois
      • input/camera6/image
      • input/camera6/info
      • input/camera6/rois
      • input/camera7/image
      • input/camera7/info
      • input/camera7/rois
      • input/camera8/image
      • input/camera8/info
      • input/camera8/rois
      • segmentation_pointcloud_fusion_camera_ids
      • image_topic_name
      • node/pointcloud_container
      • input/pointcloud
      • input/pointcloud_map/pointcloud
      • input/obstacle_segmentation/pointcloud
      • output/ml_detector/objects
      • output/rule_detector/objects
      • output/clustering/cluster_objects
  • launch/object_recognition/detection/detector/camera_lidar_irregular_object_detector.launch.xml
      • ns
      • pipeline_ns
      • input/pointcloud
      • fusion_camera_ids [default: [0]]
      • image_topic_name [default: image_raw]
      • irregular_object_detector_param_path
  • launch/object_recognition/detection/detector/lidar_dnn_detector.launch.xml
      • lidar_detection_model_type
      • lidar_detection_model_name
      • bevfusion_model_path [default: $(var data_path)/bevfusion]
      • centerpoint_model_path [default: $(var data_path)/lidar_centerpoint]
      • transfusion_model_path [default: $(var data_path)/lidar_transfusion]
      • use_short_range_detection [default: false]
      • lidar_short_range_detection_model_type
      • lidar_short_range_detection_model_name
      • short_range_centerpoint_model_path [default: $(var data_path)/lidar_short_range_centerpoint]
      • node/pointcloud_container
      • input/pointcloud
      • output/objects
      • output/short_range_objects
      • lidar_short_range_model_param_path [default: $(find-pkg-share autoware_lidar_centerpoint)/config]
      • lidar_model_param_path [default: $(find-pkg-share autoware_bevfusion)/config]
      • lidar_model_param_path [default: $(find-pkg-share autoware_lidar_transfusion)/config]
      • lidar_model_param_path [default: $(find-pkg-share autoware_lidar_centerpoint)/config]
  • launch/object_recognition/detection/detector/lidar_rule_detector.launch.xml
      • ns
      • node/pointcloud_container
      • input/pointcloud_map/pointcloud
      • input/obstacle_segmentation/pointcloud
      • output/cluster_objects
      • output/objects
  • launch/object_recognition/detection/detector/tracker_based_detector.launch.xml
      • input/clusters
      • input/tracked_objects
      • output/objects
  • launch/object_recognition/detection/filter/object_filter.launch.xml
      • objects_filter_method [default: lanelet_filter]
      • input/objects
      • output/objects
  • launch/object_recognition/detection/filter/object_validator.launch.xml
      • objects_validation_method
      • input/obstacle_pointcloud
      • input/objects
      • output/objects
  • launch/object_recognition/detection/filter/radar_filter.launch.xml
      • object_velocity_splitter_param_path [default: $(var object_recognition_detection_object_velocity_splitter_radar_param_path)]
      • object_range_splitter_param_path [default: $(var object_recognition_detection_object_range_splitter_radar_param_path)]
      • radar_lanelet_filtering_range_param_path [default: $(find-pkg-share autoware_detected_object_validation)/config/object_lanelet_filter.param.yaml]
      • input/radar
      • output/objects
  • launch/object_recognition/detection/merger/camera_lidar_merger.launch.xml
      • object_recognition_detection_object_merger_data_association_matrix_param_path [default: $(find-pkg-share autoware_object_merger)/config/data_association_matrix.param.yaml]
      • object_recognition_detection_object_merger_distance_threshold_list_path [default: $(find-pkg-share autoware_object_merger)/config/overlapped_judge.param.yaml]
      • ml_camera_lidar_object_association_merger_param_path [default: $(find-pkg-share autoware_object_merger)/config/object_association_merger.param.yaml]
      • lidar_detection_model_type
      • use_detection_by_tracker
      • use_irregular_object_detector
      • use_object_filter
      • objects_filter_method
      • number_of_cameras
      • input/camera0/image
      • input/camera0/info
      • input/camera0/rois
      • input/camera1/image
      • input/camera1/info
      • input/camera1/rois
      • input/camera2/image
      • input/camera2/info
      • input/camera2/rois
      • input/camera3/image
      • input/camera3/info
      • input/camera3/rois
      • input/camera4/image
      • input/camera4/info
      • input/camera4/rois
      • input/camera5/image
      • input/camera5/info
      • input/camera5/rois
      • input/camera6/image
      • input/camera6/info
      • input/camera6/rois
      • input/camera7/image
      • input/camera7/info
      • input/camera7/rois
      • input/camera8/image
      • input/camera8/info
      • input/camera8/rois
      • input/lidar_ml/objects
      • input/lidar_rule/objects
      • input/detection_by_tracker/objects
      • output/objects [default: objects]
      • alpha_merger_priority_mode [default: 0]
  • launch/object_recognition/detection/merger/camera_lidar_radar_merger.launch.xml
      • object_recognition_detection_object_merger_data_association_matrix_param_path [default: $(find-pkg-share autoware_object_merger)/config/data_association_matrix.param.yaml]
      • object_recognition_detection_object_merger_distance_threshold_list_path [default: $(find-pkg-share autoware_object_merger)/config/overlapped_judge.param.yaml]
      • ml_camera_lidar_object_association_merger_param_path [default: $(find-pkg-share autoware_object_merger)/config/object_association_merger.param.yaml]
      • far_object_merger_sync_queue_size [default: 20]
      • lidar_detection_model_type
      • use_radar_tracking_fusion
      • use_detection_by_tracker
      • use_irregular_object_detector
      • use_object_filter
      • objects_filter_method
      • number_of_cameras
      • input/camera0/image
      • input/camera0/info
      • input/camera0/rois
      • input/camera1/image
      • input/camera1/info
      • input/camera1/rois
      • input/camera2/image
      • input/camera2/info
      • input/camera2/rois
      • input/camera3/image
      • input/camera3/info
      • input/camera3/rois
      • input/camera4/image
      • input/camera4/info
      • input/camera4/rois
      • input/camera5/image
      • input/camera5/info
      • input/camera5/rois
      • input/camera6/image
      • input/camera6/info
      • input/camera6/rois
      • input/camera7/image
      • input/camera7/info
      • input/camera7/rois
      • input/camera8/image
      • input/camera8/info
      • input/camera8/rois
      • input/lidar_ml/objects
      • input/lidar_rule/objects
      • input/radar/objects
      • input/radar_far/objects
      • input/detection_by_tracker/objects
      • output/objects [default: objects]
      • alpha_merger_priority_mode [default: 0]
  • launch/object_recognition/detection/merger/lidar_merger.launch.xml
      • object_recognition_detection_object_merger_data_association_matrix_param_path [default: $(find-pkg-share autoware_object_merger)/config/data_association_matrix.param.yaml]
      • object_recognition_detection_object_merger_distance_threshold_list_path [default: $(find-pkg-share autoware_object_merger)/config/overlapped_judge.param.yaml]
      • lidar_detection_model_type
      • use_detection_by_tracker
      • use_object_filter
      • objects_filter_method
      • input/lidar_ml/objects [default: $(var lidar_detection_model_type)/objects]
      • input/lidar_rule/objects [default: clustering/objects]
      • input/detection_by_tracker/objects [default: detection_by_tracker/objects]
      • output/objects
  • launch/object_recognition/prediction/prediction.launch.xml
      • use_vector_map [default: false]
      • input/objects [default: /perception/object_recognition/tracking/objects]
  • launch/object_recognition/tracking/tracking.launch.xml
      • object_recognition_tracking_radar_tracked_object_sorter_param_path
      • object_recognition_tracking_radar_tracked_object_lanelet_filter_param_path
      • object_recognition_tracking_object_merger_data_association_matrix_param_path
      • object_recognition_tracking_object_merger_node_param_path
      • mode [default: lidar]
      • use_radar_tracking_fusion [default: false]
      • use_multi_channel_tracker_merger
      • use_validator
      • use_short_range_detection
      • lidar_detection_model_type [default: centerpoint]
      • input/merged_detection/channel [default: detected_objects]
      • input/merged_detection/objects [default: /perception/object_recognition/detection/objects]
      • input/lidar_dnn/channel [default: lidar_$(var lidar_detection_model_type)]
      • input/lidar_dnn/objects [default: /perception/object_recognition/detection/$(var lidar_detection_model_type)/objects]
      • input/lidar_dnn_validated/objects [default: /perception/object_recognition/detection/$(var lidar_detection_model_type)/validation/objects]
      • input/lidar_dnn_short_range/channel [default: lidar_$(var lidar_short_range_detection_model_type)]
      • input/lidar_dnn_short_range/objects [default: /perception/object_recognition/detection/$(var lidar_short_range_detection_model_type)/objects]
      • input/camera_lidar_rule_detector/channel [default: camera_lidar_fusion]
      • input/camera_lidar_rule_detector/objects [default: /perception/object_recognition/detection/clustering/camera_lidar_fusion/objects]
      • input/irregular_object_detector/channel [default: camera_lidar_fusion_irregular]
      • input/irregular_object_detector/objects [default: /perception/object_recognition/detection/irregular_object/objects]
      • input/tracker_based_detector/channel [default: detection_by_tracker]
      • input/tracker_based_detector/objects [default: /perception/object_recognition/detection/detection_by_tracker/objects]
      • input/radar/channel [default: radar]
      • input/radar/far_objects [default: /perception/object_recognition/detection/radar/far_objects]
      • input/radar/objects [default: /perception/object_recognition/detection/radar/objects]
      • input/radar/tracked_objects [default: /sensing/radar/tracked_objects]
      • output/objects [default: $(var ns)/objects]
  • launch/occupancy_grid_map/probabilistic_occupancy_grid_map.launch.xml
      • input/obstacle_pointcloud [default: concatenated/pointcloud]
      • input/raw_pointcloud [default: no_ground/oneshot/pointcloud]
      • output [default: /perception/occupancy_grid_map/map]
      • use_intra_process [default: false]
      • use_multithread [default: false]
      • pointcloud_container_name [default: pointcloud_container]
      • occupancy_grid_map_method
      • occupancy_grid_map_param_path
      • occupancy_grid_map_updater
      • occupancy_grid_map_updater_param_path
      • input_obstacle_pointcloud [default: false]
      • input_obstacle_and_raw_pointcloud [default: true]
      • use_pointcloud_container [default: true]
  • launch/perception.launch.xml
      • object_recognition_detection_euclidean_cluster_param_path
      • object_recognition_detection_outlier_param_path
      • object_recognition_detection_object_lanelet_filter_param_path
      • object_recognition_detection_object_position_filter_param_path
      • object_recognition_detection_pointcloud_map_filter_param_path
      • object_recognition_prediction_map_based_prediction_param_path
      • object_recognition_detection_object_merger_data_association_matrix_param_path
      • ml_camera_lidar_object_association_merger_param_path
      • object_recognition_detection_object_merger_distance_threshold_list_path
      • object_recognition_detection_fusion_sync_param_path
      • object_recognition_detection_roi_cluster_fusion_param_path
      • object_recognition_detection_irregular_object_detector_param_path
      • object_recognition_detection_roi_detected_object_fusion_param_path
      • object_recognition_detection_pointpainting_fusion_common_param_path
      • object_recognition_detection_lidar_model_param_path
      • object_recognition_detection_radar_lanelet_filtering_range_param_path
      • object_recognition_detection_object_velocity_splitter_radar_param_path
      • object_recognition_detection_object_velocity_splitter_radar_fusion_param_path
      • object_recognition_detection_object_range_splitter_radar_param_path
      • object_recognition_detection_object_range_splitter_radar_fusion_param_path
      • object_recognition_tracking_multi_object_tracker_data_association_matrix_param_path
      • object_recognition_tracking_multi_object_tracker_input_channels_param_path
      • object_recognition_tracking_multi_object_tracker_node_param_path
      • object_recognition_tracking_radar_tracked_object_sorter_param_path
      • object_recognition_tracking_radar_tracked_object_lanelet_filter_param_path
      • obstacle_segmentation_ground_segmentation_param_path
      • obstacle_segmentation_ground_segmentation_elevation_map_param_path
      • object_recognition_detection_obstacle_pointcloud_based_validator_param_path
      • object_recognition_detection_detection_by_tracker_param
      • occupancy_grid_map_method
      • occupancy_grid_map_param_path
      • occupancy_grid_map_updater
      • occupancy_grid_map_updater_param_path
      • lidar_detection_model
      • each_traffic_light_map_based_detector_param_path
      • traffic_light_fine_detector_param_path
      • yolox_traffic_light_detector_param_path
      • car_traffic_light_classifier_param_path
      • pedestrian_traffic_light_classifier_param_path
      • traffic_light_roi_visualizer_param_path
      • traffic_light_occlusion_predictor_param_path
      • traffic_light_multi_camera_fusion_param_path
      • traffic_light_arbiter_param_path
      • crosswalk_traffic_light_estimator_param_path
      • lidar_detection_model_type [default: $(eval "'$(var lidar_detection_model)'.split('/')[0]")]
      • lidar_detection_model_name [default: $(eval "'$(var lidar_detection_model)'.split('/')[1] if '/' in '$(var lidar_detection_model)' else ''")]
      • use_short_range_detection [default: false]
      • lidar_short_range_detection_model_type [default: centerpoint_short_range]
      • lidar_short_range_detection_model_name [default: centerpoint_short_range]
      • bevfusion_model_path [default: $(var data_path)/bevfusion]
      • centerpoint_model_path [default: $(var data_path)/lidar_centerpoint]
      • transfusion_model_path [default: $(var data_path)/lidar_transfusion]
      • short_range_centerpoint_model_path [default: $(var data_path)/lidar_short_range_centerpoint]
      • pointpainting_model_path [default: $(var data_path)/image_projection_based_fusion]
      • input/pointcloud [default: /sensing/lidar/concatenated/pointcloud]
      • mode [default: camera_lidar_fusion]
      • data_path [default: $(env HOME)/autoware_data]
      • lidar_detection_model_type [default: $(var lidar_detection_model_type)]
      • lidar_detection_model_name [default: $(var lidar_detection_model_name)]
      • image_raw0 [default: /sensing/camera/camera0/image_rect_color]
      • camera_info0 [default: /sensing/camera/camera0/camera_info]
      • detection_rois0 [default: /perception/object_recognition/detection/rois0]
      • image_raw1 [default: /sensing/camera/camera1/image_rect_color]
      • camera_info1 [default: /sensing/camera/camera1/camera_info]
      • detection_rois1 [default: /perception/object_recognition/detection/rois1]
      • image_raw2 [default: /sensing/camera/camera2/image_rect_color]
      • camera_info2 [default: /sensing/camera/camera2/camera_info]
      • detection_rois2 [default: /perception/object_recognition/detection/rois2]
      • image_raw3 [default: /sensing/camera/camera3/image_rect_color]
      • camera_info3 [default: /sensing/camera/camera3/camera_info]
      • detection_rois3 [default: /perception/object_recognition/detection/rois3]
      • image_raw4 [default: /sensing/camera/camera4/image_rect_color]
      • camera_info4 [default: /sensing/camera/camera4/camera_info]
      • detection_rois4 [default: /perception/object_recognition/detection/rois4]
      • image_raw5 [default: /sensing/camera/camera5/image_rect_color]
      • camera_info5 [default: /sensing/camera/camera5/camera_info]
      • detection_rois5 [default: /perception/object_recognition/detection/rois5]
      • image_raw6 [default: /sensing/camera/camera6/image_rect_color]
      • camera_info6 [default: /sensing/camera/camera6/camera_info]
      • detection_rois6 [default: /perception/object_recognition/detection/rois6]
      • image_raw7 [default: /sensing/camera/camera7/image_rect_color]
      • camera_info7 [default: /sensing/camera/camera7/camera_info]
      • detection_rois7 [default: /perception/object_recognition/detection/rois7]
      • image_raw8 [default: /sensing/camera/camera8/image_rect_color]
      • camera_info8 [default: /sensing/camera/camera8/camera_info]
      • detection_rois8 [default: /perception/object_recognition/detection/rois8]
      • image_number [default: 6]
      • image_topic_name [default: image_rect_color]
      • segmentation_pointcloud_fusion_camera_ids [default: [0,1,5]]
      • ml_camera_lidar_merger_priority_mode [default: 0]
      • pointcloud_container_name [default: pointcloud_container]
      • use_vector_map [default: true]
      • use_pointcloud_map [default: true]
      • use_low_height_cropbox [default: true]
      • use_object_filter [default: true]
      • objects_filter_method [default: lanelet_filter]
      • use_irregular_object_detector [default: true]
      • use_low_intensity_cluster_filter [default: true]
      • use_image_segmentation_based_filter [default: false]
      • use_empty_dynamic_object_publisher [default: false]
      • use_object_validator [default: true]
      • objects_validation_method [default: obstacle_pointcloud]
      • use_perception_online_evaluator [default: false]
      • use_perception_analytics_publisher [default: true]
      • use_obstacle_segmentation_single_frame_filter
      • use_obstacle_segmentation_time_series_filter
      • use_traffic_light_recognition
      • traffic_light_recognition/fusion_only
      • traffic_light_recognition/camera_namespaces
      • traffic_light_recognition/use_high_accuracy_detection
      • traffic_light_recognition/high_accuracy_detection_type
      • traffic_light_recognition/whole_image_detection/model_path
      • traffic_light_recognition/whole_image_detection/label_path
      • traffic_light_recognition/fine_detection/model_path
      • traffic_light_recognition/fine_detection/label_path
      • traffic_light_recognition/classification/car/model_path
      • traffic_light_recognition/classification/car/label_path
      • traffic_light_recognition/classification/pedestrian/model_path
      • traffic_light_recognition/classification/pedestrian/label_path
      • use_detection_by_tracker [default: true]
      • use_radar_tracking_fusion [default: true]
      • input/radar [default: /sensing/radar/detected_objects]
      • use_multi_channel_tracker_merger [default: false]
      • downsample_perception_common_pointcloud [default: false]
      • common_downsample_voxel_size_x [default: 0.05]
      • common_downsample_voxel_size_y [default: 0.05]
      • common_downsample_voxel_size_z [default: 0.05]
  • launch/traffic_light_recognition/traffic_light.launch.xml
      • enable_image_decompressor [default: true]
      • fusion_only
      • camera_namespaces
      • use_high_accuracy_detection
      • high_accuracy_detection_type
      • each_traffic_light_map_based_detector_param_path
      • traffic_light_fine_detector_param_path
      • yolox_traffic_light_detector_param_path
      • car_traffic_light_classifier_param_path
      • pedestrian_traffic_light_classifier_param_path
      • traffic_light_roi_visualizer_param_path
      • traffic_light_occlusion_predictor_param_path
      • traffic_light_multi_camera_fusion_param_path
      • traffic_light_arbiter_param_path
      • crosswalk_traffic_light_estimator_param_path
      • whole_image_detection/model_path
      • whole_image_detection/label_path
      • fine_detection/model_path
      • fine_detection/label_path
      • classification/car/model_path
      • classification/car/label_path
      • classification/pedestrian/model_path
      • classification/pedestrian/label_path
      • input/vector_map [default: /map/vector_map]
      • input/route [default: /planning/mission_planning/route]
      • input/cloud [default: /sensing/lidar/top/pointcloud_raw_ex]
      • internal/traffic_signals [default: /perception/traffic_light_recognition/internal/traffic_signals]
      • external/traffic_signals [default: /perception/traffic_light_recognition/external/traffic_signals]
      • judged/traffic_signals [default: /perception/traffic_light_recognition/judged/traffic_signals]
      • output/traffic_signals [default: /perception/traffic_light_recognition/traffic_signals]
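
Note the eval-based defaults under perception.launch.xml above: lidar_detection_model_type and lidar_detection_model_name are derived by splitting the single lidar_detection_model argument on "/". A minimal sketch of passing it (the model name here is a placeholder, not a verified model):

  <include file="$(find-pkg-share tier4_perception_launch)/launch/perception.launch.xml">
    <!-- expands to lidar_detection_model_type=centerpoint and
         lidar_detection_model_name=my_centerpoint_model -->
    <arg name="lidar_detection_model" value="centerpoint/my_centerpoint_model" />
    ...
  </include>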

Messages

No message files found.

Services

No service files found.

Plugins

No plugins found.

Recent questions tagged tier4_perception_launch at Robotics Stack Exchange

No version for distro melodic showing github. Known supported distros are highlighted in the buttons above.
Package symbol

tier4_perception_launch package from autoware_universe repo

autoware_agnocast_wrapper autoware_auto_common autoware_boundary_departure_checker autoware_component_interface_specs_universe autoware_component_interface_tools autoware_component_interface_utils autoware_cuda_dependency_meta autoware_fake_test_node autoware_glog_component autoware_goal_distance_calculator autoware_grid_map_utils autoware_path_distance_calculator autoware_polar_grid autoware_time_utils autoware_traffic_light_recognition_marker_publisher autoware_traffic_light_utils autoware_universe_utils tier4_api_utils autoware_autonomous_emergency_braking autoware_collision_detector autoware_control_command_gate autoware_control_performance_analysis autoware_control_validator autoware_external_cmd_selector autoware_joy_controller autoware_lane_departure_checker autoware_mpc_lateral_controller autoware_obstacle_collision_checker autoware_operation_mode_transition_manager autoware_pid_longitudinal_controller autoware_predicted_path_checker autoware_pure_pursuit autoware_shift_decider autoware_smart_mpc_trajectory_follower autoware_stop_mode_operator autoware_trajectory_follower_base autoware_trajectory_follower_node autoware_vehicle_cmd_gate autoware_control_evaluator autoware_kinematic_evaluator autoware_localization_evaluator autoware_perception_online_evaluator autoware_planning_evaluator autoware_scenario_simulator_v2_adapter autoware_diagnostic_graph_test_examples tier4_autoware_api_launch tier4_control_launch tier4_localization_launch tier4_map_launch tier4_perception_launch tier4_planning_launch tier4_sensing_launch tier4_simulator_launch tier4_system_launch tier4_vehicle_launch autoware_geo_pose_projector autoware_ar_tag_based_localizer autoware_landmark_manager autoware_lidar_marker_localizer autoware_localization_error_monitor autoware_pose2twist autoware_pose_covariance_modifier autoware_pose_estimator_arbiter autoware_pose_instability_detector yabloc_common yabloc_image_processing yabloc_monitor yabloc_particle_filter yabloc_pose_initializer autoware_map_tf_generator autoware_bevfusion autoware_bytetrack autoware_cluster_merger autoware_compare_map_segmentation autoware_crosswalk_traffic_light_estimator autoware_detected_object_feature_remover autoware_detected_object_validation autoware_detection_by_tracker autoware_elevation_map_loader autoware_euclidean_cluster autoware_ground_segmentation autoware_image_projection_based_fusion autoware_lidar_apollo_instance_segmentation autoware_lidar_centerpoint autoware_lidar_transfusion autoware_map_based_prediction autoware_multi_object_tracker autoware_object_merger autoware_object_range_splitter autoware_object_sorter autoware_object_velocity_splitter autoware_occupancy_grid_map_outlier_filter autoware_probabilistic_occupancy_grid_map autoware_radar_fusion_to_detected_object autoware_radar_object_tracker autoware_radar_tracks_msgs_converter autoware_raindrop_cluster_filter autoware_shape_estimation autoware_simpl_prediction autoware_simple_object_merger autoware_tensorrt_bevdet autoware_tensorrt_classifier autoware_tensorrt_common autoware_tensorrt_plugins autoware_tensorrt_yolox autoware_tracking_object_merger autoware_traffic_light_arbiter autoware_traffic_light_category_merger autoware_traffic_light_classifier autoware_traffic_light_fine_detector autoware_traffic_light_map_based_detector autoware_traffic_light_multi_camera_fusion autoware_traffic_light_occlusion_predictor autoware_traffic_light_selector autoware_traffic_light_visualization perception_utils autoware_costmap_generator autoware_diffusion_planner 
autoware_external_velocity_limit_selector autoware_freespace_planner autoware_freespace_planning_algorithms autoware_hazard_lights_selector autoware_mission_planner_universe autoware_path_optimizer autoware_path_smoother autoware_remaining_distance_time_calculator autoware_rtc_interface autoware_scenario_selector autoware_surround_obstacle_checker autoware_behavior_path_avoidance_by_lane_change_module autoware_behavior_path_bidirectional_traffic_module autoware_behavior_path_dynamic_obstacle_avoidance_module autoware_behavior_path_external_request_lane_change_module autoware_behavior_path_goal_planner_module autoware_behavior_path_lane_change_module autoware_behavior_path_planner autoware_behavior_path_planner_common autoware_behavior_path_sampling_planner_module autoware_behavior_path_side_shift_module autoware_behavior_path_start_planner_module autoware_behavior_path_static_obstacle_avoidance_module autoware_behavior_velocity_blind_spot_module autoware_behavior_velocity_crosswalk_module autoware_behavior_velocity_detection_area_module autoware_behavior_velocity_intersection_module autoware_behavior_velocity_no_drivable_lane_module autoware_behavior_velocity_no_stopping_area_module autoware_behavior_velocity_occlusion_spot_module autoware_behavior_velocity_rtc_interface autoware_behavior_velocity_run_out_module autoware_behavior_velocity_speed_bump_module autoware_behavior_velocity_template_module autoware_behavior_velocity_traffic_light_module autoware_behavior_velocity_virtual_traffic_light_module autoware_behavior_velocity_walkway_module autoware_motion_velocity_boundary_departure_prevention_module autoware_motion_velocity_dynamic_obstacle_stop_module autoware_motion_velocity_obstacle_cruise_module autoware_motion_velocity_obstacle_slow_down_module autoware_motion_velocity_obstacle_velocity_limiter_module autoware_motion_velocity_out_of_lane_module autoware_motion_velocity_road_user_stop_module autoware_motion_velocity_run_out_module autoware_planning_validator autoware_planning_validator_intersection_collision_checker autoware_planning_validator_latency_checker autoware_planning_validator_rear_collision_checker autoware_planning_validator_test_utils autoware_planning_validator_trajectory_checker autoware_bezier_sampler autoware_frenet_planner autoware_path_sampler autoware_sampler_common autoware_cuda_pointcloud_preprocessor autoware_cuda_utils autoware_image_diagnostics autoware_image_transport_decompressor autoware_imu_corrector autoware_pcl_extensions autoware_pointcloud_preprocessor autoware_radar_objects_adapter autoware_radar_scan_to_pointcloud2 autoware_radar_static_pointcloud_filter autoware_radar_threshold_filter autoware_radar_tracks_noise_filter autoware_livox_tag_filter autoware_carla_interface autoware_dummy_perception_publisher autoware_fault_injection autoware_learning_based_vehicle_model autoware_simple_planning_simulator autoware_vehicle_door_simulator tier4_dummy_object_rviz_plugin autoware_bluetooth_monitor autoware_command_mode_decider autoware_command_mode_decider_plugins autoware_command_mode_switcher autoware_command_mode_switcher_plugins autoware_command_mode_types autoware_component_monitor autoware_component_state_monitor autoware_adapi_visualizers autoware_automatic_pose_initializer autoware_default_adapi_universe autoware_diagnostic_graph_aggregator autoware_diagnostic_graph_utils autoware_dummy_diag_publisher autoware_dummy_infrastructure autoware_duplicated_node_checker autoware_hazard_status_converter autoware_mrm_comfortable_stop_operator 
autoware_mrm_emergency_stop_operator autoware_mrm_handler autoware_pipeline_latency_monitor autoware_processing_time_checker autoware_system_monitor autoware_topic_relay_controller autoware_topic_state_monitor autoware_velodyne_monitor reaction_analyzer autoware_accel_brake_map_calibrator autoware_external_cmd_converter autoware_raw_vehicle_cmd_converter autoware_steer_offset_estimator autoware_bag_time_manager_rviz_plugin autoware_traffic_light_rviz_plugin tier4_adapi_rviz_plugin tier4_camera_view_rviz_plugin tier4_control_mode_rviz_plugin tier4_datetime_rviz_plugin tier4_perception_rviz_plugin tier4_planning_factor_rviz_plugin tier4_state_rviz_plugin tier4_system_rviz_plugin tier4_traffic_light_rviz_plugin tier4_vehicle_rviz_plugin

ROS Distro
github

Package Summary

Tags No category tags.
Version 0.47.0
License Apache License 2.0
Build type AMENT_CMAKE
Use RECOMMENDED

Repository Summary

Description
Checkout URI https://github.com/autowarefoundation/autoware_universe.git
VCS Type git
VCS Version main
Last Updated 2025-08-16
Dev Status UNKNOWN
Released UNRELEASED
Tags planner ros calibration self-driving-car autonomous-driving autonomous-vehicles ros2 3d-map autoware
Contributing Help Wanted (-)
Good First Issues (-)
Pull Requests to Review (-)

Package Description

The tier4_perception_launch package

Additional Links

No additional links.

Maintainers

  • Yukihiro Saito
  • Yoshi Ri
  • Taekjin Lee
  • Masato Saeki

Authors

No additional authors.

tier4_perception_launch

Structure

tier4_perception_launch

Package Dependencies

Please see <exec_depend> in package.xml.

Usage

You can include as follows in *.launch.xml to use perception.launch.xml.

Note that you should provide parameter paths as PACKAGE_param_path. The list of parameter paths you should provide is written at the top of perception.launch.xml.

  <include file="$(find-pkg-share tier4_perception_launch)/launch/perception.launch.xml">
    <!-- options for mode: camera_lidar_fusion, lidar, camera -->
    <arg name="mode" value="lidar" />

    <!-- Parameter files -->
    <arg name="FOO_param_path" value="..."/>
    <arg name="BAR_param_path" value="..."/>
    ...
  </include>

CHANGELOG

Changelog for package tier4_perception_launch

0.47.0 (2025-08-11)

  • feat(perception_online_evaluator): add functionality to publish perception analytics info (#11089)

    * feat: add functionality to calculate perception metrics for MOB in autoware_perception_online_evaluator chore: configure settings for mob metrics calculation

    * feat: change implementation from one topic per metric to all metrics published in one metric for better management by metric agent refactor: rename FrameMetrics member to clarify variable meaning refactor: use array/vector instead of unorder_map for FrameMetrics for better performance chore: remap published topic name to match msg conventions

    • fix: unittest error
    • style(pre-commit): autofix
    • refactor: replace MOB keyword with generalized expression of perception analytics
    • chore: improve comment

    * refactor: add a new autoware_perception_analytics_publisher_node to publish perception analytics info instead of using previous autoware_perception_online_evaluator_node chore: modify default launch setting to match the refactoring

    • style(pre-commit): autofix

    * fix: add initialization for [latencies_]{.title-ref} fix: use tf of objects timestamp instead of latest feat: use ConstSharedPtr to avoid repeated copy of large message in [PerceptionAnalyticsCalculator::setPredictedObjects]{.title-ref} ---------Co-authored-by: Jian Kang <<jian.kang@tier4.jp>> Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>

  • fix(multi_object_tracker): add irregular objects topic (#11102)

    • fix(multi_object_tracker): add irregular objects topic
    • fix: change channel order

    * Update launch/tier4_perception_launch/launch/object_recognition/tracking/tracking.launch.xml Co-authored-by: Taekjin LEE <<technolojin@gmail.com>>

    * Update launch/tier4_perception_launch/launch/object_recognition/tracking/tracking.launch.xml Co-authored-by: Taekjin LEE <<technolojin@gmail.com>>

    * Update perception/autoware_multi_object_tracker/config/input_channels.param.yaml Co-authored-by: Taekjin LEE <<technolojin@gmail.com>>

    * Update launch/tier4_perception_launch/launch/object_recognition/tracking/tracking.launch.xml Co-authored-by: Taekjin LEE <<technolojin@gmail.com>>

    • fix: unused channels
    • fix: schema
    • docs: update readme
    • style(pre-commit): autofix
    • fix: short name

    * feat: add lidar_centerpoint_short_range input channel with default flags ---------Co-authored-by: Taekjin LEE <<technolojin@gmail.com>> Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com> Co-authored-by: Taekjin LEE <<taekjin.lee@tier4.jp>>

  • chore: sync files (#11091) Co-authored-by: github-actions <<github-actions@github.com>> Co-authored-by: M. Fatih Cırıt <<mfc@autoware.org>> Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>

  • fix(autoware_object_merger): add merger priority_mode (#11042)

    * fix: add merger priority_mode fix: add priority mode into launch fix: add class based priority matrix fix: adjust priority matrix

    • fix: add Confidence mode support
    • docs: schema update
    • fix: launch

    * fix: schema json ---------

  • feat(tier4_perception_launch): add missing remappings to launch file (#11037)

  • feat(autoware_bevdet): implementation of bevdet using tensorrt (#10441)

  • feat(tracking): add short range detection support and update related

File truncated at 100 lines see the full file

Package Dependencies

System Dependencies

No direct system dependencies.

Launch files

  • launch/object_recognition/detection/detection.launch.xml
      • mode
      • lidar_detection_model_type
      • lidar_detection_model_name
      • use_short_range_detection
      • lidar_short_range_detection_model_type
      • lidar_short_range_detection_model_name
      • use_object_filter
      • objects_filter_method
      • use_pointcloud_map
      • use_detection_by_tracker
      • use_validator
      • objects_validation_method
      • use_low_intensity_cluster_filter
      • use_image_segmentation_based_filter
      • use_multi_channel_tracker_merger
      • use_radar_tracking_fusion
      • use_irregular_object_detector
      • irregular_object_detector_fusion_camera_ids [default: [0]]
      • ml_camera_lidar_merger_priority_mode
      • number_of_cameras
      • node/pointcloud_container
      • input/pointcloud
      • input/obstacle_segmentation/pointcloud [default: /perception/obstacle_segmentation/pointcloud]
      • input/camera0/image
      • input/camera0/info
      • input/camera0/rois
      • input/camera1/image
      • input/camera1/info
      • input/camera1/rois
      • input/camera2/image
      • input/camera2/info
      • input/camera2/rois
      • input/camera3/image
      • input/camera3/info
      • input/camera3/rois
      • input/camera4/image
      • input/camera4/info
      • input/camera4/rois
      • input/camera5/image
      • input/camera5/info
      • input/camera5/rois
      • input/camera6/image
      • input/camera6/info
      • input/camera6/rois
      • input/camera7/image
      • input/camera7/info
      • input/camera7/rois
      • input/camera8/image
      • input/camera8/info
      • input/camera8/rois
      • image_topic_name
      • segmentation_pointcloud_fusion_camera_ids
      • input/radar
      • input/tracked_objects [default: /perception/object_recognition/tracking/objects]
      • output/objects [default: objects]
  • launch/object_recognition/detection/detector/camera_bev_detector.launch.xml
      • input/camera0/image
      • input/camera0/info
      • input/camera1/image
      • input/camera1/info
      • input/camera2/image
      • input/camera2/info
      • input/camera3/image
      • input/camera3/info
      • input/camera4/image
      • input/camera4/info
      • input/camera5/image
      • input/camera5/info
      • input/camera6/image
      • input/camera6/info
      • input/camera7/image
      • input/camera7/info
      • output/objects
      • number_of_cameras
      • data_path [default: $(env HOME)/autoware_data]
      • bevdet_model_name [default: bevdet_one_lt_d]
      • bevdet_model_path [default: $(var data_path)/tensorrt_bevdet]
  • launch/object_recognition/detection/detector/camera_lidar_detector.launch.xml
      • ns
      • lidar_detection_model_type
      • lidar_detection_model_name
      • use_low_intensity_cluster_filter
      • use_image_segmentation_based_filter
      • number_of_cameras
      • input/camera0/image
      • input/camera0/info
      • input/camera0/rois
      • input/camera1/image
      • input/camera1/info
      • input/camera1/rois
      • input/camera2/image
      • input/camera2/info
      • input/camera2/rois
      • input/camera3/image
      • input/camera3/info
      • input/camera3/rois
      • input/camera4/image
      • input/camera4/info
      • input/camera4/rois
      • input/camera5/image
      • input/camera5/info
      • input/camera5/rois
      • input/camera6/image
      • input/camera6/info
      • input/camera6/rois
      • input/camera7/image
      • input/camera7/info
      • input/camera7/rois
      • input/camera8/image
      • input/camera8/info
      • input/camera8/rois
      • segmentation_pointcloud_fusion_camera_ids
      • image_topic_name
      • node/pointcloud_container
      • input/pointcloud
      • input/pointcloud_map/pointcloud
      • input/obstacle_segmentation/pointcloud
      • output/ml_detector/objects
      • output/rule_detector/objects
      • output/clustering/cluster_objects
  • launch/object_recognition/detection/detector/camera_lidar_irregular_object_detector.launch.xml
      • ns
      • pipeline_ns
      • input/pointcloud
      • fusion_camera_ids [default: [0]]
      • image_topic_name [default: image_raw]
      • irregular_object_detector_param_path
  • launch/object_recognition/detection/detector/lidar_dnn_detector.launch.xml
      • lidar_detection_model_type
      • lidar_detection_model_name
      • bevfusion_model_path [default: $(var data_path)/bevfusion]
      • centerpoint_model_path [default: $(var data_path)/lidar_centerpoint]
      • transfusion_model_path [default: $(var data_path)/lidar_transfusion]
      • use_short_range_detection [default: false]
      • lidar_short_range_detection_model_type
      • lidar_short_range_detection_model_name
      • short_range_centerpoint_model_path [default: $(var data_path)/lidar_short_range_centerpoint]
      • node/pointcloud_container
      • input/pointcloud
      • output/objects
      • output/short_range_objects
      • lidar_short_range_model_param_path [default: $(find-pkg-share autoware_lidar_centerpoint)/config]
      • lidar_model_param_path [default: $(find-pkg-share autoware_bevfusion)/config]
      • lidar_model_param_path [default: $(find-pkg-share autoware_lidar_transfusion)/config]
      • lidar_model_param_path [default: $(find-pkg-share autoware_lidar_centerpoint)/config]
  • launch/object_recognition/detection/detector/lidar_rule_detector.launch.xml
      • ns
      • node/pointcloud_container
      • input/pointcloud_map/pointcloud
      • input/obstacle_segmentation/pointcloud
      • output/cluster_objects
      • output/objects
  • launch/object_recognition/detection/detector/tracker_based_detector.launch.xml
      • input/clusters
      • input/tracked_objects
      • output/objects
  • launch/object_recognition/detection/filter/object_filter.launch.xml
      • objects_filter_method [default: lanelet_filter]
      • input/objects
      • output/objects
  • launch/object_recognition/detection/filter/object_validator.launch.xml
      • objects_validation_method
      • input/obstacle_pointcloud
      • input/objects
      • output/objects
  • launch/object_recognition/detection/filter/radar_filter.launch.xml
      • object_velocity_splitter_param_path [default: $(var object_recognition_detection_object_velocity_splitter_radar_param_path)]
      • object_range_splitter_param_path [default: $(var object_recognition_detection_object_range_splitter_radar_param_path)]
      • radar_lanelet_filtering_range_param_path [default: $(find-pkg-share autoware_detected_object_validation)/config/object_lanelet_filter.param.yaml]
      • input/radar
      • output/objects
  • launch/object_recognition/detection/merger/camera_lidar_merger.launch.xml
      • object_recognition_detection_object_merger_data_association_matrix_param_path [default: $(find-pkg-share autoware_object_merger)/config/data_association_matrix.param.yaml]
      • object_recognition_detection_object_merger_distance_threshold_list_path [default: $(find-pkg-share autoware_object_merger)/config/overlapped_judge.param.yaml]
      • ml_camera_lidar_object_association_merger_param_path [default: $(find-pkg-share autoware_object_merger)/config/object_association_merger.param.yaml]
      • lidar_detection_model_type
      • use_detection_by_tracker
      • use_irregular_object_detector
      • use_object_filter
      • objects_filter_method
      • number_of_cameras
      • input/camera0/image
      • input/camera0/info
      • input/camera0/rois
      • input/camera1/image
      • input/camera1/info
      • input/camera1/rois
      • input/camera2/image
      • input/camera2/info
      • input/camera2/rois
      • input/camera3/image
      • input/camera3/info
      • input/camera3/rois
      • input/camera4/image
      • input/camera4/info
      • input/camera4/rois
      • input/camera5/image
      • input/camera5/info
      • input/camera5/rois
      • input/camera6/image
      • input/camera6/info
      • input/camera6/rois
      • input/camera7/image
      • input/camera7/info
      • input/camera7/rois
      • input/camera8/image
      • input/camera8/info
      • input/camera8/rois
      • input/lidar_ml/objects
      • input/lidar_rule/objects
      • input/detection_by_tracker/objects
      • output/objects [default: objects]
      • alpha_merger_priority_mode [default: 0]
  • launch/object_recognition/detection/merger/camera_lidar_radar_merger.launch.xml
      • object_recognition_detection_object_merger_data_association_matrix_param_path [default: $(find-pkg-share autoware_object_merger)/config/data_association_matrix.param.yaml]
      • object_recognition_detection_object_merger_distance_threshold_list_path [default: $(find-pkg-share autoware_object_merger)/config/overlapped_judge.param.yaml]
      • ml_camera_lidar_object_association_merger_param_path [default: $(find-pkg-share autoware_object_merger)/config/object_association_merger.param.yaml]
      • far_object_merger_sync_queue_size [default: 20]
      • lidar_detection_model_type
      • use_radar_tracking_fusion
      • use_detection_by_tracker
      • use_irregular_object_detector
      • use_object_filter
      • objects_filter_method
      • number_of_cameras
      • input/camera0/image
      • input/camera0/info
      • input/camera0/rois
      • input/camera1/image
      • input/camera1/info
      • input/camera1/rois
      • input/camera2/image
      • input/camera2/info
      • input/camera2/rois
      • input/camera3/image
      • input/camera3/info
      • input/camera3/rois
      • input/camera4/image
      • input/camera4/info
      • input/camera4/rois
      • input/camera5/image
      • input/camera5/info
      • input/camera5/rois
      • input/camera6/image
      • input/camera6/info
      • input/camera6/rois
      • input/camera7/image
      • input/camera7/info
      • input/camera7/rois
      • input/camera8/image
      • input/camera8/info
      • input/camera8/rois
      • input/lidar_ml/objects
      • input/lidar_rule/objects
      • input/radar/objects
      • input/radar_far/objects
      • input/detection_by_tracker/objects
      • output/objects [default: objects]
      • alpha_merger_priority_mode [default: 0]
  • launch/object_recognition/detection/merger/lidar_merger.launch.xml
      • object_recognition_detection_object_merger_data_association_matrix_param_path [default: $(find-pkg-share autoware_object_merger)/config/data_association_matrix.param.yaml]
      • object_recognition_detection_object_merger_distance_threshold_list_path [default: $(find-pkg-share autoware_object_merger)/config/overlapped_judge.param.yaml]
      • lidar_detection_model_type
      • use_detection_by_tracker
      • use_object_filter
      • objects_filter_method
      • input/lidar_ml/objects [default: $(var lidar_detection_model_type)/objects]
      • input/lidar_rule/objects [default: clustering/objects]
      • input/detection_by_tracker/objects [default: detection_by_tracker/objects]
      • output/objects
  • launch/object_recognition/prediction/prediction.launch.xml
      • use_vector_map [default: false]
      • input/objects [default: /perception/object_recognition/tracking/objects]
  • launch/object_recognition/tracking/tracking.launch.xml
      • object_recognition_tracking_radar_tracked_object_sorter_param_path
      • object_recognition_tracking_radar_tracked_object_lanelet_filter_param_path
      • object_recognition_tracking_object_merger_data_association_matrix_param_path
      • object_recognition_tracking_object_merger_node_param_path
      • mode [default: lidar]
      • use_radar_tracking_fusion [default: false]
      • use_multi_channel_tracker_merger
      • use_validator
      • use_short_range_detection
      • lidar_detection_model_type [default: centerpoint]
      • input/merged_detection/channel [default: detected_objects]
      • input/merged_detection/objects [default: /perception/object_recognition/detection/objects]
      • input/lidar_dnn/channel [default: lidar_$(var lidar_detection_model_type)]
      • input/lidar_dnn/objects [default: /perception/object_recognition/detection/$(var lidar_detection_model_type)/objects]
      • input/lidar_dnn_validated/objects [default: /perception/object_recognition/detection/$(var lidar_detection_model_type)/validation/objects]
      • input/lidar_dnn_short_range/channel [default: lidar_$(var lidar_short_range_detection_model_type)]
      • input/lidar_dnn_short_range/objects [default: /perception/object_recognition/detection/$(var lidar_short_range_detection_model_type)/objects]
      • input/camera_lidar_rule_detector/channel [default: camera_lidar_fusion]
      • input/camera_lidar_rule_detector/objects [default: /perception/object_recognition/detection/clustering/camera_lidar_fusion/objects]
      • input/irregular_object_detector/channel [default: camera_lidar_fusion_irregular]
      • input/irregular_object_detector/objects [default: /perception/object_recognition/detection/irregular_object/objects]
      • input/tracker_based_detector/channel [default: detection_by_tracker]
      • input/tracker_based_detector/objects [default: /perception/object_recognition/detection/detection_by_tracker/objects]
      • input/radar/channel [default: radar]
      • input/radar/far_objects [default: /perception/object_recognition/detection/radar/far_objects]
      • input/radar/objects [default: /perception/object_recognition/detection/radar/objects]
      • input/radar/tracked_objects [default: /sensing/radar/tracked_objects]
      • output/objects [default: $(var ns)/objects]
  • launch/occupancy_grid_map/probabilistic_occupancy_grid_map.launch.xml
      • input/obstacle_pointcloud [default: concatenated/pointcloud]
      • input/raw_pointcloud [default: no_ground/oneshot/pointcloud]
      • output [default: /perception/occupancy_grid_map/map]
      • use_intra_process [default: false]
      • use_multithread [default: false]
      • pointcloud_container_name [default: pointcloud_container]
      • occupancy_grid_map_method
      • occupancy_grid_map_param_path
      • occupancy_grid_map_updater
      • occupancy_grid_map_updater_param_path
      • input_obstacle_pointcloud [default: false]
      • input_obstacle_and_raw_pointcloud [default: true]
      • use_pointcloud_container [default: true]
  • launch/perception.launch.xml (an example include is sketched after this list)
      • object_recognition_detection_euclidean_cluster_param_path
      • object_recognition_detection_outlier_param_path
      • object_recognition_detection_object_lanelet_filter_param_path
      • object_recognition_detection_object_position_filter_param_path
      • object_recognition_detection_pointcloud_map_filter_param_path
      • object_recognition_prediction_map_based_prediction_param_path
      • object_recognition_detection_object_merger_data_association_matrix_param_path
      • ml_camera_lidar_object_association_merger_param_path
      • object_recognition_detection_object_merger_distance_threshold_list_path
      • object_recognition_detection_fusion_sync_param_path
      • object_recognition_detection_roi_cluster_fusion_param_path
      • object_recognition_detection_irregular_object_detector_param_path
      • object_recognition_detection_roi_detected_object_fusion_param_path
      • object_recognition_detection_pointpainting_fusion_common_param_path
      • object_recognition_detection_lidar_model_param_path
      • object_recognition_detection_radar_lanelet_filtering_range_param_path
      • object_recognition_detection_object_velocity_splitter_radar_param_path
      • object_recognition_detection_object_velocity_splitter_radar_fusion_param_path
      • object_recognition_detection_object_range_splitter_radar_param_path
      • object_recognition_detection_object_range_splitter_radar_fusion_param_path
      • object_recognition_tracking_multi_object_tracker_data_association_matrix_param_path
      • object_recognition_tracking_multi_object_tracker_input_channels_param_path
      • object_recognition_tracking_multi_object_tracker_node_param_path
      • object_recognition_tracking_radar_tracked_object_sorter_param_path
      • object_recognition_tracking_radar_tracked_object_lanelet_filter_param_path
      • obstacle_segmentation_ground_segmentation_param_path
      • obstacle_segmentation_ground_segmentation_elevation_map_param_path
      • object_recognition_detection_obstacle_pointcloud_based_validator_param_path
      • object_recognition_detection_detection_by_tracker_param
      • occupancy_grid_map_method
      • occupancy_grid_map_param_path
      • occupancy_grid_map_updater
      • occupancy_grid_map_updater_param_path
      • lidar_detection_model
      • each_traffic_light_map_based_detector_param_path
      • traffic_light_fine_detector_param_path
      • yolox_traffic_light_detector_param_path
      • car_traffic_light_classifier_param_path
      • pedestrian_traffic_light_classifier_param_path
      • traffic_light_roi_visualizer_param_path
      • traffic_light_occlusion_predictor_param_path
      • traffic_light_multi_camera_fusion_param_path
      • traffic_light_arbiter_param_path
      • crosswalk_traffic_light_estimator_param_path
      • lidar_detection_model_type [default: $(eval "'$(var lidar_detection_model)'.split('/')[0]")]
      • lidar_detection_model_name [default: $(eval "'$(var lidar_detection_model)'.split('/')[1] if '/' in '$(var lidar_detection_model)' else ''")]
      • use_short_range_detection [default: false]
      • lidar_short_range_detection_model_type [default: centerpoint_short_range]
      • lidar_short_range_detection_model_name [default: centerpoint_short_range]
      • bevfusion_model_path [default: $(var data_path)/bevfusion]
      • centerpoint_model_path [default: $(var data_path)/lidar_centerpoint]
      • transfusion_model_path [default: $(var data_path)/lidar_transfusion]
      • short_range_centerpoint_model_path [default: $(var data_path)/lidar_short_range_centerpoint]
      • pointpainting_model_path [default: $(var data_path)/image_projection_based_fusion]
      • input/pointcloud [default: /sensing/lidar/concatenated/pointcloud]
      • mode [default: camera_lidar_fusion]
      • data_path [default: $(env HOME)/autoware_data]
      • lidar_detection_model_type [default: $(var lidar_detection_model_type)]
      • lidar_detection_model_name [default: $(var lidar_detection_model_name)]
      • image_raw0 [default: /sensing/camera/camera0/image_rect_color]
      • camera_info0 [default: /sensing/camera/camera0/camera_info]
      • detection_rois0 [default: /perception/object_recognition/detection/rois0]
      • image_raw1 [default: /sensing/camera/camera1/image_rect_color]
      • camera_info1 [default: /sensing/camera/camera1/camera_info]
      • detection_rois1 [default: /perception/object_recognition/detection/rois1]
      • image_raw2 [default: /sensing/camera/camera2/image_rect_color]
      • camera_info2 [default: /sensing/camera/camera2/camera_info]
      • detection_rois2 [default: /perception/object_recognition/detection/rois2]
      • image_raw3 [default: /sensing/camera/camera3/image_rect_color]
      • camera_info3 [default: /sensing/camera/camera3/camera_info]
      • detection_rois3 [default: /perception/object_recognition/detection/rois3]
      • image_raw4 [default: /sensing/camera/camera4/image_rect_color]
      • camera_info4 [default: /sensing/camera/camera4/camera_info]
      • detection_rois4 [default: /perception/object_recognition/detection/rois4]
      • image_raw5 [default: /sensing/camera/camera5/image_rect_color]
      • camera_info5 [default: /sensing/camera/camera5/camera_info]
      • detection_rois5 [default: /perception/object_recognition/detection/rois5]
      • image_raw6 [default: /sensing/camera/camera6/image_rect_color]
      • camera_info6 [default: /sensing/camera/camera6/camera_info]
      • detection_rois6 [default: /perception/object_recognition/detection/rois6]
      • image_raw7 [default: /sensing/camera/camera7/image_rect_color]
      • camera_info7 [default: /sensing/camera/camera7/camera_info]
      • detection_rois7 [default: /perception/object_recognition/detection/rois7]
      • image_raw8 [default: /sensing/camera/camera8/image_rect_color]
      • camera_info8 [default: /sensing/camera/camera8/camera_info]
      • detection_rois8 [default: /perception/object_recognition/detection/rois8]
      • image_number [default: 6]
      • image_topic_name [default: image_rect_color]
      • segmentation_pointcloud_fusion_camera_ids [default: [0,1,5]]
      • ml_camera_lidar_merger_priority_mode [default: 0]
      • pointcloud_container_name [default: pointcloud_container]
      • use_vector_map [default: true]
      • use_pointcloud_map [default: true]
      • use_low_height_cropbox [default: true]
      • use_object_filter [default: true]
      • objects_filter_method [default: lanelet_filter]
      • use_irregular_object_detector [default: true]
      • use_low_intensity_cluster_filter [default: true]
      • use_image_segmentation_based_filter [default: false]
      • use_empty_dynamic_object_publisher [default: false]
      • use_object_validator [default: true]
      • objects_validation_method [default: obstacle_pointcloud]
      • use_perception_online_evaluator [default: false]
      • use_perception_analytics_publisher [default: true]
      • use_obstacle_segmentation_single_frame_filter
      • use_obstacle_segmentation_time_series_filter
      • use_traffic_light_recognition
      • traffic_light_recognition/fusion_only
      • traffic_light_recognition/camera_namespaces
      • traffic_light_recognition/use_high_accuracy_detection
      • traffic_light_recognition/high_accuracy_detection_type
      • traffic_light_recognition/whole_image_detection/model_path
      • traffic_light_recognition/whole_image_detection/label_path
      • traffic_light_recognition/fine_detection/model_path
      • traffic_light_recognition/fine_detection/label_path
      • traffic_light_recognition/classification/car/model_path
      • traffic_light_recognition/classification/car/label_path
      • traffic_light_recognition/classification/pedestrian/model_path
      • traffic_light_recognition/classification/pedestrian/label_path
      • use_detection_by_tracker [default: true]
      • use_radar_tracking_fusion [default: true]
      • input/radar [default: /sensing/radar/detected_objects]
      • use_multi_channel_tracker_merger [default: false]
      • downsample_perception_common_pointcloud [default: false]
      • common_downsample_voxel_size_x [default: 0.05]
      • common_downsample_voxel_size_y [default: 0.05]
      • common_downsample_voxel_size_z [default: 0.05]
  • launch/traffic_light_recognition/traffic_light.launch.xml (an example include is sketched after this list)
      • enable_image_decompressor [default: true]
      • fusion_only
      • camera_namespaces
      • use_high_accuracy_detection
      • high_accuracy_detection_type
      • each_traffic_light_map_based_detector_param_path
      • traffic_light_fine_detector_param_path
      • yolox_traffic_light_detector_param_path
      • car_traffic_light_classifier_param_path
      • pedestrian_traffic_light_classifier_param_path
      • traffic_light_roi_visualizer_param_path
      • traffic_light_occlusion_predictor_param_path
      • traffic_light_multi_camera_fusion_param_path
      • traffic_light_arbiter_param_path
      • crosswalk_traffic_light_estimator_param_path
      • whole_image_detection/model_path
      • whole_image_detection/label_path
      • fine_detection/model_path
      • fine_detection/label_path
      • classification/car/model_path
      • classification/car/label_path
      • classification/pedestrian/model_path
      • classification/pedestrian/label_path
      • input/vector_map [default: /map/vector_map]
      • input/route [default: /planning/mission_planning/route]
      • input/cloud [default: /sensing/lidar/top/pointcloud_raw_ex]
      • internal/traffic_signals [default: /perception/traffic_light_recognition/internal/traffic_signals]
      • external/traffic_signals [default: /perception/traffic_light_recognition/external/traffic_signals]
      • judged/traffic_signals [default: /perception/traffic_light_recognition/judged/traffic_signals]
      • output/traffic_signals [default: /perception/traffic_light_recognition/traffic_signals]
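
The include pattern is the same for all of these files: provide the required PACKAGE_param_path arguments and override any defaults you need. Below is a minimal sketch for the top-level entry point; the argument names come from the list above, but the values are illustrative rather than package defaults, and the required parameter-path arguments are elided:

  <include file="$(find-pkg-share tier4_perception_launch)/launch/perception.launch.xml">
    <!-- options for mode: camera_lidar_fusion, lidar, camera -->
    <arg name="mode" value="camera_lidar_fusion"/>
    <!-- a single "type/name" string; the $(eval ...) defaults above split it into
         lidar_detection_model_type and lidar_detection_model_name -->
    <arg name="lidar_detection_model" value="centerpoint/centerpoint_tiny"/>
    <arg name="data_path" value="$(env HOME)/autoware_data"/>
    <!-- ... one <arg> per required PACKAGE_param_path ... -->
  </include>

With the value shown, lidar_detection_model_type resolves to centerpoint and lidar_detection_model_name to centerpoint_tiny; a bare value such as centerpoint leaves the model name empty, per the $(eval ...) defaults listed above.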
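
The traffic light pipeline can likewise be included on its own. A hedged sketch, assuming two camera namespaces; the argument names are taken from the list above, the values shown are assumptions, and the model/label and parameter paths are elided:

  <include file="$(find-pkg-share tier4_perception_launch)/launch/traffic_light_recognition/traffic_light.launch.xml">
    <!-- fusion_only and camera_namespaces have no defaults above, so they must be set -->
    <arg name="fusion_only" value="false"/>
    <arg name="camera_namespaces" value="[camera6, camera7]"/>
    <arg name="use_high_accuracy_detection" value="true"/>
    <!-- ... model_path/label_path and *_param_path arguments as listed above ... -->
  </include>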

Messages

No message files found.

Services

No service files found.

Plugins

No plugins found.

Recent questions tagged tier4_perception_launch at Robotics Stack Exchange
