autoware_shape_estimation package from autoware_universe repo

Package Summary

Tags: No category tags.
Version: 0.47.0
License: Apache License 2.0
Build type: AMENT_CMAKE

Repository Summary

Checkout URI: https://github.com/autowarefoundation/autoware_universe.git
VCS Type: git
VCS Version: main
Last Updated: 2025-08-16
Dev Status: UNKNOWN
Released: UNRELEASED
Tags: planner ros calibration self-driving-car autonomous-driving autonomous-vehicles ros2 3d-map autoware

Package Description

This package implements a shape estimation algorithm as a ROS 2 node.

Additional Links

No additional links.

Maintainers

  • Yukihiro Saito
  • Yoshi Ri
  • Kaan Colak
  • Taekjin Lee
  • Lei Gu

Authors

No additional authors.

autoware_shape_estimation

Purpose

This node calculates a refined object shape (bounding box, cylinder, or convex hull) that fits a point cloud cluster, according to the object's label.

Inner-workings / Algorithms

Fitting algorithms

  • bounding box
    • L-shape fitting: see the reference below for details
    • ML-based shape fitting: see the ML Based Shape Implementation section below for details
  • cylinder: cv::minEnclosingCircle (usage sketched below)
  • convex hull: cv::convexHull (usage sketched below)
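
The following is a minimal, self-contained sketch (not the package's implementation) of how these two OpenCV calls can fit a circle and a convex hull to a cluster that has already been projected onto the ground plane; the sample points and the projection step are made up for illustration.

// Illustrative only: smallest enclosing circle and convex hull of a 2D-projected cluster.
// The cluster coordinates below are hypothetical sample data.
#include <opencv2/imgproc.hpp>

#include <iostream>
#include <vector>

int main()
{
  // A point cloud cluster projected onto the X-Y plane (hypothetical values, in meters).
  std::vector<cv::Point2f> cluster = {
    {0.0f, 0.0f}, {0.4f, 0.1f}, {0.8f, 0.3f}, {0.5f, 0.9f}, {0.1f, 0.7f}};

  // Cylinder footprint: center and radius of the smallest enclosing circle.
  cv::Point2f center;
  float radius = 0.0f;
  cv::minEnclosingCircle(cluster, center, radius);

  // Convex hull footprint: ordered polygon vertices around the cluster.
  std::vector<cv::Point2f> hull;
  cv::convexHull(cluster, hull);

  std::cout << "circle center (" << center.x << ", " << center.y << "), radius " << radius
            << ", hull vertices " << hull.size() << std::endl;
  return 0;
}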

Inputs / Outputs

Input

Name: input
Type: tier4_perception_msgs::msg::DetectedObjectsWithFeature
Description: detected objects with labeled cluster

Output

Name: output/objects
Type: autoware_perception_msgs::msg::DetectedObjects
Description: detected objects with refined shape
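
To illustrate these interfaces, here is a minimal, hypothetical relay node (not the actual shape estimation node) that subscribes to the input type and republishes an empty message of the output type; the node name, QoS settings, and the header field access are assumptions.

// Hypothetical I/O illustration only; the real node performs the shape fitting described above.
#include <rclcpp/rclcpp.hpp>

#include <autoware_perception_msgs/msg/detected_objects.hpp>
#include <tier4_perception_msgs/msg/detected_objects_with_feature.hpp>

class ShapeEstimationIoDemo : public rclcpp::Node
{
public:
  ShapeEstimationIoDemo() : Node("shape_estimation_io_demo")
  {
    pub_ = create_publisher<autoware_perception_msgs::msg::DetectedObjects>(
      "output/objects", rclcpp::QoS(1));
    sub_ = create_subscription<tier4_perception_msgs::msg::DetectedObjectsWithFeature>(
      "input", rclcpp::QoS(1),
      [this](tier4_perception_msgs::msg::DetectedObjectsWithFeature::ConstSharedPtr msg) {
        autoware_perception_msgs::msg::DetectedObjects out;
        out.header = msg->header;  // assumes the standard header field of the input message
        // The real node would fill out.objects with refined shapes here.
        pub_->publish(out);
      });
  }

private:
  rclcpp::Publisher<autoware_perception_msgs::msg::DetectedObjects>::SharedPtr pub_;
  rclcpp::Subscription<tier4_perception_msgs::msg::DetectedObjectsWithFeature>::SharedPtr sub_;
};

int main(int argc, char ** argv)
{
  rclcpp::init(argc, argv);
  rclcpp::spin(std::make_shared<ShapeEstimationIoDemo>());
  rclcpp::shutdown();
  return 0;
}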

Parameters

{{ json_to_markdown("perception/autoware_shape_estimation/schema/shape_estimation.schema.json") }}

ML Based Shape Implementation

The model takes a point cloud and an object label (provided by camera detections/Apollo instance segmentation) as input and outputs the 3D bounding box of the object.

The ML-based shape estimation algorithm uses a PointNet model as a backbone to estimate the 3D bounding box of the object. The model is trained on the NuScenes dataset with vehicle labels (Car, Truck, Bus, Trailer).

The implemented model combines an STN (Spatial Transformer Network), which learns to transform the input point cloud into a canonical space, with a PointNet that predicts the object's 3D bounding box. The bounding box estimation part of the Frustum PointNets for 3D Object Detection from RGB-D Data paper was used as a reference.

The model predicts the following outputs for each object:

  • x, y, z coordinates of the object center
  • object heading angle classification result (12 bins of 30 degrees each; a decoding sketch follows this list)
  • object heading angle residuals
  • object size classification result
  • object size residuals
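
As a rough illustration of how a bin classification plus a residual can be turned back into a heading angle, here is a small sketch; the bin origin and the bin-start-plus-residual convention are assumptions, not taken from the package.

// Hypothetical decoding of the heading outputs; the exact bin convention is an assumption.
#include <cmath>
#include <cstdio>

constexpr double kPi = 3.14159265358979323846;

double decode_heading(int bin_index, double residual_rad)
{
  constexpr int kNumBins = 12;                        // 12 bins of 30 degrees each
  constexpr double kBinWidth = 2.0 * kPi / kNumBins;  // 30 degrees in radians
  const double yaw = bin_index * kBinWidth + residual_rad;  // assumed: bin start + residual
  return std::remainder(yaw, 2.0 * kPi);              // normalize to [-pi, pi]
}

int main()
{
  // e.g. bin 4 (120 degrees) plus a +5 degree residual -> about 125 degrees
  std::printf("yaw = %.1f deg\n", decode_heading(4, 5.0 * kPi / 180.0) * 180.0 / kPi);
  return 0;
}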

Training ML Based Shape Estimation Model

To train the model, you need ground truth 3D bounding box annotations. When using the mmdetection3d repository for training a 3D object detection algorithm, these ground truth annotations are saved and utilized for data augmentation. These annotations are used as an essential dataset for training the shape estimation model effectively.

Preparing the Dataset

Install MMDetection3D prerequisites

Step 1. Download and install Miniconda from the official website.

Step 2. Create a conda virtual environment and activate it

conda create --name train-shape-estimation python=3.8 -y
conda activate train-shape-estimation

Step 3. Install PyTorch

conda install pytorch torchvision -c pytorch

Install mmdetection3d

Step 1. Install MMEngine, MMCV, and MMDetection using MIM

pip install -U openmim
mim install mmengine
mim install 'mmcv>=2.0.0rc4'
mim install 'mmdet>=3.0.0rc5, <3.3.0'

Step 2. Install Autoware’s MMDetection3D fork

git clone https://github.com/autowarefoundation/mmdetection3d.git
cd mmdetection3d
pip install -v -e .

File truncated at 100 lines; see the full file.

CHANGELOG

Changelog for package autoware_shape_estimation

0.47.0 (2025-08-11)

  • style(pre-commit): autofix (#10982) Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
  • Contributors: Ryohsuke Mitsudome

0.46.0 (2025-06-20)

0.45.0 (2025-05-22)

  • Merge remote-tracking branch 'origin/main' into tmp/notbot/bump_version_base

  • chore: perception code owner update (#10645)

    • chore: update maintainers in multiple perception packages

    • Revert "chore: update maintainers in multiple perception packages" This reverts commit f2838c33d6cd82bd032039e2a12b9cb8ba6eb584.

    • chore: update maintainers in multiple perception packages

    • chore: add Kok Seang Tan as maintainer in multiple perception packages ---------

  • Contributors: Taekjin LEE, TaikiYamada4

0.44.2 (2025-06-10)

0.44.1 (2025-05-01)

0.44.0 (2025-04-18)

  • Merge remote-tracking branch 'origin/main' into humble

  • chore(perception): code owner revision (#10358)

    • feat: add Masato Saeki and Taekjin Lee as maintainer to multiple package.xml files

    • style(pre-commit): autofix --------- Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>

  • Contributors: Ryohsuke Mitsudome, Taekjin LEE

0.43.0 (2025-03-21)

  • Merge remote-tracking branch 'origin/main' into chore/bump-version-0.43
  • chore: rename from autoware.universe to autoware_universe (#10306)
  • refactor: add autoware_cuda_dependency_meta (#10073)
  • Contributors: Esteve Fernandez, Hayato Mizushima, Yutaka Kondo

0.42.0 (2025-03-03)

  • Merge remote-tracking branch 'origin/main' into tmp/bot/bump_version_base
  • feat(autoware_utils): replace autoware_universe_utils with autoware_utils (#10191)
  • Contributors: Fumiya Watanabe, 心刚

0.41.2 (2025-02-19)

  • chore: bump version to 0.41.1 (#10088)
  • Contributors: Ryohsuke Mitsudome

0.41.1 (2025-02-10)

0.41.0 (2025-01-29)

  • Merge remote-tracking branch 'origin/main' into tmp/bot/bump_version_base

  • feat(autoware_shape_estimation): tier4_debug_msgs chnaged to autoware_internal_debug_msgs in autoware_shape_estimation (#9897) feat: tier4_debug_msgs chnaged to autoware_internal_debug_msgs in files perception/autoware_shape_estimation

  • refactor(autoware_tensorrt_common): multi-TensorRT compatibility & tensorrt_common as unified lib for all perception components (#9762)

    • refactor(autoware_tensorrt_common): multi-TensorRT compatibility & tensorrt_common as unified lib for all perception components
    • style(pre-commit): autofix
    • style(autoware_tensorrt_common): linting

File truncated at 100 lines; see the full file.

Launch files

  • launch/shape_estimation.launch.xml
      • input/objects [default: labeled_clusters]
      • output/objects [default: shape_estimated_objects]
      • node_name [default: shape_estimation]
      • data_path [default: $(env HOME)/autoware_data]
      • model_path [default: $(var data_path)/shape_estimation/pointnet.onnx]
      • config_file [default: $(find-pkg-share autoware_shape_estimation)/config/shape_estimation.param.yaml]

Messages

No message files found.

Services

No service files found.

Plugins

No plugins found.

Recent questions tagged autoware_shape_estimation at Robotics Stack Exchange

No version for distro jazzy showing github. Known supported distros are highlighted in the buttons above.
Package symbol

autoware_shape_estimation package from autoware_universe repo

autoware_agnocast_wrapper autoware_auto_common autoware_boundary_departure_checker autoware_component_interface_specs_universe autoware_component_interface_tools autoware_component_interface_utils autoware_cuda_dependency_meta autoware_fake_test_node autoware_glog_component autoware_goal_distance_calculator autoware_grid_map_utils autoware_path_distance_calculator autoware_polar_grid autoware_time_utils autoware_traffic_light_recognition_marker_publisher autoware_traffic_light_utils autoware_universe_utils tier4_api_utils autoware_autonomous_emergency_braking autoware_collision_detector autoware_control_command_gate autoware_control_performance_analysis autoware_control_validator autoware_external_cmd_selector autoware_joy_controller autoware_lane_departure_checker autoware_mpc_lateral_controller autoware_obstacle_collision_checker autoware_operation_mode_transition_manager autoware_pid_longitudinal_controller autoware_predicted_path_checker autoware_pure_pursuit autoware_shift_decider autoware_smart_mpc_trajectory_follower autoware_stop_mode_operator autoware_trajectory_follower_base autoware_trajectory_follower_node autoware_vehicle_cmd_gate autoware_control_evaluator autoware_kinematic_evaluator autoware_localization_evaluator autoware_perception_online_evaluator autoware_planning_evaluator autoware_scenario_simulator_v2_adapter autoware_diagnostic_graph_test_examples tier4_autoware_api_launch tier4_control_launch tier4_localization_launch tier4_map_launch tier4_perception_launch tier4_planning_launch tier4_sensing_launch tier4_simulator_launch tier4_system_launch tier4_vehicle_launch autoware_geo_pose_projector autoware_ar_tag_based_localizer autoware_landmark_manager autoware_lidar_marker_localizer autoware_localization_error_monitor autoware_pose2twist autoware_pose_covariance_modifier autoware_pose_estimator_arbiter autoware_pose_instability_detector yabloc_common yabloc_image_processing yabloc_monitor yabloc_particle_filter yabloc_pose_initializer autoware_map_tf_generator autoware_bevfusion autoware_bytetrack autoware_cluster_merger autoware_compare_map_segmentation autoware_crosswalk_traffic_light_estimator autoware_detected_object_feature_remover autoware_detected_object_validation autoware_detection_by_tracker autoware_elevation_map_loader autoware_euclidean_cluster autoware_ground_segmentation autoware_image_projection_based_fusion autoware_lidar_apollo_instance_segmentation autoware_lidar_centerpoint autoware_lidar_transfusion autoware_map_based_prediction autoware_multi_object_tracker autoware_object_merger autoware_object_range_splitter autoware_object_sorter autoware_object_velocity_splitter autoware_occupancy_grid_map_outlier_filter autoware_probabilistic_occupancy_grid_map autoware_radar_fusion_to_detected_object autoware_radar_object_tracker autoware_radar_tracks_msgs_converter autoware_raindrop_cluster_filter autoware_shape_estimation autoware_simpl_prediction autoware_simple_object_merger autoware_tensorrt_bevdet autoware_tensorrt_classifier autoware_tensorrt_common autoware_tensorrt_plugins autoware_tensorrt_yolox autoware_tracking_object_merger autoware_traffic_light_arbiter autoware_traffic_light_category_merger autoware_traffic_light_classifier autoware_traffic_light_fine_detector autoware_traffic_light_map_based_detector autoware_traffic_light_multi_camera_fusion autoware_traffic_light_occlusion_predictor autoware_traffic_light_selector autoware_traffic_light_visualization perception_utils autoware_costmap_generator autoware_diffusion_planner 
autoware_external_velocity_limit_selector autoware_freespace_planner autoware_freespace_planning_algorithms autoware_hazard_lights_selector autoware_mission_planner_universe autoware_path_optimizer autoware_path_smoother autoware_remaining_distance_time_calculator autoware_rtc_interface autoware_scenario_selector autoware_surround_obstacle_checker autoware_behavior_path_avoidance_by_lane_change_module autoware_behavior_path_bidirectional_traffic_module autoware_behavior_path_dynamic_obstacle_avoidance_module autoware_behavior_path_external_request_lane_change_module autoware_behavior_path_goal_planner_module autoware_behavior_path_lane_change_module autoware_behavior_path_planner autoware_behavior_path_planner_common autoware_behavior_path_sampling_planner_module autoware_behavior_path_side_shift_module autoware_behavior_path_start_planner_module autoware_behavior_path_static_obstacle_avoidance_module autoware_behavior_velocity_blind_spot_module autoware_behavior_velocity_crosswalk_module autoware_behavior_velocity_detection_area_module autoware_behavior_velocity_intersection_module autoware_behavior_velocity_no_drivable_lane_module autoware_behavior_velocity_no_stopping_area_module autoware_behavior_velocity_occlusion_spot_module autoware_behavior_velocity_rtc_interface autoware_behavior_velocity_run_out_module autoware_behavior_velocity_speed_bump_module autoware_behavior_velocity_template_module autoware_behavior_velocity_traffic_light_module autoware_behavior_velocity_virtual_traffic_light_module autoware_behavior_velocity_walkway_module autoware_motion_velocity_boundary_departure_prevention_module autoware_motion_velocity_dynamic_obstacle_stop_module autoware_motion_velocity_obstacle_cruise_module autoware_motion_velocity_obstacle_slow_down_module autoware_motion_velocity_obstacle_velocity_limiter_module autoware_motion_velocity_out_of_lane_module autoware_motion_velocity_road_user_stop_module autoware_motion_velocity_run_out_module autoware_planning_validator autoware_planning_validator_intersection_collision_checker autoware_planning_validator_latency_checker autoware_planning_validator_rear_collision_checker autoware_planning_validator_test_utils autoware_planning_validator_trajectory_checker autoware_bezier_sampler autoware_frenet_planner autoware_path_sampler autoware_sampler_common autoware_cuda_pointcloud_preprocessor autoware_cuda_utils autoware_image_diagnostics autoware_image_transport_decompressor autoware_imu_corrector autoware_pcl_extensions autoware_pointcloud_preprocessor autoware_radar_objects_adapter autoware_radar_scan_to_pointcloud2 autoware_radar_static_pointcloud_filter autoware_radar_threshold_filter autoware_radar_tracks_noise_filter autoware_livox_tag_filter autoware_carla_interface autoware_dummy_perception_publisher autoware_fault_injection autoware_learning_based_vehicle_model autoware_simple_planning_simulator autoware_vehicle_door_simulator tier4_dummy_object_rviz_plugin autoware_bluetooth_monitor autoware_command_mode_decider autoware_command_mode_decider_plugins autoware_command_mode_switcher autoware_command_mode_switcher_plugins autoware_command_mode_types autoware_component_monitor autoware_component_state_monitor autoware_adapi_visualizers autoware_automatic_pose_initializer autoware_default_adapi_universe autoware_diagnostic_graph_aggregator autoware_diagnostic_graph_utils autoware_dummy_diag_publisher autoware_dummy_infrastructure autoware_duplicated_node_checker autoware_hazard_status_converter autoware_mrm_comfortable_stop_operator 
autoware_mrm_emergency_stop_operator autoware_mrm_handler autoware_pipeline_latency_monitor autoware_processing_time_checker autoware_system_monitor autoware_topic_relay_controller autoware_topic_state_monitor autoware_velodyne_monitor reaction_analyzer autoware_accel_brake_map_calibrator autoware_external_cmd_converter autoware_raw_vehicle_cmd_converter autoware_steer_offset_estimator autoware_bag_time_manager_rviz_plugin autoware_traffic_light_rviz_plugin tier4_adapi_rviz_plugin tier4_camera_view_rviz_plugin tier4_control_mode_rviz_plugin tier4_datetime_rviz_plugin tier4_perception_rviz_plugin tier4_planning_factor_rviz_plugin tier4_state_rviz_plugin tier4_system_rviz_plugin tier4_traffic_light_rviz_plugin tier4_vehicle_rviz_plugin

ROS Distro
github

Package Summary

Tags No category tags.
Version 0.47.0
License Apache License 2.0
Build type AMENT_CMAKE
Use RECOMMENDED

Repository Summary

Description
Checkout URI https://github.com/autowarefoundation/autoware_universe.git
VCS Type git
VCS Version main
Last Updated 2025-08-16
Dev Status UNKNOWN
Released UNRELEASED
Tags planner ros calibration self-driving-car autonomous-driving autonomous-vehicles ros2 3d-map autoware
Contributing Help Wanted (-)
Good First Issues (-)
Pull Requests to Review (-)

Package Description

This package implements a shape estimation algorithm as a ROS 2 node

Additional Links

No additional links.

Maintainers

  • Yukihiro Saito
  • Yoshi Ri
  • Kaan Colak
  • Taekjin Lee
  • Lei Gu

Authors

No additional authors.

autoware_shape_estimation

Purpose

This node calculates a refined object shape (bounding box, cylinder, convex hull) in which a pointcloud cluster fits according to a label.

Inner-workings / Algorithms

Fitting algorithms

  • bounding box
    • L-shape fitting: See reference below for details
    • ML based shape fitting: See ML Based Shape Fitting Implementation section below for details
  • cylinder

    cv::minEnclosingCircle

  • convex hull

    cv::convexHull

Inputs / Outputs

Input

Name Type Description
input tier4_perception_msgs::msg::DetectedObjectsWithFeature detected objects with labeled cluster

Output

Name Type Description
output/objects autoware_perception_msgs::msg::DetectedObjects detected objects with refined shape

Parameters

{{ json_to_markdown(“perception/autoware_shape_estimation/schema/shape_estimation.schema.json”) }}

ML Based Shape Implementation

The model takes a point cloud and object label(provided by camera detections/Apollo instance segmentation) as an input and outputs the 3D bounding box of the object.

ML based shape estimation algorithm uses a PointNet model as a backbone to estimate the 3D bounding box of the object. The model is trained on the NuScenes dataset with vehicle labels (Car, Truck, Bus, Trailer).

The implemented model is concatenated with STN (Spatial Transformer Network) to learn the transformation of the input point cloud to the canonical space and PointNet to predict the 3D bounding box of the object. Bounding box estimation part of Frustum PointNets for 3D Object Detection from RGB-D Data paper used as a reference.

The model predicts the following outputs for each object:

  • x,y,z coordinates of the object center
  • object heading angle classification result(Uses 12 bins for angle classification - 30 degrees each)
  • object heading angle residuals
  • object size classification result
  • object size residuals

Training ML Based Shape Estimation Model

To train the model, you need ground truth 3D bounding box annotations. When using the mmdetection3d repository for training a 3D object detection algorithm, these ground truth annotations are saved and utilized for data augmentation. These annotations are used as an essential dataset for training the shape estimation model effectively.

Preparing the Dataset

Install MMDetection3D prerequisites

Step 1. Download and install Miniconda from the official website.

Step 2. Create a conda virtual environment and activate it

conda create --name train-shape-estimation python=3.8 -y
conda activate train-shape-estimation

Step 3. Install PyTorch

conda install pytorch torchvision -c pytorch

Install mmdetection3d

Step 1. Install MMEngine, MMCV, and MMDetection using MIM

pip install -U openmim
mim install mmengine
mim install 'mmcv>=2.0.0rc4'
mim install 'mmdet>=3.0.0rc5, <3.3.0'

Step 2. Install Autoware’s MMDetection3D fork

git clone https://github.com/autowarefoundation/mmdetection3d.git
cd mmdetection3d
pip install -v -e .

File truncated at 100 lines see the full file

CHANGELOG

Changelog for package autoware_shape_estimation

0.47.0 (2025-08-11)

  • style(pre-commit): autofix (#10982) Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
  • Contributors: Ryohsuke Mitsudome

0.46.0 (2025-06-20)

0.45.0 (2025-05-22)

  • Merge remote-tracking branch 'origin/main' into tmp/notbot/bump_version_base

  • chore: perception code owner update (#10645)

    • chore: update maintainers in multiple perception packages

    * Revert "chore: update maintainers in multiple perception packages" This reverts commit f2838c33d6cd82bd032039e2a12b9cb8ba6eb584.

    • chore: update maintainers in multiple perception packages

    * chore: add Kok Seang Tan as maintainer in multiple perception packages ---------

  • Contributors: Taekjin LEE, TaikiYamada4

0.44.2 (2025-06-10)

0.44.1 (2025-05-01)

0.44.0 (2025-04-18)

  • Merge remote-tracking branch 'origin/main' into humble

  • chore(perception): code owner revision (#10358)

    • feat: add Masato Saeki and Taekjin Lee as maintainer to multiple package.xml files

    * style(pre-commit): autofix ---------Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>

  • Contributors: Ryohsuke Mitsudome, Taekjin LEE

0.43.0 (2025-03-21)

  • Merge remote-tracking branch 'origin/main' into chore/bump-version-0.43
  • chore: rename from [autoware.universe]{.title-ref} to [autoware_universe]{.title-ref} (#10306)
  • refactor: add autoware_cuda_dependency_meta (#10073)
  • Contributors: Esteve Fernandez, Hayato Mizushima, Yutaka Kondo

0.42.0 (2025-03-03)

  • Merge remote-tracking branch 'origin/main' into tmp/bot/bump_version_base
  • feat(autoware_utils): replace autoware_universe_utils with autoware_utils (#10191)
  • Contributors: Fumiya Watanabe, 心刚

0.41.2 (2025-02-19)

  • chore: bump version to 0.41.1 (#10088)
  • Contributors: Ryohsuke Mitsudome

0.41.1 (2025-02-10)

0.41.0 (2025-01-29)

  • Merge remote-tracking branch 'origin/main' into tmp/bot/bump_version_base

  • feat(autoware_shape_estimation): tier4_debug_msgs chnaged to autoware_internal_debug_msgs in autoware_shape_estimation (#9897) feat: tier4_debug_msgs chnaged to autoware_internal_debug_msgs in files perception/autoware_shape_estimation

  • refactor(autoware_tensorrt_common): multi-TensorRT compatibility & tensorrt_common as unified lib for all perception components (#9762)

    • refactor(autoware_tensorrt_common): multi-TensorRT compatibility & tensorrt_common as unified lib for all perception components
    • style(pre-commit): autofix
    • style(autoware_tensorrt_common): linting

File truncated at 100 lines see the full file

Launch files

  • launch/shape_estimation.launch.xml
      • input/objects [default: labeled_clusters]
      • output/objects [default: shape_estimated_objects]
      • node_name [default: shape_estimation]
      • data_path [default: $(env HOME)/autoware_data]
      • model_path [default: $(var data_path)/shape_estimation/pointnet.onnx]
      • config_file [default: $(find-pkg-share autoware_shape_estimation)/config/shape_estimation.param.yaml]

Messages

No message files found.

Services

No service files found

Plugins

No plugins found.

Recent questions tagged autoware_shape_estimation at Robotics Stack Exchange

No version for distro kilted showing github. Known supported distros are highlighted in the buttons above.
Package symbol

autoware_shape_estimation package from autoware_universe repo

autoware_agnocast_wrapper autoware_auto_common autoware_boundary_departure_checker autoware_component_interface_specs_universe autoware_component_interface_tools autoware_component_interface_utils autoware_cuda_dependency_meta autoware_fake_test_node autoware_glog_component autoware_goal_distance_calculator autoware_grid_map_utils autoware_path_distance_calculator autoware_polar_grid autoware_time_utils autoware_traffic_light_recognition_marker_publisher autoware_traffic_light_utils autoware_universe_utils tier4_api_utils autoware_autonomous_emergency_braking autoware_collision_detector autoware_control_command_gate autoware_control_performance_analysis autoware_control_validator autoware_external_cmd_selector autoware_joy_controller autoware_lane_departure_checker autoware_mpc_lateral_controller autoware_obstacle_collision_checker autoware_operation_mode_transition_manager autoware_pid_longitudinal_controller autoware_predicted_path_checker autoware_pure_pursuit autoware_shift_decider autoware_smart_mpc_trajectory_follower autoware_stop_mode_operator autoware_trajectory_follower_base autoware_trajectory_follower_node autoware_vehicle_cmd_gate autoware_control_evaluator autoware_kinematic_evaluator autoware_localization_evaluator autoware_perception_online_evaluator autoware_planning_evaluator autoware_scenario_simulator_v2_adapter autoware_diagnostic_graph_test_examples tier4_autoware_api_launch tier4_control_launch tier4_localization_launch tier4_map_launch tier4_perception_launch tier4_planning_launch tier4_sensing_launch tier4_simulator_launch tier4_system_launch tier4_vehicle_launch autoware_geo_pose_projector autoware_ar_tag_based_localizer autoware_landmark_manager autoware_lidar_marker_localizer autoware_localization_error_monitor autoware_pose2twist autoware_pose_covariance_modifier autoware_pose_estimator_arbiter autoware_pose_instability_detector yabloc_common yabloc_image_processing yabloc_monitor yabloc_particle_filter yabloc_pose_initializer autoware_map_tf_generator autoware_bevfusion autoware_bytetrack autoware_cluster_merger autoware_compare_map_segmentation autoware_crosswalk_traffic_light_estimator autoware_detected_object_feature_remover autoware_detected_object_validation autoware_detection_by_tracker autoware_elevation_map_loader autoware_euclidean_cluster autoware_ground_segmentation autoware_image_projection_based_fusion autoware_lidar_apollo_instance_segmentation autoware_lidar_centerpoint autoware_lidar_transfusion autoware_map_based_prediction autoware_multi_object_tracker autoware_object_merger autoware_object_range_splitter autoware_object_sorter autoware_object_velocity_splitter autoware_occupancy_grid_map_outlier_filter autoware_probabilistic_occupancy_grid_map autoware_radar_fusion_to_detected_object autoware_radar_object_tracker autoware_radar_tracks_msgs_converter autoware_raindrop_cluster_filter autoware_shape_estimation autoware_simpl_prediction autoware_simple_object_merger autoware_tensorrt_bevdet autoware_tensorrt_classifier autoware_tensorrt_common autoware_tensorrt_plugins autoware_tensorrt_yolox autoware_tracking_object_merger autoware_traffic_light_arbiter autoware_traffic_light_category_merger autoware_traffic_light_classifier autoware_traffic_light_fine_detector autoware_traffic_light_map_based_detector autoware_traffic_light_multi_camera_fusion autoware_traffic_light_occlusion_predictor autoware_traffic_light_selector autoware_traffic_light_visualization perception_utils autoware_costmap_generator autoware_diffusion_planner 
autoware_external_velocity_limit_selector autoware_freespace_planner autoware_freespace_planning_algorithms autoware_hazard_lights_selector autoware_mission_planner_universe autoware_path_optimizer autoware_path_smoother autoware_remaining_distance_time_calculator autoware_rtc_interface autoware_scenario_selector autoware_surround_obstacle_checker autoware_behavior_path_avoidance_by_lane_change_module autoware_behavior_path_bidirectional_traffic_module autoware_behavior_path_dynamic_obstacle_avoidance_module autoware_behavior_path_external_request_lane_change_module autoware_behavior_path_goal_planner_module autoware_behavior_path_lane_change_module autoware_behavior_path_planner autoware_behavior_path_planner_common autoware_behavior_path_sampling_planner_module autoware_behavior_path_side_shift_module autoware_behavior_path_start_planner_module autoware_behavior_path_static_obstacle_avoidance_module autoware_behavior_velocity_blind_spot_module autoware_behavior_velocity_crosswalk_module autoware_behavior_velocity_detection_area_module autoware_behavior_velocity_intersection_module autoware_behavior_velocity_no_drivable_lane_module autoware_behavior_velocity_no_stopping_area_module autoware_behavior_velocity_occlusion_spot_module autoware_behavior_velocity_rtc_interface autoware_behavior_velocity_run_out_module autoware_behavior_velocity_speed_bump_module autoware_behavior_velocity_template_module autoware_behavior_velocity_traffic_light_module autoware_behavior_velocity_virtual_traffic_light_module autoware_behavior_velocity_walkway_module autoware_motion_velocity_boundary_departure_prevention_module autoware_motion_velocity_dynamic_obstacle_stop_module autoware_motion_velocity_obstacle_cruise_module autoware_motion_velocity_obstacle_slow_down_module autoware_motion_velocity_obstacle_velocity_limiter_module autoware_motion_velocity_out_of_lane_module autoware_motion_velocity_road_user_stop_module autoware_motion_velocity_run_out_module autoware_planning_validator autoware_planning_validator_intersection_collision_checker autoware_planning_validator_latency_checker autoware_planning_validator_rear_collision_checker autoware_planning_validator_test_utils autoware_planning_validator_trajectory_checker autoware_bezier_sampler autoware_frenet_planner autoware_path_sampler autoware_sampler_common autoware_cuda_pointcloud_preprocessor autoware_cuda_utils autoware_image_diagnostics autoware_image_transport_decompressor autoware_imu_corrector autoware_pcl_extensions autoware_pointcloud_preprocessor autoware_radar_objects_adapter autoware_radar_scan_to_pointcloud2 autoware_radar_static_pointcloud_filter autoware_radar_threshold_filter autoware_radar_tracks_noise_filter autoware_livox_tag_filter autoware_carla_interface autoware_dummy_perception_publisher autoware_fault_injection autoware_learning_based_vehicle_model autoware_simple_planning_simulator autoware_vehicle_door_simulator tier4_dummy_object_rviz_plugin autoware_bluetooth_monitor autoware_command_mode_decider autoware_command_mode_decider_plugins autoware_command_mode_switcher autoware_command_mode_switcher_plugins autoware_command_mode_types autoware_component_monitor autoware_component_state_monitor autoware_adapi_visualizers autoware_automatic_pose_initializer autoware_default_adapi_universe autoware_diagnostic_graph_aggregator autoware_diagnostic_graph_utils autoware_dummy_diag_publisher autoware_dummy_infrastructure autoware_duplicated_node_checker autoware_hazard_status_converter autoware_mrm_comfortable_stop_operator 
autoware_mrm_emergency_stop_operator autoware_mrm_handler autoware_pipeline_latency_monitor autoware_processing_time_checker autoware_system_monitor autoware_topic_relay_controller autoware_topic_state_monitor autoware_velodyne_monitor reaction_analyzer autoware_accel_brake_map_calibrator autoware_external_cmd_converter autoware_raw_vehicle_cmd_converter autoware_steer_offset_estimator autoware_bag_time_manager_rviz_plugin autoware_traffic_light_rviz_plugin tier4_adapi_rviz_plugin tier4_camera_view_rviz_plugin tier4_control_mode_rviz_plugin tier4_datetime_rviz_plugin tier4_perception_rviz_plugin tier4_planning_factor_rviz_plugin tier4_state_rviz_plugin tier4_system_rviz_plugin tier4_traffic_light_rviz_plugin tier4_vehicle_rviz_plugin

ROS Distro
github

Package Summary

Tags No category tags.
Version 0.47.0
License Apache License 2.0
Build type AMENT_CMAKE
Use RECOMMENDED

Repository Summary

Description
Checkout URI https://github.com/autowarefoundation/autoware_universe.git
VCS Type git
VCS Version main
Last Updated 2025-08-16
Dev Status UNKNOWN
Released UNRELEASED
Tags planner ros calibration self-driving-car autonomous-driving autonomous-vehicles ros2 3d-map autoware
Contributing Help Wanted (-)
Good First Issues (-)
Pull Requests to Review (-)

Package Description

This package implements a shape estimation algorithm as a ROS 2 node

Additional Links

No additional links.

Maintainers

  • Yukihiro Saito
  • Yoshi Ri
  • Kaan Colak
  • Taekjin Lee
  • Lei Gu

Authors

No additional authors.

autoware_shape_estimation

Purpose

This node calculates a refined object shape (bounding box, cylinder, convex hull) in which a pointcloud cluster fits according to a label.

Inner-workings / Algorithms

Fitting algorithms

  • bounding box
    • L-shape fitting: See reference below for details
    • ML based shape fitting: See ML Based Shape Fitting Implementation section below for details
  • cylinder

    cv::minEnclosingCircle

  • convex hull

    cv::convexHull

Inputs / Outputs

Input

Name Type Description
input tier4_perception_msgs::msg::DetectedObjectsWithFeature detected objects with labeled cluster

Output

Name Type Description
output/objects autoware_perception_msgs::msg::DetectedObjects detected objects with refined shape

Parameters

{{ json_to_markdown(“perception/autoware_shape_estimation/schema/shape_estimation.schema.json”) }}

ML Based Shape Implementation

The model takes a point cloud and object label(provided by camera detections/Apollo instance segmentation) as an input and outputs the 3D bounding box of the object.

ML based shape estimation algorithm uses a PointNet model as a backbone to estimate the 3D bounding box of the object. The model is trained on the NuScenes dataset with vehicle labels (Car, Truck, Bus, Trailer).

The implemented model is concatenated with STN (Spatial Transformer Network) to learn the transformation of the input point cloud to the canonical space and PointNet to predict the 3D bounding box of the object. Bounding box estimation part of Frustum PointNets for 3D Object Detection from RGB-D Data paper used as a reference.

The model predicts the following outputs for each object:

  • x,y,z coordinates of the object center
  • object heading angle classification result(Uses 12 bins for angle classification - 30 degrees each)
  • object heading angle residuals
  • object size classification result
  • object size residuals

Training ML Based Shape Estimation Model

To train the model, you need ground truth 3D bounding box annotations. When using the mmdetection3d repository for training a 3D object detection algorithm, these ground truth annotations are saved and utilized for data augmentation. These annotations are used as an essential dataset for training the shape estimation model effectively.

Preparing the Dataset

Install MMDetection3D prerequisites

Step 1. Download and install Miniconda from the official website.

Step 2. Create a conda virtual environment and activate it

conda create --name train-shape-estimation python=3.8 -y
conda activate train-shape-estimation

Step 3. Install PyTorch

conda install pytorch torchvision -c pytorch

Install mmdetection3d

Step 1. Install MMEngine, MMCV, and MMDetection using MIM

pip install -U openmim
mim install mmengine
mim install 'mmcv>=2.0.0rc4'
mim install 'mmdet>=3.0.0rc5, <3.3.0'

Step 2. Install Autoware’s MMDetection3D fork

git clone https://github.com/autowarefoundation/mmdetection3d.git
cd mmdetection3d
pip install -v -e .

File truncated at 100 lines see the full file

CHANGELOG

Changelog for package autoware_shape_estimation

0.47.0 (2025-08-11)

  • style(pre-commit): autofix (#10982) Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
  • Contributors: Ryohsuke Mitsudome

0.46.0 (2025-06-20)

0.45.0 (2025-05-22)

  • Merge remote-tracking branch 'origin/main' into tmp/notbot/bump_version_base

  • chore: perception code owner update (#10645)

    • chore: update maintainers in multiple perception packages

    * Revert "chore: update maintainers in multiple perception packages" This reverts commit f2838c33d6cd82bd032039e2a12b9cb8ba6eb584.

    • chore: update maintainers in multiple perception packages

    * chore: add Kok Seang Tan as maintainer in multiple perception packages ---------

  • Contributors: Taekjin LEE, TaikiYamada4

0.44.2 (2025-06-10)

0.44.1 (2025-05-01)

0.44.0 (2025-04-18)

  • Merge remote-tracking branch 'origin/main' into humble

  • chore(perception): code owner revision (#10358)

    • feat: add Masato Saeki and Taekjin Lee as maintainer to multiple package.xml files

    * style(pre-commit): autofix ---------Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>

  • Contributors: Ryohsuke Mitsudome, Taekjin LEE

0.43.0 (2025-03-21)

  • Merge remote-tracking branch 'origin/main' into chore/bump-version-0.43
  • chore: rename from [autoware.universe]{.title-ref} to [autoware_universe]{.title-ref} (#10306)
  • refactor: add autoware_cuda_dependency_meta (#10073)
  • Contributors: Esteve Fernandez, Hayato Mizushima, Yutaka Kondo

0.42.0 (2025-03-03)

  • Merge remote-tracking branch 'origin/main' into tmp/bot/bump_version_base
  • feat(autoware_utils): replace autoware_universe_utils with autoware_utils (#10191)
  • Contributors: Fumiya Watanabe, 心刚

0.41.2 (2025-02-19)

  • chore: bump version to 0.41.1 (#10088)
  • Contributors: Ryohsuke Mitsudome

0.41.1 (2025-02-10)

0.41.0 (2025-01-29)

  • Merge remote-tracking branch 'origin/main' into tmp/bot/bump_version_base

  • feat(autoware_shape_estimation): tier4_debug_msgs chnaged to autoware_internal_debug_msgs in autoware_shape_estimation (#9897) feat: tier4_debug_msgs chnaged to autoware_internal_debug_msgs in files perception/autoware_shape_estimation

  • refactor(autoware_tensorrt_common): multi-TensorRT compatibility & tensorrt_common as unified lib for all perception components (#9762)

    • refactor(autoware_tensorrt_common): multi-TensorRT compatibility & tensorrt_common as unified lib for all perception components
    • style(pre-commit): autofix
    • style(autoware_tensorrt_common): linting

File truncated at 100 lines see the full file

Launch files

  • launch/shape_estimation.launch.xml
      • input/objects [default: labeled_clusters]
      • output/objects [default: shape_estimated_objects]
      • node_name [default: shape_estimation]
      • data_path [default: $(env HOME)/autoware_data]
      • model_path [default: $(var data_path)/shape_estimation/pointnet.onnx]
      • config_file [default: $(find-pkg-share autoware_shape_estimation)/config/shape_estimation.param.yaml]

Messages

No message files found.

Services

No service files found

Plugins

No plugins found.

Recent questions tagged autoware_shape_estimation at Robotics Stack Exchange

No version for distro rolling showing github. Known supported distros are highlighted in the buttons above.
Package symbol

autoware_shape_estimation package from autoware_universe repo

autoware_agnocast_wrapper autoware_auto_common autoware_boundary_departure_checker autoware_component_interface_specs_universe autoware_component_interface_tools autoware_component_interface_utils autoware_cuda_dependency_meta autoware_fake_test_node autoware_glog_component autoware_goal_distance_calculator autoware_grid_map_utils autoware_path_distance_calculator autoware_polar_grid autoware_time_utils autoware_traffic_light_recognition_marker_publisher autoware_traffic_light_utils autoware_universe_utils tier4_api_utils autoware_autonomous_emergency_braking autoware_collision_detector autoware_control_command_gate autoware_control_performance_analysis autoware_control_validator autoware_external_cmd_selector autoware_joy_controller autoware_lane_departure_checker autoware_mpc_lateral_controller autoware_obstacle_collision_checker autoware_operation_mode_transition_manager autoware_pid_longitudinal_controller autoware_predicted_path_checker autoware_pure_pursuit autoware_shift_decider autoware_smart_mpc_trajectory_follower autoware_stop_mode_operator autoware_trajectory_follower_base autoware_trajectory_follower_node autoware_vehicle_cmd_gate autoware_control_evaluator autoware_kinematic_evaluator autoware_localization_evaluator autoware_perception_online_evaluator autoware_planning_evaluator autoware_scenario_simulator_v2_adapter autoware_diagnostic_graph_test_examples tier4_autoware_api_launch tier4_control_launch tier4_localization_launch tier4_map_launch tier4_perception_launch tier4_planning_launch tier4_sensing_launch tier4_simulator_launch tier4_system_launch tier4_vehicle_launch autoware_geo_pose_projector autoware_ar_tag_based_localizer autoware_landmark_manager autoware_lidar_marker_localizer autoware_localization_error_monitor autoware_pose2twist autoware_pose_covariance_modifier autoware_pose_estimator_arbiter autoware_pose_instability_detector yabloc_common yabloc_image_processing yabloc_monitor yabloc_particle_filter yabloc_pose_initializer autoware_map_tf_generator autoware_bevfusion autoware_bytetrack autoware_cluster_merger autoware_compare_map_segmentation autoware_crosswalk_traffic_light_estimator autoware_detected_object_feature_remover autoware_detected_object_validation autoware_detection_by_tracker autoware_elevation_map_loader autoware_euclidean_cluster autoware_ground_segmentation autoware_image_projection_based_fusion autoware_lidar_apollo_instance_segmentation autoware_lidar_centerpoint autoware_lidar_transfusion autoware_map_based_prediction autoware_multi_object_tracker autoware_object_merger autoware_object_range_splitter autoware_object_sorter autoware_object_velocity_splitter autoware_occupancy_grid_map_outlier_filter autoware_probabilistic_occupancy_grid_map autoware_radar_fusion_to_detected_object autoware_radar_object_tracker autoware_radar_tracks_msgs_converter autoware_raindrop_cluster_filter autoware_shape_estimation autoware_simpl_prediction autoware_simple_object_merger autoware_tensorrt_bevdet autoware_tensorrt_classifier autoware_tensorrt_common autoware_tensorrt_plugins autoware_tensorrt_yolox autoware_tracking_object_merger autoware_traffic_light_arbiter autoware_traffic_light_category_merger autoware_traffic_light_classifier autoware_traffic_light_fine_detector autoware_traffic_light_map_based_detector autoware_traffic_light_multi_camera_fusion autoware_traffic_light_occlusion_predictor autoware_traffic_light_selector autoware_traffic_light_visualization perception_utils autoware_costmap_generator autoware_diffusion_planner 
autoware_external_velocity_limit_selector autoware_freespace_planner autoware_freespace_planning_algorithms autoware_hazard_lights_selector autoware_mission_planner_universe autoware_path_optimizer autoware_path_smoother autoware_remaining_distance_time_calculator autoware_rtc_interface autoware_scenario_selector autoware_surround_obstacle_checker autoware_behavior_path_avoidance_by_lane_change_module autoware_behavior_path_bidirectional_traffic_module autoware_behavior_path_dynamic_obstacle_avoidance_module autoware_behavior_path_external_request_lane_change_module autoware_behavior_path_goal_planner_module autoware_behavior_path_lane_change_module autoware_behavior_path_planner autoware_behavior_path_planner_common autoware_behavior_path_sampling_planner_module autoware_behavior_path_side_shift_module autoware_behavior_path_start_planner_module autoware_behavior_path_static_obstacle_avoidance_module autoware_behavior_velocity_blind_spot_module autoware_behavior_velocity_crosswalk_module autoware_behavior_velocity_detection_area_module autoware_behavior_velocity_intersection_module autoware_behavior_velocity_no_drivable_lane_module autoware_behavior_velocity_no_stopping_area_module autoware_behavior_velocity_occlusion_spot_module autoware_behavior_velocity_rtc_interface autoware_behavior_velocity_run_out_module autoware_behavior_velocity_speed_bump_module autoware_behavior_velocity_template_module autoware_behavior_velocity_traffic_light_module autoware_behavior_velocity_virtual_traffic_light_module autoware_behavior_velocity_walkway_module autoware_motion_velocity_boundary_departure_prevention_module autoware_motion_velocity_dynamic_obstacle_stop_module autoware_motion_velocity_obstacle_cruise_module autoware_motion_velocity_obstacle_slow_down_module autoware_motion_velocity_obstacle_velocity_limiter_module autoware_motion_velocity_out_of_lane_module autoware_motion_velocity_road_user_stop_module autoware_motion_velocity_run_out_module autoware_planning_validator autoware_planning_validator_intersection_collision_checker autoware_planning_validator_latency_checker autoware_planning_validator_rear_collision_checker autoware_planning_validator_test_utils autoware_planning_validator_trajectory_checker autoware_bezier_sampler autoware_frenet_planner autoware_path_sampler autoware_sampler_common autoware_cuda_pointcloud_preprocessor autoware_cuda_utils autoware_image_diagnostics autoware_image_transport_decompressor autoware_imu_corrector autoware_pcl_extensions autoware_pointcloud_preprocessor autoware_radar_objects_adapter autoware_radar_scan_to_pointcloud2 autoware_radar_static_pointcloud_filter autoware_radar_threshold_filter autoware_radar_tracks_noise_filter autoware_livox_tag_filter autoware_carla_interface autoware_dummy_perception_publisher autoware_fault_injection autoware_learning_based_vehicle_model autoware_simple_planning_simulator autoware_vehicle_door_simulator tier4_dummy_object_rviz_plugin autoware_bluetooth_monitor autoware_command_mode_decider autoware_command_mode_decider_plugins autoware_command_mode_switcher autoware_command_mode_switcher_plugins autoware_command_mode_types autoware_component_monitor autoware_component_state_monitor autoware_adapi_visualizers autoware_automatic_pose_initializer autoware_default_adapi_universe autoware_diagnostic_graph_aggregator autoware_diagnostic_graph_utils autoware_dummy_diag_publisher autoware_dummy_infrastructure autoware_duplicated_node_checker autoware_hazard_status_converter autoware_mrm_comfortable_stop_operator 
autoware_mrm_emergency_stop_operator autoware_mrm_handler autoware_pipeline_latency_monitor autoware_processing_time_checker autoware_system_monitor autoware_topic_relay_controller autoware_topic_state_monitor autoware_velodyne_monitor reaction_analyzer autoware_accel_brake_map_calibrator autoware_external_cmd_converter autoware_raw_vehicle_cmd_converter autoware_steer_offset_estimator autoware_bag_time_manager_rviz_plugin autoware_traffic_light_rviz_plugin tier4_adapi_rviz_plugin tier4_camera_view_rviz_plugin tier4_control_mode_rviz_plugin tier4_datetime_rviz_plugin tier4_perception_rviz_plugin tier4_planning_factor_rviz_plugin tier4_state_rviz_plugin tier4_system_rviz_plugin tier4_traffic_light_rviz_plugin tier4_vehicle_rviz_plugin

ROS Distro
github

Package Summary

Tags No category tags.
Version 0.47.0
License Apache License 2.0
Build type AMENT_CMAKE
Use RECOMMENDED

Repository Summary

Description
Checkout URI https://github.com/autowarefoundation/autoware_universe.git
VCS Type git
VCS Version main
Last Updated 2025-08-16
Dev Status UNKNOWN
Released UNRELEASED
Tags planner ros calibration self-driving-car autonomous-driving autonomous-vehicles ros2 3d-map autoware
Contributing Help Wanted (-)
Good First Issues (-)
Pull Requests to Review (-)

Package Description

This package implements a shape estimation algorithm as a ROS 2 node

Additional Links

No additional links.

Maintainers

  • Yukihiro Saito
  • Yoshi Ri
  • Kaan Colak
  • Taekjin Lee
  • Lei Gu

Authors

No additional authors.

autoware_shape_estimation

Purpose

This node calculates a refined object shape (bounding box, cylinder, convex hull) in which a pointcloud cluster fits according to a label.

Inner-workings / Algorithms

Fitting algorithms

  • bounding box
    • L-shape fitting: See reference below for details
    • ML based shape fitting: See ML Based Shape Fitting Implementation section below for details
  • cylinder

    cv::minEnclosingCircle

  • convex hull

    cv::convexHull

Inputs / Outputs

Input

Name Type Description
input tier4_perception_msgs::msg::DetectedObjectsWithFeature detected objects with labeled cluster

Output

Name Type Description
output/objects autoware_perception_msgs::msg::DetectedObjects detected objects with refined shape

Parameters

{{ json_to_markdown(“perception/autoware_shape_estimation/schema/shape_estimation.schema.json”) }}

ML Based Shape Implementation

The model takes a point cloud and object label(provided by camera detections/Apollo instance segmentation) as an input and outputs the 3D bounding box of the object.

ML based shape estimation algorithm uses a PointNet model as a backbone to estimate the 3D bounding box of the object. The model is trained on the NuScenes dataset with vehicle labels (Car, Truck, Bus, Trailer).

The implemented model is concatenated with STN (Spatial Transformer Network) to learn the transformation of the input point cloud to the canonical space and PointNet to predict the 3D bounding box of the object. Bounding box estimation part of Frustum PointNets for 3D Object Detection from RGB-D Data paper used as a reference.

The model predicts the following outputs for each object:

  • x,y,z coordinates of the object center
  • object heading angle classification result(Uses 12 bins for angle classification - 30 degrees each)
  • object heading angle residuals
  • object size classification result
  • object size residuals

Training ML Based Shape Estimation Model

To train the model, you need ground truth 3D bounding box annotations. When using the mmdetection3d repository for training a 3D object detection algorithm, these ground truth annotations are saved and utilized for data augmentation. These annotations are used as an essential dataset for training the shape estimation model effectively.

Preparing the Dataset

Install MMDetection3D prerequisites

Step 1. Download and install Miniconda from the official website.

Step 2. Create a conda virtual environment and activate it

conda create --name train-shape-estimation python=3.8 -y
conda activate train-shape-estimation

Step 3. Install PyTorch

conda install pytorch torchvision -c pytorch

Install mmdetection3d

Step 1. Install MMEngine, MMCV, and MMDetection using MIM

pip install -U openmim
mim install mmengine
mim install 'mmcv>=2.0.0rc4'
mim install 'mmdet>=3.0.0rc5, <3.3.0'

Step 2. Install Autoware’s MMDetection3D fork

git clone https://github.com/autowarefoundation/mmdetection3d.git
cd mmdetection3d
pip install -v -e .

File truncated at 100 lines; see the full file.

CHANGELOG

Changelog for package autoware_shape_estimation

0.47.0 (2025-08-11)

  • style(pre-commit): autofix (#10982) Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
  • Contributors: Ryohsuke Mitsudome

0.46.0 (2025-06-20)

0.45.0 (2025-05-22)

  • Merge remote-tracking branch 'origin/main' into tmp/notbot/bump_version_base

  • chore: perception code owner update (#10645)

    • chore: update maintainers in multiple perception packages

    * Revert "chore: update maintainers in multiple perception packages" This reverts commit f2838c33d6cd82bd032039e2a12b9cb8ba6eb584.

    • chore: update maintainers in multiple perception packages

    * chore: add Kok Seang Tan as maintainer in multiple perception packages ---------

  • Contributors: Taekjin LEE, TaikiYamada4

0.44.2 (2025-06-10)

0.44.1 (2025-05-01)

0.44.0 (2025-04-18)

  • Merge remote-tracking branch 'origin/main' into humble

  • chore(perception): code owner revision (#10358)

    • feat: add Masato Saeki and Taekjin Lee as maintainer to multiple package.xml files

    * style(pre-commit): autofix ---------Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>

  • Contributors: Ryohsuke Mitsudome, Taekjin LEE

0.43.0 (2025-03-21)

  • Merge remote-tracking branch 'origin/main' into chore/bump-version-0.43
  • chore: rename from [autoware.universe]{.title-ref} to [autoware_universe]{.title-ref} (#10306)
  • refactor: add autoware_cuda_dependency_meta (#10073)
  • Contributors: Esteve Fernandez, Hayato Mizushima, Yutaka Kondo

0.42.0 (2025-03-03)

  • Merge remote-tracking branch 'origin/main' into tmp/bot/bump_version_base
  • feat(autoware_utils): replace autoware_universe_utils with autoware_utils (#10191)
  • Contributors: Fumiya Watanabe, 心刚

0.41.2 (2025-02-19)

  • chore: bump version to 0.41.1 (#10088)
  • Contributors: Ryohsuke Mitsudome

0.41.1 (2025-02-10)

0.41.0 (2025-01-29)

  • Merge remote-tracking branch 'origin/main' into tmp/bot/bump_version_base

  • feat(autoware_shape_estimation): tier4_debug_msgs chnaged to autoware_internal_debug_msgs in autoware_shape_estimation (#9897) feat: tier4_debug_msgs chnaged to autoware_internal_debug_msgs in files perception/autoware_shape_estimation

  • refactor(autoware_tensorrt_common): multi-TensorRT compatibility & tensorrt_common as unified lib for all perception components (#9762)

    • refactor(autoware_tensorrt_common): multi-TensorRT compatibility & tensorrt_common as unified lib for all perception components
    • style(pre-commit): autofix
    • style(autoware_tensorrt_common): linting

File truncated at 100 lines; see the full file.

Launch files

  • launch/shape_estimation.launch.xml
      • input/objects [default: labeled_clusters]
      • output/objects [default: shape_estimated_objects]
      • node_name [default: shape_estimation]
      • data_path [default: $(env HOME)/autoware_data]
      • model_path [default: $(var data_path)/shape_estimation/pointnet.onnx]
      • config_file [default: $(find-pkg-share autoware_shape_estimation)/config/shape_estimation.param.yaml]
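
As an illustration, the launch file above could be included from a user-written Python launch file with some of its arguments overridden; the wrapper below is hypothetical and not part of the package, and the argument values are examples only.

# Hypothetical wrapper launch file including shape_estimation.launch.xml
# with a couple of its declared arguments overridden.
from launch import LaunchDescription
from launch.actions import IncludeLaunchDescription
from launch.launch_description_sources import AnyLaunchDescriptionSource
from launch.substitutions import PathJoinSubstitution
from launch_ros.substitutions import FindPackageShare

def generate_launch_description():
    shape_estimation_launch = IncludeLaunchDescription(
        AnyLaunchDescriptionSource(
            PathJoinSubstitution([
                FindPackageShare("autoware_shape_estimation"),
                "launch",
                "shape_estimation.launch.xml",
            ])
        ),
        launch_arguments={
            "input/objects": "labeled_clusters",            # example topic name
            "output/objects": "shape_estimated_objects",    # example topic name
        }.items(),
    )
    return LaunchDescription([shape_estimation_launch])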

Messages

No message files found.

Services

No service files found.

Plugins

No plugins found.

Recent questions tagged autoware_shape_estimation at Robotics Stack Exchange

Package symbol

autoware_shape_estimation package from autoware_universe repo

autoware_agnocast_wrapper autoware_auto_common autoware_boundary_departure_checker autoware_component_interface_specs_universe autoware_component_interface_tools autoware_component_interface_utils autoware_cuda_dependency_meta autoware_fake_test_node autoware_glog_component autoware_goal_distance_calculator autoware_grid_map_utils autoware_path_distance_calculator autoware_polar_grid autoware_time_utils autoware_traffic_light_recognition_marker_publisher autoware_traffic_light_utils autoware_universe_utils tier4_api_utils autoware_autonomous_emergency_braking autoware_collision_detector autoware_control_command_gate autoware_control_performance_analysis autoware_control_validator autoware_external_cmd_selector autoware_joy_controller autoware_lane_departure_checker autoware_mpc_lateral_controller autoware_obstacle_collision_checker autoware_operation_mode_transition_manager autoware_pid_longitudinal_controller autoware_predicted_path_checker autoware_pure_pursuit autoware_shift_decider autoware_smart_mpc_trajectory_follower autoware_stop_mode_operator autoware_trajectory_follower_base autoware_trajectory_follower_node autoware_vehicle_cmd_gate autoware_control_evaluator autoware_kinematic_evaluator autoware_localization_evaluator autoware_perception_online_evaluator autoware_planning_evaluator autoware_scenario_simulator_v2_adapter autoware_diagnostic_graph_test_examples tier4_autoware_api_launch tier4_control_launch tier4_localization_launch tier4_map_launch tier4_perception_launch tier4_planning_launch tier4_sensing_launch tier4_simulator_launch tier4_system_launch tier4_vehicle_launch autoware_geo_pose_projector autoware_ar_tag_based_localizer autoware_landmark_manager autoware_lidar_marker_localizer autoware_localization_error_monitor autoware_pose2twist autoware_pose_covariance_modifier autoware_pose_estimator_arbiter autoware_pose_instability_detector yabloc_common yabloc_image_processing yabloc_monitor yabloc_particle_filter yabloc_pose_initializer autoware_map_tf_generator autoware_bevfusion autoware_bytetrack autoware_cluster_merger autoware_compare_map_segmentation autoware_crosswalk_traffic_light_estimator autoware_detected_object_feature_remover autoware_detected_object_validation autoware_detection_by_tracker autoware_elevation_map_loader autoware_euclidean_cluster autoware_ground_segmentation autoware_image_projection_based_fusion autoware_lidar_apollo_instance_segmentation autoware_lidar_centerpoint autoware_lidar_transfusion autoware_map_based_prediction autoware_multi_object_tracker autoware_object_merger autoware_object_range_splitter autoware_object_sorter autoware_object_velocity_splitter autoware_occupancy_grid_map_outlier_filter autoware_probabilistic_occupancy_grid_map autoware_radar_fusion_to_detected_object autoware_radar_object_tracker autoware_radar_tracks_msgs_converter autoware_raindrop_cluster_filter autoware_shape_estimation autoware_simpl_prediction autoware_simple_object_merger autoware_tensorrt_bevdet autoware_tensorrt_classifier autoware_tensorrt_common autoware_tensorrt_plugins autoware_tensorrt_yolox autoware_tracking_object_merger autoware_traffic_light_arbiter autoware_traffic_light_category_merger autoware_traffic_light_classifier autoware_traffic_light_fine_detector autoware_traffic_light_map_based_detector autoware_traffic_light_multi_camera_fusion autoware_traffic_light_occlusion_predictor autoware_traffic_light_selector autoware_traffic_light_visualization perception_utils autoware_costmap_generator autoware_diffusion_planner 
autoware_external_velocity_limit_selector autoware_freespace_planner autoware_freespace_planning_algorithms autoware_hazard_lights_selector autoware_mission_planner_universe autoware_path_optimizer autoware_path_smoother autoware_remaining_distance_time_calculator autoware_rtc_interface autoware_scenario_selector autoware_surround_obstacle_checker autoware_behavior_path_avoidance_by_lane_change_module autoware_behavior_path_bidirectional_traffic_module autoware_behavior_path_dynamic_obstacle_avoidance_module autoware_behavior_path_external_request_lane_change_module autoware_behavior_path_goal_planner_module autoware_behavior_path_lane_change_module autoware_behavior_path_planner autoware_behavior_path_planner_common autoware_behavior_path_sampling_planner_module autoware_behavior_path_side_shift_module autoware_behavior_path_start_planner_module autoware_behavior_path_static_obstacle_avoidance_module autoware_behavior_velocity_blind_spot_module autoware_behavior_velocity_crosswalk_module autoware_behavior_velocity_detection_area_module autoware_behavior_velocity_intersection_module autoware_behavior_velocity_no_drivable_lane_module autoware_behavior_velocity_no_stopping_area_module autoware_behavior_velocity_occlusion_spot_module autoware_behavior_velocity_rtc_interface autoware_behavior_velocity_run_out_module autoware_behavior_velocity_speed_bump_module autoware_behavior_velocity_template_module autoware_behavior_velocity_traffic_light_module autoware_behavior_velocity_virtual_traffic_light_module autoware_behavior_velocity_walkway_module autoware_motion_velocity_boundary_departure_prevention_module autoware_motion_velocity_dynamic_obstacle_stop_module autoware_motion_velocity_obstacle_cruise_module autoware_motion_velocity_obstacle_slow_down_module autoware_motion_velocity_obstacle_velocity_limiter_module autoware_motion_velocity_out_of_lane_module autoware_motion_velocity_road_user_stop_module autoware_motion_velocity_run_out_module autoware_planning_validator autoware_planning_validator_intersection_collision_checker autoware_planning_validator_latency_checker autoware_planning_validator_rear_collision_checker autoware_planning_validator_test_utils autoware_planning_validator_trajectory_checker autoware_bezier_sampler autoware_frenet_planner autoware_path_sampler autoware_sampler_common autoware_cuda_pointcloud_preprocessor autoware_cuda_utils autoware_image_diagnostics autoware_image_transport_decompressor autoware_imu_corrector autoware_pcl_extensions autoware_pointcloud_preprocessor autoware_radar_objects_adapter autoware_radar_scan_to_pointcloud2 autoware_radar_static_pointcloud_filter autoware_radar_threshold_filter autoware_radar_tracks_noise_filter autoware_livox_tag_filter autoware_carla_interface autoware_dummy_perception_publisher autoware_fault_injection autoware_learning_based_vehicle_model autoware_simple_planning_simulator autoware_vehicle_door_simulator tier4_dummy_object_rviz_plugin autoware_bluetooth_monitor autoware_command_mode_decider autoware_command_mode_decider_plugins autoware_command_mode_switcher autoware_command_mode_switcher_plugins autoware_command_mode_types autoware_component_monitor autoware_component_state_monitor autoware_adapi_visualizers autoware_automatic_pose_initializer autoware_default_adapi_universe autoware_diagnostic_graph_aggregator autoware_diagnostic_graph_utils autoware_dummy_diag_publisher autoware_dummy_infrastructure autoware_duplicated_node_checker autoware_hazard_status_converter autoware_mrm_comfortable_stop_operator 
autoware_mrm_emergency_stop_operator autoware_mrm_handler autoware_pipeline_latency_monitor autoware_processing_time_checker autoware_system_monitor autoware_topic_relay_controller autoware_topic_state_monitor autoware_velodyne_monitor reaction_analyzer autoware_accel_brake_map_calibrator autoware_external_cmd_converter autoware_raw_vehicle_cmd_converter autoware_steer_offset_estimator autoware_bag_time_manager_rviz_plugin autoware_traffic_light_rviz_plugin tier4_adapi_rviz_plugin tier4_camera_view_rviz_plugin tier4_control_mode_rviz_plugin tier4_datetime_rviz_plugin tier4_perception_rviz_plugin tier4_planning_factor_rviz_plugin tier4_state_rviz_plugin tier4_system_rviz_plugin tier4_traffic_light_rviz_plugin tier4_vehicle_rviz_plugin

ROS Distro
github

Package Summary

Tags No category tags.
Version 0.47.0
License Apache License 2.0
Build type AMENT_CMAKE
Use RECOMMENDED

Repository Summary

Description
Checkout URI https://github.com/autowarefoundation/autoware_universe.git
VCS Type git
VCS Version main
Last Updated 2025-08-16
Dev Status UNKNOWN
Released UNRELEASED
Tags planner ros calibration self-driving-car autonomous-driving autonomous-vehicles ros2 3d-map autoware
Contributing Help Wanted (-)
Good First Issues (-)
Pull Requests to Review (-)

Package Description

This package implements a shape estimation algorithm as a ROS 2 node

Additional Links

No additional links.

Maintainers

  • Yukihiro Saito
  • Yoshi Ri
  • Kaan Colak
  • Taekjin Lee
  • Lei Gu

Authors

No additional authors.

autoware_shape_estimation

Purpose

This node calculates a refined object shape (bounding box, cylinder, convex hull) in which a pointcloud cluster fits according to a label.

Inner-workings / Algorithms

Fitting algorithms

  • bounding box
    • L-shape fitting: See reference below for details
    • ML based shape fitting: See ML Based Shape Fitting Implementation section below for details
  • cylinder

    cv::minEnclosingCircle

  • convex hull

    cv::convexHull

Inputs / Outputs

Input

Name Type Description
input tier4_perception_msgs::msg::DetectedObjectsWithFeature detected objects with labeled cluster

Output

Name Type Description
output/objects autoware_perception_msgs::msg::DetectedObjects detected objects with refined shape

Parameters

{{ json_to_markdown(“perception/autoware_shape_estimation/schema/shape_estimation.schema.json”) }}

ML Based Shape Implementation

The model takes a point cloud and object label(provided by camera detections/Apollo instance segmentation) as an input and outputs the 3D bounding box of the object.

ML based shape estimation algorithm uses a PointNet model as a backbone to estimate the 3D bounding box of the object. The model is trained on the NuScenes dataset with vehicle labels (Car, Truck, Bus, Trailer).

The implemented model is concatenated with STN (Spatial Transformer Network) to learn the transformation of the input point cloud to the canonical space and PointNet to predict the 3D bounding box of the object. Bounding box estimation part of Frustum PointNets for 3D Object Detection from RGB-D Data paper used as a reference.

The model predicts the following outputs for each object:

  • x,y,z coordinates of the object center
  • object heading angle classification result(Uses 12 bins for angle classification - 30 degrees each)
  • object heading angle residuals
  • object size classification result
  • object size residuals

Training ML Based Shape Estimation Model

To train the model, you need ground truth 3D bounding box annotations. When using the mmdetection3d repository for training a 3D object detection algorithm, these ground truth annotations are saved and utilized for data augmentation. These annotations are used as an essential dataset for training the shape estimation model effectively.

Preparing the Dataset

Install MMDetection3D prerequisites

Step 1. Download and install Miniconda from the official website.

Step 2. Create a conda virtual environment and activate it

conda create --name train-shape-estimation python=3.8 -y
conda activate train-shape-estimation

Step 3. Install PyTorch

conda install pytorch torchvision -c pytorch

Install mmdetection3d

Step 1. Install MMEngine, MMCV, and MMDetection using MIM

pip install -U openmim
mim install mmengine
mim install 'mmcv>=2.0.0rc4'
mim install 'mmdet>=3.0.0rc5, <3.3.0'

Step 2. Install Autoware’s MMDetection3D fork

git clone https://github.com/autowarefoundation/mmdetection3d.git
cd mmdetection3d
pip install -v -e .

File truncated at 100 lines see the full file

CHANGELOG

Changelog for package autoware_shape_estimation

0.47.0 (2025-08-11)

  • style(pre-commit): autofix (#10982) Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
  • Contributors: Ryohsuke Mitsudome

0.46.0 (2025-06-20)

0.45.0 (2025-05-22)

  • Merge remote-tracking branch 'origin/main' into tmp/notbot/bump_version_base

  • chore: perception code owner update (#10645)

    • chore: update maintainers in multiple perception packages

    * Revert "chore: update maintainers in multiple perception packages" This reverts commit f2838c33d6cd82bd032039e2a12b9cb8ba6eb584.

    • chore: update maintainers in multiple perception packages

    * chore: add Kok Seang Tan as maintainer in multiple perception packages ---------

  • Contributors: Taekjin LEE, TaikiYamada4

0.44.2 (2025-06-10)

0.44.1 (2025-05-01)

0.44.0 (2025-04-18)

  • Merge remote-tracking branch 'origin/main' into humble

  • chore(perception): code owner revision (#10358)

    • feat: add Masato Saeki and Taekjin Lee as maintainer to multiple package.xml files

    * style(pre-commit): autofix ---------Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>

  • Contributors: Ryohsuke Mitsudome, Taekjin LEE

0.43.0 (2025-03-21)

  • Merge remote-tracking branch 'origin/main' into chore/bump-version-0.43
  • chore: rename from [autoware.universe]{.title-ref} to [autoware_universe]{.title-ref} (#10306)
  • refactor: add autoware_cuda_dependency_meta (#10073)
  • Contributors: Esteve Fernandez, Hayato Mizushima, Yutaka Kondo

0.42.0 (2025-03-03)

  • Merge remote-tracking branch 'origin/main' into tmp/bot/bump_version_base
  • feat(autoware_utils): replace autoware_universe_utils with autoware_utils (#10191)
  • Contributors: Fumiya Watanabe, 心刚

0.41.2 (2025-02-19)

  • chore: bump version to 0.41.1 (#10088)
  • Contributors: Ryohsuke Mitsudome

0.41.1 (2025-02-10)

0.41.0 (2025-01-29)

  • Merge remote-tracking branch 'origin/main' into tmp/bot/bump_version_base

  • feat(autoware_shape_estimation): tier4_debug_msgs chnaged to autoware_internal_debug_msgs in autoware_shape_estimation (#9897) feat: tier4_debug_msgs chnaged to autoware_internal_debug_msgs in files perception/autoware_shape_estimation

  • refactor(autoware_tensorrt_common): multi-TensorRT compatibility & tensorrt_common as unified lib for all perception components (#9762)

    • refactor(autoware_tensorrt_common): multi-TensorRT compatibility & tensorrt_common as unified lib for all perception components
    • style(pre-commit): autofix
    • style(autoware_tensorrt_common): linting

File truncated at 100 lines see the full file

Launch files

  • launch/shape_estimation.launch.xml
      • input/objects [default: labeled_clusters]
      • output/objects [default: shape_estimated_objects]
      • node_name [default: shape_estimation]
      • data_path [default: $(env HOME)/autoware_data]
      • model_path [default: $(var data_path)/shape_estimation/pointnet.onnx]
      • config_file [default: $(find-pkg-share autoware_shape_estimation)/config/shape_estimation.param.yaml]

Messages

No message files found.

Services

No service files found

Plugins

No plugins found.

Recent questions tagged autoware_shape_estimation at Robotics Stack Exchange

No version for distro galactic showing github. Known supported distros are highlighted in the buttons above.
Package symbol

autoware_shape_estimation package from autoware_universe repo

autoware_agnocast_wrapper autoware_auto_common autoware_boundary_departure_checker autoware_component_interface_specs_universe autoware_component_interface_tools autoware_component_interface_utils autoware_cuda_dependency_meta autoware_fake_test_node autoware_glog_component autoware_goal_distance_calculator autoware_grid_map_utils autoware_path_distance_calculator autoware_polar_grid autoware_time_utils autoware_traffic_light_recognition_marker_publisher autoware_traffic_light_utils autoware_universe_utils tier4_api_utils autoware_autonomous_emergency_braking autoware_collision_detector autoware_control_command_gate autoware_control_performance_analysis autoware_control_validator autoware_external_cmd_selector autoware_joy_controller autoware_lane_departure_checker autoware_mpc_lateral_controller autoware_obstacle_collision_checker autoware_operation_mode_transition_manager autoware_pid_longitudinal_controller autoware_predicted_path_checker autoware_pure_pursuit autoware_shift_decider autoware_smart_mpc_trajectory_follower autoware_stop_mode_operator autoware_trajectory_follower_base autoware_trajectory_follower_node autoware_vehicle_cmd_gate autoware_control_evaluator autoware_kinematic_evaluator autoware_localization_evaluator autoware_perception_online_evaluator autoware_planning_evaluator autoware_scenario_simulator_v2_adapter autoware_diagnostic_graph_test_examples tier4_autoware_api_launch tier4_control_launch tier4_localization_launch tier4_map_launch tier4_perception_launch tier4_planning_launch tier4_sensing_launch tier4_simulator_launch tier4_system_launch tier4_vehicle_launch autoware_geo_pose_projector autoware_ar_tag_based_localizer autoware_landmark_manager autoware_lidar_marker_localizer autoware_localization_error_monitor autoware_pose2twist autoware_pose_covariance_modifier autoware_pose_estimator_arbiter autoware_pose_instability_detector yabloc_common yabloc_image_processing yabloc_monitor yabloc_particle_filter yabloc_pose_initializer autoware_map_tf_generator autoware_bevfusion autoware_bytetrack autoware_cluster_merger autoware_compare_map_segmentation autoware_crosswalk_traffic_light_estimator autoware_detected_object_feature_remover autoware_detected_object_validation autoware_detection_by_tracker autoware_elevation_map_loader autoware_euclidean_cluster autoware_ground_segmentation autoware_image_projection_based_fusion autoware_lidar_apollo_instance_segmentation autoware_lidar_centerpoint autoware_lidar_transfusion autoware_map_based_prediction autoware_multi_object_tracker autoware_object_merger autoware_object_range_splitter autoware_object_sorter autoware_object_velocity_splitter autoware_occupancy_grid_map_outlier_filter autoware_probabilistic_occupancy_grid_map autoware_radar_fusion_to_detected_object autoware_radar_object_tracker autoware_radar_tracks_msgs_converter autoware_raindrop_cluster_filter autoware_shape_estimation autoware_simpl_prediction autoware_simple_object_merger autoware_tensorrt_bevdet autoware_tensorrt_classifier autoware_tensorrt_common autoware_tensorrt_plugins autoware_tensorrt_yolox autoware_tracking_object_merger autoware_traffic_light_arbiter autoware_traffic_light_category_merger autoware_traffic_light_classifier autoware_traffic_light_fine_detector autoware_traffic_light_map_based_detector autoware_traffic_light_multi_camera_fusion autoware_traffic_light_occlusion_predictor autoware_traffic_light_selector autoware_traffic_light_visualization perception_utils autoware_costmap_generator autoware_diffusion_planner 
autoware_external_velocity_limit_selector autoware_freespace_planner autoware_freespace_planning_algorithms autoware_hazard_lights_selector autoware_mission_planner_universe autoware_path_optimizer autoware_path_smoother autoware_remaining_distance_time_calculator autoware_rtc_interface autoware_scenario_selector autoware_surround_obstacle_checker autoware_behavior_path_avoidance_by_lane_change_module autoware_behavior_path_bidirectional_traffic_module autoware_behavior_path_dynamic_obstacle_avoidance_module autoware_behavior_path_external_request_lane_change_module autoware_behavior_path_goal_planner_module autoware_behavior_path_lane_change_module autoware_behavior_path_planner autoware_behavior_path_planner_common autoware_behavior_path_sampling_planner_module autoware_behavior_path_side_shift_module autoware_behavior_path_start_planner_module autoware_behavior_path_static_obstacle_avoidance_module autoware_behavior_velocity_blind_spot_module autoware_behavior_velocity_crosswalk_module autoware_behavior_velocity_detection_area_module autoware_behavior_velocity_intersection_module autoware_behavior_velocity_no_drivable_lane_module autoware_behavior_velocity_no_stopping_area_module autoware_behavior_velocity_occlusion_spot_module autoware_behavior_velocity_rtc_interface autoware_behavior_velocity_run_out_module autoware_behavior_velocity_speed_bump_module autoware_behavior_velocity_template_module autoware_behavior_velocity_traffic_light_module autoware_behavior_velocity_virtual_traffic_light_module autoware_behavior_velocity_walkway_module autoware_motion_velocity_boundary_departure_prevention_module autoware_motion_velocity_dynamic_obstacle_stop_module autoware_motion_velocity_obstacle_cruise_module autoware_motion_velocity_obstacle_slow_down_module autoware_motion_velocity_obstacle_velocity_limiter_module autoware_motion_velocity_out_of_lane_module autoware_motion_velocity_road_user_stop_module autoware_motion_velocity_run_out_module autoware_planning_validator autoware_planning_validator_intersection_collision_checker autoware_planning_validator_latency_checker autoware_planning_validator_rear_collision_checker autoware_planning_validator_test_utils autoware_planning_validator_trajectory_checker autoware_bezier_sampler autoware_frenet_planner autoware_path_sampler autoware_sampler_common autoware_cuda_pointcloud_preprocessor autoware_cuda_utils autoware_image_diagnostics autoware_image_transport_decompressor autoware_imu_corrector autoware_pcl_extensions autoware_pointcloud_preprocessor autoware_radar_objects_adapter autoware_radar_scan_to_pointcloud2 autoware_radar_static_pointcloud_filter autoware_radar_threshold_filter autoware_radar_tracks_noise_filter autoware_livox_tag_filter autoware_carla_interface autoware_dummy_perception_publisher autoware_fault_injection autoware_learning_based_vehicle_model autoware_simple_planning_simulator autoware_vehicle_door_simulator tier4_dummy_object_rviz_plugin autoware_bluetooth_monitor autoware_command_mode_decider autoware_command_mode_decider_plugins autoware_command_mode_switcher autoware_command_mode_switcher_plugins autoware_command_mode_types autoware_component_monitor autoware_component_state_monitor autoware_adapi_visualizers autoware_automatic_pose_initializer autoware_default_adapi_universe autoware_diagnostic_graph_aggregator autoware_diagnostic_graph_utils autoware_dummy_diag_publisher autoware_dummy_infrastructure autoware_duplicated_node_checker autoware_hazard_status_converter autoware_mrm_comfortable_stop_operator 
autoware_mrm_emergency_stop_operator autoware_mrm_handler autoware_pipeline_latency_monitor autoware_processing_time_checker autoware_system_monitor autoware_topic_relay_controller autoware_topic_state_monitor autoware_velodyne_monitor reaction_analyzer autoware_accel_brake_map_calibrator autoware_external_cmd_converter autoware_raw_vehicle_cmd_converter autoware_steer_offset_estimator autoware_bag_time_manager_rviz_plugin autoware_traffic_light_rviz_plugin tier4_adapi_rviz_plugin tier4_camera_view_rviz_plugin tier4_control_mode_rviz_plugin tier4_datetime_rviz_plugin tier4_perception_rviz_plugin tier4_planning_factor_rviz_plugin tier4_state_rviz_plugin tier4_system_rviz_plugin tier4_traffic_light_rviz_plugin tier4_vehicle_rviz_plugin

ROS Distro
github

Package Summary

Tags No category tags.
Version 0.47.0
License Apache License 2.0
Build type AMENT_CMAKE
Use RECOMMENDED

Repository Summary

Description
Checkout URI https://github.com/autowarefoundation/autoware_universe.git
VCS Type git
VCS Version main
Last Updated 2025-08-16
Dev Status UNKNOWN
Released UNRELEASED
Tags planner ros calibration self-driving-car autonomous-driving autonomous-vehicles ros2 3d-map autoware
Contributing Help Wanted (-)
Good First Issues (-)
Pull Requests to Review (-)

Package Description

This package implements a shape estimation algorithm as a ROS 2 node

Additional Links

No additional links.

Maintainers

  • Yukihiro Saito
  • Yoshi Ri
  • Kaan Colak
  • Taekjin Lee
  • Lei Gu

Authors

No additional authors.

autoware_shape_estimation

Purpose

This node calculates a refined object shape (bounding box, cylinder, convex hull) in which a pointcloud cluster fits according to a label.

Inner-workings / Algorithms

Fitting algorithms

  • bounding box
    • L-shape fitting: See reference below for details
    • ML based shape fitting: See ML Based Shape Fitting Implementation section below for details
  • cylinder

    cv::minEnclosingCircle

  • convex hull

    cv::convexHull

Inputs / Outputs

Input

Name Type Description
input tier4_perception_msgs::msg::DetectedObjectsWithFeature detected objects with labeled cluster

Output

Name Type Description
output/objects autoware_perception_msgs::msg::DetectedObjects detected objects with refined shape

Parameters

{{ json_to_markdown(“perception/autoware_shape_estimation/schema/shape_estimation.schema.json”) }}

ML Based Shape Implementation

The model takes a point cloud and object label(provided by camera detections/Apollo instance segmentation) as an input and outputs the 3D bounding box of the object.

ML based shape estimation algorithm uses a PointNet model as a backbone to estimate the 3D bounding box of the object. The model is trained on the NuScenes dataset with vehicle labels (Car, Truck, Bus, Trailer).

The implemented model is concatenated with STN (Spatial Transformer Network) to learn the transformation of the input point cloud to the canonical space and PointNet to predict the 3D bounding box of the object. Bounding box estimation part of Frustum PointNets for 3D Object Detection from RGB-D Data paper used as a reference.

The model predicts the following outputs for each object:

  • x,y,z coordinates of the object center
  • object heading angle classification result(Uses 12 bins for angle classification - 30 degrees each)
  • object heading angle residuals
  • object size classification result
  • object size residuals

Training ML Based Shape Estimation Model

To train the model, you need ground truth 3D bounding box annotations. When using the mmdetection3d repository for training a 3D object detection algorithm, these ground truth annotations are saved and utilized for data augmentation. These annotations are used as an essential dataset for training the shape estimation model effectively.

Preparing the Dataset

Install MMDetection3D prerequisites

Step 1. Download and install Miniconda from the official website.

Step 2. Create a conda virtual environment and activate it

conda create --name train-shape-estimation python=3.8 -y
conda activate train-shape-estimation

Step 3. Install PyTorch

conda install pytorch torchvision -c pytorch

Install mmdetection3d

Step 1. Install MMEngine, MMCV, and MMDetection using MIM

pip install -U openmim
mim install mmengine
mim install 'mmcv>=2.0.0rc4'
mim install 'mmdet>=3.0.0rc5, <3.3.0'

Step 2. Install Autoware’s MMDetection3D fork

git clone https://github.com/autowarefoundation/mmdetection3d.git
cd mmdetection3d
pip install -v -e .

File truncated at 100 lines see the full file

CHANGELOG

Changelog for package autoware_shape_estimation

0.47.0 (2025-08-11)

  • style(pre-commit): autofix (#10982) Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
  • Contributors: Ryohsuke Mitsudome

0.46.0 (2025-06-20)

0.45.0 (2025-05-22)

  • Merge remote-tracking branch 'origin/main' into tmp/notbot/bump_version_base

  • chore: perception code owner update (#10645)

    • chore: update maintainers in multiple perception packages

    * Revert "chore: update maintainers in multiple perception packages" This reverts commit f2838c33d6cd82bd032039e2a12b9cb8ba6eb584.

    • chore: update maintainers in multiple perception packages

    * chore: add Kok Seang Tan as maintainer in multiple perception packages ---------

  • Contributors: Taekjin LEE, TaikiYamada4

0.44.2 (2025-06-10)

0.44.1 (2025-05-01)

0.44.0 (2025-04-18)

  • Merge remote-tracking branch 'origin/main' into humble

  • chore(perception): code owner revision (#10358)

    • feat: add Masato Saeki and Taekjin Lee as maintainer to multiple package.xml files

    * style(pre-commit): autofix ---------Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>

  • Contributors: Ryohsuke Mitsudome, Taekjin LEE

0.43.0 (2025-03-21)

  • Merge remote-tracking branch 'origin/main' into chore/bump-version-0.43
  • chore: rename from [autoware.universe]{.title-ref} to [autoware_universe]{.title-ref} (#10306)
  • refactor: add autoware_cuda_dependency_meta (#10073)
  • Contributors: Esteve Fernandez, Hayato Mizushima, Yutaka Kondo

0.42.0 (2025-03-03)

  • Merge remote-tracking branch 'origin/main' into tmp/bot/bump_version_base
  • feat(autoware_utils): replace autoware_universe_utils with autoware_utils (#10191)
  • Contributors: Fumiya Watanabe, 心刚

0.41.2 (2025-02-19)

  • chore: bump version to 0.41.1 (#10088)
  • Contributors: Ryohsuke Mitsudome

0.41.1 (2025-02-10)

0.41.0 (2025-01-29)

  • Merge remote-tracking branch 'origin/main' into tmp/bot/bump_version_base

  • feat(autoware_shape_estimation): tier4_debug_msgs chnaged to autoware_internal_debug_msgs in autoware_shape_estimation (#9897) feat: tier4_debug_msgs chnaged to autoware_internal_debug_msgs in files perception/autoware_shape_estimation

  • refactor(autoware_tensorrt_common): multi-TensorRT compatibility & tensorrt_common as unified lib for all perception components (#9762)

    • refactor(autoware_tensorrt_common): multi-TensorRT compatibility & tensorrt_common as unified lib for all perception components
    • style(pre-commit): autofix
    • style(autoware_tensorrt_common): linting

File truncated at 100 lines see the full file

Launch files

  • launch/shape_estimation.launch.xml
      • input/objects [default: labeled_clusters]
      • output/objects [default: shape_estimated_objects]
      • node_name [default: shape_estimation]
      • data_path [default: $(env HOME)/autoware_data]
      • model_path [default: $(var data_path)/shape_estimation/pointnet.onnx]
      • config_file [default: $(find-pkg-share autoware_shape_estimation)/config/shape_estimation.param.yaml]

Messages

No message files found.

Services

No service files found

Plugins

No plugins found.

Recent questions tagged autoware_shape_estimation at Robotics Stack Exchange

No version for distro iron showing github. Known supported distros are highlighted in the buttons above.
Package symbol

autoware_shape_estimation package from autoware_universe repo

autoware_agnocast_wrapper autoware_auto_common autoware_boundary_departure_checker autoware_component_interface_specs_universe autoware_component_interface_tools autoware_component_interface_utils autoware_cuda_dependency_meta autoware_fake_test_node autoware_glog_component autoware_goal_distance_calculator autoware_grid_map_utils autoware_path_distance_calculator autoware_polar_grid autoware_time_utils autoware_traffic_light_recognition_marker_publisher autoware_traffic_light_utils autoware_universe_utils tier4_api_utils autoware_autonomous_emergency_braking autoware_collision_detector autoware_control_command_gate autoware_control_performance_analysis autoware_control_validator autoware_external_cmd_selector autoware_joy_controller autoware_lane_departure_checker autoware_mpc_lateral_controller autoware_obstacle_collision_checker autoware_operation_mode_transition_manager autoware_pid_longitudinal_controller autoware_predicted_path_checker autoware_pure_pursuit autoware_shift_decider autoware_smart_mpc_trajectory_follower autoware_stop_mode_operator autoware_trajectory_follower_base autoware_trajectory_follower_node autoware_vehicle_cmd_gate autoware_control_evaluator autoware_kinematic_evaluator autoware_localization_evaluator autoware_perception_online_evaluator autoware_planning_evaluator autoware_scenario_simulator_v2_adapter autoware_diagnostic_graph_test_examples tier4_autoware_api_launch tier4_control_launch tier4_localization_launch tier4_map_launch tier4_perception_launch tier4_planning_launch tier4_sensing_launch tier4_simulator_launch tier4_system_launch tier4_vehicle_launch autoware_geo_pose_projector autoware_ar_tag_based_localizer autoware_landmark_manager autoware_lidar_marker_localizer autoware_localization_error_monitor autoware_pose2twist autoware_pose_covariance_modifier autoware_pose_estimator_arbiter autoware_pose_instability_detector yabloc_common yabloc_image_processing yabloc_monitor yabloc_particle_filter yabloc_pose_initializer autoware_map_tf_generator autoware_bevfusion autoware_bytetrack autoware_cluster_merger autoware_compare_map_segmentation autoware_crosswalk_traffic_light_estimator autoware_detected_object_feature_remover autoware_detected_object_validation autoware_detection_by_tracker autoware_elevation_map_loader autoware_euclidean_cluster autoware_ground_segmentation autoware_image_projection_based_fusion autoware_lidar_apollo_instance_segmentation autoware_lidar_centerpoint autoware_lidar_transfusion autoware_map_based_prediction autoware_multi_object_tracker autoware_object_merger autoware_object_range_splitter autoware_object_sorter autoware_object_velocity_splitter autoware_occupancy_grid_map_outlier_filter autoware_probabilistic_occupancy_grid_map autoware_radar_fusion_to_detected_object autoware_radar_object_tracker autoware_radar_tracks_msgs_converter autoware_raindrop_cluster_filter autoware_shape_estimation autoware_simpl_prediction autoware_simple_object_merger autoware_tensorrt_bevdet autoware_tensorrt_classifier autoware_tensorrt_common autoware_tensorrt_plugins autoware_tensorrt_yolox autoware_tracking_object_merger autoware_traffic_light_arbiter autoware_traffic_light_category_merger autoware_traffic_light_classifier autoware_traffic_light_fine_detector autoware_traffic_light_map_based_detector autoware_traffic_light_multi_camera_fusion autoware_traffic_light_occlusion_predictor autoware_traffic_light_selector autoware_traffic_light_visualization perception_utils autoware_costmap_generator autoware_diffusion_planner 
autoware_external_velocity_limit_selector autoware_freespace_planner autoware_freespace_planning_algorithms autoware_hazard_lights_selector autoware_mission_planner_universe autoware_path_optimizer autoware_path_smoother autoware_remaining_distance_time_calculator autoware_rtc_interface autoware_scenario_selector autoware_surround_obstacle_checker autoware_behavior_path_avoidance_by_lane_change_module autoware_behavior_path_bidirectional_traffic_module autoware_behavior_path_dynamic_obstacle_avoidance_module autoware_behavior_path_external_request_lane_change_module autoware_behavior_path_goal_planner_module autoware_behavior_path_lane_change_module autoware_behavior_path_planner autoware_behavior_path_planner_common autoware_behavior_path_sampling_planner_module autoware_behavior_path_side_shift_module autoware_behavior_path_start_planner_module autoware_behavior_path_static_obstacle_avoidance_module autoware_behavior_velocity_blind_spot_module autoware_behavior_velocity_crosswalk_module autoware_behavior_velocity_detection_area_module autoware_behavior_velocity_intersection_module autoware_behavior_velocity_no_drivable_lane_module autoware_behavior_velocity_no_stopping_area_module autoware_behavior_velocity_occlusion_spot_module autoware_behavior_velocity_rtc_interface autoware_behavior_velocity_run_out_module autoware_behavior_velocity_speed_bump_module autoware_behavior_velocity_template_module autoware_behavior_velocity_traffic_light_module autoware_behavior_velocity_virtual_traffic_light_module autoware_behavior_velocity_walkway_module autoware_motion_velocity_boundary_departure_prevention_module autoware_motion_velocity_dynamic_obstacle_stop_module autoware_motion_velocity_obstacle_cruise_module autoware_motion_velocity_obstacle_slow_down_module autoware_motion_velocity_obstacle_velocity_limiter_module autoware_motion_velocity_out_of_lane_module autoware_motion_velocity_road_user_stop_module autoware_motion_velocity_run_out_module autoware_planning_validator autoware_planning_validator_intersection_collision_checker autoware_planning_validator_latency_checker autoware_planning_validator_rear_collision_checker autoware_planning_validator_test_utils autoware_planning_validator_trajectory_checker autoware_bezier_sampler autoware_frenet_planner autoware_path_sampler autoware_sampler_common autoware_cuda_pointcloud_preprocessor autoware_cuda_utils autoware_image_diagnostics autoware_image_transport_decompressor autoware_imu_corrector autoware_pcl_extensions autoware_pointcloud_preprocessor autoware_radar_objects_adapter autoware_radar_scan_to_pointcloud2 autoware_radar_static_pointcloud_filter autoware_radar_threshold_filter autoware_radar_tracks_noise_filter autoware_livox_tag_filter autoware_carla_interface autoware_dummy_perception_publisher autoware_fault_injection autoware_learning_based_vehicle_model autoware_simple_planning_simulator autoware_vehicle_door_simulator tier4_dummy_object_rviz_plugin autoware_bluetooth_monitor autoware_command_mode_decider autoware_command_mode_decider_plugins autoware_command_mode_switcher autoware_command_mode_switcher_plugins autoware_command_mode_types autoware_component_monitor autoware_component_state_monitor autoware_adapi_visualizers autoware_automatic_pose_initializer autoware_default_adapi_universe autoware_diagnostic_graph_aggregator autoware_diagnostic_graph_utils autoware_dummy_diag_publisher autoware_dummy_infrastructure autoware_duplicated_node_checker autoware_hazard_status_converter autoware_mrm_comfortable_stop_operator 
autoware_mrm_emergency_stop_operator autoware_mrm_handler autoware_pipeline_latency_monitor autoware_processing_time_checker autoware_system_monitor autoware_topic_relay_controller autoware_topic_state_monitor autoware_velodyne_monitor reaction_analyzer autoware_accel_brake_map_calibrator autoware_external_cmd_converter autoware_raw_vehicle_cmd_converter autoware_steer_offset_estimator autoware_bag_time_manager_rviz_plugin autoware_traffic_light_rviz_plugin tier4_adapi_rviz_plugin tier4_camera_view_rviz_plugin tier4_control_mode_rviz_plugin tier4_datetime_rviz_plugin tier4_perception_rviz_plugin tier4_planning_factor_rviz_plugin tier4_state_rviz_plugin tier4_system_rviz_plugin tier4_traffic_light_rviz_plugin tier4_vehicle_rviz_plugin

ROS Distro
github

Package Summary

Tags No category tags.
Version 0.47.0
License Apache License 2.0
Build type AMENT_CMAKE
Use RECOMMENDED

Repository Summary

Description
Checkout URI https://github.com/autowarefoundation/autoware_universe.git
VCS Type git
VCS Version main
Last Updated 2025-08-16
Dev Status UNKNOWN
Released UNRELEASED
Tags planner ros calibration self-driving-car autonomous-driving autonomous-vehicles ros2 3d-map autoware
Contributing Help Wanted (-)
Good First Issues (-)
Pull Requests to Review (-)

Package Description

This package implements a shape estimation algorithm as a ROS 2 node

Additional Links

No additional links.

Maintainers

  • Yukihiro Saito
  • Yoshi Ri
  • Kaan Colak
  • Taekjin Lee
  • Lei Gu

Authors

No additional authors.

autoware_shape_estimation

Purpose

This node calculates a refined object shape (bounding box, cylinder, convex hull) in which a pointcloud cluster fits according to a label.

Inner-workings / Algorithms

Fitting algorithms

  • bounding box
    • L-shape fitting: See reference below for details
    • ML based shape fitting: See ML Based Shape Fitting Implementation section below for details
  • cylinder

    cv::minEnclosingCircle

  • convex hull

    cv::convexHull

Inputs / Outputs

Input

Name Type Description
input tier4_perception_msgs::msg::DetectedObjectsWithFeature detected objects with labeled cluster

Output

Name Type Description
output/objects autoware_perception_msgs::msg::DetectedObjects detected objects with refined shape

Parameters

{{ json_to_markdown(“perception/autoware_shape_estimation/schema/shape_estimation.schema.json”) }}

ML Based Shape Implementation

The model takes a point cloud and object label(provided by camera detections/Apollo instance segmentation) as an input and outputs the 3D bounding box of the object.

ML based shape estimation algorithm uses a PointNet model as a backbone to estimate the 3D bounding box of the object. The model is trained on the NuScenes dataset with vehicle labels (Car, Truck, Bus, Trailer).

The implemented model is concatenated with STN (Spatial Transformer Network) to learn the transformation of the input point cloud to the canonical space and PointNet to predict the 3D bounding box of the object. Bounding box estimation part of Frustum PointNets for 3D Object Detection from RGB-D Data paper used as a reference.

The model predicts the following outputs for each object:

  • x,y,z coordinates of the object center
  • object heading angle classification result(Uses 12 bins for angle classification - 30 degrees each)
  • object heading angle residuals
  • object size classification result
  • object size residuals

Training ML Based Shape Estimation Model

To train the model, you need ground truth 3D bounding box annotations. When using the mmdetection3d repository for training a 3D object detection algorithm, these ground truth annotations are saved and utilized for data augmentation. These annotations are used as an essential dataset for training the shape estimation model effectively.

Preparing the Dataset

Install MMDetection3D prerequisites

Step 1. Download and install Miniconda from the official website.

Step 2. Create a conda virtual environment and activate it

conda create --name train-shape-estimation python=3.8 -y
conda activate train-shape-estimation

Step 3. Install PyTorch

conda install pytorch torchvision -c pytorch

Install mmdetection3d

Step 1. Install MMEngine, MMCV, and MMDetection using MIM

pip install -U openmim
mim install mmengine
mim install 'mmcv>=2.0.0rc4'
mim install 'mmdet>=3.0.0rc5, <3.3.0'

Step 2. Install Autoware’s MMDetection3D fork

git clone https://github.com/autowarefoundation/mmdetection3d.git
cd mmdetection3d
pip install -v -e .

File truncated at 100 lines see the full file

CHANGELOG

Changelog for package autoware_shape_estimation

0.47.0 (2025-08-11)

  • style(pre-commit): autofix (#10982) Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
  • Contributors: Ryohsuke Mitsudome

0.46.0 (2025-06-20)

0.45.0 (2025-05-22)

  • Merge remote-tracking branch 'origin/main' into tmp/notbot/bump_version_base

  • chore: perception code owner update (#10645)

    • chore: update maintainers in multiple perception packages

    * Revert "chore: update maintainers in multiple perception packages" This reverts commit f2838c33d6cd82bd032039e2a12b9cb8ba6eb584.

    • chore: update maintainers in multiple perception packages

    * chore: add Kok Seang Tan as maintainer in multiple perception packages ---------

  • Contributors: Taekjin LEE, TaikiYamada4

0.44.2 (2025-06-10)

0.44.1 (2025-05-01)

0.44.0 (2025-04-18)

  • Merge remote-tracking branch 'origin/main' into humble

  • chore(perception): code owner revision (#10358)

    • feat: add Masato Saeki and Taekjin Lee as maintainer to multiple package.xml files

    * style(pre-commit): autofix ---------Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>

  • Contributors: Ryohsuke Mitsudome, Taekjin LEE

0.43.0 (2025-03-21)

  • Merge remote-tracking branch 'origin/main' into chore/bump-version-0.43
  • chore: rename from [autoware.universe]{.title-ref} to [autoware_universe]{.title-ref} (#10306)
  • refactor: add autoware_cuda_dependency_meta (#10073)
  • Contributors: Esteve Fernandez, Hayato Mizushima, Yutaka Kondo

0.42.0 (2025-03-03)

  • Merge remote-tracking branch 'origin/main' into tmp/bot/bump_version_base
  • feat(autoware_utils): replace autoware_universe_utils with autoware_utils (#10191)
  • Contributors: Fumiya Watanabe, 心刚

0.41.2 (2025-02-19)

  • chore: bump version to 0.41.1 (#10088)
  • Contributors: Ryohsuke Mitsudome

0.41.1 (2025-02-10)

0.41.0 (2025-01-29)

  • Merge remote-tracking branch 'origin/main' into tmp/bot/bump_version_base

  • feat(autoware_shape_estimation): tier4_debug_msgs chnaged to autoware_internal_debug_msgs in autoware_shape_estimation (#9897) feat: tier4_debug_msgs chnaged to autoware_internal_debug_msgs in files perception/autoware_shape_estimation

  • refactor(autoware_tensorrt_common): multi-TensorRT compatibility & tensorrt_common as unified lib for all perception components (#9762)

    • refactor(autoware_tensorrt_common): multi-TensorRT compatibility & tensorrt_common as unified lib for all perception components
    • style(pre-commit): autofix
    • style(autoware_tensorrt_common): linting

File truncated at 100 lines see the full file

Launch files

  • launch/shape_estimation.launch.xml
      • input/objects [default: labeled_clusters]
      • output/objects [default: shape_estimated_objects]
      • node_name [default: shape_estimation]
      • data_path [default: $(env HOME)/autoware_data]
      • model_path [default: $(var data_path)/shape_estimation/pointnet.onnx]
      • config_file [default: $(find-pkg-share autoware_shape_estimation)/config/shape_estimation.param.yaml]

Messages

No message files found.

Services

No service files found

Plugins

No plugins found.

Recent questions tagged autoware_shape_estimation at Robotics Stack Exchange

No version for distro melodic showing github. Known supported distros are highlighted in the buttons above.
Package symbol

autoware_shape_estimation package from autoware_universe repo

autoware_agnocast_wrapper autoware_auto_common autoware_boundary_departure_checker autoware_component_interface_specs_universe autoware_component_interface_tools autoware_component_interface_utils autoware_cuda_dependency_meta autoware_fake_test_node autoware_glog_component autoware_goal_distance_calculator autoware_grid_map_utils autoware_path_distance_calculator autoware_polar_grid autoware_time_utils autoware_traffic_light_recognition_marker_publisher autoware_traffic_light_utils autoware_universe_utils tier4_api_utils autoware_autonomous_emergency_braking autoware_collision_detector autoware_control_command_gate autoware_control_performance_analysis autoware_control_validator autoware_external_cmd_selector autoware_joy_controller autoware_lane_departure_checker autoware_mpc_lateral_controller autoware_obstacle_collision_checker autoware_operation_mode_transition_manager autoware_pid_longitudinal_controller autoware_predicted_path_checker autoware_pure_pursuit autoware_shift_decider autoware_smart_mpc_trajectory_follower autoware_stop_mode_operator autoware_trajectory_follower_base autoware_trajectory_follower_node autoware_vehicle_cmd_gate autoware_control_evaluator autoware_kinematic_evaluator autoware_localization_evaluator autoware_perception_online_evaluator autoware_planning_evaluator autoware_scenario_simulator_v2_adapter autoware_diagnostic_graph_test_examples tier4_autoware_api_launch tier4_control_launch tier4_localization_launch tier4_map_launch tier4_perception_launch tier4_planning_launch tier4_sensing_launch tier4_simulator_launch tier4_system_launch tier4_vehicle_launch autoware_geo_pose_projector autoware_ar_tag_based_localizer autoware_landmark_manager autoware_lidar_marker_localizer autoware_localization_error_monitor autoware_pose2twist autoware_pose_covariance_modifier autoware_pose_estimator_arbiter autoware_pose_instability_detector yabloc_common yabloc_image_processing yabloc_monitor yabloc_particle_filter yabloc_pose_initializer autoware_map_tf_generator autoware_bevfusion autoware_bytetrack autoware_cluster_merger autoware_compare_map_segmentation autoware_crosswalk_traffic_light_estimator autoware_detected_object_feature_remover autoware_detected_object_validation autoware_detection_by_tracker autoware_elevation_map_loader autoware_euclidean_cluster autoware_ground_segmentation autoware_image_projection_based_fusion autoware_lidar_apollo_instance_segmentation autoware_lidar_centerpoint autoware_lidar_transfusion autoware_map_based_prediction autoware_multi_object_tracker autoware_object_merger autoware_object_range_splitter autoware_object_sorter autoware_object_velocity_splitter autoware_occupancy_grid_map_outlier_filter autoware_probabilistic_occupancy_grid_map autoware_radar_fusion_to_detected_object autoware_radar_object_tracker autoware_radar_tracks_msgs_converter autoware_raindrop_cluster_filter autoware_shape_estimation autoware_simpl_prediction autoware_simple_object_merger autoware_tensorrt_bevdet autoware_tensorrt_classifier autoware_tensorrt_common autoware_tensorrt_plugins autoware_tensorrt_yolox autoware_tracking_object_merger autoware_traffic_light_arbiter autoware_traffic_light_category_merger autoware_traffic_light_classifier autoware_traffic_light_fine_detector autoware_traffic_light_map_based_detector autoware_traffic_light_multi_camera_fusion autoware_traffic_light_occlusion_predictor autoware_traffic_light_selector autoware_traffic_light_visualization perception_utils autoware_costmap_generator autoware_diffusion_planner 
autoware_external_velocity_limit_selector autoware_freespace_planner autoware_freespace_planning_algorithms autoware_hazard_lights_selector autoware_mission_planner_universe autoware_path_optimizer autoware_path_smoother autoware_remaining_distance_time_calculator autoware_rtc_interface autoware_scenario_selector autoware_surround_obstacle_checker autoware_behavior_path_avoidance_by_lane_change_module autoware_behavior_path_bidirectional_traffic_module autoware_behavior_path_dynamic_obstacle_avoidance_module autoware_behavior_path_external_request_lane_change_module autoware_behavior_path_goal_planner_module autoware_behavior_path_lane_change_module autoware_behavior_path_planner autoware_behavior_path_planner_common autoware_behavior_path_sampling_planner_module autoware_behavior_path_side_shift_module autoware_behavior_path_start_planner_module autoware_behavior_path_static_obstacle_avoidance_module autoware_behavior_velocity_blind_spot_module autoware_behavior_velocity_crosswalk_module autoware_behavior_velocity_detection_area_module autoware_behavior_velocity_intersection_module autoware_behavior_velocity_no_drivable_lane_module autoware_behavior_velocity_no_stopping_area_module autoware_behavior_velocity_occlusion_spot_module autoware_behavior_velocity_rtc_interface autoware_behavior_velocity_run_out_module autoware_behavior_velocity_speed_bump_module autoware_behavior_velocity_template_module autoware_behavior_velocity_traffic_light_module autoware_behavior_velocity_virtual_traffic_light_module autoware_behavior_velocity_walkway_module autoware_motion_velocity_boundary_departure_prevention_module autoware_motion_velocity_dynamic_obstacle_stop_module autoware_motion_velocity_obstacle_cruise_module autoware_motion_velocity_obstacle_slow_down_module autoware_motion_velocity_obstacle_velocity_limiter_module autoware_motion_velocity_out_of_lane_module autoware_motion_velocity_road_user_stop_module autoware_motion_velocity_run_out_module autoware_planning_validator autoware_planning_validator_intersection_collision_checker autoware_planning_validator_latency_checker autoware_planning_validator_rear_collision_checker autoware_planning_validator_test_utils autoware_planning_validator_trajectory_checker autoware_bezier_sampler autoware_frenet_planner autoware_path_sampler autoware_sampler_common autoware_cuda_pointcloud_preprocessor autoware_cuda_utils autoware_image_diagnostics autoware_image_transport_decompressor autoware_imu_corrector autoware_pcl_extensions autoware_pointcloud_preprocessor autoware_radar_objects_adapter autoware_radar_scan_to_pointcloud2 autoware_radar_static_pointcloud_filter autoware_radar_threshold_filter autoware_radar_tracks_noise_filter autoware_livox_tag_filter autoware_carla_interface autoware_dummy_perception_publisher autoware_fault_injection autoware_learning_based_vehicle_model autoware_simple_planning_simulator autoware_vehicle_door_simulator tier4_dummy_object_rviz_plugin autoware_bluetooth_monitor autoware_command_mode_decider autoware_command_mode_decider_plugins autoware_command_mode_switcher autoware_command_mode_switcher_plugins autoware_command_mode_types autoware_component_monitor autoware_component_state_monitor autoware_adapi_visualizers autoware_automatic_pose_initializer autoware_default_adapi_universe autoware_diagnostic_graph_aggregator autoware_diagnostic_graph_utils autoware_dummy_diag_publisher autoware_dummy_infrastructure autoware_duplicated_node_checker autoware_hazard_status_converter autoware_mrm_comfortable_stop_operator 
autoware_mrm_emergency_stop_operator autoware_mrm_handler autoware_pipeline_latency_monitor autoware_processing_time_checker autoware_system_monitor autoware_topic_relay_controller autoware_topic_state_monitor autoware_velodyne_monitor reaction_analyzer autoware_accel_brake_map_calibrator autoware_external_cmd_converter autoware_raw_vehicle_cmd_converter autoware_steer_offset_estimator autoware_bag_time_manager_rviz_plugin autoware_traffic_light_rviz_plugin tier4_adapi_rviz_plugin tier4_camera_view_rviz_plugin tier4_control_mode_rviz_plugin tier4_datetime_rviz_plugin tier4_perception_rviz_plugin tier4_planning_factor_rviz_plugin tier4_state_rviz_plugin tier4_system_rviz_plugin tier4_traffic_light_rviz_plugin tier4_vehicle_rviz_plugin


Package Summary

Tags: No category tags.
Version: 0.47.0
License: Apache License 2.0
Build type: AMENT_CMAKE
Use: RECOMMENDED

Repository Summary

Description:
Checkout URI: https://github.com/autowarefoundation/autoware_universe.git
VCS Type: git
VCS Version: main
Last Updated: 2025-08-16
Dev Status: UNKNOWN
Released: UNRELEASED
Tags: planner ros calibration self-driving-car autonomous-driving autonomous-vehicles ros2 3d-map autoware
Contributing: Help Wanted (-)
Good First Issues: (-)
Pull Requests to Review: (-)

Package Description

This package implements a shape estimation algorithm as a ROS 2 node.

Additional Links

No additional links.

Maintainers

  • Yukihiro Saito
  • Yoshi Ri
  • Kaan Colak
  • Taekjin Lee
  • Lei Gu

Authors

No additional authors.

autoware_shape_estimation

Purpose

This node estimates a refined object shape (bounding box, cylinder, or convex hull) that fits a point cloud cluster, according to the object's label.

Inner-workings / Algorithms

Fitting algorithms

  • bounding box
    • L-shape fitting: see the reference below for details
    • ML-based shape fitting: see the ML Based Shape Implementation section below for details
  • cylinder: cv::minEnclosingCircle
  • convex hull: cv::convexHull (see the OpenCV sketch after this list)
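
The cylinder and convex hull fits map directly onto OpenCV. Below is a minimal Python sketch of those two calls applied to the x-y footprint of a cluster (the node itself is implemented in C++; the random points are a hypothetical stand-in for a real cluster):

import cv2
import numpy as np

# Hypothetical cluster footprint: N x 2 array of x, y points (float32, as OpenCV expects).
xy = np.random.default_rng(0).normal(size=(100, 2)).astype(np.float32)

# Cylinder: the smallest enclosing circle gives the center and radius.
(cx, cy), radius = cv2.minEnclosingCircle(xy)

# Convex hull: the polygonal footprint of the cluster, returned as an (M, 1, 2) array of vertices.
hull = cv2.convexHull(xy)

print(f"cylinder center=({cx:.2f}, {cy:.2f}), radius={radius:.2f}, hull has {len(hull)} vertices")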

Inputs / Outputs

Input

Name: input
Type: tier4_perception_msgs::msg::DetectedObjectsWithFeature
Description: detected objects with labeled cluster

Output

Name: output/objects
Type: autoware_perception_msgs::msg::DetectedObjects
Description: detected objects with refined shape
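
As a usage illustration, the sketch below subscribes to the refined objects with rclpy and prints each shape. The topic name matches the launch default listed under Launch files, and the field and constant names (objects, shape.type, Shape.BOUNDING_BOX, shape.dimensions) are taken from the autoware_perception_msgs definitions; verify them against the installed message package:

import rclpy
from rclpy.node import Node
from autoware_perception_msgs.msg import DetectedObjects, Shape


class ShapeListener(Node):
    def __init__(self):
        super().__init__("shape_listener")
        # Topic name matches the launch default; remap as needed.
        self.create_subscription(DetectedObjects, "shape_estimated_objects", self.on_objects, 10)

    def on_objects(self, msg):
        names = {Shape.BOUNDING_BOX: "box", Shape.CYLINDER: "cylinder", Shape.POLYGON: "polygon"}
        for obj in msg.objects:
            d = obj.shape.dimensions
            self.get_logger().info(f"{names.get(obj.shape.type, 'unknown')}: {d.x:.2f} x {d.y:.2f} x {d.z:.2f} m")


def main():
    rclpy.init()
    rclpy.spin(ShapeListener())


if __name__ == "__main__":
    main()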

Parameters

{{ json_to_markdown("perception/autoware_shape_estimation/schema/shape_estimation.schema.json") }}

ML Based Shape Implementation

The model takes a point cloud and an object label (provided by camera detections or Apollo instance segmentation) as input and outputs the 3D bounding box of the object.

The ML-based shape estimation algorithm uses a PointNet model as its backbone to estimate the 3D bounding box of the object. The model is trained on the NuScenes dataset with vehicle labels (Car, Truck, Bus, Trailer).

The implemented model combines an STN (Spatial Transformer Network), which learns a transformation of the input point cloud into a canonical space, with a PointNet that predicts the 3D bounding box of the object. The bounding box estimation part of the Frustum PointNets for 3D Object Detection from RGB-D Data paper was used as a reference.
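
For orientation only, the following is a schematic PyTorch sketch of that STN + PointNet pipeline. It is not the packaged ONNX model: the layer widths are arbitrary, the object-label input is omitted, and the output layout (center, heading bins and residuals, size classes and residuals) is an assumption based on the description above:

import torch
import torch.nn as nn

NUM_HEADING_BINS = 12
NUM_SIZE_CLASSES = 4  # Car, Truck, Bus, Trailer (assumed ordering)


class TNet(nn.Module):
    # Simplified stand-in for the STN stage: predicts a translation that
    # moves the cluster toward a canonical frame.
    def __init__(self):
        super().__init__()
        self.mlp = nn.Sequential(nn.Conv1d(3, 64, 1), nn.ReLU(), nn.Conv1d(64, 128, 1), nn.ReLU())
        self.fc = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 3))

    def forward(self, pts):  # pts: (B, 3, N)
        feat = self.mlp(pts).max(dim=2).values  # global feature per cluster
        return self.fc(feat)  # (B, 3) predicted center shift


class BoxEstimationNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.stn = TNet()
        self.backbone = nn.Sequential(
            nn.Conv1d(3, 128, 1), nn.ReLU(),
            nn.Conv1d(128, 256, 1), nn.ReLU(),
            nn.Conv1d(256, 512, 1), nn.ReLU(),
        )
        # center (3) + heading bin scores (12) + heading residuals (12)
        # + size class scores (4) + size residuals (4 * 3)
        out_dim = 3 + 2 * NUM_HEADING_BINS + 4 * NUM_SIZE_CLASSES
        self.head = nn.Sequential(nn.Linear(512, 256), nn.ReLU(), nn.Linear(256, out_dim))

    def forward(self, pts):  # pts: (B, 3, N) cluster points
        shift = self.stn(pts)
        centered = pts - shift.unsqueeze(2)  # move the cluster toward canonical space
        feat = self.backbone(centered).max(dim=2).values
        return self.head(feat), shift


net = BoxEstimationNet()
outputs, shift = net(torch.randn(2, 3, 512))  # two clusters of 512 points each
print(outputs.shape)  # torch.Size([2, 43])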

For each object, the model predicts the following outputs (a decoding sketch follows this list):

  • x, y, z coordinates of the object center
  • object heading angle classification result (12 bins of 30 degrees each)
  • object heading angle residuals
  • object size classification result
  • object size residuals
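
A minimal decoding sketch under the convention stated above (pick the most likely 30-degree heading bin and refine it with its residual; add the size residual to a per-class mean size). The per-class mean sizes below are illustrative placeholders, not the values baked into the trained model:

import numpy as np

NUM_HEADING_BINS = 12
BIN_SIZE = 2.0 * np.pi / NUM_HEADING_BINS  # 30 degrees per bin

# Placeholder per-class mean sizes (length, width, height in meters).
MEAN_SIZES = {
    "car": np.array([4.6, 1.9, 1.7]),
    "truck": np.array([6.9, 2.5, 2.8]),
    "bus": np.array([11.0, 2.9, 3.4]),
    "trailer": np.array([12.3, 2.9, 3.9]),
}
CLASS_NAMES = list(MEAN_SIZES)


def decode_heading(bin_scores, bin_residuals):
    # Pick the most likely bin, then refine its center angle with the residual.
    b = int(np.argmax(bin_scores))
    angle = b * BIN_SIZE + bin_residuals[b]
    return float((angle + np.pi) % (2.0 * np.pi) - np.pi)  # wrap to [-pi, pi)


def decode_size(class_scores, size_residuals):
    # Mean size of the predicted class plus the predicted residual.
    c = int(np.argmax(class_scores))
    return MEAN_SIZES[CLASS_NAMES[c]] + size_residuals[c]


rng = np.random.default_rng(0)
yaw = decode_heading(rng.random(NUM_HEADING_BINS), rng.normal(0.0, 0.1, NUM_HEADING_BINS))
length_width_height = decode_size(rng.random(4), rng.normal(0.0, 0.1, (4, 3)))
print(yaw, length_width_height)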

Training ML Based Shape Estimation Model

To train the model, you need ground truth 3D bounding box annotations. When the mmdetection3d repository is used to train a 3D object detection algorithm, these ground truth annotations are saved and used for data augmentation; the same annotations serve as the dataset for training the shape estimation model.

Preparing the Dataset

Install MMDetection3D prerequisites

Step 1. Download and install Miniconda from the official website.

Step 2. Create a conda virtual environment and activate it

conda create --name train-shape-estimation python=3.8 -y
conda activate train-shape-estimation

Step 3. Install PyTorch

conda install pytorch torchvision -c pytorch

Install mmdetection3d

Step 1. Install MMEngine, MMCV, and MMDetection using MIM

pip install -U openmim
mim install mmengine
mim install 'mmcv>=2.0.0rc4'
mim install 'mmdet>=3.0.0rc5, <3.3.0'

Step 2. Install Autoware’s MMDetection3D fork

git clone https://github.com/autowarefoundation/mmdetection3d.git
cd mmdetection3d
pip install -v -e .

File truncated at 100 lines; see the full file.

CHANGELOG

Changelog for package autoware_shape_estimation

0.47.0 (2025-08-11)

  • style(pre-commit): autofix (#10982) Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
  • Contributors: Ryohsuke Mitsudome

0.46.0 (2025-06-20)

0.45.0 (2025-05-22)

  • Merge remote-tracking branch 'origin/main' into tmp/notbot/bump_version_base

  • chore: perception code owner update (#10645)

    • chore: update maintainers in multiple perception packages
    • Revert "chore: update maintainers in multiple perception packages" This reverts commit f2838c33d6cd82bd032039e2a12b9cb8ba6eb584.
    • chore: update maintainers in multiple perception packages
    • chore: add Kok Seang Tan as maintainer in multiple perception packages

  • Contributors: Taekjin LEE, TaikiYamada4

0.44.2 (2025-06-10)

0.44.1 (2025-05-01)

0.44.0 (2025-04-18)

  • Merge remote-tracking branch 'origin/main' into humble

  • chore(perception): code owner revision (#10358)

    • feat: add Masato Saeki and Taekjin Lee as maintainer to multiple package.xml files
    • style(pre-commit): autofix (Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>)

  • Contributors: Ryohsuke Mitsudome, Taekjin LEE

0.43.0 (2025-03-21)

  • Merge remote-tracking branch 'origin/main' into chore/bump-version-0.43
  • chore: rename from [autoware.universe]{.title-ref} to [autoware_universe]{.title-ref} (#10306)
  • refactor: add autoware_cuda_dependency_meta (#10073)
  • Contributors: Esteve Fernandez, Hayato Mizushima, Yutaka Kondo

0.42.0 (2025-03-03)

  • Merge remote-tracking branch 'origin/main' into tmp/bot/bump_version_base
  • feat(autoware_utils): replace autoware_universe_utils with autoware_utils (#10191)
  • Contributors: Fumiya Watanabe, 心刚

0.41.2 (2025-02-19)

  • chore: bump version to 0.41.1 (#10088)
  • Contributors: Ryohsuke Mitsudome

0.41.1 (2025-02-10)

0.41.0 (2025-01-29)

  • Merge remote-tracking branch 'origin/main' into tmp/bot/bump_version_base

  • feat(autoware_shape_estimation): tier4_debug_msgs chnaged to autoware_internal_debug_msgs in autoware_shape_estimation (#9897) feat: tier4_debug_msgs chnaged to autoware_internal_debug_msgs in files perception/autoware_shape_estimation

  • refactor(autoware_tensorrt_common): multi-TensorRT compatibility & tensorrt_common as unified lib for all perception components (#9762)

    • refactor(autoware_tensorrt_common): multi-TensorRT compatibility & tensorrt_common as unified lib for all perception components
    • style(pre-commit): autofix
    • style(autoware_tensorrt_common): linting

File truncated at 100 lines; see the full file.

Launch files

  • launch/shape_estimation.launch.xml (arguments and defaults listed below; see the include sketch after this list)
      • input/objects [default: labeled_clusters]
      • output/objects [default: shape_estimated_objects]
      • node_name [default: shape_estimation]
      • data_path [default: $(env HOME)/autoware_data]
      • model_path [default: $(var data_path)/shape_estimation/pointnet.onnx]
      • config_file [default: $(find-pkg-share autoware_shape_estimation)/config/shape_estimation.param.yaml]
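
As a usage illustration, the launch file can be included from a Python launch description with its arguments overridden. This is a sketch only; the remapped topic names are examples, not values prescribed by any particular Autoware configuration:

import os

from ament_index_python.packages import get_package_share_directory
from launch import LaunchDescription
from launch.actions import IncludeLaunchDescription
from launch.launch_description_sources import AnyLaunchDescriptionSource


def generate_launch_description():
    pkg_share = get_package_share_directory("autoware_shape_estimation")
    return LaunchDescription([
        IncludeLaunchDescription(
            AnyLaunchDescriptionSource(
                os.path.join(pkg_share, "launch", "shape_estimation.launch.xml")
            ),
            # Override the defaults listed above; example topic names only.
            launch_arguments={
                "input/objects": "/perception/clustering/clusters",
                "output/objects": "/perception/shape_estimation/objects",
            }.items(),
        ),
    ])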

Messages

No message files found.

Services

No service files found.

Plugins

No plugins found.

Recent questions tagged autoware_shape_estimation at Robotics Stack Exchange
