Package Summary
Tags | No category tags. |
Version | 0.47.0 |
License | Apache License 2.0 |
Build type | AMENT_CMAKE |
Use | RECOMMENDED |
Repository Summary
Description | |
Checkout URI | https://github.com/autowarefoundation/autoware_universe.git |
VCS Type | git |
VCS Version | main |
Last Updated | 2025-08-16 |
Dev Status | UNKNOWN |
Released | UNRELEASED |
Tags | planner ros calibration self-driving-car autonomous-driving autonomous-vehicles ros2 3d-map autoware |
Maintainers
- Yukihiro Saito
- Yoshi Ri
- Kaan Colak
- Taekjin Lee
- Lei Gu
Authors
autoware_shape_estimation
Purpose
This node calculates a refined object shape (bounding box, cylinder, or convex hull) that fits the point-cloud cluster of each detected object according to its label.
Inner-workings / Algorithms
Fitting algorithms
- bounding box
  - L-shape fitting: see the reference below for details
  - ML-based shape fitting: see the ML Based Shape Implementation section below for details
- cylinder
  - cv::minEnclosingCircle (see the sketch below)
- convex hull
  - cv::convexHull (see the sketch below)
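For the geometric fits, the sketch below shows how the cylinder and convex-hull shapes could be obtained with OpenCV, assuming the cluster has already been projected onto the ground (X-Y) plane. The function names and the projection step are illustrative only and are not the package's actual implementation.

```cpp
#include <opencv2/imgproc.hpp>

#include <vector>

// Cylinder fit: the smallest circle enclosing the projected cluster gives the
// cylinder center and radius.
void fit_cylinder(
  const std::vector<cv::Point2f> & cluster_xy, cv::Point2f & center, float & radius)
{
  cv::minEnclosingCircle(cluster_xy, center, radius);
}

// Convex-hull fit: the 2-D convex hull of the projected cluster gives the
// polygon footprint of the object.
std::vector<cv::Point2f> fit_convex_hull(const std::vector<cv::Point2f> & cluster_xy)
{
  std::vector<cv::Point2f> hull;
  cv::convexHull(cluster_xy, hull);
  return hull;
}
```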
Inputs / Outputs
Input
Name | Type | Description
---|---|---
`input` | `tier4_perception_msgs::msg::DetectedObjectsWithFeature` | detected objects with labeled cluster
Output
Name | Type | Description
---|---|---
`output/objects` | `autoware_perception_msgs::msg::DetectedObjects` | detected objects with refined shape
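The following is a minimal, hypothetical sketch of how a node with this interface could be wired up in rclcpp. It simply forwards the labeled objects and is not the package's actual implementation; the field names (e.g. `feature_objects`) follow the message definitions referenced above.

```cpp
#include <autoware_perception_msgs/msg/detected_objects.hpp>
#include <rclcpp/rclcpp.hpp>
#include <tier4_perception_msgs/msg/detected_objects_with_feature.hpp>

class ShapeEstimationIoSketch : public rclcpp::Node
{
public:
  ShapeEstimationIoSketch() : Node("shape_estimation_io_sketch")
  {
    pub_ = create_publisher<autoware_perception_msgs::msg::DetectedObjects>("output/objects", 1);
    sub_ = create_subscription<tier4_perception_msgs::msg::DetectedObjectsWithFeature>(
      "input", 1,
      [this](const tier4_perception_msgs::msg::DetectedObjectsWithFeature::ConstSharedPtr msg) {
        autoware_perception_msgs::msg::DetectedObjects output;
        output.header = msg->header;
        for (const auto & feature_object : msg->feature_objects) {
          // A real implementation would refine feature_object.object.shape here,
          // using the point-cloud cluster and the object's label.
          output.objects.push_back(feature_object.object);
        }
        pub_->publish(output);
      });
  }

private:
  rclcpp::Publisher<autoware_perception_msgs::msg::DetectedObjects>::SharedPtr pub_;
  rclcpp::Subscription<tier4_perception_msgs::msg::DetectedObjectsWithFeature>::SharedPtr sub_;
};
```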
Parameters
{{ json_to_markdown("perception/autoware_shape_estimation/schema/shape_estimation.schema.json") }}
ML Based Shape Implementation
The model takes a point cloud and an object label (provided by camera detections / Apollo instance segmentation) as input and outputs the 3D bounding box of the object.
The ML-based shape estimation algorithm uses a PointNet model as its backbone to estimate the 3D bounding box of the object. The model is trained on the NuScenes dataset with vehicle labels (Car, Truck, Bus, Trailer).
The implemented model combines an STN (Spatial Transformer Network), which learns the transformation of the input point cloud into a canonical space, with a PointNet that predicts the 3D bounding box of the object. The bounding box estimation part of the Frustum PointNets for 3D Object Detection from RGB-D Data paper was used as a reference.
The model predicts the following outputs for each object:
- x, y, z coordinates of the object center
- object heading angle classification result (12 bins of 30 degrees each; see the decoding sketch after this list)
- object heading angle residuals
- object size classification result
- object size residuals
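As an illustration of the bin-plus-residual parameterization above, a minimal sketch of how the final heading angle could be decoded is shown below. The function and variable names are hypothetical; the actual post-processing is part of the package's inference code.

```cpp
#include <cmath>
#include <cstddef>

// Decode a heading angle from the classified bin and the regressed residual,
// assuming 12 bins of 30 degrees each, as described above.
double decode_heading(std::size_t bin_index, double residual_rad)
{
  constexpr std::size_t kNumBins = 12;
  constexpr double kBinWidth = 2.0 * M_PI / kNumBins;  // 30 degrees per bin
  const double yaw = bin_index * kBinWidth + residual_rad;
  // Wrap to (-pi, pi] for downstream consumers.
  return std::atan2(std::sin(yaw), std::cos(yaw));
}
```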
Training ML Based Shape Estimation Model
To train the model, you need ground-truth 3D bounding box annotations. When the mmdetection3d repository is used to train a 3D object detection algorithm, these ground-truth annotations are saved and used for data augmentation; the same annotations serve as the dataset for training the shape estimation model.
Preparing the Dataset
Install MMDetection3D prerequisites
Step 1. Download and install Miniconda from the official website.
Step 2. Create a conda virtual environment and activate it
conda create --name train-shape-estimation python=3.8 -y
conda activate train-shape-estimation
Step 3. Install PyTorch
conda install pytorch torchvision -c pytorch
Install mmdetection3d
Step 1. Install MMEngine, MMCV, and MMDetection using MIM
pip install -U openmim
mim install mmengine
mim install 'mmcv>=2.0.0rc4'
mim install 'mmdet>=3.0.0rc5, <3.3.0'
Step 2. Install Autoware's MMDetection3D fork
git clone https://github.com/autowarefoundation/mmdetection3d.git
cd mmdetection3d
pip install -v -e .
File truncated at 100 lines; see the full file.
Changelog for package autoware_shape_estimation
0.47.0 (2025-08-11)
- style(pre-commit): autofix (#10982) Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
- Contributors: Ryohsuke Mitsudome
0.46.0 (2025-06-20)
0.45.0 (2025-05-22)
- Merge remote-tracking branch 'origin/main' into tmp/notbot/bump_version_base
- chore: perception code owner update (#10645)
  - chore: update maintainers in multiple perception packages
  - Revert "chore: update maintainers in multiple perception packages" This reverts commit f2838c33d6cd82bd032039e2a12b9cb8ba6eb584.
  - chore: update maintainers in multiple perception packages
  - chore: add Kok Seang Tan as maintainer in multiple perception packages
- Contributors: Taekjin LEE, TaikiYamada4
0.44.2 (2025-06-10)
0.44.1 (2025-05-01)
0.44.0 (2025-04-18)
- Merge remote-tracking branch 'origin/main' into humble
- chore(perception): code owner revision (#10358)
  - feat: add Masato Saeki and Taekjin Lee as maintainer to multiple package.xml files
  - style(pre-commit): autofix Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
- Contributors: Ryohsuke Mitsudome, Taekjin LEE
0.43.0 (2025-03-21)
- Merge remote-tracking branch 'origin/main' into chore/bump-version-0.43
- chore: rename from autoware.universe to autoware_universe (#10306)
- refactor: add autoware_cuda_dependency_meta (#10073)
- Contributors: Esteve Fernandez, Hayato Mizushima, Yutaka Kondo
0.42.0 (2025-03-03)
- Merge remote-tracking branch 'origin/main' into tmp/bot/bump_version_base
- feat(autoware_utils): replace autoware_universe_utils with autoware_utils (#10191)
- Contributors: Fumiya Watanabe, 心刚
0.41.2 (2025-02-19)
- chore: bump version to 0.41.1 (#10088)
- Contributors: Ryohsuke Mitsudome
0.41.1 (2025-02-10)
0.41.0 (2025-01-29)
- Merge remote-tracking branch 'origin/main' into tmp/bot/bump_version_base
- feat(autoware_shape_estimation): tier4_debug_msgs changed to autoware_internal_debug_msgs in autoware_shape_estimation (#9897)
- refactor(autoware_tensorrt_common): multi-TensorRT compatibility & tensorrt_common as unified lib for all perception components (#9762)
  - style(pre-commit): autofix
  - style(autoware_tensorrt_common): linting
File truncated at 100 lines; see the full file.
Package Dependencies
System Dependencies
Dependant Packages
Launch files
- launch/shape_estimation.launch.xml
  - input/objects [default: labeled_clusters]
  - output/objects [default: shape_estimated_objects]
  - node_name [default: shape_estimation]
  - data_path [default: $(env HOME)/autoware_data]
  - model_path [default: $(var data_path)/shape_estimation/pointnet.onnx]
  - config_file [default: $(find-pkg-share autoware_shape_estimation)/config/shape_estimation.param.yaml]