Package Summary
Tags | No category tags. |
Version | 0.46.0 |
License | Apache License 2.0 |
Build type | AMENT_CMAKE |
Use | RECOMMENDED |
Repository Summary
Description | |
Checkout URI | https://github.com/autowarefoundation/autoware_universe.git |
VCS Type | git |
VCS Version | main |
Last Updated | 2025-07-31 |
Dev Status | UNKNOWN |
Released | UNRELEASED |
Tags | planner ros calibration self-driving-car autonomous-driving autonomous-vehicles ros2 3d-map autoware |
Contributing | Help Wanted (-) Good First Issues (-) Pull Requests to Review (-) |
Package Description
Additional Links
Maintainers
- Dan Umeda
- Manato Hirabayashi
- Amadeusz Szymko
- Kenzo Lobos-Tsunekawa
- Masato Saeki
Authors
- Taichi Higashide
- Daisuke Nishimatsu
autoware_tensorrt_common
This package provides a high-level API for working with TensorRT. It simplifies loading, building, and executing TensorRT inference engines from ONNX models, and it includes utilities for profiling and managing TensorRT execution contexts, making it easier to integrate TensorRT-based packages in Autoware.
Usage
Here is an example of how to use the library. For the full API documentation, please refer to the Doxygen documentation (see the header file).
```cpp
#include <autoware/tensorrt_common/tensorrt_common.hpp>

#include <memory>
#include <utility>
#include <vector>

using autoware::tensorrt_common::TrtCommon;
using autoware::tensorrt_common::TrtCommonConfig;
using autoware::tensorrt_common::TensorInfo;
using autoware::tensorrt_common::NetworkIO;
using autoware::tensorrt_common::ProfileDims;

std::unique_ptr<TrtCommon> trt_common_;
```
Create a TrtCommon instance and set up the engine
- With minimal configuration.
```cpp
trt_common_ = std::make_unique<TrtCommon>(TrtCommonConfig("/path/to/onnx/model.onnx"));
trt_common_->setup();
```
- With full configuration.
```cpp
trt_common_ = std::make_unique<TrtCommon>(TrtCommonConfig(
  "/path/to/onnx/model.onnx", "fp16", "/path/to/engine/model.engine", (1ULL << 30U), -1, false));

std::vector<NetworkIO> network_io{
  NetworkIO("sample_input", {3, {-1, 64, 512}}), NetworkIO("sample_output", {1, {50}})};
std::vector<ProfileDims> profile_dims{
  ProfileDims("sample_input", {3, {1, 64, 512}}, {3, {3, 64, 512}}, {3, {9, 64, 512}})};

auto network_io_ptr = std::make_unique<std::vector<NetworkIO>>(network_io);
auto profile_dims_ptr = std::make_unique<std::vector<ProfileDims>>(profile_dims);

trt_common_->setup(std::move(profile_dims_ptr), std::move(network_io_ptr));
```
By defining network IO names and dimensions, an additional shape validation is performed after building or loading the engine. This is useful to ensure that the model is compatible with the current preprocessing code, which may consist of operations that depend on tensor shapes.
Profile dimensions specify the min, opt, and max dimensions for dynamic shapes.
Network IO and/or profile dimensions can be omitted if they are not needed, as sketched below.
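For instance, to register only the optimization profile (skipping the extra IO shape validation), a minimal sketch could look like the following. It assumes the network IO argument of `setup()` is optional and can simply be left out; check the header for the exact signature.

```cpp
// Hedged sketch: register only the optimization profile and skip the extra
// network IO validation. Assumes setup()'s network IO argument is optional.
std::vector<ProfileDims> profile_dims{
  ProfileDims("sample_input", {3, {1, 64, 512}}, {3, {3, 64, 512}}, {3, {9, 64, 512}})};
auto profile_dims_ptr = std::make_unique<std::vector<ProfileDims>>(profile_dims);
trt_common_->setup(std::move(profile_dims_ptr));
```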
Setting input and output tensors
```cpp
bool success = true;
success &= trt_common_->setTensor("sample_input", sample_input_d_.get(), nvinfer1::Dims{3, {var_size, 64, 512}});
success &= trt_common_->setTensor("sample_output", sample_output_d_.get());
return success;
```
Execute inference
```cpp
auto success = trt_common_->enqueueV3(stream_);
return success;
```
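For orientation, the snippets above can be combined into a single routine. The following is a self-contained sketch rather than the package's canonical usage: the tensor names and shapes match the samples above, while the function name `run_inference_once`, the device buffers, their sizes, and the CUDA stream handling are illustrative assumptions, with error handling reduced to the boolean returns shown earlier.

```cpp
#include <autoware/tensorrt_common/tensorrt_common.hpp>

#include <cuda_runtime_api.h>

#include <memory>

using autoware::tensorrt_common::TrtCommon;
using autoware::tensorrt_common::TrtCommonConfig;

bool run_inference_once()
{
  // Build (or load a cached) engine from the ONNX model; the path is a placeholder.
  auto trt_common = std::make_unique<TrtCommon>(TrtCommonConfig("/path/to/onnx/model.onnx"));
  trt_common->setup();

  // Allocate device buffers sized for the sample shapes used above (assumed float tensors).
  float * input_d = nullptr;
  float * output_d = nullptr;
  cudaMalloc(reinterpret_cast<void **>(&input_d), 3 * 64 * 512 * sizeof(float));
  cudaMalloc(reinterpret_cast<void **>(&output_d), 50 * sizeof(float));

  cudaStream_t stream;
  cudaStreamCreate(&stream);

  // Bind tensors, run inference asynchronously, then wait for completion.
  bool success = true;
  success &= trt_common->setTensor("sample_input", input_d, nvinfer1::Dims{3, {3, 64, 512}});
  success &= trt_common->setTensor("sample_output", output_d);
  success &= trt_common->enqueueV3(stream);
  cudaStreamSynchronize(stream);

  cudaStreamDestroy(stream);
  cudaFree(input_d);
  cudaFree(output_d);
  return success;
}
```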
Changelog for package autoware_tensorrt_common
0.46.0 (2025-06-20)
0.45.0 (2025-05-22)
- Merge remote-tracking branch 'origin/main' into tmp/notbot/bump_version_base
- chore: perception code owner update (#10645)
  - chore: update maintainers in multiple perception packages
  - Revert "chore: update maintainers in multiple perception packages". This reverts commit f2838c33d6cd82bd032039e2a12b9cb8ba6eb584.
  - chore: update maintainers in multiple perception packages
  - chore: add Kok Seang Tan as maintainer in multiple perception packages
- perf(autoware_tensorrt_common): set cudaSetDeviceFlags explicitly (#10523)
  - Synchronize CUDA stream by blocking instead of spin
  - Use blocking-sync in BEVFusion
  - Call cudaSetDeviceFlags in tensorrt_common
- Contributors: Taekjin LEE, TaikiYamada4, prime number
0.44.2 (2025-06-10)
0.44.1 (2025-05-01)
0.44.0 (2025-04-18)
- Merge remote-tracking branch 'origin/main' into humble
- feat: should be using NvInferRuntime.h (#10399)
- feat(autoware_tenssort_common): validate TensorRT engine version for cached engine (#10320)
  - autoware_tenssort_common: validate TensorRT engine version for cached engine
  - style(autoware_tensorrt_common): typo. Co-authored-by: Kenzo Lobos Tsunekawa <kenzo.lobos@tier4.jp>
  - style(autoware_tensorrt_common): typo. Co-authored-by: Kenzo Lobos Tsunekawa <kenzo.lobos@tier4.jp>
  - style(autoware_tensorrt_common): typo. Co-authored-by: Kenzo Lobos Tsunekawa <kenzo.lobos@tier4.jp>
  - docs(autoware_tensorrt_common): add source. Co-authored-by: Kenzo Lobos Tsunekawa <kenzo.lobos@tier4.jp>
- Contributors: Amadeusz Szymko, Ryohsuke Mitsudome, Yuxuan Liu
0.43.0 (2025-03-21)
- Merge remote-tracking branch 'origin/main' into chore/bump-version-0.43
- chore: rename from `autoware.universe` to `autoware_universe` (#10306)
- refactor: add autoware_cuda_dependency_meta (#10073)
- Contributors: Esteve Fernandez, Hayato Mizushima, Yutaka Kondo
0.42.0 (2025-03-03)
0.41.2 (2025-02-19)
- chore: bump version to 0.41.1 (#10088)
- Contributors: Ryohsuke Mitsudome
0.41.1 (2025-02-10)
0.41.0 (2025-01-29)
- Merge remote-tracking branch 'origin/main' into tmp/bot/bump_version_base
- refactor(autoware_tensorrt_common): multi-TensorRT compatibility & tensorrt_common as unified lib for all perception components (#9762)
  - refactor(autoware_tensorrt_common): multi-TensorRT compatibility & tensorrt_common as unified lib for all perception components
  - style(pre-commit): autofix
  - style(autoware_tensorrt_common): linting
File truncated at 100 lines; see the full file.
Package Dependencies
- ament_cmake
- cudnn_cmake_module
- tensorrt_cmake_module
- ament_lint_auto
- ament_lint_common
- autoware_cuda_dependency_meta
- rclcpp
System Dependencies
Dependant Packages
Launch files
Messages
Services
Plugins
Recent questions tagged autoware_tensorrt_common at Robotics Stack Exchange