Package Summary

| Field | Value |
| --- | --- |
| Tags | No category tags. |
| Version | 0.0.1 |
| License | Apache License 2.0 |
| Build type | AMENT_CMAKE |
| Use | RECOMMENDED |
Repository Summary

| Field | Value |
| --- | --- |
| Description | |
| Checkout URI | https://github.com/ieiauto/autodrrt.git |
| VCS Type | git |
| VCS Version | main |
| Last Updated | 2025-05-30 |
| Dev Status | UNKNOWN |
| Released | UNRELEASED |
| Tags | No category tags. |
Maintainers
- Daisuke Nishimatsu
- Dan Umeda
- Manato Hirabayashi
Authors
- Daisuke Nishimatsu
tensorrt_yolox
Purpose
This package detects target objects (e.g., cars, trucks, bicycles, and pedestrians) in an image using the YOLOX model.
Inner-workings / Algorithms
Cite
Zheng Ge, Songtao Liu, Feng Wang, Zeming Li, Jian Sun, “YOLOX: Exceeding YOLO Series in 2021”, arXiv preprint arXiv:2107.08430, 2021 [ref]
Inputs / Outputs
Input
| Name | Type | Description |
| --- | --- | --- |
| in/image | sensor_msgs/Image | The input image |
Output
| Name | Type | Description |
| --- | --- | --- |
| out/objects | tier4_perception_msgs/DetectedObjectsWithFeature | The detected objects with 2D bounding boxes |
| out/image | sensor_msgs/Image | The image with 2D bounding boxes for visualization |
Parameters
Core Parameters
| Name | Type | Default Value | Description |
| --- | --- | --- | --- |
| score_threshold | float | 0.3 | Detections with an objectness score below this value are discarded in the YOLOX decoding layer |
| nms_threshold | float | 0.7 | The IoU threshold used by NMS |

NOTE: These two parameters are only valid for "plain" models (described later).
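To make the semantics of the two thresholds concrete, here is a minimal plain-Python sketch of score filtering followed by greedy NMS. This is an illustration only, not the package's actual CUDA/TensorRT implementation; the function names are hypothetical.

```python
def iou(a, b):
    """IoU of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def filter_and_nms(detections, score_threshold=0.3, nms_threshold=0.7):
    """detections: list of ((x1, y1, x2, y2), score). Returns kept detections."""
    # Step 1: drop candidates whose objectness score is below score_threshold.
    candidates = [d for d in detections if d[1] >= score_threshold]
    # Step 2: greedy NMS -- keep the highest-scoring box, then suppress any
    # remaining box whose IoU with a kept box exceeds nms_threshold.
    candidates.sort(key=lambda d: d[1], reverse=True)
    kept = []
    for box, score in candidates:
        if all(iou(box, kb) <= nms_threshold for kb, _ in kept):
            kept.append((box, score))
    return kept
```

With the defaults above, a box overlapping a higher-scoring box at IoU > 0.7 is suppressed, and anything scoring below 0.3 never reaches NMS at all.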
Node Parameters
| Name | Type | Default Value | Description |
| --- | --- | --- | --- |
| model_path | string | "" | The ONNX file path for the YOLOX model |
| label_path | string | "" | The label file listing the names of detectable objects |
| precision | string | "fp16" | The inference mode: "fp32", "fp16", or "int8" |
| build_only | bool | false | If true, shut down the node after the TensorRT engine file is built |
| calibration_algorithm | string | "MinMax" | Calibration algorithm used for quantization when precision==int8. One of: "Entropy", "Legacy", "Percentile", "MinMax" |
| dla_core_id | int | -1 | If a positive ID is specified, the node assigns the inference task to that DLA core |
| quantize_first_layer | bool | false | If true, the first (input) layer runs in fp16. Valid only when precision==int8 |
| quantize_last_layer | bool | false | If true, the last (output) layer runs in fp16. Valid only when precision==int8 |
| profile_per_layer | bool | false | If true, per-layer profiling is enabled. Profiling may affect execution speed, so enable this flag only for development |
| clip_value | double | 0.0 | If a positive value is specified, each layer output is clipped to [0.0, clip_value]. Valid only when precision==int8; used to specify the dynamic range manually instead of running calibration |
| preprocess_on_gpu | bool | true | If true, pre-processing is performed on the GPU |
| calibration_image_list_path | string | "" | Path to a file listing image paths; those images are used for int8 quantization |
Assumptions / Known limits
The label attached to each detected 2D bounding box (i.e., out/objects) will be one of the following:

- CAR
- PEDESTRIAN ("PERSON" is also categorized as "PEDESTRIAN")
- BUS
- TRUCK
- BICYCLE
- MOTORCYCLE

If other labels (case-insensitive) appear in the file specified via the label_path parameter, those objects are labeled as UNKNOWN, although their detected rectangles are still drawn in the visualization result (out/image).
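The label normalization described above can be sketched as follows. This is a hypothetical illustration of the documented behavior, not the package's actual C++ code:

```python
# Labels the package recognizes in detected bounding boxes.
KNOWN_LABELS = {"CAR", "PEDESTRIAN", "BUS", "TRUCK", "BICYCLE", "MOTORCYCLE"}

def normalize_label(raw: str) -> str:
    """Map a raw label-file entry (case-insensitive) to an output label."""
    name = raw.strip().upper()
    if name == "PERSON":  # "PERSON" is folded into "PEDESTRIAN"
        return "PEDESTRIAN"
    # Anything outside the known set becomes UNKNOWN, but its rectangle
    # is still drawn in the visualization output.
    return name if name in KNOWN_LABELS else "UNKNOWN"
```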
ONNX model
A sample model (named yolox-tiny.onnx) is downloaded by an Ansible script during the environment preparation stage; if it is not, please follow the "Manual downloading of artifacts" instructions.
To accelerate non-maximum suppression (NMS), a common post-processing step after object detection inference, an EfficientNMS_TRT module is attached after the ordinary YOLOX (tiny) network.
The EfficientNMS_TRT module embeds fixed values for score_threshold and nms_threshold, so these parameters are ignored when users specify ONNX models that include this module.
This package accepts both ONNX models with EfficientNMS_TRT attached and models published by the official YOLOX repository (referred to here as "plain" models).
In addition to yolox-tiny.onnx, a custom model named yolox-sPlus-opt.onnx is also available.
This model is based on YOLOX-s and tuned to perform more accurate detection at almost the same execution speed as yolox-tiny.
To get better results with this model, users are recommended to use specific runtime arguments such as precision:=int8, calibration_algorithm:=Entropy, and clip_value:=6.0.
See launch/yolox_sPlus_opt.launch.xml for how this model can be used.
All models are automatically converted to TensorRT format.
The converted files are saved in the same directory as the specified ONNX files, with a .engine filename extension, and are reused on subsequent runs.
The conversion may take a while (typically 10 to 20 minutes), and inference is blocked until it completes, so on the first run it will take some time before detection results are published (or even before the topic appears in the topic list).
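The engine-caching convention can be sketched as below. The exact file-naming scheme is an assumption based on the description above (same directory, .engine extension); the node derives the actual path internally:

```python
from pathlib import Path

def engine_path_for(onnx_path: str) -> Path:
    """Cached TensorRT engine sits next to the ONNX file with a .engine extension."""
    return Path(onnx_path).with_suffix(".engine")

def needs_build(onnx_path: str) -> bool:
    """True when no cached engine exists yet; on that first run the
    ONNX-to-TensorRT conversion blocks inference until it completes."""
    return not engine_path_for(onnx_path).exists()
```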
Package acceptable model generation
To convert your own model saved in PyTorch's pth format into ONNX, you can use the converter offered by the official repository.
For convenience, only the procedure is described below; please refer to the official documentation for details.
Package Dependencies
System Dependencies
| Name |
| --- |
| libopencv-dev |
Dependent Packages

| Name | Deps |
| --- | --- |
| traffic_light_fine_detector | |
Launch files
- launch/yolox.launch.xml
  - input/image [default: /sensing/camera/camera0/image_rect_color]
  - output/objects [default: /perception/object_recognition/detection/rois0]
  - model_name [default: yolox-sPlus-T4-960x960-pseudo-finetune]
  - data_path [default: $(env HOME)/autoware_data]
  - model_path [default: $(var data_path)/tensorrt_yolox]
  - score_threshold [default: 0.35]
  - nms_threshold [default: 0.7]
  - precision [default: int8]
  - calibration_algorithm [default: Entropy]
  - dla_core_id [default: -1]
  - quantize_first_layer [default: false]
  - quantize_last_layer [default: false]
  - profile_per_layer [default: false]
  - clip_value [default: 6.0]
  - preprocess_on_gpu [default: true]
  - calibration_image_list_path [default: ]
  - use_decompress [default: true]
  - build_only [default: false]
- launch/yolox_s_plus_opt.launch.xml
  - input/image [default: /sensing/camera/camera0/image_rect_color]
  - output/objects [default: /perception/object_recognition/detection/rois0]
  - model_name [default: yolox-sPlus-T4-960x960-pseudo-finetune]
  - data_path [default: $(env HOME)/autoware_data]
  - model_path [default: $(var data_path)/tensorrt_yolox]
  - score_threshold [default: 0.35]
  - nms_threshold [default: 0.7]
  - precision [default: int8]
  - calibration_algorithm [default: Entropy]
  - dla_core_id [default: -1]
  - quantize_first_layer [default: false]
  - quantize_last_layer [default: false]
  - profile_per_layer [default: false]
  - clip_value [default: 6.0]
  - preprocess_on_gpu [default: true]
  - calibration_image_list_path [default: ]
  - use_decompress [default: true]
  - build_only [default: false]
- launch/yolox_tiny.launch.xml
  - input/image [default: /sensing/camera/camera0/image_rect_color]
  - output/objects [default: /perception/object_recognition/detection/rois0]
  - model_name [default: yolox-tiny]
  - data_path [default: $(env HOME)/autoware_data]
  - model_path [default: $(var data_path)/tensorrt_yolox]
  - score_threshold [default: 0.35]
  - nms_threshold [default: 0.7]
  - precision [default: fp16]
  - calibration_algorithm [default: MinMax]
  - dla_core_id [default: -1]
  - quantize_first_layer [default: false]
  - quantize_last_layer [default: false]
  - profile_per_layer [default: false]
  - clip_value [default: 0.0]
  - preprocess_on_gpu [default: true]
  - calibration_image_list_path [default: ]
  - use_decompress [default: true]
  - build_only [default: false]
Package Summary
Tags | No category tags. |
Version | 0.0.1 |
License | Apache License 2.0 |
Build type | AMENT_CMAKE |
Use | RECOMMENDED |
Repository Summary
Description | |
Checkout URI | https://github.com/ieiauto/autodrrt.git |
VCS Type | git |
VCS Version | main |
Last Updated | 2025-05-30 |
Dev Status | UNKNOWN |
Released | UNRELEASED |
Tags | No category tags. |
Contributing |
Help Wanted (-)
Good First Issues (-) Pull Requests to Review (-) |
Package Description
Additional Links
Maintainers
- Daisuke Nishimatsu
- Dan Umeda
- Manato Hirabayashi
Authors
- Daisuke Nishimatsu
tensorrt_yolox
Purpose
This package detects target objects e.g., cars, trucks, bicycles, and pedestrians on a image based on YOLOX model.
Inner-workings / Algorithms
Cite
Zheng Ge, Songtao Liu, Feng Wang, Zeming Li, Jian Sun, “YOLOX: Exceeding YOLO Series in 2021”, arXiv preprint arXiv:2107.08430, 2021 [ref]
Inputs / Outputs
Input
Name | Type | Description |
---|---|---|
in/image |
sensor_msgs/Image |
The input image |
Output
Name | Type | Description |
---|---|---|
out/objects |
tier4_perception_msgs/DetectedObjectsWithFeature |
The detected objects with 2D bounding boxes |
out/image |
sensor_msgs/Image |
The image with 2D bounding boxes for visualization |
Parameters
Core Parameters
Name | Type | Default Value | Description |
---|---|---|---|
score_threshold |
float | 0.3 | If the objectness score is less than this value, the object is ignored in yolox layer. |
nms_threshold |
float | 0.7 | The IoU threshold for NMS method |
NOTE: These two parameters are only valid for “plain” model (described later).
Node Parameters
Name | Type | Default Value | Description |
---|---|---|---|
model_path |
string | ”” | The onnx file name for yolox model |
label_path |
string | ”” | The label file with label names for detected objects written on it |
precision |
string | “fp16” | The inference mode: “fp32”, “fp16”, “int8” |
build_only |
bool | false | shutdown node after TensorRT engine file is built |
calibration_algorithm |
string | “MinMax” | Calibration algorithm to be used for quantization when precision==int8. Valid value is one of: Entropy”,(“Legacy” | “Percentile”), “MinMax”] |
dla_core_id |
int | -1 | If positive ID value is specified, the node assign inference task to the DLA core |
quantize_first_layer |
bool | false | If true, set the operating precision for the first (input) layer to be fp16. This option is valid only when precision==int8 |
quantize_last_layer |
bool | false | If true, set the operating precision for the last (output) layer to be fp16. This option is valid only when precision==int8 |
profile_per_layer |
bool | false | If true, profiler function will be enabled. Since the profile function may affect execution speed, it is recommended to set this flag true only for development purpose. |
clip_value |
double | 0.0 | If positive value is specified, the value of each layer output will be clipped between [0.0, clip_value]. This option is valid only when precision==int8 and used to manually specify the dynamic range instead of using any calibration |
preprocess_on_gpu |
bool | true | If true, pre-processing is performed on GPU |
calibration_image_list_path |
string | ”” | Path to a file which contains path to images. Those images will be used for int8 quantization. |
Assumptions / Known limits
The label contained in detected 2D bounding boxes (i.e., out/objects
) will be either one of the followings:
- CAR
- PEDESTRIAN (“PERSON” will also be categorized as “PEDESTRIAN”)
- BUS
- TRUCK
- BICYCLE
- MOTORCYCLE
If other labels (case insensitive) are contained in the file specified via the label_file
parameter,
those are labeled as UNKNOWN
, while detected rectangles are drawn in the visualization result (out/image
).
Onnx model
A sample model (named yolox-tiny.onnx
) is downloaded by ansible script on env preparation stage, if not, please, follow Manual downloading of artifacts.
To accelerate Non-maximum-suppression (NMS), which is one of the common post-process after object detection inference,
EfficientNMS_TRT
module is attached after the ordinal YOLOX (tiny) network.
The EfficientNMS_TRT
module contains fixed values for score_threshold
and nms_threshold
in it,
hence these parameters are ignored when users specify ONNX models including this module.
This package accepts both EfficientNMS_TRT
attached ONNXs and models published from the official YOLOX repository (we referred to them as “plain” models).
In addition to yolox-tiny.onnx
, a custom model named yolox-sPlus-opt.onnx
is either available.
This model is based on YOLOX-s and tuned to perform more accurate detection with almost comparable execution speed with yolox-tiny
.
To get better results with this model, users are recommended to use some specific running arguments
such as precision:=int8
, calibration_algorithm:=Entropy
, clip_value:=6.0
.
Users can refer launch/yolox_sPlus_opt.launch.xml
to see how this model can be used.
All models are automatically converted to TensorRT format.
These converted files will be saved in the same directory as specified ONNX files
with .engine
filename extension and reused from the next run.
The conversion process may take a while (typically 10 to 20 minutes) and the inference process is blocked
until complete the conversion, so it will take some time until detection results are published (even until appearing in the topic list) on the first run
Package acceptable model generation
To convert users’ own model that saved in PyTorch’s pth
format into ONNX,
users can exploit the converter offered by the official repository.
For the convenience, only procedures are described below.
Please refer the official document for more detail.
File truncated at 100 lines see the full file
Package Dependencies
System Dependencies
Name |
---|
libopencv-dev |
Dependant Packages
Name | Deps |
---|---|
traffic_light_fine_detector |
Launch files
- launch/yolox.launch.xml
-
- input/image [default: /sensing/camera/camera0/image_rect_color]
- output/objects [default: /perception/object_recognition/detection/rois0]
- model_name [default: yolox-sPlus-T4-960x960-pseudo-finetune]
- data_path [default: $(env HOME)/autoware_data]
- model_path [default: $(var data_path)/tensorrt_yolox]
- score_threshold [default: 0.35]
- nms_threshold [default: 0.7]
- precision [default: int8]
- calibration_algorithm [default: Entropy]
- dla_core_id [default: -1]
- quantize_first_layer [default: false]
- quantize_last_layer [default: false]
- profile_per_layer [default: false]
- clip_value [default: 6.0]
- preprocess_on_gpu [default: true]
- calibration_image_list_path [default: ]
- use_decompress [default: true]
- build_only [default: false]
- launch/yolox_s_plus_opt.launch.xml
-
- input/image [default: /sensing/camera/camera0/image_rect_color]
- output/objects [default: /perception/object_recognition/detection/rois0]
- model_name [default: yolox-sPlus-T4-960x960-pseudo-finetune]
- data_path [default: $(env HOME)/autoware_data]
- model_path [default: $(var data_path)/tensorrt_yolox]
- score_threshold [default: 0.35]
- nms_threshold [default: 0.7]
- precision [default: int8]
- calibration_algorithm [default: Entropy]
- dla_core_id [default: -1]
- quantize_first_layer [default: false]
- quantize_last_layer [default: false]
- profile_per_layer [default: false]
- clip_value [default: 6.0]
- preprocess_on_gpu [default: true]
- calibration_image_list_path [default: ]
- use_decompress [default: true]
- build_only [default: false]
- launch/yolox_tiny.launch.xml
-
- input/image [default: /sensing/camera/camera0/image_rect_color]
- output/objects [default: /perception/object_recognition/detection/rois0]
- model_name [default: yolox-tiny]
- data_path [default: $(env HOME)/autoware_data]
- model_path [default: $(var data_path)/tensorrt_yolox]
- score_threshold [default: 0.35]
- nms_threshold [default: 0.7]
- precision [default: fp16]
- calibration_algorithm [default: MinMax]
- dla_core_id [default: -1]
- quantize_first_layer [default: false]
- quantize_last_layer [default: false]
- profile_per_layer [default: false]
- clip_value [default: 0.0]
- preprocess_on_gpu [default: true]
- calibration_image_list_path [default: ]
- use_decompress [default: true]
- build_only [default: false]
Messages
Services
Plugins
Recent questions tagged tensorrt_yolox at Robotics Stack Exchange
Package Summary
Tags | No category tags. |
Version | 0.0.1 |
License | Apache License 2.0 |
Build type | AMENT_CMAKE |
Use | RECOMMENDED |
Repository Summary
Description | |
Checkout URI | https://github.com/ieiauto/autodrrt.git |
VCS Type | git |
VCS Version | main |
Last Updated | 2025-05-30 |
Dev Status | UNKNOWN |
Released | UNRELEASED |
Tags | No category tags. |
Contributing |
Help Wanted (-)
Good First Issues (-) Pull Requests to Review (-) |
Package Description
Additional Links
Maintainers
- Daisuke Nishimatsu
- Dan Umeda
- Manato Hirabayashi
Authors
- Daisuke Nishimatsu
tensorrt_yolox
Purpose
This package detects target objects e.g., cars, trucks, bicycles, and pedestrians on a image based on YOLOX model.
Inner-workings / Algorithms
Cite
Zheng Ge, Songtao Liu, Feng Wang, Zeming Li, Jian Sun, “YOLOX: Exceeding YOLO Series in 2021”, arXiv preprint arXiv:2107.08430, 2021 [ref]
Inputs / Outputs
Input
Name | Type | Description |
---|---|---|
in/image |
sensor_msgs/Image |
The input image |
Output
Name | Type | Description |
---|---|---|
out/objects |
tier4_perception_msgs/DetectedObjectsWithFeature |
The detected objects with 2D bounding boxes |
out/image |
sensor_msgs/Image |
The image with 2D bounding boxes for visualization |
Parameters
Core Parameters
Name | Type | Default Value | Description |
---|---|---|---|
score_threshold |
float | 0.3 | If the objectness score is less than this value, the object is ignored in yolox layer. |
nms_threshold |
float | 0.7 | The IoU threshold for NMS method |
NOTE: These two parameters are only valid for “plain” model (described later).
Node Parameters
Name | Type | Default Value | Description |
---|---|---|---|
model_path |
string | ”” | The onnx file name for yolox model |
label_path |
string | ”” | The label file with label names for detected objects written on it |
precision |
string | “fp16” | The inference mode: “fp32”, “fp16”, “int8” |
build_only |
bool | false | shutdown node after TensorRT engine file is built |
calibration_algorithm |
string | “MinMax” | Calibration algorithm to be used for quantization when precision==int8. Valid value is one of: Entropy”,(“Legacy” | “Percentile”), “MinMax”] |
dla_core_id |
int | -1 | If positive ID value is specified, the node assign inference task to the DLA core |
quantize_first_layer |
bool | false | If true, set the operating precision for the first (input) layer to be fp16. This option is valid only when precision==int8 |
quantize_last_layer |
bool | false | If true, set the operating precision for the last (output) layer to be fp16. This option is valid only when precision==int8 |
profile_per_layer |
bool | false | If true, profiler function will be enabled. Since the profile function may affect execution speed, it is recommended to set this flag true only for development purpose. |
clip_value |
double | 0.0 | If positive value is specified, the value of each layer output will be clipped between [0.0, clip_value]. This option is valid only when precision==int8 and used to manually specify the dynamic range instead of using any calibration |
preprocess_on_gpu |
bool | true | If true, pre-processing is performed on GPU |
calibration_image_list_path |
string | ”” | Path to a file which contains path to images. Those images will be used for int8 quantization. |
Assumptions / Known limits
The label contained in detected 2D bounding boxes (i.e., out/objects
) will be either one of the followings:
- CAR
- PEDESTRIAN (“PERSON” will also be categorized as “PEDESTRIAN”)
- BUS
- TRUCK
- BICYCLE
- MOTORCYCLE
If other labels (case insensitive) are contained in the file specified via the label_file
parameter,
those are labeled as UNKNOWN
, while detected rectangles are drawn in the visualization result (out/image
).
Onnx model
A sample model (named yolox-tiny.onnx
) is downloaded by ansible script on env preparation stage, if not, please, follow Manual downloading of artifacts.
To accelerate Non-maximum-suppression (NMS), which is one of the common post-process after object detection inference,
EfficientNMS_TRT
module is attached after the ordinal YOLOX (tiny) network.
The EfficientNMS_TRT
module contains fixed values for score_threshold
and nms_threshold
in it,
hence these parameters are ignored when users specify ONNX models including this module.
This package accepts both EfficientNMS_TRT
attached ONNXs and models published from the official YOLOX repository (we referred to them as “plain” models).
In addition to yolox-tiny.onnx
, a custom model named yolox-sPlus-opt.onnx
is either available.
This model is based on YOLOX-s and tuned to perform more accurate detection with almost comparable execution speed with yolox-tiny
.
To get better results with this model, users are recommended to use some specific running arguments
such as precision:=int8
, calibration_algorithm:=Entropy
, clip_value:=6.0
.
Users can refer launch/yolox_sPlus_opt.launch.xml
to see how this model can be used.
All models are automatically converted to TensorRT format.
These converted files will be saved in the same directory as specified ONNX files
with .engine
filename extension and reused from the next run.
The conversion process may take a while (typically 10 to 20 minutes) and the inference process is blocked
until complete the conversion, so it will take some time until detection results are published (even until appearing in the topic list) on the first run
Package acceptable model generation
To convert users’ own model that saved in PyTorch’s pth
format into ONNX,
users can exploit the converter offered by the official repository.
For the convenience, only procedures are described below.
Please refer the official document for more detail.
File truncated at 100 lines see the full file
Package Dependencies
System Dependencies
Name |
---|
libopencv-dev |
Dependant Packages
Name | Deps |
---|---|
traffic_light_fine_detector |
Launch files
- launch/yolox.launch.xml
-
- input/image [default: /sensing/camera/camera0/image_rect_color]
- output/objects [default: /perception/object_recognition/detection/rois0]
- model_name [default: yolox-sPlus-T4-960x960-pseudo-finetune]
- data_path [default: $(env HOME)/autoware_data]
- model_path [default: $(var data_path)/tensorrt_yolox]
- score_threshold [default: 0.35]
- nms_threshold [default: 0.7]
- precision [default: int8]
- calibration_algorithm [default: Entropy]
- dla_core_id [default: -1]
- quantize_first_layer [default: false]
- quantize_last_layer [default: false]
- profile_per_layer [default: false]
- clip_value [default: 6.0]
- preprocess_on_gpu [default: true]
- calibration_image_list_path [default: ]
- use_decompress [default: true]
- build_only [default: false]
- launch/yolox_s_plus_opt.launch.xml
-
- input/image [default: /sensing/camera/camera0/image_rect_color]
- output/objects [default: /perception/object_recognition/detection/rois0]
- model_name [default: yolox-sPlus-T4-960x960-pseudo-finetune]
- data_path [default: $(env HOME)/autoware_data]
- model_path [default: $(var data_path)/tensorrt_yolox]
- score_threshold [default: 0.35]
- nms_threshold [default: 0.7]
- precision [default: int8]
- calibration_algorithm [default: Entropy]
- dla_core_id [default: -1]
- quantize_first_layer [default: false]
- quantize_last_layer [default: false]
- profile_per_layer [default: false]
- clip_value [default: 6.0]
- preprocess_on_gpu [default: true]
- calibration_image_list_path [default: ]
- use_decompress [default: true]
- build_only [default: false]
- launch/yolox_tiny.launch.xml
-
- input/image [default: /sensing/camera/camera0/image_rect_color]
- output/objects [default: /perception/object_recognition/detection/rois0]
- model_name [default: yolox-tiny]
- data_path [default: $(env HOME)/autoware_data]
- model_path [default: $(var data_path)/tensorrt_yolox]
- score_threshold [default: 0.35]
- nms_threshold [default: 0.7]
- precision [default: fp16]
- calibration_algorithm [default: MinMax]
- dla_core_id [default: -1]
- quantize_first_layer [default: false]
- quantize_last_layer [default: false]
- profile_per_layer [default: false]
- clip_value [default: 0.0]
- preprocess_on_gpu [default: true]
- calibration_image_list_path [default: ]
- use_decompress [default: true]
- build_only [default: false]
Messages
Services
Plugins
Recent questions tagged tensorrt_yolox at Robotics Stack Exchange
Package Summary
Tags | No category tags. |
Version | 0.0.1 |
License | Apache License 2.0 |
Build type | AMENT_CMAKE |
Use | RECOMMENDED |
Repository Summary
Description | |
Checkout URI | https://github.com/ieiauto/autodrrt.git |
VCS Type | git |
VCS Version | main |
Last Updated | 2025-05-30 |
Dev Status | UNKNOWN |
Released | UNRELEASED |
Tags | No category tags. |
Contributing |
Help Wanted (-)
Good First Issues (-) Pull Requests to Review (-) |
Package Description
Additional Links
Maintainers
- Daisuke Nishimatsu
- Dan Umeda
- Manato Hirabayashi
Authors
- Daisuke Nishimatsu
tensorrt_yolox
Purpose
This package detects target objects e.g., cars, trucks, bicycles, and pedestrians on a image based on YOLOX model.
Inner-workings / Algorithms
Cite
Zheng Ge, Songtao Liu, Feng Wang, Zeming Li, Jian Sun, “YOLOX: Exceeding YOLO Series in 2021”, arXiv preprint arXiv:2107.08430, 2021 [ref]
Inputs / Outputs
Input
Name | Type | Description |
---|---|---|
in/image |
sensor_msgs/Image |
The input image |
Output
Name | Type | Description |
---|---|---|
out/objects |
tier4_perception_msgs/DetectedObjectsWithFeature |
The detected objects with 2D bounding boxes |
out/image |
sensor_msgs/Image |
The image with 2D bounding boxes for visualization |
Parameters
Core Parameters
Name | Type | Default Value | Description |
---|---|---|---|
score_threshold |
float | 0.3 | If the objectness score is less than this value, the object is ignored in yolox layer. |
nms_threshold |
float | 0.7 | The IoU threshold for NMS method |
NOTE: These two parameters are only valid for “plain” model (described later).
Node Parameters
Name | Type | Default Value | Description |
---|---|---|---|
model_path |
string | ”” | The onnx file name for yolox model |
label_path |
string | ”” | The label file with label names for detected objects written on it |
precision |
string | “fp16” | The inference mode: “fp32”, “fp16”, “int8” |
build_only |
bool | false | shutdown node after TensorRT engine file is built |
calibration_algorithm |
string | “MinMax” | Calibration algorithm to be used for quantization when precision==int8. Valid value is one of: Entropy”,(“Legacy” | “Percentile”), “MinMax”] |
dla_core_id |
int | -1 | If positive ID value is specified, the node assign inference task to the DLA core |
quantize_first_layer |
bool | false | If true, set the operating precision for the first (input) layer to be fp16. This option is valid only when precision==int8 |
quantize_last_layer |
bool | false | If true, set the operating precision for the last (output) layer to be fp16. This option is valid only when precision==int8 |
profile_per_layer |
bool | false | If true, profiler function will be enabled. Since the profile function may affect execution speed, it is recommended to set this flag true only for development purpose. |
clip_value |
double | 0.0 | If positive value is specified, the value of each layer output will be clipped between [0.0, clip_value]. This option is valid only when precision==int8 and used to manually specify the dynamic range instead of using any calibration |
preprocess_on_gpu |
bool | true | If true, pre-processing is performed on GPU |
calibration_image_list_path |
string | ”” | Path to a file which contains path to images. Those images will be used for int8 quantization. |
Assumptions / Known limits
The label contained in detected 2D bounding boxes (i.e., out/objects
) will be either one of the followings:
- CAR
- PEDESTRIAN (“PERSON” will also be categorized as “PEDESTRIAN”)
- BUS
- TRUCK
- BICYCLE
- MOTORCYCLE
If other labels (case insensitive) are contained in the file specified via the label_file
parameter,
those are labeled as UNKNOWN
, while detected rectangles are drawn in the visualization result (out/image
).
Onnx model
A sample model (named yolox-tiny.onnx
) is downloaded by ansible script on env preparation stage, if not, please, follow Manual downloading of artifacts.
To accelerate Non-maximum-suppression (NMS), which is one of the common post-process after object detection inference,
EfficientNMS_TRT
module is attached after the ordinal YOLOX (tiny) network.
The EfficientNMS_TRT
module contains fixed values for score_threshold
and nms_threshold
in it,
hence these parameters are ignored when users specify ONNX models including this module.
This package accepts both EfficientNMS_TRT
attached ONNXs and models published from the official YOLOX repository (we referred to them as “plain” models).
In addition to yolox-tiny.onnx
, a custom model named yolox-sPlus-opt.onnx
is either available.
This model is based on YOLOX-s and tuned to perform more accurate detection with almost comparable execution speed with yolox-tiny
.
To get better results with this model, users are recommended to use some specific running arguments
such as precision:=int8
, calibration_algorithm:=Entropy
, clip_value:=6.0
.
Users can refer launch/yolox_sPlus_opt.launch.xml
to see how this model can be used.
All models are automatically converted to TensorRT format.
These converted files will be saved in the same directory as specified ONNX files
with .engine
filename extension and reused from the next run.
The conversion process may take a while (typically 10 to 20 minutes) and the inference process is blocked
until complete the conversion, so it will take some time until detection results are published (even until appearing in the topic list) on the first run
Package acceptable model generation
To convert users’ own model that saved in PyTorch’s pth
format into ONNX,
users can exploit the converter offered by the official repository.
For the convenience, only procedures are described below.
Please refer the official document for more detail.
File truncated at 100 lines see the full file
Package Dependencies
System Dependencies
Name |
---|
libopencv-dev |
Dependant Packages
Name | Deps |
---|---|
traffic_light_fine_detector |
Launch files
- launch/yolox.launch.xml
-
- input/image [default: /sensing/camera/camera0/image_rect_color]
- output/objects [default: /perception/object_recognition/detection/rois0]
- model_name [default: yolox-sPlus-T4-960x960-pseudo-finetune]
- data_path [default: $(env HOME)/autoware_data]
- model_path [default: $(var data_path)/tensorrt_yolox]
- score_threshold [default: 0.35]
- nms_threshold [default: 0.7]
- precision [default: int8]
- calibration_algorithm [default: Entropy]
- dla_core_id [default: -1]
- quantize_first_layer [default: false]
- quantize_last_layer [default: false]
- profile_per_layer [default: false]
- clip_value [default: 6.0]
- preprocess_on_gpu [default: true]
- calibration_image_list_path [default: ]
- use_decompress [default: true]
- build_only [default: false]
- launch/yolox_s_plus_opt.launch.xml
-
- input/image [default: /sensing/camera/camera0/image_rect_color]
- output/objects [default: /perception/object_recognition/detection/rois0]
- model_name [default: yolox-sPlus-T4-960x960-pseudo-finetune]
- data_path [default: $(env HOME)/autoware_data]
- model_path [default: $(var data_path)/tensorrt_yolox]
- score_threshold [default: 0.35]
- nms_threshold [default: 0.7]
- precision [default: int8]
- calibration_algorithm [default: Entropy]
- dla_core_id [default: -1]
- quantize_first_layer [default: false]
- quantize_last_layer [default: false]
- profile_per_layer [default: false]
- clip_value [default: 6.0]
- preprocess_on_gpu [default: true]
- calibration_image_list_path [default: ]
- use_decompress [default: true]
- build_only [default: false]
- launch/yolox_tiny.launch.xml
- input/image [default: /sensing/camera/camera0/image_rect_color]
- output/objects [default: /perception/object_recognition/detection/rois0]
- model_name [default: yolox-tiny]
- data_path [default: $(env HOME)/autoware_data]
- model_path [default: $(var data_path)/tensorrt_yolox]
- score_threshold [default: 0.35]
- nms_threshold [default: 0.7]
- precision [default: fp16]
- calibration_algorithm [default: MinMax]
- dla_core_id [default: -1]
- quantize_first_layer [default: false]
- quantize_last_layer [default: false]
- profile_per_layer [default: false]
- clip_value [default: 0.0]
- preprocess_on_gpu [default: true]
- calibration_image_list_path [default: ]
- use_decompress [default: true]
- build_only [default: false]
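For quick reference, the launch argument defaults that differ between the three launch files above can be collected into a small lookup. This snippet is purely illustrative (the dictionary and helper are not part of the package); the values are copied from the tables above:

```python
# Only the arguments whose defaults differ between launch files are listed.
LAUNCH_DEFAULTS = {
    "yolox.launch.xml": {
        "precision": "int8", "calibration_algorithm": "Entropy", "clip_value": 6.0,
    },
    "yolox_s_plus_opt.launch.xml": {
        "precision": "int8", "calibration_algorithm": "Entropy", "clip_value": 6.0,
    },
    "yolox_tiny.launch.xml": {
        "precision": "fp16", "calibration_algorithm": "MinMax", "clip_value": 0.0,
    },
}

def default_for(launch_file: str, arg: str):
    # Look up the default value of one launch argument.
    return LAUNCH_DEFAULTS[launch_file][arg]
```

For example, `default_for("yolox_tiny.launch.xml", "precision")` returns `"fp16"`, while the two int8 launch files pair `calibration_algorithm: Entropy` with `clip_value: 6.0`.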
Package Summary
Tags | No category tags. |
Version | 0.0.1 |
License | Apache License 2.0 |
Build type | AMENT_CMAKE |
Use | RECOMMENDED |
Repository Summary
Description | |
Checkout URI | https://github.com/ieiauto/autodrrt.git |
VCS Type | git |
VCS Version | main |
Last Updated | 2025-05-30 |
Dev Status | UNKNOWN |
Released | UNRELEASED |
Tags | No category tags. |
Contributing |
Help Wanted (-)
Good First Issues (-) Pull Requests to Review (-) |
Package Description
Additional Links
Maintainers
- Daisuke Nishimatsu
- Dan Umeda
- Manato Hirabayashi
Authors
- Daisuke Nishimatsu
tensorrt_yolox
Purpose
This package detects target objects e.g., cars, trucks, bicycles, and pedestrians on a image based on YOLOX model.
Inner-workings / Algorithms
Cite
Zheng Ge, Songtao Liu, Feng Wang, Zeming Li, Jian Sun, “YOLOX: Exceeding YOLO Series in 2021”, arXiv preprint arXiv:2107.08430, 2021 [ref]
Inputs / Outputs
Input
Name | Type | Description |
---|---|---|
in/image |
sensor_msgs/Image |
The input image |
Output
Name | Type | Description |
---|---|---|
out/objects |
tier4_perception_msgs/DetectedObjectsWithFeature |
The detected objects with 2D bounding boxes |
out/image |
sensor_msgs/Image |
The image with 2D bounding boxes for visualization |
Parameters
Core Parameters
Name | Type | Default Value | Description |
---|---|---|---|
score_threshold |
float | 0.3 | If the objectness score is less than this value, the object is ignored in yolox layer. |
nms_threshold |
float | 0.7 | The IoU threshold for NMS method |
NOTE: These two parameters are only valid for “plain” model (described later).
Node Parameters
Name | Type | Default Value | Description |
---|---|---|---|
model_path |
string | ”” | The onnx file name for yolox model |
label_path |
string | ”” | The label file with label names for detected objects written on it |
precision |
string | “fp16” | The inference mode: “fp32”, “fp16”, “int8” |
build_only |
bool | false | shutdown node after TensorRT engine file is built |
calibration_algorithm |
string | “MinMax” | Calibration algorithm to be used for quantization when precision==int8. Valid value is one of: Entropy”,(“Legacy” | “Percentile”), “MinMax”] |
dla_core_id |
int | -1 | If positive ID value is specified, the node assign inference task to the DLA core |
quantize_first_layer |
bool | false | If true, set the operating precision for the first (input) layer to be fp16. This option is valid only when precision==int8 |
quantize_last_layer |
bool | false | If true, set the operating precision for the last (output) layer to be fp16. This option is valid only when precision==int8 |
profile_per_layer |
bool | false | If true, profiler function will be enabled. Since the profile function may affect execution speed, it is recommended to set this flag true only for development purpose. |
clip_value |
double | 0.0 | If positive value is specified, the value of each layer output will be clipped between [0.0, clip_value]. This option is valid only when precision==int8 and used to manually specify the dynamic range instead of using any calibration |
preprocess_on_gpu |
bool | true | If true, pre-processing is performed on GPU |
calibration_image_list_path |
string | ”” | Path to a file which contains path to images. Those images will be used for int8 quantization. |
Assumptions / Known limits
The label contained in detected 2D bounding boxes (i.e., out/objects
) will be either one of the followings:
- CAR
- PEDESTRIAN (“PERSON” will also be categorized as “PEDESTRIAN”)
- BUS
- TRUCK
- BICYCLE
- MOTORCYCLE
If other labels (case insensitive) are contained in the file specified via the label_file
parameter,
those are labeled as UNKNOWN
, while detected rectangles are drawn in the visualization result (out/image
).
Onnx model
A sample model (named yolox-tiny.onnx
) is downloaded by ansible script on env preparation stage, if not, please, follow Manual downloading of artifacts.
To accelerate Non-maximum-suppression (NMS), which is one of the common post-process after object detection inference,
EfficientNMS_TRT
module is attached after the ordinal YOLOX (tiny) network.
The EfficientNMS_TRT
module contains fixed values for score_threshold
and nms_threshold
in it,
hence these parameters are ignored when users specify ONNX models including this module.
This package accepts both EfficientNMS_TRT
attached ONNXs and models published from the official YOLOX repository (we referred to them as “plain” models).
In addition to yolox-tiny.onnx
, a custom model named yolox-sPlus-opt.onnx
is either available.
This model is based on YOLOX-s and tuned to perform more accurate detection with almost comparable execution speed with yolox-tiny
.
To get better results with this model, users are recommended to use some specific running arguments
such as precision:=int8
, calibration_algorithm:=Entropy
, clip_value:=6.0
.
Users can refer launch/yolox_sPlus_opt.launch.xml
to see how this model can be used.
All models are automatically converted to TensorRT format.
These converted files will be saved in the same directory as specified ONNX files
with .engine
filename extension and reused from the next run.
The conversion process may take a while (typically 10 to 20 minutes) and the inference process is blocked
until complete the conversion, so it will take some time until detection results are published (even until appearing in the topic list) on the first run
Package acceptable model generation
To convert users’ own model that saved in PyTorch’s pth
format into ONNX,
users can exploit the converter offered by the official repository.
For the convenience, only procedures are described below.
Please refer the official document for more detail.
File truncated at 100 lines see the full file
Package Dependencies
System Dependencies
Name |
---|
libopencv-dev |
Dependant Packages
Name | Deps |
---|---|
traffic_light_fine_detector |
Launch files
- launch/yolox.launch.xml
-
- input/image [default: /sensing/camera/camera0/image_rect_color]
- output/objects [default: /perception/object_recognition/detection/rois0]
- model_name [default: yolox-sPlus-T4-960x960-pseudo-finetune]
- data_path [default: $(env HOME)/autoware_data]
- model_path [default: $(var data_path)/tensorrt_yolox]
- score_threshold [default: 0.35]
- nms_threshold [default: 0.7]
- precision [default: int8]
- calibration_algorithm [default: Entropy]
- dla_core_id [default: -1]
- quantize_first_layer [default: false]
- quantize_last_layer [default: false]
- profile_per_layer [default: false]
- clip_value [default: 6.0]
- preprocess_on_gpu [default: true]
- calibration_image_list_path [default: ]
- use_decompress [default: true]
- build_only [default: false]
- launch/yolox_s_plus_opt.launch.xml
-
- input/image [default: /sensing/camera/camera0/image_rect_color]
- output/objects [default: /perception/object_recognition/detection/rois0]
- model_name [default: yolox-sPlus-T4-960x960-pseudo-finetune]
- data_path [default: $(env HOME)/autoware_data]
- model_path [default: $(var data_path)/tensorrt_yolox]
- score_threshold [default: 0.35]
- nms_threshold [default: 0.7]
- precision [default: int8]
- calibration_algorithm [default: Entropy]
- dla_core_id [default: -1]
- quantize_first_layer [default: false]
- quantize_last_layer [default: false]
- profile_per_layer [default: false]
- clip_value [default: 6.0]
- preprocess_on_gpu [default: true]
- calibration_image_list_path [default: ]
- use_decompress [default: true]
- build_only [default: false]
- launch/yolox_tiny.launch.xml
-
- input/image [default: /sensing/camera/camera0/image_rect_color]
- output/objects [default: /perception/object_recognition/detection/rois0]
- model_name [default: yolox-tiny]
- data_path [default: $(env HOME)/autoware_data]
- model_path [default: $(var data_path)/tensorrt_yolox]
- score_threshold [default: 0.35]
- nms_threshold [default: 0.7]
- precision [default: fp16]
- calibration_algorithm [default: MinMax]
- dla_core_id [default: -1]
- quantize_first_layer [default: false]
- quantize_last_layer [default: false]
- profile_per_layer [default: false]
- clip_value [default: 0.0]
- preprocess_on_gpu [default: true]
- calibration_image_list_path [default: ]
- use_decompress [default: true]
- build_only [default: false]
Messages
Services
Plugins
Recent questions tagged tensorrt_yolox at Robotics Stack Exchange
Package Summary
Tags | No category tags. |
Version | 0.0.1 |
License | Apache License 2.0 |
Build type | AMENT_CMAKE |
Use | RECOMMENDED |
Repository Summary
Description | |
Checkout URI | https://github.com/ieiauto/autodrrt.git |
VCS Type | git |
VCS Version | main |
Last Updated | 2025-05-30 |
Dev Status | UNKNOWN |
Released | UNRELEASED |
Tags | No category tags. |
Contributing |
Help Wanted (-)
Good First Issues (-) Pull Requests to Review (-) |
Package Description
Additional Links
Maintainers
- Daisuke Nishimatsu
- Dan Umeda
- Manato Hirabayashi
Authors
- Daisuke Nishimatsu
tensorrt_yolox
Purpose
This package detects target objects e.g., cars, trucks, bicycles, and pedestrians on a image based on YOLOX model.
Inner-workings / Algorithms
Cite
Zheng Ge, Songtao Liu, Feng Wang, Zeming Li, Jian Sun, “YOLOX: Exceeding YOLO Series in 2021”, arXiv preprint arXiv:2107.08430, 2021 [ref]
Inputs / Outputs
Input
Name | Type | Description |
---|---|---|
in/image |
sensor_msgs/Image |
The input image |
Output
Name | Type | Description |
---|---|---|
out/objects |
tier4_perception_msgs/DetectedObjectsWithFeature |
The detected objects with 2D bounding boxes |
out/image |
sensor_msgs/Image |
The image with 2D bounding boxes for visualization |
Parameters
Core Parameters
Name | Type | Default Value | Description |
---|---|---|---|
score_threshold |
float | 0.3 | If the objectness score is less than this value, the object is ignored in yolox layer. |
nms_threshold |
float | 0.7 | The IoU threshold for NMS method |
NOTE: These two parameters are only valid for “plain” model (described later).
Node Parameters
Name | Type | Default Value | Description |
---|---|---|---|
model_path |
string | ”” | The onnx file name for yolox model |
label_path |
string | ”” | The label file with label names for detected objects written on it |
precision |
string | “fp16” | The inference mode: “fp32”, “fp16”, “int8” |
build_only |
bool | false | shutdown node after TensorRT engine file is built |
calibration_algorithm |
string | “MinMax” | Calibration algorithm to be used for quantization when precision==int8. Valid value is one of: Entropy”,(“Legacy” | “Percentile”), “MinMax”] |
dla_core_id |
int | -1 | If positive ID value is specified, the node assign inference task to the DLA core |
quantize_first_layer |
bool | false | If true, set the operating precision for the first (input) layer to be fp16. This option is valid only when precision==int8 |
quantize_last_layer |
bool | false | If true, set the operating precision for the last (output) layer to be fp16. This option is valid only when precision==int8 |
profile_per_layer |
bool | false | If true, profiler function will be enabled. Since the profile function may affect execution speed, it is recommended to set this flag true only for development purpose. |
clip_value |
double | 0.0 | If positive value is specified, the value of each layer output will be clipped between [0.0, clip_value]. This option is valid only when precision==int8 and used to manually specify the dynamic range instead of using any calibration |
preprocess_on_gpu |
bool | true | If true, pre-processing is performed on GPU |
calibration_image_list_path |
string | ”” | Path to a file which contains path to images. Those images will be used for int8 quantization. |
Assumptions / Known limits
The label contained in detected 2D bounding boxes (i.e., out/objects
) will be either one of the followings:
- CAR
- PEDESTRIAN (“PERSON” will also be categorized as “PEDESTRIAN”)
- BUS
- TRUCK
- BICYCLE
- MOTORCYCLE
If other labels (case insensitive) are contained in the file specified via the label_file
parameter,
those are labeled as UNKNOWN
, while detected rectangles are drawn in the visualization result (out/image
).
Onnx model
A sample model (named yolox-tiny.onnx
) is downloaded by ansible script on env preparation stage, if not, please, follow Manual downloading of artifacts.
To accelerate Non-maximum-suppression (NMS), which is one of the common post-process after object detection inference,
EfficientNMS_TRT
module is attached after the ordinal YOLOX (tiny) network.
The EfficientNMS_TRT
module contains fixed values for score_threshold
and nms_threshold
in it,
hence these parameters are ignored when users specify ONNX models including this module.
This package accepts both EfficientNMS_TRT
attached ONNXs and models published from the official YOLOX repository (we referred to them as “plain” models).
In addition to yolox-tiny.onnx
, a custom model named yolox-sPlus-opt.onnx
is either available.
This model is based on YOLOX-s and tuned to perform more accurate detection with almost comparable execution speed with yolox-tiny
.
To get better results with this model, users are recommended to use some specific running arguments
such as precision:=int8
, calibration_algorithm:=Entropy
, clip_value:=6.0
.
Users can refer launch/yolox_sPlus_opt.launch.xml
to see how this model can be used.
All models are automatically converted to TensorRT format.
These converted files will be saved in the same directory as specified ONNX files
with .engine
filename extension and reused from the next run.
The conversion process may take a while (typically 10 to 20 minutes) and the inference process is blocked
until complete the conversion, so it will take some time until detection results are published (even until appearing in the topic list) on the first run
Package acceptable model generation
To convert users’ own model that saved in PyTorch’s pth
format into ONNX,
users can exploit the converter offered by the official repository.
For the convenience, only procedures are described below.
Please refer the official document for more detail.
File truncated at 100 lines see the full file
Package Dependencies
System Dependencies
Name |
---|
libopencv-dev |
Dependant Packages
Name | Deps |
---|---|
traffic_light_fine_detector |
Launch files
- launch/yolox.launch.xml
-
- input/image [default: /sensing/camera/camera0/image_rect_color]
- output/objects [default: /perception/object_recognition/detection/rois0]
- model_name [default: yolox-sPlus-T4-960x960-pseudo-finetune]
- data_path [default: $(env HOME)/autoware_data]
- model_path [default: $(var data_path)/tensorrt_yolox]
- score_threshold [default: 0.35]
- nms_threshold [default: 0.7]
- precision [default: int8]
- calibration_algorithm [default: Entropy]
- dla_core_id [default: -1]
- quantize_first_layer [default: false]
- quantize_last_layer [default: false]
- profile_per_layer [default: false]
- clip_value [default: 6.0]
- preprocess_on_gpu [default: true]
- calibration_image_list_path [default: ]
- use_decompress [default: true]
- build_only [default: false]
- launch/yolox_s_plus_opt.launch.xml
-
- input/image [default: /sensing/camera/camera0/image_rect_color]
- output/objects [default: /perception/object_recognition/detection/rois0]
- model_name [default: yolox-sPlus-T4-960x960-pseudo-finetune]
- data_path [default: $(env HOME)/autoware_data]
- model_path [default: $(var data_path)/tensorrt_yolox]
- score_threshold [default: 0.35]
- nms_threshold [default: 0.7]
- precision [default: int8]
- calibration_algorithm [default: Entropy]
- dla_core_id [default: -1]
- quantize_first_layer [default: false]
- quantize_last_layer [default: false]
- profile_per_layer [default: false]
- clip_value [default: 6.0]
- preprocess_on_gpu [default: true]
- calibration_image_list_path [default: ]
- use_decompress [default: true]
- build_only [default: false]
- launch/yolox_tiny.launch.xml
-
- input/image [default: /sensing/camera/camera0/image_rect_color]
- output/objects [default: /perception/object_recognition/detection/rois0]
- model_name [default: yolox-tiny]
- data_path [default: $(env HOME)/autoware_data]
- model_path [default: $(var data_path)/tensorrt_yolox]
- score_threshold [default: 0.35]
- nms_threshold [default: 0.7]
- precision [default: fp16]
- calibration_algorithm [default: MinMax]
- dla_core_id [default: -1]
- quantize_first_layer [default: false]
- quantize_last_layer [default: false]
- profile_per_layer [default: false]
- clip_value [default: 0.0]
- preprocess_on_gpu [default: true]
- calibration_image_list_path [default: ]
- use_decompress [default: true]
- build_only [default: false]
Messages
Services
Plugins
Recent questions tagged tensorrt_yolox at Robotics Stack Exchange
Package Summary
Tags | No category tags. |
Version | 0.0.1 |
License | Apache License 2.0 |
Build type | AMENT_CMAKE |
Use | RECOMMENDED |
Repository Summary
Description | |
Checkout URI | https://github.com/ieiauto/autodrrt.git |
VCS Type | git |
VCS Version | main |
Last Updated | 2025-05-30 |
Dev Status | UNKNOWN |
Released | UNRELEASED |
Tags | No category tags. |
Contributing |
Help Wanted (-)
Good First Issues (-) Pull Requests to Review (-) |
Package Description
Additional Links
Maintainers
- Daisuke Nishimatsu
- Dan Umeda
- Manato Hirabayashi
Authors
- Daisuke Nishimatsu
tensorrt_yolox
Purpose
This package detects target objects e.g., cars, trucks, bicycles, and pedestrians on a image based on YOLOX model.
Inner-workings / Algorithms
Cite
Zheng Ge, Songtao Liu, Feng Wang, Zeming Li, Jian Sun, “YOLOX: Exceeding YOLO Series in 2021”, arXiv preprint arXiv:2107.08430, 2021 [ref]
Inputs / Outputs
Input
Name | Type | Description |
---|---|---|
in/image |
sensor_msgs/Image |
The input image |
Output
Name | Type | Description |
---|---|---|
out/objects |
tier4_perception_msgs/DetectedObjectsWithFeature |
The detected objects with 2D bounding boxes |
out/image |
sensor_msgs/Image |
The image with 2D bounding boxes for visualization |
Parameters
Core Parameters
Name | Type | Default Value | Description |
---|---|---|---|
score_threshold |
float | 0.3 | If the objectness score is less than this value, the object is ignored in yolox layer. |
nms_threshold |
float | 0.7 | The IoU threshold for NMS method |
NOTE: These two parameters are only valid for “plain” model (described later).
Node Parameters
Name | Type | Default Value | Description |
---|---|---|---|
model_path |
string | ”” | The onnx file name for yolox model |
label_path |
string | ”” | The label file with label names for detected objects written on it |
precision |
string | “fp16” | The inference mode: “fp32”, “fp16”, “int8” |
build_only |
bool | false | shutdown node after TensorRT engine file is built |
calibration_algorithm |
string | “MinMax” | Calibration algorithm to be used for quantization when precision==int8. Valid value is one of: Entropy”,(“Legacy” | “Percentile”), “MinMax”] |
dla_core_id |
int | -1 | If positive ID value is specified, the node assign inference task to the DLA core |
quantize_first_layer |
bool | false | If true, set the operating precision for the first (input) layer to be fp16. This option is valid only when precision==int8 |
quantize_last_layer |
bool | false | If true, set the operating precision for the last (output) layer to be fp16. This option is valid only when precision==int8 |
profile_per_layer |
bool | false | If true, profiler function will be enabled. Since the profile function may affect execution speed, it is recommended to set this flag true only for development purpose. |
clip_value |
double | 0.0 | If positive value is specified, the value of each layer output will be clipped between [0.0, clip_value]. This option is valid only when precision==int8 and used to manually specify the dynamic range instead of using any calibration |
preprocess_on_gpu |
bool | true | If true, pre-processing is performed on GPU |
calibration_image_list_path |
string | ”” | Path to a file which contains path to images. Those images will be used for int8 quantization. |
Assumptions / Known limits
The label contained in detected 2D bounding boxes (i.e., out/objects
) will be either one of the followings:
- CAR
- PEDESTRIAN (“PERSON” will also be categorized as “PEDESTRIAN”)
- BUS
- TRUCK
- BICYCLE
- MOTORCYCLE
If other labels (case insensitive) are contained in the file specified via the label_file
parameter,
those are labeled as UNKNOWN
, while detected rectangles are drawn in the visualization result (out/image
).
Onnx model
A sample model (named yolox-tiny.onnx
) is downloaded by ansible script on env preparation stage, if not, please, follow Manual downloading of artifacts.
To accelerate Non-maximum-suppression (NMS), which is one of the common post-process after object detection inference,
EfficientNMS_TRT
module is attached after the ordinal YOLOX (tiny) network.
The EfficientNMS_TRT
module contains fixed values for score_threshold
and nms_threshold
in it,
hence these parameters are ignored when users specify ONNX models including this module.
This package accepts both EfficientNMS_TRT
attached ONNXs and models published from the official YOLOX repository (we referred to them as “plain” models).
In addition to yolox-tiny.onnx
, a custom model named yolox-sPlus-opt.onnx
is either available.
This model is based on YOLOX-s and tuned to perform more accurate detection with almost comparable execution speed with yolox-tiny
.
To get better results with this model, users are recommended to use some specific running arguments
such as precision:=int8
, calibration_algorithm:=Entropy
, clip_value:=6.0
.
Users can refer launch/yolox_sPlus_opt.launch.xml
to see how this model can be used.
All models are automatically converted to TensorRT format.
These converted files will be saved in the same directory as specified ONNX files
with .engine
filename extension and reused from the next run.
The conversion process may take a while (typically 10 to 20 minutes) and the inference process is blocked
until complete the conversion, so it will take some time until detection results are published (even until appearing in the topic list) on the first run
Package acceptable model generation
To convert users’ own model that saved in PyTorch’s pth
format into ONNX,
users can exploit the converter offered by the official repository.
For the convenience, only procedures are described below.
Please refer the official document for more detail.
File truncated at 100 lines see the full file
Package Dependencies
System Dependencies
Name |
---|
libopencv-dev |
Dependant Packages
Name | Deps |
---|---|
traffic_light_fine_detector |
Launch files
- launch/yolox.launch.xml
-
- input/image [default: /sensing/camera/camera0/image_rect_color]
- output/objects [default: /perception/object_recognition/detection/rois0]
- model_name [default: yolox-sPlus-T4-960x960-pseudo-finetune]
- data_path [default: $(env HOME)/autoware_data]
- model_path [default: $(var data_path)/tensorrt_yolox]
- score_threshold [default: 0.35]
- nms_threshold [default: 0.7]
- precision [default: int8]
- calibration_algorithm [default: Entropy]
- dla_core_id [default: -1]
- quantize_first_layer [default: false]
- quantize_last_layer [default: false]
- profile_per_layer [default: false]
- clip_value [default: 6.0]
- preprocess_on_gpu [default: true]
- calibration_image_list_path [default: ]
- use_decompress [default: true]
- build_only [default: false]
- launch/yolox_s_plus_opt.launch.xml
-
- input/image [default: /sensing/camera/camera0/image_rect_color]
- output/objects [default: /perception/object_recognition/detection/rois0]
- model_name [default: yolox-sPlus-T4-960x960-pseudo-finetune]
- data_path [default: $(env HOME)/autoware_data]
- model_path [default: $(var data_path)/tensorrt_yolox]
- score_threshold [default: 0.35]
- nms_threshold [default: 0.7]
- precision [default: int8]
- calibration_algorithm [default: Entropy]
- dla_core_id [default: -1]
- quantize_first_layer [default: false]
- quantize_last_layer [default: false]
- profile_per_layer [default: false]
- clip_value [default: 6.0]
- preprocess_on_gpu [default: true]
- calibration_image_list_path [default: ]
- use_decompress [default: true]
- build_only [default: false]
- launch/yolox_tiny.launch.xml
-
- input/image [default: /sensing/camera/camera0/image_rect_color]
- output/objects [default: /perception/object_recognition/detection/rois0]
- model_name [default: yolox-tiny]
- data_path [default: $(env HOME)/autoware_data]
- model_path [default: $(var data_path)/tensorrt_yolox]
- score_threshold [default: 0.35]
- nms_threshold [default: 0.7]
- precision [default: fp16]
- calibration_algorithm [default: MinMax]
- dla_core_id [default: -1]
- quantize_first_layer [default: false]
- quantize_last_layer [default: false]
- profile_per_layer [default: false]
- clip_value [default: 0.0]
- preprocess_on_gpu [default: true]
- calibration_image_list_path [default: ]
- use_decompress [default: true]
- build_only [default: false]
Messages
Services
Plugins
Recent questions tagged tensorrt_yolox at Robotics Stack Exchange
Package Summary
Tags | No category tags. |
Version | 0.0.1 |
License | Apache License 2.0 |
Build type | AMENT_CMAKE |
Use | RECOMMENDED |
Repository Summary
Description | |
Checkout URI | https://github.com/ieiauto/autodrrt.git |
VCS Type | git |
VCS Version | main |
Last Updated | 2025-05-30 |
Dev Status | UNKNOWN |
Released | UNRELEASED |
Tags | No category tags. |
Contributing |
Help Wanted (-)
Good First Issues (-) Pull Requests to Review (-) |
Package Description
Additional Links
Maintainers
- Daisuke Nishimatsu
- Dan Umeda
- Manato Hirabayashi
Authors
- Daisuke Nishimatsu
tensorrt_yolox
Purpose
This package detects target objects e.g., cars, trucks, bicycles, and pedestrians on a image based on YOLOX model.
Inner-workings / Algorithms
Cite
Zheng Ge, Songtao Liu, Feng Wang, Zeming Li, Jian Sun, “YOLOX: Exceeding YOLO Series in 2021”, arXiv preprint arXiv:2107.08430, 2021 [ref]
Inputs / Outputs
Input
Name | Type | Description |
---|---|---|
in/image |
sensor_msgs/Image |
The input image |
Output
Name | Type | Description |
---|---|---|
out/objects |
tier4_perception_msgs/DetectedObjectsWithFeature |
The detected objects with 2D bounding boxes |
out/image |
sensor_msgs/Image |
The image with 2D bounding boxes for visualization |
Parameters
Core Parameters
Name | Type | Default Value | Description |
---|---|---|---|
score_threshold |
float | 0.3 | If the objectness score is less than this value, the object is ignored in yolox layer. |
nms_threshold |
float | 0.7 | The IoU threshold for NMS method |
NOTE: These two parameters are only valid for “plain” model (described later).
Node Parameters
Name | Type | Default Value | Description |
---|---|---|---|
model_path |
string | ”” | The onnx file name for yolox model |
label_path |
string | ”” | The label file with label names for detected objects written on it |
precision |
string | “fp16” | The inference mode: “fp32”, “fp16”, “int8” |
build_only |
bool | false | shutdown node after TensorRT engine file is built |
calibration_algorithm |
string | “MinMax” | Calibration algorithm to be used for quantization when precision==int8. Valid value is one of: Entropy”,(“Legacy” | “Percentile”), “MinMax”] |
dla_core_id |
int | -1 | If positive ID value is specified, the node assign inference task to the DLA core |
quantize_first_layer |
bool | false | If true, set the operating precision for the first (input) layer to be fp16. This option is valid only when precision==int8 |
quantize_last_layer |
bool | false | If true, set the operating precision for the last (output) layer to be fp16. This option is valid only when precision==int8 |
profile_per_layer |
bool | false | If true, profiler function will be enabled. Since the profile function may affect execution speed, it is recommended to set this flag true only for development purpose. |
clip_value |
double | 0.0 | If positive value is specified, the value of each layer output will be clipped between [0.0, clip_value]. This option is valid only when precision==int8 and used to manually specify the dynamic range instead of using any calibration |
preprocess_on_gpu |
bool | true | If true, pre-processing is performed on GPU |
calibration_image_list_path |
string | ”” | Path to a file which contains path to images. Those images will be used for int8 quantization. |
Assumptions / Known limits
The label contained in detected 2D bounding boxes (i.e., out/objects
) will be either one of the followings:
- CAR
- PEDESTRIAN (“PERSON” will also be categorized as “PEDESTRIAN”)
- BUS
- TRUCK
- BICYCLE
- MOTORCYCLE
If other labels (case insensitive) are contained in the file specified via the label_file
parameter,
those are labeled as UNKNOWN
, while detected rectangles are drawn in the visualization result (out/image
).
Onnx model
A sample model (named yolox-tiny.onnx
) is downloaded by ansible script on env preparation stage, if not, please, follow Manual downloading of artifacts.
To accelerate Non-maximum-suppression (NMS), which is one of the common post-process after object detection inference,
EfficientNMS_TRT
module is attached after the ordinal YOLOX (tiny) network.
The EfficientNMS_TRT
module contains fixed values for score_threshold
and nms_threshold
in it,
hence these parameters are ignored when users specify ONNX models including this module.
This package accepts both EfficientNMS_TRT
attached ONNXs and models published from the official YOLOX repository (we referred to them as “plain” models).
In addition to yolox-tiny.onnx
, a custom model named yolox-sPlus-opt.onnx
is either available.
This model is based on YOLOX-s and tuned to perform more accurate detection with almost comparable execution speed with yolox-tiny
.
To get better results with this model, users are recommended to use some specific running arguments
such as precision:=int8
, calibration_algorithm:=Entropy
, clip_value:=6.0
.
Users can refer launch/yolox_sPlus_opt.launch.xml
to see how this model can be used.
All models are automatically converted to TensorRT format.
These converted files will be saved in the same directory as specified ONNX files
with .engine
filename extension and reused from the next run.
The conversion process may take a while (typically 10 to 20 minutes) and the inference process is blocked
until complete the conversion, so it will take some time until detection results are published (even until appearing in the topic list) on the first run
Package acceptable model generation
To convert a user's own model saved in PyTorch pth format into ONNX,
users can use the converter offered by the official repository.
For convenience, only the procedure is described below.
Please refer to the official documentation for more detail.
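As an illustration, the official YOLOX repository ships an export script for this conversion. The invocation below is a sketch run from a checkout of that repository; the model name and checkpoint path are placeholders, and flag names may differ between repository versions, so check the script's `-h` output:

```shell
# Convert a trained YOLOX checkpoint (.pth) to ONNX using the official
# repository's converter (run from the YOLOX repository root).
# `yolox-s` and `yolox_s.pth` are example values; substitute your own
# model name and checkpoint path.
python3 tools/export_onnx.py \
  --output-name yolox_s.onnx \
  -n yolox-s \
  -c yolox_s.pth
```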
Package Dependencies
System Dependencies
Name |
---|
libopencv-dev |
Dependant Packages
Name | Deps |
---|---|
traffic_light_fine_detector |
Launch files
- launch/yolox.launch.xml
- input/image [default: /sensing/camera/camera0/image_rect_color]
- output/objects [default: /perception/object_recognition/detection/rois0]
- model_name [default: yolox-sPlus-T4-960x960-pseudo-finetune]
- data_path [default: $(env HOME)/autoware_data]
- model_path [default: $(var data_path)/tensorrt_yolox]
- score_threshold [default: 0.35]
- nms_threshold [default: 0.7]
- precision [default: int8]
- calibration_algorithm [default: Entropy]
- dla_core_id [default: -1]
- quantize_first_layer [default: false]
- quantize_last_layer [default: false]
- profile_per_layer [default: false]
- clip_value [default: 6.0]
- preprocess_on_gpu [default: true]
- calibration_image_list_path [default: ]
- use_decompress [default: true]
- build_only [default: false]
- launch/yolox_s_plus_opt.launch.xml
- input/image [default: /sensing/camera/camera0/image_rect_color]
- output/objects [default: /perception/object_recognition/detection/rois0]
- model_name [default: yolox-sPlus-T4-960x960-pseudo-finetune]
- data_path [default: $(env HOME)/autoware_data]
- model_path [default: $(var data_path)/tensorrt_yolox]
- score_threshold [default: 0.35]
- nms_threshold [default: 0.7]
- precision [default: int8]
- calibration_algorithm [default: Entropy]
- dla_core_id [default: -1]
- quantize_first_layer [default: false]
- quantize_last_layer [default: false]
- profile_per_layer [default: false]
- clip_value [default: 6.0]
- preprocess_on_gpu [default: true]
- calibration_image_list_path [default: ]
- use_decompress [default: true]
- build_only [default: false]
- launch/yolox_tiny.launch.xml
- input/image [default: /sensing/camera/camera0/image_rect_color]
- output/objects [default: /perception/object_recognition/detection/rois0]
- model_name [default: yolox-tiny]
- data_path [default: $(env HOME)/autoware_data]
- model_path [default: $(var data_path)/tensorrt_yolox]
- score_threshold [default: 0.35]
- nms_threshold [default: 0.7]
- precision [default: fp16]
- calibration_algorithm [default: MinMax]
- dla_core_id [default: -1]
- quantize_first_layer [default: false]
- quantize_last_layer [default: false]
- profile_per_layer [default: false]
- clip_value [default: 0.0]
- preprocess_on_gpu [default: true]
- calibration_image_list_path [default: ]
- use_decompress [default: true]
- build_only [default: false]
Messages
Services
Plugins