
Package Summary

Version 0.0.1
License BSD-3-Clause
Build type AMENT_PYTHON
Use RECOMMENDED

Repository Summary

Description Tutorial code referenced in https://docs.nav2.org/
Checkout URI https://github.com/ros-navigation/navigation2_tutorials.git
VCS Type git
VCS Version rolling
Last Updated 2026-02-20
Dev Status UNKNOWN
Released UNRELEASED
Contributing Help Wanted (-)
Good First Issues (-)
Pull Requests to Review (-)

Package Description

ROS2 node for semantic segmentation inference

Maintainers

  • Pedro Gonzalez

Authors

No additional authors.

Semantic Segmentation Node

ROS2 node for real-time semantic segmentation inference using ONNX Runtime.

Overview

This node performs semantic segmentation on camera images and publishes segmentation masks, confidence maps, and colored overlays. It uses ONNX Runtime for efficient inference without requiring PyTorch or super-gradients at runtime.
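The node's exact post-processing code is not shown here, but the outputs described above can be sketched with NumPy: a minimal, hypothetical reconstruction of how a class-ID mask and a 0-255 confidence map are typically derived from per-class logits (the `postprocess` helper and the `(C, H, W)` logit layout are assumptions, not the package's actual API).

```python
import numpy as np

def postprocess(logits: np.ndarray):
    """Turn per-class logits of shape (C, H, W) into a class-ID mask
    and a mono8 confidence map, mirroring the published outputs."""
    # Softmax over the class axis gives per-pixel class probabilities
    exp = np.exp(logits - logits.max(axis=0, keepdims=True))
    probs = exp / exp.sum(axis=0, keepdims=True)
    # Winning class per pixel -> mask with class IDs (mono8)
    mask = probs.argmax(axis=0).astype(np.uint8)
    # Probability of the winning class, scaled to 0-255 (mono8)
    confidence = (probs.max(axis=0) * 255).astype(np.uint8)
    return mask, confidence
```

The mono8 encodings keep both images lightweight: one byte per pixel for class IDs and one for confidence.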

Topics

Subscribed:

  • /rgbd_camera/image (sensor_msgs/Image) - Input RGB camera images

Published:

  • /segmentation/mask (sensor_msgs/Image) - Segmentation mask with class IDs (mono8)
  • /segmentation/confidence (sensor_msgs/Image) - Per-pixel confidence (mono8, 0-255)
  • /segmentation/overlay (sensor_msgs/Image) - Colored overlay visualization (bgr8)
  • /segmentation/label_info (vision_msgs/LabelInfo) - Class mappings (latched)
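The bgr8 overlay on `/segmentation/overlay` is produced by mapping each class ID to a color and blending with the camera image. The node's actual palette comes from its ontology configuration; the palette, `colorize` helper, and blend factor below are illustrative assumptions.

```python
import numpy as np

# Hypothetical per-class BGR palette; the real colors come from the
# package's ontology configuration, not from this sketch.
PALETTE = np.array([[0, 0, 0], [0, 255, 0], [0, 0, 255]], dtype=np.uint8)

def colorize(mask: np.ndarray, image: np.ndarray, alpha: float = 0.5) -> np.ndarray:
    """Blend a class-ID mask with the camera image into a bgr8 overlay."""
    colors = PALETTE[mask]  # fancy indexing: (H, W) IDs -> (H, W, 3) colors
    return (alpha * colors + (1 - alpha) * image).astype(np.uint8)
```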

Model

The ONNX model (models/model.onnx) can be generated using the Simple Segmentation Toolkit.

Training Your Own Model

  1. Capture training images from a real robot or from Gazebo, with varying lighting and environmental conditions
  2. Use the Simple Segmentation Toolkit to label and train a model
  3. Convert the trained model to ONNX format: python3 convert_to_onnx.py
  4. Copy model.onnx to this package’s models/ directory

The ontology configuration (config/ontology.yaml) must match the classes used during training.
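As a rough illustration of what such a file might contain (the schema, class names, and colors below are hypothetical; consult the package's actual `config/ontology.yaml` for the real layout):

```yaml
# Hypothetical ontology.yaml sketch - class IDs and names must match
# the classes the model was trained on.
classes:
  - id: 0
    name: background
    color: [0, 0, 0]
  - id: 1
    name: floor
    color: [0, 255, 0]
  - id: 2
    name: obstacle
    color: [0, 0, 255]
```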

Usage

ros2 run semantic_segmentation_node segmentation_node

All dependencies are included in the devcontainer.

CHANGELOG
No CHANGELOG found.

Package Dependencies

System Dependencies

No direct system dependencies.

Dependent Packages

No dependent packages found.

Launch files

No launch files found.

Messages

No message files found.

Services

No service files found.

Plugins

No plugins found.
