
yolov5_isaac_ros package from yolov5-with-isaac-ros repo


Package Summary

Tags: No category tags.
Version: 0.0.0
License: Apache-2.0
Build type: AMENT_PYTHON
Use: RECOMMENDED

Repository Summary

Description: Sample showing how to use YOLOv5 with Nvidia Isaac ROS DNN Inference
Checkout URI: https://github.com/nvidia-ai-iot/yolov5-with-isaac-ros.git
VCS Type: git
VCS Version: main
Last Updated: 2022-12-02
Dev Status: UNKNOWN
Released: UNRELEASED
Tags: No category tags.
Contributing: Help Wanted (-), Good First Issues (-), Pull Requests to Review (-)

Package Description

ROS2 package for YOLOv5 object detection to use with Nvidia Isaac ROS

Additional Links

No additional links.

Maintainers

  • admin

Authors

No additional authors.

YOLOv5 object detection with Isaac ROS

This is a sample showing how to integrate YOLOv5 with Nvidia Isaac ROS DNN Inference.

Requirements

Tested on a Jetson Orin running JetPack 5.0.2 with an Intel RealSense D435 camera.

Development Environment Setup

Use the Isaac ROS Dev Docker for development. This provides an environment with all dependencies installed to run Isaac ROS packages.

Usage

Refer to the license terms for the YOLOv5 project before using this software and ensure you are using YOLOv5 under license terms compatible with your project requirements.

Model preparation

  • Download the YOLOv5 PyTorch model, yolov5s.pt, from the Ultralytics YOLOv5 project.
  • Export the model to ONNX following the steps here and visualize the ONNX model using Netron. Note the input and output names (for instance, images for the input and output0 for the output) and the input dimensions (for instance, 1x3x640x640); these are used when running the node. An example export command is sketched after this list.
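
A minimal export command, assuming a local clone of the Ultralytics YOLOv5 repository and its export.py script (exact flags may vary between YOLOv5 releases):

# Run from the root of the Ultralytics YOLOv5 clone
python3 export.py --weights yolov5s.pt --include onnx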

Object Detection pipeline Setup

  1. Following the development environment setup above, you should have a ROS2 workspace named workspaces/isaac_ros-dev. Clone this repository and its dependencies under workspaces/isaac_ros-dev/src:
cd ~/workspaces/isaac_ros-dev/src
git clone https://github.com/NVIDIA-ISAAC-ROS/isaac_ros_common.git
git clone https://github.com/NVIDIA-ISAAC-ROS/isaac_ros_nitros.git
git clone https://github.com/NVIDIA-ISAAC-ROS/isaac_ros_dnn_inference.git
git clone https://github.com/NVIDIA-ISAAC-ROS/isaac_ros_image_pipeline
git clone https://github.com/NVIDIA-AI-IOT/YOLOv5-with-Isaac-ROS.git

  2. Download requirements.txt from the Ultralytics YOLOv5 project to workspaces/isaac_ros-dev/src.
  3. Copy your ONNX model from above (say, yolov5s.onnx) to workspaces/isaac_ros-dev/src.
  4. Follow the Isaac ROS RealSense Setup to set up the camera.
  5. Launch the Docker container using the run_dev.sh script:
cd ~/workspaces/isaac_ros-dev/src/isaac_ros_common
./scripts/run_dev.sh

  6. Inside the container, run the following:
pip install -r src/requirements.txt

  7. Install Torchvision: this project runs on a device with an Nvidia GPU, and the Isaac ROS Dev container ships an Nvidia-built, CUDA-accelerated PyTorch. To keep Torchvision CUDA-accelerated as well, build a compatible Torchvision version from source, specifying the compatible tag in place of $torchvision_tag below (a way to check the matching version is sketched after this code block):
git clone https://github.com/pytorch/vision.git
cd vision
git checkout $torchvision_tag
pip install -v .
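
One way to choose $torchvision_tag is to match it to the PyTorch version already installed in the container; confirm the pairing against the official torch/torchvision compatibility table. The v0.13.0 tag below is only an illustrative example for PyTorch 1.12:

# Check the PyTorch version shipped in the Isaac ROS Dev container
python3 -c "import torch; print(torch.__version__)"
# Example: for PyTorch 1.12.x, the matching torchvision tag would be v0.13.0
git checkout tags/v0.13.0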

  8. Download the utils folder from the Ultralytics YOLOv5 project and put it in the yolov5_isaac_ros folder of this repository. Your file structure should then look like this (not all files are shown):
.
+- workspaces
   +- isaac_ros-dev
      +- src
         +- requirements.txt
         +- yolov5s.onnx
         +- isaac_ros_common
         +- YOLOv5-with-Isaac-ROS
            +- README
            +- launch
            +- images
            +- yolov5_isaac_ros
               +- utils
               +- Yolov5Decoder.py  
               +- Yolov5DecoderUtils.py    

Refer to the license terms for the YOLOv5 project before using this software and ensure you are using YOLOv5 under license terms compatible with your project requirements.

  9. After downloading utils from the Ultralytics YOLOv5 project, make the following change to utils/general.py, utils/torch_utils.py and utils/metrics.py:
    1. In the import statements, add yolov5_isaac_ros before utils. For instance, change from utils.metrics import box_iou to from yolov5_isaac_ros.utils.metrics import box_iou, as in the sketch below.
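
A before/after sketch of the import change, using the box_iou example above (apply the same pattern to every utils.* import in the three files):

# Before (as shipped in the Ultralytics utils folder):
from utils.metrics import box_iou
# After (so the module resolves as a subpackage of yolov5_isaac_ros):
from yolov5_isaac_ros.utils.metrics import box_iou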

Running the pipeline with TensorRT inference node

  1. Inside the container, build and source the workspace:
cd /workspaces/isaac_ros-dev
colcon build --symlink-install
source install/setup.bash
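
If you prefer not to rebuild the whole workspace, colcon can be limited to this package and its dependencies; --packages-up-to is a standard colcon option, and the package selection shown is only a suggestion:

colcon build --symlink-install --packages-up-to yolov5_isaac_ros
source install/setup.bash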

  2. Launch the RealSense camera node as per step 7 here: ros2 launch realsense2_camera rs_launch.py
  3. Verify that images are being published on /camera/color/image_raw. You can use RQt or Foxglove for this, or run the following command in another terminal inside the container: ros2 topic echo /camera/color/image_raw
  4. In another terminal inside the container, run the isaac_ros_yolov5_tensor_rt launch file. This launches the DNN image encoder node, the TensorRT inference node and the YOLOv5 decoder node, along with a visualization script that shows results in RQt. Use the names noted above in Model preparation as input_binding_names and output_binding_names (for example, images for input_binding_names and output0 for output_binding_names), and use the input dimensions noted above as network_image_width and network_image_height:
ros2 launch yolov5_isaac_ros isaac_ros_yolov5_tensor_rt.launch.py model_file_path:=/workspaces/isaac_ros-dev/src/yolov5s.onnx engine_file_path:=/workspaces/isaac_ros-dev/src/yolov5s.plan input_binding_names:=['images'] output_binding_names:=['output0'] network_image_width:=640 network_image_height:=640

  5. For subsequent runs, the engine file yolov5s.plan generated by the command above is saved in workspaces/isaac_ros-dev/src/, so it can be passed directly instead of being regenerated from the ONNX model:
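
A plausible form of that command, assuming the launch file accepts the same arguments and skips the ONNX-to-engine conversion when only the pre-built engine file is supplied, is:

ros2 launch yolov5_isaac_ros isaac_ros_yolov5_tensor_rt.launch.py engine_file_path:=/workspaces/isaac_ros-dev/src/yolov5s.plan input_binding_names:=['images'] output_binding_names:=['output0'] network_image_width:=640 network_image_height:=640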


CHANGELOG
No CHANGELOG found.

Launch files

No launch files found.

Messages

No message files found.

Services

No service files found.

Plugins

No plugins found.
