mini_pupper_tracking package from the mini_pupper_ros repo
Package Summary

| Field | Value |
|---|---|
| Tags | No category tags. |
| Version | 0.0.0 |
| License | Apache 2.0 |
| Build type | AMENT_PYTHON |
| Use | RECOMMENDED |
Repository Summary

| Field | Value |
|---|---|
| Description | ROS 1 & 2 repos based on Mini Pupper legged robots from MangDang |
| Checkout URI | https://github.com/mangdangroboticsclub/mini_pupper_ros.git |
| VCS Type | git |
| VCS Version | ros2-dev |
| Last Updated | 2025-09-26 |
| Dev Status | UNKNOWN |
| Released | UNRELEASED |
| Tags | No category tags. |
Maintainers
- kishan
Mini Pupper Tracking System
This ROS 2 package enables real-time person tracking for the Mini Pupper 2 robot.
It combines visual detection, multi-object tracking, and IMU-based motion control to guide the robot’s head and orientation toward detected individuals.
Features
- YOLO11n object detection on live camera feed
- Real-time tracking with unique temporary IDs per person (via motpy)
- IMU-based PID control for yaw correction and smooth pitch tracking
- Flask web interface for monitoring camera and tracking overlays
- RViz visualisation for 3D spatial awareness of detections and camera field of view
Tracking Behaviour
The robot uses YOLO11n to detect people and converts these detections into movement commands via PID control. Yaw adjustments are smoothed using IMU feedback to maintain heading stability.
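As a rough illustration of the control idea, the sketch below turns the angular offset of a detected person into a yaw-rate command with a standalone PID loop. The gains, the offset convention, and the absence of ROS plumbing are assumptions for illustration; the actual parameters live in `movement_node.py` and may differ.

```python
# Minimal PID sketch (not the package's exact implementation).
# Gains and the sign convention of the offset are illustrative assumptions.
import math
import time


class YawPID:
    def __init__(self, kp=1.2, ki=0.0, kd=0.1):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = None
        self.prev_time = None

    def update(self, offset_rad, now=None):
        """offset_rad: angle from the image centre to the tracked person."""
        now = time.monotonic() if now is None else now
        dt = 0.0 if self.prev_time is None else max(now - self.prev_time, 1e-6)
        error = offset_rad
        self.integral += error * dt
        derivative = 0.0 if (self.prev_error is None or dt == 0.0) else (error - self.prev_error) / dt
        self.prev_error, self.prev_time = error, now
        return self.kp * error + self.ki * self.integral + self.kd * derivative


# Example: person appears 10 degrees to the right of the image centre.
pid = YawPID()
print(f"commanded yaw rate: {pid.update(math.radians(-10.0)):.3f} rad/s")
```

In the real node the IMU yaw estimate provides the feedback signal, so the correction is applied against the measured heading rather than raw pixel offsets alone, which is what keeps the motion smooth.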
RViz Visualisation
RViz displays:
- A pyramid cone representing the camera’s field of view
- Red points in 3D space representing detected individuals, estimated using bounding box area and field-of-view angles (a rough sketch of this estimate follows the list)
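A minimal sketch of how such a 3D estimate could be computed, assuming the apparent bounding-box area falls off with the square of distance and using nominal Raspberry Pi Camera v2 field-of-view angles. The reference area, reference distance, and frame convention are illustrative assumptions, not the package's calibrated values.

```python
# Rough bbox -> 3D point estimate; constants are illustrative, not calibrated.
import math


def estimate_person_position(bbox, image_w=320, image_h=320,
                             h_fov_deg=62.2, v_fov_deg=48.8,
                             ref_area=20000.0, ref_distance=1.0):
    """bbox = (x_min, y_min, x_max, y_max) in pixels."""
    x_min, y_min, x_max, y_max = bbox
    area = max((x_max - x_min) * (y_max - y_min), 1.0)

    # Assume apparent area shrinks with the square of distance.
    distance = ref_distance * math.sqrt(ref_area / area)

    # Bearing angles from the pixel offset of the box centre.
    cx, cy = (x_min + x_max) / 2.0, (y_min + y_max) / 2.0
    yaw = math.radians(((cx / image_w) - 0.5) * h_fov_deg)
    pitch = math.radians((0.5 - (cy / image_h)) * v_fov_deg)

    # Camera-frame point: x forward, y left, z up (ROS convention).
    x = distance * math.cos(pitch) * math.cos(yaw)
    y = -distance * math.cos(pitch) * math.sin(yaw)
    z = distance * math.sin(pitch)
    return x, y, z


print(estimate_person_position((120, 80, 220, 300)))
```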
Web Interface (Flask)
The Flask web interface shows:
- The live camera feed
- Detected individuals with bounding boxes
- Assigned temporary UUIDs for short-term identification
This is useful for remote observation and debugging.
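For reference, a self-contained MJPEG streaming sketch in the spirit of the Flask interface. The route name, port, and the use of `cv2.VideoCapture` in place of the ROS image subscription are assumptions for illustration.

```python
# Minimal MJPEG streaming sketch; frame source and route are illustrative.
import cv2
from flask import Flask, Response

app = Flask(__name__)
capture = cv2.VideoCapture(0)  # stand-in for the ROS image subscription


def mjpeg_frames():
    while True:
        ok, frame = capture.read()
        if not ok:
            continue
        # Detection overlays (boxes, temporary IDs) would be drawn on `frame` here.
        ok, jpeg = cv2.imencode(".jpg", frame)
        if not ok:
            continue
        yield (b"--frame\r\nContent-Type: image/jpeg\r\n\r\n"
               + jpeg.tobytes() + b"\r\n")


@app.route("/video")
def video():
    return Response(mjpeg_frames(),
                    mimetype="multipart/x-mixed-replace; boundary=frame")


if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000)
```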
Note: This package is only supported with the Stanford Controller. The CHAMP Controller is not supported.
Hardware Requirements
- Camera: A Raspberry Pi Camera Module is required to run the tracking system. The package was developed with the v2 module; compatibility with earlier versions such as v1.3 has not been verified and may vary.

Note: You will need to set the `camera` parameter in `mini_pupper_bringup/config/mini_pupper_2.yaml` to `true`.
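The exact layout of `mini_pupper_2.yaml` may differ; the relevant change is the `camera` key, roughly:

```yaml
# mini_pupper_bringup/config/mini_pupper_2.yaml (excerpt; surrounding keys omitted)
camera: true
```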
Package Architecture
The tracking system consists of four main components (a node-wiring sketch follows the list):

- Detection & Tracking (`main.py` + `tracking_node.py`): YOLO11n-based person detection with multi-object tracking using motpy
- Movement Control (`movement_node.py`): PID-based robot control for yaw and pitch tracking with configurable parameters
- Visualisation (`camera_visualisation_node.py`): RViz camera FOV and 3D position markers for visualising the locations of detected people
- Web Interface (`flask_server.py`): real-time video streaming with detection overlays
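The sketch below shows one way these components might be wired together as ROS 2 nodes. The topic names and message types (`tracked_person`, `imu/data`, `body_pose`) are illustrative assumptions, not the package's actual interfaces.

```python
# Illustrative rclpy skeleton of the movement-control side; topics are assumed.
import rclpy
from rclpy.node import Node
from geometry_msgs.msg import Pose
from sensor_msgs.msg import Imu


class MovementNodeSketch(Node):
    def __init__(self):
        super().__init__("movement_node_sketch")
        # Tracked-person target published by the detection/tracking side.
        self.create_subscription(Pose, "tracked_person", self.on_target, 10)
        # IMU feedback used to stabilise yaw corrections.
        self.create_subscription(Imu, "imu/data", self.on_imu, 10)
        # Body pose command consumed by the Stanford controller.
        self.pose_pub = self.create_publisher(Pose, "body_pose", 10)
        self.latest_imu = None

    def on_imu(self, msg):
        self.latest_imu = msg

    def on_target(self, msg):
        cmd = Pose()
        # The PID yaw/pitch terms (see the earlier sketch) would be filled in here.
        self.pose_pub.publish(cmd)


def main():
    rclpy.init()
    rclpy.spin(MovementNodeSketch())


if __name__ == "__main__":
    main()
```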
Dependencies
Install the required Python packages and ROS 2 components in your ROS 2 workspace:
```bash
# Downgrade numpy to a compatible version
pip install "numpy<2.0"

# Python dependencies
pip install flask onnxruntime motpy

# ROS 2 dependencies
sudo apt install ros-humble-imu-filter-madgwick ros-humble-tf-transformations
```
1. Export the YOLO11n ONNX Model
The required YOLO11n ONNX model (yolo11n.onnx) is already included in this repository at models/yolo11n.onnx with 320x320 input resolution.
Using a Different Model or Resolution (Optional)

If you want to use a different YOLO model or change the input resolution, follow these steps to export your own ONNX model:
Step 1: Set up a virtual environment
```bash
python3 -m venv yolo-env
source yolo-env/bin/activate
```
File truncated at 100 lines; see the full file in the repository.
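The truncated steps presumably continue with a standard Ultralytics export. As a hedged sketch (assuming `pip install ultralytics` inside the virtual environment), a 320x320 export would look roughly like:

```python
# Illustrative Ultralytics export, not the repository's exact steps.
from ultralytics import YOLO

model = YOLO("yolo11n.pt")               # downloads the pretrained YOLO11n weights
model.export(format="onnx", imgsz=320)   # writes yolo11n.onnx with 320x320 input
```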
Package Dependencies
System Dependencies

| Name |
|---|
| python3-pytest |