Repository Summary
| Field | Value |
|---|---|
| Description | MuSoHu: Multi-Modal Social Human Navigation Dataset |
| Checkout URI | https://github.com/robotixx/musohu-data-collection.git |
| VCS Type | git |
| VCS Version | master |
| Last Updated | 2024-05-17 |
| Dev Status | UNKNOWN |
| Released | UNRELEASED |
| Contributing | Help Wanted (-), Good First Issues (-), Pull Requests to Review (-) |
Packages
| Name | Version |
|---|---|
| learning_tf | 0.0.0 |
| custom_package | 0.0.0 |
README
MuSoHu: Multi-Modal Social Human Navigation Dataset.

Dependencies:
- ROS Noetic
- Python 3.8.10
- Velodyne VLP-16 LiDAR driver
- ZED 2 stereo camera driver
Create a workspace and clone the sources:

```bash
mkdir -p catkin_ws/src
cd catkin_ws/src
catkin_init_workspace
git clone https://github.com/ros-drivers/velodyne.git
git clone --recursive https://github.com/stereolabs/zed-ros-wrapper.git
git clone https://github.com/stereolabs/zed-ros-interfaces.git
```
Install dependencies with rosdep:

```bash
cd catkin_ws
rosdep install --from-paths . --ignore-src --rosdistro=noetic
```

Build and source the workspace:

```bash
catkin_make
source devel/setup.bash
```
Launch visualization (terminal 1):

```bash
roslaunch musohu_package musohu_suite.launch
```

Record data (terminal 2):

```bash
python3 record.py
```
Downloading the data

Thanks to Jiaxu, you can download the dataset using this notebook.

Parsing bag files

To parse bag files and create samples, please follow this guide. To load the data, here is a PyTorch dataloader example that loads egocentric images.
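As a rough illustration of what such a dataloader looks like, here is a minimal map-style dataset sketch. The `EgoImageDataset` name and the one-image-per-file directory layout are assumptions for illustration, not the repository's actual sample format. The class implements the `__len__`/`__getitem__` protocol that `torch.utils.data.DataLoader` expects, without requiring torch itself:

```python
from pathlib import Path


class EgoImageDataset:
    """Map-style dataset over egocentric image files.

    Mirrors the torch.utils.data.Dataset interface (__len__ / __getitem__),
    so it can be wrapped by a torch DataLoader when torch is installed.
    The layout (one image file per sample under `root`) is a hypothetical
    example, not MuSoHu's real on-disk format.
    """

    def __init__(self, root, pattern="*.png"):
        # Sort for a deterministic sample order across runs.
        self.paths = sorted(Path(root).glob(pattern))

    def __len__(self):
        return len(self.paths)

    def __getitem__(self, idx):
        path = self.paths[idx]
        # Return raw bytes plus a sample id; a real loader would decode
        # the image (e.g. with PIL) and apply transforms here.
        return path.read_bytes(), path.stem
```

Wrapped in a `DataLoader`, batches of samples come out in the deterministic sorted order unless `shuffle=True` is passed.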