navrl repository

reinforcement-learning robotics collision-avoidance robot-navigation nvidia-isaac embodied-ai isaac-sim ros1-noetic ros2-humble map_manager navigation_runner onboard_detector uav_simulator

Repository Summary

Description: [IEEE RA-L'25] NavRL: Learning Safe Flight in Dynamic Environments (NVIDIA Isaac/Python/ROS1/ROS2)
Checkout URI: https://github.com/zhefan-xu/navrl.git
VCS Type: git
VCS Version: main
Last Updated: 2025-07-03
Dev Status: UNKNOWN
Released: UNRELEASED
Tags: reinforcement-learning robotics collision-avoidance robot-navigation nvidia-isaac embodied-ai isaac-sim ros1-noetic ros2-humble
Contributing: Help Wanted (-), Good First Issues (-), Pull Requests to Review (-)

Packages

Name Version
map_manager 1.0.0
navigation_runner 1.0.0
onboard_detector 1.0.0
uav_simulator 1.0.0

README

NavRL: Learning Safe Flight in Dynamic Environments


Welcome to the NavRL repository! This repository provides the implementation of the NavRL framework, which enables robots to navigate dynamic environments safely using reinforcement learning. While the original paper focuses on UAV navigation, NavRL can be extended to any robot that adopts a velocity-based control system.

For additional details, please refer to the related paper available here:

Zhefan Xu, Xinming Han, Haoyu Shen, Hanyu Jin, and Kenji Shimada, “NavRL: Learning Safe Flight in Dynamic Environments”, IEEE Robotics and Automation Letters (RA-L), 2025. [IEEE Xplore] [preprint] [YouTube] [BiliBili]

News

  • 2025-04-06: We release easy-to-run Python scripts that allow users to quickly run demos.
  • 2025-02-23: The GitHub code, video demos, and relevant papers for our NavRL framework are released. The authors will actively maintain and update this repo!

Quick Demos

We provide a pretrained model and easy-to-run Python scripts for quick demos of the NavRL framework.

To get started, please follow the steps in Deployment Virtual Environment to set up the Conda environment; a rough sketch of that setup is shown below. Once the setup is complete, you can run the three demos with the commands that follow.
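The following is a minimal, hypothetical sketch of the environment setup. The environment name NavRL is taken from the demo commands below; the exact Python version and dependency list are defined in the Deployment Virtual Environment section and may differ:

# Hypothetical setup sketch; consult the Deployment Virtual Environment section for the real steps
conda create -n NavRL python=3.8
conda activate NavRL
pip install -r requirements.txt   # placeholder; install the dependencies listed in the repo

With the environment in place, activate it and launch the demos: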

conda activate NavRL
cd NavRL/quick-demos

# DEMO I: Navigating to a predefined goal point
python simple-navigation.py

# DEMO II: Navigating to dynamically/randomly assigned goal points
python random-navigation.py

# DEMO III: Multi-robot navigation
python multi-robot-navigation.py

I. Training in NVIDIA Isaac Sim

This section provides the steps for training your own RL agent with the NavRL framework in Isaac Sim. If you are not interested in training the agent yourself, feel free to skip this section and jump straight to the deployment section.

Isaac Sim Installation

This project was developed using Isaac Sim version 2023.1.0-hotfix.1, released in November 2023. Please make sure you download and use this exact version, as using a different version may lead to errors due to version incompatibility. Also, ensure that you have conda installed.
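As a quick sanity check before installation (these commands assume a standard Linux machine with the NVIDIA driver already installed), you can verify that conda and the GPU are visible:

conda --version   # prints the installed conda version
nvidia-smi        # lists the NVIDIA GPU and driver version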

If you have already downloaded Isaac Sim version 2023.1.0-hotfix.1, you can skip the following steps. Otherwise, please follow the instructions below to download the legacy version of Isaac Sim, as the official installation does not support legacy version downloads.

To download Isaac Sim version 2023.1.0-hotfix.1:

a. First, follow the steps on this link to complete the Docker Container Setup.

b. Then, download Isaac Sim into your Docker container:

docker pull nvcr.io/nvidia/isaac-sim:2023.1.0-hotfix.1

docker run --name isaac-sim --entrypoint bash -it --runtime=nvidia --gpus all -e "ACCEPT_EULA=Y" --rm --network=host \
    -e "PRIVACY_CONSENT=Y" \
    -v ~/docker/isaac-sim/cache/kit:/isaac-sim/kit/cache:rw \
    -v ~/docker/isaac-sim/cache/ov:/root/.cache/ov:rw \
    -v ~/docker/isaac-sim/cache/pip:/root/.cache/pip:rw \
    -v ~/docker/isaac-sim/cache/glcache:/root/.cache/nvidia/GLCache:rw \
    -v ~/docker/isaac-sim/cache/computecache:/root/.nv/ComputeCache:rw \
    -v ~/docker/isaac-sim/logs:/root/.nvidia-omniverse/logs:rw \
    -v ~/docker/isaac-sim/data:/root/.local/share/ov/data:rw \
    -v ~/docker/isaac-sim/documents:/root/Documents:rw \
    nvcr.io/nvidia/isaac-sim:2023.1.0-hotfix.1

c. Move the downloaded Isaac Sim from the docker container to your local machine:

docker ps   # check your container ID in another terminal

# Replace <id_container> with the output from the previous command
docker cp <id_container>:isaac-sim/. /path/to/local/folder # absolute path
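To confirm that the copy completed, you can list the destination folder; as a rough check (the exact layout may vary between Isaac Sim releases), the Isaac Sim launcher scripts should be present:

ls /path/to/local/folder   # expect entries such as isaac-sim.sh and the kit/ directory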

