# Repository Summary

| | |
|---|---|
| Checkout URI | https://github.com/bob-ros2/bob_llm.git |
| VCS Type | git |
| VCS Version | main |
| Last Updated | 2026-04-25 |
| Dev Status | MAINTAINED |
| Released | RELEASED |
| Contributing | Help Wanted (-), Good First Issues (-), Pull Requests to Review (-) |

## Packages

| Name | Version |
|---|---|
| bob_llm | 1.0.3 |
# README

## ROS Package bob_llm

The bob_llm package provides a ROS 2 node (`llm`) that acts as an interface to an external Large Language Model (LLM). It runs as a stateful service that maintains a conversation, connects to any OpenAI-compatible API, and provides a robust tool execution system.
## Features

- **OpenAI-Compatible**: Connects to any LLM backend that exposes an OpenAI-compatible API endpoint (e.g., Ollama, vLLM, llama-cpp-python, commercial APIs).
- **Stateful Conversation**: Maintains chat history to provide conversational context to the LLM.
- **Dynamic Tool System**: Dynamically loads Python functions from user-provided files and makes them available to the LLM. The LLM can request to call these functions to perform actions or gather information.
- **Anthropic Agent Skills**: Full support for the Anthropic Agent Skills specification, enabling modular, self-contained capabilities with documentation and execution logic.
- **High-Performance Streaming**: Optimized byte-stream parsing delivers reasoning tokens and response chunks directly from the socket with minimal latency (no internal buffering).
- **Reasoning/Thinking Support**: Real-time extraction and publishing of model reasoning (e.g., from Gemma 2 or DeepSeek) to a dedicated topic.
- **Interactive Chat CLI**: Includes a polished terminal interface with Markdown rendering and multi-line support.
- **Multi-modality**: Supports multimodal input (e.g., images) via JSON prompts.
- **Lightweight**: The node core requires only a few common Python libraries (`requests`, `rich`, `prompt_toolkit`).
- **Multi-arch Docker Support**: Ready-to-use Docker images for `amd64` and `arm64`, fully configurable via environment variables for easy deployment.
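As a sketch of how a user-provided tool file for the dynamic tool system might look (hypothetical — the actual discovery convention used by bob_llm may differ), a plain module-level function with type hints and a docstring is typically all an OpenAI-style tool schema needs:

```python
# tools.py - hypothetical example of a user-provided tool file.
# Assumption: bob_llm exposes plain module-level functions to the LLM;
# the exact loading convention is not shown in this excerpt.

def get_battery_level(robot_id: str = "bob") -> dict:
    """Return the current battery level of the robot in percent."""
    # On a real robot this would query a ROS topic or service;
    # here a fixed value is returned for illustration.
    return {"robot_id": robot_id, "battery_percent": 87}

# Shape of the OpenAI-compatible tool description that a node could
# derive from the function's signature and docstring:
TOOL_SCHEMA = {
    "type": "function",
    "function": {
        "name": "get_battery_level",
        "description": "Return the current battery level of the robot in percent.",
        "parameters": {
            "type": "object",
            "properties": {
                "robot_id": {"type": "string", "default": "bob"},
            },
            "required": [],
        },
    },
}
```

With a schema like this, the LLM can emit a `tool_calls` entry naming `get_battery_level`, the node executes the function, and the returned dict is fed back into the conversation as a tool result.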
## Docker Usage

The bob_llm node is available as a multi-arch Docker image. All ROS parameters can be configured via environment variables (prefixed with `LLM_`).

### Running with Docker

```sh
docker run -it --rm \
  --name bob-llm \
  -e LLM_API_URL="http://192.168.1.100:8000/v1" \
  -e LLM_API_KEY="your_secret_token" \
  -e LLM_API_MODEL="llama3" \
  -e LLM_TEMPERATURE="0.5" \
  ghcr.io/bob-ros2/bob-llm:latest
```
### Running with Docker Compose

```yaml
services:
  llm:
    image: ghcr.io/bob-ros2/bob-llm:latest
    container_name: bob-llm
    environment:
      - LLM_API_URL=http://llm-backend:8000/v1
      - LLM_API_KEY=sk-12345
      - LLM_API_MODEL=gpt-4
      - LLM_SYSTEM_PROMPT=You are a helpful robot assistant named Bob.
      - LLM_TEMPERATURE=0.8
    restart: always
```

Note: in Compose's list-style `environment` syntax, everything after `=` is the value, so the system prompt needs no surrounding quotes (quotes there would become part of the string).
## Installation

1. **Clone the Repository**

   Navigate to your ROS 2 workspace's `src` directory and clone the repository:

   ```sh
   cd ~/ros2_ws/src
   git clone https://github.com/bob-ros2/bob_llm.git
   ```

2. **Install Dependencies**

   The node requires a few Python packages. It is recommended to install these within a virtual environment:

   ```sh
   pip install requests PyYAML rich prompt_toolkit
   ```

3. **Build and Source**

   ```sh
   cd ~/ros2_ws
   colcon build --packages-select bob_llm
   source install/setup.bash
   ```
## Usage

### 1. Start the Brain (LLM Node)

Ensure your LLM server is active and the `api_url` in your params file is correct.

```sh
ros2 run bob_llm llm --ros-args --params-file /path/to/your/ros2_ws/src/bob_llm/config/node_params.yaml
```
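A minimal params file could look like the following sketch. The parameter names here are assumptions derived from the `LLM_*` environment variables above (`LLM_API_URL` → `api_url`, and so on); the actual `node_params.yaml` shipped with the package may use different names.

```yaml
# Hypothetical node_params.yaml sketch; parameter names assumed
# from the LLM_* environment-variable mapping, not taken from the package.
llm:
  ros__parameters:
    api_url: "http://192.168.1.100:8000/v1"
    api_key: "your_secret_token"
    api_model: "llama3"
    temperature: 0.5
    system_prompt: "You are a helpful robot assistant named Bob."
```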
### 2. Enter Interactive Chat

Interact with Bob through a dedicated, interactive terminal client.

```sh
# Start standard chat
ros2 run bob_llm chat

# Start with boxed UI (visual panels)
ros2 run bob_llm chat --panels
```
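For the multimodal input mentioned in the features, a JSON prompt in the common OpenAI content-array style might look like the sketch below. This is an assumption based on the OpenAI-compatible API claim; the exact prompt schema bob_llm expects is not shown in this excerpt.

```json
{
  "role": "user",
  "content": [
    {"type": "text", "text": "What do you see in front of the robot?"},
    {"type": "image_url",
     "image_url": {"url": "data:image/jpeg;base64,<BASE64_IMAGE>"}}
  ]
}
```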
### CLI Arguments for chat
## CONTRIBUTING

Any contribution that you make to this repository will be under the Apache 2.0 License, as dictated by that license:

> 5. Submission of Contributions. Unless You explicitly state otherwise,
> any Contribution intentionally submitted for inclusion in the Work
> by You to the Licensor shall be under the terms and conditions of
> this License, without any additional terms or conditions.
> Notwithstanding the above, nothing herein shall supersede or modify
> the terms of any separate license agreement you may have executed
> with Licensor regarding such Contributions.
Repository Summary
| Description | |
| Checkout URI | https://github.com/bob-ros2/bob_llm.git |
| VCS Type | git |
| VCS Version | main |
| Last Updated | 2026-04-25 |
| Dev Status | MAINTAINED |
| Released | RELEASED |
| Contributing |
Help Wanted (-)
Good First Issues (-) Pull Requests to Review (-) |
Packages
| Name | Version |
|---|---|
| bob_llm | 1.0.3 |
README
ROS Package bob_llm
The bob_llm package provides a ROS 2 node (llm node) that acts as a powerful interface to an external Large Language Model (LLM). It operates as a stateful service that maintains a conversation, connects to any OpenAI-compatible API, and features a robust tool execution system.
Features
-
OpenAI-Compatible: Connects to any LLM backend that exposes an OpenAI-compatible API endpoint (e.g.,
Ollama,vLLM,llama-cpp-python, commercial APIs). - Stateful Conversation: Maintains chat history to provide conversational context to the LLM.
- Dynamic Tool System: Dynamically loads Python functions from user-provided files and makes them available to the LLM. The LLM can request to call these functions to perform actions or gather information.
- Anthropic Agent Skills: Full support for the Anthropic Agent Skills specification, enabling modular, self-contained capabilities with documentation and execution logic.
- High Performance Streaming: Optimized byte-stream parsing ensures zero-latency delivery of reasoning tokens and response chunks directly from the socket (no internal buffering).
- Reasoning/Thinking Support: Real-time extraction and publishing of model reasoning (e.g., from Gemma 2 or DeepSeek) to a dedicated topic.
- Interactive Chat CLI: Includes a premium terminal interface with Markdown rendering and multi-line support.
- Multi-modality: Supports multimodal input (e.g., images) via JSON prompts.
-
Lightweight: The node core requires only standard Python libraries (
requests,rich,prompt_toolkit). -
Multi-arch Docker Support: Ready-to-use Docker images for
amd64andarm64, fully configurable via environment variables for easy deployment.
Docker Usage
The bob_llm node is available as a multi-arch Docker image. All ROS parameters can be configured via environment variables (prefixed with LLM_).
Running with Docker
docker run -it --rm \
--name bob-llm \
-e LLM_API_URL="http://192.168.1.100:8000/v1" \
-e LLM_API_KEY="your_secret_token" \
-e LLM_API_MODEL="llama3" \
-e LLM_TEMPERATURE="0.5" \
ghcr.io/bob-ros2/bob-llm:latest
Running with Docker Compose
services:
llm:
image: ghcr.io/bob-ros2/bob-llm:latest
container_name: bob-llm
environment:
- LLM_API_URL=http://llm-backend:8000/v1
- LLM_API_KEY=sk-12345
- LLM_API_MODEL=gpt-4
- LLM_SYSTEM_PROMPT="You are a helpful robot assistant named Bob."
- LLM_TEMPERATURE=0.8
restart: always
Installation
-
Clone the Repository
Navigate to your ROS 2 workspace’s
srcdirectory and clone the repository:
cd ~/ros2_ws/src
git clone https://github.com/bob-ros2/bob_llm.git
- Install Dependencies The node requires a few Python packages. It is recommended to install these within a virtual environment.
pip install requests PyYAML rich prompt_toolkit
- Build and Source
cd ~/ros2_ws
colcon build --packages-select bob_llm
source install/setup.bash
Usage
1. Start the Brain (LLM Node)
Ensure your LLM server is active and the api_url in your params file is correct.
ros2 run bob_llm llm --ros-args --params-file /path/to/your/ros2_ws/src/bob_llm/config/node_params.yaml
2. Enter Interactive Chat
Interact with Bob through a dedicated, interactive terminal client.
# Start standard chat
ros2 run bob_llm chat
# Start with premium boxed UI (visual panels)
ros2 run bob_llm chat --panels
CLI Arguments for chat
File truncated at 100 lines see the full file
CONTRIBUTING
Any contribution that you make to this repository will be under the Apache 2 License, as dictated by that license:
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
Repository Summary
| Description | |
| Checkout URI | https://github.com/bob-ros2/bob_llm.git |
| VCS Type | git |
| VCS Version | main |
| Last Updated | 2026-04-25 |
| Dev Status | MAINTAINED |
| Released | RELEASED |
| Contributing |
Help Wanted (-)
Good First Issues (-) Pull Requests to Review (-) |
Packages
| Name | Version |
|---|---|
| bob_llm | 1.0.3 |
README
ROS Package bob_llm
The bob_llm package provides a ROS 2 node (llm node) that acts as a powerful interface to an external Large Language Model (LLM). It operates as a stateful service that maintains a conversation, connects to any OpenAI-compatible API, and features a robust tool execution system.
Features
-
OpenAI-Compatible: Connects to any LLM backend that exposes an OpenAI-compatible API endpoint (e.g.,
Ollama,vLLM,llama-cpp-python, commercial APIs). - Stateful Conversation: Maintains chat history to provide conversational context to the LLM.
- Dynamic Tool System: Dynamically loads Python functions from user-provided files and makes them available to the LLM. The LLM can request to call these functions to perform actions or gather information.
- Anthropic Agent Skills: Full support for the Anthropic Agent Skills specification, enabling modular, self-contained capabilities with documentation and execution logic.
- High Performance Streaming: Optimized byte-stream parsing ensures zero-latency delivery of reasoning tokens and response chunks directly from the socket (no internal buffering).
- Reasoning/Thinking Support: Real-time extraction and publishing of model reasoning (e.g., from Gemma 2 or DeepSeek) to a dedicated topic.
- Interactive Chat CLI: Includes a premium terminal interface with Markdown rendering and multi-line support.
- Multi-modality: Supports multimodal input (e.g., images) via JSON prompts.
-
Lightweight: The node core requires only standard Python libraries (
requests,rich,prompt_toolkit). -
Multi-arch Docker Support: Ready-to-use Docker images for
amd64andarm64, fully configurable via environment variables for easy deployment.
Docker Usage
The bob_llm node is available as a multi-arch Docker image. All ROS parameters can be configured via environment variables (prefixed with LLM_).
Running with Docker
docker run -it --rm \
--name bob-llm \
-e LLM_API_URL="http://192.168.1.100:8000/v1" \
-e LLM_API_KEY="your_secret_token" \
-e LLM_API_MODEL="llama3" \
-e LLM_TEMPERATURE="0.5" \
ghcr.io/bob-ros2/bob-llm:latest
Running with Docker Compose
services:
llm:
image: ghcr.io/bob-ros2/bob-llm:latest
container_name: bob-llm
environment:
- LLM_API_URL=http://llm-backend:8000/v1
- LLM_API_KEY=sk-12345
- LLM_API_MODEL=gpt-4
- LLM_SYSTEM_PROMPT="You are a helpful robot assistant named Bob."
- LLM_TEMPERATURE=0.8
restart: always
Installation
-
Clone the Repository
Navigate to your ROS 2 workspace’s
srcdirectory and clone the repository:
cd ~/ros2_ws/src
git clone https://github.com/bob-ros2/bob_llm.git
- Install Dependencies The node requires a few Python packages. It is recommended to install these within a virtual environment.
pip install requests PyYAML rich prompt_toolkit
- Build and Source
cd ~/ros2_ws
colcon build --packages-select bob_llm
source install/setup.bash
Usage
1. Start the Brain (LLM Node)
Ensure your LLM server is active and the api_url in your params file is correct.
ros2 run bob_llm llm --ros-args --params-file /path/to/your/ros2_ws/src/bob_llm/config/node_params.yaml
2. Enter Interactive Chat
Interact with Bob through a dedicated, interactive terminal client.
# Start standard chat
ros2 run bob_llm chat
# Start with premium boxed UI (visual panels)
ros2 run bob_llm chat --panels
CLI Arguments for chat
File truncated at 100 lines see the full file
CONTRIBUTING
Any contribution that you make to this repository will be under the Apache 2 License, as dictated by that license:
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
Repository Summary
| Description | |
| Checkout URI | https://github.com/bob-ros2/bob_llm.git |
| VCS Type | git |
| VCS Version | main |
| Last Updated | 2026-04-25 |
| Dev Status | MAINTAINED |
| Released | RELEASED |
| Contributing |
Help Wanted (-)
Good First Issues (-) Pull Requests to Review (-) |
Packages
| Name | Version |
|---|---|
| bob_llm | 1.0.3 |
README
ROS Package bob_llm
The bob_llm package provides a ROS 2 node (llm node) that acts as a powerful interface to an external Large Language Model (LLM). It operates as a stateful service that maintains a conversation, connects to any OpenAI-compatible API, and features a robust tool execution system.
Features
-
OpenAI-Compatible: Connects to any LLM backend that exposes an OpenAI-compatible API endpoint (e.g.,
Ollama,vLLM,llama-cpp-python, commercial APIs). - Stateful Conversation: Maintains chat history to provide conversational context to the LLM.
- Dynamic Tool System: Dynamically loads Python functions from user-provided files and makes them available to the LLM. The LLM can request to call these functions to perform actions or gather information.
- Anthropic Agent Skills: Full support for the Anthropic Agent Skills specification, enabling modular, self-contained capabilities with documentation and execution logic.
- High Performance Streaming: Optimized byte-stream parsing ensures zero-latency delivery of reasoning tokens and response chunks directly from the socket (no internal buffering).
- Reasoning/Thinking Support: Real-time extraction and publishing of model reasoning (e.g., from Gemma 2 or DeepSeek) to a dedicated topic.
- Interactive Chat CLI: Includes a premium terminal interface with Markdown rendering and multi-line support.
- Multi-modality: Supports multimodal input (e.g., images) via JSON prompts.
-
Lightweight: The node core requires only standard Python libraries (
requests,rich,prompt_toolkit). -
Multi-arch Docker Support: Ready-to-use Docker images for
amd64andarm64, fully configurable via environment variables for easy deployment.
Docker Usage
The bob_llm node is available as a multi-arch Docker image. All ROS parameters can be configured via environment variables (prefixed with LLM_).
Running with Docker
docker run -it --rm \
--name bob-llm \
-e LLM_API_URL="http://192.168.1.100:8000/v1" \
-e LLM_API_KEY="your_secret_token" \
-e LLM_API_MODEL="llama3" \
-e LLM_TEMPERATURE="0.5" \
ghcr.io/bob-ros2/bob-llm:latest
Running with Docker Compose
services:
llm:
image: ghcr.io/bob-ros2/bob-llm:latest
container_name: bob-llm
environment:
- LLM_API_URL=http://llm-backend:8000/v1
- LLM_API_KEY=sk-12345
- LLM_API_MODEL=gpt-4
- LLM_SYSTEM_PROMPT="You are a helpful robot assistant named Bob."
- LLM_TEMPERATURE=0.8
restart: always
Installation
-
Clone the Repository
Navigate to your ROS 2 workspace’s
srcdirectory and clone the repository:
cd ~/ros2_ws/src
git clone https://github.com/bob-ros2/bob_llm.git
- Install Dependencies The node requires a few Python packages. It is recommended to install these within a virtual environment.
pip install requests PyYAML rich prompt_toolkit
- Build and Source
cd ~/ros2_ws
colcon build --packages-select bob_llm
source install/setup.bash
Usage
1. Start the Brain (LLM Node)
Ensure your LLM server is active and the api_url in your params file is correct.
ros2 run bob_llm llm --ros-args --params-file /path/to/your/ros2_ws/src/bob_llm/config/node_params.yaml
2. Enter Interactive Chat
Interact with Bob through a dedicated, interactive terminal client.
# Start standard chat
ros2 run bob_llm chat
# Start with premium boxed UI (visual panels)
ros2 run bob_llm chat --panels
CLI Arguments for chat
File truncated at 100 lines see the full file
CONTRIBUTING
Any contribution that you make to this repository will be under the Apache 2 License, as dictated by that license:
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
Repository Summary
| Description | |
| Checkout URI | https://github.com/bob-ros2/bob_llm.git |
| VCS Type | git |
| VCS Version | main |
| Last Updated | 2026-04-25 |
| Dev Status | MAINTAINED |
| Released | RELEASED |
| Contributing |
Help Wanted (-)
Good First Issues (-) Pull Requests to Review (-) |
Packages
| Name | Version |
|---|---|
| bob_llm | 1.0.3 |
README
ROS Package bob_llm
The bob_llm package provides a ROS 2 node (llm node) that acts as a powerful interface to an external Large Language Model (LLM). It operates as a stateful service that maintains a conversation, connects to any OpenAI-compatible API, and features a robust tool execution system.
Features
-
OpenAI-Compatible: Connects to any LLM backend that exposes an OpenAI-compatible API endpoint (e.g.,
Ollama,vLLM,llama-cpp-python, commercial APIs). - Stateful Conversation: Maintains chat history to provide conversational context to the LLM.
- Dynamic Tool System: Dynamically loads Python functions from user-provided files and makes them available to the LLM. The LLM can request to call these functions to perform actions or gather information.
- Anthropic Agent Skills: Full support for the Anthropic Agent Skills specification, enabling modular, self-contained capabilities with documentation and execution logic.
- High Performance Streaming: Optimized byte-stream parsing ensures zero-latency delivery of reasoning tokens and response chunks directly from the socket (no internal buffering).
- Reasoning/Thinking Support: Real-time extraction and publishing of model reasoning (e.g., from Gemma 2 or DeepSeek) to a dedicated topic.
- Interactive Chat CLI: Includes a premium terminal interface with Markdown rendering and multi-line support.
- Multi-modality: Supports multimodal input (e.g., images) via JSON prompts.
-
Lightweight: The node core requires only standard Python libraries (
requests,rich,prompt_toolkit). -
Multi-arch Docker Support: Ready-to-use Docker images for
amd64andarm64, fully configurable via environment variables for easy deployment.
Docker Usage
The bob_llm node is available as a multi-arch Docker image. All ROS parameters can be configured via environment variables (prefixed with LLM_).
Running with Docker
docker run -it --rm \
--name bob-llm \
-e LLM_API_URL="http://192.168.1.100:8000/v1" \
-e LLM_API_KEY="your_secret_token" \
-e LLM_API_MODEL="llama3" \
-e LLM_TEMPERATURE="0.5" \
ghcr.io/bob-ros2/bob-llm:latest
Running with Docker Compose
services:
llm:
image: ghcr.io/bob-ros2/bob-llm:latest
container_name: bob-llm
environment:
- LLM_API_URL=http://llm-backend:8000/v1
- LLM_API_KEY=sk-12345
- LLM_API_MODEL=gpt-4
- LLM_SYSTEM_PROMPT="You are a helpful robot assistant named Bob."
- LLM_TEMPERATURE=0.8
restart: always
Installation
-
Clone the Repository
Navigate to your ROS 2 workspace’s
srcdirectory and clone the repository:
cd ~/ros2_ws/src
git clone https://github.com/bob-ros2/bob_llm.git
- Install Dependencies The node requires a few Python packages. It is recommended to install these within a virtual environment.
pip install requests PyYAML rich prompt_toolkit
- Build and Source
cd ~/ros2_ws
colcon build --packages-select bob_llm
source install/setup.bash
Usage
1. Start the Brain (LLM Node)
Ensure your LLM server is active and the api_url in your params file is correct.
ros2 run bob_llm llm --ros-args --params-file /path/to/your/ros2_ws/src/bob_llm/config/node_params.yaml
2. Enter Interactive Chat
Interact with Bob through a dedicated, interactive terminal client.
# Start standard chat
ros2 run bob_llm chat
# Start with premium boxed UI (visual panels)
ros2 run bob_llm chat --panels
CLI Arguments for chat
File truncated at 100 lines see the full file
CONTRIBUTING
Any contribution that you make to this repository will be under the Apache 2 License, as dictated by that license:
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
Repository Summary
| Description | |
| Checkout URI | https://github.com/bob-ros2/bob_llm.git |
| VCS Type | git |
| VCS Version | main |
| Last Updated | 2026-04-25 |
| Dev Status | MAINTAINED |
| Released | RELEASED |
| Contributing |
Help Wanted (-)
Good First Issues (-) Pull Requests to Review (-) |
Packages
| Name | Version |
|---|---|
| bob_llm | 1.0.3 |
README
ROS Package bob_llm
The bob_llm package provides a ROS 2 node (llm node) that acts as a powerful interface to an external Large Language Model (LLM). It operates as a stateful service that maintains a conversation, connects to any OpenAI-compatible API, and features a robust tool execution system.
Features
-
OpenAI-Compatible: Connects to any LLM backend that exposes an OpenAI-compatible API endpoint (e.g.,
Ollama,vLLM,llama-cpp-python, commercial APIs). - Stateful Conversation: Maintains chat history to provide conversational context to the LLM.
- Dynamic Tool System: Dynamically loads Python functions from user-provided files and makes them available to the LLM. The LLM can request to call these functions to perform actions or gather information.
- Anthropic Agent Skills: Full support for the Anthropic Agent Skills specification, enabling modular, self-contained capabilities with documentation and execution logic.
- High Performance Streaming: Optimized byte-stream parsing ensures zero-latency delivery of reasoning tokens and response chunks directly from the socket (no internal buffering).
- Reasoning/Thinking Support: Real-time extraction and publishing of model reasoning (e.g., from Gemma 2 or DeepSeek) to a dedicated topic.
- Interactive Chat CLI: Includes a premium terminal interface with Markdown rendering and multi-line support.
- Multi-modality: Supports multimodal input (e.g., images) via JSON prompts.
-
Lightweight: The node core requires only standard Python libraries (
requests,rich,prompt_toolkit). -
Multi-arch Docker Support: Ready-to-use Docker images for
amd64andarm64, fully configurable via environment variables for easy deployment.
Docker Usage
The bob_llm node is available as a multi-arch Docker image. All ROS parameters can be configured via environment variables (prefixed with LLM_).
Running with Docker
docker run -it --rm \
--name bob-llm \
-e LLM_API_URL="http://192.168.1.100:8000/v1" \
-e LLM_API_KEY="your_secret_token" \
-e LLM_API_MODEL="llama3" \
-e LLM_TEMPERATURE="0.5" \
ghcr.io/bob-ros2/bob-llm:latest
Running with Docker Compose
services:
llm:
image: ghcr.io/bob-ros2/bob-llm:latest
container_name: bob-llm
environment:
- LLM_API_URL=http://llm-backend:8000/v1
- LLM_API_KEY=sk-12345
- LLM_API_MODEL=gpt-4
- LLM_SYSTEM_PROMPT="You are a helpful robot assistant named Bob."
- LLM_TEMPERATURE=0.8
restart: always
Installation
-
Clone the Repository
Navigate to your ROS 2 workspace’s
srcdirectory and clone the repository:
cd ~/ros2_ws/src
git clone https://github.com/bob-ros2/bob_llm.git
- Install Dependencies The node requires a few Python packages. It is recommended to install these within a virtual environment.
pip install requests PyYAML rich prompt_toolkit
- Build and Source
cd ~/ros2_ws
colcon build --packages-select bob_llm
source install/setup.bash
Usage
1. Start the Brain (LLM Node)
Ensure your LLM server is active and the api_url in your params file is correct.
ros2 run bob_llm llm --ros-args --params-file /path/to/your/ros2_ws/src/bob_llm/config/node_params.yaml
2. Enter Interactive Chat
Interact with Bob through a dedicated, interactive terminal client.
# Start standard chat
ros2 run bob_llm chat
# Start with premium boxed UI (visual panels)
ros2 run bob_llm chat --panels
CLI Arguments for chat
File truncated at 100 lines see the full file
CONTRIBUTING
Any contribution that you make to this repository will be under the Apache 2 License, as dictated by that license:
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
Repository Summary
| Description | |
| Checkout URI | https://github.com/bob-ros2/bob_llm.git |
| VCS Type | git |
| VCS Version | main |
| Last Updated | 2026-04-25 |
| Dev Status | MAINTAINED |
| Released | RELEASED |
| Contributing |
Help Wanted (-)
Good First Issues (-) Pull Requests to Review (-) |
Packages
| Name | Version |
|---|---|
| bob_llm | 1.0.3 |
README
ROS Package bob_llm
The bob_llm package provides a ROS 2 node (llm node) that acts as a powerful interface to an external Large Language Model (LLM). It operates as a stateful service that maintains a conversation, connects to any OpenAI-compatible API, and features a robust tool execution system.
Features
-
OpenAI-Compatible: Connects to any LLM backend that exposes an OpenAI-compatible API endpoint (e.g.,
Ollama,vLLM,llama-cpp-python, commercial APIs). - Stateful Conversation: Maintains chat history to provide conversational context to the LLM.
- Dynamic Tool System: Dynamically loads Python functions from user-provided files and makes them available to the LLM. The LLM can request to call these functions to perform actions or gather information.
- Anthropic Agent Skills: Full support for the Anthropic Agent Skills specification, enabling modular, self-contained capabilities with documentation and execution logic.
- High Performance Streaming: Optimized byte-stream parsing ensures zero-latency delivery of reasoning tokens and response chunks directly from the socket (no internal buffering).
- Reasoning/Thinking Support: Real-time extraction and publishing of model reasoning (e.g., from Gemma 2 or DeepSeek) to a dedicated topic.
- Interactive Chat CLI: Includes a premium terminal interface with Markdown rendering and multi-line support.
- Multi-modality: Supports multimodal input (e.g., images) via JSON prompts.
-
Lightweight: The node core requires only standard Python libraries (
requests,rich,prompt_toolkit). -
Multi-arch Docker Support: Ready-to-use Docker images for
amd64andarm64, fully configurable via environment variables for easy deployment.
Docker Usage
The bob_llm node is available as a multi-arch Docker image. All ROS parameters can be configured via environment variables (prefixed with LLM_).
Running with Docker
docker run -it --rm \
--name bob-llm \
-e LLM_API_URL="http://192.168.1.100:8000/v1" \
-e LLM_API_KEY="your_secret_token" \
-e LLM_API_MODEL="llama3" \
-e LLM_TEMPERATURE="0.5" \
ghcr.io/bob-ros2/bob-llm:latest
Running with Docker Compose
services:
llm:
image: ghcr.io/bob-ros2/bob-llm:latest
container_name: bob-llm
environment:
- LLM_API_URL=http://llm-backend:8000/v1
- LLM_API_KEY=sk-12345
- LLM_API_MODEL=gpt-4
- LLM_SYSTEM_PROMPT="You are a helpful robot assistant named Bob."
- LLM_TEMPERATURE=0.8
restart: always
Installation
-
Clone the Repository
Navigate to your ROS 2 workspace’s
srcdirectory and clone the repository:
cd ~/ros2_ws/src
git clone https://github.com/bob-ros2/bob_llm.git
- Install Dependencies The node requires a few Python packages. It is recommended to install these within a virtual environment.
pip install requests PyYAML rich prompt_toolkit
- Build and Source
cd ~/ros2_ws
colcon build --packages-select bob_llm
source install/setup.bash
Usage
1. Start the Brain (LLM Node)
Ensure your LLM server is active and the api_url in your params file is correct.
ros2 run bob_llm llm --ros-args --params-file /path/to/your/ros2_ws/src/bob_llm/config/node_params.yaml
2. Enter Interactive Chat
Interact with Bob through a dedicated, interactive terminal client.
# Start standard chat
ros2 run bob_llm chat
# Start with premium boxed UI (visual panels)
ros2 run bob_llm chat --panels
CLI Arguments for chat
File truncated at 100 lines see the full file
CONTRIBUTING
Any contribution that you make to this repository will be under the Apache 2 License, as dictated by that license:
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
Repository Summary
| Description | |
| Checkout URI | https://github.com/bob-ros2/bob_llm.git |
| VCS Type | git |
| VCS Version | main |
| Last Updated | 2026-04-25 |
| Dev Status | MAINTAINED |
| Released | RELEASED |
| Contributing |
Help Wanted (-)
Good First Issues (-) Pull Requests to Review (-) |
Packages
| Name | Version |
|---|---|
| bob_llm | 1.0.3 |
README
ROS Package bob_llm
The bob_llm package provides a ROS 2 node (llm node) that acts as a powerful interface to an external Large Language Model (LLM). It operates as a stateful service that maintains a conversation, connects to any OpenAI-compatible API, and features a robust tool execution system.
Features
-
OpenAI-Compatible: Connects to any LLM backend that exposes an OpenAI-compatible API endpoint (e.g.,
Ollama,vLLM,llama-cpp-python, commercial APIs). - Stateful Conversation: Maintains chat history to provide conversational context to the LLM.
- Dynamic Tool System: Dynamically loads Python functions from user-provided files and makes them available to the LLM. The LLM can request to call these functions to perform actions or gather information.
- Anthropic Agent Skills: Full support for the Anthropic Agent Skills specification, enabling modular, self-contained capabilities with documentation and execution logic.
- High Performance Streaming: Optimized byte-stream parsing ensures zero-latency delivery of reasoning tokens and response chunks directly from the socket (no internal buffering).
- Reasoning/Thinking Support: Real-time extraction and publishing of model reasoning (e.g., from Gemma 2 or DeepSeek) to a dedicated topic.
- Interactive Chat CLI: Includes a premium terminal interface with Markdown rendering and multi-line support.
- Multi-modality: Supports multimodal input (e.g., images) via JSON prompts.
-
Lightweight: The node core requires only standard Python libraries (
requests,rich,prompt_toolkit). -
Multi-arch Docker Support: Ready-to-use Docker images for
amd64andarm64, fully configurable via environment variables for easy deployment.
Docker Usage
The bob_llm node is available as a multi-arch Docker image. All ROS parameters can be configured via environment variables (prefixed with LLM_).
Running with Docker
docker run -it --rm \
--name bob-llm \
-e LLM_API_URL="http://192.168.1.100:8000/v1" \
-e LLM_API_KEY="your_secret_token" \
-e LLM_API_MODEL="llama3" \
-e LLM_TEMPERATURE="0.5" \
ghcr.io/bob-ros2/bob-llm:latest
Running with Docker Compose
services:
llm:
image: ghcr.io/bob-ros2/bob-llm:latest
container_name: bob-llm
environment:
- LLM_API_URL=http://llm-backend:8000/v1
- LLM_API_KEY=sk-12345
- LLM_API_MODEL=gpt-4
      - LLM_SYSTEM_PROMPT=You are a helpful robot assistant named Bob.
- LLM_TEMPERATURE=0.8
restart: always
Installation
- Clone the Repository: Navigate to your ROS 2 workspace's src directory and clone the repository:
cd ~/ros2_ws/src
git clone https://github.com/bob-ros2/bob_llm.git
- Install Dependencies: The node requires a few Python packages; it is recommended to install them within a virtual environment.
pip install requests PyYAML rich prompt_toolkit
- Build and Source:
cd ~/ros2_ws
colcon build --packages-select bob_llm
source install/setup.bash
Usage
1. Start the Brain (LLM Node)
Ensure your LLM server is active and the api_url in your params file is correct.
ros2 run bob_llm llm --ros-args --params-file /path/to/your/ros2_ws/src/bob_llm/config/node_params.yaml
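The contents of node_params.yaml are not included in this excerpt. A minimal sketch, assuming the parameter names mirror the LLM_-prefixed environment variables from the Docker section (api_url, api_key, api_model, temperature) and that the node is named llm:

```yaml
# Hypothetical node_params.yaml sketch; parameter names are assumed
# from the LLM_* environment variables, not taken from the package.
llm:
  ros__parameters:
    api_url: "http://192.168.1.100:8000/v1"
    api_key: "your_secret_token"
    api_model: "llama3"
    temperature: 0.5
```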
2. Enter Interactive Chat
Interact with Bob through a dedicated, interactive terminal client.
# Start standard chat
ros2 run bob_llm chat
# Start with premium boxed UI (visual panels)
ros2 run bob_llm chat --panels
CLI Arguments for chat
(README truncated at 100 lines; see the full file in the repository.)
CONTRIBUTING
Any contribution you make to this repository will be under the Apache License 2.0, as dictated by that license:
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.