Repository Summary

| Description | |
|---|---|
| Checkout URI | https://github.com/automatika-robotics/ros-agents.git |
| VCS Type | git |
| VCS Version | main |
| Last Updated | 2026-03-21 |
| Dev Status | DEVELOPED |
| Released | RELEASED |
| Contributing | Help Wanted (-), Good First Issues (-), Pull Requests to Review (-) |
Packages
| Name | Version |
|---|---|
| automatika_embodied_agents | 0.6.0 |
README
Part of the [EMOS](https://github.com/automatika-robotics/emos) ecosystem

[MIT License](https://opensource.org/licenses/MIT) | [Python](https://www.python.org/downloads/) | [ROS 2 Humble](https://docs.ros.org/en/humble/index.html) | [Discord](https://discord.gg/B9ZU6qjzND)

**The production-grade framework for deploying Physical AI**

[**EMOS Documentation**](https://emos.automatikarobotics.com) | [**Developer Docs**](https://automatika-robotics.github.io/embodied-agents/) | [**Discord**](https://discord.gg/B9ZU6qjzND)
What is EmbodiedAgents?
EmbodiedAgents is the intelligence layer of the EMOS (Embodied Operating System) ecosystem. It enables you to create interactive, physical agents that don’t just chat, but understand, move, manipulate, and adapt to their environment.
For full documentation, tutorials, and recipes, visit [emos.automatikarobotics.com](https://emos.automatikarobotics.com).
Key Features

- **Production Ready** – Robust orchestration layer built on native ROS 2. Deploy Physical AI that is simple, scalable, and reliable.
- **Self-Referential Logic** – Agents that are self-aware. Start, stop, or reconfigure components based on internal or external events. Switch between cloud and local ML on the fly.
- **Run Fully Offline** – Built-in local models for LLM, VLM, STT, and TTS. No server required. Optimized for edge devices and NVIDIA Jetson.
- **Spatio-Temporal Memory** – Hierarchical spatio-temporal memory and semantic routing. Build arbitrarily complex graphs for agentic information flow.
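The semantic-routing idea above can be illustrated with a small, library-free sketch. Note that the `route` function and the keyword table below are hypothetical illustrations of the concept, not the EmbodiedAgents API:

```python
# Conceptual sketch of semantic routing: choose a downstream handler
# for each incoming message based on its content. The real framework
# routes between ROS 2 components; this toy version routes between
# plain Python callables.

def describe_scene(text: str) -> str:
    return f"[VLM] describing scene for: {text}"

def answer_question(text: str) -> str:
    return f"[LLM] answering: {text}"

# Hypothetical routing table: keyword -> handler.
ROUTES = {
    "see": describe_scene,
    "look": describe_scene,
}

def route(text: str) -> str:
    """Send the message to the first matching handler, else a default."""
    for keyword, handler in ROUTES.items():
        if keyword in text.lower():
            return handler(text)
    return answer_question(text)

print(route("What do you see?"))  # matches "see" -> scene handler
print(route("Tell me a joke"))    # no match -> default handler
```

In the real framework this decision is made semantically (e.g. with embeddings) rather than by keyword, but the control flow is the same: one input, many possible downstream components.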
Quick Start
Create a VLM-powered agent that can answer questions about what it sees:
```python
from agents.clients.ollama import OllamaClient
from agents.components import VLM
from agents.models import OllamaModel
from agents.ros import Topic, Launcher

# Input and output topics for the component
text0 = Topic(name="text0", msg_type="String")
image0 = Topic(name="image_raw", msg_type="Image")
text1 = Topic(name="text1", msg_type="String")

# Model served through a local Ollama instance
qwen_vl = OllamaModel(name="qwen_vl", checkpoint="qwen2.5vl:latest")
qwen_client = OllamaClient(qwen_vl)

# Inference runs whenever a new message arrives on the trigger topic
vlm = VLM(
    inputs=[text0, image0],
    outputs=[text1],
    model_client=qwen_client,
    trigger=text0,
    component_name="vqa",
)

launcher = Launcher()
launcher.add_pkg(components=[vlm])
launcher.bringup()
```
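The trigger semantics in the example (the component caches all inputs but runs its model only when a new message arrives on `text0`) can be sketched without ROS 2. `ToyComponent` below is a hypothetical stand-in for illustration, not the framework's component class:

```python
from typing import Callable, Optional

class ToyComponent:
    """Minimal stand-in for a triggered component: every input is cached,
    but the callback fires only when the trigger input updates."""

    def __init__(self, trigger: str, callback: Callable[[dict], str]):
        self.trigger = trigger
        self.callback = callback
        self.inputs: dict = {}
        self.last_output: Optional[str] = None

    def receive(self, topic: str, message: str) -> None:
        self.inputs[topic] = message
        if topic == self.trigger:  # only the trigger topic fires inference
            self.last_output = self.callback(self.inputs)

vqa = ToyComponent(
    trigger="text0",
    callback=lambda inp: f"answer about {inp.get('image_raw', '<no image>')}: {inp['text0']}",
)

vqa.receive("image_raw", "frame_001")     # cached, no inference yet
assert vqa.last_output is None
vqa.receive("text0", "What do you see?")  # trigger topic -> inference runs
print(vqa.last_output)
```

This is why the VLM above lists `image0` as an input but triggers on `text0`: camera frames stream continuously, while inference happens once per question.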
Run Fully Offline
Every AI component can run with a built-in local model – no server, no cloud, no heavy frameworks. Just set enable_local_model=True:
```python
from agents.components import LLM
from agents.config import LLMConfig
from agents.ros import Topic, Launcher

config = LLMConfig(
    enable_local_model=True,
    device_local_model="cpu",  # or "cuda"
    ncpu_local_model=4,
)

llm = LLM(
    inputs=[Topic(name="user_query", msg_type="String")],
    outputs=[Topic(name="response", msg_type="String")],
    config=config,
    trigger=Topic(name="user_query", msg_type="String"),
    component_name="local_brain",
)
```
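When toggling between local and cloud inference, it can help to pick the device and thread count from the runtime environment at startup rather than hard-coding them. The sketch below is framework-agnostic; the function name is hypothetical and the real `LLMConfig` fields may differ:

```python
import os

def pick_local_model_settings() -> dict:
    """Choose device/thread settings for a local model from the environment.
    Illustrative only; not part of the EmbodiedAgents API."""
    # Prefer CUDA when the runtime advertises a visible GPU.
    cuda_visible = os.environ.get("CUDA_VISIBLE_DEVICES", "")
    device = "cuda" if cuda_visible not in ("", "-1") else "cpu"
    # Leave one core free for the ROS executor when running on CPU.
    ncpu = max(1, (os.cpu_count() or 2) - 1)
    return {"device_local_model": device, "ncpu_local_model": ncpu}

print(pick_local_model_settings())
```

The resulting dictionary could then be unpacked into the config, e.g. `LLMConfig(enable_local_model=True, **pick_local_model_settings())`.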
*(README truncated at 100 lines; see the repository for the full file.)*
CONTRIBUTING