How to Set Up Hermes Agent on a VPS: Docker vs Root Install
Learn how to deploy Hermes Agent on a virtual private server using Docker or root install, connect Telegram, and set up GitHub backup with automated crons.
Why Deploy Hermes Agent on a VPS
Running an AI agent locally is fine for testing. But if you want it available 24/7, reliably handling Telegram messages, running scheduled tasks, and persisting state between sessions — a virtual private server is the right call.
Hermes Agent is an open-source autonomous agent framework built on top of the NousResearch Hermes model family. It supports multi-step reasoning, tool use, and persistent memory, making it a solid choice for self-hosted automation. The key question when deploying it isn’t whether to use a VPS — it’s how to install it.
This guide walks through both approaches: Docker (containerized, portable, easier to manage) and root install (direct system install, more control). You’ll also learn how to connect a Telegram bot, and set up a GitHub backup routine with automated cron jobs.
Prerequisites Before You Start
Before picking an installation method, make sure your environment is ready.
VPS Requirements
- OS: Ubuntu 22.04 LTS (recommended) or Debian 11+
- RAM: Minimum 4 GB; 8 GB or more recommended if running the model locally
- CPU: 2+ vCPUs
- Storage: At least 20 GB free
- Access: Root or sudo user via SSH
If you’re running inference locally on the VPS (instead of calling an API), budget significantly more RAM and consider a GPU-enabled instance. For API-based setups (OpenAI-compatible endpoints, Together AI, etc.), a standard 4–8 GB RAM instance is sufficient.
Software Dependencies
You’ll need either:
- Docker: Docker Engine 24+ and Docker Compose v2+
- Root install: Python 3.10+, pip, git, and optionally virtualenv or pyenv
Both methods assume you have a domain or static IP configured for your VPS and that ports 80/443 (if using a reverse proxy) are open in your firewall.
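If your VPS uses ufw, opening those ports looks like the sketch below (an assumption — adjust for your actual firewall, and make sure your SSH port is allowed before enabling):

```shell
# Allow SSH plus HTTP/HTTPS for a reverse proxy (assumes ufw)
ufw allow 22/tcp
ufw allow 80/tcp
ufw allow 443/tcp
ufw enable
```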
Docker vs Root Install: Choosing the Right Method
Neither approach is universally better. The right choice depends on your priorities.
| Factor | Docker | Root Install |
|---|---|---|
| Setup complexity | Low — containers handle dependencies | Medium — manual dependency management |
| Portability | High — run the same container anywhere | Low — tied to system configuration |
| Isolation | Strong — fully sandboxed | Weak — shares system libraries |
| Resource overhead | Slight overhead from container runtime | Minimal — runs directly on OS |
| Upgrade process | Pull new image, restart | Pull from git, reinstall deps |
| Debugging | Requires logging into container | Direct file access |
| Best for | Teams, CI/CD pipelines, multiple agents | Single-server, power users, custom builds |
Docker is the better default for most people. You get consistent behavior across environments, easier rollbacks, and fewer “it works on my machine” issues.
Root install makes more sense if you’re heavily customizing the agent code, need tight control over Python versions, or want to avoid Docker’s layer of abstraction during active development.
Install Hermes Agent with Docker
Step 1: Install Docker Engine
SSH into your VPS and run the official Docker install script:
curl -fsSL https://get.docker.com | sh
Add your user to the Docker group so you can run commands without sudo:
sudo usermod -aG docker $USER
newgrp docker
Verify it’s working:
docker --version
docker compose version
Step 2: Clone the Repository
git clone https://github.com/NousResearch/hermes-agent.git
cd hermes-agent
If the project uses a specific branch for stable releases, check the README and switch accordingly:
git checkout main
Step 3: Configure Environment Variables
Copy the example environment file and fill in your values:
cp .env.example .env
nano .env
Key variables to set:
- MODEL_API_URL — your LLM endpoint (local Ollama, Together AI, OpenAI, etc.)
- MODEL_API_KEY — API key if using a cloud provider
- TELEGRAM_BOT_TOKEN — from BotFather (covered below)
- AGENT_NAME — display name for the agent
- DATA_DIR — path where the agent stores memory and logs
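Put together, a filled-in .env might look like this sketch (every value below is a placeholder — substitute your own endpoint, key, and token):

```shell
# LLM endpoint (cloud provider or local Ollama)
MODEL_API_URL=https://api.together.xyz/v1
MODEL_API_KEY=your_api_key_here
# From BotFather
TELEGRAM_BOT_TOKEN=123456789:your_bot_token_here
AGENT_NAME=Hermes
DATA_DIR=/root/hermes-agent/data
```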
Step 4: Build and Start the Container
docker compose up -d --build
This pulls the base image, installs Python dependencies inside the container, and starts the agent as a background service.
Check that it’s running:
docker compose ps
docker compose logs -f
You should see the agent initialize, load its system prompt, and begin listening for input.
Step 5: Enable Auto-Restart on Reboot
The docker compose file should already include restart: unless-stopped in the service definition. Confirm it’s there:
services:
  hermes-agent:
    restart: unless-stopped
If it’s missing, add it manually and restart:
docker compose down && docker compose up -d
Install Hermes Agent with Root Install
Step 1: Update System and Install Python
apt update && apt upgrade -y
apt install -y python3 python3-pip python3-venv git curl
Check your Python version:
python3 --version
You need 3.10 or higher. If your system ships an older version, use pyenv to install a newer one.
Step 2: Clone the Repository
git clone https://github.com/NousResearch/hermes-agent.git
cd hermes-agent
Step 3: Create a Virtual Environment
This keeps Hermes Agent’s dependencies isolated from your system Python packages:
python3 -m venv .venv
source .venv/bin/activate
Step 4: Install Dependencies
pip install --upgrade pip
pip install -r requirements.txt
Some agents have optional dependency groups (e.g., for local model inference vs. API-only). Check requirements-local.txt or similar files in the repo if you need those.
Step 5: Configure Environment Variables
cp .env.example .env
nano .env
Fill in the same variables as in the Docker method above.
Step 6: Run the Agent as a Systemd Service
Instead of running it manually in a terminal, create a systemd service so it starts automatically:
nano /etc/systemd/system/hermes-agent.service
Paste the following (adjust paths as needed):
[Unit]
Description=Hermes Agent
After=network.target
[Service]
Type=simple
User=root
WorkingDirectory=/root/hermes-agent
ExecStart=/root/hermes-agent/.venv/bin/python main.py
Restart=on-failure
RestartSec=5
EnvironmentFile=/root/hermes-agent/.env
[Install]
WantedBy=multi-user.target
Enable and start the service:
systemctl daemon-reload
systemctl enable hermes-agent
systemctl start hermes-agent
Check status:
systemctl status hermes-agent
journalctl -u hermes-agent -f
Connect Hermes Agent to Telegram
Telegram is a natural interface for interacting with a VPS-hosted agent — it’s accessible from any device, supports bot APIs natively, and handles authentication out of the box.
Step 1: Create a Telegram Bot
- Open Telegram and search for @BotFather
- Send /newbot
- Choose a name and username (the username must end in bot)
- BotFather returns a bot token — copy it
Step 2: Get Your Telegram User ID
Search for @userinfobot and send /start. It returns your numeric user ID. You’ll use this to restrict who can interact with your agent.
Step 3: Add Credentials to .env
TELEGRAM_BOT_TOKEN=your_token_here
TELEGRAM_ALLOWED_USERS=123456789,987654321
The TELEGRAM_ALLOWED_USERS variable (or equivalent in your specific config) limits access to specific user IDs — important for security since the bot will be publicly reachable.
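As a sketch of how such an allow-list check typically works (this is illustrative bash, not Hermes Agent's actual implementation — the function name and output strings are made up):

```shell
# Exact-match lookup of a user ID in a comma-separated allow-list.
# Surrounding commas prevent partial matches (e.g. 1234 vs 12345).
TELEGRAM_ALLOWED_USERS="123456789,987654321"

is_allowed() {
  local id="$1"
  case ",$TELEGRAM_ALLOWED_USERS," in
    *",$id,"*) echo "allowed" ;;
    *)         echo "unauthorized" ;;
  esac
}

is_allowed 123456789   # prints "allowed"
is_allowed 42          # prints "unauthorized"
```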
Step 4: Verify the Connection
Restart your agent (Docker or systemd), then open Telegram and send a message to your bot. You should get a response within a few seconds.
For Docker:
docker compose restart
docker compose logs -f
For root install:
systemctl restart hermes-agent
journalctl -u hermes-agent -f
If the bot isn’t responding, check that your VPS can reach the Telegram API (some providers block outbound connections by default). Also verify the token is correct and the bot is active.
Set Up GitHub Backup with Automated Crons
Agent memory, conversation history, and custom configurations accumulate over time. Losing them to a disk failure or accidental deletion is avoidable — a simple git-based backup routine handles it.
Step 1: Create a Private GitHub Repository
Go to GitHub and create a new private repository called something like hermes-agent-backup. Don’t initialize it with any files.
Step 2: Generate a Deploy Key
On your VPS:
ssh-keygen -t ed25519 -C "hermes-backup" -f ~/.ssh/hermes_backup -N ""
cat ~/.ssh/hermes_backup.pub
Copy the public key output. In your GitHub repo, go to Settings → Deploy keys → Add deploy key. Paste it in, check “Allow write access,” and save.
Add the private key to your SSH config:
nano ~/.ssh/config
Host github-hermes
  HostName github.com
  User git
  IdentityFile ~/.ssh/hermes_backup
Step 3: Initialize Git in the Agent Data Directory
Your agent stores persistent data somewhere — check your .env for DATA_DIR or look for a data/ or memory/ folder in the project:
cd /root/hermes-agent/data
git init
git branch -M main
git remote add origin git@github-hermes:yourusername/hermes-agent-backup.git
Create a .gitignore to exclude logs and large model files:
nano .gitignore
*.log
*.bin
*.gguf
__pycache__/
Step 4: Write the Backup Script
nano /root/backup-hermes.sh
#!/bin/bash
cd /root/hermes-agent/data || exit 1
git add -A
# Only commit when something actually changed, so the cron run
# doesn't fail on an empty commit
git diff --cached --quiet || git commit -m "Auto backup $(date '+%Y-%m-%d %H:%M:%S')"
git push origin main
Make it executable:
chmod +x /root/backup-hermes.sh
Test it manually first:
bash /root/backup-hermes.sh
Step 5: Schedule with Cron
Open crontab:
crontab -e
Add a line to run the backup every 6 hours:
0 */6 * * * /root/backup-hermes.sh >> /var/log/hermes-backup.log 2>&1
To verify it’s scheduled:
crontab -l
Your agent data now backs up automatically every 6 hours, with full git history so you can roll back to any previous state.
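Since each backup is a plain git commit, rolling back is a standard git operation. The throwaway-repo sketch below demonstrates the flow end to end; in practice you would run the same checkout-and-commit steps inside your data directory against a real backup commit (file names and commit messages here are illustrative):

```shell
#!/bin/bash
# Demonstrate restoring agent data from an earlier backup commit,
# using a temporary repo so the sketch is self-contained.
set -e
dir=$(mktemp -d)
cd "$dir"
git init -q
git config user.email "demo@example.com"
git config user.name "demo"

echo "v1" > memory.json                        # state at first backup
git add -A && git commit -qm "Auto backup 1"
echo "v2-corrupted" > memory.json              # state at second backup
git add -A && git commit -qm "Auto backup 2"

# Roll back: restore the working tree from the first backup commit
first=$(git rev-list --max-parents=0 HEAD)
git checkout "$first" -- .
git commit -qam "Restore data from $first"

cat memory.json   # prints "v1"
```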
Troubleshooting Common Issues
Agent starts but doesn’t respond to Telegram messages
- Double-check TELEGRAM_BOT_TOKEN in your .env — a trailing space breaks it
- Verify your VPS can reach api.telegram.org (curl https://api.telegram.org)
- Check that the allowed users list includes your correct user ID
Docker container keeps restarting
Run docker compose logs and look for the actual error. Common causes:
- Missing required environment variables
- Port already in use
- API key invalid or missing
Python dependency errors during root install
Make sure you activated the virtual environment before installing (source .venv/bin/activate). If you see version conflicts, try:
pip install --upgrade -r requirements.txt
GitHub backup push fails
Test the SSH connection manually:
ssh -T git@github-hermes
If it fails, recheck that the deploy key is added to the correct repo and that ~/.ssh/config uses the right hostname alias.
High memory usage
If you’re running local inference, RAM usage can spike significantly. Consider:
- Using a quantized model (GGUF Q4 format)
- Switching to an API-based backend to offload inference
- Increasing your VPS RAM or adding swap space
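If you opt for swap, a common recipe for a 4 GB swapfile looks like this (system configuration — run as root, and note that fallocate does not work on every filesystem):

```shell
# Create and enable a 4 GB swapfile
fallocate -l 4G /swapfile
chmod 600 /swapfile
mkswap /swapfile
swapon /swapfile
# Persist across reboots
echo '/swapfile none swap sw 0 0' >> /etc/fstab
```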
Where MindStudio Fits in Your Multi-Agent Setup
Deploying Hermes Agent on a VPS gives you a capable, self-hosted reasoning engine. But most real-world automation workflows require more than one agent — and wiring them together manually gets complicated fast.
That’s where MindStudio is useful. MindStudio is a no-code platform for building and orchestrating AI agents and automated workflows. You can use it to build agents that complement your self-hosted Hermes setup — handling tasks like:
- Sending processed data to HubSpot or Airtable after Hermes completes a reasoning task
- Triggering Hermes via a webhook and routing its output to Slack, email, or a Google Sheet
- Scheduling background agents that feed context into your Hermes instance on a recurring basis
For developers, MindStudio’s Agent Skills Plugin — an npm SDK — lets any AI agent call over 120 typed capabilities as simple method calls. Methods like agent.sendEmail(), agent.searchGoogle(), or agent.runWorkflow() mean your Hermes agent can delegate specific tasks to MindStudio’s infrastructure instead of building those integrations from scratch.
You can try MindStudio free at mindstudio.ai.
Frequently Asked Questions
What is Hermes Agent and what is it used for?
Hermes Agent is an autonomous AI agent framework built on NousResearch’s Hermes model series. It supports multi-step task planning, tool use, memory persistence, and conversational interfaces like Telegram. It’s used for self-hosted automation workflows, personal AI assistants, and multi-agent systems where data privacy and full control matter.
Should I use Docker or root install for Hermes Agent on a VPS?
Docker is the better default for most use cases. It handles dependency isolation, makes upgrades simpler, and behaves consistently across environments. Root install is worth considering if you’re actively modifying the agent’s source code, need direct file access during debugging, or prefer to avoid the Docker overhead on a low-resource VPS.
How much RAM does a Hermes Agent VPS need?
For API-based setups (where inference runs on a cloud endpoint like Together AI or OpenAI-compatible APIs), 4 GB RAM is usually sufficient. If you’re running local inference with an Ollama or llama.cpp backend, plan for at least 8–16 GB depending on the model size and quantization level.
Can I run multiple agents on the same VPS?
Yes. With Docker, each agent runs in its own container, making it straightforward to manage multiple instances. Assign each one a different set of environment variables and (if needed) different ports. With root install, use separate virtual environments and systemd services for each agent. Monitor memory usage closely — running multiple agents locally can strain resources quickly.
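With Docker Compose, two instances can be declared side by side; a sketch (the service names and env file paths are illustrative, not from the Hermes Agent repo):

```yaml
services:
  hermes-agent-a:
    build: .
    env_file: .env.a        # its own bot token, AGENT_NAME, DATA_DIR
    restart: unless-stopped
  hermes-agent-b:
    build: .
    env_file: .env.b
    restart: unless-stopped
```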
How do I update Hermes Agent after the initial install?
For Docker:
git pull
docker compose down
docker compose up -d --build
For root install:
git pull
source .venv/bin/activate
pip install -r requirements.txt
systemctl restart hermes-agent
Always check the changelog before updating — breaking changes in .env variables or config format can cause the agent to fail on startup.
Is it safe to expose my Hermes Agent Telegram bot publicly?
Your bot is publicly discoverable by its username, but that’s fine as long as you restrict access by user ID using TELEGRAM_ALLOWED_USERS (or the equivalent config option in your setup). Anyone who finds the bot will just get an “unauthorized” response. Keep the bot token itself secret, and never leave the allowed users list empty or unrestricted in a production deployment.
Key Takeaways
- Docker is the recommended install method for most VPS deployments — easier to manage, portable, and cleanly isolated. Root install is better for active development and customization.
- Systemd (root) or restart: unless-stopped (Docker) ensures your agent recovers automatically after VPS reboots or crashes.
- Telegram integration requires a BotFather token and should always be restricted to specific user IDs for security.
- GitHub backups via cron are a low-effort, reliable way to protect agent memory and configuration data — set them up before you need them.
- For complex multi-agent workflows, combining a self-hosted Hermes instance with a platform like MindStudio lets you offload integrations and orchestration without rebuilding that infrastructure yourself.