
How to Run OpenClaw on a Private VPS Without Exposing Your Keys or Local Files

A practical guide to self-hosting OpenClaw on a private VPS with tighter isolation, safer key handling, and less risk than running an autonomous agent on your personal laptop.

By GetClaw Team · May 10, 2026 · 9 min read

How do you run OpenClaw safely?

The safest practical way to run OpenClaw is to host it on a private VPS instead of your everyday laptop, keep API keys in server-side environment variables, restrict inbound access, and give the agent access only to the files, tools, and channels it actually needs. That setup reduces the blast radius if a skill, MCP server, prompt injection, or messaging integration behaves unexpectedly.

OpenClaw is designed for self-hosted, multi-channel AI agent workflows. That flexibility is the reason people want it, but it is also why deployment discipline matters. If an agent can read your filesystem, use your shell, and connect to Slack or Telegram, then where you host it becomes a security decision, not just a convenience decision.

Why not run OpenClaw directly on your laptop?

Running OpenClaw on a personal MacBook or desktop is fast for testing, but it usually mixes high-trust human data with a high-autonomy agent runtime.

Typical local risks include:

  • Personal SSH keys sitting on the same machine
  • Browser sessions and saved cookies available to the OS user
  • Access to Downloads, Desktop, Documents, and source repos you never meant to expose
  • Mixed work and personal chat integrations
  • Weak process isolation when you install extra tools and community skills quickly

For a demo, local is fine. For persistent automation, a private VPS is the cleaner default.

What does a safer OpenClaw architecture look like?

Use a dedicated host for the agent, a separate location for secrets, and narrow network exposure.

| Layer | Recommendation | Why it matters |
|---|---|---|
| Compute | Dedicated VPS or private VM | Keeps agent runtime off your daily machine |
| Secrets | Environment variables or a secrets manager | Avoids hardcoding provider keys in repo files |
| Network | Allow SSH, block everything else by default | Reduces accidental public exposure |
| Storage | Dedicated working directories only | Limits what the agent can read or mutate |
| Integrations | Connect only required channels and tools | Shrinks the impact of bad prompts or buggy skills |
| Models | Route through a gateway or pinned provider config | Simplifies audit and key rotation |

An example layout looks like this:

Your Phone / Slack / Telegram
            |
            v
        OpenClaw
            |
            v
   Private VPS or VM boundary
            |
   +--------+--------+
   |                 |
   v                 v
Allowed tools     Model gateway
and MCP servers   or provider APIs

Step 1: Start with a clean private server

Provision a fresh Linux VPS for OpenClaw only. Do not reuse the same machine that already hosts unrelated production apps, personal backups, or internal databases unless you are deliberately segmenting it.

Minimum baseline:

  • Fresh Ubuntu or Debian host
  • One non-root admin user
  • SSH key auth only
  • Firewall enabled
  • Automatic security updates enabled

Example hardening commands:

# create a non-root admin user
adduser claw
usermod -aG sudo claw

# set up SSH key auth for that user, then add your public key
# to /home/claw/.ssh/authorized_keys
mkdir -p /home/claw/.ssh
chmod 700 /home/claw/.ssh
chown -R claw:claw /home/claw/.ssh

# once key login works, disable password authentication
sed -i 's/^#\?PasswordAuthentication.*/PasswordAuthentication no/' /etc/ssh/sshd_config
systemctl restart ssh

# default-deny firewall: SSH in, everything else blocked
ufw default deny incoming
ufw default allow outgoing
ufw allow OpenSSH
ufw enable

If you are using a managed private AI host like GetClaw, this baseline is simpler because the machine is already dedicated to your AI workloads and you do not have to share infrastructure with unknown tenants.

Step 2: Install OpenClaw in its own directory

Keep the runtime isolated in a dedicated path instead of dropping it into a general-purpose home directory full of unrelated files.

sudo mkdir -p /opt/openclaw
sudo chown claw:claw /opt/openclaw

cd /opt/openclaw
git clone https://github.com/openclaw/openclaw.git .

This matters because autonomous agents tend to accumulate tools, logs, and memory files. If everything lives under one predictable tree, it is easier to audit and back up.
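Because everything lives under one tree, a periodic backup can be a single archive command. A minimal sketch: the function name and backup destination are our own, and the exclusions assume logs are disposable and that secrets end in `.env`:

```shell
#!/usr/bin/env sh
# backup_openclaw SRC DEST: archive the OpenClaw tree into DEST,
# skipping log output and any *.env secret files anywhere in the tree.
backup_openclaw() {
    src="$1"; dest="$2"
    mkdir -p "$dest"
    tar --exclude='logs' --exclude='*.env' \
        -czf "$dest/openclaw-$(date +%Y%m%d).tar.gz" "$src"
}

# Typical invocation on the VPS (run as the claw user, or from cron):
# backup_openclaw /opt/openclaw /var/backups/openclaw
```

Keeping secrets out of backups matters as much as keeping them out of the repo; a tarball that wanders onto a laptop should not carry provider keys with it.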

Step 3: Keep API keys out of the repo

Do not commit API keys, bot tokens, webhook secrets, or session tokens into .env examples checked into git. Store them server-side and load them at runtime.

Example:

sudo mkdir -p /etc/openclaw
sudo chmod 700 /etc/openclaw

sudo tee /etc/openclaw/openclaw.env >/dev/null <<'EOF'
OPENAI_API_KEY=replace_me
ANTHROPIC_API_KEY=replace_me
SLACK_BOT_TOKEN=replace_me
TELEGRAM_BOT_TOKEN=replace_me
EOF

sudo chmod 600 /etc/openclaw/openclaw.env
sudo chown root:root /etc/openclaw/openclaw.env

This is the minimum acceptable pattern. A dedicated secrets manager is better if you already use one.
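When you launch the agent by hand rather than through a service manager, you can source the file into the environment at startup. A sketch, assuming the file path created above; the `load_env` helper is our own, and it refuses to load a file that is readable by anyone but its owner:

```shell
#!/usr/bin/env sh
# Load KEY=value pairs from an env file into the environment,
# refusing if the file is group- or world-readable.
load_env() {
    file="$1"
    perms=$(stat -c '%a' "$file")
    case "$perms" in
        600|400) : ;;  # owner-only: acceptable
        *) echo "refusing: $file has mode $perms" >&2; return 1 ;;
    esac
    set -a      # export every variable assigned while sourcing
    . "$file"
    set +a
}

# load_env /etc/openclaw/openclaw.env && exec npm run start
```

The permission check is cheap insurance: a `chmod` mistake then fails loudly at startup instead of silently sharing keys with every local user.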

Step 4: Limit what the agent can touch

OpenClaw does not need your whole machine to be useful. Create explicit working directories for the jobs you want the agent to perform.

Example:

mkdir -p /opt/openclaw/workspace
mkdir -p /opt/openclaw/logs
mkdir -p /opt/openclaw/data

Then point the agent, skills, and MCP servers only at those paths. Do not mount:

  • Your personal home directory
  • Shared team drives unless required
  • Production SSH keys
  • Browser profile directories
  • Password manager exports

The easiest way to make an agent safer is to make fewer things reachable.
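If you end up running the agent under systemd (see Step 7), the filesystem boundary can be enforced at the service level rather than by convention alone. A sketch of a hardening drop-in; the unit name and paths assume the layout used in this guide, and you may need to relax these directives if the runtime writes elsewhere (for example, an npm cache under the service user's home):

```ini
# /etc/systemd/system/openclaw.service.d/sandbox.conf
[Service]
# Hide /home, /root, and /run/user from the service
ProtectHome=yes
# Mount the rest of the filesystem read-only...
ProtectSystem=strict
# ...except the directories the agent legitimately writes to
ReadWritePaths=/opt/openclaw/workspace /opt/openclaw/logs /opt/openclaw/data
# Block privilege escalation via setuid binaries
NoNewPrivileges=yes
```

With this in place, a misbehaving skill that tries to touch `~/.ssh` hits a kernel-enforced wall instead of relying on the agent's good manners.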

Step 5: Connect only the channels you actually need

One reason OpenClaw is attractive is its support for Slack, Telegram, WhatsApp, Discord, and other messaging surfaces. That does not mean you should wire up every channel on day one.

A safer rollout looks like:

  1. Start with one internal channel, such as a private Slack bot
  2. Validate prompts, tool access, and logs
  3. Add a second channel only after the first is stable
  4. Keep customer-facing or external channels for later

This reduces the number of inbound surfaces through which an attacker, a curious coworker, or a malformed message can steer the agent into dangerous actions.

Step 6: Treat MCP servers and community skills as code execution boundaries

MCP is useful because it gives agents standardized access to tools and context. It is also where many teams accidentally widen the blast radius.

Before enabling an MCP server or third-party skill, check:

  • What exact commands or APIs can it invoke?
  • Does it have write access or only read access?
  • Which credentials does it hold?
  • Can it reach the public internet?
  • Does it expose local files beyond the intended scope?

As of 2026, MCP security has become a mainstream concern because local tool bridges can turn small mistakes into full remote code execution or secret leakage paths. The safe default is to grant read-only access first and expand later only when a workflow clearly needs it.
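One way to make "read-only first" concrete is to containerize an MCP server with read-only mounts and no network. A hedged sketch, assuming you run tools under Docker; the image name is a placeholder, not a real MCP server:

```shell
# --read-only   : the container's own filesystem is immutable
# --network none: no internet access unless the tool genuinely needs it
# :ro mount     : the agent workspace is visible but not writable
docker run --rm --read-only --network none \
  -v /opt/openclaw/workspace:/data:ro \
  example/mcp-fs-server:latest
```

Dropping `:ro` or `--network none` later, for one specific workflow, is a deliberate and reviewable change rather than a default you have to claw back.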

Step 7: Put OpenClaw behind a process manager

Use a service manager so the agent restarts cleanly, logs predictably, and reads its environment from one place.

Example systemd unit:

[Unit]
Description=OpenClaw
After=network.target

[Service]
User=claw
WorkingDirectory=/opt/openclaw
EnvironmentFile=/etc/openclaw/openclaw.env
ExecStart=/usr/bin/npm run start
Restart=always
RestartSec=5

[Install]
WantedBy=multi-user.target

Then enable it:

sudo systemctl daemon-reload
sudo systemctl enable openclaw
sudo systemctl start openclaw
sudo systemctl status openclaw

This gives you a repeatable operational model instead of a terminal tab you hope never closes.

Step 8: Add a model gateway if you need multiple providers

If your OpenClaw setup uses more than one model provider, route them through a dedicated gateway instead of scattering secrets and provider-specific logic across multiple configs.

A gateway helps with:

  • Key rotation
  • Usage tracking
  • Failover between providers
  • Consistent request formats
  • Cleaner audit boundaries

This is one reason private AI infrastructure is attractive: the agent runtime, gateway, logs, and controls can live inside one environment you manage.
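Many gateways expose an OpenAI-compatible endpoint, so pointing the agent at the gateway can be as simple as overriding the base URL in the env file from Step 3. A sketch under that assumption; `OPENAI_BASE_URL` and `OPENAI_API_KEY` are the usual OpenAI-SDK conventions, and the gateway host and key below are placeholders:

```shell
#!/usr/bin/env sh
# Emit gateway settings to append to the server-side env file.
# The agent then talks only to the local gateway, which holds the
# real provider keys and handles rotation and failover.
gateway_env() {
    cat <<'EOF'
OPENAI_BASE_URL=http://127.0.0.1:4000/v1
OPENAI_API_KEY=gateway_virtual_key
EOF
}

# gateway_env | sudo tee -a /etc/openclaw/openclaw.env >/dev/null
```

The payoff is that rotating a provider key touches one gateway config, not every agent, skill, and channel integration that happens to call a model.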

A practical deployment checklist

Use this checklist before you call your setup production-ready.

  • OpenClaw runs on a dedicated VPS or VM
  • SSH is key-only and password login is disabled
  • Firewall defaults to deny incoming traffic
  • Secrets live outside the repo
  • The agent has access only to specific working directories
  • Only required chat channels are connected
  • MCP servers start read-only where possible
  • Logs are stored centrally and reviewed
  • Model access is routed through one controlled path
  • Backups exclude unnecessary secrets and personal data

When should you use GetClaw instead of rolling your own server?

Use a fully managed private AI host when you want the isolation benefits of self-hosting without spending time on general infrastructure setup.

GetClaw fits best if you want:

  • Dedicated AI infrastructure instead of shared hosting
  • Full root access for OpenClaw, MCP servers, and local models
  • A multi-model gateway in the same environment
  • A cleaner security boundary than running an agent on a personal machine

If your goal is simply to test OpenClaw, a local install is faster. If your goal is to run an autonomous agent persistently, connect it to real channels, and keep the blast radius under control, private infrastructure is the stronger default.

The bottom line

OpenClaw is powerful because it can act across tools, channels, and models. That same power is exactly why you should not casually run it on the same machine that holds your daily keys, files, and browser sessions.

The safer pattern is straightforward: host the agent on a private VPS, keep secrets server-side, narrow filesystem access, limit integrations, and treat every MCP server or skill as a trust decision. That gives you the upside of self-hosted autonomy without turning your laptop into the weakest link.

If you want a dedicated place to run OpenClaw with root access and private AI infrastructure already in place, start with GetClaw's private AI cloud and pair it with the multi-model gateway.

FAQ

Is OpenClaw safer on a VPS than on a laptop?

Usually yes. A private VPS gives you a narrower blast radius, cleaner network controls, and fewer unrelated personal secrets than a daily-use laptop.

Should you self-host OpenClaw or use a managed environment?

Use self-hosting when control and private boundaries matter. Use a managed environment when convenience matters more than operating the stack yourself.

What is the biggest OpenClaw security mistake?

Running it on a machine that already contains your main SSH keys, browser sessions, personal files, and broad local tool access.

Sources and notes

  • OpenClaw positions itself as a self-hosted gateway for AI agents across messaging channels and private workflows.
  • The risk guidance in this article assumes 2026-style agent deployments where MCP servers, browser tools, and channel integrations can expand the blast radius if access is overbroad.
  • Related reading: Meet OpenClaw, MCP security, public AI API vs BYOK vs self-hosted models.
