Pricing

Pay once.
Own everything.

No per-seat. No per-token. No usage meters. You run it on your hardware — we give you the tools to make it extraordinary.

Solo
Free
forever · open source core
The full br CLI. All 84 tools. Run it on your Mac, your Pi, your servers. No account required.
84 CLI tools
Local Ollama models
Fleet management (br nodes)
GEB oracle (br geb)
CECE identity (br cece)
Tokenless gateway
Agent emails
30,000-agent runtime
Get on GitHub →
Enterprise
Custom
contact us · unlimited scale
30,000 agents. Railway GPU cluster. Custom agent identities. Dedicated infra. SLA. White-glove onboarding.
Everything in Pro
30,000-agent runtime
Railway A100/H100 GPU
Custom agent identities
Dedicated Cloudflare Workers
SLA + uptime guarantee
Direct line to Alexa
On-prem deployment option
Talk to us →

The pricing philosophy

No token tax
We don't meter your AI usage. Run Qwen locally, call Claude through the gateway — either way, you pay for the platform, not the tokens.
No vendor lock-in
BlackRoad OS runs on your hardware. If you stop paying, you keep the CLI, the tools, the agents. You own what you built.
No per-seat nonsense
Your team, your agents, your choice. We don't charge per developer or per agent. The platform scales with you, not against you.
Agents aren't rows in a database
Each agent has identity, memory, and email. We don't charge per agent. That would be like charging per thought.

FAQ

Can I run this completely offline?
Yes. The br CLI and local Ollama models work without internet. Only cloud provider features (Claude, GPT) require connectivity, and those route through your gateway.
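That routing rule can be sketched in a few lines of Python. The model names and URLs below are placeholders for illustration, not BlackRoad's actual configuration; the point is that local models resolve to the local Ollama daemon and need no connectivity, while cloud models resolve to your gateway.

```python
# Illustrative routing sketch: local Ollama models run offline against the
# local daemon; cloud models (Claude, GPT) route through your gateway.
# Model names and URLs are placeholders, not BlackRoad's real config.

LOCAL_MODELS = {"qwen2.5", "llama3", "mistral"}   # served by local Ollama
OLLAMA_URL = "http://localhost:11434"             # Ollama's default port
GATEWAY_URL = "https://gateway.example"           # hypothetical gateway URL

def endpoint_for(model: str) -> str:
    """Return the base URL a request for this model should hit."""
    return OLLAMA_URL if model in LOCAL_MODELS else GATEWAY_URL
```

Pull the plug and everything in `LOCAL_MODELS` keeps working; only the gateway-routed models need a network.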
What hardware do I need?
Anything from a Raspberry Pi to a datacenter. Most of BlackRoad OS was built on a Mac and a fleet of Pis. The Pro plan runs comfortably on a Mac Mini + 2–3 Pis.
Do agents need their own API keys?
Never. That's the point of the tokenless gateway. Agents call the gateway; the gateway owns the secrets. Verified by verify-tokenless-agents.sh on every commit.
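The pattern is simple enough to sketch. Everything below is illustrative — the function names and payload shapes are assumptions, not the gateway's actual API — but it shows the invariant the verifier checks: the agent's request carries no credentials, and the provider key is attached server-side by the gateway.

```python
# Hypothetical sketch of the tokenless-gateway pattern. Names and payload
# shapes are illustrative, not BlackRoad's real API. The invariant: agents
# never hold provider secrets; the gateway injects them on the way out.

PROVIDER_KEYS = {"claude": "sk-...redacted..."}   # lives only on the gateway

def agent_request(provider: str, prompt: str) -> dict:
    # What an agent sends — note there is no API key anywhere.
    return {"provider": provider, "prompt": prompt}

def gateway_forward(request: dict) -> dict:
    # The gateway owns the secrets and attaches them before forwarding.
    key = PROVIDER_KEYS[request["provider"]]
    return {
        "headers": {"Authorization": f"Bearer {key}"},
        "body": {"prompt": request["prompt"]},
    }

outbound = gateway_forward(agent_request("claude", "hello"))
```

Rotating a key means updating one place — the gateway — instead of every agent.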
Can I use my own LLM models?
Absolutely. BlackRoad wraps Ollama — point it at any model. We have first-class support for Qwen, DeepSeek, Llama, Mistral, and custom Modelfile-defined personalities like CECE and Lucidia.
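Because BlackRoad wraps Ollama, custom personalities use Ollama's standard Modelfile format. A minimal, hypothetical example — the base model and system prompt here are placeholders, not the actual CECE definition:

```
# Hypothetical Modelfile — not the real CECE definition.
FROM qwen2.5:7b
PARAMETER temperature 0.7
SYSTEM """You are CECE, a concise, curious assistant."""
```

Build and run it with the usual Ollama commands: `ollama create cece -f Modelfile`, then `ollama run cece`.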
What's the catch?
There isn't one. We're a small team building infrastructure we use ourselves. The pricing reflects what it costs to keep the lights on and keep building.