Deployment
How a Valmar deployment is provisioned and what it requires.
Every Valmar customer runs on a dedicated deployment. There is no shared multi-tenant cluster: your organization, people, projects, and knowledge requests live inside an isolated stack that only you and your operators can reach. Once the deployment is up, the rest of these docs assume you have a base URL and an org-admin credential to start from.
Interested in a Valmar deployment?
Get in touch on our homepage at www.getvalmar.com to set one up.
What you get
- A dedicated Valmar stack (backend, Admin UI, and Expert UI) at a base URL the operator gives you, e.g. https://valmar.your-company.com.
- An initial org-admin account for the Admin UI.
- An organization ID. From there an org-admin creates projects, adds people, and issues project credentials (valmr_proj_sk_...) that the SDK uses.
Requirements
A Valmar deployment needs only two things from you:
A Docker-compatible host
Anything that runs OCI containers works: Docker or Podman on a Linux VM, Docker Compose, Kubernetes, Fly.io, ECS, Nomad.
An OpenAI-API-compatible LLM endpoint
Anything that speaks the OpenAI Chat Completions API: hosted (OpenAI, Azure OpenAI, Together, Groq, Bedrock via a gateway), self-hosted (vLLM, Ollama, LM Studio, llama.cpp server), or fully on-prem. Configure it with OPENAI_BASE_URL and OPENAI_API_KEY.
That is the entire dependency surface. No managed database service, no proprietary vector store, no vendor SDK lock-in.
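Before wiring an endpoint into Valmar, it is worth confirming it actually speaks the Chat Completions API. Here is a minimal sketch using curl; the base URL, key, and model name are placeholders (shown pointing at a hosted OpenAI-style endpoint, but a self-hosted vLLM or Ollama server works the same way):

```sh
# Placeholders: point these at whichever endpoint you trust.
export OPENAI_BASE_URL="https://api.openai.com/v1"
export OPENAI_API_KEY="sk-placeholder"

# Any OpenAI-compatible server should answer this with a JSON body
# containing a choices[] array.
curl -s "$OPENAI_BASE_URL/chat/completions" \
  -H "Authorization: Bearer $OPENAI_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"model": "your-model-name", "messages": [{"role": "user", "content": "ping"}]}'
```

If that returns a completion, the same two variables will work unchanged in the Valmar configuration.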
Deployment shapes
Valmar-hosted
We run the deployment for you. You receive a base URL and an org-admin credential; skip ahead to Installation.
Customer-hosted (single host)
Run the Valmar stack with Docker Compose on a Linux VM. Point OPENAI_BASE_URL at any OpenAI-compatible endpoint you trust.
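As a sketch of what that looks like in practice, assuming the operator has shipped you a docker-compose.yml for the Valmar containers (the .env layout beyond OPENAI_BASE_URL and OPENAI_API_KEY is illustrative, not part of this page's contract):

```sh
# docker compose reads .env from the working directory; only the two
# LLM variables below are documented requirements.
cat > .env <<'EOF'
OPENAI_BASE_URL=https://api.openai.com/v1
OPENAI_API_KEY=sk-placeholder
EOF

docker compose up -d      # start the stack in the background
docker compose logs -f    # watch it come up
```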
Customer-hosted (Kubernetes)
The same containers can be deployed as a Helm chart or plain manifests. The only stateful component is Postgres; everything else is stateless.
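As a sketch only (this page names no published chart; the chart path, value keys, and pod label below are hypothetical), a Helm-based rollout follows the usual shape, with persistence enabled for Postgres and everything else free to scale horizontally:

```sh
# Hypothetical chart path and value keys; substitute whatever the
# operator actually ships.
helm install valmar ./charts/valmar \
  --set env.OPENAI_BASE_URL="https://api.openai.com/v1" \
  --set env.OPENAI_API_KEY="sk-placeholder" \
  --set postgres.persistence.enabled=true   # the one stateful piece

kubectl get pods -l app=valmar   # stateless pods; scale replicas freely
```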
What the operator hands off
Before you continue to Installation, make sure you have:
- The deployment's base URL (this becomes VALMAR_BASE_URL).
- An organization ID (this becomes VALMAR_ORGANIZATION_ID).
- An org-admin login for the Admin UI (used to create projects and invite people; not used by the SDK).
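If it helps to stage the first two for the Installation steps, a shell session might export them like so (the variable names come from this page; the values are placeholders for what your operator hands you):

```sh
# Values come from the operator handoff; these are placeholders.
export VALMAR_BASE_URL="https://valmar.your-company.com"
export VALMAR_ORGANIZATION_ID="your-organization-id"
```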
Data residency
Because the LLM endpoint is bring-your-own, prompts and gathered context never leave the LLM boundary you choose. Self-hosted deployments paired with a self-hosted LLM keep all data on your infrastructure.
Next
Once you have a base URL and an organization ID, continue to Installation.