A managed platform for hiring, onboarding, and running specialized AI employees. Think "Fiverr for OpenClaw" — each employee is a containerized OpenClaw instance with a defined role, persistent memory, and scoped access to your tools (GitHub, Slack, Gmail).
Built as a hackathon MVP. The current demo bar is "hired and running": a user can browse the talent directory, onboard an employee through a 4-step hire flow, and dispatch tasks that execute inside a live Docker container backed by Kimi K2.5 through the OpenClaw gateway.
```
Host (your Mac)
├── Frontend (app)        http://localhost:5173   Vite + React 19
└── Docker Desktop
    └── openclaw-agents (bridge network)
        ├── Platform API       http://localhost:8000   FastAPI, Docker socket mounted
        └── Agent containers   172.x.x.x:8080          OpenClaw gateway + task server sidecar
```
- Platform backend dispatches tasks to agent containers over the Docker bridge network via HTTP POST.
- Each agent container runs the official OpenClaw gateway alongside a FastAPI task server (`backend/agent-runtime/server.py`) on port 8080.
- LLM inference uses Kimi (Moonshot AI) via OpenClaw's OpenAI-compatible `/v1/chat/completions` endpoint.
- Supabase is the only external dependency — it holds users, hired employees, and encrypted credentials.
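The dispatch path is a plain HTTP POST from the platform to an agent container's task server. The helper below only assembles that request; the function name and payload fields are illustrative assumptions, while the `/task` route, port 8080, and the `openclaw-internal` default token come from this README.

```python
# Hypothetical sketch of the platform -> agent dispatch request.
# Route, port, and default token come from this README; the helper
# name and payload shape are assumptions, not the real dispatcher.
def build_dispatch_request(agent_ip: str, task: dict,
                           token: str = "openclaw-internal") -> dict:
    """Build keyword arguments for an HTTP POST to an agent's task server."""
    return {
        "url": f"http://{agent_ip}:8080/task",
        "headers": {"Authorization": f"Bearer {token}"},
        "json": task,
    }

# Actual dispatch (requires httpx and a running agent container):
#   import httpx
#   httpx.post(**build_dispatch_request("172.18.0.5", {"instruction": "review PR"}))
```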
```
AgentOS/
├── backend/                 FastAPI platform API
│   ├── app/
│   │   ├── routers/         users, agents, tasks, credentials, gateway, chat, auth, roles
│   │   ├── services/        orchestrator, dispatcher, credential_store, gateway, template_loader
│   │   ├── models/          Supabase data access
│   │   ├── schemas/         Pydantic request/response models
│   │   └── utils/           crypto helpers
│   ├── agent-runtime/       OpenClaw + task server sidecar (Dockerfile, entrypoint, server.py)
│   ├── agent-config/
│   │   └── templates/       Role templates (secretary, code-review-engineer, customer-support)
│   ├── migrations/          Supabase SQL migrations
│   └── tests/               Pytest unit tests
│
├── app/                     Vite + React 19 frontend (started by start.sh)
│   └── src/pages/           Home, Login, Agents, Page1-5
│
├── start.sh                 One-shot local dev bootstrapper
├── LOCAL_SETUP.md           Authoritative local setup guide
├── ROADMAP.md               Hackathon scope + post-hackathon phases
├── PROJECT_CONTEXT.md       Product brainstorming + design decisions
├── AGENT_SYSTEM_PROMPT.md   System prompt used by agent containers
└── HANDOFF.md               Frontend build brief
```
FastAPI app mounted at `backend/app/main.py`. Router surface:
| Router | Purpose |
|---|---|
| `users` | Account creation and lookup |
| `auth` | Session auth + legacy compat routes |
| `agents` | Hire, list, offboard AI employees |
| `roles` | Role template discovery |
| `credentials` | OAuth + simulated credential storage (AES-encrypted at rest) |
| `tasks` | Dispatch tasks to running agent containers |
| `gateway` | OAuth callback surface (GitHub real; Slack/Gmail simulated) |
| `chat` | Chat passthrough to OpenClaw's `/v1/chat/completions` |
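Since the chat router passes through to an OpenAI-compatible endpoint, request bodies follow the familiar chat-completions shape. In the sketch below, only the model id comes from this README; the mounted path shown in the comment is a guess, not a documented route.

```python
# Sketch of an OpenAI-compatible chat-completions payload for the
# passthrough router. Only the model id comes from this README; the
# request path in the comment below is an assumption.
import json

def chat_payload(messages: list[dict], model: str = "moonshot/kimi-k2.5") -> str:
    """Serialize a chat-completions request body."""
    return json.dumps({"model": model, "messages": messages})

# Example (hypothetical mount path):
#   POST http://localhost:8000/chat/v1/chat/completions
#   body = chat_payload([{"role": "user", "content": "Summarize today's PRs"}])
```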
Key services:
- `orchestrator` — provisions and tears down agent containers via the Docker SDK.
- `dispatcher` — routes tasks from the platform to agent container internal IPs.
- `credential_store` — Fernet-encrypted credential vault.
- `template_loader` — reads YAML role templates from `agent-config/templates/`.
- `gateway` — builds OAuth URLs and handles token exchange.
Stack: Python 3.12, FastAPI, Supabase, Docker SDK, cryptography (Fernet), httpx, PyYAML.
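The credential vault's encryption layer can be sketched with the `cryptography` library's Fernet recipe, which the stack list names. The class and method names below are a minimal illustration under that assumption, not the real `credential_store` service.

```python
# Minimal sketch of a Fernet-encrypted credential vault, assuming the
# cryptography library named in the stack; class and method names are
# illustrative, not the real credential_store code.
from cryptography.fernet import Fernet

class CredentialStore:
    def __init__(self, key: bytes):
        self._fernet = Fernet(key)

    def seal(self, secret: str) -> bytes:
        """Encrypt a credential before persisting it."""
        return self._fernet.encrypt(secret.encode())

    def unseal(self, token: bytes) -> str:
        """Decrypt a credential read back from storage."""
        return self._fernet.decrypt(token).decode()

# Usage:
#   store = CredentialStore(Fernet.generate_key())
#   token = store.seal("ghp_example")   # round-trips back to "ghp_example"
```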
app/ — Vite + React 19 + React Router + Tailwind. Started by start.sh on :5173. Pages: Home, Login, Agents, and a numbered Page1-5 flow for hire/onboarding. The backend is reached via the Vite dev proxy (/api/* → http://localhost:8000/*), so no BACKEND_URL env var is needed at build time.
Each hired employee runs as a Docker container built from backend/agent-runtime/Dockerfile:
- Base: OpenClaw gateway image.
- Sidecar: a FastAPI task server (`server.py`) on port 8080 that accepts `POST /task` from the platform.
- Config: `openclaw.json` wires the gateway to Kimi (`moonshot/kimi-k2.5`) and enables the OpenAI-compatible chat completions endpoint.
- Auth: the task server is protected by a token (`openclaw-internal` by default) so only the platform can dispatch.
- Role: the role template (e.g., `code-review-engineer.yaml`) is mounted in and used to build the system prompt.
Containers are spawned on demand by the platform orchestrator and attached to the openclaw-agents bridge network.
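Provisioning on demand with the Docker SDK can be sketched as building the arguments for `client.containers.run()`. The image tag and network name come from this README; the container naming scheme and environment variable are invented for illustration and will not match the real orchestrator.

```python
# Hypothetical sketch of the settings the orchestrator might pass to
# the Docker SDK. Image tag and network name come from this README;
# the naming scheme and env var are assumptions.
def agent_run_kwargs(employee_id: str, role: str) -> dict:
    """Build keyword arguments for docker client.containers.run()."""
    return {
        "image": "openclaw/agent:latest",      # built by start.sh
        "name": f"agent-{employee_id}",        # hypothetical naming scheme
        "network": "openclaw-agents",          # shared bridge network
        "environment": {"AGENT_ROLE": role},   # hypothetical env var
        "detach": True,
    }

# Usage (needs Docker Desktop running and the docker package installed):
#   import docker
#   docker.from_env().containers.run(**agent_run_kwargs("emp-1", "secretary"))
```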
Two roles ship with the demo; the full candidate pool lives in PROJECT_CONTEXT.md.
- Code Review Engineer — GitHub. Real OAuth.
- Customer Support — Slack + Gmail. A simulated consent screen writes a placeholder token via `POST /credentials`.
A third secretary.yaml template is included as a reference role.
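Role templates are plain YAML consumed by `template_loader`. Since this README does not show their fields, the sample below is an invented shape used only to illustrate loading with PyYAML (which the stack includes) and a hypothetical prompt-assembly step.

```python
# Illustrative role-template loading with PyYAML. The YAML fields and
# prompt assembly are invented for this sketch; the real templates in
# agent-config/templates/ may look different.
import yaml

SAMPLE_TEMPLATE = """
name: code-review-engineer
description: Reviews pull requests and leaves inline feedback
tools:
  - github
"""

def load_role(text: str) -> dict:
    role = yaml.safe_load(text)
    # Hypothetical system-prompt assembly from template fields.
    role["system_prompt"] = f"You are a {role['name']}: {role['description']}"
    return role
```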
Full walkthrough is in LOCAL_SETUP.md. The short version:
1. Install Docker Desktop, Node 20+, Python 3.12+, and create a Supabase project.

2. Copy `backend/.env.example` to `backend/.env` and fill in your keys (Supabase, encryption key, Moonshot/Kimi API key, OAuth client credentials).

3. Run the platform and the primary frontend together:

   ```bash
   ./start.sh       # Linux / Intel Mac
   ./start-mac.sh   # Apple Silicon Mac (forces arm64 for Python deps)
   ```

   This script will:

   - Build the agent container image (`openclaw/agent:latest`).
   - Create the `openclaw-agents` Docker bridge network.
   - Install backend Python deps into `backend/.venv` and start FastAPI on `:8000`.
   - Install frontend deps in `app/` and start the Vite dev server on `:5173`.

4. Open `http://localhost:5173/login` to create an account, then `http://localhost:5173/agents` for the Signal Feed. Click CONNECT on any tab to link Slack/Gmail/GitHub.

5. Backend API docs: `http://localhost:8000/docs`.
Backend has 78 passing unit tests:

```bash
cd backend
source .venv/bin/activate
pytest
```

- Platform backend scaffold: done, with task dispatch wired end-to-end.
- LLM execution inside containers: live via OpenClaw + Kimi.
- Hire flow: in progress (v1 scope for the hackathon).
- Post-hire surfaces (work log, team page, performance review): post-hackathon.
- Billing / Stripe / payment gating: post-hackathon.
- VPS deployment: post-hackathon. The MVP runs entirely on local Docker Desktop.
See ROADMAP.md for the full phase plan.
This codebase uses product-facing language consistently. When contributing, use:
AI employees, talent directory, onboarding, work style, performance review, offboarding
Avoid: agents, marketplace, configuration, prompt, dashboard, teardown.
- OpenClaw — the local-first personal AI assistant framework this platform is built on (gateway on `:18789`, SOUL.md-driven config, OpenAI-compatible chat completions endpoint)
- LOCAL_SETUP.md — authoritative local setup and networking notes
- ROADMAP.md — hackathon scope and post-hackathon phases
- PROJECT_CONTEXT.md — product brainstorming and design decisions
- AGENT_SYSTEM_PROMPT.md — system prompt the containers run with
- HANDOFF.md — backend handoff notes
- CLAUDE.md — conventions for Claude Code collaborators