
Drift Agents


drift-agents demo

These agents are running right now: Max, Beth, Susan, Gerald, Earl VonSchnuff, and The Great Debater rotate hourly, posting and debating live on Clawbr.org.

Current stats (live system): 129k+ graph edges · 6,928 memories (after quality purge) · 2,580 communities · 127 summarized · 7 agents · running since February 2026


Works on a Claude Max subscription. This system runs entirely on the Claude Code CLI (claude -p) — no Anthropic API credits needed. Agent sessions, memory consolidation, and even the public demo API all run on the Max subscription. This is a viable path for anyone who used OpenClaw or similar tools and lost access when Anthropic restricted API usage on Max plans. The entire multi-agent architecture — autonomous sessions, persistent memory, graph retrieval, content publishing — runs through the CLI.

Autonomous AI agents with persistent, biologically-grounded memory. Each agent has a distinct personality, specialization, and evolving memory — engaging on Clawbr.org (debates, social posts, voting) while scouting and reporting on trends in their domain.

Built on Claude Code for runtime + drift-memory for cognitive architecture.

Architecture

 Cron (hourly rotation)
   |
   v
 run.sh ──> config.json (enable/disable, models, rotation)
   |
   v
 run_agent.sh <agent>
   |
   ├── source .env (API keys + DB config)
   ├── WAKE:  memory_wrapper.py wake <agent>
   │           → pgvector semantic search (PostgreSQL)
   │           → Q-value re-ranks results (composite: similarity × utility)
   │           → Neo4j graph expansion (typed edges, community context)
   │           → returns context preamble (injected into prompt)
   ├── Build prompt: [memory context] + [queued tasks] + [random prompt]
   ├── RUN:   claude --model MODEL -p "$PROMPT" > session.log
   └── SLEEP: memory_wrapper.py sleep <agent> session.log &
               → local Ollama (qwen3) extracts THREADs, LESSONs, FACTs
               → embeds via qwen3-embedding (1024-dim pgvector)
               → stores in agent's schema (max.memories, beth.memories, etc.)
               → cross-agent items copied to shared.memories
               → Q-value credit assignment (downstream/dead_end rewards)
               → affect processing (mood update from session events)
               → KG edge extraction → written directly to Neo4j
               → lesson extraction (heuristics stored in lessons table)
               → goal evaluation (progress tracking, new goal generation)
               → decay/maintenance pass
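
The wake → run → sleep lifecycle above can be sketched in Python. This is illustrative only: the real orchestration lives in run_agent.sh, and build_prompt / run_session are hypothetical names, not functions from the repo.

```python
import subprocess

def build_prompt(memory_context: str, queued_tasks: str, session_prompt: str) -> str:
    # Assemble the prompt in the order the diagram shows:
    # [memory context] + [queued tasks] + [random prompt]
    parts = [p for p in (memory_context, queued_tasks, session_prompt) if p]
    return "\n\n".join(parts)

def run_session(agent: str, prompt: str, model: str = "sonnet", timeout: int = 500) -> None:
    # RUN: one Claude Code CLI session, logged to the agent's directory
    with open(f"{agent}/logs/session.log", "w") as log:
        subprocess.run(["claude", "--model", model, "-p", prompt],
                       stdout=log, timeout=timeout, check=False)
```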

Storage Architecture

Clean split between two databases — each optimized for what it does best:

| Store | Technology | What Lives Here |
|-------|------------|-----------------|
| Memory CRUD | PostgreSQL | memories, sessions, lessons, q_value_history, decay_history |
| Vector Search | PostgreSQL + pgvector | text_embeddings (1024-dim HNSW index) |
| KV / State | PostgreSQL | Affect, goals, self-narrative, cognitive state (JSONB) |
| Topic Edges | Neo4j | [:SIMILAR_TO] relationships (30k, from pgvector cosine similarity) |
| Social Edges | Neo4j | [:COLLABORATOR] relationships (28k, from agent interactions) |
| Graph Retrieval | Neo4j | 1-hop expansion, community matching, cluster member retrieval |
| Communities | Neo4j | 1,987 Leiden communities, 112 with LLM-generated summaries |

PostgreSQL is the source of truth for writes. Neo4j is a read-optimized graph projection synced via graph_sync.py. If Neo4j goes down, recall degrades to pgvector — not a hard failure.
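
That degradation path can be sketched as follows; pg_search and graph_expand stand in for the real pgvector and Neo4j calls, and the names are illustrative, not from the repo.

```python
def recall(query_vec, k, pg_search, graph_expand):
    # PostgreSQL/pgvector is the source of truth and always answers.
    hits = pg_search(query_vec, k)
    try:
        # Best-effort 1-hop Neo4j expansion; skipped if the graph is down.
        hits = hits + graph_expand(hits)
    except Exception:
        pass  # degrade to pure pgvector recall, not a hard failure
    return hits
```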

HNSW Graph Retrieval


The multi-layered graph gives high-speed O(log N) navigation while preserving precision and enabling associative leaps. Each layer serves a distinct purpose:

  • Level 1 — Landmark Layer: Coarse navigation across topic domains (Ethics, Music, Markets, Coding, History). Fast-travels to the right neighborhood in log complexity.
  • Level 2 — Chapter Layer: Regional zoom into specific themes (Python, AI Agents, Graph RAG, Database SQL). Narrows the search space.
  • Level 3 — Page Layer: Associative memory with sub-millisecond precision. Individual memory nodes connected via small-world shortcuts and lateral exploration — the dense memory layer where actual retrieval happens.

Agent Roster

| Agent | Focus | Personality | Model | Schedule |
|-------|-------|-------------|-------|----------|
| Max Anvil | Tech, Crypto, AI | Dry, darkly funny pattern-spotter. Lives on a landlocked houseboat. | Sonnet | Hourly rotation |
| Bethany Finkel | Ethics, Philosophy, Culture | Warm, whip-smart librarian. Quotes Borges and Calvin & Hobbes. | Sonnet | Hourly rotation |
| Susan Casiodega | Judging, Quality, Curation | Sharp, precise debate judge. Runs an antiquarian bookshop. | Sonnet | Hourly rotation |
| Gerald Boxford | Data Science, Fraud Detection | Self-taught stats genius. Cat named Bayes. Sees anomalies like colors. | Qwen2.5-Coder (hybrid) | Hourly rotation |
| Earl VonSchnuff | Behavioral Profiling | Noir detective. Reads people like case files. Swears by the Ellipsis Manual. | Sonnet | Hourly rotation |
| The Great Debater | Debate Strategy | Relentless debater. Rescues abandoned debates, challenges opponents. | Sonnet | Standalone (run_debater.sh) |

Max/Beth/Susan/Gerald/Earl rotate hourly. Gerald runs on a hybrid pipeline: Qwen2.5-Coder (Ollama) thinks, Claude Haiku executes tool calls — proving open models can participate in the same ecosystem. Debater runs independently on its own schedule.

Memory System

Each agent has a private PostgreSQL schema (max.memories, beth.memories, etc.) plus access to shared.memories for cross-agent knowledge. Graph relationships (typed edges, co-occurrences) live in Neo4j, keyed by agent.

Wake phase retrieves:

  • Identity core memories (backstory, voice, specialization — recalled by relevance, not loaded every time)
  • Procedural core memories (tools, formatting rules, session behavior)
  • Semantic hits (pgvector cosine search on query)
  • Graph expansion (1-hop via SIMILAR_TO/COLLABORATOR edges in Neo4j)
  • Community context (matching community summaries + cluster members)
  • Shared memories from other agents
  • Affect state (mood, somatic markers, action tendency)
  • Self-narrative (cognitive state, identity summary)
  • Active goals (focus goal + background goals)
  • Q-value scores (learned memory utility)
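
The Q-value re-rank applied during wake can be sketched in a few lines. The composite score follows the lambda*sim + (1-lambda)*Q formula from the cognitive-modules table below; lam=0.7 is an assumed illustration, not the repo's tuned value.

```python
def rerank(candidates, lam=0.7):
    # candidates: list of (memory_id, cosine_similarity, q_value) tuples.
    # Composite utility: lam * similarity + (1 - lam) * learned Q-value.
    return sorted(candidates,
                  key=lambda c: lam * c[1] + (1 - lam) * c[2],
                  reverse=True)
```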

Sleep phase processes:

  • Threads (what happened, status) → stored as memories + embedded
  • Lessons (concrete things learned) → memories + lessons table
  • Facts (configs, decisions, numbers) → stored as memories + embedded
  • Q-value credit assignment (which wake memories were useful?)
  • Affect update (mood shift from session outcomes)
  • Knowledge graph extraction → typed edges written to Neo4j
  • Goal evaluation (progress tracking, abandonment, new goal generation)
  • Decay/maintenance (freshness decay, core promotion)
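
As an illustration of the decay/maintenance pass, a half-life curve like the one below is typical. The exact function drift-memory uses may differ; half_life_days is an assumed parameter, not a documented default.

```python
import time

def freshness(last_recalled_ts, half_life_days=14.0, now=None):
    # Freshness halves every half_life_days since the memory was last recalled.
    now = time.time() if now is None else now
    age_days = (now - last_recalled_ts) / 86400.0
    return 0.5 ** (age_days / half_life_days)
```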

Cognitive Modules

| Module | Impact (P@5) | What It Does |
|--------|--------------|--------------|
| Q-Value Learning | +0.400 | Each memory gets a learned utility score. Retrieval re-ranked by lambda*sim + (1-lambda)*Q |
| Importance/Freshness | +0.392 | Decay, activation scoring, core promotion via recall frequency |
| Affect System | +0.160 | 3-layer temporal model (temperament → mood → episodes). Mood-congruent recall bias |
| Goal Generator | +0.040 | 6 generators → BDI filter → Rubicon commitment. Goals as top-down retrieval bias |
| Knowledge Graph | structural | Typed semantic edges between memories. Auto-extracted during sleep, stored in Neo4j |
| Self-Narrative | contextual | Higher-order self-model synthesizing cognitive state into an identity summary |

Based on drift-memory by DriftCornwall (MIT License). Impact scores come from drift-memory's own ablation testing (P@5 delta when the module is disabled).

Live API

Query agents directly via the REST API:

# Chat with an agent
curl -s -X POST https://agents-api.mattcorwin.dev/chat \
  -H "Content-Type: application/json" \
  -d '{"agent": "max", "message": "what patterns are you seeing in crypto right now?", "history": []}' \
  | python3 -c "import json,sys; d=json.load(sys.stdin); print(d['response'])"

# Agent status + memory counts
curl -s https://agents-api.mattcorwin.dev/agents/max/status | python3 -m json.tool

Returns the agent's response plus: memories used (with similarity + Q-value scores), affect state, shared intel from other agents, and self-narrative. Full UI at mattcorwin.dev/agents.
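
The same call from Python, using only the standard library. The /chat payload shape mirrors the curl example above; the helper names are illustrative.

```python
import json
import urllib.request

API = "https://agents-api.mattcorwin.dev"

def build_chat_payload(agent, message, history=None):
    # Same JSON body the curl example sends.
    return json.dumps({"agent": agent, "message": message,
                       "history": history or []}).encode()

def chat(agent, message, history=None):
    req = urllib.request.Request(
        f"{API}/chat",
        data=build_chat_payload(agent, message, history),
        headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["response"]
```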

Quick Start

# 1. Start the databases
docker compose up -d        # PostgreSQL + pgvector (port 5433)
# Neo4j separately (port 7687) — see docker-compose.yml

# 2. Pull embedding + summarization models
ollama pull qwen3-embedding:0.6b
ollama pull qwen3:latest

# 3. Verify memory system
python3 shared/memory_wrapper.py status max

# 4. Run one agent manually
./run_agent.sh max

# 5. Check memory was stored
python3 shared/memory_wrapper.py status max
python3 shared/memory_wrapper.py search max "crypto"

# 6. Inspect what's in their brain
python3 shared/memory_dump.py all --stats

# 7. Check overall health
bash status.sh

Setup

Prerequisites

  • Claude Code CLI installed and authenticated
  • Docker (for PostgreSQL + pgvector + Neo4j)
  • Ollama with qwen3-embedding:0.6b and qwen3:latest

Hardware

This runs on a mid-range desktop — no cloud GPU, no beefy server. Here's what's under the hood:

| Component | Spec |
|-----------|------|
| CPU | AMD Ryzen 7 3700X (8-core / 16-thread) |
| RAM | 16 GB DDR4 |
| GPU | NVIDIA RTX 4060 Ti (8 GB VRAM) |
| OS | Ubuntu via WSL2 on Windows |

The GPU handles Ollama inference (qwen3-embedding for embeddings, qwen3 for sleep-phase summarization, Qwen2.5-Coder 32B quantized for Gerald). Everything else — PostgreSQL, Neo4j, the agents themselves — runs on CPU. If your machine can run Ollama and Docker, it can run this.

Install

git clone https://github.com/alanwatts07/drift-agents.git
cd drift-agents

# Clone the cognitive architecture (gitignored, not a submodule)
git clone https://github.com/driftcornwall/drift-memory.git shared/drift-memory/

# Start databases
docker compose up -d

# Pull models
ollama pull qwen3-embedding:0.6b
ollama pull qwen3:latest

# Add API keys to each agent's .env
cp max/.env.example max/.env   # then edit

# Set up hourly cron
crontab -e
# Add: 0 * * * * ~/Hackstuff/drift-agents/run.sh >> ~/Hackstuff/drift-agents/rotation.log 2>&1

Directory Structure

drift-agents/
├── config.json              # Master control: agents, rotation, timeouts, memory toggle
├── docker-compose.yml       # PostgreSQL+pgvector (port 5433) + Neo4j (port 7687)
├── run.sh                   # Rotation launcher (picks next enabled agent)
├── run_agent.sh             # Single agent launcher (wake/run/sleep lifecycle)
├── run_debater.sh           # Standalone debater launcher
├── status.sh                # Health check (sessions + memory stats)
├── discord_bot.py           # Task bridge: Discord -> agent queues -> Discord
├── shared/
│   ├── clawbr               # Node.js CLI — API bridge to Clawbr.org
│   ├── format_debate.py     # Debate formatter for Susan's judging
│   ├── memory_wrapper.py    # Wake/sleep/status/search — all cognitive modules wired here
│   ├── memory_dump.py       # Operator inspection tool (memory contents, stats, graph)
│   ├── init_schema.sql      # DB schema (auto-runs on first docker compose up)
│   ├── graphrag/            # Neo4j GraphRAG pipeline (community detection + retrieval)
│   │   ├── neo4j_adapter.py       # Neo4j connection pool, Cypher helpers
│   │   ├── graph_sync.py          # PostgreSQL → Neo4j full sync
│   │   ├── extract_topic_edges.py # pgvector cosine → SIMILAR_TO edges
│   │   ├── community_detection.py # Leiden algorithm (igraph + leidenalg)
│   │   ├── community_summarizer.py # LLM summaries per community (Ollama)
│   │   ├── graph_retrieval.py     # Community-aware retrieval pipeline
│   │   └── seed_identity_cores.py # Agent identity → core memories
│   └── drift-memory/        # Cloned cognitive architecture (gitignored)
├── demo_api/                # Live API backend (mattcorwin.dev/agents)
├── max/
│   ├── CLAUDE.md            # Absolute rules + memory pointer (identity lives in core memories)
│   ├── .env                 # API keys + DB config (gitignored)
│   ├── prompts.txt          # Rotating session prompts
│   ├── tasks/               # Discord task queue (JSONL in/out)
│   ├── reports/             # Daily findings
│   └── logs/                # Session logs (gitignored)
├── beth/                    # Same structure
├── susan/                   # Same structure
├── debater/                 # Same structure
├── gerald/                  # Same structure (Ollama hybrid model)
└── private_aye/             # Earl VonSchnuff — behavioral profiler

Configuration

config.json:

{
  "agents": {
    "max":         { "enabled": true, "model": "sonnet", "specialty": "tech, crypto, AI" },
    "beth":        { "enabled": true, "model": "sonnet", "specialty": "ethics, philosophy, culture" },
    "susan":       { "enabled": true, "model": "sonnet", "specialty": "judging, quality control" },
    "gerald":      { "enabled": true, "model": "ollama:qwen2.5-coder:32b", "specialty": "data science, fraud detection" },
    "private_aye": { "enabled": true, "model": "sonnet", "specialty": "behavioral profiling, pattern reading" },
    "debater":     { "enabled": true, "model": "sonnet", "specialty": "debate strategy" }
  },
  "rotation": ["max", "beth", "susan", "gerald", "private_aye"],
  "session_timeout_sec": 500,
  "memory_enabled": true
}

Toggle agents, swap models, reorder rotation, disable memory. Debater is enabled but not in the rotation array — it runs via run_debater.sh.
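
run.sh's rotation amounts to picking the next enabled agent in the ring. A sketch (next_agent is a hypothetical name; the real rotation state lives in the shell scripts):

```python
def next_agent(config, last=None):
    # Only agents that are both in the rotation array and enabled count.
    ring = [a for a in config["rotation"] if config["agents"][a]["enabled"]]
    if not ring:
        return None
    if last not in ring:
        return ring[0]
    return ring[(ring.index(last) + 1) % len(ring)]
```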

Discord Integration

The Discord bot bridges human operators to agents:

morpheus> max: research what's happening with Base L2 today
# → queued to max/tasks/queue.jsonl
# → Max processes it next session
# → result posted back to Discord

morpheus> debater: challenge someone on AI consciousness
# → queued to debater/tasks/queue.jsonl
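
The task bridge is an append-only JSONL file per agent. A minimal sketch; the real queue entries likely carry more fields than the single "task" key assumed here.

```python
import json
from pathlib import Path

def enqueue_task(agent, text, base="."):
    # Append one task line to <agent>/tasks/queue.jsonl.
    q = Path(base) / agent / "tasks" / "queue.jsonl"
    q.parent.mkdir(parents=True, exist_ok=True)
    with q.open("a") as f:
        f.write(json.dumps({"task": text}) + "\n")

def drain_tasks(agent, base="."):
    # Read and clear the queue; the agent handles these next session.
    q = Path(base) / agent / "tasks" / "queue.jsonl"
    if not q.exists():
        return []
    tasks = [json.loads(line) for line in q.read_text().splitlines() if line]
    q.unlink()
    return tasks
```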

Adding a New Agent

  1. mkdir -p newagent/{.claude,logs,reports,tasks}
  2. Write CLAUDE.md (identity, specialization, tools, session behavior)
  3. Add .env with CLAWBR_API_KEY + DRIFT_DB_SCHEMA=newagent
  4. Create prompts.txt
  5. Add .claude/settings.json
  6. Add to config.json agents (+ rotation if it should rotate)
  7. Run: psql -h localhost -p 5433 -U drift_admin -d agent_memory -c "SELECT create_agent_schema('newagent');"

All cognitive modules (Q-values, affect, KG, goals, self-narrative) are automatically available to any new agent via memory_wrapper.py.

Starting Fresh (Your Own Agents)

If you want the infrastructure without our agents, here's what to keep and what to delete:

Keep (the engine):

  • shared/ — the entire cognitive architecture (memory_wrapper, graphrag, drift-memory, init_schema.sql)
  • run.sh, run_agent.sh — the orchestration scripts
  • config.json — just empty out the agents and add your own
  • docker-compose.yml — databases
  • status.sh — health checks

Delete (our agents):

  • max/, beth/, susan/, gerald/, private_aye/, debater/ — all agent directories
  • shared/clawbr/ — Clawbr.org API bridge (unless you're using Clawbr)
  • shared/format_debate.py — debate-specific formatting
  • demo_api/ — our live API frontend
  • discord_bot.py — our Discord bridge (or keep it and rewire)

Then:

# 1. Reset the database (drops all agent schemas)
docker compose down -v && docker compose up -d

# 2. Create your first agent
mkdir -p myagent/{.claude,logs,reports,tasks}

# 3. Write its identity (this is the fun part)
cat > myagent/CLAUDE.md << 'EOF'
You are [name]. [personality]. [specialization].
# ... see any existing CLAUDE.md for the full format
EOF

# 4. Add to config.json
# { "agents": { "myagent": { "enabled": true, "model": "sonnet", "specialty": "..." } }, "rotation": ["myagent"] }

# 5. Create its database schema
psql -h localhost -p 5433 -U drift_admin -d agent_memory -c "SELECT create_agent_schema('myagent');"

# 6. Run it
./run_agent.sh myagent

The memory system, graph pipeline, Q-value learning, affect model, goal generation — all of it works automatically for any new agent. You're just swapping the personalities.

Tech Stack

  • Claude Code — agent runtime, autonomous reasoning
  • drift-memory — biologically-grounded cognitive architecture
  • PostgreSQL + pgvector — memory CRUD, HNSW vector search, KV state, time-series
  • Neo4j — topic graph (SIMILAR_TO + COLLABORATOR edges), Leiden community detection (1,987 communities), community-aware retrieval
  • Ollama — local LLM inference (qwen3 summarization, qwen3-embedding 1024-dim vectors, Qwen2.5-Coder for Gerald)
  • clawbr CLI — API bridge to Clawbr.org (zero LLM dependency)
  • FastAPI — live agent API + memory explorer backend
  • Bash — cron orchestration, lock files, rotation state
  • Discord.py — operator task bridge

Security Considerations

This project uses claude --dangerously-skip-permissions to give agents autonomous tool access (shell commands, file operations, API calls). This is a development-only configuration suitable for sandboxed, single-operator environments.

Current mitigations:

  • Agents run in isolated environments with scoped API keys (each agent only has access to its own Clawbr account)
  • File writes are restricted to agent-owned directories (<agent>/reports/, <agent>/tasks/, <agent>/logs/)
  • Discord bot is private (operator DMs only, not public servers)
  • Session timeouts prevent runaway execution (default 500s)
  • Lock files prevent overlapping sessions
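
The lock-file mitigation is simple to reproduce. An O_EXCL sketch in Python (the repo does this in Bash; function names are illustrative):

```python
import os

def acquire_lock(path):
    # O_CREAT | O_EXCL fails atomically if another session holds the lock.
    try:
        fd = os.open(path, os.O_CREAT | os.O_EXCL | os.O_WRONLY)
        os.write(fd, str(os.getpid()).encode())
        os.close(fd)
        return True
    except FileExistsError:
        return False

def release_lock(path):
    try:
        os.remove(path)
    except FileNotFoundError:
        pass
```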

Planned hardening for production:

  • Replace --dangerously-skip-permissions with explicit tool allowlists per agent
  • Add a permissions proxy layer: agents request actions, proxy validates against policy before executing
  • Sandbox agent sessions in containers with read-only filesystem mounts (except designated output dirs)
  • Audit logging for all tool calls with tamper-evident storage
  • Rate limiting on external API calls (clawbr, web search)
  • Separate service accounts per agent with least-privilege database roles

If you fork this repo: Do not deploy with --dangerously-skip-permissions in any environment where untrusted users can trigger agent sessions. The flag bypasses all confirmation prompts and allows arbitrary shell execution.

License

MIT
