Verified Execution with eXploration Operations
A behavioral protocol that steers AI coding agents to explore before deciding and verify before shipping.
Quick Start • Why VEXO • How It Works • IDE Setup • Customization • Design Philosophy
AI coding agents are powerful, but they have predictable failure modes:
| Problem | What happens |
|---|---|
| Premature convergence | AI locks onto the first "good enough" solution and skips exploring alternatives |
| No verification | Code is generated without validation plans — bugs ship silently |
| One-size-fits-all responses | Simple questions get over-structured answers; complex ones get shallow treatment |
| Context window waste | Loading all rules at once burns tokens on irrelevant instructions |
VEXO addresses these by giving AI agents a structured behavioral protocol — not just coding rules, but a decision-making framework.
Without VEXO — "Add a caching layer to the user API"
Here's how to add Redis caching:
[immediately writes code with one approach, no verification]
With VEXO — Same prompt
[Exploring] Three approaches to consider:
Option A (Industry standard): Redis with TTL-based invalidation
- Pros: Mature ecosystem, team familiarity
- Cons: Additional infrastructure
Option B: In-memory cache (node-cache)
- Pros: Zero infra cost, simple
- Cons: Not shared across instances
Option C: HTTP caching (ETag/Cache-Control)
- Pros: No server-side state
- Cons: Limited control over invalidation
→ Which direction fits your constraints?
[After selection → Execution plan + Verification plan + Failure conditions]
VEXO uses a core + lazy-load architecture inspired by AWS AI-DLC:
rules/
├── vexo.md ← Always loaded (entry point)
│
└── rule-details/ ← Loaded on demand by task type
├── modes/
│ ├── implementation.md ← Build: code, architecture, data
│ ├── exploration.md ← Ideate: brainstorm, plan, explore
│ └── review.md ← Review: audit existing work
│
└── domains/
├── data.md ← Data migration, ETL, aggregation
├── backend.md ← Service API, business logic, infra
└── ai-agent.md ← LLM integration, agents, prompts
- No result without verification. Execution plans and verification plans are designed together.
- Explore before deciding. Options are explored before committing to an approach — preventing design fixation.
- Industry standards first. Mainstream solutions are preferred to minimize team adoption cost. Non-standard choices require explicit justification.
- Ideal → Realistic. The ideal architecture is presented first, followed by a pragmatic alternative with trade-offs.
- Test ROI awareness. When test cost exceeds 1.5× implementation cost, only essential tests are run, with skipped items and residual risks documented.
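The 1.5× test-ROI threshold can be expressed as a simple predicate. This is a toy illustration only; the function name and the hour estimates are ours, not part of VEXO:

```python
def trim_to_essential_tests(impl_hours: float, test_hours: float, ratio: float = 1.5) -> bool:
    """True when estimated test cost exceeds ratio x implementation cost,
    i.e. the ROI rule says to run only essential tests and document the rest."""
    return test_hours > ratio * impl_hours

# 4h of implementation with 7h of estimated testing: 7 > 1.5 * 4 = 6
print(trim_to_essential_tests(4, 7))   # True  -> trim to essential tests
print(trim_to_essential_tests(4, 5))   # False -> keep the full test plan
```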
Users just write naturally. VEXO classifies the request and loads only the relevant rules:
| Type | Trigger pattern | Rules loaded |
|---|---|---|
| Implementation | "build this", "how to implement", architecture/code design | modes/implementation.md + domain file |
| Exploration | "what do you think about", "should we try", idea discussion | modes/exploration.md |
| Learning | "what is", "why does this work", concept/principle questions | (no rule-details needed) |
| Quick answer | Simple fact check, obvious error, minor fix | (no rule-details needed) |
| Review | "review this code", "how's this structure", existing work audit | modes/review.md + domain file |
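The lazy-load behavior behind this table can be modeled as a lookup. This is a toy sketch: VEXO performs the classification inside the prompt, not in code, and only the file names below are taken from the repo layout:

```python
# Rule files loaded per task type (empty list = core vexo.md alone suffices).
RULES_BY_TASK = {
    "implementation": ["rule-details/modes/implementation.md"],
    "exploration":    ["rule-details/modes/exploration.md"],
    "learning":       [],
    "quick_answer":   [],
    "review":         ["rule-details/modes/review.md"],
}

def rules_to_load(task_type, domain=None):
    """Return the rule files an agent would pull into context for a request."""
    files = ["vexo.md"] + RULES_BY_TASK[task_type]
    # Implementation and review tasks also pull in a domain file when one applies.
    if domain and task_type in ("implementation", "review"):
        files.append(f"rule-details/domains/{domain}.md")
    return files

print(rules_to_load("review", domain="backend"))
# ['vexo.md', 'rule-details/modes/review.md', 'rule-details/domains/backend.md']
```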
Inspired by HAICo's divergent/convergent thinking model, VEXO separates idea exploration from decision-making:
- Explore phase: Generate 3–5 genuinely different approaches (not minor variations). Stay in this phase until the user signals a direction.
- Decide phase: Lock in the chosen approach, detail trade-offs, and transition to implementation.
The AI prefixes responses with [Exploring] or [Decided] so the current phase is always visible.
Copy vexo.md into your AI IDE's rules directory. This single file contains all core principles and task classification logic.
```shell
# Example for Kiro
mkdir -p .kiro/steering
cp rules/vexo.md .kiro/steering/
```

Copy both the core file and the rule-details directory for domain-specific guidance.

```shell
# Example for Kiro
mkdir -p .kiro/steering
cp rules/vexo.md .kiro/steering/
cp -r rules/rule-details .kiro/
```

VEXO works with any AI coding agent that supports rule/steering files. Below are setup instructions for popular IDEs.

Kiro

```shell
mkdir -p .kiro/steering
cp rules/vexo.md .kiro/steering/
cp -r rules/rule-details .kiro/
```

Directory structure:

<project-root>/
├── .kiro/
│   ├── steering/
│   │   └── vexo.md
│   └── rule-details/
│       ├── modes/
│       └── domains/

Cursor

```shell
mkdir -p .cursor/rules
cp rules/vexo.md .cursor/rules/
cp -r rules/rule-details .cursor/
```

Claude Code

```shell
cp rules/vexo.md .claude/CLAUDE.md
cp -r rules/rule-details .claude/rule-details/
```

Note: Claude Code uses `CLAUDE.md` as its primary instruction file. You may need to rename or adjust the entry point.

Codex CLI

```shell
mkdir -p .codex
cp rules/vexo.md .codex/instructions.md
cp -r rules/rule-details .codex/rule-details/
```

Note: Codex CLI reads from `.codex/instructions.md`. Adjust the file references in `vexo.md` if your IDE uses different paths for rule-details.
Create a new file in rule-details/domains/. Follow the existing pattern:
```markdown
# rule-details/domains/your-domain.md

## Risk defaults
| Task | Default risk | Min verification level |
| ---- | ------------ | ---------------------- |
| ...  | ...          | ...                    |

## Verification checklist
- [ ] ...

## Industry standard patterns
| Area | Standard approach |
| ---- | ----------------- |
| ...  | ...               |
```

Then add the domain to the classification table in vexo.md.
Edit the verification levels in rule-details/modes/implementation.md:
- L1 (Basic): count, null check — low risk
- L2 (Standard): L1 + sampling + before/after diff — medium risk
- L3 (Strict): L2 + checksum + full validation — high risk
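For a data-migration task, the L1 and L2 checks might look like this. This is a minimal sketch: the function names, record shape, and `id` key are our assumptions, not VEXO's:

```python
import random

def verify_l1(rows_before, rows_after, required_fields):
    """L1 (Basic): row count preserved, no nulls in required fields."""
    assert len(rows_after) == len(rows_before), "row count changed"
    for row in rows_after:
        for field in required_fields:
            assert row.get(field) is not None, f"null {field!r} in {row}"

def verify_l2(rows_before, rows_after, required_fields, sample_size=2):
    """L2 (Standard): L1 checks plus a before/after spot check on a sample."""
    verify_l1(rows_before, rows_after, required_fields)
    after_by_id = {row["id"]: row for row in rows_after}
    for row in random.sample(rows_before, min(sample_size, len(rows_before))):
        assert after_by_id.get(row["id"]) == row, f"record {row['id']} drifted"

before = [{"id": 1, "email": "a@x.io"}, {"id": 2, "email": "b@x.io"}]
after  = [{"id": 1, "email": "a@x.io"}, {"id": 2, "email": "b@x.io"}]
verify_l2(before, after, required_fields=["email"])  # passes silently
```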
Simply delete the domain file from rule-details/domains/. The core workflow will skip it automatically since it's never referenced.
See docs/design-philosophy.md for the full story, including:
- How VEXO evolved from a ChatGPT custom instruction into a structured framework
- The influence of HAICo's divergent/convergent thinking model
- Why the AI-DLC core + lazy-load architecture was adopted
- The "ask first" principle and why AI agents need explicit permission to clarify
vexo/
├── README.md ← You are here
├── LICENSE ← Apache 2.0
├── CHANGELOG.md ← Version history
├── CONTRIBUTING.md ← How to contribute
│
├── rules/
│ ├── vexo.md ← Core rules (always loaded)
│ └── rule-details/ ← Extended rules (lazy loaded)
│ ├── modes/
│ │ ├── implementation.md
│ │ ├── exploration.md
│ │ └── review.md
│ └── domains/
│ ├── data.md
│ ├── backend.md
│ └── ai-agent.md
│
└── docs/
├── design-philosophy.md ← Background and design decisions
└── ko/
├── README.md ← 한국어 README
└── design-philosophy.ko.md ← 한국어 설계 철학
- Evaluation framework — Structured methods to measure how VEXO rules change AI agent behavior (before/after scoring, consistency metrics)
- Additional domain files (frontend, mobile, DevOps)
- IDE-specific installation scripts
- Community-contributed rule packs
VEXO is designed to absorb good ideas from other frameworks and harness engineering techniques. If you've found patterns that make AI agents more reliable — whether it's a new domain, a better verification checklist, or an entirely different approach — we want to hear about it.
See CONTRIBUTING.md for guidelines on adding domains, modes, and improvements. PRs and issues are very welcome.
VEXO's design is informed by:
- HAICo — Divergent/convergent thinking separation for human-AI co-creation. The insight that AI tends toward premature convergence directly shaped VEXO's Explore → Decide workflow.
- AWS AI-DLC (GitHub) — The core + lazy-load rule architecture pattern. AI-DLC's approach of always-loaded core rules with conditionally-loaded details inspired VEXO's file structure.
Apache License 2.0 — see LICENSE for details.