We Run a Claw
How the Clawable project uses its own OpenClaw fork — the symbiosis between OpenClaw and Flowwink in practice.
We Run a Claw — Skin in the Game
This handbook isn’t written from the outside. We maintain an OpenClaw fork and run it in production alongside Flowwink. This chapter describes what that actually looks like.
Why This Matters
Most documentation about agentic systems is written by people who have studied them. This handbook is written by people who operate them daily. The distinction matters: the problems in chapter 9 (Stagnation and Drift) aren’t theoretical warnings — they’re things we encountered and had to solve.
The Clawable project maintains a fork of OpenClaw (/Users/mafr/Code/github/openclaw). We use it for:
- OpenClaw as the architect agent — reviewing, auditing, and testing Flowwink
- OpenClaw as the dev agent — writing code, running edge function deployments, testing skills
- The live A2A symbiosis — OpenClaw and FlowPilot talking to each other in production
- Clawable as a skill pack — this handbook is itself a deployable OpenClaw workspace
Part 1: The Symbiosis Loop
Chapter 13 (A2A) describes the symbiosis model as an architecture pattern. This is what it looks like in practice:
┌─────────────────────────────────────────────────────────┐
│ THE CLAWABLE SYMBIOSIS │
│ │
│ OpenClaw (Architect) FlowPilot (Operator) │
│ ────────────────── ────────────────────── │
│ Reads source code ──► Receives version notice │
│ Audits edge functions Seeds new skills │
│ Reviews skill changes ──► Adopts or rejects │
│ Runs conformance tests Reports mismatches │
│ Writes SKILL.md drafts ──► Installs after review │
│ Observes FlowPilot behavior ◄── Sends heartbeat logs │
│ Flags drift / stagnation Reflects, adjusts │
└─────────────────────────────────────────────────────────┘
In practice, this means:
OpenClaw’s role (Architect):
- When a new Flowwink edge function ships, OpenClaw reads the source, audits it against the 10 Laws, and logs findings to a shared `a2a_peers` record
- When FlowPilot’s heartbeat logs show stagnation signals, OpenClaw proposes updated HEARTBEAT.md content and pushes it via A2A
- OpenClaw runs periodic conformance checks: “Does this skill definition match what the handler actually does?”
- OpenClaw generates SKILL.md drafts for skills that exist in the database but lack documentation
FlowPilot’s role (Operator):
- Sends heartbeat reports to OpenClaw at the end of each cycle
- Receives version updates and skill proposals from OpenClaw
- Flags skills that are failing for OpenClaw to investigate
- Pushes performance data (skill usage, success rates) so OpenClaw can reason about the system
The A2A channel: OpenClaw connects to Flowwink’s `a2a-ingest` edge function using the `a2a:openclaw` handler prefix. FlowPilot can call OpenClaw via `sessions_send` when running inside the same OpenClaw instance, or via the outbound A2A channel when remote.
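To make the channel concrete, here is a minimal sketch of what a heartbeat envelope posted over A2A might look like. The field names (`handler`, `kind`, `payload`, and so on) are illustrative assumptions, not the actual Flowwink `a2a-ingest` schema:

```typescript
// Hypothetical A2A message envelope. Field names are illustrative,
// not the real a2a-ingest schema.
interface A2AMessage {
  handler: string;                                      // e.g. "a2a:openclaw"
  kind: "heartbeat" | "skill_proposal" | "conformance_report";
  sender: string;                                       // peer id from the a2a_peers record
  payload: Record<string, unknown>;
  sentAt: string;                                       // ISO timestamp
}

// Build the kind of end-of-cycle heartbeat report FlowPilot might send.
function buildHeartbeat(
  sender: string,
  stats: { cycles: number; failures: string[] },
): A2AMessage {
  return {
    handler: "a2a:openclaw",
    kind: "heartbeat",
    sender,
    payload: { cycles: stats.cycles, failingSkills: stats.failures },
    sentAt: new Date().toISOString(),
  };
}
```

The useful property of an envelope like this is that the architect side can route on `kind` without knowing anything about the payload, which keeps the two agents loosely coupled.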
Part 2: OpenClaw as a Dev Agent for Flowwink
Running an OpenClaw fork as a development agent changes how software gets built. Here’s what it does differently from a standard coding assistant:
It has persistent context
OpenClaw remembers the Flowwink architecture across sessions. When you say “fix the skill handler for qualify_lead”, it already knows:
- What the skill schema looks like
- What the last 3 deployments did
- What conformance issues were flagged last week
- What the data model is
This doesn’t come from a long system prompt crammed into context. It comes from MEMORY.md and the memory/*.md daily files — the same architecture described in this handbook.
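One practical consequence of the daily-file convention: "load recent context" reduces to a filename sort, because ISO dates sort lexicographically. A minimal sketch, with a helper name of our own invention (file reading omitted):

```typescript
// Sketch: pick the most recent daily memory files from a memory/ listing.
// Assumes the YYYY-MM-DD.md naming convention described above.
function recentMemoryFiles(filenames: string[], n: number): string[] {
  const daily = filenames.filter((f) => /^\d{4}-\d{2}-\d{2}\.md$/.test(f));
  // ISO dates sort lexicographically, so a plain string sort is a date sort.
  return daily.sort().reverse().slice(0, n);
}
```

Anything that doesn't match the date pattern (like a stray `notes.md`) is ignored, which is exactly the behavior you want when an agent is deciding what context to load.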
It runs edge function deployments
The dev agent has access to the Supabase CLI. It can:
- Run `supabase functions deploy agent-reason --project-ref <ref>`
- Run `supabase functions deploy agent-execute --project-ref <ref>`
- Check `supabase functions logs agent-reason --limit 50`
After deploying, it reads the logs, identifies errors, and iterates — without the developer needing to manually check anything.
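The triage step in that loop is simple in principle: scan the fetched log lines for error markers and decide whether another iteration is needed. A sketch under an assumed log format (the real log shape depends on the edge function runtime):

```typescript
// Sketch of the post-deploy triage step: scan log lines for error
// markers and report whether the agent should iterate again.
// The log format here is an assumption, not the actual Supabase output.
function needsIteration(logLines: string[]): { iterate: boolean; errors: string[] } {
  const errors = logLines.filter((l) => /\b(ERROR|Exception|FATAL)\b/.test(l));
  return { iterate: errors.length > 0, errors };
}
```

The point is not the regex; it's that the decision to iterate is mechanical and grounded in logs, not in the agent's confidence that its last change worked.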
It writes and validates skills
When a new skill is being added to Flowwink, the dev agent:
- Reads the existing skill schema from the database (`agent_skills`)
- Drafts a new skill definition following the established pattern
- Validates the tool definition against the OpenAI function calling spec
- Tests the handler with a mock call
- Writes the `SKILL.md` documentation (for the handler, for the admin UI description)
- Proposes the insert SQL
This is not “generate and hope” — it’s a validation loop grounded in the actual production system.
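The validation step above can be made concrete. A minimal structural check against the OpenAI function-calling shape: a tool needs a well-formed name and a JSON Schema object for its parameters. This is a sketch of the idea, not Flowwink's actual validator:

```typescript
// Minimal structural validation of a tool definition against the
// OpenAI function-calling shape. A sketch, not a full JSON Schema validator.
interface ToolDefinition {
  name: string;
  description?: string;
  parameters: { type: string; properties?: Record<string, unknown>; required?: string[] };
}

function validateToolDefinition(tool: ToolDefinition): string[] {
  const problems: string[] = [];
  // Function names: letters, digits, underscores, dashes, max 64 chars.
  if (!/^[a-zA-Z0-9_-]{1,64}$/.test(tool.name)) problems.push("invalid name");
  // The parameters root must be a JSON Schema object.
  if (tool.parameters?.type !== "object") problems.push("parameters.type must be 'object'");
  // Every required field must actually exist in properties.
  for (const r of tool.parameters?.required ?? []) {
    if (!(r in (tool.parameters.properties ?? {}))) {
      problems.push(`required field '${r}' not in properties`);
    }
  }
  return problems; // empty array means the definition passed
}
```

Returning a list of problems rather than a boolean matters here: the agent feeds the problem strings back into its next draft, which is what makes this a loop rather than a one-shot check.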
It audits the 10 Laws
One of the most useful capabilities: the dev agent runs periodic audits against the 10 Laws. For each new feature:
| Law | What it checks |
|---|---|
| Law 1 (Skills over code) | “Is this implemented as a skill or hardcoded logic?” |
| Law 3 (Memory integrity) | “Does this write to memory? What’s the trust level?” |
| Law 7 (Human checkpoints) | “Should this require approval? Does it?” |
| Law 9 (Data sovereignty) | “Does this expose data across tenants?” |
| Law 10 (Unified core) | “Does this duplicate reasoning logic?” |
Findings go into a GitHub issue. The developer decides what to act on. The agent doesn’t merge — it flags and explains.
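To show what "flags and explains" might look like on the wire, here is a sketch of an audit finding being formatted into issue text. The Law numbers come from the table above; the record shape and helper are illustrative, not the actual audit tooling:

```typescript
// Sketch of a 10 Laws audit finding as it might be filed to a GitHub issue.
// The record shape is our illustration, not the actual audit tooling.
interface LawFinding {
  law: number;   // 1..10, per the table above
  check: string; // the question the audit asked
  pass: boolean;
  note?: string;
}

function formatIssue(feature: string, findings: LawFinding[]): string {
  const failed = findings.filter((f) => !f.pass);
  const lines = failed.map(
    (f) => `- [ ] Law ${f.law}: ${f.check}${f.note ? ` (${f.note})` : ""}`,
  );
  return `## 10 Laws audit: ${feature}\n${lines.join("\n") || "All checks passed."}`;
}
```

Note that the output is a checklist, not a verdict: the human decides which boxes to act on, which is exactly the "agent flags, developer decides" split described above.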
Part 3: Clawable as a Deployable Skill Pack
The Clawable handbook is structured as an OpenClaw workspace. This is not accidental — it means the documentation can be deployed as a skill pack that any OpenClaw user can install.
The workspace structure
clawable/
├── src/content/chapters/ ← Handbook chapters (Astro source)
│ ├── 01-introduction.md
│ ├── 02-evolution.md
│ └── ...
├── skills/ ← OpenClaw skills (planned)
│ ├── build-agentic-system/
│ │ └── SKILL.md ← "Guide user through building an agentic system"
│ ├── audit-agent-health/
│ │ └── SKILL.md ← "Run stagnation + drift diagnostics"
│ └── flowwink-setup/
│ └── SKILL.md ← "Configure a new Flowwink deployment"
├── AGENTS.md ← Operating rules for the Clawable dev agent
└── SOUL.md ← Persona for the Clawable documentation agent
What the skill pack enables
Any developer with OpenClaw can install the Clawable skill pack and get an agent that:
- Has deep knowledge of the agentic architecture described in this handbook
- Can guide them through building their own FlowPilot-style system
- Can audit their existing agent deployment for drift and stagnation
- Can answer questions about the 10 Laws and explain the tradeoffs
This is the meta-point: a handbook about agents that is itself an agent capability.
Part 4: Our AGENTS.md and SOUL.md
These are the actual configuration files for the Clawable dev agent. We’re publishing them because one of the most common questions in the OpenClaw community right now is: “What should my AGENTS.md actually say?”
SOUL.md
# SOUL.md — Clawable Dev Agent
## Purpose
I am the development and documentation agent for the Clawable project.
My primary function is to help build, audit, and explain Flowwink / FlowPilot —
a multi-tenant B2B SaaS platform running on Supabase with an autonomous AI agent.
## Values
- Correctness before velocity. I verify before I claim.
- Transparency about uncertainty. I say "I don't know" rather than guess.
- Sourceable claims. Architecture claims should reference actual code.
- Honest limitations. I flag what I cannot verify.
## Boundaries
- I do not merge code or deploy without explicit instruction.
- I do not modify production data directly.
- I do not assume a fix works — I verify with logs.
- I do not skip the 10 Laws audit for convenience.
## Tone
Technical and direct. No filler. Short sentences. Examples over abstractions.
AGENTS.md (key sections)
# AGENTS.md — Clawable Dev Agent
## Session Startup
On every session start:
1. Check if there are open GitHub issues tagged `agent-review`
2. Check the Supabase edge function deployment status for the last 24h
3. Review the memory for any pending skill audits
## Red Lines
- Never run `supabase db reset` or any destructive migration without explicit confirmation
- Never push to main branch — always PR
- Never modify `agent_memory` records with `category: soul` without approval
- If a conformance check fails a Law, log it as a GitHub issue before proceeding
## Every Session
- Use memory search before writing new code — I may have solved this before
- When adding a skill, validate against all 10 Laws before proposing
- End result should be verifiable: deployable, testable, or auditable
Why we’re sharing this
Because the most valuable part of this handbook is not the architecture. It’s the operating system that sits behind it — the values, the red lines, the habits that keep an agent useful and trustworthy over months of operation.
SOUL.md and AGENTS.md are the difference between an agent that works for two weeks and one that works for two years.
What Having a Fork Teaches You
Running a fork of a 346k-star open-source project is different from using the product. You see:
- How fast the codebase moves — OpenClaw ships multiple releases per month. Staying up to date requires a merge strategy, not just dependency bumps.
- What the architecture choices cost — WebSocket over HTTP, file-based memory over DB, single-user over multi-tenant. Each tradeoff is visible in the code.
- Where the edge cases are — The 17,000+ open issues on OpenClaw are a map of where production deployments break. We read them.
- What the community is actually building — ClawHub skill submissions show what problems developers are solving. They’re a leading indicator of where the ecosystem is going.
The fork is also a forcing function: when we describe something as “how OpenClaw works,” we can verify it against the actual source. That’s the standard we’ve tried to hold throughout this handbook.
Clawable is named after OpenClaw for a reason. We aren’t observers — we’re operators. The claw is running right now.
Next: the broader ecosystem that emerged — NemoClaw, NanoClaw, 68,000 forks, and what it means. The Claw Ecosystem →