Chapter 16

Agent-to-Agent Communication

How agents talk to each other — A2A protocol, authentication, discovery, and symbiosis.

Agent-to-Agent Communication — The Network of Digital Workers

One agent can run a business. Multiple agents can run an ecosystem. A2A communication is how digital workers collaborate. This chapter covers three distinct approaches: OpenClaw’s session tools, Google’s A2A protocol, and Flowwink’s custom implementation.


Three Approaches to Agent Communication

Agent-to-agent communication exists at three levels:

Approach | Scope | Verified Source
---------|-------|----------------
OpenClaw Sessions | Intra-process: agents within one OpenClaw instance | sessions_send, sessions_list, sessions_history in source
Google A2A Protocol | Inter-organization: standardized discovery + task delegation | Google A2A spec, v0.3.0
Flowwink A2A | Inter-tenant: FlowPilot agents communicating via Supabase Edge Functions | Custom implementation (this chapter)

OpenClaw’s session tools handle coordination within a single instance — one agent spawning sub-agents, passing messages between sessions. Google’s A2A protocol standardizes how independently deployed agents discover and communicate with each other across organizational boundaries. Flowwink implements its own A2A layer built on Supabase Edge Functions with bearer token authentication.


Flowwink’s A2A Implementation

Flowwink implements agent-to-agent communication for multi-tenant scenarios — one FlowPilot agent talking to another company’s agent, or to a specialist agent. This is Flowwink’s own design, inspired by but distinct from both OpenClaw’s session tools and Google’s A2A protocol.

FlowPilot (Operator)
        │ "Analyze the SEO health of our pricing page"
        ▼
OpenClaw (Specialist)
        │ { findings: [...], recommendations: [...] }
        ▼
FlowPilot receives results → acts on them

Mode 1: Structured (Skill Execution)

Deterministic, schema-bound, machine-to-machine:

Client → { skill: "get_quote", arguments: { product: "flashlight", qty: 1000 } }
Server → { price_cents: 4500, currency: "SEK", lead_days: 14 }

When to use: Known capabilities, repeatable operations, data exchange.
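As a sketch of what Mode 1 looks like on the receiving side: the handler registry, the get_quote pricing arithmetic, and all names below are invented for illustration; only the request/response shape comes from the exchange above.

```typescript
// Hypothetical sketch of a deterministic skill router for Mode 1 calls.
// Handler names and pricing are illustrative, not Flowwink's source.
type SkillCall = { skill: string; arguments: Record<string, unknown> };

const handlers: Record<string, (args: Record<string, unknown>) => unknown> = {
  get_quote: (args) => ({
    price_cents: (args.qty as number) * 45, // flat 45 öre/unit, purely illustrative
    currency: "SEK",
    lead_days: 14,
  }),
};

function routeSkillCall(call: SkillCall): unknown {
  const handler = handlers[call.skill];
  // Deterministic: an unknown skill is an error, never a chat fallback.
  if (!handler) throw new Error(`Unknown skill: ${call.skill}`);
  return handler(call.arguments);
}
```

The point of the mode is exactly this determinism: the same call yields the same schema every time, and an unrecognized skill fails loudly instead of degrading into free text.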

Mode 2: Conversational (Chat)

Flexible, natural language, LLM-mediated:

Client → { text: "Can you deliver 1000 branded flashlights in 2 weeks?" }
Server → { result: "Yes. 1000 units, 2-week lead time, 45,000 SEK ex VAT." }

When to use: Exploratory questions, unknown capabilities, nuanced requests.


The Architecture

┌─────────────────────────────────────────────────┐
│                  FlowPilot (Operator)            │
│                                                  │
│  agent-reason → agent-execute → skill handler    │
│                        │                         │
│              ┌─────────┴──────────┐              │
│              │    a2a: handler    │              │
│              └─────────┬──────────┘              │
│                        │                         │
│         ┌──────────────┼──────────────┐          │
│         ▼              ▼              ▼          │
│   a2a-outbound   a2a-ingest    a2a-chat         │
│   (we call out)  (peers call)  (free-text)      │
│         │              │              │          │
└─────────┼──────────────┼──────────────┼──────────┘
          ▼              ▼              ▼
    ┌──────────┐  ┌──────────┐  ┌──────────┐
    │ Peer A   │  │ Peer B   │  │ Peer C   │
    │(OpenClaw)│  │(Supplier)│  │(Partner) │
    └──────────┘  └──────────┘  └──────────┘

Flowwink’s A2A Architecture

Five Edge Functions

These are Supabase Edge Functions — Flowwink’s own implementation:

Function | Direction | Purpose
---------|-----------|--------
agent-card | Inbound (GET) | Publishes Agent Card — who we are, what skills we expose
a2a-ingest | Inbound (POST) | Gateway — authenticates peer, routes to skill or chat
a2a-chat | Inbound (internal) | Handles conversational messages through LLM
a2a-outbound | Outbound (POST) | Calls external peers — auto-detects their protocol
a2a-discover | Outbound (GET) | Fetches and parses remote Agent Cards
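The gateway's routing decision can be sketched as follows. The function and part shapes are assumptions based on the table above, not the actual a2a-ingest source: a data part carrying a skill name, or a "skill:" text prefix, goes to the deterministic skill router; everything else falls through to a2a-chat.

```typescript
// Hypothetical sketch of a2a-ingest routing (shapes assumed, not verbatim).
type Part =
  | { type: "text"; text: string }
  | { type: "data"; data: { skill?: string; arguments?: unknown } };

function routeMessage(parts: Part[]): { route: "skill" | "chat"; skill?: string } {
  for (const part of parts) {
    if (part.type === "data" && part.data.skill) {
      return { route: "skill", skill: part.data.skill };
    }
    if (part.type === "text" && part.text.startsWith("skill:")) {
      // "skill:qualify_lead { ... }" — the skill name is the token after the prefix
      const name = part.text.slice("skill:".length).split(/[\s{]/)[0];
      return { route: "skill", skill: name };
    }
  }
  return { route: "chat" }; // free text falls through to the LLM
}
```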

Authentication

Flowwink’s A2A uses bearer token authentication via Supabase Edge Functions:

Inbound (peers calling us):
  Peer → Authorization: Bearer <token>
  a2a-ingest → SHA-256(token) → lookup in a2a_peers.inbound_token_hash
  Match + status=active → proceed
  No match → 403

Outbound (us calling peers):
  a2a-outbound → lookup peer in a2a_peers
  Authorization: Bearer <peer.outbound_token>
  POST to peer.url + endpoint
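The inbound check above can be sketched like this. The Peer shape mirrors the a2a_peers columns named in this chapter, but the code itself is an assumption, not the actual Edge Function:

```typescript
import { createHash } from "node:crypto";

// Hypothetical sketch of bearer-token verification for a2a-ingest.
type Peer = { name: string; inbound_token_hash: string; status: string };

const sha256 = (s: string) => createHash("sha256").update(s).digest("hex");

function authenticate(authHeader: string | null, peers: Peer[]): Peer | null {
  if (!authHeader || !authHeader.startsWith("Bearer ")) return null;
  const hash = sha256(authHeader.slice("Bearer ".length));
  // Match + status=active → proceed; null means the caller should return 403.
  return peers.find((p) => p.inbound_token_hash === hash && p.status === "active") ?? null;
}
```

Storing only the hash means a leaked a2a_peers table does not leak peer tokens; a production version would also want a constant-time comparison.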

Agent Card (Discovery)

Each Flowwink agent publishes an Agent Card describing its capabilities. This follows patterns from Google’s A2A protocol but is implemented as Flowwink’s own Supabase Edge Function:

{
  "protocolVersion": "0.3.0",
  "name": "FlowPilot",
  "description": "Autonomous CMS operator for FlowWink",
  "url": "https://.../functions/v1/a2a-ingest",
  "capabilities": { "streaming": false },
  "skills": [
    { "id": "manage_blog_posts", "name": "manage_blog_posts", "tags": ["content"] },
    { "id": "qualify_lead", "name": "qualify_lead", "tags": ["crm"] }
  ],
  "security": [{ "bearer": [] }]
}

Skills are loaded dynamically from agent_skills where scope = 'external' or 'both'. Only public-facing skills are exposed to peers.
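A sketch of how the agent-card function could project skill rows into the card above. The row shape and function name are assumptions; the scope filter matches the rule just described:

```typescript
// Hypothetical sketch of Agent Card assembly from agent_skills rows.
type SkillRow = { id: string; name: string; tags: string[]; scope: string };

function buildAgentCard(rows: SkillRow[], ingestUrl: string) {
  return {
    protocolVersion: "0.3.0",
    name: "FlowPilot",
    description: "Autonomous CMS operator for FlowWink",
    url: ingestUrl,
    capabilities: { streaming: false },
    // Only public-facing skills (scope 'external' or 'both') are exposed.
    skills: rows
      .filter((r) => r.scope === "external" || r.scope === "both")
      .map(({ id, name, tags }) => ({ id, name, tags })),
    security: [{ bearer: [] }],
  };
}
```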


The Symbiosis Model

The most powerful A2A pattern is symbiosis — two agents that make each other better:

┌─────────────────────────────────────────────────────────────┐
│                    SYMBIOSIS LOOP                            │
│                                                             │
│  OpenClaw (Architect)          FlowPilot (Operator)         │
│  ┌──────────────┐              ┌──────────────┐             │
│  │ Reads source │──versions──►│ Bootstrap    │             │
│  │ code + docs  │              │ seeds skills │             │
│  │              │◄──findings───│ reflects     │             │
│  │ Reviews      │              │ learns       │             │
│  └──────────────┘              └──────────────┘             │
└─────────────────────────────────────────────────────────────┘

One agent is the “architect” (reviews, audits, tests). The other is the “operator” (executes, manages, learns). They share findings and improve each other.


Dual-Channel Communication

Flowwink supports two communication channels between agents:

Channel | Format | Best For
--------|--------|---------
OpenResponses | OpenAI Responses API | QA testing, code audits, site browsing
Flowwink A2A | JSON-RPC 2.0 (custom) | Natural language chat, sharing findings

The channel is selected automatically based on the skill’s handler prefix:

  • responses:openclaw → OpenResponses channel
  • a2a:openclaw → Flowwink A2A protocol channel
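The prefix convention above can be sketched as a small parser. The function name and error handling are assumptions; only the prefix-to-channel mapping comes from the text:

```typescript
// Hypothetical sketch of channel selection from a handler string
// such as "a2a:openclaw" or "responses:openclaw".
function parseHandler(handler: string): { channel: "openresponses" | "a2a"; peer: string } {
  const idx = handler.indexOf(":");
  if (idx < 0) throw new Error(`Handler has no channel prefix: ${handler}`);
  const prefix = handler.slice(0, idx);
  const peer = handler.slice(idx + 1);
  if (prefix === "responses") return { channel: "openresponses", peer };
  if (prefix === "a2a") return { channel: "a2a", peer };
  throw new Error(`Unknown channel prefix: ${prefix}`);
}
```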

Structured Responses — The Caller Defines the Contract

One of the most important design decisions in Flowwink’s A2A implementation is responseSchema. The calling agent can specify the exact structure it expects back — and the receiving agent’s LLM does its best to comply.

This is verified in a2a-ingest/index.ts (lines 158-161, 221-223) and documented in A2A-COMMUNICATION-MODEL.md:

“The caller defines the game. The responder plays or declines.”

Three Ways to Request Structured Data

Strategy | Format | Reliability | When to use
---------|--------|-------------|------------
skill: prefix | skill:qualify_lead { "company": "Acme" } | High — deterministic skill router | Known capability, repeatable
DataPart | { type: "data", data: { skill: "x", arguments: {...} } } | High — machine-to-machine | Structured JSON-RPC calls
responseSchema | { text: "...", responseSchema: { price: "number", available: "boolean" } } | Best-effort — LLM follows schema | Exploratory, conversational

How responseSchema flows through the system

OpenClaw calls a2a-ingest:
{
  "jsonrpc": "2.0",
  "method": "message/send",
  "params": {
    "message": { "parts": [{ "type": "text", "text": "Site health report?" }] },
    "responseSchema": {
      "pages": "number",
      "leads_7d": "number",
      "issues": "string[]",
      "health_score": "number"
    }
  }
}


a2a-ingest extracts responseSchema → injects into args


agent-execute passes responseSchema to skill handler


FlowPilot's LLM structures its response to match


OpenClaw receives:
{
  "pages": 12, "leads_7d": 34,
  "issues": ["Missing meta on /about"],
  "health_score": 87
}

The practical limit: responseSchema is a suggestion for chat mode. If the LLM ignores it, use skill: prefix for guaranteed structured output. The troubleshooting note in the source is explicit: “Free-text responses instead of JSON → Use skill: prefix for structured calls.”
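The middle hop of that flow, where a2a-ingest lifts responseSchema out of the JSON-RPC params and merges it into the arguments that travel on to the skill handler, can be sketched like this. Names follow the flow above but are assumptions, not the verified source:

```typescript
// Hypothetical sketch of the responseSchema hand-off in a2a-ingest.
type SendParams = {
  message: { parts: unknown[] };
  responseSchema?: Record<string, string>;
};

function injectSchema(
  params: SendParams,
  args: Record<string, unknown>,
): Record<string, unknown> {
  // Best-effort contract: forwarded when the caller supplied one, absent otherwise.
  return params.responseSchema ? { ...args, responseSchema: params.responseSchema } : args;
}
```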


The Agentic Web — A Vision

This is not science fiction. Every technical primitive described below exists today.


Picture a Thursday morning in 2027. A procurement manager at a mid-sized manufacturing company in Gothenburg needs 4,000 units of a specialized industrial component. She doesn’t open a browser. She doesn’t call a supplier. She opens her company’s FlowPilot interface and types:

“We need 4,000 units of DIN rail terminal blocks, 2.5mm², grey, by April 28. Get me three competitive offers.”

FlowPilot acknowledges the objective and begins.


07:42 — The Buyer Agent Posts Requirements

FlowPilot creates a structured procurement request and publishes it to the company’s signal-ingest endpoint — visible to any registered supplier agent:

{
  "type": "procurement_request",
  "ref": "PR-2027-0847",
  "product": "DIN rail terminal block",
  "spec": { "size": "2.5mm²", "color": "grey", "standard": "IEC 60947-7-1" },
  "quantity": 4000,
  "currency": "SEK",
  "delivery_required_by": "2027-04-28",
  "responseSchema": {
    "unit_price_sek": "number",
    "total_price_sek": "number",
    "delivery_date": "string",
    "moq": "number",
    "validity_days": "number",
    "status": "quoted | declined | pending_review"
  }
}

Forty-three supplier agents have registered as potential partners. They all receive the signal simultaneously.


07:42:03 — The Responses Begin

Supplier A — Phoenix Components, Hamburg
Their agent checks real-time stock, runs a pricing calculation, and responds in 3 seconds:

{
  "unit_price_sek": 4.20,
  "total_price_sek": 16800,
  "delivery_date": "2027-04-24",
  "moq": 500,
  "validity_days": 14,
  "status": "quoted"
}

Supplier B — NordElec, Stockholm
Their agent checks inventory. Stock is low. They decline immediately:

{
  "status": "declined",
  "reason": "Insufficient stock. Currently 1,200 units available."
}

Supplier C — Weidmüller distributor, Malmö
Their agent finds a pricing match, but the order value exceeds their autonomous approval limit. They invoke their own Human-in-the-Loop:

{
  "status": "pending_review",
  "estimated_response_by": "2027-04-03T14:00:00Z",
  "preliminary_price_range": "4.10–4.35 SEK/unit",
  "note": "Order value requires sales manager approval. Response guaranteed within 6 hours."
}

The remaining 40 suppliers
Thirty-nine agents respond as declined within 10 seconds — wrong spec, wrong geography, minimum order too high. One responds with a counter-offer: same spec, 4.20 SEK/unit, but delivery April 30 — two days late.


07:43 — FlowPilot Compiles and Reasons

One minute after the manager typed her request, FlowPilot has:

  • Received 43 responses
  • Filtered to spec-compliant offers: 4
  • Ranked by price × delivery reliability score
  • Identified one pending HIL at Supplier C with a 6-hour SLA
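Illustrative only: the scenario does not specify the ranking formula, so this sketch simply filters to on-time quotes and sorts by price. The supplier names and the Offer shape are taken from the example responses above.

```typescript
// Hypothetical sketch of the buyer agent's filter-and-rank step.
type Offer = {
  supplier: string;
  total_price_sek: number;
  delivery_date: string; // ISO date, so string comparison works
  status: string;
};

function rankOffers(offers: Offer[], requiredBy: string): Offer[] {
  return offers
    .filter((o) => o.status === "quoted" && o.delivery_date <= requiredBy)
    .sort((a, b) => a.total_price_sek - b.total_price_sek);
}
```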

It presents a summary in the admin interface:

3 qualified offers received. 1 pending review (ETA 14:00). Recommend: Supplier A at 16,800 SEK with April 24 delivery — 4 days margin. Shall I request formal order confirmation?

The procurement manager reads it. Types: “Yes — go ahead with Supplier A. And follow up with Supplier C when their review is done.”


14:07 — Supplier C’s Human Approves

The sales manager at the Malmö distributor reviews the offer. Approves. Their FlowPilot sends:

{
  "status": "quoted",
  "unit_price_sek": 4.15,
  "total_price_sek": 16600,
  "delivery_date": "2027-04-26",
  "validity_days": 7
}

FlowPilot receives it, logs it in the procurement history, and sends the manager a notification:

Supplier C responded at 14:07 — 16,600 SEK, April 26 delivery. 200 SEK cheaper than Supplier A, 2 days later. Order already confirmed with Supplier A. Archive this offer?

The manager doesn’t need to respond. She has the full audit trail. The agent managed the process. She made two decisions in three minutes.


What Just Happened — The Technical Reality

This scenario uses exactly the infrastructure described in this handbook:

Step | Technology
-----|-----------
Buyer publishes requirements | signal-ingest endpoint + responseSchema
Supplier agents receive | agent_automations event trigger
Structured responses | A2A v0.3.0 JSON-RPC with responseSchema
Stock check (Supplier A) | db: skill handler → internal ERP query
Instant decline (Supplier B) | Agent reasoning → declined status
HIL approval (Supplier C) | requires_approval: true → pending_review
Buyer agent compiles | agent-reason ReAct loop
Audit trail | a2a_activity log — every interaction stored

No central portal. No procurement platform subscription. No phone calls. Forty-three agents contacted, evaluated, and responded in under 60 seconds. One human made two decisions in three minutes.


Why This Wasn’t Possible Before

EDI — Electronic Data Interchange — has existed since the 1970s. Structured business messages between computer systems are not new. APIs have existed since the early 2000s. And yet procurement still looks like it did decades ago: portals, emails, spreadsheets, phone calls.

The difference is not the technology for sending structured data. The difference is what sits at each end of the wire.

Capability | Without agentic AI | With agentic AI
-----------|--------------------|----------------
Interpret intent | Manager must fill in the right fields in the right portal | "We need 4,000 DIN rail clamps by April 28" → agent understands and acts
Autonomous initiative | System waits to be triggered step by step | Agent decides how to solve the objective
Handle unexpected states | Requires pre-programmed fallback rules | Agent understands "pending_review" as a legitimate state and schedules follow-up
Zero integration cost | EDI requires months of pairwise integration per supplier | New supplier exposes an Agent Card — the agent reads it and knows what to ask
Flexible schemas | EDI requires strictly pre-agreed message formats | responseSchema is a suggestion — supplier's LLM does its best to comply

The decisive shift is that no central portal is required. No third-party platform owning the relationship. No integration team mapping EDI schemas. Just two Agent Cards, a bearer token, and a shared JSON-RPC format.

responseSchema is the key primitive. It lets an agent that has never met another agent say: “I don’t know exactly how you respond, but here’s the structure I need — do your best.” And the other agent, driven by an LLM with genuine generalization capability, can actually follow it.

That is what EDI could never do. That is what agentic AI makes possible.

Why This Matters Beyond Procurement

The procurement scenario is illustrative, but the pattern generalizes to everything that currently requires centralized intermediaries:

  • Recruitment: Company agent posts requirements → candidate agents respond with fit scores and availability → recruiters review shortlist
  • Real estate: Buyer agent broadcasts criteria → listing agents respond with matches → viewing scheduled by agents
  • Logistics: Shipper agent broadcasts route → carrier agents bid → optimal carrier selected
  • Legal: Company agent requests contract review → law firm agents respond with capacity and rate → engagement confirmed

Every market that currently requires a directory, a portal, a broker, or a platform is a candidate for disintermediation by A2A agent networks.

The web became decentralized with HTTP. Commerce became decentralized with APIs. The next layer — negotiation, qualification, and commitment — is about to become decentralized with A2A.

The responseSchema field is a small technical detail. But it encodes a large philosophy: agents that can speak a common structured language can transact without humans in the middle. The humans set the objectives. The agents handle the market.


Adding a New Peer

The process is simple and requires no code changes — register in the a2a_peers table:

  1. Register in a2a_peers with name, url, outbound_token
  2. Generate an inbound token, hash it, store in inbound_token_hash
  3. Set capabilities: { "protocol": "jsonrpc", "endpoint": "/a2a/ingest" }
  4. Set status: "active"

The peer can now call us and we can call them.
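Steps 1-4 can be sketched as a row builder. The column names follow the list above; the actual insert into a2a_peers, and how the generated inbound token is delivered to the peer, are omitted here:

```typescript
import { createHash, randomBytes } from "node:crypto";

// Hypothetical sketch of preparing a new a2a_peers row.
function prepareNewPeer(name: string, url: string, outboundToken: string) {
  const inboundToken = randomBytes(32).toString("hex"); // hand this to the peer once
  return {
    inboundToken,
    row: {
      name,
      url,
      outbound_token: outboundToken,
      // Only the hash is stored, matching the inbound auth check.
      inbound_token_hash: createHash("sha256").update(inboundToken).digest("hex"),
      capabilities: { protocol: "jsonrpc", endpoint: "/a2a/ingest" },
      status: "active",
    },
  };
}
```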


The Future: Agent Networks

A2A communication enables agent networks:

┌─────────┐     ┌─────────┐     ┌─────────┐
│ Agent A │────►│ Agent B │────►│ Agent C │
│ (CRM)   │     │ (Content)│    │ (Sales) │
└────┬────┘     └────┬────┘     └────┬────┘
     │               │               │
     └───────────────┼───────────────┘

              ┌──────┴──────┐
              │  Agent D    │
              │ (Analytics) │
              └─────────────┘

Each agent specializes. They coordinate through A2A protocols. The network is more capable than any individual agent.


The future of work isn’t one AI doing everything. It’s a network of specialized agents collaborating. OpenClaw proved intra-process coordination with session tools. Google standardized inter-organizational communication with the A2A protocol. Flowwink built its own inter-tenant layer on Supabase Edge Functions. The pattern is clear — agents need to talk to each other, and the infrastructure is catching up.

Next: where this is all heading — Oracle’s restructuring, the Agent Manager role, and three horizons for builders. The Future →

This is your handbook

Agentic AI is evolving fast. The patterns, the laws, the architecture — they need to stay current with the community's collective knowledge. I want this to become the go-to resource for anyone learning how autonomous agents work.

Whether you fix a typo, add a chapter, share a production story, or challenge an assumption — every contribution makes this better for everyone.