qOS v21.43+
Outline Graph Architecture
QUIVER DISTRIBUTION / OPERATIONS SYSTEM

Quiver Operations System.

A rule-governed agent runtime layered over Claude. Production at Quiver Distribution.

v21.35 · 10,607 lines · 84 sections · 432 changelog entries

Externalized rules, persistent memory, mechanical failure recovery, explicit consent gates.

BILLY SCHWARTZ · MAY 2026 SCROLL

01 / Abstract

~5 min read

The Quiver Operations System (QOS) is a declarative agent runtime layered over a general-purpose LLM (Claude Opus 4.7) deployed via Claude.ai. It governs production operations at Quiver Distribution — film-distribution workflows including avails generation, metadata generation, royalty statement processing, multi-platform promo classification, partner communications, sales tracking, and rights management.

Architecturally, QOS treats the LLM as a stateless executor and externalizes everything that matters: rules, schema, state, control flow, and failure history.


The system has four primary components:

01

A versioned rulebook

QD_Claude_Master_Rules.md, ~10.6K lines, 84 sections, 432-entry changelog. The executable specification. Every behavior, constraint, trigger, and workflow is declared in markdown with explicit cross-references. Rule changes are atomic commits with mandatory changelog entries listing interacting rules, accepted tensions, gaps-not-closed, and pre-edit/post-edit row counts as mechanical verification. Active velocity: ~80 rule versions in 14 days.
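A changelog row of the shape described might look like the following. The column set is taken from the text (interacting rules, accepted tensions, gaps-not-closed, row counts); the entry contents are invented for illustration and are not from the actual rulebook:

```markdown
| Version | Change                              | Interacting rules | Tensions accepted | Gaps not closed     | Rows pre → post |
|---------|-------------------------------------|-------------------|-------------------|---------------------|-----------------|
| v21.36  | Tighten promo-classification trigger | 16AB, 16AE        | none              | team-access surface | 431 → 432       |
```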

02

A persistent state layer

Supabase, ~14 tables, ca-central-1. State partitioned by concern: cross-session memory (quiver_continuum), task queue (quiver_open_items), failure incidents (quiver_rule_incidents), schema cache, behavioral experiments, CRM intelligence, promo taxonomy, pricing reference, platform reference, provider profiles, inference calibration, journal, expenses. Airtable hosts operational domain records (Titles, Sales Tracker, Pay Tracker, Providers, OKR Goals).

03

An integration layer

Seven external systems via MCP servers: Airtable (operational backbone), Microsoft 365 (Outlook/Calendar/SharePoint), Fireflies (transcripts), Supabase (state), GitHub (rulebook persistence), Netlify (hosting), web search/fetch. Reads unconstrained; writes gated.

04

A consent gate

Unified Quiver consent (Section 16AJ v20.66+): any one of Billy/TJ/Steff approving an Airtable write = Quiver approving. CRM and rulebook writes have separate gate semantics. The gate is the system's primary safety boundary — no autonomous mutation of operational records.

Runtime model. Session-stateful but session-bounded. Each conversation is a Claude.ai chat. go is the session-init trigger: pulls latest rulebook from GitHub, loads Continuum (master row + most-recent session row) from Supabase, fetches parallel CoS briefing data, renders briefing block, then drops into command mode. done is the session-finalize trigger: synthesizes session work, gates on operator confirmation, then writes session row to Continuum and updates master row. Between go and done, ~30 named commands execute against the integration layer under rulebook governance. Idempotency is not guaranteed; writes are confirmed each invocation.
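The lifecycle above can be modeled as a three-state machine. This is a toy sketch, not the system's implementation; all names and the exact sequencing of the briefing steps are illustrative:

```python
from enum import Enum, auto

class SessionState(Enum):
    IDLE = auto()
    ACTIVE = auto()
    FINALIZED = auto()

class Session:
    """Toy model of the go / command / done lifecycle described above."""

    def __init__(self):
        self.state = SessionState.IDLE
        self.log = []

    def go(self):
        """Session init: fresh rulebook, Continuum load, briefing."""
        assert self.state is SessionState.IDLE
        self.log.append("pull latest rulebook from GitHub")
        self.log.append("load Continuum master row + latest session row")
        self.log.append("render briefing block")
        self.state = SessionState.ACTIVE

    def run(self, command: str):
        # Named commands execute only inside an active session.
        assert self.state is SessionState.ACTIVE
        self.log.append(f"command: {command}")

    def done(self, operator_confirmed: bool) -> bool:
        """Session finalize gates on operator confirmation before writing."""
        assert self.state is SessionState.ACTIVE
        if not operator_confirmed:
            return False  # no Continuum write without confirmation
        self.log.append("write session row; update master row")
        self.state = SessionState.FINALIZED
        return True
```

The point of the model: commands are rejected outside the `go`/`done` window, and a declined confirmation leaves the session active rather than silently persisting.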

Memory model. Three concentric layers. (1) Conversation context (LLM window, in-session). (2) Continuum (cross-session, structured JSON in Supabase with master-row aggregation and 20/50-row tier-managed compression). (3) Domain-specific persistent tables. The rulebook is read fresh at every go; staleness is detected via project-snapshot drift tracking.
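One plausible reading of the 20/50-row tier management is: keep the newest rows verbatim and collapse older rows into a summary once the table crosses its cap. That interpretation is an assumption; the sketch below is illustrative only, and in the real system the summarization is AI-mediated rather than a counter:

```python
def compress_continuum(rows: list[dict],
                       verbatim_cap: int = 20,
                       total_cap: int = 50) -> list[dict]:
    """Hypothetical tier-managed compression: if the Continuum exceeds
    `total_cap` rows, keep the newest `verbatim_cap` rows as-is and fold
    everything older into a single summary row."""
    if len(rows) <= total_cap:
        return rows
    cut = len(rows) - verbatim_cap
    summary = {"role": "summary", "covers": cut}  # AI-written in practice
    return [summary] + rows[cut:]
```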


Failure-recovery model. The proof trigger (Section 16AB) is a structured 7-step post-incident loop: confirm incident verbatim → root-cause analysis → existing-rule check via mdscan → branch classification (A: rule violation, B: schema/process gap, C: context gap, D: one-off) → options + trade-offs → operator approval → atomic execution. Branch A and B INSERT into quiver_rule_incidents. Recurrence detection (≥2 incidents under same rule class) forces escalation from discipline-mediated enforcement to mechanical enforcement. Canonical example: after three formatting-rule violations in four days, an enforcement helper (scripts/xlsx_global_rules.py) was built and made mandatory in the output checklist — discipline replaced by deterministic post-condition check.
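The recurrence-detection step can be sketched in a few lines. The branch letters, the A/B persistence rule, and the ≥2 threshold come from the text; the data shapes and function names are hypothetical:

```python
from collections import Counter

BRANCHES = {"A": "rule violation", "B": "schema/process gap",
            "C": "context gap", "D": "one-off"}

def record_incident(incidents: list[dict], rule_class: str, branch: str) -> str:
    """Append an incident (branches A and B are persisted, mirroring the
    INSERT into quiver_rule_incidents) and return the enforcement mode for
    the rule class: >=2 incidents force escalation from 'soft' (discipline)
    to 'hard' (mechanical) enforcement."""
    if branch in ("A", "B"):
        incidents.append({"rule_class": rule_class, "branch": branch})
    count = Counter(i["rule_class"] for i in incidents)[rule_class]
    return "hard" if count >= 2 else "soft"
```

Under this model the xlsx example plays out mechanically: the second persisted formatting incident flips the rule class to hard enforcement, which is when a deterministic helper replaces discipline.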

Schema drift management. Airtable field IDs are the primary drift vector. quiver_airtable_schema caches the schema; schema sync (Section 16AE) refreshes it; the rulebook references field IDs (not field names) for stability. The audit command runs scheduled health checks against Airtable; mdscan runs pre-codification scans against the rulebook to detect duplicate rules before adding them.
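Why field IDs beat field names as references: a rename changes the value under a stable ID rather than invalidating the key. A minimal drift check against a cached schema might look like this (table contents invented; the real `quiver_airtable_schema` shape is not documented here):

```python
def detect_drift(cached: dict[str, str], live: dict[str, str]) -> dict[str, tuple]:
    """Compare a cached {field_id: field_name} map against the live Airtable
    schema. Renames appear as value changes under a stable field ID;
    added/removed fields appear as one-sided keys."""
    drift = {}
    for fid in cached.keys() | live.keys():
        if cached.get(fid) != live.get(fid):
            drift[fid] = (cached.get(fid), live.get(fid))
    return drift
```

A rulebook that stores `fldA1b2` keeps working through a rename; one that stores the display name "Title Name" breaks silently.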

Self-modification discipline. The rulebook can edit itself via update MD (atomic edit + version bump + changelog row + commit + push). Pre-edit checks include git pull origin main (parallel-session collision detection — six collisions known to date), row-count delta verification, and prefix-shape verification of changelog table integrity.
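The row-count delta check is cheap to make mechanical. A sketch of what such a verifier could look like, assuming the changelog is a markdown table (function names are illustrative, not the system's own):

```python
def changelog_row_count(markdown_text: str) -> int:
    """Count data rows in a markdown changelog table: lines starting with
    '|', excluding the header row and the |---| separator row."""
    rows = [
        line for line in markdown_text.splitlines()
        if line.lstrip().startswith("|")
        and not set(line.strip()) <= set("|-: ")  # drop separator rows
    ]
    return max(len(rows) - 1, 0)  # subtract the header row

def verify_atomic_edit(before: str, after: str, expected_new_rows: int = 1) -> bool:
    """A rule edit passes only if it adds exactly the expected number of
    changelog rows -- a cheap mechanical proxy for 'nothing was clobbered'."""
    return changelog_row_count(after) - changelog_row_count(before) == expected_new_rows
```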

Known weaknesses. Deployment ceiling: every command runs through the operator's Claude.ai session; no team access surface exists beyond two prototype Netlify apps. Most rules are discipline-mediated rather than mechanically enforced, so conformance ultimately depends on the LLM itself. Project-snapshot drift between the file-attached rulebook and the live GitHub copy is not automatically reconciled. Field-level Airtable permissions can block API writes that the consent gate would otherwise authorize. Trigger recognition is task-shape pattern-matching at LLM inference time; false negatives are possible.

Strategic posture. Architecture is sound; what's missing is the deployment pattern. The next layer is extracting the highest-leverage workflows into web UIs that team members can drive without exposing the rulebook itself. The Netlify-hosted qd-tasks app is the proof-of-concept for that pattern. Five to ten more such apps would convert the system from operator-only to team-accessible.

02 / System Diagram

Four components, one runtime, one safety boundary.


Rulebook — QD_Claude_Master_Rules.md · v21.35 · 10,607 lines · GitHub — read at go by the Runtime — Claude.ai · Opus 4.7 · ~30 commands · stateless executor — which reads and writes the State Layer — Supabase · 14 tables · ca-central-1 · Continuum, incidents, tasks — and reaches the Integrations — Airtable · M365 · Fireflies · GitHub · Netlify · web, via MCP — where reads are unconstrained and writes pass the CONSENT GATE (16AJ). The proof loop feeds incidents (branches A/B/C/D) back into the rulebook.
RULEBOOK
The executable spec. Read at every go.
RUNTIME
Claude.ai chat. Stateless between sessions.
STATE
Supabase. Persists everything else.
CONSENT
Unified Quiver. Any of Billy/TJ/Steff = Quiver.

03 / Logic Matrix

Eleven categories. Every active rule.

04 / Notes for Engineers

Architectural takeaways.

01

Rules as code where the code is markdown.

Determinism is bounded by LLM conformance. The rulebook is the executable spec; failure-recovery is the primary safety net.

02

The Continuum is append-only.

Closest analog to a session in a stateful application — except append-only with periodic AI-mediated compression and a long-lived master row aggregating across all sessions.

03

proof is a state machine.

A structured retrospective protocol embedded in the runtime with branch classification, recurrence detection, and forced escalation from discipline to mechanical enforcement on recurrence.

04

Schema drift is the dominant operational risk.

Three mitigations: cached schema in quiver_airtable_schema, the audit command, and forced reference-by-field-ID in the rulebook itself.

05

Consent gates are the safety boundary.

The unified Quiver model (v20.66) is a recent simplification from a two-tier model — the kind of design choice you'd expect to oscillate as the system matures.

06

Monolithic markdown today.

Split-into-skills migration is a queued open question. ~10K lines in one file is past the comfortable ceiling for human navigation; tooling fills the gap for now.
