Graph View Internals

The graph view is the runtime used to author, schedule, and evaluate template logic in apps/builder-ui. This page explains the real implementation, including queue mechanics, wave planning, staleness guards, and nested-graph propagation behavior. It is intended for contributors who need to reason about correctness and make changes safely.

Scope and Intent

This document focuses on internal execution semantics: how graph mutations become queued work, how batches are claimed and drained, how nodes are compiled and evaluated in waves, and how terminal results are projected either to entry fields or back into parent graphs. Visual layout details are intentionally omitted unless they affect runtime behavior.

Runtime File Map

Subsystem | File | Responsibility
--- | --- | ---
Graph model | components/graph-editor/types/graph.ts | Graph type system, DAG helper, evaluation types
Command model | components/graph-editor/lib/commands/types.ts | Command contracts and version bump rules
Tree constraints | components/graph-editor/lib/commands/elementary/add-edge.ts | Single-parent and acyclic edge validation
Command -> queue | components/graph-editor/store/command.ts | Applies commands and enqueues dirty subgraphs
Global queue SM | store/graphs/index.ts | Queue, in-flight batch, claim/finish/reset API
Batch contracts | components/graph-editor/lib/batch/contracts.ts | Gateway/compiler/evaluator interfaces
Store adapter | components/graph-editor/lib/batch/gateway.ts | Concrete read/write boundary to Zustand
Scheduler actor | components/graph-editor/lib/batch/runner.ts | Long-lived serialized drain loop
Wave planner | components/graph-editor/lib/batch/graph-batch-pipeline.ts | Affected-set and dependency countdown logic
Batch execution | components/graph-editor/lib/batch/graph-batch-executor.ts | Single-batch state transitions and staleness checks
Node compiler | components/graph-editor/lib/batch/compiler.ts | compileLogicTree integration and rendering
React host hook | components/graph-editor/hooks/use-graph-batch-process.ts | Runner lifecycle hosting
Eval transport | services/templating/query.ts | Request/response mapping for node evaluation

Graph Semantics and Structural Constraints

A graph instance consists of id, ref, graphVersion, topologyVersion, nodes, and edges. The ref field is not metadata; it controls where terminal output is applied. A Field graph writes directly into entries.setComputed(fieldId, value). A Param graph does not directly commit a field value; instead, its terminal result triggers reevaluation in its parent graph at the owning node.

Edge orientation is intentionally inverted relative to many DAG engines: from is the child node and to is the parent node. This means dependency resolution and wave progression run leaf-to-root. The command layer enforces tree-like structure by rejecting self-loops, parent conflicts (a child may only have one parent), and any edge that creates a cycle. This tree invariant is fundamental to the batch planner and skip propagation logic.
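The three rejection rules can be sketched as a small validator. This is a simplified illustration, not the real add-edge.ts implementation: the `Edge` shape and the `validateEdge` function are assumptions, but the child-to-parent orientation and the single-parent walk match the semantics described above.

```typescript
// Simplified sketch of the tree constraints described above.
// Edges point child -> parent: `from` is the child, `to` is the parent.
type Edge = { from: string; to: string };

function validateEdge(edges: Edge[], candidate: Edge): string | null {
  // Reject self-loops.
  if (candidate.from === candidate.to) return "self-loop";
  // Single-parent rule: the child must not already have a parent.
  if (edges.some((e) => e.from === candidate.from)) return "parent-conflict";
  // Cycle check: walk the parent chain upward from the candidate's
  // parent; reaching the candidate's child would close a cycle.
  let cursor: string | undefined = candidate.to;
  while (cursor !== undefined) {
    if (cursor === candidate.from) return "cycle";
    cursor = edges.find((e) => e.from === cursor)?.to;
  }
  return null; // edge is valid
}
```

Because each node has at most one parent, the cycle check degenerates to a single upward walk rather than a full DFS, which is one reason the tree invariant keeps the planner cheap.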

Queue Envelope and State Machine

Queued work is represented by GraphBatchQueryItem containing id, entryId, graphId, graphVersion, topologyVersion, and a dirty node-id set. The version fields are snapshot markers captured when the work is enqueued. They are later compared against current store versions to detect stale execution.

The global graph slice contains a minimal state machine with batchQueue, batchInFlight, and batchStatus. enqueueGraph appends work; claimBatch moves queue head to batchInFlight; finishBatch clears batchInFlight (optionally guarded by batch id); and resetBatchSM returns in-flight work back to the queue. The hard invariant is one global in-flight batch at a time.
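The state machine is small enough to sketch in full. Method names mirror the document, but the class shape and `Batch` fields below are simplified assumptions rather than the real Zustand slice:

```typescript
// Minimal sketch of the queue state machine described above.
type Batch = { id: string; graphId: string; dirty: string[] };

class BatchSM {
  batchQueue: Batch[] = [];
  batchInFlight: Batch | null = null;

  enqueueGraph(batch: Batch): void {
    this.batchQueue.push(batch);
  }

  // Claim moves the queue head into the single in-flight slot.
  claimBatch(): Batch | null {
    if (this.batchInFlight !== null) return null; // hard invariant: one in-flight batch
    this.batchInFlight = this.batchQueue.shift() ?? null;
    return this.batchInFlight;
  }

  // Completion is optionally guarded by batch id, so a stale caller
  // cannot clear someone else's in-flight work.
  finishBatch(batchId?: string): void {
    if (batchId !== undefined && this.batchInFlight?.id !== batchId) return;
    this.batchInFlight = null;
  }

  // Returns in-flight work to the front of the queue.
  resetBatchSM(): void {
    if (this.batchInFlight !== null) {
      this.batchQueue.unshift(this.batchInFlight);
      this.batchInFlight = null;
    }
  }
}
```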

End-to-End Runtime Flow

flowchart TD
    A["User event\ncommand apply / run graph / run node"] --> B["graphs.enqueueGraph(graphId, dirty)"]
    B --> C["graphs.batchQueue"]
    C --> D["GraphBatchRunner"]
    D --> E["GraphBatchGateway.claimNextBatch"]
    E --> F["GraphBatchExecutor.run(batch)"]
    F --> G["GraphBatchPipelineInstance\ncompute waves"]
    G --> H["GraphBatchCompiler.compileNode"]
    H --> I["GraphBatchEvaluator.evaluateWave"]
    I --> J["Gateway writes node states/templates"]
    J --> K{"terminal node result?"}
    K -->|no| L["next wave"]
    K -->|yes| M{"graph ref kind"}
    M -->|Field| N["entries.setComputed(fieldId, value)"]
    M -->|Param| O["enqueue parent graph(ownerNodeId)"]
    N --> P["finishClaim"]
    O --> P
    L --> G

Trigger Sources

The normal trigger path is command execution. CommandSlice.applyCommand (and undo/redo) applies a graph mutation, filters dirty nodes against current graph membership, and calls graphs.enqueueGraph(dirtyNodes). Manual triggers also exist: the graph header run button enqueues all nodes, while each node has a run button that enqueues only that node id.

These manual triggers are not cosmetic. They are direct entry points into the same queueing system, so a contributor debugging unexpected reevaluation should treat them exactly like command-generated batches.

React Host vs Runtime Core

useGraphBatchProcess now functions as a host. It initializes the gateway once, keeps compiler/evaluator references current, creates a single runner instance, and starts/stops that runner with component mount lifecycle. The queue drain algorithm is therefore not encoded in React effect loops. React rerenders update dependencies, while the actor owns execution continuity.

Gateway Boundary Responsibilities

The gateway is the single store adapter used by the runner and executor. It claims and finishes batches, builds graph + dag + versions snapshots, reads node evaluations and templates, writes runtime states, and applies terminal side effects. For field graphs, side effects write computed entry values. For parameter graphs, side effects enqueue parent reevaluation at ownerNodeId.

The gateway also maintains a lease map keyed by batch id. Completion only finishes a claim when lease and batch id still match, preventing stale or duplicate completion from corrupting in-flight state.
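The lease mechanic can be shown as a tiny map-based guard. The `LeaseMap` class is a hypothetical stand-in for the map the gateway keeps internally; only the "finish must match the active lease" rule is taken from the document:

```typescript
// Sketch of lease-guarded completion (assumed shape).
class LeaseMap {
  private leases = new Map<string, number>();
  private nextLease = 1;

  // Granting a claim records a fresh lease token for the batch id.
  grant(batchId: string): number {
    const lease = this.nextLease++;
    this.leases.set(batchId, lease);
    return lease;
  }

  // Completion succeeds only while lease and batch id still match;
  // stale or duplicate completions become no-ops.
  tryFinish(batchId: string, lease: number): boolean {
    if (this.leases.get(batchId) !== lease) return false;
    this.leases.delete(batchId);
    return true;
  }
}
```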

Runner Actor Semantics

GraphBatchRunner is a serialized actor with running, draining, and wakeRequested guards. start() resets queue state, subscribes to queue/in-flight signals, and immediately wakes the drain loop. wake() either schedules drain or marks another wake request when draining is already active. drain() repeatedly claims a batch, executes it, and finishes claim, then re-enters if signaled or if work remains claimable.

This design prevents reentrant drains and gives deterministic single-threaded processing without locking primitives.
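The guard interplay is easier to see in code. This sketch replaces the store subscription and async execution with a synchronous in-memory queue, so it is an illustration of the running/draining/wakeRequested pattern, not the real GraphBatchRunner:

```typescript
// Sketch of the serialized actor pattern described above.
class Runner {
  private running = false;
  private draining = false;
  private wakeRequested = false;
  private queue: string[];
  processed: string[] = [];

  constructor(queue: string[]) {
    this.queue = queue;
  }

  start(): void {
    this.running = true;
    this.wake(); // immediately attempt a drain on start
  }

  stop(): void {
    this.running = false;
  }

  wake(): void {
    if (!this.running) return;
    if (this.draining) {
      // A drain is active: record the signal instead of re-entering.
      this.wakeRequested = true;
      return;
    }
    this.drain();
  }

  private drain(): void {
    this.draining = true;
    do {
      this.wakeRequested = false;
      let batch: string | undefined;
      // Claim, execute, finish -- modeled here as a simple shift+push.
      while (this.running && (batch = this.queue.shift()) !== undefined) {
        this.processed.push(batch);
      }
    } while (this.wakeRequested); // re-enter if signaled mid-drain
    this.draining = false;
  }
}
```

The `draining` flag is what makes concurrent wakes safe without locks: a wake during a drain only sets a boolean, and the do/while re-checks it before exiting.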

Executor Semantics

GraphBatchExecutor owns one claimed batch. It first loads a snapshot and pipeline instance, then checks staleness (snapshot.graphVersion vs batch.graphVersion). During each wave it partitions nodes into runnable and blocked, marks statuses, compiles runnable nodes, evaluates them, applies results, and re-checks cancellation/staleness after every awaited evaluation.
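The staleness comparison itself is a one-line predicate. Field names below mirror the document's version stamps; the function name is an assumption:

```typescript
// Sketch of the staleness guard run before and after awaited evaluation.
type VersionStamp = { graphVersion: number; topologyVersion: number };

function isStale(snapshot: VersionStamp, batch: VersionStamp): boolean {
  // The batch captured its versions at enqueue time; any later edit
  // bumps graphVersion in the store and invalidates this batch's work.
  return snapshot.graphVersion !== batch.graphVersion;
}
```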

The compile step uses compileLogicTree(..., rootNodeId=nodeId) to generate a node-local template fragment. Both rawTemplate and rendered template are carried: rendered template goes to evaluation, raw template is persisted for introspection and debugging. The evaluator receives { entryId, reqs: [{ nodeId, template }] } and returns node results that are merged into store evaluation state.
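Assembling the evaluator request for one wave follows directly from the shape above. The compile step is stubbed here (the real one calls compileLogicTree with rootNodeId), and `buildWaveRequest` is a hypothetical helper name:

```typescript
// Sketch of wave request assembly per the shape described above.
type CompileResult = { rawTemplate: string; template: string };

function buildWaveRequest(
  entryId: string,
  wave: string[],
  compileNode: (nodeId: string) => CompileResult,
): { entryId: string; reqs: { nodeId: string; template: string }[] } {
  return {
    entryId,
    // Only the rendered template is sent to evaluation; rawTemplate is
    // persisted separately for introspection and debugging.
    reqs: wave.map((nodeId) => ({
      nodeId,
      template: compileNode(nodeId).template,
    })),
  };
}
```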

Wave Planner Mechanics

The planner begins from dirty seeds and computes affectedSet = dirty + all ancestors. For each affected node it computes remainingDeps as the number of affected children still unresolved. The initial ready wave consists of the dirty nodes with zero remaining dependencies.

A successful node decrements its parent’s dependency count; when parent count reaches zero and parent is still runnable, it is queued for a later wave. A failed node immediately marks itself error and causes ancestor skipping. A skipped node is never requeued. Additionally, executor-level canRunNode blocks a node if any direct child is already in Error status in store.
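A runnable sketch of these mechanics follows. The shapes and the `planWaves` name are assumptions; the single-parent walk, dependency countdown, and ancestor skipping follow the rules above. Failure is modeled by passing the failing node ids up front rather than discovering them during evaluation:

```typescript
// Sketch of wave planning: edges point child -> parent, so ancestors are
// reached by following `from -> to` (leaf-to-root).
type Edge = { from: string; to: string };

function planWaves(
  edges: Edge[],
  dirty: string[],
  failing: Set<string> = new Set(),
): { waves: string[][]; skipped: string[] } {
  const parentOf = new Map(edges.map((e) => [e.from, e.to] as [string, string]));

  // affectedSet = dirty seeds plus all ancestors.
  const affected = new Set<string>();
  for (const seed of dirty) {
    let cursor: string | undefined = seed;
    while (cursor !== undefined && !affected.has(cursor)) {
      affected.add(cursor);
      cursor = parentOf.get(cursor);
    }
  }

  // remainingDeps = number of affected children still unresolved.
  const remaining = new Map<string, number>();
  for (const node of affected) remaining.set(node, 0);
  for (const e of edges) {
    if (affected.has(e.from) && affected.has(e.to)) {
      remaining.set(e.to, (remaining.get(e.to) ?? 0) + 1);
    }
  }

  const waves: string[][] = [];
  const skipped = new Set<string>();
  let ready = [...affected].filter((n) => remaining.get(n) === 0);
  while (ready.length > 0) {
    waves.push(ready);
    const next: string[] = [];
    for (const node of ready) {
      if (failing.has(node)) {
        // Failure short-circuits upward: mark every ancestor skipped;
        // skipped nodes are never requeued.
        let cursor = parentOf.get(node);
        while (cursor !== undefined && affected.has(cursor) && !skipped.has(cursor)) {
          skipped.add(cursor);
          cursor = parentOf.get(cursor);
        }
        continue;
      }
      const parent = parentOf.get(node);
      if (parent === undefined || !affected.has(parent)) continue;
      // Success decrements the parent's countdown; zero means runnable.
      const left = (remaining.get(parent) ?? 0) - 1;
      remaining.set(parent, left);
      if (left === 0) next.push(parent);
    }
    ready = next.filter((n) => !skipped.has(n));
  }
  return { waves, skipped: [...skipped] };
}
```

On the three-node graph used in the worked examples below, a dirty seed of {n1} yields waves [n1], [n3], [terminal] on success, and a single wave [n1] with n3 and terminal skipped when n1 fails.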

Worked Example A: Successful Partial Reevaluation

Consider this graph (child -> parent):

graph LR
    A["n1 ExtractTitle"] --> C["n3 Merge"]
    B["n2 ExtractPrice"] --> C
    C --> T["terminal"]

Assume a node-level run enqueues dirty seed {n1}. The planner derives affectedSet = {n1, n3, terminal} and initializes counters such that n1 is ready first. Wave 0 evaluates n1 and unblocks n3. Wave 1 evaluates n3 and unblocks terminal. Wave 2 evaluates terminal and projects result through the graph reference path; for Field graphs this writes computed value to the entry.

A key detail is that n2 is untouched here. Because n2 is outside the affected set for this batch, its previous stable value is reused transitively by parent logic.

Worked Example B: Failure and Skip Propagation

Use the same graph and dirty seed {n1}. If wave evaluation returns an error for n1, the executor marks n1 as error and propagates skip to its ancestors n3 and terminal. No further runnable waves are produced from this batch. The practical effect is short-circuiting upward in the tree, which avoids evaluating nodes whose required child path has already failed.

Nested Parameter Graph Propagation

Parameter graphs are connected to a parent graph through GraphRefType.Param metadata (ownerNodeId, paramKey). When such a child graph reaches terminal output, the gateway resolves parent graph id using getGraphParents(childGraphId) and enqueues the parent graph with dirty seed [ownerNodeId]. Parent reevaluation then runs through the same pipeline mechanics.

sequenceDiagram
    participant CG as Child Param Graph
    participant GW as Batch Gateway
    participant Q as Batch Queue
    participant PG as Parent Graph

    CG->>GW: terminal result
    GW->>GW: resolve parentGraphId + ownerNodeId
    GW->>Q: enqueueGraph(parentGraphId, [ownerNodeId])
    Q-->>PG: batch claimed and executed later
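The terminal projection branch (Field vs Param) can be sketched as a pure function that returns the side effect to apply. The types and `projectTerminal` are hypothetical; `parentGraphId` stands in for what getGraphParents(childGraphId) resolves:

```typescript
// Sketch of terminal-result projection per the reference kinds above.
type GraphRef =
  | { kind: "Field"; fieldId: string }
  | { kind: "Param"; ownerNodeId: string; paramKey: string };

type SideEffect =
  | { type: "setComputed"; fieldId: string; value: unknown }
  | { type: "enqueueGraph"; graphId: string; dirty: string[] };

function projectTerminal(
  ref: GraphRef,
  value: unknown,
  parentGraphId: string | null,
): SideEffect | null {
  if (ref.kind === "Field") {
    // Field graphs commit directly to the entry's computed value.
    return { type: "setComputed", fieldId: ref.fieldId, value };
  }
  if (parentGraphId === null) return null; // no resolvable parent: no-op
  // Param graphs re-enter the queue: parent reevaluates at the owning node.
  return { type: "enqueueGraph", graphId: parentGraphId, dirty: [ref.ownerNodeId] };
}
```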

Correctness Invariants

  1. Every node has at most one parent within a graph.
  2. Graph edges are acyclic.
  3. Global batch execution has at most one in-flight claim.
  4. Claim completion must match active lease and batch id.
  5. Staleness is checked both before and after awaited evaluation.
  6. Dependency counters never go below zero.
  7. Skipped nodes are never requeued.
  8. Initialized graphs always contain a terminal node.
  9. Graph edits bump graphVersion; topology edits bump both graphVersion and topologyVersion.
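Invariant 9 can be restated as a small pure function. The shapes and `bumpVersions` are assumptions used for illustration:

```typescript
// Sketch of the version-bump rule in invariant 9.
type Versions = { graphVersion: number; topologyVersion: number };

function bumpVersions(v: Versions, kind: "graph" | "topology"): Versions {
  // Every edit bumps graphVersion; topology edits (node/edge changes)
  // additionally bump topologyVersion.
  return {
    graphVersion: v.graphVersion + 1,
    topologyVersion: kind === "topology" ? v.topologyVersion + 1 : v.topologyVersion,
  };
}
```

This pairing is what makes the staleness guard work: any queued batch that predates a bump carries an older graphVersion and is discarded rather than applied.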

Failure Modes and Debugging Order

When behavior diverges from expectation, inspect queue state first (batchQueue, batchInFlight), then verify the claimed batch envelope (dirty, versions), then verify DAG orientation and node statuses in evaluations[graphId].nodes, and finally inspect stored node templates and graph reference type. For nested param flows, explicitly verify graphParents mapping and ownerNodeId wiring before touching pipeline logic.

Most regressions in this subsystem originate from violating one of three boundaries: command-layer tree constraints, gateway-only store access, or executor stale-check placement.

Contributor Guidance

If you change this runtime, avoid moving queue draining back into React lifecycle code; keep runner actor semantics intact. Keep store operations centralized in the gateway to preserve testability and separation of concerns. If evaluator response shape or semantics changes, update batch contracts and executor guards in the same change set; otherwise, silent partial failures are likely.