Runtime Architecture¶
This page summarizes the architecture as implemented in the current codebase. It is a maintainer-oriented map, not a replacement for the detailed semantic references.
Target Boundary¶
TopoExec builds around these CMake targets:
- `topoexec::core`: installed headers and compile-feature boundary.
- `topoexec::runtime`: dependency-light in-process graph runtime.
- `topoexec::yaml`: optional YAML loading over the runtime.
- `topoexecCLI`: optional command surface over YAML/runtime.
- `topoexec::adapter_sdk` and adapter/FFI/Python/plugin preview targets: optional boundaries that must not pull dependencies into `topoexec::runtime`.
The enforced architecture policy is documented in Architecture guardrails.
Main Runtime Flow¶
- A graph enters as `GraphSpec`, either constructed in C++ or loaded from YAML.
- Graph validation checks schema version, graph identity, lanes, components, edge endpoints, trigger policies, channel policy, payload/port compatibility, lifecycle dependencies, and `CompositeLoop` ownership.
- The graph compiler computes immediate-edge strongly connected components, requires every immediate cycle to match exactly one `CompositeLoop`, condenses components into `CompiledGraphRegion` entries, and sorts the region DAG by dependency and runtime priority.
- `RuntimeRunner` creates component instances from `ComponentRegistry`, configures and activates them, creates channel/state/config/metric/trace/health infrastructure, then hands the compiled plan to `EventRuntime`.
- `EventRuntime` executes regions for bounded ticks, duration, idle stop, or external stop. It gathers ready invocations through `TriggerPolicyEngine`, runs components directly or through thread-pool lanes, commits publications, records metrics/trace, and stops on fatal runtime errors.
- `RuntimeRunnerResult` returns validation, lifecycle counts, scheduler stop reason, runtime metrics, trace events, health events, errors, loop data, optional live observe events, and optional component state snapshots.
Core Objects¶
- `Component` is the user-code boundary. It describes ports/config, handles lifecycle, and executes one invocation at a time unless marked reentrant.
- `GraphContext` is the per-component runtime handle for publishing, reading inputs, submitting async tasks, using state/config stores, reporting loop convergence, and checking cancellation.
- `RuntimeChannelBus` owns bounded channel storage and reader semantics.
- `RuntimePublicationRouter` stages publication visibility across immediate, delay, state, async, and `CompositeLoop` boundaries.
- `TriggerPolicyEngine` translates channel/timer/request/task readiness into `Invocation` records.
- `EventRuntime` owns the scheduler loop over compiled regions.
- `RuntimeRunner` owns lifecycle, orchestration, result aggregation, and observer delivery.
Visibility And Feedback¶
Immediate edges are visible in the current epoch by compiled region order. Delay, state, and async edges cross epoch or deferred boundaries. Immediate feedback cycles are rejected unless the exact cycle is owned by a CompositeLoop; loop policies control iteration, convergence, budget, and partial-success handling.
Observability¶
Metrics, trace events, health events, diagnostics, structured runtime errors, and live observe events are runtime outputs, not control inputs. Descriptor-backed metrics and trace schema fields are documented in Metrics and Trace events. Graph validation diagnostics are documented in Diagnostics.
Live observe is a local-first validation surface over the same runtime
semantics. Runtime activation is explicit through RuntimeRunnerOptions or the
topoexec graph observe command. When disabled, no live events are collected.
When enabled, runtime hot paths emit fixed-size bounded records only; JSON
serialization, assertion evaluation, file artifacts, SSE serving, and dashboard
rendering stay in CLI/tooling layers. Observer overflow is reported as lossy
drop summaries and must not alter scheduler order, channel visibility,
CompositeLoop behavior, runtime success/failure, metrics, trace, or health
results. The event schema is documented in
Live observe events, local
tooling in Live runtime validation,
and overhead policy in
Live observe performance.
Deferred Or Preview Scope¶
Current preview surfaces for adapters, FFI, Python automation, plugin loading, editor/schema UX, and live dashboard UI are documented but are not production ecosystem commitments. Live observe does not add pause/resume/step/fault injection, remote multi-user access, WebSocket control, payload-body capture, or production OpenTelemetry/Prometheus export. Schema v2 remains a planning boundary until a reviewed v2 loader and migration path exist.