# Concurrency
TopoExec concurrency keeps graph semantics first. Worker overlap is allowed only inside explicit lane and component policy bounds, and it never bypasses compiled region barriers or publication commit rules.
## Mental Model
```text
GraphContext::publish()
  -> runtime publication router
  -> edge-kind staging
  -> commit boundary
  -> downstream trigger readiness
```
No lane is allowed to turn `publish()` into direct, same-call-stack downstream execution.
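The staging/commit boundary can be illustrated with a minimal sketch in plain Python. This is not the TopoExec API; `ToyContext`, `commit`, and `run_invocation` are invented names for illustration. The point is that `publish()` only stages, and downstream readiness appears only at the commit boundary after the invocation returns:

```python
from collections import deque

class ToyContext:
    """Toy publication router: publish() stages; nothing runs downstream in-call."""
    def __init__(self):
        self.staged = []      # publications staged by the current invocation
        self.ready = deque()  # downstream triggers, visible only after commit

    def publish(self, port, value):
        # No recursive downstream execution here -- just staging.
        self.staged.append((port, value))

    def commit(self):
        # Commit boundary: staged publications become downstream readiness.
        while self.staged:
            self.ready.append(self.staged.pop(0))

def run_invocation(ctx, component):
    component(ctx)  # the component may call ctx.publish() freely
    ctx.commit()    # downstream triggers appear only here

ctx = ToyContext()
run_invocation(ctx, lambda c: c.publish("worker.done", 42))
print(list(ctx.ready))  # [('worker.done', 42)]
```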
## Thread Pool Lane
Use a `thread_pool` lane when a component can safely process multiple ready invocations at the same time:
```yaml
lanes:
  pool:
    type: thread_pool
    max_threads: 4
components:
  - id: worker
    execution:
      lane: pool
      reentrant: true
```
Runtime behavior:
- `max_threads` is the run-scoped persistent worker count and active worker width; `0` means one worker.
- `queue_capacity` optionally bounds pending ready invocations behind active workers.
- `overflow` handles over-capacity ready invocations before work is submitted to workers: `drop_oldest`/`overwrite` keep newest work, `drop_newest`/`reject`/`reject_new`/`block` keep oldest work in the non-blocking runtime, and `fail_fast` stops the run.
- `execution.reentrant: false` serializes invocations for that component. `execution.reentrant: true` permits overlap up to the lane worker bound.
- Ready invocations admitted to the lane are queued by runtime priority (`high`, `normal`, `low`, `background`), then enqueue order, then component id.
- Immediate publications are committed at the worker/component barrier, not recursively from `GraphContext::publish()`.
- Stop/cancel requests prevent new scheduler iterations/submissions and wait for already-admitted invocations to drain cooperatively before workers stop and join. Components can observe requests with `Invocation::cancel_requested()` or `GraphContext::cancel_requested()`.
- `thread_name` is best-effort for persistent worker threads on supported platforms and remains advisory as a portable guarantee.
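The queue ordering above can be sketched with a plain Python priority queue. This is a toy model, not TopoExec code: the numeric priority mapping and the `admit` helper are assumptions, but the key tuple shows the documented ordering of runtime priority, then enqueue order, then component id:

```python
import heapq
import itertools

# Assumed numeric mapping for the four runtime priorities.
PRIORITY = {"high": 0, "normal": 1, "low": 2, "background": 3}

counter = itertools.count()  # monotonically increasing enqueue-order tiebreaker

def admit(queue, priority, component_id):
    # Order: runtime priority, then enqueue order, then component id.
    heapq.heappush(queue, (PRIORITY[priority], next(counter), component_id))

queue = []
admit(queue, "low", "b")
admit(queue, "high", "a")
admit(queue, "high", "c")

order = [heapq.heappop(queue)[2] for _ in range(len(queue))]
print(order)  # ['a', 'c', 'b'] -- both high-priority items run before 'b', in enqueue order
```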
Advisory lane fields such as lane priority, CPU affinity, RT policy, thread-name portability guarantees, and isolation intent are parsed but not fully enforced by the current runtime. Unsupported policy should be documented as advisory rather than silently claimed.
Validation emits advisory diagnostics when lane-only scheduler fields are set, and plan JSON reports the lane capability summary. Treat runtime priority and OS priority as separate concepts: `execution.priority` is implemented as runtime ordering and accepts `background`, `low`, `normal`, or `high`, while `nice_priority`/`rt_policy`/`cpu_affinity` are OS hints that TopoExec does not apply today.
## Fixed Rate Lane
`fixed_rate` is accepted by schema v1 and defaults to deterministic runner ticks. It reports simulated overrun and positive jitter when an iteration exceeds `hz`, `period_ms`, or `tick_budget_ms`. When `wall_clock_enabled: true`, the runtime inserts cooperative sleeps before later ticks according to `period_ms` or `hz` and records skipped/max-lateness metrics according to `overrun_policy`. This is an opt-in wall-clock cadence smoke test, not hard real-time scheduling or independent per-lane threading.
## Async Admission
Async edges are deferred-completion edges. They do not execute arbitrary tasks by themselves; they admit a completion event for delivery at a later epoch.
`policy.max_inflight` limits outstanding deferred completions before channel capacity is considered:
```yaml
edges:
  - id: worker_join
    kind: async
    from: worker.done
    to: join.ready
    policy:
      mode: queue
      capacity: 8
      max_inflight: 2
      overflow: drop_oldest
      copy_policy: shared_view
```
Overflow behavior:
- `drop_oldest` and `overwrite` drop the oldest pending async completion and accept the new one.
- `drop_newest` rejects the new completion and records it as dropped.
- `reject`, `fail_fast`, and `block` reject the new completion in the current non-blocking runtime.
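These overflow rules amount to a bounded queue with a drop direction. A minimal sketch in plain Python (the `admit_completion` helper and its return strings are invented for illustration, not TopoExec names):

```python
from collections import deque

def admit_completion(queue, capacity, overflow, item):
    """Toy bounded async-channel admission following the documented policies."""
    if len(queue) < capacity:
        queue.append(item)
        return "accepted"
    if overflow in ("drop_oldest", "overwrite"):
        queue.popleft()  # oldest pending completion is dropped
        queue.append(item)
        return "accepted_dropped_oldest"
    # drop_newest / reject / fail_fast / block: the new completion is refused
    return "rejected_new"

q = deque()
print(admit_completion(q, 1, "drop_oldest", "a"))  # accepted
print(admit_completion(q, 1, "drop_oldest", "b"))  # accepted_dropped_oldest
print(admit_completion(q, 1, "drop_newest", "c"))  # rejected_new
print(list(q))  # ['b']
```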
Async admission metrics use the `runtime.async.*` namespace; channel metrics report only completions that were actually committed to the runtime channel.
`runtime.async.in_flight_count` is the number of deferred completions still pending at the end of the run; it can be non-zero if a bounded run stops before the next epoch commits accepted async completions. It must not exceed `policy.max_inflight` for the edge, and discarding `CompositeLoop`-staged async outputs releases the corresponding in-flight count.
## What Is Still Deferred
- Full service/future response API and default runtime-owned task pools; `ThreadedTaskExecutor` exists only as an opt-in bounded preview helper.
- OS priority, affinity, and hard real-time policy enforcement.
- Independent fixed-rate lane threads and OS jitter control.
- Advanced starvation aging or OS-backed priority enforcement beyond runtime priority ordering.
- Hard timeout preemption for long-running component code.
- Blocking overflow behavior on the default non-blocking runtime path.