Architecture

Constitution, two-axis model, reactive foundation.

Constitution

Non-negotiable design rules. Every decision traces back to one of these.

  1. Everything is a component. No privileged subsystems.
  2. Components give and take. No globals, no singletons, no ambient state.
  3. Reactivity is the foundation, not a feature. Every injected service is a Signal. Reading it inside @effect or @computed subscribes you. Writes propagate through the graph automatically — no callbacks, no @bind/@unbind, no listener registries. This is not optional plumbing layered on top; it is how the kernel routes change. (Reactivity by Example)
  4. The kernel has zero business logic. It manages reactivity, lifecycle, registry, bus, and traits — nothing domain-specific.
  5. Transport is an adapter, never a core concern. Components declare @api, never import FastAPI.
  6. Distribution is transparent. Same self.rt.invoke() whether local or remote.
  7. Apps are deployment units, components are composition units.
  8. Lifecycle is explicit and managed. Dependency-ordered activation, reverse shutdown.
  9. Every API has a client counterpart. One declaration, every surface.
  10. The kernel is small. ~2600 lines across 9 files. Readable in one sitting.
  11. Scoping is structural, not opt-in. Credentials and storage are per-component by default.
  12. Concurrency is correct by default. Per-flow tracking via ContextVar, shared-state mutation via RLock. Threads and asyncio tasks share Signals safely without consumer code touching a lock. (Threading Model)

Two-Axis Model

graph LR
    subgraph "Axis 1: The Mechanism (irreplaceable)"
        R[reactive.py<br/>Signal/Computed/Effect]
        C[component.py<br/>12 decorators]
        L[lifecycle_manager.py<br/>State machine + toposort]
        REG[registry.py<br/>Service provide/require]
        B[bus.py<br/>invoke + publish]
        RT[runtime.py<br/>Reactive self.rt]
    end

    subgraph "Axis 2: The Vocabulary (replaceable)"
        P[Providers<br/>Config, Logger, Auth...]
        A[Adapters<br/>REST, MCP, CLI]
        U[Your Apps<br/>Search, Splunk, System...]
    end

    R --> RT
    C --> L
    REG --> RT
    B --> RT
    P --> REG
    A --> B
    U --> REG

Axis 1 is the kernel — the irreplaceable mechanism. ~2600 lines. Cannot be swapped.

Axis 2 is everything written using the kernel. Providers, adapters, your apps. All components. All replaceable.

Kernel Primitives

| Primitive | File | Lines | Purpose |
|---|---|---|---|
| Reactivity Engine | reactive.py | 327 | Signal, Computed, Effect, batch. The foundation. |
| Component Model | component.py | 673 | 12 decorators. ComponentMeta. _finalize_meta. |
| Lifecycle Manager | lifecycle_manager.py | 270 | State machine. Dependency-ordered activation (topological sort). Effect disposal. |
| Service Registry | registry.py | 199 | Provide/require. Ranking. Reference counting (tracks how many consumers hold each service). Factories. |
| Bus | bus.py | 155 | invoke (req/res) + publish (events). Pluggable transport. |
| Runtime | runtime.py | 151 | Reactive self.rt. Signal-backed getattr. |
| Traits | traits.py | 173 | L0–L3 trait computation from metadata. |
| Contracts | contracts.py | 74 | Protocol interfaces (IConfig, ILogger, etc.). |
| Kernel | __init__.py | 639 | Orchestration. Boot, shutdown, hot_add, status. |
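The first row of the table is easiest to grasp from a miniature. The sketch below is illustrative only: a dependency-tracking core reusing the same three names (Signal, Computed, Effect); the real reactive.py adds batching, async effects, laziness flags, and disposal on top of this shape.

```python
_active = None  # the consumer currently executing, used for dependency capture

class Signal:
    def __init__(self, value):
        self._value = value
        self._subs = set()

    def get(self):
        if _active is not None:
            self._subs.add(_active)    # reading inside a consumer subscribes it
        return self._value

    def set(self, value):
        self._value = value
        for sub in list(self._subs):
            sub.invalidate()           # push the change through the graph

class Effect:
    def __init__(self, fn):
        self._fn = fn
        self.run()                     # eager: runs immediately on creation

    def run(self):
        global _active
        prev, _active = _active, self
        try:
            self._fn()                 # every Signal.get() inside is tracked
        finally:
            _active = prev

    def invalidate(self):
        self.run()                     # a real engine would batch/schedule here

class Computed:
    def __init__(self, fn):
        self._fn = fn
        self._dirty, self._cache = True, None
        self._subs = set()

    def get(self):
        global _active
        if self._dirty:
            prev, _active = _active, self
            try:
                self._cache = self._fn()
            finally:
                _active = prev
            self._dirty = False
        if _active is not None:
            self._subs.add(_active)
        return self._cache

    def invalidate(self):
        self._dirty = True             # lazy: recompute only on next read
        for sub in list(self._subs):
            sub.invalidate()

# Wiring: an effect reading a computed reading a signal.
log = []
count = Signal(1)
double = Computed(lambda: count.get() * 2)
Effect(lambda: log.append(double.get()))   # runs once immediately
count.set(5)                               # effect re-runs with the new value
```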

Decoration → Activation

The 12 decorators don’t do much at decoration time. They attach a small marker dataclass to the function and return the function unchanged. The real wiring happens later, in three stages.

Stage 1 — the decorator just tags the function

When Python evaluates the class body, the decorator runs:

# signalpy/kernel/component.py
def effect(fn):
    fn.__effect__ = EffectDef(fn=fn, is_async=inspect.iscoroutinefunction(fn))
    return fn

That’s the whole decorator. It does not wrap the function. It does not create an Effect. It attaches one attribute (__effect__) holding a small dataclass — and returns the original function unchanged. After decoration:

class SearchService:
    @effect
    def on_url_change(self): ...

# At this point:
SearchService.on_url_change                # still the same Python function
SearchService.on_url_change.__effect__     # EffectDef(fn=..., is_async=False)

If you called instance.on_url_change() directly right now, it would just run as a plain method. Nothing reactive yet.
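That claim is easy to verify with a self-contained version of the snippets above. EffectDef is reduced here to the two fields the snippet shows; everything else is the stage-1 code as written:

```python
import inspect
from dataclasses import dataclass
from typing import Callable

@dataclass
class EffectDef:           # reduced stand-in: only the two fields shown above
    fn: Callable
    is_async: bool

def effect(fn):
    fn.__effect__ = EffectDef(fn=fn, is_async=inspect.iscoroutinefunction(fn))
    return fn

class SearchService:
    @effect
    def on_url_change(self):
        return "plain method call"

# The marker is present, and the method is still a directly callable function.
assert SearchService.on_url_change.__effect__.is_async is False
assert SearchService().on_url_change() == "plain method call"
```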

Stage 2 — the kernel collects the tags at discovery time

There is no metaclass. ComponentMeta is a plain @dataclass that holds metadata; it has no __init_subclass__, no type.__init__ hook. Tag collection is triggered explicitly when the kernel registers the class as a factory:

# signalpy/kernel/lifecycle_manager.py
def register_factory(self, cls: type) -> ComponentMeta:
    meta = _finalize_meta(cls)             # ← free function, scans cls
    self._factories[meta.factory_name] = cls
    return meta

_finalize_meta is a free function in component.py, not a method. It walks the class’s MRO and dir(cls) looking for the markers the decorators left:

# signalpy/kernel/component.py — _finalize_meta(cls)
for attr_name in dir(cls):
    obj = getattr(cls, attr_name, None)
    if obj is None:
        continue

    if hasattr(obj, "__runnable_defs__"):
        for rd in obj.__runnable_defs__:
            meta.runnables.append(rd)

    cd = getattr(obj, "__computed__", None)
    if isinstance(cd, ComputedDef):
        meta.computed_defs.append(cd)

    ed = getattr(obj, "__effect__", None)
    if isinstance(ed, EffectDef):
        meta.effect_defs.append(ed)
    # …same pass also collects __subscribe_defs__, __lifecycle__, etc.

The result is meta.effect_defs, meta.computed_defs, meta.runnables, … — lists of plans, one per decorated method. No Effect or Computed runtime object exists yet.

Why a free function instead of a metaclass? Metaclasses run during class creation — at import time, before the kernel exists, before we know whether this class will ever be activated. Discovery is the right hook: the kernel decides when to scan, and a class can be imported, introspected, even unit-tested without ever paying the scan cost.
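Discovery can likewise be sketched end to end. finalize_meta below is a stand-in for the real _finalize_meta (it harvests only __effect__ markers, and this ComponentMeta carries only effect_defs), but it demonstrates the two properties the text relies on: the scan is explicit, and dir(cls) folds in inherited markers.

```python
import inspect
from dataclasses import dataclass, field

@dataclass
class EffectDef:
    fn: object
    is_async: bool

def effect(fn):
    fn.__effect__ = EffectDef(fn=fn, is_async=inspect.iscoroutinefunction(fn))
    return fn

@dataclass
class ComponentMeta:       # stand-in: the real one holds many more plan lists
    effect_defs: list = field(default_factory=list)

def finalize_meta(cls):
    # dir(cls) already walks the MRO, so inherited markers are collected too.
    meta = ComponentMeta()
    for attr_name in dir(cls):
        obj = getattr(cls, attr_name, None)
        ed = getattr(obj, "__effect__", None)
        if isinstance(ed, EffectDef):
            meta.effect_defs.append(ed)
    return meta

class Base:
    @effect
    def inherited(self): ...

class Child(Base):
    @effect
    def own(self): ...

meta = finalize_meta(Child)  # explicit: nothing ran at import time
assert {ed.fn.__name__ for ed in meta.effect_defs} == {"inherited", "own"}
```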

Stage 3 — activation wraps and runs

When the kernel activates a component instance, it loops over the plans and creates one runtime object per plan. For @effect, it builds a ReactiveEffect:

# signalpy/kernel/__init__.py — inside _activate
for ed in ci.meta.effect_defs:
    fn = ed.fn

    if ed.is_async:
        async def _async_wrapper(f=fn, inst=instance):
            await f(inst)
        re = ReactiveEffect(_async_wrapper, lazy=False)
    else:
        def _sync_wrapper(f=fn, inst=instance):
            f(inst)
        re = ReactiveEffect(_sync_wrapper, lazy=False)

    ci._disposables.append(re)

Two things to notice:

  • The wrapper closes over the bound instance. inst=instance captures the live component, so when the engine calls the wrapper later, it reaches back into your object and runs your method on it.
  • lazy=False triggers an immediate run. Effect.__init__ calls self.run() right away — which sets _active_consumer = self, executes the body, and tracks every Signal read along the way.
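The f=fn, inst=instance defaults in the wrappers are not stylistic. Closures created in a loop capture the loop variable itself, not its value at that iteration; default arguments snapshot the value. A standalone demonstration:

```python
fns = [lambda: i for i in range(3)]      # each lambda closes over the same i
assert [f() for f in fns] == [2, 2, 2]   # all see i's final value

fns = [lambda i=i: i for i in range(3)]  # default argument snapshots each value
assert [f() for f in fns] == [0, 1, 2]
```

Without the defaults, every wrapper created in the _activate loop would close over the last plan and the last instance.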

So the journey from @effect def foo(self): ... to “an actively tracking reactive consumer” is:

| Stage | Where | What changes |
|---|---|---|
| Decoration | class body, import time | fn.__effect__ = EffectDef(...) — function unchanged |
| Discovery | LifecycleManager.register_factory(cls) calls _finalize_meta(cls) | meta.effect_defs populated — plans collected |
| Activation | Kernel._activate(instance) | ReactiveEffect(wrapper, lazy=False) created and run — graph entry exists |

The same three-stage pattern applies to every decorator:

  • @computed fn → fn.__computed__ = ComputedDef(...) → meta.computed_defs → ReactiveComputed(wrapper) at activation (lazy: only runs on first read).
  • @runnable("name") fn → fn.__runnable_defs__ → meta.runnables → bus handler registration at activation.
  • @subscribe("topic") fn (a not-yet-introduced decorator that registers the method as an event handler — see Decorators reference) → fn.__subscribe_defs__ → meta.subscriptions → bus subscription at activation.

Once you see this pattern you stop wondering “what does @runnable do underneath” — they’re all variations on attach a marker, collect it later, do something useful with it at activation.

Why this design — and not the more common “wrap the function” approach

Most Python frameworks reach for one of these alternatives instead:

# Alternative A — wrap the function in a closure
def effect(fn):
    @functools.wraps(fn)
    def wrapper(self, *args, **kw):
        # …reactive setup runs here every call
        return fn(self, *args, **kw)
    wrapper._is_effect = True
    return wrapper

# Alternative B — replace the method with a descriptor
class effect:
    def __init__(self, fn): self.fn = fn
    def __get__(self, instance, owner):
        return BoundReactiveEffect(self.fn, instance)

# Alternative C — register against a global app at decoration time
def effect(fn):
    _GLOBAL_APP.register_effect(fn)   # à la FastAPI / Flask routes
    return fn

We chose the marker pattern over all three. Concrete reasons:

1. The kernel doesn’t exist yet at decoration time. Class bodies execute during module import. The runtime, the registry, _active_consumer, the instance’s self.rt — none of those exist when @effect runs. The decorator literally has no Signal graph to subscribe to. Any “real work” at decoration time would need either a global kernel singleton (constitution rule #2 forbids globals) or deferred closures (which is a wrapper, see point 3).

2. Decoration declares intent; activation chooses timing. Three reactive decorators, one mechanism: @effect is eager (run immediately at activation), @computed is lazy (run on first read), @runnable is on-demand (run per bus invocation). All three use the same marker pattern — they differ only in what stage 3 does with the plan. A wrapper-based approach bakes the timing into the decorator and then needs three different wrappers; our way separates declaration from execution and reuses the pipeline.

3. Methods stay plain callable Python. instance.on_url_change() is a regular method call. You can unit-test it without booting a kernel. With a wrapper, calling the method outside an Effect context either no-ops, errors, or does something subtly different from what runs inside the engine — and now your unit tests have to construct an _active_consumer to test the body. Markers leave the function behaviorally untouched: bodies are tested as bodies, the reactive layer is tested separately.

4. Inheritance, super(), and method resolution work without surprises. A subclass can override on_url_change and super().on_url_change() calls the parent’s body. Descriptor-based decorators (Alternative B) can break MRO in confusing ways — the descriptor’s __get__ returns a new object every access, defeating identity comparisons and confusing introspection.

5. Re-decoration is idempotent. Reload a module, hot-swap a class, pickle and unpickle, monkey-patch in a test — applying @effect again just overwrites fn.__effect__ with an equivalent EffectDef. With wrappers you get wrappers-of-wrappers and stale closures over old instances. Marker = state; wrapper = state and behavior glued together.

6. Introspection tooling sees the original function. inspect.signature(fn), Sphinx autodoc, IDE go-to-definition, type-checkers reading Callable[...] — they all see the function you wrote. functools.wraps patches this partially for closure wrappers but never fully (e.g. inspect.getsource can still surprise you). Descriptor approaches require custom protocol support in every tool.

7. No coupling to a global kernel at import time. A component class can be imported, type-checked, and unit-tested without ever instantiating a kernel. With Alternative C (global registration at decoration), import myapp.search has the side effect of mutating global state — which makes circular imports, test isolation, and multi-kernel scenarios all painful.

The cost is honest: the kernel does an explicit scan at discovery time, and the activation step is where the real work happens. We think that’s a good place for the work to happen — same place lifecycle, DI, ref-counting, and trait inference all run. One place to look when something’s surprising at boot, instead of N places hidden inside N decorators’ closures.
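Point 5 is worth seeing concretely. The two toy decorators below are invented for this comparison, not kernel code; they show the difference in miniature: re-applying a marker overwrites one attribute, re-applying a wrapper stacks closures.

```python
import functools

def marker(fn):
    fn.__mark__ = "v2"        # a second application just replaces the value
    return fn

def wrapper(fn):
    @functools.wraps(fn)
    def inner(*args, **kwargs):
        return fn(*args, **kwargs)
    return inner

def body():
    return 1

m = marker(marker(body))
assert m is body                    # same function, one attribute, idempotent

w = wrapper(wrapper(body))
assert w is not body                # two layers of closure now wrap the body
assert w.__wrapped__ is not body    # the outer wrapper wraps the inner wrapper
```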

| Concern | Marker (ours) | Wrapper closure | Descriptor | Global register |
|---|---|---|---|---|
| Kernel-free import | yes | yes | yes | no |
| inspect.signature correct | yes | partial | varies | yes |
| super().method() works | yes | yes | brittle | yes |
| Idempotent re-decoration | yes | no | no | no |
| Body unit-testable as plain method | yes | no | no | yes |
| Same decorator → eager / lazy / on-demand | yes | needs 3 wrappers | needs 3 descriptors | yes |
| Where to look when boot misbehaves | one (stage 3) | N closures | N descriptors | one global |

The marker pattern shows up in pytest fixtures, Hypothesis strategies, and Pydantic validators for the same reasons. It’s the boring choice, which is why it’s the right one here.

For a worked example of an effect’s runtime lifecycle once it’s been wired up — first run, dependency tracking, mutation, re-run — see Reactivity by Example.

The Reactive Foundation

v2’s key insight: reactivity IS the kernel, not a feature bolted on.

Every injected service is a Signal. Reading self.rt.config inside an @effect or @computed is a reactive read — the kernel tracks the dependency. When the service changes, the consumer is notified automatically.

sequenceDiagram
    participant CA as ConfigAdmin
    participant R as Registry
    participant RT as Runtime (Signal)
    participant E as @effect

    CA->>R: update("printer", {width: 80})
    R->>RT: Signal.set(new_service)
    RT->>E: notify()
    E->>E: re-run (reads new config)

This replaces manual @bind/@unbind callbacks, manual config polling, and manual state synchronization.
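The sequence above fits in a few lines of toy code. Signal and Effect here are minimal stand-ins, not the kernel's classes; the point is only the shape of the flow: registry update → Signal.set → effect re-run.

```python
_active = None

class Signal:                          # toy stand-in for the runtime's slot
    def __init__(self, value):
        self._value, self._subs = value, set()

    def get(self):
        if _active is not None:
            self._subs.add(_active)    # reading inside an effect subscribes it
        return self._value

    def set(self, value):
        self._value = value
        for sub in list(self._subs):
            sub.run()                  # notify: each subscriber re-runs

class Effect:                          # toy stand-in for ReactiveEffect
    def __init__(self, fn):
        self._fn = fn
        self.run()

    def run(self):
        global _active
        prev, _active = _active, self
        try:
            self._fn()
        finally:
            _active = prev

config = Signal({"width": 40})         # the injected service, as a Signal
applied = []

Effect(lambda: applied.append(config.get()["width"]))  # first run: reads 40
config.set({"width": 80})              # ConfigAdmin update → Signal.set
assert applied == [40, 80]             # effect re-ran with the new config
```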

Traits

The kernel auto-computes traits from what a component declares:

| Level | Traits | How acquired |
|---|---|---|
| L0 Kernel | identifiable, lifecycle, dependable, registrable, inspectable, factoryable | Every component |
| L1 Platform | observable, configurable, secured, storable, communicable | From @requires |
| L2 App | runnable, subscribable, kinded, skillful, routable, reactive, adaptable | From decorators |
| L3 Instance | targeted, scoped, versioned | From properties/metadata |

Queryable at runtime via kernel.status().
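A hedged sketch of how such inference could look. The trait names come from the table above; the mapping rules (which @requires key yields which L1 trait, which metadata list yields which L2 trait) are invented for illustration and are not the kernel's actual tables.

```python
# L0 traits: every component gets these unconditionally.
L0 = {"identifiable", "lifecycle", "dependable", "registrable",
      "inspectable", "factoryable"}

# Hypothetical mappings, invented for this sketch.
REQUIRES_TRAITS = {"config": "configurable", "logger": "observable",
                   "auth": "secured", "storage": "storable"}
DECORATOR_TRAITS = {"effect_defs": "reactive", "runnables": "runnable",
                    "subscriptions": "subscribable"}

def compute_traits(requires, meta):
    """Derive a trait set from declared requirements and collected plans."""
    traits = set(L0)                                      # L0: unconditional
    traits |= {REQUIRES_TRAITS[r] for r in requires       # L1: from @requires
               if r in REQUIRES_TRAITS}
    traits |= {t for key, t in DECORATOR_TRAITS.items()   # L2: from decorators
               if meta.get(key)}
    return traits

traits = compute_traits(["config"], {"effect_defs": ["on_url_change"]})
assert "configurable" in traits and "reactive" in traits
assert "runnable" not in traits
```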