Deterministic Execution Engine

Exploring Completeness & Purity of the Engine

Interactive view of a minimal resolution–dispatch kernel that, given access to all data and recorded templates, can perform arbitrary work without “understanding” anything. All semantics live in the graph and templates; the engine is a blind but complete performer.

View → Template → Engine → Dispatch
Kernel fixed, behaviour in data
No inference, no learning
Turing-complete via host runtime

Kernel & Operational Contract

What the engine is, strictly: a resolution–dispatch operator over a graph-exposed view.

Definition
Informal definition

The engine implements a single pattern: PK universe from view → attributes → template substitution → dispatch. It never embeds domain rules; it just executes whatever the graph and templates specify.

Shape of a call

E(script_name, pk_column, schema, mode)
  => for each PK in <schema>.<script_name>_V:
    row → tokenised attributes → resolved CLOB → execute
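As a concrete sketch, the whole kernel can be written as one short PL/SQL procedure. This is a hypothetical rendering only: MBI_SCRIPT and the <schema>.<script_name>_V convention come from the description above, while RUN_ENGINE, the SCRIPT_TEXT column, and the <script_name>_ATTR_V attribute view are invented names for illustration.

-- Hypothetical kernel sketch: enumerate PKs from the per-behaviour view,
-- project attributes, substitute them into the stored template CLOB,
-- and hand the result to the host executor. No domain logic anywhere.
CREATE OR REPLACE PROCEDURE run_engine (
  p_script_name IN VARCHAR2,
  p_pk_column   IN VARCHAR2,
  p_schema      IN VARCHAR2,
  p_mode        IN VARCHAR2   -- carried as call context; the kernel never interprets it
) AS
  v_template CLOB;
  v_resolved CLOB;
  v_pk       VARCHAR2(4000);
  v_name     VARCHAR2(128);
  v_value    VARCHAR2(4000);
  c_pks      SYS_REFCURSOR;
  c_attrs    SYS_REFCURSOR;
BEGIN
  -- Template for this behaviour type, stored as a CLOB.
  SELECT script_text INTO v_template
    FROM mbi_script
   WHERE script_name = p_script_name;

  -- PK universe exposed by <schema>.<script_name>_V.
  OPEN c_pks FOR
    'SELECT ' || p_pk_column || ' FROM ' || p_schema || '.' || p_script_name || '_V';
  LOOP
    FETCH c_pks INTO v_pk;
    EXIT WHEN c_pks%NOTFOUND;

    -- Resolution: pure name -> value replacement, no interpretation.
    v_resolved := v_template;
    OPEN c_attrs FOR
      'SELECT attr_name, attr_value FROM ' || p_schema || '.'
        || p_script_name || '_ATTR_V WHERE ' || p_pk_column || ' = :pk'
      USING v_pk;
    LOOP
      FETCH c_attrs INTO v_name, v_value;
      EXIT WHEN c_attrs%NOTFOUND;
      v_resolved := REPLACE(v_resolved, '<' || v_name || '>', v_value);
    END LOOP;
    CLOSE c_attrs;

    -- Dispatch: delegate to the host executor, nothing more.
    EXECUTE IMMEDIATE v_resolved;
  END LOOP;
  CLOSE c_pks;
END run_engine;

Every identifier that carries meaning (view name, token names, the generated statement) arrives as data; the procedure itself never changes.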

Core invariants
Kernel properties that never change

– The engine always operates row-wise over a PK universe exposed by a view.
– Attributes are resolved purely by name → value replacement into a stored template CLOB (see the example after this list).
– Dispatch is delegated to a host executor (e.g. EXECUTE IMMEDIATE) with no extra logic.
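To make the second and third invariants concrete, here is a hypothetical MBI_SCRIPT row and the statement the engine would dispatch for one PK. The column names (SCRIPT_NAME, SCRIPT_TEXT) and every token are illustrative, not the real schema.

-- Illustrative template row; tokens use the <NAME> convention.
INSERT INTO mbi_script (script_name, script_text) VALUES (
  'HUB_LOAD',
  'INSERT INTO <TARGET_SCHEMA>.<HUB_NAME> (hub_key, load_dts, record_source)
   SELECT <BK_COLUMN>, SYSDATE, ''<SOURCE_SYSTEM>'' FROM <STAGE_TABLE>'
);

-- For a PK whose attributes resolve to HUB_NAME = HUB_CUSTOMER, BK_COLUMN = CUSTOMER_BK,
-- SOURCE_SYSTEM = CRM, STAGE_TABLE = STG_CUSTOMER, TARGET_SCHEMA = DWH,
-- the dispatched text is:
--   INSERT INTO DWH.HUB_CUSTOMER (hub_key, load_dts, record_source)
--   SELECT CUSTOMER_BK, SYSDATE, 'CRM' FROM STG_CUSTOMER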

E : (S, V, K, C) → Effects

  S = set of script templates (MBI_SCRIPT)
  V = set of views (<SCRIPT_NAME>_V etc.)
  K = pk selector + schema (P_PK, P_SCHEMA)
  C = call context (mode, environment)

For each pk ∈ V[schema.<SCRIPT_NAME>_V]:
  F(pk) := projected attributes for pk
  T     := template for SCRIPT_NAME
  R(pk) := REPLACE(T, '<NAME>', VALUE) for all (NAME, VALUE) ∈ F(pk)
  dispatch(R(pk), C)

Purity: the kernel has no “if customer then hub, else sat” logic. There is no notion of hub, sat, grant, metric, or API in the code. Those concepts live entirely in the metadata views and templates. The engine is a stable, minimal kernel.

Preconditions & Given Environment

What must exist around the engine for completeness to hold.

Assumptions
Three external givens

The engine assumes, rather than implements:

  • Connectivity: host runtime can reach all required schemas and external endpoints.
  • Representation: domain state is encoded as rows in tables/views (graph, props, events).
  • Semantics: domain meaning is in graph schema, view logic, templates, not in the kernel.
Engine boundary

Inside the boundary: PK enumeration, feature extraction, template resolution, dispatch call.
Outside the boundary: what PKs mean, what the script does, why we are doing it.
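A hypothetical illustration of what "available and named" can look like: domain state as plain rows, plus one per-behaviour view exposing a PK universe and attributes. All object names here are invented.

-- Domain state encoded as rows (illustrative graph-style tables).
CREATE TABLE node (node_id NUMBER PRIMARY KEY, node_type VARCHAR2(30), name VARCHAR2(128));
CREATE TABLE edge (src_id NUMBER, dst_id NUMBER, edge_type VARCHAR2(30));
CREATE TABLE prop (node_id NUMBER, prop_name VARCHAR2(128), prop_value VARCHAR2(4000));

-- One per-behaviour slice: the PK universe + attributes for a HUB_LOAD behaviour.
CREATE OR REPLACE VIEW hub_load_v AS
SELECT n.node_id AS pk,
       MAX(CASE WHEN p.prop_name = 'HUB_NAME'    THEN p.prop_value END) AS hub_name,
       MAX(CASE WHEN p.prop_name = 'STAGE_TABLE' THEN p.prop_value END) AS stage_table
  FROM node n
  JOIN prop p ON p.node_id = n.node_id
 WHERE n.node_type = 'HUB'
 GROUP BY n.node_id;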

Compact statement
Preconditions as a contract

If:

1. All relevant entities, relationships, and events are visible as SQL-queryable structures.
2. For each behaviour type, there exists a view <SCRIPT_NAME>_V exposing a PK universe + attributes.
3. For each behaviour type, there exists a template in MBI_SCRIPT.
4. The host executor can run the generated artefact (SQL, PL/SQL, config, API call).

Then: a single engine call is sufficient to realise that behaviour over the current graph state.

The engine doesn’t “bootstrap” semantics. It expects the world to be available and named. Once that is true, it will execute everything the world and templates encode, deterministically.
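Under those four preconditions, realising a behaviour over the whole current graph state is one call. The signature below is the hypothetical one from the kernel sketch above.

BEGIN
  -- Enumerates every PK in DWH.HUB_LOAD_V, resolves the HUB_LOAD template
  -- per row, and dispatches each resolved artefact.
  run_engine(p_script_name => 'HUB_LOAD',
             p_pk_column   => 'PK',
             p_schema      => 'DWH',
             p_mode        => 'FULL');
END;
/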

Functional Completeness

Why, under those preconditions, the engine is enough to express any required behaviour.

Reasoning
Computation aspect

The host runtime (PL/SQL) is Turing-complete. Templates can contain arbitrary PL/SQL blocks that: read any state, update any state, and call back into the engine. Nothing computationally required lies outside view → resolve → execute.
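For instance, a template body can itself be an anonymous PL/SQL block that reads state, writes state, and re-enters the engine. The sketch below is a hypothetical, unresolved template (the <NAME> tokens are substituted before dispatch), reusing the invented run_engine signature.

-- Hypothetical template body stored in MBI_SCRIPT; tokens are resolved per PK.
DECLARE
  v_rows NUMBER;
BEGIN
  UPDATE <TARGET_SCHEMA>.<SAT_NAME>
     SET load_end_dts = SYSDATE
   WHERE hub_key = '<PK_VALUE>';
  v_rows := SQL%ROWCOUNT;

  IF v_rows > 0 THEN
    -- Re-enter the kernel: the follow-up behaviour is itself just a view + template.
    run_engine('SAT_RELOAD', 'PK', '<TARGET_SCHEMA>', '<MODE>');
  END IF;
END;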

Graph & orchestration

DAGs, incremental runs, dependency resolution, and time windows are all representable as views over the graph. The engine consumes whatever PK universe those views expose, so orchestration is a graph/view problem, not a kernel problem.
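Dependency resolution, for example, can live entirely in a view: only PKs whose upstream nodes have completed are exposed, and the engine simply consumes whatever the view returns. The run_log table and its statuses below are invented for the sketch.

-- Hypothetical "ready to run" universe: nodes whose upstream dependencies are all DONE.
CREATE OR REPLACE VIEW dag_ready_v AS
SELECT n.node_id AS pk
  FROM node n
 WHERE NOT EXISTS (
         SELECT 1
           FROM edge e
          WHERE e.dst_id = n.node_id
            AND NOT EXISTS (SELECT 1
                              FROM run_log r
                             WHERE r.node_id = e.src_id
                               AND r.status  = 'DONE')
       );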

Engine capability vs requirement

– Read any state in DB/graph: Covered
– Write any state reachable from host: Covered
– Express arbitrary control flow: Covered
– Host all semantics in data: Covered

What is deliberately not inside

– Meaning of “customer”, “hub”, “risk”: External
– Policies (dev vs prod): External
– AI-style inference / prediction: Not present

Completeness here is not “the engine knows everything”; it is “no new kernel instructions are required”. Any new pattern is a new view + new template, not a new function in the package.

Mapping to Any Domain

How to embed a new domain into view → template → engine without touching the kernel.

Domain projection
Generic mapping pattern

For any domain where work can be expressed as “state + operations”, we:

  • Encode entities/relationships as graph tables or relational schemas.
  • Expose per-behaviour slices as <SCRIPT_NAME>_V views.
  • Write templates that turn each row into a program/artefact.
Examples (non-exhaustive)

– CIF / Data Vault / EDW generation
– Lakehouse tables, grants, quality checks
– IAM/RBAC policies, API gateways, routing rules
– Cloud infra templates (ARM, Terraform-like)
– Letters, emails, contract docs driven from graph

Abstract embedding
Domain → Engine embedding in one view

Any domain 𝔻 that can be represented as:

𝔻_state = { entities, relationships, properties, events }
𝔻_ops   = { behaviours we want to realise }

Representation:
– Encode 𝔻_state as tables/views in the DB.
– For each op ∈ 𝔻_ops define:
    a view <SCRIPT_NAME_op>_V (PK, attributes…)
    a template in MBI_SCRIPT[op]

Then:
E(SCRIPT_NAME_op, PK, schema, mode) realises op across current 𝔻_state.
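A concrete (and entirely hypothetical) instance of this embedding is grants: 𝔻_state is "who should hold which privilege on which object", the op is "issue the grant". Table, view, and token names are illustrative.

-- 𝔻_state as rows, exposed as the per-behaviour slice GRANT_APPLY_V.
CREATE OR REPLACE VIEW grant_apply_v AS
SELECT g.grant_id AS pk,
       g.privilege,
       g.object_name,
       g.grantee
  FROM desired_grant g
 WHERE g.active = 'Y';

-- Template in MBI_SCRIPT for the GRANT_APPLY behaviour:
--   GRANT <PRIVILEGE> ON <OBJECT_NAME> TO <GRANTEE>

-- Realising the op across current 𝔻_state:
--   run_engine('GRANT_APPLY', 'PK', 'SEC', 'APPLY');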

The engine is not “for data warehouses” or “for grants”. Those were just early domains. As long as a domain can appear as rows and we can write a template for what to do with each row, the kernel doesn’t care what the domain is.

Purity vs AI

Blind execution with total access, versus probabilistic modelling with partial access.

Contrast
Execution without understanding

The engine has zero semantic model. Given full access to data and templates, it performs whatever is encoded, exactly, with no inference:

  • It never guesses missing structure.
  • It never “hallucinates” tables, columns, or policies.
  • It never optimises away steps unless you encode that into views/templates.
AI comparison (deliberately sharp)

– AI models operate on partial, noisy observations and learn statistical structure.
– They respond with the most likely completion under their training distribution.
– They have no direct execution power; someone else must implement the result.
Here you invert that: no statistical “understanding”, full execution power.

Key purity statement
No internal semantics, full external semantics

The engine’s only “knowledge” is that a script name has a view and a template. Everything else (entity types, hierarchies, temporal logic, “customer” vs “product”) belongs to the graph and templates. This is precisely why it can’t hallucinate: it has no internal model to drift.

Given access to all data, the engine is a pure performer. AI, given partial data, is a pure approximator. The two are complementary, but not interchangeable.

Boundary Conditions & Failure Modes

Where the engine stops; where discipline in the graph and templates takes over.

Edge cases
Real limits

– No automatic safety: templates can do anything the DB user can do.
– No internal policy: “dev vs prod”, “allowed vs forbidden” are external concerns.
– No static analysis: errors are discovered at execution unless you add preflight templates.

But not kernel gaps

These are deliberate design choices, not missing instructions. You can implement safety, policy, and analysis as additional views/templates/orchestration without touching the kernel.
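For example, an environment policy can live entirely in the view layer: the production slice only exposes PKs that a policy table approves, and the kernel never learns that a policy exists. The whitelist table and column names below are invented.

-- Hypothetical policy-filtered PK universe: same behaviour, stricter view for prod.
CREATE OR REPLACE VIEW hub_load_prod_v AS
SELECT v.pk, v.hub_name, v.stage_table
  FROM hub_load_v v
  JOIN release_whitelist w
    ON  w.node_id     = v.pk
    AND w.environment = 'PROD'
    AND w.approved    = 'Y';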

Purity vs responsibility
Where responsibility sits

– The kernel guarantees determinism and composability.
– You guarantee semantics and safety through the graph and templates.
– The more of your architecture you encode into metadata, the closer you get to “all behaviour is data”.

As an engine, this is essentially as small and as complete as it can sensibly be. All remaining questions are about how disciplined the surrounding graph and templates are, not about adding more functions to the kernel.