Kernel & Operational Contract
What the engine is, strictly: a resolution–dispatch operator over a graph-exposed view.
The engine implements a single pattern: PK universe from view → attributes → template substitution → dispatch. It never embeds domain rules; it just executes whatever the graph and templates specify.
E(script_name, pk_column, schema, mode)
=> for each PK in <schema>.<script_name>_V:
row → tokenised attributes → resolved CLOB → execute
– The engine always operates row-wise over a PK universe exposed by a view.
– Attributes are resolved purely by name → value replacement into a stored template CLOB.
– Dispatch is delegated to a host executor (e.g. EXECUTE IMMEDIATE) with no extra logic.
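The loop above can be sketched as follows. This is a minimal model in Python standing in for the PL/SQL host; `fetch_pks`, `fetch_attributes`, and `dispatch` are hypothetical stand-ins for the view query, the attribute lookup, and the host executor (EXECUTE IMMEDIATE):

```python
def run_engine(script_name, pk_column, schema, mode, *,
               fetch_pks, fetch_attributes, template, dispatch):
    """Sketch of E(script_name, pk_column, schema, mode)."""
    view = f"{schema}.{script_name}_V"
    for pk in fetch_pks(view, pk_column):              # PK universe from the view
        attrs = fetch_attributes(view, pk_column, pk)  # row -> tokenised attributes
        resolved = template
        for name, value in attrs.items():              # pure name -> value substitution
            resolved = resolved.replace(f"<{name}>", str(value))
        dispatch(resolved, mode)                       # delegate execution; no extra logic
```

Note that the loop body contains no branching on what the row means; every domain decision has already been made by the view and the template.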
Purity: the kernel has no “if customer then hub, else sat” logic. There is no notion of hub, sat, grant, metric, or API in the code. Those concepts live entirely in the metadata views and templates. The engine is a stable, minimal kernel.
Preconditions & Given Environment
What must exist around the engine for completeness to hold.
The engine assumes, rather than implements:
- Connectivity: host runtime can reach all required schemas and external endpoints.
- Representation: domain state is encoded as rows in tables/views (graph, props, events).
- Semantics: domain meaning is in graph schema, view logic, templates, not in the kernel.
Inside the boundary: PK enumeration, feature extraction, template resolution, dispatch call.
Outside the boundary: what PKs mean, what the script does, why we are doing it.
If those preconditions hold, completeness follows. The engine does not "bootstrap" semantics; it expects the world to be available and named. Once that is true, it will execute everything the world and templates encode, deterministically.
Functional Completeness
Why, under those preconditions, the engine is enough to express any required behaviour.
The host runtime (PL/SQL) is Turing-complete. Templates can contain arbitrary PL/SQL blocks that: read any state, update any state, and call back into the engine. Nothing computationally required lies outside view → resolve → execute.
DAGs, incremental runs, dependency resolution, and time windows are all representable as views over the graph. The engine consumes whatever PK universe those views expose, so orchestration is a graph/view problem, not a kernel problem.
Completeness here is not “the engine knows everything”; it is “no new kernel instructions are required”. Any new pattern is a new view + new template, not a new function in the package.
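The "orchestration is a view problem" claim can be made concrete with a small sketch. Python lists stand in for a table, and the watermark filter below stands in for a view definition; the table data and column names are invented for illustration:

```python
# Illustrative source table; UPDATED stands in for a change timestamp.
ROWS = [
    {"ID": 1, "UPDATED": 10},
    {"ID": 2, "UPDATED": 25},
    {"ID": 3, "UPDATED": 40},
]

def full_view():
    """PK universe for a full run: every row."""
    return [r["ID"] for r in ROWS]

def incremental_view(watermark):
    """PK universe for an incremental run, equivalent to:
    CREATE VIEW LOAD_V AS SELECT id FROM t WHERE updated > :wm
    """
    return [r["ID"] for r in ROWS if r["UPDATED"] > watermark]
```

Switching from full to incremental processing changes only which view the engine reads; the kernel loop is untouched.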
Mapping to Any Domain
How to embed a new domain into view → template → engine without touching the kernel.
For any domain where work can be expressed as “state + operations”, we:
- Encode entities/relationships as graph tables or relational schemas.
- Expose per-behaviour slices as <SCRIPT_NAME>_V views.
- Write templates that turn each row into a program/artefact.
Example domains:
– CIF / Data Vault / EDW generation
– Lakehouse tables, grants, quality checks
– IAM/RBAC policies, API gateways, routing rules
– Cloud infra templates (ARM, Terraform-like)
– Letters, emails, contract docs driven from graph
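To show that nothing in the mechanism is database-specific, here is the same row → template substitution applied to document generation. The template text and row values are invented for illustration:

```python
# A row from a hypothetical LETTER_V view, flattened to name -> value.
LETTER_V_ROW = {"NAME": "A. Smith", "CONTRACT_ID": "C-104", "RENEWAL_DATE": "2025-01-01"}

LETTER_TEMPLATE = "Dear <NAME>, contract <CONTRACT_ID> renews on <RENEWAL_DATE>."

def render(template, row):
    """Same substitution the kernel performs for DDL, grants, or any artefact."""
    out = template
    for name, value in row.items():
        out = out.replace(f"<{name}>", str(value))
    return out
```

Whether the resolved CLOB is a GRANT statement or a letter is invisible to the kernel; only the dispatch target differs.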
Any domain 𝔻 that can be represented as "state + operations", i.e. rows plus a template per behaviour, fits this pattern.
The engine is not “for data warehouses” or “for grants”. Those were just early domains. As long as a domain can appear as rows and we can write a template for what to do with each row, the kernel doesn’t care what the domain is.
Purity vs AI
Blind execution with total access, versus probabilistic modelling with partial access.
The engine has zero semantic model. Given full access to data and templates, it performs whatever is encoded, exactly, with no inference:
- It never guesses missing structure.
- It never “hallucinates” tables, columns, or policies.
- It never optimises away steps unless you encode that into views/templates.
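The "never guesses" property falls out of resolution being literal substitution. A sketch of a strict resolver, assuming the `<NAME>` token syntax used throughout this document: any token without a matching attribute fails loudly instead of being inferred or defaulted:

```python
import re

def resolve_strict(template, attrs):
    """Literal token substitution; unresolved tokens raise, never guess."""
    def sub(match):
        name = match.group(1)
        if name not in attrs:
            raise KeyError(f"unresolved token <{name}>")  # no inference, no defaults
        return str(attrs[name])
    return re.sub(r"<([A-Z_]+)>", sub, template)
```

There is no model from which a plausible value could be drawn, so a missing attribute is a hard error rather than a hallucination.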
– AI models operate on partial, noisy observations and learn statistical structure.
– They respond with the most likely completion under their training distribution.
– They have no direct execution power; someone else must implement the result.
The engine inverts that relationship: no statistical "understanding", but full execution power.
The engine’s only “knowledge” is that a script name has a view and a template. Everything else (entity types, hierarchies, temporal logic, “customer” vs “product”) belongs to the graph and templates. This is precisely why it can’t hallucinate: it has no internal model to drift.
Given access to all data, the engine is a pure performer. AI, given partial data, is a pure approximator. The two are complementary, but not interchangeable.
Boundary Conditions & Failure Modes
Where the engine stops; where discipline in the graph and templates takes over.
– No automatic safety: templates can do anything the DB user can do.
– No internal policy: “dev vs prod”, “allowed vs forbidden” are external concerns.
– No static analysis: errors are discovered at execution unless you add preflight templates.
These are deliberate design choices, not missing instructions. You can implement safety, policy, and analysis as additional views/templates/orchestration without touching the kernel.
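One way to add safety without touching the kernel is to interpose a preflight check between resolution and execution. The wrapper and the forbidden-pattern list below are illustrative policy data, not part of the engine:

```python
# External policy, expressed as data; the kernel never sees this.
FORBIDDEN = ("DROP ", "TRUNCATE ")

def guarded_dispatch(resolved_sql, execute):
    """Preflight wrapper: vet resolved text before the real executor runs it."""
    for pattern in FORBIDDEN:
        if pattern in resolved_sql.upper():
            raise PermissionError(f"blocked by policy: {pattern.strip()}")
    execute(resolved_sql)
```

Because the check lives in the dispatch seam rather than in the kernel, "dev vs prod" policies can differ per environment while the engine stays identical.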
– The kernel guarantees determinism and composability.
– You guarantee semantics and safety through the graph and templates.
– The more of your architecture you encode into metadata, the closer you get to “all behaviour is data”.
As an engine, this is essentially as small and as complete as it can sensibly be. All remaining questions are about how disciplined the surrounding graph and templates are, not about adding more functions to the kernel.