AI genuinely delivers on custom apps, UI generation, and scaffolding.
Business logic is different. A single business rule — customer balance can't exceed credit limit — touches every path that changes an order, payment, or return.
Each path (insert, update, delete) needs the same logic — implemented slightly differently each time.
Easy to overlook a corner case.
Hard to understand, debug, maintain.
Traditional code generators had this problem. AI is no exception.
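To make the failure concrete, here is a minimal sketch — hypothetical names and data, not taken from any generated app — of what path-scattered logic looks like:

```python
# Hypothetical sketch of path-scattered procedural logic: the credit-limit
# check is hand-copied into some paths and silently missing from others.
CREDIT_LIMIT = 1000

def add_order(customer, amount):
    # Check present on this path...
    if customer["balance"] + amount > CREDIT_LIMIT:
        raise ValueError("credit limit exceeded")
    customer["balance"] += amount

def delete_order(customer, amount):
    customer["balance"] -= amount      # ...no check needed here (balance falls)

def add_return(customer, amount):
    customer["balance"] -= amount      # new path added later: check forgotten

customer = {"balance": 990}
try:
    add_order(customer, 50)            # correctly rejected: 1040 > 1000
except ValueError:
    pass
add_return(customer, -50)              # a negative "return" raises the balance
print(customer["balance"])             # 1040 -- invariant silently violated
```

The guarded path rejects the order; the unguarded path produces the identical bad balance with no error at all. That is the "silently wrong data" failure.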
Allocate Charges to Departments, and to their General Ledger Accounts
This screen was built by AI from a business description. It runs.
The callout shows what breaks in production.
See the Allocation System, Native Version
Across two independent examples, same three failures:
Path-dependent logic — the balance check fires on add-order but not delete-order. A new return path is added — no check. Silently wrong data.
Dependency ordering — even on add-order alone, the balance is checked before the order total is computed. Wrong sequence, wrong result.
Missing requirements — the credit limit rule was in the prompt. It didn't make it into the output.
In each case AI diagnosed the root cause: not a prompt problem, a structural limitation. Paths can be tested, never proven complete. Independent research across LLM reasoning, code semantics, and enterprise AI confirms the same finding.
Enterprise Business Automation: from Business Description to Governed System — API, logic, tests, Admin UI
Two proof points: Allocation (cascade GL/Department logic) and Customs Surtax (CBSA regulatory compliance). Same capability — different domains.
See the Allocation System, GenAI-Logic Version
Logic scattered across paths becomes rules that live on data:
Rules fire on every path — add, delete, update, agent, workflow — automatically
Dependencies declared from rule semantics — not inferred from control flow
No path can bypass enforcement — reuse is an architectural consequence, not a feature
Directly closes two of the three S2 failures: path-dependence and missing requirements.
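A toy sketch of the idea — hypothetical names, not LogicBank's actual API — showing a rule declared once, on the data, with every path funneling through the same commit-time enforcement:

```python
# Toy sketch (not LogicBank's real API): rules live on the data,
# not in each code path.
rules = []

def constraint(condition, error_msg):
    """Declare a rule once; every mutation path inherits it."""
    rules.append((condition, error_msg))

def commit(row):
    """The single enforcement point all paths funnel through."""
    for condition, error_msg in rules:
        if not condition(row):
            raise ValueError(error_msg)   # a real engine would also roll back

constraint(lambda r: r["balance"] <= r["credit_limit"],
           "balance exceeds credit limit")

def add_order(cust, amount):
    cust["balance"] += amount
    commit(cust)

def add_return(cust, amount):             # path added months later...
    cust["balance"] -= amount
    commit(cust)                          # ...enforcement is automatic

cust = {"balance": 990, "credit_limit": 1000}
try:
    add_order(cust, 50)
except ValueError as e:
    print("rejected:", e)                 # the rule fired -- no path can skip it
```

In LogicBank itself the same intent is, per its documentation, a one-line declaration along the lines of `Rule.constraint(validate=Customer, as_condition=lambda row: row.balance <= row.credit_limit, error_msg='...')` — no per-path wiring at all.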
Intent is distilled into rules, then enforced once — at commit — across every path.
Rules are enforced once — at transaction commit — regardless of whether the change came from an app, service, MCP client, or agent
Reuse and Ordering are unavoidable architectural consequences of Rule distillation.
They cannot be engineered per-path — they must be architectural.
AI infers dependencies from control flow — and fails on transitive chains. The rules engine is different:
Each rule type has a known dependency structure — declared in the DSL, not guessed
At startup, the engine computes the full dependency graph — deterministically, before any transaction runs
No inference. No pattern-matching. The order is known.
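An illustrative sketch — not the engine's actual code — of what startup-time ordering means, using Python's standard `graphlib`: each rule declares what it derives and what it depends on, and a topological sort fixes the execution order before any transaction runs:

```python
# Illustrative only: rule semantics declare dependencies, so ordering is a
# deterministic topological sort at startup -- no control-flow inference.
from graphlib import TopologicalSorter

# derived value -> the values it depends on, as declared by each rule
deps = {
    "OrderDetail.amount":  {"OrderDetail.quantity", "OrderDetail.unit_price"},
    "Order.amount_total":  {"OrderDetail.amount"},
    "Customer.balance":    {"Order.amount_total"},
    "check_credit":        {"Customer.balance", "Customer.credit_limit"},
}

order = list(TopologicalSorter(deps).static_order())
# Dependencies always precede dependents: amount before amount_total,
# amount_total before balance, balance before the credit check.
print(order)
```

This is exactly the transitive chain AI's control-flow inference misses: the credit check depends on the balance, which depends on a sum, which depends on a formula — and the graph resolves it mechanically.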
Think of a spreadsheet sum formula — it watches and reacts to every change automatically. LogicBank rules work the same way, across tables, at transaction commit.
Not a RETE engine — LogicBank is purpose-built for transactional processing. [See why →]
A generic rules engine sits outside the transaction — it can be bypassed. LogicBank is different:
Hooks directly into the ORM commit — inside the transaction, not watching from outside
Given old-row and new-row — knows exactly what changed, executes only affected rules
Old-row/new-row awareness enables rule pruning and incremental aggregate adjustment — fast where generic engines are slow
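A toy sketch — hypothetical, not LogicBank's implementation — of what old-row/new-row awareness buys: only rules whose inputs changed fire, and aggregates adjust by the delta rather than re-summing children:

```python
# Toy sketch of commit-time pruning and incremental aggregate adjustment.
def commit(old_row, new_row, parent):
    changed = {k for k in new_row if new_row[k] != old_row.get(k)}
    if "amount" in changed:                   # pruning: fire only if the input changed
        delta = new_row["amount"] - old_row.get("amount", 0)
        parent["balance"] += delta            # O(1) adjustment, no SELECT SUM over children
    if parent["balance"] > parent["credit_limit"]:
        raise ValueError("balance exceeds credit limit")

customer = {"balance": 500, "credit_limit": 1000}
old = {"amount": 100, "note": "rush"}
new = {"amount": 250, "note": "rush"}         # only amount changed
commit(old, new, customer)
print(customer["balance"])                    # 650: adjusted by the 150 delta
```

A generic forward-chaining engine would re-evaluate every rule and re-aggregate every child row; knowing the delta makes the common transactional case cheap.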
The result:
Apps, agents, MCP, Vibe, workflows — all converge on one control point
New paths inherit automatically — no additional governance work
Nothing bypasses it — by architecture, not discipline
You can't govern paths. You can govern the commit.
CE is a 9,000-line architectural knowledge layer — not prompt engineering. It teaches AI the Rules DSL: syntax, semantics, patterns, how rules interact.
CE transforms procedural intent into declarative design
AI alone → FrankenCode. CE + AI → governed rules
Same prompt, same AI — the difference is architectural
Rules remain the executable artifact — AI can explain any rule, any transaction, in plain language
User thinks procedurally. CE + AI produces the correct declarative design
At Versata, thinking declaratively was the bottleneck. AI + CE eliminates it
Procedural intent in. Path-independent invariants out
Tests inferred from rules — not from code
N-fold faster than generating tests from procedural code
Requirement → rule → test → audit
Compliance teams can prove governance, not just assert it
Rules are the same abstraction as the requirement — intent preserved into implementation
No translation layer to rot — six months later, the rule still reads as the business requirement
Rules self-invoke and self-order — add a new rule, the engine determines where it fits
No archaeology. No missed dependencies. The intent is the system of record, always
CE lives in the repo, versioned alongside the rules, compounding as the project grows.
Logic as Infrastructure: Deploying the Business Logic Appliance makes path-independent governance an unavoidable architectural consequence
See the 8-minute creation demo →
Configure your appliance with logic, just as you configure a DBMS with DDL, or Security with Users and Roles.
This is not a tradeoff. Speed, correctness, and governance are architectural consequences of the same design.
The three failures of S2 are closed:
Path-dependent logic → rules live on data, fire on every path, automatically
Dependency ordering → declared from rule semantics, computed at startup, deterministic
Missing requirements → CE ensures intent is captured, not pattern-matched away
The bottlenecks of traditional development are eliminated:
FrankenCode → rules 40x more concise, readable as requirements
Archaeology → rules self-invoke, self-order, intent preserved into implementation
Test gaps → inferred from rules, traced to requirements, auditable by compliance
Remove any element — CE, the rules engine, commit enforcement — and the system reverts to demo-ware. Together they deliver what AI alone cannot: the promise of enterprise AI, fulfilled.
"From intent to governed system — in minutes."